Abstract
Research and teaching are the two most characteristic activities in the professional life of academics. Since the second half of the last century, a plurality of studies has focused on the link between these activities, with often contrasting conclusions. While some studies are in line with the von-Humboldtian view of research and teaching as synergistic activities, others posit that the two are uncorrelated or even in negative tension. This divergence of views probably stems from the fact that investigations are often based on heterogeneous, limited and difficult-to-generalise data, relying mainly on qualitative metrics. This paper deepens the study of the research-teaching link through an analysis of 251 academics from Politecnico di Torino, i.e., one of the major Italian technical universities. From a methodological point of view, research and teaching are both analysed from the dual perspective of workload and quality of results obtained, on the basis of data of various kinds: bibliometric indicators, teaching-satisfaction indexes, number of credits awarded to students, etc. Next, a correlation analysis investigates possible links between teaching and research, showing that they tend to be weak and/or statistically insignificant. For instance, the investigation excludes both (i) the existence of a negative link in terms of workload—contradicting considerations such as “Those who do more teaching have less time to do research and vice versa”—and (ii) the existence of a positive link in terms of the quality of the results obtained—contradicting considerations such as “Those who obtain high quality results in research are likely to do the same in teaching and vice versa”. The results of this study are limited to the Italian context and do not necessarily have general validity.
Nevertheless, they enhance previous findings in the scientific literature and may be useful for university administrators and those involved in the formulation of incentive strategies for academics.
Introduction and literature review
Research and teaching are the two predominant knowledge-dissemination activities in the working day of academics (Burke-Smalley et al., 2017). Opinions on possible links, interactions or even interferences between them are extremely varied. Since the time of von Humboldt, most academic institutions have claimed research and teaching among their pivotal missions, viewing them as closely connected and mutually stimulating (Harland, 2016; Sinclair, 2013; Teichler, 2017). On the other hand, some academics argue that research and teaching are sometimes decoupled or even reciprocally interfering (Gendron, 2008; Moya et al., 2015).
The policies of most higher education institutions promote research at the expense of teaching. For instance, career advancements (and salaries) are typically linked to research results rather than teaching results (Cadez et al., 2017), and universities usually stimulate academic staff to improve their scientific output through internal research evaluation exercises (Franceschini & Maisano, 2017; Karlsson, 2017). Conversely, the evaluation of teaching is often limited to checking the fulfilment of a minimum number of hours per year, either without any assessment of real effectiveness or, when such an assessment is carried out, without linking its result to any incentive or penalty (Brownell & Tanner, 2012; Moya et al., 2015). Additionally, academic institutions and mentors generally encourage young researchers (i.e., future professors) to carry out research as an integral part of their (post-)doctoral studies, but rarely provide pathways to train them for teaching (Burke-Smalley et al., 2017; Hollywood et al., 2020; Shortlidge & Eddy, 2018). As a result, academics generally tend to focus more on research, “economising” on teaching and related activities, such as student tutoring, mentoring, thesis supervision or, more generally, activities that may “steal time” from research (Cadez et al., 2017; Teichler, 2017).
The university incentives to focus on research at the expense of teaching are sometimes more explicit. In some cases, academics who achieve significant results in research are rewarded with a reduction in the contractual teaching workload. On the other hand, academics who do not achieve “decent” research results (e.g., a minimum number of publications in medium–high impact journals, such as those within the SCImago Journal Rank’s (SJR) Q1 or Q2) can be “punished” with extra teaching loads (Brownell & Tanner, 2012; González-Pereira et al., 2010). This debatable form of compensation certainly contrasts with von Humboldt's synergistic view of the research-teaching duo (Sinclair, 2013).
For more than half a century, the research-teaching relationship in academia has been the subject of numerous scientific investigations, from the different perspectives of students and academics. Regarding the perspective of students, the learning benefits of integrating research and teaching are documented in a variety of studies, some of which show that students can benefit significantly from activities that allow them to develop their research skills (Brew, 1999; Elton, 2001). Other studies show that engaging students in research challenges that are anchored in a real-life context gives them the opportunity to develop critical thinking and to better absorb/apply what they have learned (Coombs & Elden, 2004). Healey et al. (2010) show that the benefits of linking research and teaching in class are also visible to students, who feel that having an active researcher as a teacher helps to improve interest, understanding and enthusiasm for the subject. Furthermore, integrating research into teaching can foster interdisciplinarity and collaboration between students and teachers (Le Heron et al., 2006). The scientific literature also includes studies within pedagogy and learning sciences, which propose direct experiments of innovative educational practices and evaluate their effects on student learning (Vermeir et al., 2017).
A more anthropological perspective on the research-teaching link is that of academics, who have to manage teaching and research on a daily basis—with the related synergies, interferences and, not infrequently, ongoing assessments. According to some studies there is a positive mutual stimulus (Shortlidge & Eddy, 2018; Trautmann & Krasny, 2006), while for other studies the two activities are disconnected and even in conflict (Burke-Smalley et al., 2017). Cadez et al. (2017) documented a positive correlation concerning excellent academics, which sounds like: “Best researchers are often best teachers”. On the other hand, Brennan et al. (2019) showed that good teachers are not necessarily good researchers and vice versa. Recent research theorises a potential synergy between research and teaching that should be cultivated and stimulated with appropriate institutional tools/incentives, in line with a Socratic maieutic perspective (Shortlidge & Eddy, 2018). Other studies argue that recognising and rewarding teaching and research excellence through appropriate institutional incentives is a relevant policy lever to foster their development and integration (Burke-Smalley et al., 2017).
The aforementioned diversity of opinions and (apparent) contradictions are probably the consequence of inherent analysis limitations, such as the fact that the samples of academics examined are often restricted to a few dozen subjects, which makes comparisons and generalizations difficult (Lawson et al., 2015). In addition, analysis methodologies are heterogeneous and often adapted to extremely different socio-cultural contexts, which often evolve dynamically, preventing comparisons and/or generalisations. Lastly, evaluations are mostly qualitative and based on the results of subjective indicators (Brennan et al., 2019). Although the implementation of studies of general validity is precluded by the previous limitations, it remains interesting to monitor the research-teaching relationship in different contexts, in order to broaden the analysis domain and identify any trends, specificities or discrepancies. This article takes an in-depth look at the research-teaching link by considering both of these activities from the dual viewpoint of the workload and quality of results obtained by individual academics, as illustrated in the four-quadrant scheme in Table 1 (Stack, 2003).
The analysis is carried out using quantitative indicators referring to a sample of several hundred academics affiliated with Politecnico di Torino, i.e., one of the most important Italian technical universities with a mixed population of academics, ranging from mathematicians to mechanical engineers, chemists, material scientists, physicists, management engineers, etc. This university provides several dozen study programmes in engineering and architecture to approximately 35 thousand national/international students (www.polito.it, last accessed in February 2023). After defining appropriate indicators for the quadrants in Table 1, a (bivariate and multivariate) correlation analysis is provided to answer two major research questions:
(RQ#1) “Is there any (direct/inverse) relationship between the research and teaching workload of individual academics?”. The scholarly literature includes several conflicting arguments in this regard; for example—in line with a form of “scarcity model” that postulates scarcity of time and energy—it seems reasonable to assume that those academics who spend more time in research conceptually tend to spend less time in teaching and vice versa, leading to a negative relationship. On the other hand, it might be argued (i) that academics who are more active in research tend to be more organized in teaching, being able to sustain a high workload in both activities, or (ii) that some academics "economize" in both activities, perhaps because they are engaged in administrative or non-academic activities, leading to a positive relationship (Hattie & Marsh, 1996).
(RQ#2) “Is there any (direct/inverse) relationship between the quality of research results and that of teaching results of individual academics?”. Some arguments suggest that the abilities underlying successful teaching and those underlying successful research are similar, leading to a positive relationship that sounds like: “Those who obtain high quality results in research are likely to do the same in teaching” (Cadez et al., 2017). Other arguments suggest that research and teaching simply require different, often even “orthogonal”, qualities that may or may not coexist in the same person, leading to a nearly zero relationship (Barnett, 1992).
Although the scientific literature includes other analyses aimed at developing similar research questions, the proposed methodology is characterized by the use of strictly quantitative indicators constructed on a relatively large dataset. Nevertheless, it remains interesting to compare the results of this study and those of other studies based on different methodological approaches.
The remainder of this paper is organised into four sections. Section "Methodology" describes the research methodology, including the procedure to select a sample of academics, collect data and construct indicators, both for research and teaching. Section "Empirical results" presents a statistical correlation analysis structured in two stages: bivariate analysis, using Pearson’s correlation coefficient, and multivariate analysis, based on Principal Component Analysis. Section "Conclusions" summarises the original contributions of this research, its implications, limitations and suggestions for future research. The Appendix section provides additional material for further investigation.
Methodology
The flow chart in Fig. 1 outlines the methodological structure of the research, which is described in detail in the following four subsections: (2.1) selection of the sample of academics, (2.2) data collection and indicators relating to research, (2.3) data collection and indicators relating to teaching, and (2.4) correlation analysis.
Selection of the sample of academics
In Italy, every tenured academic belongs to one-and-only-one specific “Scientific and Disciplinary Sector”—in Italian “Settore Scientifico Disciplinare” or just “SSD”—one of 383 in all (Abramo et al., 2019; Maisano et al., 2020); a complete list is accessible at (Ministero dell’Istruzione, 2022). For the sake of simplicity, the expression “discipline” will be used from here on. Although the academics from technical universities—like Politecnico di Torino, henceforth abbreviated as “PoliTO”—are scientifically more homogeneous than those from generalist universities, they may belong to disciplines with significant differences in terms of propensity to publish and cite (Maisano et al., 2020).
PoliTO comprises a population of around 900 tenured academics (i.e., assistant, associate and full professors). Table 2 describes the sample of selected PoliTO academics, who belong to sixteen disciplines (i.e., A, B, C, …, O, and P) that are specific to engineering (Consiglio Universitario Nazionale, 2022). The selection was limited to academics with a relatively well-established career, both in terms of research and teaching, in order to avoid possible “outliers”, such as young academics with little teaching experience. Therefore, only active academics with a permanent contract with PoliTO in the three-year period 2018 to 2020 and, at the same time, with any Italian university (including PoliTO) in the five-year period 2013 to 2017 were considered. This eight-year permanent contract period guards against possible changes in the staff number (henceforth abbreviated as N), due to retirements, new hires, transfers, etc. It is also a form of assurance that the population of academics considered is relatively homogeneous in terms of contractual obligations, incentives, etc.
The second last column of Table 2 reports the N values related to the selected PoliTO academics, for each of the disciplines of interest. The resulting sample of 251 individuals covers more than ¼ of the whole population of PoliTO academics. Academics were identified through the public directory https://cercauniversita.cineca.it/php5/docenti/cerca.php.
Data collection and indicators relating to research
Research data basically concern scientific publications by the academics of interest and relevant citations obtained. In order to implement a discipline normalization—allowing comparisons between academics from heterogeneous scientific disciplines (Franceschini & Maisano, 2014; Moed, 2010)—the sample of PoliTO academics was extended to academics belonging to the same disciplines but affiliated to all the Italian universities. Consistently with the data regarding PoliTO staff, only academics with a permanent contract in the period from 2013 to 2020 were considered. The last column of Table 2 shows the resulting number of academics selected from all (Italian) universities (including PoliTO), which will simply be referred to as “All”.
For all academics, the corresponding Scopus Author ID was manually determined, in order to uniquely identify the publication output (Kawashima & Tomizawa, 2015). The Scopus database was chosen since (i) it provides a higher degree of coverage than Web of Science (WoS) for the disciplines of interest (Visser et al., 2021), and (ii) at least for Italian academics, it is generally more accurate than WoS, due to the systematic cleaning undergone in the recent national research quality assessment exercises (denominated “VQRs”) (Franceschini et al., 2016; Franceschini & Maisano, 2017; D'Angelo and van Eck, 2020).
For each of the academics of interest, the publications produced in the three-year period from 2018 to 2020 were identified. This period seems reasonably broad to provide a “taste” of individual research output, absorbing temporary interruptions due to health problems, maternity leave, sabbaticals, etc. Publications produced later (i.e., from 2021 onwards) were excluded as they are still too “immature” in terms of citation impact (Bar-Ilan & Halevi, 2018). Only papers in international scientific journals were considered (De Bellis, 2009). For each j-th academic’s article, the issue year (i.e., 2018, 2019 or 2020), the number of co-authors, and the number of citations obtained by journal papers up to the time of data collection (i.e., February 2023) were also collected. These data are used to construct two (normalized) bibliometric indicators for each PoliTO academic, as described below.
Discipline-normalised total no. of papers, fractionalized by no. of co-authors:

$$P_{j} = \frac{\sum_{i\,\mathrm{by}\,j} \frac{1}{a_{i,j}}}{\frac{1}{N}\cdot \sum_{k \in All} \left( \sum_{i\,\mathrm{by}\,k} \frac{1}{a_{i,k}} \right)}$$
j being the academic of interest from PoliTO; \(All\equiv \left\{\dots ,j,\dots \right\}\) being the set of academics from all Italian universities, in the same discipline of j; k being a generic academic \(\in All\); \(N=\left|All\right|\) being the cardinality of the set All (see last column of Table 2); i being the generic i-th paper by the j-th/k-th academic; \({a}_{i,*}\) being the number of co-authors of the i-th paper by the j-th/k-th academic (i.e., “*” in the subscript).
The fractionalization by number of co-authors was introduced to make a fair comparison between academics with different propensities for co-authorship (Franceschini et al., 2010; Perianes-Rodriguez et al., 2016). Moreover, given that the propensity to publish papers may depend on the discipline—i.e., the scientific production of academics belonging to certain disciplines may tend to be higher/lower than that of academics belonging to other disciplines—Pj implements a discipline normalisation (cf. denominator of the last term of Eq. 1) (Franceschini & Maisano, 2014; Maisano et al., 2020; Moed, 2010; Prathap et al., 2016).
Pj is used as a proxy for the workload spent on research by a certain academic (cf. quadrant (a) of the scheme in Table 1). In fact, a proportionality is assumed between the effort expended in research activities—whether carried out on independent initiative or financed within the framework of projects, specific funding, etc.—and the dissemination of the publishing results (De Bellis, 2009). The adoption of this indicator deserves further explanation. In principle, publication is a final act (not necessarily due) of a previous research activity. In other words, publication is not an obligation: research, however relevant and rigorous, does not necessarily result in publication(s). Extending the reasoning, the number of publications is not necessarily proportional to the research workload, not least because the workload required to achieve one publication is not a fixed quantity.Footnote 1 That said, it is appropriate to make a few remarks on the Italian academic context of the last 10–15 years. Recent national research quality assessment exercises (VQRs) and the pervasive use of bibliometric criteria to assess research output have increasingly pushed academics to “valorize” their research activity in terms of publications on Scopus-indexed or WoS-indexed scientific journals of a certain relevance (e.g., with relatively high Impact Factor or SJR values). Whether one likes it or not, academics who do not conform to this practice are inevitably penalised (Franceschini & Maisano, 2017; Karlsson, 2017); this applies both to younger academics, who would jeopardise promotions and career advancement, and to senior academics, who would be cut off from participation in scientific committees of strategic importance in various fields (e.g., projects, public competitions and selections, institutional positions, etc.).
From this perspective, it is improbable that—at least in the Italian academic context of the last 10 to 15 years—individuals with a relevant research activity (on a quantitative basis) would not have “valorised” it in terms of scientific publications. In addition, the fact of considering publications in Scopus-indexed or WoS-indexed journals, excluding non-indexed journals or other types of publications (such as conference proceedings), constitutes a further guarantee that each publication reflects substantial workload.
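For illustration, the computation behind \(P_j\) can be sketched in Python on toy data; the data structures and names below are illustrative only, not the authors' actual pipeline, and assume each paper is represented by its number of co-authors:

```python
from statistics import mean

def fractional_count(coauthor_counts):
    """Sum of 1/a_i over an academic's papers (a_i = number of co-authors of paper i)."""
    return sum(1.0 / a for a in coauthor_counts)

def P(papers_j, papers_all):
    """Discipline-normalised, co-author-fractionalised paper count:
    j's fractional count divided by the discipline-wide mean fractional count."""
    return fractional_count(papers_j) / mean(fractional_count(p) for p in papers_all)

# Toy data: co-author counts of each paper, for three academics in the same discipline.
discipline = [[2, 3], [4], [1, 2, 2]]
j = discipline[0]  # fractional count = 1/2 + 1/3
```

With these toy numbers, \(P_j \approx 0.81\), i.e., slightly below the discipline average of 1.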
Discipline-normalised average no. of citations per paper:

$$C_{j} = \frac{1}{\left| y \right|}\cdot\sum_{\forall y}\left\{ \frac{\frac{\sum_{i\,\mathrm{by}\,j,y} c_{i,j,y}}{\left| i\,\mathrm{by}\,j,y \right|}}{\frac{1}{N}\cdot\sum_{k \in All} \frac{\sum_{i\,\mathrm{by}\,k,y} c_{i,k,y}}{\left| i\,\mathrm{by}\,k,y \right|}} \right\}$$
j being the academic of interest from PoliTO; \(All \equiv \left\{ { \ldots ,j, \ldots } \right\}\) being the set of academics from all Italian universities, in the same discipline of j; k being a generic academic \(\in All\); \(N = \left| {All} \right|\) being the cardinality of the set All (see last column of Table 2); i being the generic i-th paper by the j-th/k-th academic; \(c_{i,*,y}\) being the total number of citations obtained up to the moment of data collection (i.e., February 2023) by the i-th paper of the j-th/k-th (i.e., “*” in the subscript) academic of interest, issued in the y-th year; \(\left| {i{\text{ by}}*,y} \right|\) being the total number of papers, issued in the year y, of the j-th/k-th academic of interest; y ∈ {2018, 2019, 2020} being the single issue year (the total issue years are |y|= 3). \(C_{j}\) embeds two forms of normalization: by discipline and by age, since both these factors can affect the propensity to obtain citations (Franceschini & Maisano, 2014; Moed, 2010). Precisely, the (annual) citations per article of each PoliTO academic (j) are divided by the average value of the same quantity, with reference to the totality of academics from all universities, in the same discipline of j (cf. the last term of Eq. 3). Then, the discipline-normalized statistics related to the three issue years are combined with a simple arithmetic mean (i.e., \(\frac{{\mathop \sum \nolimits_{\forall y} \left\{ \cdots \right\}}}{\left| y \right|}\)). Fractionalization by number of co-authors (which is implemented in \(P_{j}\), cf. Eq. 1) is not needed here, since \(C_{j}\) is not “size dependent” (Prathap et al., 2016).
Describing the average level of diffusion of papers produced by a certain academic, \(C_{j}\) is used as a proxy for the quality of research results (Braun et al., 2010; De Bellis, 2009; Moed, 2010).
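The double normalisation embedded in \(C_j\) can be sketched as follows (toy data; for simplicity the sketch assumes every academic has at least one paper in every issue year, whereas zero-paper years would need special handling):

```python
from statistics import mean

YEARS = (2018, 2019, 2020)

def cpp(papers, year):
    """Mean citations per paper among an academic's papers issued in a given year."""
    cites = [c for (y, c) in papers if y == year]
    return mean(cites) if cites else 0.0

def C(papers_j, papers_all):
    """Discipline- and age-normalised citations per paper: for each issue year,
    divide j's citations-per-paper by the discipline average, then average over years."""
    ratios = [cpp(papers_j, y) / mean(cpp(p, y) for p in papers_all) for y in YEARS]
    return mean(ratios)

# Toy data: each paper is an (issue_year, citations) pair; two academics in the discipline.
discipline = [
    [(2018, 10), (2019, 4), (2020, 2)],
    [(2018, 6), (2019, 8), (2020, 2)],
]
j = discipline[0]
```

Normalising per issue year before averaging is what removes the citation-age effect: older papers are only compared with equally old papers.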
Data collection and indicators relating to teaching
For over twenty-five years, questionnaires have been regularly administered to students at the end of each PoliTO’s B.Sc. or M.Sc. course, in order to assess the quality of teaching. These questionnaires—which have undergone several improvements over the years—cover various aspects, such as course organization, teacher effectiveness, infrastructure, student’s interest/satisfaction, etc. Table 4 (in the appendix) reports the questionnaire template used in the academic years 2017–2018, 2018–2019 and 2019–2020. Each of the eighteen questions (q1 to q18) is rated on a four-level ordinal scale, with the following numerical conversions: 1 = “Definitely not”, 2 = “More no than yes”, 3 = “More yes than no”, 4 = “Definitely yes”, expressing an increasing level of liking/satisfaction regarding the item of interest. For each question, the mean value of respondent ratings is determined. The authors are aware that arithmetically averaging numerical ratings expressed on ordinal scale levels (i.e., 1, 2, 3 and 4 in this case) is conceptually questionable (Franceschini et al., 2019, 2022; Roberts, 1979).
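The numerical conversion of the four-level scale and the per-question mean can be sketched as follows (function and variable names are ours, for illustration only):

```python
# Numerical conversion of the four-level ordinal scale used in the questionnaires.
SCALE = {
    "Definitely not": 1,
    "More no than yes": 2,
    "More yes than no": 3,
    "Definitely yes": 4,
}

def question_mean(responses):
    """Arithmetic mean of the numerical conversions of respondent ratings
    (conceptually questionable for ordinal data, as noted in the text)."""
    return sum(SCALE[r] for r in responses) / len(responses)
```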
The five questions from q9 to q13 specifically concern “teaching effectiveness” (cf. Table 4); the mean values of the relevant respondent ratings can be aggregated through a further arithmetic mean:

$$e_{c_j} = \frac{1}{5}\cdot\sum_{q=q_{9}}^{q_{13}} e_{c,q_j}$$
c representing every single (B.Sc. or M.Sc.) annual course taught by j in the academic years 2017–2018, 2018–2019, and 2019–2020. This reference period is consistent with that used in the research analysis (i.e., 2018, 2019, and 2020, cf. Section "Data collection and indicators relating to research"). The offset of half a year back (e.g., 2017–2018 versus 2018, etc.) in some ways compensates for the lead time associated with the publication of scientific papers, from the moment of their submission (Björk & Solomon, 2013). \(e_{{c,q_{j} }}\) being the mean value of the respondent ratings related to the q-th question, considering the c-th course.
The \(e_{{c_{j} }}\) values related to all courses taught by any j-th PoliTO academic of interest were collected from the PoliTO website (www.polito.it). For reasons of confidentiality, the data are presented at an aggregate level and without making explicit the names of the academics involved. Other (publicly available) data were collected and used to construct the indicators related to teaching, precisely sc, i.e., number of students attending the c-th course, and ECTSc, i.e., number of ECTS (European Credit Transfer and Accumulation System) creditsFootnote 2 associated with the c-th course.
For the sake of simplicity, each academic is assigned exclusively to the courses of which he/she is the holder, not just a collaborator. This assumption is justified by the fact that a course’s organization, content and teaching method, which are decisive for its effectiveness, are generally the responsibility of the course holder (not collaborators). Moreover, the selected academics all have a relatively well-established career (i.e., at least eight years with a permanent university contract) and did most of their teaching work as course holders rather than collaborators.
Two aggregated indicators are used to describe the teaching activity of each academic. The first one is a proxy of teaching workload, which depends on two factors: the amount of teaching delivered to students and the number of students attending each course. The first factor can be expressed in terms of ECTS credits associated with the relevant courses. Focusing on the Italian university scenario, each academic is usually required to deliver at least 12 ECTS per year.Footnote 3 Of course, the number of ECTS credits delivered by some academics may be higher than this lower bound. Focusing on the second factor, several preparatory/accompanying (teaching) activities tend to increase with the number of students: e.g., tutoring/mentoring, practical exercises/workshops, supervision of internships, theses/dissertations, proofreading of coursework, assistance to undergraduates with applications for admission to doctoral or postgraduate master's programmes, etc. In addition, some courses include laboratory exercises in small groups (e.g., no more than 10–20 units), which must be replicated several times, significantly increasing the workload of the academics involved.
These two factors are aggregated into the following indicator:

$$w_{j} = \sum_{c\,\mathrm{by}\,j} s_{c_j}\cdot ECTS_{c_j}$$
c being each course taught by j during the three-year reference period (2017–2018, 2018–2019 and 2019–2020); \(s_{{c_{j} }}\) being the number of students in the specific c-th course held by j;
\(ECTS_{{c_{j} }}\) being the number of ECTS credits associated with each c-th course held in the reference period by j. \(w_{j}\) can be interpreted as the total number of credits obtained by students who attended the course(s) held by j, i.e., a proxy for the quantitative impact of these course(s) on the student population. This indicator is currently used in PoliTO as a proxy for the teaching workload of individual faculty members.
The aggregation through a multiplicative model (cf. Eq. 4) is typical of indicators that aggregate heterogeneous quantities (e.g., in this case, number of students and number of ECTS credits) (Franceschini et al., 2019, 2022).
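In code, the workload proxy reduces to a one-line aggregation (toy data; names are illustrative only):

```python
def w(courses):
    """Proxy for teaching workload: total ECTS credits obtained by students
    attending j's courses, i.e., sum over courses of students × ECTS."""
    return sum(students * ects for (students, ects) in courses)

# Toy data: (students, ECTS) for each course held by j in 2017/18–2019/20.
courses_j = [(120, 8), (35, 6)]
```

With these toy numbers, \(w_j = 120 \cdot 8 + 35 \cdot 6 = 1170\) credits.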
The second aggregated indicator is defined as:

$$e_{j} = \frac{\sum_{c\,\mathrm{by}\,j} e_{c_j}\cdot ECTS_{c_j}}{\sum_{c\,\mathrm{by}\,j} ECTS_{c_j}}$$
\(e_{j}\) is actually a weighted average of the \(e_{{c_{j} }}\) values (cf. Eq. 3) with respect to the corresponding ECTS credits; \(e_{j}\) is used as a proxy for the quality of teaching, since it depicts the average teaching effectiveness.
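The ECTS-weighted average can be sketched as follows (toy data; names are illustrative only):

```python
def e(courses):
    """Proxy for teaching quality: ECTS-weighted average of the per-course
    effectiveness scores e_c (mean rating of questions q9–q13)."""
    total_ects = sum(ects for (_, ects) in courses)
    return sum(e_c * ects for (e_c, ects) in courses) / total_ects

# Toy data: (e_c, ECTS) for each course held by j.
courses_j = [(3.4, 8), (3.0, 6)]
```

Weighting by ECTS gives larger courses proportionally more influence on \(e_j\) than small ones; here \(e_j \approx 3.23\).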
Correlation analysis
The analysis described in Sects. "Data collection and indicators relating to research" and "Data collection and indicators relating to teaching" makes it possible to determine four aggregate indicators for each (j-th) of the 251 PoliTO academics of interest: Pj and Cj concerning research, and wj and ej concerning teaching. Next, the (presumed) link between research and teaching is studied through a correlation analysis between these indicators, which is organized in two parts: bivariate analysis and multivariate analysis. Although the two research questions (RQ#1 and RQ#2, at the end of Section "Introduction and literature review") basically refer to the relationship between the two indicators of workload (Pj and wj) and those of quality of results (Cj and ej), it is useful to study all six potential correlations between the above four indicators, as they can offer additional insight into the interpretation of the results.
Regarding the bivariate analysis, the potential correlation between pairs of indicators is assessed through Pearson’s correlation coefficient (R) (Ross, 2021). The choice of R is driven by (i) its relative simplicity (Franceschini et al., 2019) and (ii) the absence of non-linear relationships between the pairs of datasets, as observed in a preliminary graphical investigation (cf. Section "Results of correlation analysis").
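A stdlib-only sketch of the coefficient follows; the accompanying significance test, which requires the t-distribution, is typically delegated to a statistical package (e.g., SciPy's `scipy.stats.pearsonr`, which returns both R and the p-value):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson's correlation coefficient between two equally long samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))   # un-normalised covariance
    sx = sqrt(sum((a - mx) ** 2 for a in x))               # un-normalised std of x
    sy = sqrt(sum((b - my) ** 2 for b in y))               # un-normalised std of y
    return cov / (sx * sy)
```

The normalisation constants cancel between numerator and denominator, so R always lies in [-1, +1] regardless of sample size.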
The multivariate analysis aims to integrate and confirm the results of the bivariate analysis, providing a complementary analytical perspective. A Principal Component Analysis (PCA) of the four indicators of interest is performed; PCA is particularly effective for relatively large datasets with several potentially correlated variables (Abdi & Williams, 2010; Bro & Smilde, 2014).
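A PCA on standardised variables reduces to an eigendecomposition of their correlation matrix; a NumPy sketch on toy data (the data and names are illustrative only, not the study's actual dataset):

```python
import numpy as np

def pca(X):
    """PCA of standardised columns via eigendecomposition of the correlation matrix.
    Returns (explained-variance ratios, loadings), sorted by decreasing variance."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)  # standardise each column
    corr = np.cov(Z, rowvar=False)                    # correlation matrix of X
    eigvals, eigvecs = np.linalg.eigh(corr)           # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]                 # re-sort descending
    return eigvals[order] / eigvals.sum(), eigvecs[:, order]

# Toy data: rows = academics, columns = indicators (P_j, C_j, w_j, e_j).
X = np.array([
    [1.0, 2.0, 0.5, 3.2],
    [2.0, 2.5, 0.7, 3.1],
    [3.0, 4.0, 0.6, 3.5],
    [4.0, 3.5, 0.9, 3.0],
    [5.0, 5.0, 0.8, 3.4],
])
ratios, loadings = pca(X)
```

Standardising first is essential here, since the four indicators have very different numerical ranges; otherwise the component with the largest raw variance (e.g., \(w_j\)) would dominate the decomposition.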
Empirical results
Relevance of normalizations
A relatively laborious task of the present study was the construction of bibliometric indicators for research evaluation (cf. Eqs. 1 and 2). A large sample of academics was involved: 3,444 at the Italian level (“All Italian universities”), of which 251 from PoliTO (cf. Table 2). The several normalizations implemented by the indicators in use help avoid undue comparisons (cf. Section "Data collection and indicators relating to research"); Section A.2 (in the appendix) provides some evidence of this.
Indicators resulting from the analysis
For each of the 251 PoliTO academics, both the aggregate indicators relating to research (i.e., \(P_{j}\) and \(C_{j}\)) and those relating to teaching (i.e., \(w_{j}\) and \(e_{j}\)) were determined. Table 5 (in the appendix) collects these indicators for each academic, with additional information regarding academic position (i.e., assistant, associate, or full professor) and gender (i.e., male or female). Figure 9 (in the appendix) contains relevant histograms and descriptive statistics.
The distributions of \(P_{j}\), \(C_{j}\) and \(w_{j}\) are right-skewed, while that of \(e_{j}\) is left-skewed. Interestingly, the \(e_{j}\) values are concentrated between a minimum of 2.64 and a maximum of 3.86. This may denote a certain homogeneity in the teaching quality of PoliTO academics, but also a biased use of the four-level scale by respondents (cf. Section "Data collection and indicators relating to teaching"), resulting in a reduction of its potential discriminatory power (Franceschini et al., 2019). The right-skewness of the \(P_{j}\), \(C_{j}\) and \(w_{j}\) distributions denotes the presence of so-called “outliers” in the right-hand tail, with significantly higher performance than the rest of the population (e.g., one academic with \(P_{j}\) > 4 and another with \(C_{j}\) > 9).
It is interesting to note substantial agreement between the \(P_{j}\) and \(C_{j}\) distributions related to PoliTO academics and those from all the national universities (see Fig. 2). This indicates that the overall performance of PoliTO's population reflects quite well that of the corresponding national counterpart universities.
Results of correlation analysis
Results of bivariate analysis
A preliminary investigation revealed the general absence of non-linear relationships between the pairs of indicators. For example, there is no non-linear relationship (e.g., higher-order polynomial, exponential, logarithmic, etc.) appearing from the scatter plot of \(w_{j}\) versus \(P_{j}\) values in Fig. 3. Similar considerations can be extended to the other pairs of indicators.
Table 3 contains the R values for the pairs of indicators of interest, accompanied by the p-value for the significance test of R being zero (i.e., null hypothesis of absence of correlation) (Ross, 2021). The correlation analysis was carried out considering both academics in their totality (“Total”) and subsets by “Discipline”, “Academic position” and “Gender”.
From a preliminary analysis of Table 3, statistically significant correlations (i.e., p-value < 0.05) are very few. Considering the totality of academics (“Total”), there is only a weak positive correlation (R ≈ +0.224) between \(P_{j}\) and \(C_{j}\) values—confirming that research productivity and impact tend to “go hand in hand” (Sandström and van den Besselaar, 2016)—and an even weaker positive correlation (R ≈ +0.129) between \(P_{j}\) and \(w_{j}\) values. Interestingly, these correlations may disappear and new ones may emerge when considering subsets of academics (e.g., by discipline, academic position or gender). For example, the correlation between \(P_{j}\) and \(w_{j}\) is not significant for many subsets, while some correlations are only present at the level of specific disciplines, such as that between \(C_{j}\) and \(w_{j}\) for disciplines “A. Chemical foundations of technologies” and “D. Thermal engineering and industrial energy systems”. We will return to these results in more detail later (cf. Section "Answering the research questions").
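The procedure behind Table 3 can be illustrated with a minimal sketch in Python (not the authors' code; the indicator names mirror the paper's \(P_{j}\) and \(w_{j}\), while the data are invented): Pearson's R and its significance test are computed for the whole sample and then repeated subset by subset.

```python
# Sketch of the bivariate analysis: Pearson's R with its p-value
# (null hypothesis: no correlation), for "Total" and for subsets.
# Data are invented for illustration only.
import pandas as pd
from scipy.stats import pearsonr

df = pd.DataFrame({
    "P": [0.8, 1.2, 0.5, 2.1, 0.9, 1.6, 0.7, 1.1],   # research productivity
    "w": [10, 14, 8, 16, 9, 12, 7, 11],              # teaching workload
    "gender": ["M", "F", "M", "F", "M", "F", "M", "F"],
})

# Whole sample ("Total")
r, p = pearsonr(df["P"], df["w"])
print(f"Total: R={r:+.3f}, p-value={p:.3f}")

# Subsets (here by gender; discipline and academic position work the same way)
for label, sub in df.groupby("gender"):
    r, p = pearsonr(sub["P"], sub["w"])
    print(f"{label}: R={r:+.3f}, p-value={p:.3f} (N={len(sub)})")
```

In practice one would also record N per subset, since—as discussed below—R loses reliability for small subsets.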
Finally, it should be remembered that the R coefficient tends to lose its effectiveness for subsets with fewer than 25–30 units (Ross, 2021), with the risk of revealing false correlations (Footnote 4). Therefore, almost all correlations concerning disciplinary subsets are of little relevance (i.e., N < 25, cf. Table 3).
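The small-sample caveat is easy to verify numerically. In the extreme case noted in Footnote 4, two completely unrelated random points always yield a "perfect" correlation; a toy sketch (assumed NumPy-based, not from the paper):

```python
# With only two observations, Pearson's R is +/-1 by construction:
# two points always lie exactly on a line, regardless of any real link.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=2)   # two unrelated random values
y = rng.normal(size=2)
r = np.corrcoef(x, y)[0, 1]
print(round(abs(r), 6))  # |R| = 1.0 for N = 2, despite no real relationship
```

This is why the disciplinary subsets with N < 25 in Table 3 are treated as of little relevance.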
Results of multivariate analysis
The PCA was applied to the indicators of interest, which are essentially quantitative variables with a relatively large sample size (251 units). To facilitate the comparison, the indicators—which have different numerical ranges and variances—were previously standardised (Footnote 5); this operation was carried out automatically through the Minitab® statistical software. Figure 4 summarises the PCA results. It is worth noting that (i) each principal component is a linear combination of the source indicators and (ii) the principal components are mutually orthogonal (uncorrelated) variables by construction (Abdi & Williams, 2010; Bro & Smilde, 2014).
The summary table (a) and scree plot (b) show that the first two principal components—i.e., PC1 and PC2, both with eigenvalue > 1 (Bro & Smilde, 2014)—together explain a significant portion of the total variance, i.e., 0.345 + 0.265 = 0.610. Regarding PC1, the predominant coefficients are those relating to \(P_{j}\) (0.669) and \(C_{j}\) (0.649), while regarding PC2, the predominant coefficients are those relating to \(w_{j}\) (0.653) and \(e_{j}\) (−0.752). This confirms the decoupling between research and teaching that emerged from the bivariate analysis: PC1, which is predominantly linked to the research indicators, is uncorrelated with PC2, which is predominantly linked to the teaching indicators. Additionally, the loading plot (c) and biplot (d) confirm the strong correlation between \(P_{j}\) and \(C_{j}\) and the weak/absent correlation between the other pairs of indicators (Footnote 6).
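The workflow just described (standardisation followed by PCA, then inspection of explained variance and loadings) can be sketched as follows. This is an assumed Python reconstruction, not the authors' Minitab session; the synthetic data are only shaped loosely like the four indicators \(P_{j}\), \(C_{j}\), \(w_{j}\), \(e_{j}\).

```python
# Sketch: standardise four indicators and run PCA, mirroring the
# analysis of Fig. 4. Data are synthetic, for illustration only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 251
P = rng.gamma(2.0, 0.5, n)            # right-skewed, like P_j
C = 0.8 * P + rng.gamma(2.0, 0.5, n)  # partly tracks P, like C_j
w = rng.gamma(3.0, 4.0, n)            # right-skewed, like w_j
e = 4.0 - rng.gamma(2.0, 0.2, n)      # left-skewed, like e_j

# z-score standardisation, then PCA on the (251 x 4) indicator matrix
X = StandardScaler().fit_transform(np.column_stack([P, C, w, e]))
pca = PCA().fit(X)

print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 3))
print("PC1 loadings (P, C, w, e):", np.round(pca.components_[0], 3))
print("PC2 loadings (P, C, w, e):", np.round(pca.components_[1], 3))
```

With standardised inputs, each loading vector plays the role of the coefficients discussed above, and the explained variance ratios correspond to the scree plot in Fig. 4(b).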
Answering the research questions
Returning to the two research questions (cf. Section "Introduction and literature review"), we provide specific answers in light of the analysis results.
(RQ#1) “Is there any (direct/inverse) relationship between the research and teaching workload of individual academics?”. In contrast with the findings of other studies, the present one shows no negative link between research and teaching workload (Burke-Smalley et al., 2017). Therefore, at least in the limited context of PoliTO academics, the basic idea of the "scarcity model" (cf. Section "Introduction and literature review") would seem to be contradicted. On the other hand, a weak positive link seems to emerge, suggesting that academics who are more productive in research (\(P_{j}\)) also tend to deliver more teaching (\(w_{j}\)). This may stem from the fact that the more active academics tend to be equally active in both research and teaching, by virtue of a better ability to organize their time, while the least active tend to be equally inactive in both contexts. Another reason might be that academics who deliver more teaching often have collaborators and a larger pool of students who may also support their research—e.g., through theses, dissertations, etc.
(RQ#2) “Is there any (direct/inverse) relationship between the quality of research results and that of teaching results of individual academics?”. The lack of correlation indicates that the academics who produce research with the highest average impact/diffusion in the scientific community are not necessarily the most didactically effective. This result is in line with the findings of other studies, including those by Hattie and Marsh (1996) and Marsh and Hattie (2002), which—while relying on different methodological approaches and on a meta-analysis of dozens of other heterogeneous studies—conclude that research and teaching quality are nearly uncorrelated. These authors themselves provide a plausible explanation for this result, which can be summarized as follows: “Those academics who devote more time to research have higher quality results (Footnote 7), but those who devote more time to teaching do not appear to be more effective teachers (Footnote 8). Therefore, assuming (but not conceding) that there may be a relationship (positive or negative) between the workload in teaching and the workload in research, this does not imply the existence of any relationship in terms of the quality of the respective results”. Considering the subsets of academics, particularly discipline “O. Information processing systems”, one can even observe a relatively weak negative correlation. It is not easy to find a plausible justification for this local behaviour; perhaps those who offer more effective and creative teaching tend, by contrast, to retreat into more routine research, and vice versa.
The above answers to the research questions certainly have practical implications for funding agencies and university administrators. We believe that some earlier considerations by Marsh and Hattie (2002) fit this framework very well: “Good researchers are neither more nor less likely to be effective teachers than are poor researchers. Good teachers are neither more nor less likely to be productive researchers than are poor teachers. There are roughly equal numbers of academics who—relative to other academics—are: (a) good at both teaching and research, (b) poor at both teaching and research, (c) good at teaching but poor at research, and (d) poor at teaching but good at research. Thus, personnel selection and promotion decisions must be based on separate measures of teaching and research and on how academics provide evidence that their research and teaching are mutually supporting”.
Conclusions
Main findings and implications
This article focused on the (presumed) link between research and teaching in academia, considering each of them from the dual perspective of workload and quality of results. Partially contrasting with other state-of-the-art studies and the apparent academic myth that these two activities are complementary (Harland, 2016; Teichler, 2017), it revealed some decoupling between them. Firstly, there seems to be no negative link to support considerations like: “Those who do more teaching tend to neglect research more”. Only a few weak negative correlations—which are, however, not statistically significant—are noted at the level of some disciplines (e.g., “O. Information processing systems” and “H. Manufacturing technology and systems”, in Table 3). Secondly, the quality of teaching results seems to be unrelated to both (i) the research workload and (ii) the quality of research results. This to some extent contradicts the findings of other studies, according to which “Those who excel in research are more likely to excel in teaching” (Cadez et al., 2017). The results of a bivariate analysis based on Pearson’s correlation coefficient between pairs of indicators (with the relevant significance test) were confirmed by a multivariate analysis based on PCA.
At the same time, it is somewhat surprising that the conclusion that teaching and research are nearly uncorrelated activities is fully in line with the results of previous studies carried out in a different context and period and with a different methodological approach (Marsh & Hattie, 2002). The results of the study can be taken into consideration by university administrators and those involved in formulating incentive strategies for academics. Furthermore, the methodological framework adopted could be replicated in other universities to observe possible similarities/differences.
A relevant aspect of this research is the use of quantitative indicators built on a relatively large database. In fact, discipline normalisations were implemented based on the bibliometric statistics of more than 3,000 other Italian academics, ensuring a fair comparison among PoliTO academics. Furthermore, the indicator \(e_{j}\), which captures teaching effectiveness, is constructed from several thousand student-satisfaction ratings by B.Sc. and M.Sc. students.
Limitations
This research has several limitations, summarised as follows. The indicators in use—although bibliometrically rigorous—are still proxies for what they are meant to represent (i.e., workload and quality of results in research and teaching). Since the study is limited to academics from a single technical university (i.e., PoliTO), results do not necessarily apply to other technical or—a fortiori—generalist universities. Moreover, the assessment of research and teaching workloads could have been deepened with additional specific data (currently being collected), such as data on (i) ongoing research projects and (ii) students tutored for internships or dissertations. Lastly, the comparison between individual academics did not consider the organizational and managerial tasks that they carried out.
Future research
Regarding the future, several research activities will be undertaken to overcome at least part of the previous limitations. A factorial plan (Ross, 2021) will be constructed to assess more precisely the effects and interactions of certain contingent factors on the link between research and teaching, such as discipline, gender, academic position, career stage, contractual obligations, and incentives/bonuses of academics. Then, the study will be extended to other technical and generalist universities, once a way has been found to uniformly assess and compare the teaching performance of academics.
Notes
For example, one academic may publish one article per year in a high-quality journal and another may publish four articles per year in low-quality journals, yet both may spend the same number of hours (i.e., workload) for their respective output. It would not be fair to say that the second academic has four times more research workload than the first one.
In the Italian university system, each credit point corresponds approximately to 25 working hours (European Commission, 2017).
Rare exceptions are academics with part-time contracts or enjoying teaching reductions as they serve important institutional roles (e.g., management of departments, faculties, colleges, graduate schools, etc.).
In the extreme case of a subset of only two units, the correlation would by definition always be perfect (i.e., R = +1 or −1).
Standardisation was performed through the so-called z-score: \(z = \frac{x - \mu}{\sigma}\), where x is the observed indicator, and μ and σ are the sample mean and sample standard deviation, respectively (Ross, 2021).
More precisely, the cosine of the angle between pairs of vectors indicates the correlation between the corresponding indicators. Highly correlated indicators (such as \(P_{j}\) and \(C_{j}\)) point in similar directions; uncorrelated indicators (such as \(w_{j}\) and \(e_{j}\)) are nearly perpendicular to each other. Furthermore, the cosine of the angle between a vector and an axis indicates the importance of the contribution of the corresponding indicator to the principal component (e.g., \(P_{j}\) and \(C_{j}\) contribute mainly to PC1, while \(w_{j}\) and \(e_{j}\) contribute mainly to PC2) (Abdi and Williams, 2010; Bro and Smilde, 2014).
References
Abdi, H., & Williams, L. J. (2010). Principal component analysis. Wiley Interdisciplinary Reviews: Computational Statistics, 2(4), 433–459.
Abramo, G., D’Angelo, C. A., & Di Costa, F. (2019). The collaboration behavior of top scientists. Scientometrics, 118(1), 215–232.
Bar-Ilan, J., & Halevi, G. (2018). Temporal characteristics of retracted articles. Scientometrics, 116(3), 1771–1783.
Barnett, B. (1992). Teaching and research are inescapably incompatible. Chronicle of Higher Education, 38(39), A40.
Björk, B. C., & Solomon, D. (2013). The publishing delay in scholarly peer-reviewed journals. Journal of Informetrics, 7(4), 914–923.
Braun, T., Glänzel, W., & Schubert, A. (2010). On Sleeping Beauties, Princes and other tales of citation distributions…. Research Evaluation, 19(3), 195–202.
Brennan, L., Cusack, T., Delahunt, E., Kuznesof, S., & Donnelly, S. (2019). Academics’ conceptualisations of the research-teaching nexus in a research-intensive Irish university: A dynamic framework for growth & development. Learning and Instruction, 60, 301–309.
Brew, A. (1999). Research and teaching: Changing relationship in a changing context. Studies in Higher Education, 24, 291–301.
Bro, R., & Smilde, A. K. (2014). Principal Component Analysis. Analytical Methods, 6(9), 2812–2831.
Brownell, S.E., Tanner, K.D. (2012). Barriers to faculty pedagogical change: Lack of training, time, incentives, and… tensions with professional identity?. CBE—Life Sciences Education, 11(4), 339–346.
Burke-Smalley, L. A., Rau, B. L., Neely, A. R., & Evans, W. R. (2017). Factors perpetuating the research-teaching gap in management: A review and propositions. The International Journal of Management Education, 15(3), 501–512.
Cadez, S., Dimovski, V., & Zaman Groff, M. (2017). Research, teaching and performance evaluation in academia: The salience of quality. Studies in Higher Education, 42(8), 1455–1473.
Calude, C. S., & Longo, G. (2017). The deluge of spurious correlations in big data. Foundations of Science, 22(3), 595–612.
Coombs, G., & Elden, M. (2004). Problem-based learning as social inquiry—PBL and management education. Journal of Management Education, 28(5), 523–535.
Consiglio Universitario Nazionale. (2022). Academic disciplines list for Italian university research and teaching. Retrieved February 2023, from https://www.cun.it/uploads/storico/settori_scientifico_disciplinari_english.pdf.
D’Angelo, C. A., & van Eck, N. J. (2020). Collecting large-scale publication data at the level of individual researchers: A practical proposal for author name disambiguation. Scientometrics, 123, 883–907.
De Bellis, N. (2009). Bibliometrics and citation analysis: From the science citation index to cybermetrics. Scarecrow Press.
Elton, L. (2001). Research and teaching: What are the real relationships? Teaching in Higher Education, 6(1), 43–56.
European Commission—Directorate-General for Education, Youth, Sport and Culture. (2017). ECTS users' guide 2015. Publications Office. https://data.europa.eu/doi/10.2766/87192.
Feldman, K. A. (1987). Research productivity and scholarly accomplishment of college teachers as related to their instructional effectiveness: A review and exploration. Research in Higher Education, 26, 227–298.
Franceschini, F., & Maisano, D. (2014). Sub-field normalization of the IEEE scientific journals based on their connection with Technical Societies. Journal of Informetrics, 8(3), 508–533.
Franceschini, F., Maisano, D., Perotti, A., & Proto, A. (2010). Analysis of the ch-index: An indicator to evaluate the diffusion of scientific research output by citers. Scientometrics, 85(1), 203–217.
Franceschini, F., Maisano, D., & Mastrogiacomo, L. (2016). Empirical analysis and classification of database errors in Scopus and Web of Science. Journal of Informetrics, 10(4), 933–953.
Franceschini, F., & Maisano, D. (2017). Critical remarks on the Italian research assessment exercise VQR 2011–2014. Journal of Informetrics, 11(2), 337–357.
Franceschini, F., Galetto, M., & Maisano, D. (2019). Designing performance measurement systems: Theory and practice of key performance indicators. Springer.
Franceschini, F., Maisano, D., & Mastrogiacomo, L. (2022). Rankings and decisions in engineering: Conceptual and practical insights. Springer International Publishing.
Gendron, Y. (2008). Constituting the academic performer: The spectre of superficiality and stagnation in academia. European Accounting Review, 17(1), 97–127.
Glänzel, W., & Moed, H. F. (2013). Opinion paper: Thoughts and facts on bibliometric indicators. Scientometrics, 96(1), 381–394.
González-Pereira, B., Guerrero-Bote, V. P., & Moya-Anegón, F. (2010). A new approach to the metric of journals’ scientific prestige: The SJR indicator. Journal of Informetrics, 4(3), 379–391.
Harland, T. (2016). Teaching to enhance research. Higher Education Research & Development, 35(3), 461–472.
Hattie, J., & Marsh, H. W. (1996). The relationship between research and teaching: A meta-analysis. Review of Educational Research, 66(4), 507–542.
Healey, M., Jordan, J., Pell, B., & Short, C. (2010). The research–teaching nexus: A case study of students’ awareness, experiences and perceptions of research. Innovations in Education and Teaching International, 47(2), 235–246.
Henriksen, D. (2016). The rise in co-authorship in the social sciences (1980–2013). Scientometrics, 107(2), 455–476.
Hollywood, A., McCarthy, D., Spencely, C., & Winstone, N. (2020). ‘Overwhelmed at first’: The experience of career development in early career academics. Journal of Further and Higher Education, 44(7), 998–1012.
Karlsson, S. (2017). Evaluation as a travelling idea: Assessing the consequences of Research Assessment Exercises. Research Evaluation, 26(2), 55–65.
Kawashima, H., & Tomizawa, H. (2015). Accuracy evaluation of Scopus Author ID based on the largest funding database in Japan. Scientometrics, 103(3), 1061–1071.
Koltun, V., & Hafner, D. (2021). The h-index is no longer an effective correlate of scientific reputation. PLoS ONE, 16(6), e0253397.
Lawson, T., Çakmak, M., Gündüz, M., & Busher, H. (2015). Research on teaching practicum–a systematic review. European Journal of Teacher Education, 38(3), 392–407.
Le Heron, R., Baker, R., McEwen, L., et al. (2006). Co-learning: Re-linking research and teaching in geography. Journal of Geography in Higher Education, 30(1), 77–87.
Marsh, H. W., & Hattie, J. (2002). The relation between research productivity and teaching effectiveness: Complementary, antagonistic, or independent constructs? The Journal of Higher Education, 73(5), 603–641.
Maisano, D. A., Mastrogiacomo, L., & Franceschini, F. (2020). Short-term effects of non-competitive funding to single academic researchers. Scientometrics, 123(3), 1261–1280.
Ministero dell’Istruzione (2022). Settori Concorsuali e Settori Scientifico-Disciplinari. Retrieved February 2023, from https://www.miur.gov.it/settori-concorsuali-e-settori-scientifico-disciplinari.
Moed, H. F. (2010). Measuring contextual citation impact of scientific journals. Journal of Informetrics, 4(3), 265–277.
Moya, S., Prior, D., & Rodríguez-Pérez, G. (2015). Performance-based incentives and the behavior of accounting academics: Responding to changes. Accounting Education, 24(3), 208–232.
Perianes-Rodriguez, A., Waltman, L., & Van Eck, N. J. (2016). Constructing bibliometric networks: A comparison between full and fractional counting. Journal of Informetrics, 10(4), 1178–1195.
Prathap, G., Nishy, P., Savithri, S. (2016). On the orthogonality of indicators of journal performance. Current Science, 876–881.
Roberts, F. S. (1979). Measurement theory. Addison-Wesley.
Ross, S. M. (2021). Introduction to probability and statistics for engineers and scientists (6th ed.). Academic Press.
Sandström, U., & van den Besselaar, P. (2016). Quantity and/or quality? The importance of publishing many papers. PLoS ONE, 11(11), e0166149.
Shortlidge, E. E., & Eddy, S. L. (2018). The trade-off between graduate student research and teaching: A myth? PLoS ONE, 13(6), e0199576.
Sinclair, M. (2013). Heidegger, von Humboldt and the idea of the university. Intellectual History Review, 23(4), 499–515.
Stack, S. (2003). Research productivity and student evaluation of teaching in social science classes: A research note. Research in Higher Education, 44(5), 539–556.
Teichler, U. (2017). Teaching versus research: An endangered balance? In Challenges and options: The academic profession in Europe (pp. 11–28). The Changing Academic Profession in International Comparative Perspective book series (CHAC, volume 18). Springer, Cham.
Trautmann, N. M., & Krasny, M. E. (2006). Integrating teaching and research: A new model for graduate education? BioScience, 56(2), 159–165.
Vermeir, K., Kelchtermans, G., & März, V. (2017). Implementing artifacts: An interactive frame analysis of innovative educational practices. Teaching and Teacher Education, 63, 116–125.
Visser, M., van Eck, N. J., & Waltman, L. (2021). Large-scale comparison of bibliographic data sources: Scopus, web of science, dimensions, crossref, and microsoft academic. Quantitative Science Studies, 2(1), 20–41.
Acknowledgements
This research was partially funded by the European Commission, through the Erasmus+ Project “REMOTE: Assessing and evaluating remote learning practices in STEM” (code 2022-1-ES01-KA220-HED-000085829), within the “KA220-HED Cooperation partnerships in higher education” programme.
Funding
Open access funding provided by Politecnico di Torino within the CRUI-CARE Agreement.
Appendix
Example of teaching-evaluation questionnaire
Insight into normalisations
This section provides some evidence of the relevance of the normalisations introduced for the construction of bibliometric indicators (cf. Eqs. 1 and 2).
Normalisation by number of co-authors
Figure 5 exemplifies the distribution of the average number of co-authors for the papers examined in the discipline “G. Design methods for industrial engineering” (cf. Table 2).
A certain dispersion of the distribution is noticeable: the average number of co-authors for the papers of individual academics ranges from a minimum of 1.6 to a maximum of 10.7, which makes the proposed fractionalization reasonable (De Bellis, 2009; Henriksen, 2016). Similar considerations can be made for the other disciplines.
Normalization by scientific discipline
The boxplot in Fig. 6 confirms the existence of systematic differences in the propensity to publish between academics from different disciplines. For example, the box of “O. Information processing systems” is significantly smaller than that of “J. Materials science and technology”, denoting a systematically lower propensity to publish. This confirms the need to introduce the so-called discipline normalisation, implemented by \(P_{j}\) (cf. Eq. 1).
Similarly, the boxplots in Fig. 7 summarise the distributions of the average citations per paper for individual academics, referring to papers issued in 2018, 2019 and 2020 respectively, discipline by discipline.
These diagrams show systematic differences between the various disciplines in terms of propensity to obtain citations. Consider, for example, the largely non-overlapping boxplots of “J. Materials science and technology” and “P. Mathematical analysis”, for papers issued in any year. This confirms the need for the normalisation by discipline, implemented by \(C_{j}\) (cf. Eq. 2).
Age normalization
The box plot in Fig. 8 shows that the age of a paper significantly influences the “maturation” of its citation impact (Glänzel & Moed, 2013): older papers tend to obtain more citations on average than more recent ones. This confirms the appropriateness of the normalisation by age, implemented by \(C_{j}\) (cf. Eq. 2).
Resulting indicators
Table 5 contains the indicators (\(P_{j}\), \(C_{j}\), \(w_{j}\) and \(e_{j}\)) pertaining to each (j-th) academic, while Fig. 9 contains the histograms of the corresponding distributions and related statistics.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Maisano, D.A., Mastrogiacomo, L. & Franceschini, F. Empirical evidence on the relationship between research and teaching in academia. Scientometrics 128, 4475–4507 (2023). https://doi.org/10.1007/s11192-023-04770-x