Abstract
Research funding is essential to promote the scientific activity of researchers and the dissemination of their results. Broadly speaking, funding schemes can be classified into two categories—competitive and non-competitive—each with advantages and shortcomings that are widely discussed in the scientific literature. The researchers of Politecnico di Torino (one of the major Italian technical universities) have recently been funded through a non-competitive scheme, consisting of 14k€ per researcher in each of the last three years (i.e., from 2017 to 2019), for a total of 42k€. This somewhat unusual initiative—also called “diffused funding” (DF)—represents an important opportunity to investigate the effects of a relatively large allocation of non-competitive funding to single researchers. In this regard, this paper investigates the effects of the DF on researchers’ scientific output along four dimensions of analysis: publishing productivity, publishing diffusion/impact, journal reputation, and international research relations. Preliminary results do not indicate any improvement in publication output, at least in the short term.
Notes
As a rule, salaries are progressively increased depending on the length of service, with two-yearly increments.
In Italian, “Consiglio di Amministrazione”.
In Italian, “finanziamento diffuso”.
In Italian, “Settore Scientifico Disciplinare”, which means “Scientific and Disciplinary Sector”.
The “natural performance” may be defined as the performance that PoliTO researchers tended to exhibit before benefiting from the DF.
As for dimension (2), this phenomenon is partly hidden, since the so-called “citation inflation” is compensated by the shorter time that more recent journal articles have had to accumulate citations.
References
Abramo, G., Cicero, T., & D’Angelo, C. A. (2011). The dangers of performance-based research funding in non-competitive higher education systems. Scientometrics,87(3), 641–654.
Abramo, G., Cicero, T., & D’Angelo, C. A. (2013). The impact of unproductive and top researchers on overall university research performance. Journal of Informetrics,7(1), 166–175.
Abramo, G., & D'Angelo, C. A. (2015). The VQR, Italy's second national research assessment: Methodological failures and ranking distortions. Journal of the Association for Information Science and Technology,66(11), 2202–2214.
Auranen, O., & Nieminen, M. (2010). University research funding and publication performance—An international comparison. Research Policy,39(6), 822–834.
Bar-Ilan, J., & Halevi, G. (2018). Temporal characteristics of retracted articles. Scientometrics,116(3), 1771–1783.
Bolli, T., & Somogyi, F. (2011). Do competitively acquired funds induce universities to increase productivity? Research Policy,40(1), 136–147.
Brockwell, P. J., & Davis, R. A. (2016). Introduction to time series and forecasting. Switzerland: Springer Nature.
Butler, L. (2003). Explaining Australia’s increased share of ISI publications—The effects of a funding formula based on publication counts. Research Policy,32(1), 143–155.
Demetrescu, C., Lupia, F., Mendicelli, A., Ribichini, A., Scarcello, F., & Schaerf, M. (2019). On the Shapley value and its application to the Italian VQR research assessment exercise. Journal of Informetrics,13(1), 87–104.
Fedderke, J. W., & Goldschmidt, M. (2015). Does massive funding support of researchers work?: Evaluating the impact of the South African research chair funding initiative. Research Policy,44(2), 467–482.
Franceschini, F., & Maisano, D. (2014). Sub-field normalization of the IEEE scientific journals based on their connection with Technical Societies. Journal of Informetrics,8(3), 508–533.
Franceschini, F., & Maisano, D. (2017). Critical remarks on the Italian research assessment exercise VQR 2011–2014. Journal of Informetrics,11(2), 337–357.
Franceschini, F., Maisano, D., & Mastrogiacomo, L. (2013). Evaluating research institutions: the potential of the success-index. Scientometrics,96(1), 85–101.
Geuna, A., & Martin, B. R. (2003). University research evaluation and funding: An international comparison. Minerva,41(4), 277–304.
HEFCE (Higher Education Funding Council for England). (2017). Guide to funding 2017–18: How HEFCE allocates its funds. Retrieved April 4, 2020, from https://dera.ioe.ac.uk/29341/.
Hicks, D. (2012). Performance-based university research funding systems. Research Policy,41(2), 251–261.
Horta, H., Huisman, J., & Heitor, M. (2008). Does competitive research funding encourage diversity in higher education? Science and Public Policy,35(3), 146–158.
Jacob, B. A., & Lefgren, L. (2011). The impact of research grant funding on scientific productivity. Journal of Public Economics,95(9–10), 1168–1177.
Kendall, M. G. (1973). Time series. London: Griffin. ISBN 9780852642207.
Laudel, G. (2006). The art of getting funded: how scientists adapt to their funding conditions. Science and Public Policy,33(7), 489–504.
Maisano, D., Mastrogiacomo, L., & Franceschini, F. (2019). Allocation of non-competitive research funding to single researchers: preliminary analysis of the short-term effects. In Proceedings of the 17th international conference on scientometrics and informetrics (ISSI2019) (pp. 259–270), 2–5 September 2019, Rome, Italy, ISBN: 978-88-3381-118-5.
Mateos-González, J. L., & Boliver, V. (2019). Performance-based university funding and the drive towards ‘institutional meritocracy’ in Italy. British Journal of Sociology of Education,40(2), 145–158.
MIUR (Ministero dell’Istruzione dell’Università e della Ricerca). (2020a). MIUR Settori scientifico-disciplinari. Retrieved April 4, 2020, from https://cercauniversita.cineca.it/php5/settori/index.php.
MIUR (Ministero dell’Istruzione dell’Università e della Ricerca). (2020b). MIUR Settori scientifico-disciplinari. Retrieved April 4, 2020, from https://cercauniversita.cineca.it/php5/docenti/cerca.php.
Moed, H. F. (2010a). CWTS crown indicator measures citation impact of a research group’s publication oeuvre. Journal of Informetrics,3(3), 436–438.
Moed, H. F. (2010b). Measuring contextual citation impact of scientific journals. Journal of Informetrics,4(3), 265–277.
Muscio, A., Quaglione, D., & Vallanti, G. (2013). Does government funding complement or substitute private research funding to universities? Research Policy,42(1), 63–75.
Petersen, A. M. (2018). Multiscale impact of researcher mobility. Journal of the Royal Society Interface, 15. https://doi.org/10.1098/rsif.2018.0580.
Reardon, S. (2017). NIH to limit the amount of grant money a scientist can receive. Nature News. https://doi.org/10.1038/nature.2017.21930.
Ross, S. M. (2009). Introduction to probability and statistics for engineers and scientists. New York: Academic Press.
Van Den Besselaar, P., Heyman, U., & Sandström, U. (2017). Perverse effects of output-based research funding? Butler’s Australian case revisited. Journal of Informetrics,11(3), 905–918.
Wang, J., Lee, Y. N., & Walsh, J. P. (2018). Funding model and creativity in science: Competitive versus block funding and status contingency effects. Research Policy,47(6), 1070–1083.
Wolszczak-Derlacz, J. (2017). An evaluation and explanation of (in)efficiency in higher education institutions in Europe and the US with the application of two-stage semi-parametric DEA. Research Policy,46(9), 1595–1605.
Acknowledgements
This paper extends and completes the research presented by the authors at ISSI2019 (17th International Conference on Scientometrics and Informetrics) in Rome (Italy), 2–5 September 2019 (Maisano et al. 2019).
This paper is dedicated to the memory of Judit Bar-Ilan (1958–2019), an outstanding scholar and an inimitable friend and colleague.
Electronic supplementary material
Appendix
Additional statistical tests
Table 7 contains the results of an additional Kendall’s Turning Point test for each of the four dimensions considered (Kendall 1973; Brockwell and Davis 2016). This test checks the randomness of each time series using the totality of the data (i.e., from 2008 to 2019). For the first three time series, the null hypothesis of randomness cannot be rejected at a 95% confidence level (p-values > 0.05); for the last one, randomness is doubtful (p-value = 0.013), owing to the relatively high number of turning points. Since this result may be affected by the relatively small number of data points, it should be interpreted with caution (Ross 2009).
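As an illustration, the Turning Point test can be sketched as follows. This is a minimal implementation based on the standard normal approximation for the number of turning points under the i.i.d. hypothesis; the yearly series used here is hypothetical and does not reproduce the paper’s actual data.

```python
import math
from scipy.stats import norm

def turning_point_test(series):
    """Kendall's Turning Point test for randomness of a time series.

    Counts interior points that are strict local maxima or minima and
    compares the count with its expectation under the i.i.d. hypothesis:
    E[T] = 2(n - 2)/3, Var[T] = (16n - 29)/90 (normal approximation).
    Returns (number of turning points, two-sided p-value).
    """
    n = len(series)
    t = sum(
        1
        for i in range(1, n - 1)
        if (series[i - 1] < series[i] > series[i + 1])
        or (series[i - 1] > series[i] < series[i + 1])
    )
    mean = 2 * (n - 2) / 3
    var = (16 * n - 29) / 90
    z = (t - mean) / math.sqrt(var)
    p_value = 2 * (1 - norm.cdf(abs(z)))
    return t, p_value

# Hypothetical yearly values for a 2008-2019 series (12 observations)
series = [3.1, 3.4, 3.2, 3.8, 3.6, 4.0, 3.9, 4.2, 4.1, 4.4, 4.3, 4.6]
t, p = turning_point_test(series)
print(f"turning points = {t}, p-value = {p:.3f}")
```

An unusually high count of turning points (a “zig-zag” series, as in this sketch) yields a small p-value and casts doubt on randomness, which mirrors the behaviour described for the fourth time series.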
Figures 3 and 4 illustrate two Anderson–Darling normality tests at the 95% confidence level for each of the four time series in Table 6, considering respectively (1) the 2008–2016 data (i.e., excluding the effects of the DF) and (2) the 2008–2019 data (i.e., including the effects of the DF). The authors are aware that the power of these tests is not very high, owing to the relatively limited number of data points (i.e., 9 in the first case and 12 in the second) (Ross 2009). Nevertheless, it is interesting to note that, for all the time series considered, the null hypothesis of normality cannot be rejected.
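A test of this kind can be reproduced with `scipy.stats.anderson`, which returns the A² statistic together with the critical values for several significance levels; again, the nine-point series below is a hypothetical stand-in for one of the 2008–2016 series, not the paper’s data.

```python
import numpy as np
from scipy.stats import anderson

# Hypothetical yearly values standing in for one 2008-2016 time series
data_2008_2016 = np.array([3.1, 3.4, 3.2, 3.8, 3.6, 4.0, 3.9, 4.2, 4.1])

result = anderson(data_2008_2016, dist='norm')

# Normality is rejected when the statistic exceeds the critical value
# associated with the chosen significance level (5% here, i.e. a 95%
# confidence level).
crit_5pct = result.critical_values[list(result.significance_level).index(5.0)]
reject = result.statistic > crit_5pct
print(f"A^2 = {result.statistic:.3f}, 5% critical value = {crit_5pct:.3f}, "
      f"reject normality: {reject}")
```

With so few observations the critical values are large relative to typical A² statistics, which is another way of seeing the low power the authors acknowledge.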
Cite this article
Maisano, D.A., Mastrogiacomo, L. & Franceschini, F. Short-term effects of non-competitive funding to single academic researchers. Scientometrics 123, 1261–1280 (2020). https://doi.org/10.1007/s11192-020-03449-x