Abstract
The predatory nature of a journal is in constant debate because it depends on multiple factors, which keep evolving. The classification of a journal as predatory, or not, is no longer exclusively associated with its open access status, with inclusion or exclusion on perceived reputable academic indexes, or with placement on whitelists or blacklists. Inclusion in the latter may itself be determined by a host of criteria, and may be riddled with type I errors (e.g., erroneous inclusion of a truly predatory journal in a whitelist) and/or type II errors (e.g., erroneous exclusion of a truly valid scholarly journal from a whitelist). While extreme cases of predatory publishing behavior may be clear cut, with true predatory journals displaying ample predatory properties, journals in non-binary grey zones of predatory criteria are difficult to classify. They may have some legitimate properties, but also some illegitimate ones. In such cases, it might be too extreme to refer to such entities as “predatory”. Simply referring to them as “potentially predatory” or “borderline predatory” also does little to discern a predatory entity from an unscholarly, low-quality, unprofessional, or exploitative one. Faced with the limitations caused by this gradient of predatory dimensionality, this paper introduces a novel credit-like rating system, based in part on the systems that well-known financial credit rating companies use to assess investment risk and creditworthiness, to assess journal or publisher quality. Cognizant of the weaknesses and criticisms of these rating systems, we suggest their use as a new way to view the scholarly nature of a journal or publisher. When used as a tool to supplement, replace, or reinforce current sets of criteria used for whitelists and blacklists, this system may provide a fresh perspective to gain a better understanding of predatory publishing behavior. Our tool does not propose to offer a definitive solution to this problem.
Why does “predatory publishing” pose a risk to academia and society?
Currently, there is no global or industry-wide consensus as to what a predatory journal or publisher is (Grudniewicz et al., 2019). Therefore, the debate regarding predatory journals will continue until academics, policy-makers, ethicists, publishers, funders, and government research agencies reach an agreement. In some cases, the defining predatory characteristics can lead some, like OMICS International, to be taken to court for deceitful behavior (Manley, 2019). Some academics agree that truly predatory entities seek to exploit vanity publishing principles to draw benefits, such as monetary rewards, in extreme cases publishing junk science or pseudoscience, and potentially threatening the integrity of the entire publishing landscape (Frandsen, 2017). Consequently, “predatory publishing” also poses a threat to public opinion and trust in science, especially if publicly funded researchers have been found to be supporting, intellectually or financially, such operations (Eriksson & Helgesson, 2017). If veritable cybercrimes are involved in a publishing operation, the risk becomes a threat (Umlauf & Mochizuki, 2018). The greatest risk of pseudoscience being published in both veritable and “predatory” journals is the risk to health (Harvey & Weinstein, 2017), even more so when such research populates public databases such as PubMed (Manca et al., 2020; Teixeira da Silva, 2021a). Unscholarly journals and publishers might also place a burden on society if tax-payer money is used to pay for subscriptions or open access article processing charges (APCs) of such operations. Finally, early career researchers (ECRs), and established researchers for that matter, who may naïvely or precipitously seek to publish in easy-to-publish venues where quality control is low, or where peer review is non-existent or falsely claimed, risk tarnishing their reputations (McCann & Polacsek, 2018).
Since public funding may be attributed based on the rank (e.g., metrics) and/or indexing of a journal, as is discussed later, a negative stigma or reputation may arise, and thus public distrust in the ability of tax-payer-supported funding agencies to allocate funding to veritable scholarly publishing enterprises (Eriksson & Helgesson, 2017; Eykens et al., 2019). Public anger, discontent and mistrust—all valid emotive responses if the public perceives that their hard-earned taxes are somehow being squandered on unethical researchers or on the financial support of publishing venues of suspect academic quality like “predatory” journals or publishers (Hasan, 2018)—are aspects that academia cannot, and must not, ignore because academia, the public, public funding, and integrity are all intricately interwoven in this day and age (Bisbee et al., 2019). To date, however, to the authors’ knowledge, no paper has formally assessed the public’s interest in “predatory” publishing.
What about financially exploitative publishers that may control global academic and/or publishing markets but do not need to revert to spamming, for example, because they do not have to, in order to extract wealth from academia via subscriptions, APCs, or intellectual property? Do they exhibit some exploitative behavior, but limited predatory behavior, as suggested by some academics (Brembs, 2019)? More recently, Macháček and Srholec (2021) claimed that Scopus was populated by, or included, indexed “predatory” journals as classified according to Jeffrey Beall’s blacklists, drawing a rebuke from Elsevier, which called the methodology flawed and defended its selection of journals for inclusion based on “quality” (Holland et al., 2021). Manca et al. (2018) also reported the existence of potentially predatory journals among PubMed Central® (PMC) and Medline journals, an aspect that, compounded by its indexing of paper mill-derived literature, is beginning to call into question the integrity of screening and management of indexed literature at, and by, PubMed (Teixeira da Silva, 2021a). These cases highlight, however, the real reputational risks associated with the term “predatory” when describing a journal or publisher. They also indicate the continued use of flawed blacklists that fuel hyperbolic language and comparisons, making the phenomenon of “predatory” publishing more difficult to interpret (Kendall, 2021). Where does one draw the line between some/many predatory characteristics and some/many exploitative characteristics (Teixeira da Silva et al., 2019b)? How does one differentiate low- or poor-quality from unscholarly, or from “predatory” publishing (Teixeira da Silva, 2020a)? Our proposal later in this paper aims to narrow that gap by providing a tool that might be able to better differentiate both publishing camps. Despite this, neither our tool nor our proposal claims to fully resolve the problem of “predatory” publishing.
Why does an indistinct zone of predatory publishing exist?
The credibility of a scientific or academic journal tends to be associated with inclusion in publishing whitelists such as those curated by the Directory of Open Access Journals (DOAJ), Cabell’s International (hereafter Cabells), and the ABDC Journal Quality List (JQL), academic indexing databases or platforms such as Web of Science (WoS), Scopus, PMC, and Medline, as well as citation metrics such as Clarivate Analytics’ Impact Factor (IF) and Elsevier/Scopus’ CiteScore (Frandsen, 2017; Siler, 2020a), since all of them have a selection process that consists of inclusion and exclusion criteria. Apart from these lists of general criteria used for selection—and the teams who oversee the selection processes—which are made available online, to the best of our knowledge, none of these entities share or make any details or evaluation reports publicly accessible. The closest are Cabells’ journal evaluation reports, but even those are locked behind a paywall. Consequently, there is a general perception or understanding that journals that are included on these indexing databases or platforms, or that have these citation metrics, have supposedly met a number of “quality” criteria that then qualify them as “legitimate” academic or scholarly (i.e., whitelisted) journals (Siler, 2020a).
Thus, it can be argued that inclusion in a publishing whitelist or academic index, or having one of these metrics, is associated with some measure of “quality”. However, it is notable that the criteria used by each of these entities, which operate independently, can differ considerably, as can their verification procedures, leading to disparities between criteria. For example, both PMC and Medline, which are subsets of PubMed, have different measures of quality control: while PMC focuses on technical criteria (e.g., inclusion of manuscripts to comply with mandates related to funding, format of papers such as XML and PDF, details of XML tags, resolution of images, etc.), Medline focuses on scientific criteria (i.e., quality of published articles, peer-review process, etc.), suggesting that Medline-indexed journals are more reliable than PMC journals (Williamson & Minter, 2019).
However, whitelists, blacklists, and any entity that relies on inclusion or exclusion criteria have inherent flaws, namely subjectivity and the need to adjust and refresh lists and criteria in a constantly evolving publishing landscape, especially as some dishonest players in the process (the so-called predatory entities) fortify their operations and publishing practices to avoid being negatively classified (Cortegiani et al., 2020; Topper et al., 2019).
Though some scholars have articulated a more precise definition for “predatory” journals and publishers (e.g., Aromataris & Stern, 2020), even absent clear identifying characteristics (Grudniewicz et al., 2019), such definitions frequently rely on group consensus (i.e., aggregated subjective judgments), rather than objective, empirical criteria (Siler, 2020b). Consequently, any system (i.e., whitelists, blacklists) employed to demarcate between predatory and non-predatory journals and publishers, by relying on a definition of predatoriness (if using one at all), must ultimately be considered (at least partly) subjective. This is not to say that such “lists” should be considered useless—only that they are imperfect. They are, nonetheless, potentially useful tools for navigating the scholarly publication landscape. The offshoot of all of this is that there can be high variability (and thus low reliability) in the assessments made across metrics and other evaluative tools (including the system proposed in this article) and that such lists, to be properly understood and appraised, must have their underlying criteria made transparent—in line with best practice standards regarding the use of metrics (e.g., Wilsdon et al., 2015). Any tool used to assess predatoriness whose criteria are closed to view and critique is of little real use or value, since such a tool cannot itself be properly vetted.
To compound these issues, lists and criteria may be populated with type I errors (erroneous inclusion) and/or type II errors (erroneous exclusion) (Teixeira da Silva & Tsigaris, 2018; Strinzel et al., 2019; Teixeira da Silva & Tsigaris, 2020a; Tsigaris & Teixeira da Silva, 2021). For these reasons, it is not always easy to differentiate predatory from exploitative (Teixeira da Silva et al., 2019b) or low-quality (Teixeira da Silva, 2020a) publishing, or to classify a journal or publisher as unscholarly, unprofessional, or a host of other often subjectively assigned adjectives (Eriksson & Helgesson, 2018). Consequently, the entries (journals and publishers) in the DOAJ, Cabells’ lists and ABDC’s JQL are in constant change, attempting to adjust to a publishing environment that is itself in constant flux (Teixeira da Silva et al., 2018; Teixeira da Silva, 2020b). This may also cause indirect financial and intellectual damage to universities and government agencies that need to invest resources into the prevention and detection of publications in such journals. Their reputations are also harmed, as is science’s perception by society, when science policy is constantly being adjusted in a futile attempt to make the publishing environment more trustworthy while the actual threat (“predatory” publishing) remains unknown or unclear. The downstream impacts of poorly developed policies, when implemented at a national scale, may hurt many academics by providing poor advice, e.g., in India (Patwardhan et al., 2018). When such variable, fluctuating, and to some extent unreliable (Kratochvíl et al., 2020) lists or criteria are used to classify academics or academic institutions, there is the risk of misclassification (e.g., calling a surgery journal that happens to be blacklisted a “predatory surgery journal”) (Teixeira da Silva, 2021b), which may cause inaccurate and unfair reputational damage through false accusations or unsubstantiated mischaracterizations (Tsigaris & Teixeira da Silva, 2019, 2020). Moreover, such issues and limitations are not limited to open access (Olivarez et al., 2018). The need to remove the erroneous association between open access and predatory behavior, an unfortunate side-effect that has emerged because of the existence of Beall’s blacklists, cannot be over-emphasized (Krawczyk & Kulczycki, 2021).
If then, for argument’s sake, a “prestigious” journal, for example, with an IF and that is indexed (e.g., in WoS), is shown to be or have been a “propagator” of “peer reviewed”Footnote 1 papers derived from a paper mill, such as Portland Press’ Bioscience Reports (Cooper & Han, 2021), how then should academics characterize such a journal? Would fallible peer review, the publication of paper mill-derived papers or an increase in retracted papers indicate characteristics equivalent to low quality, unscholarly work, lack of professionalism, or “predatory” publishing, or some sort of grey zone in between any two or more of these characteristics?
In essence, whether discussing white- or blacklisting criteria, journal, indexing or metrics “quality”, and other aspects that require a quantifiable assessment of the nature or quality of a parameter in academic publishing, there exist degrees or dimensionality whereby a journal may be more or less predatory (or legitimate) in comparison to some standard or criteria (whether ideal or in relative contrast to one another). This results from the desire to exact a greater form of precision, improving an imprecise criterion to a more concrete, definable, or precise one (Justus, 2012). As it currently stands, “predatory” and other related terms might ultimately resemble a disjunctive category (Bowker & Star, 1999; Bruner et al., 1986).Footnote 2 This inadequacy in classifying journals as being ‘predatory’ or ‘legitimate’ is a reflection of the deficient evaluation systems currently in place, not of the elements which are being classified. Using only two check-boxes does injustice to the complexity and the different continua academics and others are faced with when assessing the trustworthiness of a scientific journal or publisher.
It is likely that the existence of these fluxes, inconsistencies, grey zones, and the non-binary nature of the “predatory” state led Grudniewicz et al. (2019) to conclude that the nature of a “predatory” entity is unknown and unclear, despite the proponents of that group indicating precisely the opposite for years. The tool that we propose in this paper may help to capture the multi-dimensionality of imprecise or vague concepts and hopes to reduce, but likely not eliminate, some of the grey zones that exist between a “predatory” (or unscholarly) and a non-“predatory” (or scholarly) entity. Before the current journal evaluation and indexing systems become entirely black-boxed, it is high time that a transparent and more fine-tuned alternative is introduced.
The objective of this study was to establish a credit rating system (CRS)-based method to supplement, replace, or reinforce both whitelists and blacklists, in a bid to try and fortify them, or to try and increase their sensitivity and specificity (Teixeira da Silva & Tsigaris, 2020). Sensitivity refers to the ‘recall’ of the evaluation system, i.e., the fraction of relevant instances that it retrieves. Specificity relates to the ‘precision’ of the system: perfect specificity (or precision) would mean that no false positives are flagged. We do so by moving away from a reductionist perspective and allowing for more detailed assessments which take into account subtleties—or differences—in the degrees to which some journals or publishers might be more trustworthy than others, or not. The system aims to introduce a standardized and fine-grained rating system, allowing for interpretation ‘at a glance’ and comparison between journals without needing to sift through every detail or criterion in order to judge quality, efficiency of operations, levels of error, or the severity of misconduct.
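To make the sensitivity and specificity terminology concrete, the minimal sketch below (our own illustration, not part of the proposed system) computes both quantities for a hypothetical screening tool that flags journals as untrustworthy, compared against an equally hypothetical gold-standard labelling; all journal names and labels are invented.

```python
# Minimal sketch: sensitivity (recall) and specificity of a hypothetical
# journal-screening tool, compared against an assumed gold standard.
def sensitivity_specificity(predicted, truth):
    """predicted / truth: dicts mapping journal name -> True if flagged as untrustworthy."""
    tp = sum(1 for j in truth if truth[j] and predicted.get(j, False))          # correctly flagged
    fn = sum(1 for j in truth if truth[j] and not predicted.get(j, False))      # missed
    tn = sum(1 for j in truth if not truth[j] and not predicted.get(j, False))  # correctly passed
    fp = sum(1 for j in truth if not truth[j] and predicted.get(j, False))      # wrongly flagged
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    return sensitivity, specificity

# Hypothetical data: four journals, two of which are truly untrustworthy.
truth     = {"J1": True, "J2": True, "J3": False, "J4": False}
predicted = {"J1": True, "J2": False, "J3": False, "J4": True}
print(sensitivity_specificity(predicted, truth))  # (0.5, 0.5)
```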
What criteria for predatory publishing have been proposed to date?
Following Beall’s now-defunct online blacklist criteria, which evolved from version 1 in 2012, updated in late 2012, to version 2 in 2015 (summarized by Laine & Winker, 2017), several studies emerged that recommended sets of criteria, including quantitative and semi-quantitative methods, in a bid to differentiate “legitimate” from “predatory” journals and publishers (Teixeira da Silva, 2013; Dadkhah & Bianciardi, 2016; Eriksson & Helgesson, 2017; Laine & Winker, 2017; Shamseer et al., 2017; Cobey et al., 2018; Cabells, 2019; Strinzel et al., 2019). However, the diversity of criteria is notable since they are based on very different approaches, ranging widely from simple to complex (Frandsen, 2019).
Although Beall used 54 criteria that were classified into five broad and unspecific groups (editor and staff; business management; integrity; other; poor journal standards/practice) (summarized by Laine & Winker, 2017), there were multiple issues and limitations: poor sensitivity, selection bias resulting from the inability to take into account the scientific content of individual papers, the lack of specific criteria for individual entries, annulling the reproducibility of the blacklist criteria (Crawford, 2016; Teixeira da Silva, 2017); exclusive focus on open access, making the unrealistic suggestion that “predatory” publishing was limited to open access, which it was not (Berger & Cirasella, 2015; Olivarez et al., 2018).
Departing from the 2015 version 2 of Beall’s criteria, removing 8 unrealistic or invalid criteria, and then rebuilding a new, and the first ever quantitative, concept of “predatory” behavior, Teixeira da Silva (2013) used 16 criteria that were sub-categorized into multiple sub-criteria, each with a relative weighting and rank that could eventually lead to a “predatory score”, which could be used to quantify the behavior of “legitimate” (whitelisted) versus “illegitimate” (blacklisted) entities. Although those criteria need updating, the “predatory score” still remains the most realistic, and fair, quantitative way of characterizing the “behavior”, including potentially “predatory” behavior, of a journal or publisher.
Dadkhah and Bianciardi (2016) used Beall’s criteria to establish a ranking system named the “predatory rate”. They offered 14 criteria grouped into four categories: editorial members’ criteria; review process and publishing; announcements; and open access policies and publication charges.
Laine and Winker (2017) summarized the criteria of whitelists and blacklists, creating a list of criteria to distinguish legitimate journals from “pseudo-journals”, suggesting 14 “warning signs” or features that may increase suspicion that a journal is predatory.
Shamseer et al. (2017) introduced 13 evidence-based characteristics that they thought would be useful for ECRs to distinguish “predatory” from legitimate journals, emphasizing that those criteria were likely insufficient to identify all potentially “predatory” journals. Eriksson and Helgesson (2017) suggested 25 characteristics, as well as additional advice, to attempt to distinguish “predatory” journals.
Cobey et al. (2018) utilized a scoping review method that attempted to review the available literature and encompass observed findings into those criteria, dividing them into six categories: journal operations; article, editorial and peer review; communication; APCs; dissemination; indexing and archiving. However, the resulting 109 criteria that they proposed included many inconsistencies, vague criteria and false-negative criteria (i.e., criteria that can easily be found in currently white-listed journals), a thorough post-publication analysis of which needs to be conducted.
Cabells established a list of predatory criteria (version 1.1) in their “Predatory Reports”, published as a blog post in March 2019 (Cabells, 2019). That list offers 74 criteria divided into three levels: severe (n = 22), moderate (n = 39), and minor (n = 13). Each level contains several categories, including integrity, peer review, publication practices, indexing and metrics, fees, access and copyright, and website.
Another recent list of criteria is that established by Strinzel et al. (2019), published in May 2019 as a journal article. They offered 198 criteria that were extracted from two blacklists, an updated Beall blacklist and Cabells’ Scholarly Analytics’ blacklist, and two whitelists, the DOAJ and Cabells’ Scholarly Analytics’ whitelist. They performed a thematic analysis and categorized the 198 criteria into seven topics: i) peer review; ii) editorial services; iii) policy; iv) business practices; v) publishing, archiving, and access; vi) website; and vii) indexing and metrics.
Any list of criteria has weaknesses and/or failures, poor specificity, or excessive generality (Cukier et al., 2020; Frandsen, 2019). As a result, criteria proposed by the above-mentioned studies may suffer multiple alterations and have many flaws and weaknesses, but their critique and examination lie beyond the scope of this paper. However, it is notable that when these criteria are used to establish blacklists or predatory ranking systems, errors may arise (Teixeira da Silva & Tsigaris, 2018; Strinzel et al., 2019; Teixeira da Silva and Tsigaris, 2020a; Tsigaris & Teixeira da Silva, 2021). Funding agencies or policy advisory entities that do make use of black- or white lists therefore combine different sources to overcome each one’s shortcomings (Eykens et al., 2019; Nelhans & Bodin, 2020).
For this reason, Kratochvíl et al. (2020) proposed to move from a simple set of formal criteria to a complex view of scholarly publishing. After assessing different lists of criteria taken from Beall, COPE, DOAJ, OASPA, and WAME, they revealed that 28% of the journals evaluated had been incorrectly assessed as untrustworthy. The complex view proposed by the authors is a review protocol consisting of three steps. In the first step, formal criteria are checked and evaluated to obtain an initial understanding of the quality of the journal. Secondly, the authors recommend analyzing the journal’s actual content and professional quality. Third and finally, before submitting or labeling a journal as either trustworthy or not, authors are advised to check the journal’s background (e.g., read through review reports, if available, etc.). This somewhat systematic way of assessment, which includes evaluating the actual content of a journal, is however in many cases time-, energy- and resource-consuming, and places a tremendous burden on individual researchers and other assessors alike. It might be unrealistic to expect junior scholars who have a broad range of journals in mind to check the content and professional quality of all target journals.
A new suggested alternative to assess journal quality: a credit rating-like system
In the world of business, economics and finance, a CRS is used to assess investment risk and the creditworthiness of a business (Afonso et al., 2011; Asimakopoulos et al., 2021). Currently, three credit rating agencies (CRAs) control 95% of the credit rating business globally: Moody’s, Standard & Poor’s (S&P), and Fitch Ratings (Asimakopoulos & Asimakopoulos, 2018). CRAs play a role in providing investors with a risk assessment of the debt security of a company or country by assessing their creditworthiness (Hemraj, 2015). However, the European Union (EU) is attempting to increase competition in the CRA market to reduce the influence, impact and control of this oligopolistic market (Meeh-Bunse & Schomaker, 2020). Curiously and conceivably, “predatory” publishers might be perceived by oligopolistic publishers (Larivière et al., 2015) as a threat to their market and multi-billion dollar profits. Thus, academics should be aware of a possible bias by the latter in attempting, through academic literature or otherwise, to eliminate or discredit the former. We recognize the weaknesses of these CRAs and their CRSs (Josephson & Shapiro, 2020; see section “What possible risks exist with CRSs?”). Despite this, we felt that it would be interesting, and that there may be academic merit, in using the basic ranking parameters of these CRSs and applying them to “predatory” publishing, either as a separate but complementary tool, as a supplementary tool, or as an overlay.
Collectively, a financial entity’s risk can be ranked between lowest risk and default, while its grade can range, correspondingly, between investment and junk (Table 1; Ioana, 2014). Analogously, a publishing entity (most commonly a journal or publisher) could be ranked from lowest risk to extreme risk to publish in. In Table 1, we suggest seven levels for trustworthy (recommended to publish in) and untrustworthy (not recommended to publish in) journals. An assessment of those three agencies’ ratings reveals that, apart from the top rank (Aaa or AAA), the next six ranks (between A and C) are divided into three tiers each, with rank and investment potential decreasing as the letters of the alphabet progress; in other words, the relationship between the rating letter and the investment grade is inverse. We take the liberty of defining D as the lowest level at which an author could eventually consider publishing in a journal without incurring serious risk (reputational, and otherwise).
What makes the analogy to the credit rating system particularly attractive is that one cannot claim zero or 100% risk, i.e., although the system has extremes, these rarely reflect absolute extremes, taking into account the inherent risks of type I and II errors. The analogy is realistic because publishing an academic paper involves real investments and costs, whether human, social, research, funding, time, or financial, among many others (Moher et al., 2017), each of which may carry an individual risk, but usually never an extreme one.
To give a real-life, case-based analogy, it would be impossible to claim with confidence, despite the legal challenges to its publishing operations (Manley, 2019), that 100% of all OMICS International (or subsidiary) journals do not conduct peer review, display unscholarly characteristics, or that 100% of papers in those journals carry no academic value. In fact, there may even be some journals that are well managed, and there may even be excellent papers with useful scientific value. Using our CRS analogy, even if the rating/ranking of the entire portfolio (i.e., of the publisher, OMICS International) might be weakened by poorly performing partners (i.e., journals with unscholarly principles, or editors failing to complete peer review), it is impossible to claim that 100% of all journals are unscholarly (CRS-like analogy = junk or < C/D; Table 1), i.e., there may be a mixture of A, B, or C rated journals, maybe even the occasional AAA. Even within any single journal, it is impossible to claim, with confidence, that 100% of all papers are bad, unscholarly, useless, or otherwise inclusive of unethical work (e.g., fabricated data) unless a full, detailed post-publication peer review (PPPR) is conducted for 100% of papers, and in 100% of journals in a publisher’s portfolio. Similarly, on the non-“predatory” side of an analysis, and considering the indexed and metricized journals that are generally considered to be “safe” to publish in, or reputable and scholarly, it would likely be safe to say that many journals of such publishers might have a high rating, using the CRS analogy, maybe with a few journals ranked AAA, many or most between A and B, and possibly a few rare C cases (see practical cases, as in the Scopus- and PubMed-indexed journals discussed earlier). Figure 1 displays a spread of what might be expected in terms of “quality” vs “risk” in the entire publishing landscape, while Fig. 2 displays a distribution of the journals in a hypothetical questionable publisher’s portfolio over the proposed categories. The publisher discussed is a borderline case, with a fairly large share of journals categorized under no to low risk. It becomes evident from this hypothetical case that a fine-grained rating system allows a scholar to make a fairer and more informed decision about the risks faced when choosing to publish in one of its journals.
In the case of large credit unions with a complex portfolio (analogously, a publisher with a large number of journals across many disciplines), a dual (Helleiner & Wang, 2018) or multiple (Ryoo et al., 2020) rating system might be necessary, as in finance. These systems are designed to assess credit on multiple levels. In our analogy, in a dual rating system, this would be applied first at the level of the publisher, and second at the level of its journals.
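As an illustration only (the paper does not prescribe an aggregation rule), the sketch below shows one way a dual rating could be reported: each journal keeps its own rating, and the publisher-level view is simply the distribution of those ratings plus a ‘typical’ (median) rating. The rating scale and the portfolio are hypothetical.

```python
from collections import Counter
from statistics import median

# Assumed, simplified rating scale ordered from best to worst.
RATING_ORDER = ["AAA", "AA", "A", "BBB", "BB", "B", "CCC", "CC", "C", "D"]

def publisher_view(journal_ratings):
    """journal_ratings: dict mapping journal name -> journal-level rating string."""
    distribution = Counter(journal_ratings.values())
    ranks = sorted(RATING_ORDER.index(r) for r in journal_ratings.values())
    typical = RATING_ORDER[int(median(ranks))]  # median rating as a crude publisher-level summary
    return distribution, typical

# Hypothetical portfolio of five journals.
portfolio = {"Journal 1": "A", "Journal 2": "BBB", "Journal 3": "C", "Journal 4": "BB", "Journal 5": "BBB"}
print(publisher_view(portfolio))  # -> distribution of ratings and a 'typical' rating of 'BBB'
```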
In order to appreciate the “quality” of a publisher’s journals, and of a journal’s papers, a thorough end-to-end deconstructive assessment of the publishing operations and of the papers’ quality would be needed, not unlike the methodologies currently being employed to deconstruct the psychology literature in order to appreciate its weaknesses and strengths, with the discovery of weaknesses revealing a replication crisis in this field of study (Wiggins & Chrisopherson, 2019). To the authors’ knowledge, no such PPPR analysis of “quality” has ever been conducted for any discipline at the coarse (across journals) and/or fine (across papers) scales. For this reason, one cannot state with 100% certainty that a journal or publisher is risk free, or completely risky, which is why we believe that our CRS-like system is realistic, and thus viable for practical use and application.
There are several terms used in business and finance that we feel could be applied analogously to academic publishing. CRAs use qualitative factors for assessments, including the quality and stability of management, accountability, and transparency (Grittersová, 2014; Ozturk et al., 2015; Vernazza & Nielsen, 2015). Any or all of these can translate to the assessment of the quality, stability, or reputation of an editorial board, such as the open and transparent declaration of conflicts of interest by editors (Teixeira da Silva et al., 2019a), or of other publishing operations, such as the quality of peer review (e.g., the number of reviewers per paper or the structure and items of a review checklist), the technical quality of papers (e.g., the quality of tables and figures, copy-editing, verification of the validity and style of references), the capacity of a journal’s website and journal management portal (e.g., a user-friendly website, an easy-to-use portal, or platform security), or, more recently, the existence of open data and open science policies inclusive of replicable aspects to ensure transparency (Levin & Leonelli, 2017; Mayernik, 2017).
How does one go about ranking a publishing entity (e.g., journal or publisher), and how is risk assessed and graded in academic publishing? Our CRS-based proposal aims to bring greater clarity to this issue.
What possible risks exist with CRSs?
There are very polar views about CRAs, and we wish to highlight some of the criticisms. We note, as a disclaimer, that although we use the CRSs of select CRAs in the following discussion, we are in no way endorsing them or discouraging their use, and thus remain neutral as to their importance, effects, and influence, both positive and negative, in the world of business, finance and politics. In this paper, our exclusive objective is to use the CRSs as a tool and apply them to academic publishing. That said, we wish to point out several criticisms, weaknesses and flaws of CRAs/CRSs that readers should be aware of. Perhaps, in future versions of this prototype idea, astute readers or other academics who use or apply our newly proposed tool will consider these negative aspects and fortify the concept that we propose.
CRAs, through their strong influence on financial markets, were claimed to have been responsible for the subprime crisis and the reduction of the credit rating of some EU countries, like Greece, to “junk” status (Ioana, 2014). Such decisions not only have massive financial repercussions, they may also have strong political consequences. Criticisms of the CRAs include their lack of transparency (Tichy et al., 2011), delays in issuing risk perceptions (White, 2010), self-interest and a focus on profitability (Lai, 2014) without considering the negative impact on society, such as youth unemployment, the need to hold them accountable (Iglesias-Rodríguez, 2016; Kavas & Kalender, 2014), and the need, but inability, to regulate them so as to eliminate their conflicts (Bayar, 2014; Vousinas, 2015). Regulation may be difficult since global governance is increasingly becoming decentralized (Luckhurst, 2018).
Lin and Dhesi (2010) define “predatory lending” as “consumer welfare loss due to […] abusive practices and loan terms”, and based their eight criteria of such practices on a report by the U.S. General Accounting Office. The eight criteria are, verbatim with minor edits (for example, to punctuation): “1) excessive fees; 2) excessive interest rate; 3) single premium credit insurance; 4) lending without regard to ability to repay; 5) loan flipping (repeated refinancing of a mortgage loan within a short period of time with little or no economic benefit for the borrower); 6) fraud and deception; 7) prepayment penalties; 8) balloon payment (large payments of principal due at the maturity of the loan)”. A comprehensive review of CRAs in the global financial markets is provided by Meier et al. (2021). Curiously, Meier et al. (2021) do not use the term “predatory lending” even once, nor do they allude to cryptocurrencies. Several of these concepts may have analogous situations in academic publishing. For example, the term “junk” used by CRAs may be synonymous with the terms “unscholarly”, “predatory”, “illegitimate”, “fake”, “questionable”, “hijacked”, or “pseudo” currently used in publishing. Table 2 provides a few examples of possible scenarios.
What risks might exist with CRS-like systems if they were to be applied to academia and publishing?
As with any evaluative tool, it runs the risk of being abused or weaponized. This point is well articulated by the San Francisco Declaration on Research Assessment (DORA) (https://sfdora.org/read/). An author, for example, publishing in a low-scoring (i.e., low CRS score) publishing entity (e.g., journal or publisher) may risk having their promotion/tenure evaluations negatively impacted, rather than being judged on the quality of the published content. This hypothetical scenario is supported by at least one study of review, promotion and tenure documents (McKiernan et al., 2019), as well as a survey of faculty publishing priorities and values (Niles et al., 2020). This suggests that evaluators may in fact penalize authors for well-reasoned publishing decisions, while authors may ultimately have their publication behaviors (and general research practices) influenced and shaped in response to an evaluator’s expectations (see generally Brembs et al., 2013; Grimes et al., 2018). Similarly, as is commonly expressed (Simons, 2008), grant reviewers might not find it reassuring if an applicant’s work is published primarily in low-scoring outlets (or consider it a “good investment” of their money/funding), and thus may evaluate such proposals more negatively. Ultimately, a CRS might be developed by private or commercial institutions, or it might end up in the ‘wrong hands’, leading to the same problems faced by CRAs during the financial crisis. Understanding these risks, authors may merely choose to adhere to the “status quo”, publishing in outlets and venues that are valued by members of their scientific community, regardless of any discrepancies between the judgments of those communities and the inherent value of such journals.
How could the CRS-like system be used for academic publishing?
The CRS-like system would find application, either alone, or in conjunction with a currently existing set of published criteria. In this paper, our objective is not to discuss the merits or demerits of those criteria, which deserves a separate and in-depth analysis, but rather to suggest a way to improve them and to make them more robust, or to prove that they are no longer needed, reliable, or applicable. In order to do this, our proposal for the use of this CRS-like system is as follows:
1. The CRS-like system can be used alone, i.e., without any association with any existing whitelist or blacklist, to serve as an independent system, thereby disassociating it, ideologically or reputationally, from any group, entity, or “list”.
2. The CRS-like system can be used as an overlay. For example, and independent of the stringency of their criteria, it could serve to overlay the “13 evidence-based characteristics” suggested by Shamseer et al. (2017) or the 198 criteria suggested by Strinzel et al. (2019). More specifically, using any set of criteria, a cumulative score or weighting can be calculated, as in the “predatory score” (Teixeira da Silva, 2013), over which our CRS-like system is then laid. It would be helpful to have criteria with a relative weighting, as used in the “predatory score”, for example, on a scale of 1–10, where 1 is least predatory (or likely poor quality, scholarship, or management) and 10 is highly predatory, but where each rank is evidence-based and can be measured with tangible supportive evidence. For example, unless the peer review reports of all of a journal’s papers are available, the claim that a journal is, or is not, peer reviewed cannot be made. An example of this overlay could be a journal that violates n criteria, according to Teixeira da Silva (2013), Shamseer et al. (2017) or Strinzel et al. (2019), and receives a score, e.g., 116 (in this case, 116 is a number that reflects a hypothetical cumulative total of scores ranging from 1–10 for multiple criteria). That number, within a range, could then be overlaid with our CRS-like system to receive a classification (Table 3); a minimal code sketch of such an overlay appears after this list. Admittedly, an overlay is also an additional layer of complexity, and thus a structural investment.
3. It could be used at different scales. For example, at a narrow scale, if there are a total of 50 journals (in English) in a narrow field of study across all publishers, then the CRS-like system can be applied to this small “pocket” of journals, for example in a specialized research field, allowing for a ranking system that could operate independently of metrics such as the IF and CiteScore, or of indexing status on WoS, Scopus, PMC, and Medline. At a larger scale, it could be used to assess medical journals in subgroups, such as in PMC or Medline. The CRS-like system could even be used in a culturally, linguistically and/or geographically independent manner, for example, by India’s University Grants Commission to assess Hindi science journals relative to other journals that are struggling to be appropriately classified (Patwardhan et al., 2018), or to assess “published in Canada” journals across the 13 provinces and territories. The fine- or coarse-scale applications are endless.
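The following minimal sketch illustrates the overlay described in point 2. It is our own illustration, not an implementation of any published list: the violated criteria, their 1–10 weightings, and the score-to-rating bands (which Table 3 would define) are all hypothetical.

```python
# Hypothetical criteria weights on the 1-10 scale described above (1 = least, 10 = most predatory).
CRITERIA_WEIGHTS = {
    "fictitious editorial board": 9,
    "hidden fees charged after submission": 8,
    "no stated peer review policy": 5,
    "promise of peer review in under 4 weeks": 3,
}

# Assumed score-to-rating bands; the real mapping would come from Table 3.
HYPOTHETICAL_BANDS = [(10, "A"), (20, "BBB"), (40, "BB"), (80, "B"), (float("inf"), "C")]

def crs_rating(violated_criteria):
    """Sum the weights of violated criteria and map the cumulative score to a CRS-like rating."""
    score = sum(CRITERIA_WEIGHTS[c] for c in violated_criteria)
    rating = next(r for upper_bound, r in HYPOTHETICAL_BANDS if score <= upper_bound)
    return score, rating

print(crs_rating(["no stated peer review policy", "promise of peer review in under 4 weeks"]))  # (8, 'A')
print(crs_rating(list(CRITERIA_WEIGHTS)))  # (25, 'BB') for a journal violating all four criteria
```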
Examples of the application of the CRS-like system for academic publishing
In this section, we propose several examples of how our CRS-like system could be used individually, as a supplement to existing whitelist and blacklist criteria, or as an overlay based on, but not necessarily relying on, the criteria in papers discussed in the section “What criteria for predatory publishing have been proposed to date?” We have limited our examples to a few criteria that can be openly and publicly verified. In general, a positive characteristic receives a positive score (+), while a negative characteristic receives a negative score (−), using the basic principles of the “predatory score” (Teixeira da Silva, 2013). We apply the term CRS Score in this prototype paper, overlaying Cabells’ non-quantifiable (“severe”, “moderate” and “minor”) criteria. Our score for “severe” criteria was − 1, for “moderate” − 0.5, and for “minor” − 0.1. We exemplify these scores with four examples next.
Example 1: “No editor or editorial board listed on the journal’s website at all” or “Editors do not actually exist or are deceased”. Cabells classifies these criteria as “severe” and “negative”, which means we must score them as − 1. Editors and editorial board members play key roles in scientific journals, and the scientific status of a journal is often evaluated based on the experience and scientific status of its editors. Therefore, the inability to find information about editors and editorial board members on a website, or the inclusion of fictitious editorial members, is clearly a negative aspect.
Example 2: “The journal does not indicate that there are any fees associated with publication, review, submission, etc. but the author is charged a fee after submitting a manuscript”. We should score this as − 1 since it is considered “severe” by Cabells. Any journal should clearly indicate its publication charges, and in the case of an open access journal, its APCs. Even a platinum open access journal that charges no APCs should indicate this clearly. The lack of such information could be perceived as deceptive.
Example 3: “The journal’s website does not have a clearly stated peer review policy”. The score should be − 0.5 since this is considered a “moderate” criterion by Cabells. A legitimate journal should clearly indicate the peer review process, and its timelines, in the instructions for authors.
Example 4: “Information received from the journal does not match the journal’s website”. As this is considered “severe” by Cabells, we should assign a score of − 1. Misleading, incorrect or missing information reduces trust in a journal or publisher, and the information of a legitimate journal, including its indexing in databases and repositories, should be clearly stated on its website.
We point readers to Cabells’ (2019) criteria for other examples of “severe”, “moderate” and “minor” criteria. However, we feel that many of Cabells’ criteria are loose, unspecific, could be applied too widely, or could be too liberally interpreted, and are thus weak, potentially flawed and unreliable, as already indicated by Dony et al. (2020), suggesting that an intense post-publication analysis of Cabells’ criteria is merited. We point out two cases:
Case 1
“The publisher displays prominent statements that promise rapid publication and/or unusually quick peer review (less than 4 weeks)”. This is a “moderate” criterion according to Cabells, so we should assign a score of − 0.5. A rapid review process may be a positive aspect of a scientific journal, but excessively fast peer review (less than 4 weeks) may be unrealistic and a sign that no peer review has taken place. Given that real and proper peer review will affect the quality of a paper, but also given that peer review cannot be verified (even in “legitimate” journals that claim to conduct it) while some journals or publishers offer an exceptionally rapid, but generally thorough, peer review turn-around time (e.g., MDPI; Petrou, 2020), what CRS Score could be assigned to such a controversial criterion?
Case 2
Cabells states, as a “severe” criterion, “The journal uses misleading metrics (i.e., metrics with the words “impact factor” that are not the Clarivate Analytics Impact Factor).” The IF can be independently and publicly verified. Given the importance placed on metrics by many academics, institutions and research funders, we provide a more elaborate explanation. There are two distinct issues. On the one hand, to falsely claim to have a Clarivate Analytics IF is indeed blatantly misleading because such a journal is clearly acting in bad faith by using deliberate deception to attract clients (authors) who would contribute intellect and money (APCs), and this would merit a CRS score of − 1. However, a misleading metric is not necessarily a false claim of having a metric, and some metrics are not necessarily “false”. Moreover, the words “impact factor” are not trademarked or copyrighted, so they are free to be used by any member of the public anywhere on the globe. The emotive issue that often arises is whether the term “impact factor” is used in the name of a similarly named metric in a deceitful manner, to give potential authors the false impression that the similarly sounding “impact factor” is in fact the IF assigned by Clarivate Analytics. If a valid method of calculating a stated metric is indicated, even if it sounds similar to “impact factor”, and even if the organization that assigns such a new metric is not Clarivate Analytics, then there is nothing deceptive or invalid about this metric, and the “severe” criterion, as assigned by Cabells, is a false negative.
Whether used individually or as a supplementary tool or overlay, the “total” CRS Score of both positive and negative aspects would ultimately give a “risk” rank or rating (Table 3), as exemplified with two real-case examples next.
Application of the CRS-like system to two real case studies
Here we discuss two real-case examples in which we have applied the CRS to two existing journals which are blacklisted by Cabells in their Predatory Reports (accessed February 6, 2021). Journal B (A Free Lance, Amit Educational and Social Welfare Society) has eight violations and journal A (Business and Management Studies, Redfame Publishing) has three violations. According to these reports, it could be argued that the violations listed for journal A clearly warrant blacklisting. According to Cabells, the publisher of journal A hides or obscures relationships with for-profit partner companies and the website does not have a clearly stated peer review policy. The publisher displays prominent statements that promise rapid publication and/or unusually quick peer review (less than 4 weeks).
While one could agree that this behavior is questionable in one way or another, the more positive characteristics or behaviors of the journal’s editorial board or publisher have not been listed. Based on this violation report alone, it is thus not straightforward to make any informed decision. Here we demonstrate how the scores for these violations could be assigned and what the final CRS score would be once the more positive characteristics are also weighted.
For Journal A:
Positive characteristics (+ 8.6):

- ISSN registered (‘positive’ and ‘minor’ according to Cabells): + 0.1
- APC costs clearly stated (‘positive’ and ‘moderate’ according to Cabells): + 0.5
- Clear author guide online (‘positive’ and ‘moderate’ according to Cabells): + 0.5
- Ethical principles listed (‘positive’ and ‘severe’ according to Cabells): + 1
- Reviewer guidelines publicly available (‘positive’ and ‘severe’ according to Cabells): + 1
- DOIs at the article level (‘positive’ and ‘moderate’ according to Cabells): + 0.5
- Articles (open access) available online (‘positive’ and ‘severe’ according to Cabells): + 1
- Full archive online (‘positive’ and ‘severe’ according to Cabells): + 1
- Editorial board members state membership in CV (‘positive’ and ‘severe’ according to Cabells): + 1
- Publication policies are clearly stated online (‘positive’ and ‘severe’ according to Cabells): + 1
- Refund policy clearly stated online (‘positive’ and ‘severe’ according to Cabells): + 1

Negative characteristics (− 1.1):

- Publisher hides or obscures relationships with for-profit partner companies (‘negative’ and ‘minor’ according to Cabells): − 0.1
- No clearly stated peer review policy (‘negative’ and ‘moderate’ according to Cabells): − 0.5
- Prominent statements that promise rapid publication (‘negative’ and ‘moderate’ according to Cabells): − 0.5
The hypothetical CRS score for journal A would be 7.5, ranking it between low risk and unscholarly. The rating would be BBB-. The risk associated with publishing is medium and the investment capacity adequate. If this journal were indexed in renowned sources, such as the DOAJ (checked against DOAJ version of August 1, 2020), for example, this could increase its trustworthiness. Indexation in WoS offers additional ‘return on investment’ for researchers and would thus also increase the CRS score. As a matter of fact, the blacklisting by Cabells seems to be rather inadequate in this case.
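For transparency, the minimal sketch below (our own illustration) reproduces the Journal A arithmetic: Cabells-style severity levels are mapped to the weights used in this paper (severe = 1, moderate = 0.5, minor = 0.1), positive characteristics add to the score, negative ones subtract, and the total of 7.5 is what Table 3 would then translate into a rating such as BBB-.

```python
# Severity-to-weight mapping used in this paper's worked examples.
SEVERITY_WEIGHT = {"severe": 1.0, "moderate": 0.5, "minor": 0.1}

def crs_score(characteristics):
    """characteristics: list of (description, severity, sign), where sign is +1 (positive) or -1 (negative)."""
    return round(sum(sign * SEVERITY_WEIGHT[severity] for _, severity, sign in characteristics), 1)

journal_a = [
    ("ISSN registered", "minor", +1),
    ("APC costs clearly stated", "moderate", +1),
    ("Clear author guide online", "moderate", +1),
    ("Ethical principles listed", "severe", +1),
    ("Reviewer guidelines publicly available", "severe", +1),
    ("DOIs at the article level", "moderate", +1),
    ("Articles (open access) available online", "severe", +1),
    ("Full archive online", "severe", +1),
    ("Editorial board members state membership in CV", "severe", +1),
    ("Publication policies clearly stated online", "severe", +1),
    ("Refund policy clearly stated online", "severe", +1),
    ("Hides relationships with for-profit partners", "minor", -1),
    ("No clearly stated peer review policy", "moderate", -1),
    ("Promises of rapid publication", "moderate", -1),
]

print(crs_score(journal_a))  # 7.5
```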
Journal B has been flagged by Cabells for eight violations. In this case, the negative characteristics clearly outweigh the positive ones. There are strong indications that the journal operates in an unscholarly manner. The CRS score for the positive characteristics is + 2.1 and that for the negative ones is − 6.5, bringing the total CRS score to − 4.4.
Positive characteristics (+ 2.1):

- ISSN registered (‘positive’ and ‘minor’ according to Cabells): + 0.1
- Ethical principles listed (‘positive’ and ‘severe’ according to Cabells): + 1
- Reviewer guidelines publicly available (‘positive’ and ‘moderate’ according to Cabells): + 0.5
- A part of the articles (open access) available online (‘positive’ and ‘moderate’ according to Cabells): + 0.5

Negative characteristics (− 6.5):

- Publishing company employees are members of the editorial board (‘negative’ and ‘moderate’ according to Cabells): − 0.5
- Archive only partially online (‘negative’ and ‘moderate’ according to Cabells): − 0.5
- The journal uses misleading metrics (‘negative’ and ‘severe’ according to Cabells): − 1
- No refund policy stated (‘negative’ and ‘moderate’ according to Cabells): − 0.5
- No APCs stated (‘negative’ and ‘severe’ according to Cabells): − 1
- No DOIs (‘negative’ and ‘moderate’ according to Cabells): − 0.5
- The name of the publisher suggests that it is a society, academy, etc., when it is only a solitary proprietary operation and does not meet the definition of the term used or the implied non-profit mission (‘negative’ and ‘severe’ according to Cabells): − 1
- The name of the publisher suggests that it is a society, academy, etc., when it is only a publisher and offers no real benefits to members (‘negative’ and ‘severe’ according to Cabells): − 1
- No policies for digital preservation (‘negative’ and ‘moderate’ according to Cabells): − 0.5
Some of the violations listed by Cabells do not seem to hold true anymore. A part of the journal’s internet archive has been posted online; it does not seem to be complete, but it is accessible. Editorial boards are in fact listed. We could not verify whether the owner is a member of all of the editorial boards (of all of the publisher’s journals), which Cabells lists as a severe violation, simply because we were not able to identify the owner. Meanwhile, the peer review guidelines have been published. The issue of the validity of Cabells’ criteria, and of the use of the terms/categories “minor”, “moderate” or “severe”, will not be discussed in detail in this paper. The two examples listed above serve simply as a practical application of the CRS score to real-life case studies.
Strengths of this proposal
The CRS-like system we propose has strengths, the most obvious being:
1. It is easy to interpret, including for junior researchers and ECRs who may be new to the academic publishing landscape. By relying on one aspect of CRSs, without being excessively drawn into the mathematical and modeling aspects that often limit their use and interpretation to economists and finance-related researchers, it allows more academics to apply the system to their own research (individual self-appraisal level). Although widely disputed in terms of its relevance, many authors today include the Clarivate Analytics IF as an indicator of journal quality in their CVs; the inclusion of a CRS rating would do more justice to the multidimensionality of a journal’s function (Wouters et al., 2019). Universities and research centers, including their librarians, can apply the system to assess the appropriateness of publishing venues and offer advice to their scholars, researchers, faculty members and other staff regarding risk; funders can apply the selection criteria to the researchers or institutions that they fund to grade them; indexing agencies and platforms like WoS, Scopus, PMC, and Medline can use this system to grade/rate their indexed journals and publishers; Google Scholar, which currently includes an indiscriminate mixture of scholarly and unscholarly information, including papers published in “predatory” venues, can use this as a sieving mechanism to differentiate valid scholarly venues from potentially risky or unscholarly ones. This strength complements other attempts to appraise the scientific literature, such as scite (https://scite.ai/), which attempts to assess the extent to which articles and other contributions to the scientific literature have been substantiated or refuted.
2. Our system aids users with risk assessment without necessarily implying any association with quality. Stated differently, while the CRS-like system can be used to aid in the assessment of journal and publisher quality, in principle, it may also serve as a tool for researchers to assess risk without making any explicit assumptions about quality. That choice (i.e., how the system should be used) should not be forcefully imposed on individual academics—who necessarily have to make independent decisions about where to publish their work. Rather, this tool expands publication choices by offering concrete and tangible guidance, or even a crude form of “protection”, unlike the “Think. Check. Submit.” (https://thinkchecksubmit.org/) marketing campaign, which superficially sounds attractive and useful, but in fact provides mostly obvious and self-evident statements and serves more as a status and branding symbol than for any practical use. Our tool does not intend to limit academic freedoms, thereby empowering younger researchers, ECRs and even experienced researchers to find reputable publishing venues using their own independent abilities.
3. Currently, one of the most prominent market-based tools or systems that provides practical “advice” is that offered by Cabells. However, their lists/reports, which are essentially whitelists and blacklists (Teixeira da Silva, 2020b) despite their euphemistic rebranding campaign (Bisaccio, 2020), are not free to access, and may have errors since members of the public cannot clearly see (due to restricted access) their quality criteria; as a result, they may be less useful and reliable (Dony et al., 2020). On the other hand, the ABDC JQL criteria are free to access (ABDC, 2018). Koerber et al. (2020) referred to blacklists and whitelists as watchlists and safelists, respectively. Our tool is not a whitelist or a blacklist. It is free, and perhaps over time (as a potential future project) it can be converted into a free and open online tool where academics can search for specific journals or publishers to appreciate their risk rating/grading, in order to make an additionally informed decision (i.e., in addition to, and supplementing, other tools and parameters) regarding whether to select a journal to submit a paper to. Being a not-for-profit tool does raise some questions about sustainability. However, the recent actions of scientists in the biomedical and social sciences suggest that altruistic behavior is not merely an unrealistic ideal.Footnote 3 Scientists, particularly during the current COVID-19 pandemic, have demonstrated a willingness to work together on pressing global issues (Kupferschmidt, 2020).Footnote 4 Nor is it the case that innovative projects, such as the one proposed in this manuscript, are unfundable—as startups like scite (discussed above) exemplify.Footnote 5
4. It eliminates the term “predatory” altogether and replaces it with a term that offers protection to the prospective author rather than a reputationally damaging label for the entity (e.g., journal or publisher). This may reduce legal liability, potentially false accusations, and the chances of being accused of libel. For example, a journal that is indicated as being “high risk”, with clearly and publicly indicated criteria and the violations of those criteria, would likely serve as a more useful alert system for academics than a label of “predatory” without specific criteria, as occurred with Beall’s blacklists.
5.
Very importantly, should a blacklist exist with an overlay of our proposed system, three conditions would make it useful, fair, and transparent: (i) the criteria must be clearly specified and graded and, if there are upgraded versions, the date of validity of each version and the date of each upgrade must be recorded, so that readers can associate specific classifications of journals or publishers with specific sets and versions of criteria; (ii) the criteria must be open to the public, similar to those of Cabells (Teixeira da Silva, 2020b), for maximum transparency, and the entities that manage such lists and their criteria should be clearly indicated, unlike the currently "upgraded" or "resurrected" Beall blacklists that are under anonymous management, for maximum accountability; (iii) there must be a fair and open challenge system, managed by an unbiased body and akin to public open peer reports, that allows journals to improve on weak criteria and upgrade their CRS rank (a minimal illustrative sketch of such a record follows this list). With this CRS-like overlay system in place, even if a journal is classified as "junk" according to a set of criteria on a specific date, a pro-active journal management that cares about quality and scholarly values would have the opportunity to "upgrade" its rank at any time. Ultimately, a higher rank will translate into greater confidence, trust and acceptance, which in turn will translate into more submissions and a more wholesome business and/or publishing model. Conversely, a journal that has displayed specific worrisome unscholarly criteria will show a lower rank.
6.
It serves as a self-reliant "alert" system that academics can use to screen journals. For example, if a journal is considered "high risk", authors would know that submitting to that journal could affect their reputations, whether they are ECRs or established career scientists. This could have downstream application value, more broadly, in fields such as meta-research, the sociology of science, or scientometrics. Furthermore, as the publication landscape evolves, for example with the possible dissolution and transcendence of the scholarly journal (Herman et al., 2020), the CRS-like system may retain its "alert" function even as current journal-level metrics (e.g., the IF) lose meaning and relevance.
7.
It could reduce conflicts of interest and personal and professional bias in evaluating the quality and ranking of academic journals (e.g., commercial operations, transparency of editorial boards and operations, error and fraud detection in papers, etc.), since all criteria, procedures, and policies are clear, visible, transparent and publicly verifiable, in line with best practices and recommendations regarding various metrics (Wilsdon et al., 2015). To some extent, the system is therefore insulated from the investigators' ideologies.
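To make the versioning and challenge conditions in point 5 more concrete, the following is a minimal sketch, in Python, of how a single rating entry might record the criteria version applied, the rating date, and an open challenge log. All class and field names, and the example values, are hypothetical illustrations added here; they are not part of the proposal's specification or of Tables 1-3.

from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class CriteriaVersion:
    """A publicly posted, dated version of the grading criteria (conditions i-ii)."""
    version: str          # e.g., "1.0"
    valid_from: date      # date this version of the criteria took effect
    criteria: List[str]   # the clearly specified, graded criteria

@dataclass
class JournalRating:
    """One rating of a journal, tied to the criteria version used to produce it."""
    journal: str
    rating: str                     # coarse band, e.g., "A" to "D"
    score: int                      # finer score on the positive/negative scale
    rated_on: date                  # when the rating was assigned
    criteria_version: str           # which CriteriaVersion was applied
    challenges: List[str] = field(default_factory=list)  # open challenge log (condition iii)

    def record_challenge(self, on: date, outcome: str,
                         new_rating: str, new_score: int) -> None:
        """Log an openly adjudicated challenge and apply the resulting re-rating."""
        self.challenges.append(f"{on.isoformat()}: {outcome}")
        self.rating, self.score, self.rated_on = new_rating, new_score, on

In this sketch, a journal rated "C" under criteria version "1.0" could later log a successful challenge and move to "B", with the entire history remaining publicly inspectable; the point is only that rating, criteria version, and challenge outcomes are recorded together.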
Possible limitations of this proposal
1.
This framework might be used to create lists that suffer from the same problems highlighted above, including false positives, and thus lists with a high false discovery rate. In turn, this could lead to the creation of a hybrid "universal" list that is neither white, nor black, nor a determined shade of grey, but a combination of all of these, and more, depending on the scale and definition discussed below. The existence of several risk categories (Table 1) would then reduce the pre-study probability that a journal is predatory, which may be seen as a positive aspect, but a false positive might still arise post-study when investigating which of the numerous categories a journal should be assigned to. See Tsigaris and Teixeira da Silva (2021) for a more detailed discussion of this limitation within the context of blacklists. Ultimately, the CRS-like tool may best be used, in addition to other tools and criteria, to appraise a journal or publisher's quality, in a form of methodological triangulation (see generally, Heesen et al., 2019). This approach would help to reduce the risk of inaccurate appraisals (e.g., describing a journal as "high risk" when it is in fact "low risk").
2.
Using currently existing blacklists to establish new or "updated" blacklists is extremely risky because it rests on the incorrect assumption that the "source" blacklists are correct or accurate. They are not, as has been briefly discussed earlier in this paper and elsewhere. Thus, those wishing to establish blacklists, despite the risks and warnings, should scrap all current blacklists and establish a new, unbiased, transparent list (i.e., with curators' identities transparently indicated) that is curated by a reputable organization, not by a single individual, or preferably by a consortium of higher education authorities (e.g., universities, libraries, publishing organizations, and ethics organizations), so as to offer a global, unbiased service that serves all academics and not just select interest groups or commercial interests. Doing so might reduce bias and subjectivity, using a 'red team' approach "that integrates criticism into each step of the […] process" (Lakens, 2020). However, such a control mechanism and centralization of "control" over how the tool is used and regulated might itself introduce the issues of bias, power, funding and politicization that frequently characterize tools that start as open source but are later absorbed by a for-profit entity. One solution may be to publish this tool openly, transparently articulating its criteria and inputs, in an open-source-friendly repository (e.g., GitHub or GitLab), and to release all intellectual content under a non-commercial, attribution-based license.
3.
The possibility that blacklists and the CRS-like tool could be abused by individuals or groups wishing to defame or otherwise attack competitors, a threat that exists for any such tool.
4.
Like any tool, there is the issue of scale and definition. We have limited this new system to an A-D-based rating (Table 1) and a "score" between 1 and 50 (or more), with a positive and a negative scale (Table 3); a minimal illustrative mapping from score to band is sketched after this list. However, we can easily envision subsequent derivatives of our CRS-like system with finer-scaled definition, such as a wider alphabetized system (A-Z, AA-ZZ, etc.), a fine-grained numbering system (to different decimals, e.g., 1.1, 2.65, etc.), or a combination of both (e.g., Aa 21.45, etc.). In that sense, our proposal is a prototype, subject to change and improvement.
5.
Although this is a prototype, the issue of "frequency" needs to be considered: how frequently should it be updated, and by whom? Both of these limitations could be overcome by maintaining the tool as open source, i.e., the evaluation of a journal or publisher can be made at any time, provided the criteria are known and recorded at that moment in time. Thus, a rating can be assigned at any time or frequency. The big problem associated with this is verification and reproducibility, not unlike science itself: can a separate individual or group go back in time and reproduce the rating using the available evidence? Management and curation of this tool will be a challenge.
6.
The risk of being gamed, abused or falsified, as has occurred with metrics like the IF and CiteScore (Teixeira da Silva, 2021c). Editors or publishers of fraudulent journals might, for example, plagiarize well-written copyright policies or reviewing guidelines to mimic good behavior. They might profile themselves as 'international' as a trope while not living up to this ideal (de Rijcke & Stöckelová, 2020). In addition, the hijacking (or copying of titles) of established journals, URLs, or reports associated with legitimate journals and/or publishers is an additional threat (Kolahi & Khazaei, 2015). These risks, however, apply to every evaluative system. They warrant caution on the part of evaluators as well as researchers who make use of journal- or publisher-based ranking tools.
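As a purely illustrative companion to limitation 4 above, the following minimal Python sketch shows how a numeric score on a positive/negative scale might be collapsed into a coarse A-D band. The cut-off values are arbitrary assumptions made for illustration; they are not the thresholds of Tables 1 or 3, and any real implementation would also record the criteria version and rating date, as discussed under limitation 5 and the earlier sketch.

def band_for_score(score: float) -> str:
    """Collapse a CRS-like score into a coarse A-D band.

    The A-D bands and the 1-50 (or more) positive/negative score range follow
    the prototype described in the text; the cut-offs below are illustrative
    assumptions only, not the values proposed in Tables 1 and 3.
    """
    if score < 0:        # negative scale: documented worrisome, unscholarly behavior
        return "D"
    if score >= 40:
        return "A"
    if score >= 25:
        return "B"
    if score >= 10:
        return "C"
    return "D"


# A finer-grained derivative (limitation 4) could report the raw score alongside
# the band, e.g. ("B", 27.5), without changing the mapping itself.
print(band_for_score(42))   # -> A
print(band_for_score(-3))   # -> D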
Discussion and conclusion
This paper is a blueprint. It is not a definitive concept and will likely be altered over time, either by us or by other groups that adopt our ideas. We emphasize that the CRS-like system that we propose should not be used in any discriminatory or abusive manner, nor as a way to delegitimize academics' work, to slander them, or otherwise label them in a discriminatory or defamatory manner. The system is designed to evaluate the risk associated with publishing in a specific journal or with a certain publisher; it does not provide any insight into the quality of the research published in, or with, them. The purpose of this system is to fortify currently existing criteria-based systems used in "predatory" publishing to appreciate the scholarly nature of a journal or publisher. We propose that the CRS-like system be used on its own, or as an overlay. Journals and publishers could use our tool as a practical way to improve their own publishing quality, always aiming higher and better. Ultimately, it may become one such tool among many CRS-like systems, not unlike those found in the financial world. Doing so would provide a rating system that signals to academics that publishing in a venue with a strong CRS-like rank carries limited risk, whereas submitting to a journal with a low rank carries risk (ethical, reputational, etc.). In theory, there are endless useful applications, although we are cognizant of some weaknesses that could be fortified over time.
Does a journal's recent resolution to be more rigorous or transparent, or to stop engaging in some predatory behavior, offset prior failures in these respects, and if so, to what degree? For example, how would a journal previously described as a "high risk" venue be evaluated if, in response to such an evaluation (or other external pressures), it changed its publishing practices and behaviors (e.g., implementing more rigorous peer review or doing away with exorbitant APCs)? This is not so much a challenge for the tool itself to resolve as a philosophical issue for the academic community to reflect on now that it has this new tool, similar to other tools and criteria that already exist. While ratings using our CRS-like tool are not meant to be stagnant or limited to a specific moment in time or time period (e.g., 3 years), the issues of real-time evaluation, reproducibility, and the other concerns discussed above would need to be debated further so that the tool is not converted into an accusatory weapon or a mechanism to attack or defame critics, competitors or foes.
As for any new or existing rating or scoring system, there are considerable challenges in dealing with a non-binary set of complexities. Not only can many aspects of the publishing process lapse in terms of their "quality"; such aspects are also in a constant state of change, and what is perceived as safe or stable today might be unsafe or unstable tomorrow. Tools such as the one we propose in this paper face real practical difficulties in dealing with change because they are not real-time tools; rather, they are reactive tools that are modified and adjusted as new risks emerge, and updated only as frequently as the humans entrusted with this endeavor have the time, energy and resources to do so. It is likely for this reason, i.e., that few have the courage and appetite to create and sustain such a system, that Cabells currently holds a quasi-monopoly on the "quality control" of the "predatory" publishing "market", an issue that needs urgent scrutiny and debate.
Notes
The term “peer reviewed” is added in inverted commas because, unless it is open, there is no independent way to verify that a paper was peer reviewed at all.
"What is peculiarly difficult about attaining a disjunctive category is that two of its members, each uniform in terms of an ultimate criterion [e.g. predatory behavior], may have no defining attributes in common. […] For in a disjunctive class, there are no such universal common features." (Chapter 6, On Disjunctive Concepts and Their Attainment), pp. 156–157.
https://www.psychologicalscience.org/observer/the-cooperative-revolution-is-making-psychological-science-better (November 30, 2018; last accessed: June 4, 2021).
https://www.nytimes.com/2020/04/01/world/europe/coronavirus-science-research-cooperation.html (April 1, 2020; last accessed: June 4, 2021).
https://medium.com/scite/scite-awarded-1-5-million-fast-track-sbir-grant-from-the-national-institutes-of-health-d1ac6e4ecfde (May 14, 2020; last accessed: June 4, 2021).
References
ABDC (Australian Business Deans Council). (2018). 2018 Journal Quality List Methodology Review. https://abdc.edu.au/research/abdc-journal-quality-list/2018-journal-quality-list-methodology-review/ (last accessed: June 4, 2021).
Afonso, A., Gomes, P., & Rother, P. (2011). Short-and long-run determinants of sovereign debt credit ratings. International Journal of Finance and Economics, 16(1), 1–15. https://doi.org/10.1002/ijfe.416
Aromataris, E., & Stern, C. (2020). Supporting a definition of predatory publishing. BMC Medicine, 18, 125. https://doi.org/10.1186/s12916-020-01599-6
Asimakopoulos, P., & Asimakopoulos, S. (2018). A tale of two tails: Cross credit ratings and cash holdings. SSRN Preprint (not Peer Reviewed). https://doi.org/10.2139/ssrn.3291498
Asimakopoulos, P., Asimakopoulos, S., & Zhang, A. (2021). Dividend smoothing and credit rating changes. The European Journal of Finance, 27(1–2), 62–85. https://doi.org/10.1080/1351847X.2020.1739101
Bayar, Y. (2014). Recent financial crises and regulations on the credit rating agencies. Research in World Economy, 5(1), 49–58. https://doi.org/10.5430/rwe.v5n1p49
Berger, M., & Cirasella, J. (2015). Beyond Beall’s list: Better understanding predatory publishers. College & Research Libraries, 76(3), 132–135. https://crln.acrl.org/index.php/crlnews/article/view/9277/10342.
Bisaccio, M. (2020). Announcement regarding brand-wide language changes, effective immediately. https://blog.cabells.com/2020/06/08/announcement/ (June 8, 2020; last accessed: June 4, 2021).
Bisbee, J., Hollyer, J., Rosendorff, B., & Vreeland, J. (2019). The millennium development goals and education: Accountability and substitution in global assessment. International Organization, 73(3), 547–578. https://doi.org/10.1017/S0020818319000109
Bowker, G. C., & Star, S. L. (1999). Sorting things out. Classification and its consequences. The MIT Press, Cambridge, MA, USA, 392 pp. ISBN: 9780262024617.
Brembs, B. (2019). Elsevier now officially a “predatory” publisher. http://bjoern.brembs.net/2019/12/elsevier-now-officially-a-predatory-publisher/ (December 11, 2019; last accessed: June 4, 2021).
Brembs, B., Button, K., & Munafò, M. (2013). Deep impact: Unintended consequences of journal rank. Frontiers in Human Neuroscience, 24, 291. https://doi.org/10.3389/fnhum.2013.00291
Bruner, J. S., Goodnow, J. J., & Austin, G. A. (1986). A study of thinking. Transaction Publishers, New Brunswick, USA. https://archive.org/details/in.ernet.dli.2015.139127.
Cabells (2019). Cabells Predatory Report Criteria v 1.1. https://blog.cabells.com/2019/03/20/predatoryreport-criteria-v1-1/ (March 20, 2019; last accessed: February 23, 2021).
Cobey, K. D., Lalu, M. M., Skidmore, B., Ahmadzai, N., Grudniewicz, A., & Moher, D. (2018). What is a predatory journal? A scoping review. F1000Research, 7, 1001. https://doi.org/10.12688/f1000research.15256.2.
Cooper, C. D. O., & Han, W.-P. (2021). A new chapter for a better Bioscience Reports. Bioscience Reports, 41(5), BSR20211016. https://doi.org/10.1042/BSR20211016
Cortegiani, A., Ippolito, M., Ingoglia, G., Manca, A., Cugusi, L., Severin, A., Strinzel, M., Panzarella, V., Campisi, G., Manoj, L., Gregoretti, C., Einav, S., Moher, D., & Giarratano, A. (2020). Citations and metrics of journals discontinued from Scopus for publication concerns: the GhoS(t)copus Project. F1000Research, 9, 415. https://doi.org/10.12688/f1000research.23847.2.
Crawford, W. (2016). ‘Trust me’: The other problem with 87% of Beall’s lists. http://walt.lishost.org/2016/01/trust-me-the-other-problem-with-87-of-bealls-lists/ (January, 2016; last accessed: June 4, 2021).
Cukier, S., Helal, L., Rice, D. B., Pupkaite, J., Ahmadzai, N., Wilson, M., Skidmore, B., Lalu, M. M., & Moher, D. (2020). Checklists to detect potential predatory biomedical journals: A systematic review. BMC Medicine, 18(1), 104. https://doi.org/10.1186/s12916-020-01566-1
Dadkhah, M., & Bianciardi, G. (2016). Ranking predatory journals: solve the problem instead of removing it! Advanced Pharmaceutical Bulletin, 6(1), 1–4. https://doi.org/10.15171/apb.2016.001.
Dadkhah, M., Maliszewski, T., & Teixeira da Silva, J. A. (2016). Hijacked journals, hijacked web-sites, journal phishing, misleading metrics and predatory publishing: Actual and potential threats to academic integrity and publishing ethics. Forensic Science, Medicine, and Pathology, 12(3), 353–362. https://doi.org/10.1007/s12024-016-9785-x
de Rijcke, S., & Stöckelová, T. (2020). Predatory publishing and the imperative of international productivity: Feeding off and feeding up the dominant. In: Biagioli, M., Lippman, A. (eds) Gaming the Metrics: Misconduct and Manipulation in Academic Research, The MIT Press (pp. 101–110). https://doi.org/10.7551/mitpress/11087.001.0001.
Dony, C., Raskinet, M., Renaville, F., Simon, S., & Thirion, P. (2020). How reliable and useful is Cabell's Blacklist? A data-driven analysis. LIBER Quarterly, 30(1), 1–38. https://doi.org/10.18352/lq.10339.
El-Hagrassy, M. M., Duarte, D., Thibaut, A., Lucena, M. F. G., & Fregni, F. (2018). Principles of designing a clinical trial: Optimizing chances of trial success. Current Behavioral Neuroscience Reports, 5(2), 143–152. https://doi.org/10.1007/s40473-018-0152-y
Eriksson, S., & Helgesson, G. (2017). The false academy: Predatory publishing in science and bioethics. Medicine, Health Care and Philosophy, 20(2), 163–170. https://doi.org/10.1007/s11019-016-9740-3
Eriksson, S., & Helgesson, G. (2018). Time to stop talking about ‘predatory journals.’ Learned Publishing, 31(2), 181–183. https://doi.org/10.1002/leap.1135
Eykens, J., Guns, R., Rahman, A. I. M. J., & Engels, T. C. E. (2019). Identifying publications in questionable journals in the context of performance-based research funding. PLoS ONE, 14(11), e0224541. https://doi.org/10.1371/journal.pone.0224541
Frandsen, T. F. (2017). Are predatory journals undermining the credibility of science? A bibliometric analysis of citers. Scientometrics, 113(3), 1513–1528. https://doi.org/10.1007/s11192-017-2520
Frandsen, T. F. (2019). How can a questionable journal be identified: Frameworks and checklists. Learned Publishing, 32(3), 221–226. https://doi.org/10.1002/leap.1230
Grimes, D. R., Bauch, C. T., & Ioannidis, J. P. A. (2018). Modelling science trustworthiness under publish or perish pressure. Royal Society Open Science, 5, 171511. https://doi.org/10.1098/rsos.171511
Grittersová, J. (2014). Transfer of reputation: Multinational banks and perceived creditworthiness of transition countries. Review of International Political Economy, 21(4), 878–912. https://doi.org/10.1080/09692290.2013.848373
Grudniewicz, A., Moher, D., Cobey, K. D., Bryson, G. L., Cukier, S., Allen, K., Ardern, C., Balcom, L., Barros, T., Berger, M., Ciro, J. B., Cugusi, L., Donaldson, M. R., Egger, M., Graham, I. D., Hodgkinson, M., Khan, K. M., Mabizela, M., Manca, A., … Lalu, M. M. (2019). Predatory journals: No definition, no defence. Nature, 576(7786), 210–212. https://doi.org/10.1038/d41586-019-03759-y
Harvey, H. B., & Weinstein, D. F. (2017). Predatory publishing: An emerging threat to the medical literature. Academic Medicine, 92(2), 150–151. https://doi.org/10.1097/ACM.0000000000001521
Hasan, Z. (2018). Academic sociology: The alarming rise in predatory publishing and its consequences for Islamic economics and finance. ISRA International Journal of Islamic Finance, 10(1), 6–18. https://doi.org/10.1108/IJIF-11-2017-0044
Heesen, R., Bright, L. K., & Zucker, A. (2019). Vindicating methodological triangulation. Synthese, 196, 3067–3081. https://doi.org/10.1007/s11229-016-1294-7
Helleiner, E., & Wang, H.-Y. (2018). Limits to the BRICS’ challenge: Credit rating reform and institutional innovation in global finance. Review of International Political Economy, 25(5), 573–595. https://doi.org/10.1080/09692290.2018.1490330
Hemraj, M. (2015). Theories, rating failure and the subprime mortgage crisis. In: Credit Rating Agencies. Springer, Cham, pp. 11–70. https://doi.org/10.1007/978-3-319-17927-8_2.
Herman, E., Akeroyd, J., Bequet, G., Nicholas, D., & Watkinson, A. (2020). The changed—And changing landscape of serials publishing: Review of the literature on emerging models. Learned Publishing, 33(3), 213–229. https://doi.org/10.1002/leap.1288
Holland, K., Brimblecombe, P., Meester, W., & Chen, T. (2021). The importance of high-quality content: curation and re-evaluation in Scopus. https://www.elsevier.com/research-intelligence/resource-library/scopus-high-quality-content (February, 2021; last accessed: June 4, 2021).
Iglesias-Rodríguez, P. (2016). Paradigm shift in financial-sector policymaking models: From industry-based to civil society-based EU financial services governance? In: Iglesias-Rodriguez P., Triandafyllidou A., Gropas R. (eds) After the financial crisis. Palgrave Studies in European Political Sociology. Palgrave Macmillan, London (pp. 23–73). https://doi.org/10.1057/978-1-137-50956-7_2.
Ioana, P. S. (2014). Credit rating agencies and their influence on crisis. Annals of the Faculty of Economics, University of Oradea, Faculty of Economics, 1(2), 271–278.
Josephson, J., & Shapiro, J. (2020). Credit ratings and structured finance. Journal of Financial Intermediation, 41, 100816. https://doi.org/10.1016/j.jfi.2019.03.003
Justus, J. (2012). Carnap on concept determination: Methodology for philosophy of science. European Journal for Philosophy of Science, 2, 161–179. https://doi.org/10.1007/s13194-011-0027-5
Kavas, M., & Kalender, S. (2014). Corporate social responsibility in credit rating agencies: How to manage areas of conflict and conflicts of interest in a responsible way. Turkish Journal of Business Ethics, 7(1), 36–55. https://doi.org/10.12711/tjbe.2014.7.1.0127.
Kendall, G. (2021). Beall’s legacy in the battle against predatory publishers. Learned Publishing. https://doi.org/10.1002/leap.1374
Koerber, A., Starkey, J. C., Ardon-Dryer, K., Cummins, R. G., Eko, L., & Kee, K. F. (2020). A qualitative content analysis of watchlists vs safelists: How do they address the issue of predatory publishing? The Journal of Academic Librarianship, 46(6), 102236. https://doi.org/10.1016/j.acalib.2020.102236
Kolahi, J., & Khazaei, S. (2015). Journal hijacking: A new challenge for medical scientific community. Dental Hypotheses, 6(1), 3–5. https://doi.org/10.4103/2155-8213.150858
Kratochvil, J., Plch, L., Sebera, M., & Koriťáková, E. (2020). Evaluation of untrustworthy journals: Transition from formal criteria to a complex view. Learned Publishing, 33(3), 308–322. https://doi.org/10.1002/leap.1299
Krawczyk, F., & Kulczycki, E. (2021). How is open access accused of being predatory? The impact of Beall’s lists of predatory journals on academic publishing. The Journal of Academic Librarianship, 47(2), 102271. https://doi.org/10.1016/j.acalib.2020.102271
Kupferschmidt, K. (2020). Preprints bring ‘firehose’ of outbreak data. Science, 367(6481), 963–964. https://doi.org/10.1126/science.367.6481.963
Lai, J. (2014). Accountability and the enforcement of ethical values in finance: Insights from Islamic finance. Australian Journal of Public Administration, 73(4), 437–449. https://doi.org/10.1111/1467-8500.12108
Laine, C., & Winker, M. A. (2017). Identifying predatory or pseudo-journals. Biochemia Medica, 27(2), 285–291. https://doi.org/10.11613/BM.2017.031.
Lakens, D. (2020). Pandemic researchers—Recruit your own best critics. Nature, 581, 121. https://doi.org/10.1038/d41586-020-01392-8
Larivière, V., Haustein, S., & Mongeon, P. (2015). The oligopoly of academic publishers in the digital era. PLoS ONE, 10(6), e0127502. https://doi.org/10.1371/journal.pone.0127502
Levin, N., & Leonelli, S. (2017). How does one “open” science? Questions of value in biological research. Science, Technology, & Human Values, 42(2), 280–305. https://doi.org/10.1177/0162243916672071
Lin, P.-Y., & Dhesi, G. (2010). Comments on predatory lending behaviour. Global Economy and Finance Journal, 3(2), 176–188.
Luckhurst, J. (2018). Global economic governance since the global financial crisis. In: The Shifting Global Economic Architecture, Springer Nature Switzerland AG, Cham, Switzerland (pp. 57–80). https://doi.org/10.1007/978-3-319-63157-8
Macháček, V., & Srholec, M. (2021). Predatory publishing in Scopus: Evidence on cross-country differences. Scientometrics, 126(3), 1897–1921. https://doi.org/10.1007/s11192-020-03852-4
Manca, A., Cugusi, L., Cortegiani, A., Ingoglia, G., Moher, D., & Deriu, F. (2020). Predatory journals enter biomedical databases through public funding. BMJ, 371, m4265. https://doi.org/10.1136/bmj.m4265
Manca, A., Moher, D., Cugusi, L., Dvir, Z., & Deriu, F. (2018). How predatory journals leak into PubMed. CMAJ, 190(35), E1042–E1045. https://doi.org/10.1503/cmaj.180154
Manley, S. (2019). Predatory journals on trial: Allegations, responses, and lessons for scholarly publishing from FTC v. OMICS. Journal of Scholarly Publishing, 50(3), 183–200. https://doi.org/10.3138/jsp.50.3.02
Mayernik, M. S. (2017). Open data: Accountability and transparency. Big Data & Society, 4(2), 1–5. https://doi.org/10.1177/2053951717718853
McCann, T. V., & Polacsek, M. (2018). False gold: Safely navigating open access publishing to avoid predatory publishers and journals. Journal of Advanced Nursing, 74(4), 809–817. https://doi.org/10.1111/jan.13483
McKiernan, E. C., Schimanski, L. A., Nieves, C. M., Matthias, L., Niles, M. T., & Alperin, J. P. (2019). Use of the journal impact factor in academic review, promotion, and tenure evaluations. eLife, 8, e47338. https://doi.org/10.7554/eLife.47338.001.
Meeh-Bunse, G., & Schomaker, S. (2020). An analysis of the competitive situation on the EU rating market in context of regulatory requirements. Proceedings of the ENTRENOVA ENTerprise REsearch InNOVAtion Conference (Online), 6(1), 147–156. https://proceedings.entrenova.org/entrenova/article/view/318 (last accessed: June 4, 2021).
Meier, S., Rodriguez Gonzalez, M., & Kunze, F. (2021). The global financial crisis, the EMU sovereign debt crisis and international financial regulation: Lessons from a systematic literature review. International Review of Law and Economics, 65, 105945. https://doi.org/10.1016/j.irle.2020.105945
Moher, D., Shamseer, L., Cobey, K. D., Lalu, M. M., Galipeau, J., Avey, M. T., Ahmadzai, N., Alabousi, M., Barbeau, P., Beck, A., Daniel, R., Frank, R., Ghannad, M., Hamel, C., Hersi, M., Hutton, B., Isupov, I., Mcgrath, T. A., Mcinnes, M. D. F., … Ziai, H. (2017). Stop this waste of people, animals and money. Nature, 549(7670), 23–25. https://doi.org/10.1038/549023a
Nelhans, G., & Bodin, T. (2020). Methodological considerations for identifying questionable publishing in a national context: The case of Swedish Higher Education Institutions. Quantitative Science Studies, 1(2), 505–524. https://doi.org/10.1162/qss_a_00033
Niles, M. T., Schimanski, L. A., McKiernan, E. C., & Alperin, J. P. (2020). Why we publish where we do: Faculty publishing values and their relationship to review, promotion, and tenure expectations. PLoS ONE, 15(3), e0228914. https://doi.org/10.1371/journal.pone.0228914
Olivarez, J. D., Bales, S., Sare, L., & van Duinkerken, W. (2018). Format aside: Applying Beall’s criteria to assess the predatory nature of both OA and non-OA library and information science journals. College & Research Libraries, 79(1), 52–67. https://doi.org/10.5860/crl.79.1.52
Ozturk, H., Namli, E., & Erdal, H. I. (2015). Modelling sovereign credit ratings: The accuracy of models in a heterogeneous sample. Economic Modelling, 54, 469–478. https://doi.org/10.1016/j.econmod.2016.01.012
Patwardhan, B., Nagarkar, S., Gadre, S. R., Lakhotia, S. C., Katoch, V. M., & Moher, D. (2018). A critical analysis of the ‘UGC-approved list of journals’. Current Science, 114(6), 1299–1303. https://doi.org/10.18520/cs/v114/i06/1299-1303.
Petrou, C. (2020). Guest Post—MDPI’s remarkable growth. https://scholarlykitchen.sspnet.org/2020/08/10/guest-post-mdpis-remarkable-growth/ (August 10, 2020; last accessed: June 4, 2021).
Ryoo, J.-Y., Lee, C.-W., & Jeon, J. Q. (2020). Multiple credit rating: Triple rating under the requirement of dual rating in Korea. Emerging Markets Finance and Trade. https://doi.org/10.1080/1540496X.2020.1768071
Shamseer, L., Moher, D., Maduekwe, O., Turner, L., Barbour, V., Burch, R., Clark, J., Galipeau, J., Roberts, J., & Shea, B. J. (2017). Potential predatory and legitimate biomedical journals: Can you tell the difference? A cross-sectional comparison. BMC Medicine, 15(1), 28. https://doi.org/10.1186/s12916-017-0785-9
Siler, K. (2020a). Demarcating spectrums of predatory publishing: Economic and institutional sources of academic legitimacy. Journal of the Association for Information Science and Technology, 71(11), 1386–1401. https://doi.org/10.1002/asi.24339
Siler, K. (2020b). There is no black and white definition of predatory publishing. LSE Impact Blog. https://blogs.lse.ac.uk/impactofsocialsciences/2020/05/13/there-is-no-black-and-white-definition-of-predatory-publishing/ (May 13, 2020; last accessed: June 4, 2021).
Simons, K. (2008). The misused impact factor. Science, 322(5899), 165. https://doi.org/10.1126/science.1165316
Strinzel, M., Severin, A., Milzow, K., & Egger, M. (2019). Blacklists and whitelists to tackle predatory publishing: A cross-sectional comparison and thematic analysis. mBio, 10, e00411–19. https://doi.org/10.1128/mBio.00411-19.
Teixeira da Silva, J. A. (2013) Predatory publishing: a quantitative assessment, the Predatory Score. The Asian and Australasian Journal of Plant Science and Biotechnology, 7(Special Issue 1), 21–34.
Teixeira da Silva, J. A. (2017). Jeffrey Beall’s “predatory” lists must not be used: they are biased, flawed, opaque and inaccurate. Bibliothecae.it, 6(1), 425–436. https://doi.org/10.6092/issn.2283-9364/7044.
Teixeira da Silva, J. A. (2020b). Cabell’s International publishing blacklist: An interview with Kathleen Berryman. Journal of Radical Librarianship, 6, 16–23. https://journal.radicallibrarianship.org/index.php/journal/article/view/49.
Teixeira da Silva, J. A. (2021c). Citations and gamed metrics: academic integrity lost. Academic Questions, 34(1), 96–99. https://doi.org/10.51845/34s.1.18.
Teixeira da Silva, J. A. (2020a). Is there a clear division between predatory and low-quality journals and publishers? Journal of the Royal College of Physicians of Edinburgh, 50(4), 458–459. https://doi.org/10.4997/JRCPE.2020.427
Teixeira da Silva, J. A. (2021a). Is the validity, credibility and reliability of literature indexed in PubMed at risk? Medical Journal Armed Forces India (in Press). https://doi.org/10.1016/j.mjafi.2021.03.009
Teixeira da Silva, J. A. (2021b). What is a legitimate, low-quality, or predatory surgery journal? Indian Journal of Surgery. https://doi.org/10.1007/s12262-021-02730-4
Teixeira da Silva, J. A., Dobránszki, J., Al-Khatib, A., & Tsigaris, P. (2018). Challenges facing the DOAJ (Directory of Open Access Journals) as a reliable source of open access publishing venues. Journal of Educational Media & Library Sciences, 55(3), 349–358. https://doi.org/10.6120/JoEMLS.201811_55(3).e001.BC.BE
Teixeira da Silva, J. A., Dobránszki, J., Bhar, R. H., & Mehlman, C. T. (2019a). Editors should declare conflicts of interest. Journal of Bioethical Inquiry, 16(2), 279–298. https://doi.org/10.1007/s11673-019-09908-2
Teixeira da Silva, J. A., Dobránszki, J., Tsigaris, P., & Al-Khatib, A. (2019b). Predatory and exploitative behaviour in academic publishing: An assessment. The Journal of Academic Librarianship, 45(6), 102071. https://doi.org/10.1016/j.acalib.2019.102071
Teixeira da Silva, J. A., & Tsigaris, P. (2018). What value do whitelists and blacklists have in academia? The Journal of Academic Librarianship, 44(6), 781–792. https://doi.org/10.1016/j.acalib.2018.09.017
Teixeira da Silva, J. A., & Tsigaris, P. (2020). Issues with criteria to evaluate blacklists: An epidemiological approach. The Journal of Academic Librarianship, 46(1), 102070. https://doi.org/10.1016/j.acalib.2019.102070
Tichy, G., Lannoo, K., Ap Gwilym, O., Alsakka, R., Masciandaro, D., & Paudyn, B. (2011). Credit rating agencies: Part of the solution or part of the problem? Intereconomics, 46(5), 232–262. https://doi.org/10.1007/s10272-011-0389-0
Topper, L., Marill, J., Kelly, C., & Funk, K. (2019). Rigorous policies ensure integrity of NLM literature databases. Canadian Medical Association Journal, 191(10), E289. https://doi.org/10.1503/cmaj.71602
Tsigaris, P., & Teixeira da Silva, J. A. (2019). Did the research faculty at a small Canadian business school publish in “predatory” venues? This depends on the publishing blacklist. Publications, 7(2), 35. https://doi.org/10.3390/publications7020035
Tsigaris, P., & Teixeira da Silva, J. A. (2020). Reproducibility issues with correlating Beall-listed publications and research awards at a small Canadian business school. Scientometrics, 123(1), 143–157. https://doi.org/10.1007/s11192-020-03353-4
Tsigaris, P., & Teixeira da Silva, J. A. (2021). Why blacklists are not reliable: A theoretical framework. The Journal of Academic Librarianship, 47, 102266. https://doi.org/10.1016/j.acalib.2020.102266
Umlauf, M. G., & Mochizuki, Y. (2018). Predatory publishing and cybercrime targeting academics. International Journal of Nursing Practice, 24(S1), e12656. https://doi.org/10.1111/ijn.12656
Vernazza, D. R., & Nielsen, E. F. (2015). The damaging bias of sovereign ratings. Economic Notes, 44(2), 361–408. https://doi.org/10.1111/ecno.12037
Vousinas, G. L. (2015). Supervision of financial institutions: The transition from Basel I to Basel III. A critical appraisal of the newly established regulatory framework. Journal of Financial Regulation and Compliance, 23(4), 383–402. https://doi.org/10.1108/JFRC-02-2015-0011.
White, L. J. (2010). The credit rating agencies. Journal of Economic Perspectives, 24(2), 211–226. https://doi.org/10.1257/jep.24.2.211
Wiggins, B. J., & Christopherson, C. D. (2019). The replication crisis in psychology: An overview for theoretical and philosophical psychology. Journal of Theoretical and Philosophical Psychology, 39(4), 202–217. https://doi.org/10.1037/teo0000137
Williamson, P. O., & Minter, C. I. (2019). Exploring PubMed as a reliable resource for scholarly communications services. Journal of the Medical Library Association, 107(1), 16. https://doi.org/10.5195/jmla.2019.433
Wilsdon, J., Allen, L., Belfiore, E., Campbell, P., Curry, S., Hill, S., Jones, R., Kain, R., Kerridge, S., Thelwall, M., Tinkler, J., Viney, I., Wouters, P., Hill, J., & Johnson, B. (2015). The Metric Tide: Report of the Independent Review of the Role of Metrics in Research Assessment and Management. SAGE Publications Ltd., Newbury Park, CA, USA, 163 pp. https://doi.org/10.4135/9781473978782.
Wouters, P., Sugimoto, C. R., Larivière, V., McVeigh, M. E., Pulverer, B., de Rijcke, S., & Waltman, L. (2019). Rethinking impact factors: Better ways to judge a journal. Nature, 569(7758), 621–623. https://doi.org/10.1038/d41586-019-01643-3
Acknowledgements
The authors acknowledge and appreciate the input and critical feedback provided by Professor Panagiotis Tsigaris (Department of Economics, Thompson Rivers University, Kamloops, BC, Canada) on earlier versions of this paper. The authors also thank Cabells for permission to use two journals from their Predatory Reports to exemplify the use of this concept.
Funding
This project received no funding. However, ECOOM, where Joshua Eykens works, is funded by the Flemish government. The opinions in the paper are those of the authors and not necessarily those of ECOOM or the Flemish government.
Author information
Contributions
The authors, who are co-corresponding authors, contributed equally to the intellectual discussion underlying this paper, literature exploration, writing, reviews and editing, and accept responsibility for its content.
Ethics declarations
Conflict of interest
DD is an Associate Editor for Ethical Human Psychology and Psychiatry (https://www.springerpub.com/ethical-human-psychology-and-psychiatry.html), Consulting Editor for Social Work (https://academic.oup.com/sw), and an Editorial Board Member for Research on Social Work Practice (https://journals.sagepub.com/home/rsw), Social Work in Mental Health (https://www.tandfonline.com/toc/wsmh20/current), and Journal of Autism and Developmental Disorders (https://www.springer.com/journal/10803). MM is a Managing Editor for International Journal of Health Policy and Management (https://www.ijhpm.com/). Other than these, the authors declare no relevant conflicts of interest.
Ethical issues
There are no ethical issues as this paper is based on secondary data and statements that are publicly available.
Cite this article
Teixeira da Silva, J.A., Dunleavy, D.J., Moradzadeh, M. et al. A credit-like rating system to determine the legitimacy of scientific journals and publishers. Scientometrics 126, 8589–8616 (2021). https://doi.org/10.1007/s11192-021-04118-3