1 Introduction

“… if we had an ‘undo’ button, and we could go back and isolate it and grab it when it first started -- if we could find it early, and we had early detection and early response, and we could put each one of those viruses in jail -- that's the only way to deal with something like a pandemic.”—Larry Brilliant (TED Talks 2006).

1.1 The concept of infodemic

The entire globe has been engulfed in the battle against the novel 2019-nCoV (Dong et al. 2020; Skegg et al. 2021; Khanna et al. 2020). Public health has rarely been threatened on this scale: whole nations, not just one but many, have resorted to unprecedented and desperate measures, including total lockdowns, in rapidly performed mitigation actions to curtail a highly infectious and mobile virus. This development poses diverse challenges to different stakeholders, including the frantic efforts of governments to flatten the epidemic curve (WHO 2019; Dodds et al. 2020), the socioeconomic disturbances experienced by citizens as a result of regulations, and the overstretching of facilities and frontline medical personnel.

However, one of the most dangerous trends, and a serious menace to the fight against the virus, is the fake news associated with the pandemic, which is fast becoming a social problem in its own right (Shimizu 2020). This may not be unconnected with the wide acceptance of unconventional media, where regulation is at its lowest ebb; in fact, the very ethos of social media is to put ‘decision making’ in the hands of the reader (Moravec et al. 2019). Social media environments have informed people massively about the spread of COVID-19 and the actions needed to prevent it, but the fast flow of fake news and misinformation plays a critical role in making such preventive efforts fail. It is notable that advanced data processing technologies, such as Artificial Intelligence and software-hardware oriented tools, have resolved many issues in healthcare and in the COVID-19 problem (Podder et al. 2021; Rodrigues et al. 2018; Kose et al. 2016), but there is also a dark side to technology and to the widely popular social media when both are examined from a sociological perspective (Amin and Khan 2021; Kose 2018; Domenico et al. 2021). The World Health Organization (WHO), in its attempt to trigger global efforts toward taming novel coronavirus disease 2019 (COVID-19) fake news across the world, noted that apart from the COVID-19 pandemic itself, the next most challenging problem is what it terms the infodemic nature of the news circulated daily, most especially on online and social media (Pulido et al. 2020). These are the fake news, myths, and misleading or misguiding information propagated daily by disgruntled elements in society (and even unknowingly by innocent citizens), constituting a cog in the wheel of the concerted, collaborative efforts of health institutions to ensure citizen engagement in the fight. As a recent example, Nigeria had to deal with a potential pandemic in the last decade, when it fought off and eradicated the Ebola virus. Social media has great power to inform societies around the world, even in cases where authorities restrict access to accurate news, but the same power can easily be used to manipulate those societies and cause massive societal problems. At the time of a pandemic, both the sociological and psychological states of societies are sensitive, so it is critical to defend the right to receive true news and information against malicious and manipulative campaigns that spread fake news.

1.2 A new concept: ‘Twitterdemic'

Prominent amongst social media platforms is Twitter, with well over 380 million active users by April 2020 (Statista 2020). It enjoys patronage across demographics, with users thronging their Twitter handles to post user-generated information popularly referred to as tweets, or simply joining ongoing hash-tag discussions, which are arranged in threads amounting to millions of tweet posts in some cases. By January 2020, the number of Internet users in Nigeria had hit 85.49 million, an increase of 2.2 million (+2.6%) over the 2019 figure, with a total of 27 million social media users (Kemp 2020). Of these subscribers, 99% are active users who access their preferred social media platforms through mobile phones, spending an average of 3 h and 30 min per day. It is noteworthy that Twitter is the platform of choice for 50% of these active users, hence its huge traffic. A major contributor to this traffic in the last few months has been coronavirus-related breaking news.

Traditional tabloids have also suddenly adopted Twitter, where their breaking news is promptly posted ahead of the production of the next day's print issue. Given Twitter's wide acceptance, volatility and user patronage, however, the major challenge that continues to confront researchers is finding the most effective ways to curb the spread of fake news and its attendant threats to global peace and harmony. This is attributed to the fact that the communication of false and misleading information has greatly increased the trust deficit between government institutions, corporate organizations and the general populace (Lakshmanan et al. 2019). Although many users place trust in official sources, they admit to (accidentally or intentionally) passing on information on social media (Domenico et al. 2021). It remains to be seen how things will pan out following US president Donald Trump's executive order to tear up the protections of social media, thereby laying the burden of direct liability for content (fake, illegal, repulsive, and so on) at the doorstep of the social media providers (see Engineering and Technology 2020 for more information).

Because of the harmful societal implications of misleading news, detecting fake news, or in this case fake tweets, has attracted increasing attention globally. However, detection models that rely on an author's profile information, geographical location, handle, hash-tags, or the verified/unverified status of a handle are generally not reliable. This is because proponents and perpetrators of fake news have since learned to model and brand their social media accounts and handles in a responsible-looking manner that easily earns the trust of followers or visitors. Thus, there is a need for efficient analysis of the news semantics, not merely to determine the truth of the post, as most of the literature targets, but to ascertain the trustworthiness of the source; placing emphasis on the source rather than the content of the news itself is the main thrust of this research.

1.3 Attempts towards detecting fake news

Several methodologies have been adopted for detecting fake news, among which natural language processing is the most applicable to text categorization. Since posts on social media and Twitter are texts, fake news detection naturally falls under the purview of Natural Language Processing (NLP) (Kroeze et al. 2003). Consequently, the pandemic-like nature of fake news and misguiding COVID-19 information cannot be overemphasized, and there is a dire need for a global consensus on international best practices to reverse the ugly trend. Meanwhile, as seen in Fig. 1, most infodemic tweets are decidedly counterproductive, hence the urgent need for a model that identifies the source of COVID-19 information before its dictates are complied with. That was the motivating factor of this study, coupled with the efficiency of Machine Learning algorithms, as reliable detection tools, for the predictive analytics of big data mining, including text mining.

A huge volume of data on social media is text data, most especially on Twitter, which strictly limits the length of each tweet. Text mining and NLP, which by design extract meaningful patterns and derive sense from text data, were therefore apt for this study: they allow the source of a Twitter post to be detected so that a post can be ignored, reported or rejected as misguiding, or complied with as one from a verified institution.

A major problem with the modelling of text, however, is the messy nature of the words, most especially in user-generated social media posts, which are stylishly crafted and posted with characteristic elements such as emoticons, hash-tags, haphazard punctuation, abbreviations, and deliberate misspellings. That is an issue for Machine Learning, which performs better with well-defined, fixed-length inputs and outputs. The problem becomes worse as a result of variations across languages and dialects, as in the case of Nigeria with over 500 vernaculars (Africa Check 2020).

1.4 Towards addressing fake news

Since raw Twitter posts are not directly usable as Machine Learning inputs, tweets must be converted into vectors of numbers before the classification phase of a typical fake news detection model can be designed. Notwithstanding, precise fake news detection is still challenging, partly due to the spontaneous nature of social media and the complexity and multiplicity of online communication data. Additionally, the limited availability and varying sampling of high-quality training data is a big issue for training supervised learning models (Zhang and Ghorbani 2020). That issue was specifically addressed in the study explained here, with clear-cut tweet aggregation methods for dataset modelling that take into consideration the season and time of data capture, for a better appraisal of the vocabulary of known words, since the nature and characteristics of the data are known to play a significant role in text predictive analytics (Torabi Asr and Taboada 2019).

Given that infodemic tweets are counterproductive and destructive to pandemic containment programs, the most important question to ask is: ‘how can a novel predictive model assist with the containment of destructive infodemic tweets against the notion of deception and the hedonic mindset?’ Starting from that question, the following were investigated in this study:

  • Examining the fake news detection approaches in the associated literature,

  • Performing efficient analysis of news semantics, beyond the mere truth of posts,

  • Postulating a classifier for efficient analysis and for ascertaining the trustworthiness of non-linear text semantics, capable of handling the nature and characteristics of the text within the social time and season domain.

In connection with the points above, the objective of this study was the development of an ensemble Machine Learning model that deals with massive numbers of tweets and classifies them as fake or not. Technically, the study combines linguistic methods to make tweet data ready for Machine Learning classification, and specific models were chosen to build an ensemble method operating on Bag-of-Words representations. The resulting method, which combines the Synthetic Minority Over-Sampling Technique (SMOTE) and a classifier vote ensemble and is called SCLAVOEM, was evaluated with well-known metrics.

Based on the motivations and objective of this study, the rest of the paper is arranged as follows: the research domain, including the literature review, is presented in Sect. 2, while Sect. 3 unveils the methodology deployed for the design of the proposed model. Section 4 discusses the results of the predictive analytics, and the paper ends with a discussion, conclusions, and some future work ideas and recommendations.

2 Research domain

This section discusses the theoretical framework, starting off with an abductive mapping of the nature of the novel coronavirus onto fake news. Thereafter, related studies are discussed in the literature review.

2.1 Theoretical underpinnings

To better frame the theoretical background of this study, some essential information is provided in the following paragraphs. This background then supports the literature review that follows, in understanding what has been done so far in work related to this study.

2.1.1 Nature of COVID-19 spread vector; asymptomatic surface contamination

Infodemic COVID-19 tweets, as a variant of fake news, endanger the containment of the pandemic. They impact negatively on the measures (rapid detection, rapid response programs) being taken to deal with the short incubation period and high transmissibility of a pandemic like COVID-19 (Kim et al. 2019).

The novel coronavirus is classified as an enveloped virus because of the ‘crown’ around it that aids its survival. It is able to stay on surfaces for extended periods of time, which vary depending on the type of surface it attaches itself to. For the virus to replicate, it must attach itself to a receptor in a host; in the case of the current pandemic the host is a human being, who might be asymptomatic for a short or extended period, or might simply never progress to a ‘sick state’.

The issue of fake news is not so far removed from this. The fake news radial ring by Zhang and Ghorbani (2020) is instructive in this regard. Fake news requires spreaders and target victims within a news or social context. The target, as a host, might be ‘asymptomatic’ and unknowingly spread the fake news along. The target might, on the other hand, process the fake news and still spread it knowingly or deceitfully. The ‘asymptomatic’ state is synonymous with System 1 in Moravec et al. (2019), while processed and deceitful spreading is likened to System 2.

2.1.2 Hedonic, confirmation bias, deception and reputation theories

Human beings with pre-conceived, pre-determined or pre-existing opinions tend to have a hedonic mindset that leads to confirmation bias. They want to satisfy this hedonic mindset and hence are ‘lazy’ in expending the cognitive effort needed to resolve any cognitive dissonance arising from the fake news they receive (Velavan and Meyer 2020). They are willing receptors and replicators.

With reference back to Zhang and Ghorbani’s fake news ring (Zhang and Ghorbani 2020), human beings naturally step out of the hedonic mindset into a utilitarian mindset (Velavan and Meyer 2020) when engaging in a news context. For example, when looking for information on a university program to take, or for health information, we tend to make a cognitive effort. In the news context, we make an effort to fight off the fake news ‘virus’. We briefly put our antibodies to work.

Nonetheless, we might still pass on fake news and/or be consumed by it. There is generally a level of laxity when we are with family members or acquaintances: our ‘alertness’ to coronaviruses tends not to be at the same level as when we are outside our ‘comfort’ zone. What is at play here, to an extent, is our level of trust. Similarly, when faced with fake news (knowingly or unknowingly) whose sources are known or are institutional ‘authorities’, or whose contents fall within our knowledge, we might ‘default’ to the source reputation or domain reputation. Regarding a tweet (or any other social media content), we evaluate the past against the present through the three lenses of the functional, the social and the affective (Velavan and Meyer 2020). The degree of cognitive effort in our utilitarian mindset will vary along these three lenses.

Even when we deeply engage our cognitive efforts to make an informed decision, we might still fail to recognize the fakeness of fake news. We become deceived and become a consumer and/or a carrier of fake news. In the context of interpersonal deception theory, the source intentionally wants to ensure the success of the deception, whilst the destination has a duty to detect the truth along with the deception (George et al. 2018). Similarly with fake news, except where spreading is unknowing, deception goes along with misleading and misguiding, and the receiver has a duty to stop fake news from propagating, especially an infodemic. To prevent an infodemic from exacerbating a pandemic crisis, processes, algorithms and systems must be in place to stop fake news in its tracks (Conroy et al. 2015).

2.2 Literature review

To build the background knowledge needed for the development of the SCLAVOEM method in this study, a purposeful and reverse citation search of papers in Web of Science and Google Scholar was performed first (Larsen et al. 2019). This allowed the review to focus on related, relevant research. To keep the scope manageable, the most important ‘established and current’ research studies on fake news detection were examined, with specific emphasis on those employing Machine Learning techniques.

Computational linguistics, often called NLP, has been widely adopted in the literature to classify text for analytical purposes; it is one of the most relevant information technologies where there is a continuous need for the appraisal and analysis of the avalanche of information travelling in packets across networks. The application of NLP is widespread because people communicate largely in natural language, in web search, emails, customer service, user-generated posts on social media, etc.; hence its extensive deployment in the literature for text classification, whether for bug detection, sentiment analysis, customer review analysis, or fake news detection. The motivation to checkmate fake news, especially on social media, is shared across professions, with diverse approaches and methodologies aiming at an eventual accurate model of the goal in sight. A systematic literature review of text classification models by Wahono (2015) discusses several areas central to an efficient text classification model, including the dynamics of the framework adopted, the choice of classification model and predictive techniques, and the nature and type of the deployed dataset. The review noted that the majority of the selected studies adopt a Machine Learning methodology, with Naïve Bayes, Decision Tree, Neural Networks, Random Forest, Logistic Regression and Support Vector Machine (SVM) as the most frequently deployed learner algorithms, while reiterating the call for robust datasets with balanced class representation to ensure better classification accuracy.

More recently, a similar systematic survey by Meel and Vishwakarma (2019) of high-quality text classification studies on fake news detection and information pollution on social media and websites reiterated the dire consequences of fake news, while noting that most datasets used for text predictive analytics are private or public collections used mainly for supervised learning, often centred on the linguistic features of texts. Like Wahono (2015), Meel and Vishwakarma (2019) found Naïve Bayes, SVM and kNN to be the most frequently deployed learner algorithms, and observed that data with imbalanced class representation were the most frequently used, despite the bias this introduces. Having identified specific text classification performance-enhancement variables lacking in the literature spanning 2005 to 2019, we surveyed contemporary, high-quality literature on text classification and fake news detection, with an inclusion policy that preferred journal studies and studies discussing fake news prediction datasets, methods and frameworks, and an exclusion policy that ruled out studies without experimental results on fake news detection and studies of use cases other than social media or the web.

The performance indices identified in Wahono (2015) and Meel and Vishwakarma (2019) were sought out in the consulted literature, with the findings presented in Table 1.

Table 1 Text classification and fake news detection performance enhancement indices across the related studies and models

2.3 Class imbalance problem

In the detection of fake news, data mining helps to discover unknown patterns and to improve the prediction models used in text classification (Shu et al. 2017). However, accurate prediction is, among other things, a function of class imbalance, which arises when the number of instances of one class greatly outnumbers another. In this case, the instances of the majority class dominate the learning process of the learner algorithm, skewing the learned model against the minority instances, a situation that leads to bias and hence lower accuracy; this issue is briefly referred to as the ‘bias’ factor. Most data mining algorithms used in predictive studies function well when supplied with evenly distributed class data (Mirza et al. 2018; Sun et al. 2018), but as can be observed from Table 1, the majority of the text data deployed in fake news studies has been unevenly distributed: one class has many more instances than the other, leading to an imbalanced dataset.

Consequently, there is a need to deploy balanced data, with even representation of class instances, to ensure better prediction accuracy in fake news identification, as a predictive model trained with a balanced dataset is more efficient and can accurately differentiate between classes during the testing phase. Hence, as a clear departure from existing text classification models in the literature, and since most text data, including ours, has an imbalanced class distribution, this study runs an algorithm that creates synthetic instances for the minority class of the data to distribute instances evenly, as required for a better predictive model. A number of solutions outside text classification have been proposed to solve class imbalance at the data level, including manually adding more instances to the minority class, but a solution at the algorithmic level, as deployed in the model of this study, has shown better results in the literature (Mirza et al. 2018). The Synthetic Minority Over-Sampling Technique (SMOTE) was used alongside a classifier vote ensemble; these are discussed in the methodological description of SCLAVOEM (explained in the next section). SMOTE is configured to find the k = 5 nearest neighbours (or another tuned value) of each member of the minority class and then to generate synthetic instances along the lines formed between the instance and its neighbours, so that the new instances inherit the characteristic nature of the parent minority class. After the SMOTE application, the Machine Learning classification phase ensues to determine the experimental result of the fake news prediction modelling.

2.4 Summary of research domain in the study

This study, as summarized in Table 1 and Fig. 2, highlighted the identified gap in the literature. The research here addresses that gap by encapsulating a minority oversampling technique to address class imbalance, which is prominent in text classification datasets, and by introducing the hyper parameter optimization strongly recommended by Agarwala et al. (2019). The adoption of Machine Learning remains widespread owing to its learning capabilities, while Bag-of-Words is likewise reported to be efficient for the linguistic analysis of texts. Bag-of-Words is an effective method, used in language/semantics-oriented research, for creating features that represent wider sentences and for matching (or performing general operations) over them to produce outcomes (Cummins et al. 2018; Vries et al. 2018). In this way, it becomes easier to process wide linguistic data and to ease the job of Machine Learning models. Indeed, Bag-of-Words has even been used in computer vision (Sivic and Zisserman 2008). Based on this evidence, the study here aimed to propose a sufficiently comprehensive method for the detection of fake (or real) tweets.

Fig. 1
figure 1

Cross section of sampled valid and infodemic COVID-19 tweets

3 Materials and methods

This section provides detailed information about all components and the associated methods used in the study, which together form SCLAVOEM. The details also clarify the fake news detection approach, connecting the linguistic data processing with the use of Machine Learning.

3.1 Fake news detection approach

The method proposed in this study is based on ten steps distributed over two phases, supervised Machine Learning and testing, both featuring an enabler, hyper parameter optimization, applied at definite points of the pipeline. The method identifies fake news about COVID-19 on the Twitter social media platform. The first phase covers the following nine stages: (1) accumulation of data (tweets) related to COVID-19, carried out with data parsing methods via the Twitter API and code targeting tweets; (2) data cleaning, eliminating the data/tweets that are not useful for the subsequent classification; (3) tokenization of tweets, including stemming (transforming a tweet's word sequence into a group of stems), stop-word handling, and N-graming (feature extraction with elimination of stop-words and employment of specific arrays of words); (4) generation of the attributes of the vocabulary of known words; (5) creation of a ranked list of the vocabulary of known words as a Bag-of-Words via the TF-IDF method (the representation of the processed tweet data); (6) splitting of the Bag-of-Words into train and test sets for the training and testing phases of the Machine Learning models; (7) oversampling of the minority output class, eliminating the imbalanced data issue and thus the bias factor; (8) feature engineering; and (9) ensemble classification of the dependent output class into a binary category of either fake or valid news (binary: 0 and 1) to generate the predictive model. The second phase repeats steps seven and eight and concludes with the invocation of the predictive model for the prediction/testing phase of the proposed COVID-19 fake news predictive model. The hyper parameter optimization feature of this model is applied during the tokenization, minority oversampling, feature engineering and ensemble classification points of the pipeline, as an enabling, distinguishing factor of the proposed model. The model is implemented on four distinct datasets, including a sentiment analysis of the fake tweets corpus that further categorizes tweets with satire, hoax, unsubstantiated (negative), positive and click-bait labels to determine the opinions, emotions and attitudes of tweets; the whole process is encapsulated in the interaction overview diagram in Fig. 3.
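For illustration, the sketch below outlines how such a pipeline could be assembled with scikit-learn and imbalanced-learn; the experiments in this paper were run in WEKA, so all class names and parameter values here are assumptions rather than the exact configuration used.

```python
# Minimal, illustrative sketch of a SCLAVOEM-style pipeline (not the WEKA setup).
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline          # imblearn Pipeline allows sampler steps
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import KNeighborsClassifier

pipeline = Pipeline(steps=[
    # Stages 3-5: tokenization, uni/bi/trigram extraction and TF-IDF Bag-of-Words
    ("bow", TfidfVectorizer(lowercase=True, stop_words="english", ngram_range=(1, 3))),
    # Stage 7: oversample the minority output class with SMOTE
    ("smote", SMOTE(k_neighbors=5, random_state=2)),
    # Stage 8: feature engineering - keep only the highest-ranked word attributes
    ("select", SelectKBest(score_func=mutual_info_classif, k=500)),
    # Stage 9: classifier vote ensemble over a subset of the base learners
    ("vote", VotingClassifier(estimators=[
        ("nb", MultinomialNB()),
        ("knn", KNeighborsClassifier(n_neighbors=1)),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=2)),
    ], voting="hard")),
])

# Training would be a single call on the cleaned tweets and labels, e.g.
# pipeline.fit(train_tweets, train_labels); prediction via pipeline.predict(...).
print(pipeline)
```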

Fig. 2
figure 2

Distribution of research variables observed across the related studies and models distribution

3.2 Data aggregation

COVID-19-related posts on Twitter were collected across various trending hashtags, thread discussions and the specific Twitter profiles of the constituted health authorities directly in charge of the management of the pandemic. The corpus of tweets was acquired and categorized into four sets: pre-lockdown, post-lockdown, the concatenation of the pre/post-lockdown sets, and the subset of the pre/post tweet corpus originating from un-constituted authorities. This approach was adopted because of the dynamic nature of the discussion of COVID-19, owing to its novelty, which intermittently spawns trending hashtags or discussion threads in reaction to breaking news and the government regulations that are reviewed and announced daily for prompt compliance by citizens. In total, the collected data comprised a corpus of 176,877 tweet words discussing COVID-19; its percentage distribution, source, nature and status are summarized in Table 2.
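A hedged sketch of how such tweet aggregation could be scripted is given below; it targets the current Twitter API v2 recent-search endpoint, so the endpoint, query string and the BEARER_TOKEN placeholder are illustrative assumptions and not the exact collection set-up used for the corpora in Table 2.

```python
# Illustrative tweet aggregation via the Twitter API v2 recent-search endpoint.
import requests

BEARER_TOKEN = "YOUR_TWITTER_BEARER_TOKEN"   # placeholder credential
SEARCH_URL = "https://api.twitter.com/2/tweets/search/recent"

def fetch_covid_tweets(query, max_results=100):
    """Return a list of tweet texts matching the query (one page only)."""
    headers = {"Authorization": f"Bearer {BEARER_TOKEN}"}
    params = {
        "query": query,                      # hashtags, keywords or from: operators
        "max_results": max_results,          # 10-100 per request for this endpoint
        "tweet.fields": "created_at,author_id",
    }
    response = requests.get(SEARCH_URL, headers=headers, params=params, timeout=30)
    response.raise_for_status()
    return [tweet["text"] for tweet in response.json().get("data", [])]

if __name__ == "__main__":
    # Pre/post-lockdown corpora would combine several such queries, including
    # official health-authority handles and trending hashtags.
    tweets = fetch_covid_tweets("(#COVID19 OR #coronavirus) lang:en -is:retweet")
    print(len(tweets), "tweets collected")
```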

Table 2 Characteristic nature of the tweets across four datasets

3.3 Data preprocessing and tokenization

As shown in Table 3, data cleaning including stop-word handling, stemming, tokenization and filtering was carried out to produce the vocabulary of known words, whose attributes form the numeric Bag-of-Words eventually deployed for the training and prediction phases of the model. These processes were essential to obtain data suited to the target classification problem. After cleaning and stemming, the tweet text corpus was filtered for feature extraction with the StringToWordVector filter (a weighted instance handler), which transforms tweet-string attributes into a set of numeric attributes representing word occurrence information in the tweet text, separating the corpus into unigrams (1-word), bigrams (2-word phrases) and trigrams (3-word phrases) that form the vocabulary of known words. In this way, the original data format is transformed into a better organization for effective use by data processing tools, including Machine Learning and Data Mining. The filter thus creates unigram, bigram and trigram Bags-of-Words for the binary class and then merges them. The resulting Bag-of-Words was then subjected to minority oversampling to balance the class representation (to prevent bias) and later to feature selection, which reduces the long list of unigrams, bigrams and trigrams to a subset of the most significant classifying attributes, thereby reducing attribute redundancy and improving the predictive accuracy of the model. All of these processes amount to careful data preparation for the target Machine Learning model, which handles the classification over the clean data; without clean data, Machine Learning is of little use for problems with linguistic aspects such as the one studied here.
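The following minimal sketch, assuming scikit-learn and NLTK rather than WEKA's StringToWordVector, illustrates the cleaning, stemming and unigram/bigram/trigram TF-IDF steps described above; the tweet texts and parameter choices are made up for illustration only.

```python
# Illustrative cleaning, stemming and n-gram Bag-of-Words construction.
import re
from nltk.stem import PorterStemmer
from sklearn.feature_extraction.text import TfidfVectorizer

stemmer = PorterStemmer()

def clean_and_stem(tweet):
    """Lowercase, strip URLs/mentions/punctuation, then stem each token."""
    tweet = tweet.lower()
    tweet = re.sub(r"https?://\S+|@\w+|#", " ", tweet)   # drop links, handles, '#'
    tweet = re.sub(r"[^a-z\s]", " ", tweet)              # keep letters only
    return " ".join(stemmer.stem(tok) for tok in tweet.split())

tweets = [
    "NCDC confirms 25 new cases of #COVID19 in Lagos https://t.co/xyz",
    "Drinking hot water cures coronavirus!!! RT @someone",
]

# Unigram, bigram and trigram vocabulary of known words, weighted by TF-IDF
vectorizer = TfidfVectorizer(preprocessor=clean_and_stem,
                             stop_words="english",
                             ngram_range=(1, 3))
bag_of_words = vectorizer.fit_transform(tweets)
print(bag_of_words.shape)                 # (n_tweets, n_known_word_features)
print(vectorizer.get_feature_names_out()[:10])
```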

Table 3 Preprocessing techniques and parameter optimization application points

3.4 Minority oversampling and feature engineering

To enhance forecast accuracy and address the class imbalance in the training dataset (as observed in Table 2), SMOTE was applied at the data level to resample the datasets against the challenges posed by an imbalanced output class: it oversamples the class with the smaller representation by creating fresh synthetic instances, thereby addressing the shortfall, rather than under-sampling the majority instances, for better predictive analytics.
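A toy sketch of this oversampling step, assuming imbalanced-learn's SMOTE implementation, is shown below; the counts mirror the BOW-1 setting reported later (minority class grown from 177 to 230 instances, a 30% increment), while the random features merely stand in for the real TF-IDF attributes.

```python
# Toy illustration of minority oversampling with SMOTE (not the WEKA run itself).
from collections import Counter
import numpy as np
from imblearn.over_sampling import SMOTE

rng = np.random.default_rng(seed=2)
X = rng.normal(size=(404, 20))                       # 404 instances, 20 toy attributes
y = np.array([0] * 227 + [1] * 177)                  # 227 majority vs 177 minority tweets

smote = SMOTE(sampling_strategy={1: 230},            # target count = 177 * 1.3 ~ 230
              k_neighbors=5,                         # neighbours used for interpolation
              random_state=2)                        # the 'seed' in the paper's terminology
X_resampled, y_resampled = smote.fit_resample(X, y)

print("before:", Counter(y))                         # {0: 227, 1: 177}
print("after: ", Counter(y_resampled))               # {0: 227, 1: 230}
```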

As represented in Fig. 4, SMOTE considers each instance as a vector and generates synthetic versions (in red) along the line between a minority sample and its adjoining neighbour (in black), depending on the programmed number of nearest neighbours. The percentage of new synthetic instances bridges the class deficit when the seed used for random sampling is set to 1 or 2, interchangeably, to optimize performance. Furthermore, as pointed out several times earlier, this data processing makes any Machine Learning model robust against bias caused by the data. In this study, the bias factor was considered an important trap in the development of a good-performing tweet classifier, so SMOTE solved both the performance and the bias issue. All four datasets were treated with instance resampling through oversampling. The nearest-neighbour parameter in the resampling was optimized between 5 and 7, interchangeably, to enhance performance, with the percentage increase likewise optimized across 30, 50 and 100%, depending on the class size distribution of the dataset, at a random seed of 1 or 2. The class index, which is the index of the class value to which SMOTE is applied, was set to 0 in order to auto-detect the non-empty minority class. SMOTE scaled up the minority classes across the four datasets for the subsequent classification phase. Feature engineering was then employed to reduce the avalanche of text attributes usually generated by feature extraction at the filtering stage (Table 3). Some attributes in the dictionary of known words captured in the Bag-of-Words turn out to be redundant for an efficient classification phase, hence the need to rank the word attributes in order to determine those with a significant impact on text classification. The information gain (IG) evaluator filter was used to rank the entire 176,877-word corpus, as distributed across the four datasets, to determine the relevance of each COVID-19 tweet post. To achieve this, the ranker evaluator calculates the entropy of each attribute, which varies from 0 (no information) to 1 (maximum information), and then assigns high IG values to attributes as shown in Eq. (1):

$$ {\text{IG}}(C, B_{i}) = E(C) - E(C \mid B_{i}), $$
(1)

where C is the output class, B_i is the i-th word attribute, and E denotes entropy. The size of the reduced set of word attributes in the Bag-of-Words depends on the parameters optimized, as captured in the feature selection row of Table 4.
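The short sketch below re-implements Eq. (1) for binary word-presence attributes, purely for illustration; in the actual experiments this ranking is performed by WEKA's information gain evaluator, and the toy labels and word vector here are made up.

```python
# Illustrative re-implementation of Eq. (1): IG(C, B_i) = E(C) - E(C | B_i).
import numpy as np

def entropy(labels):
    """Shannon entropy E(.) of a label vector, in bits."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def information_gain(word_present, labels):
    """IG of one binary word attribute B_i with respect to the class C."""
    conditional = 0.0
    for value in (0, 1):
        mask = word_present == value
        if mask.any():
            conditional += mask.mean() * entropy(labels[mask])
    return entropy(labels) - conditional

# Toy example: does the (stemmed) word "lockdown" appear in the tweet?
labels = np.array([1, 1, 1, 0, 0, 0, 0, 1])          # 1 = valid source, 0 = random user
lockdown = np.array([1, 1, 1, 0, 0, 0, 1, 0])
print(round(information_gain(lockdown, labels), 3))  # higher value = more informative word
```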

Fig. 3
figure 3

Interaction overview diagram of the proposed infodemic predictive model

Table 4 Classification metrics of infodemic tweet

3.5 Classifier ensemble Machine Learning

A classifier ensemble is a cutting-edge supervised learner in predictive analytics, adopting base algorithms in its learning process for improved classification and testing. In a voting ensemble, multiple classification models are first created from the training dataset: each base model is built on the same training set but with different classification rules. Each base model then forwards its prediction (vote) for each test instance, using tenfold cross-validation, and the final output prediction takes into account the predictions of the better models. In this study, efficient base learners recommended by state-of-the-art text classification experimental results were deployed: Naïve Bayes (NB), the function-based sequential minimal optimization (SMO), the Voted Perceptron (VP), kNN (IBK in WEKA), and Random Forest (RF). The models forming the ensemble classifier classify by using Bayes probability (NB), optimization-oriented data adjustment (SMO), vote-based perceptron optimization (VP), choosing the best class according to the k nearest samples (kNN), and dominant tree decisions over the target data (RF), respectively. The ensemble classifier learned on the four datasets with tenfold cross-validation for BOW-1 and BOW-2, while for BOW-3 the train and test sets were separated prior to testing. The latter method was also applied to BOW-4 for the sentiment analysis, which predicts the sentiment output class of fake news, determining the attitude, response and sentiments of its purveyors. The classification on BOW-3 was binary, as either valid news (if posted by constituted health authorities, as presented in Table 2) or invalid news (if posted by unauthorized individuals), with the main thrust of establishing the trustworthiness of the source of a tweet rather than the truth of the tweet, hence the need for the sentiment analysis of the invalid tweets composing BOW-4.
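A hedged sketch of such a voting ensemble is given below, with scikit-learn stand-ins for the WEKA learners (GaussianNB for NB, LinearSVC approximating SMO, Perceptron for the Voted Perceptron, KNeighborsClassifier for IBK, and RandomForestClassifier for RF); the generated toy data replaces the real Bag-of-Words, so the reported score is illustrative only.

```python
# Illustrative classifier vote ensemble with scikit-learn stand-ins for the WEKA learners.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import Perceptron
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=600, n_features=40, weights=[0.55, 0.45],
                           random_state=2)          # toy stand-in for a Bag-of-Words

vote = VotingClassifier(estimators=[
    ("nb",  GaussianNB()),
    ("smo", LinearSVC()),
    ("vp",  Perceptron(max_iter=1000)),
    ("knn", KNeighborsClassifier(n_neighbors=5)),
    ("rf",  RandomForestClassifier(n_estimators=100, random_state=2)),
], voting="hard")                                    # each base model casts one vote

# Ten-fold cross-validation, as used for BOW-1 and BOW-2 in the paper
scores = cross_val_score(vote, X, y, cv=10, scoring="accuracy")
print("mean accuracy:", round(float(np.mean(scores)), 4))
```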

4 Obtained results

In this study, five base Machine Learning classifiers, NB, SMO, VP, kNN, and Random Forest, were deployed alongside four distinct datasets of COVID-19-related tweets, as discussed earlier. Furthermore, the synthetic minority oversampling technique was applied on the datasets BOW-1 (pre-lockdown tweets), BOW-2 (tweets during lockdown), BOW-3 (BOW-1 + BOW-2) and BOW-4 (fake tweets from random and unauthorized posts) to address the imbalance of the class representations, before conducting feature selection to reduce the number of attributes prior to classification. The results of the experiment were then analyzed with well-known performance measures, including accuracy, F-measure and area under the curve (AUC). The experiments were conducted in the Waikato Environment for Knowledge Analysis (WEKA) toolkit. After preprocessing of the datasets, the minority class of BOW-1, with a total of 404 tweet instances, was resampled from 177 to 230 instances, a 30% increment, with a random seed equal to 2 and nearest neighbours = 5, giving a BOW with 457 instances and 2169 attributes. The minority class of BOW-2 was resampled from 182 to 364 instances (a 100% increment), with the same parameter optimization as BOW-1, giving a BOW with 704 instances and 3825 attributes. In BOW-3, minority oversampling scaled the positive class from 359 to 538 (a 50% increment), with random seed = 2 and nearest neighbours = 5, giving a BOW with 1105 instances and 2096 attributes.
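For reference, the small sketch below shows how the three measures can be computed with scikit-learn; the labels, predictions and scores in it are placeholders and not taken from the experiments.

```python
# Illustrative computation of accuracy, F-measure and AUC on placeholder data.
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

y_true  = [1, 1, 1, 0, 0, 0, 1, 0, 1, 0]       # 1 = valid tweet, 0 = invalid tweet
y_pred  = [1, 1, 0, 0, 0, 0, 1, 0, 1, 1]       # hard class predictions from the ensemble
y_score = [0.9, 0.8, 0.4, 0.2, 0.1, 0.3, 0.7, 0.35, 0.85, 0.6]  # probability of class 1

print("accuracy :", accuracy_score(y_true, y_pred))
print("F-measure:", round(f1_score(y_true, y_pred), 4))
print("AUC      :", round(roc_auc_score(y_true, y_score), 4))
```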

Table 4 shows the results of the base classifiers (NB, SMO, VP, kNN and RF) across the datasets, together with the classifier vote on each dataset. The best classifiers are shown in bold, underlined text (where more than one classifier ties, all are marked in the same way). Before the minority oversampling, the accuracy of the base classifiers ranged from 79.70 to 87.26% in BOW-1, BOW-2 and BOW-3; performance increases of 3.74 and 1.11%, 5.05 and 0.29%, and 4.59 and 8.05% were observed in BOW-1, BOW-2 and BOW-3 due to the minority oversampling and the classifier vote ensemble, respectively. Also, in Fig. 4, the feature selection results show that only NB, kNN and VP increased performance under the information gain ranker for BOW-1; in BOW-2 the classifiers' performance dropped, while BOW-3 recorded slight increments of 0.36 and 1.26% with kNN and VP, respectively.

As seen in Table 5, the sentiment analysis of the fake tweets in BOW-4 showed closely matched accuracy between SMO, kNN and RF. Here, VP and VOTE produced no result, because VP works only with binary classes while the classifier vote cannot handle the multi-valued nominal class of negative, positive and click-trap tweets as designed in the experiment. It is notable that kNN and RF give the same accuracy and AUC values whereas their F-measures differ; the two models are very close, but the slightly better F-measure of RF indicates that RF is the best classifier here. Figure 5 shows the improvement of the evaluation metrics across the datasets after minority oversampling, especially for kNN, which achieved an impressive AUC upon resampling in BOW-1, while the AUC improved across the base classifiers in BOW-2. The performance gain of SCLAVOEM over the base classifiers is illustrated in Fig. 6.

Table 5 Classification metrics on bow-4 for sentiment analysis of fake tweets
Fig. 4
figure 4

Graphical representation of the minority oversampling technique for balanced class distribution

Fig. 5
figure 5

Performance metrics of base classifiers across datasets with minority oversampling showing improvement of evaluation measures

In Fig. 7, the information gain evaluation for attribute ranking of the vocabulary of known words across the datasets shows the dynamic nature of the COVID-19 discussion on Twitter. For BOW-1, the significant words or phrases with the highest entropy values trigger discussion relating to the pandemic itself, such as the number of confirmed cases, prevention issues and COVID-19 mortality. During lockdown, in BOW-2, the discussion centred on lockdown itself, community spread, palliative measures (such as users soliciting help by posting account details on Twitter), and issues bordering on religion because of the closure of worship centres. Issues concerning Chinese doctors, the performance of State Governors in curtailing COVID-19 spread, and palliatives dominated the discussion in BOW-3. This largely determines the classification labelling of the vote ensemble model. In BOW-4, unigrams, bigrams and trigrams with higher entropy were the significant terms, and they drive the sentiment analysis carried out on the set of invalid tweets. The differentiating words returned by the information gain calculation included Chinese, lockdown, palliative, Kano and Lagos, which topped the discussion, as the majority of the tweets are satires and hoaxes presenting conspiracy theories against government decisions and policies. Words like Lagos and Kano featured in a few discussions seeking clarification on the Kano death debacle and the increasing death rate in Lagos.

Fig. 6
figure 6

Performance distribution of SCLAVOEM with base classifiers after minority oversampling across three datasets evaluated with F-measure and AUC, respectively

As shown in Fig. 8, the sentiments expressed by tweets varied from negative sentiments, such as satire, hoaxes and unsubstantiated news against the government and relevant agencies, to a minority of positive tweets canvassing increased awareness efforts to assist in curtailing the COVID-19 spread. There were also click-trap tweets marketing products or luring unsuspecting users to their Twitter handles for patronage. The test table in Fig. 8, however, shows the prediction results of the testing phase on BOW-4, which is solely the Bag-of-Words of tweets sent by non-authoritative sources as extracted from BOW-1 and BOW-2. The BOW-4 training set is labelled ‘negative’ if a tweet is considered to be a hoax, satire, etc., or ‘positive’ if it is in line with the valid tweets normally posted by the authoritative, verified handles of government health institutions. The third class of the set is labelled click-trap.

Fig. 7
figure 7

Entropy value distribution of ten most significant dictionaries of known words across the four datasets as ranked by information gain evaluator

The classification rule tree generated on BOW-3 for COVID-19 infodemic tweet detection is shown in Table 6. Table 7 shows SCLAVOEM's infodemic classifier ensemble generated in the training phase, which was deployed for the classification of the BOW-3 set into negative and positive tweets. The BOW-3 dataset is thus labelled as a binary class, categorizing a tweet as positive if it is from constituted health authorities such as the World Health Organization (WHO), the Federal Ministry of Health (FMOH), or the Nigeria Centre for Disease Control (NCDC), and as negative if it is from random Twitter users. As mentioned earlier, this establishes the trustworthiness of a tweet's source rather than the truth of the tweet itself. The negative tweets from random and untrusted users in dataset BOW-4 are further analyzed in a sentiment analysis with RF, whose results are presented in Table 8.

Table 6 Classification rule tree generated on bow-3 for COVID-19 infodemic tweet detection
Table 7 Experimental evaluation test table of classifier ensemble for infodemic tweets
Table 8 Experimental evaluation test table for invalid tweet sentiments

Tables 7 and 8 show the findings of the prediction at the model test phase. The classifier vote ensemble generated on BOW-3 (see Table 4) is deployed in Table 7 to predict tweets as either valid (1), from verified sources/institutions, or invalid (0), from random Twitter users whose tweets cannot be trusted. The predictions in Table 7 correlate closely with the actual class scores of the tweets; the model returned a False Negative (FN) in only one instance. Recognition of a COVID-19 fake tweet therefore becomes easy, as it is either a satire, a hoax or a click-trap, as against valid news, which mostly sensitizes the citizenry to the pandemic without soliciting attention and without being antagonistic in nature. A snapshot of the BOW-3 tweet-text file before tokenization, its ARFF Bag-of-Words file after filtering, and the synthetic instances created by SMOTE for minority oversampling is presented in Fig. 9. In the final analysis, RF performed best after resampling in terms of AUC across the datasets, which is the more reliable evaluator in the performance metrics analysis.

Fig. 8
figure 8

Pie chart of sentiment class distribution for BOW-4

Fig. 9
figure 9

Screenshot of BOW-3 tweet-text file, BOW-3 Bag-of-Words and synthetic minority oversampled instances for BOW-3

5 Discussion

Considering the findings of SCLAVOEM (in Tables 7 and 8), the proposed approach can efficiently detect legitimate and illegitimate tweets on COVID-19, especially those from random, unverified or unauthorized Twitter handles. The model can be adapted to any subject of national or public interest where accurate data dissemination is of utmost importance. It can also classify tweet sentiments as positive, negative or simply a click-trap targeted at an unsuspecting user. The results for SCLAVOEM are particularly important when one considers that the source intentionally wants to ensure the success of its deception; detecting (and possibly automatically stopping) the deception/fakeness is of great value to the unsuspecting (destination) user, especially during an infodemic in a pandemic period. It is clear that SCLAVOEM is on the right track in limiting false negatives and ensuring true negatives and true positives. SCLAVOEM will find application in deception detection in the utilitarian mindset, and, when in an auto-detect-and-trap mode, it would be helpful in the hedonic mindset, where cognitive effort is ‘low’ (across the three lenses of the functional, the social and the affective). SCLAVOEM also has the potential to detect fake information in data from alternative environments (web pages, other social media environments, newspapers, etc.). Furthermore, the formation of SCLAVOEM has shown how a synergy of different data processing components can be very effective against massive data. Considering massive data, it may be debated whether Deep Learning models could be more effective in the context of this study; however, since the employed solution is effective enough for the target problem, and is faster to build than any Deep Learning model, the current SCLAVOEM solution is a good contribution and an indicator that traditional Machine Learning can be adapted to the critical research problem examined in this study.

Finally, this study has contributed to knowledge by incorporating a minority oversampling technique to resolve the data imbalance issue in text classification identified in the literature (as shown in Table 1). This helped to increase the number of minority tweet instances in the datasets, whether valid or invalid, for equal representation during the learning process. It is also important that the developed method is robust against imbalanced data, which is often seen in social-media-oriented data and causes bias in Machine Learning models. At the time of COVID-19, it is critical to run an intelligent detection tool that is effective enough to ensure successful outcomes for a massively sensitive health issue. The study has shown that the SCLAVOEM method can detect fake COVID-19 tweets fairly and accurately, using a hyper parameter optimization approach and classifying the target data as ‘positive’, ‘negative’, or ‘click-trap’.

5.1 Implication to theory and practice

SCLAVOEM is a predictive-based detector and classifier. It blends the synthetic minority oversampling technique (SMOTE) and a classifier vote ensemble (CVE) into a process flow. With small feature sets and minimal training, SCLAVOEM does not only detect tweets as ‘1: valid’ or ‘0: invalid’, but also uses sentiment analysis to classify tweets as ‘positive’, ‘negative’ or ‘click-trap’. Most fake news identification research projects follow what can be construed as a ‘unitary’ approach rather than a process flow as in SCLAVOEM. The sentiment analysis aspect may therefore be an important indicator for other works applied to tweets, comments, and other content shared by humans (considering the social side of sentences created and provided by people).

SCLAVOEM is a technical and process implementation. It contributes to theory through its data-process approach and its three-way classification of tweets as ‘positive’, ‘negative’, and ‘click-trap’. The process approach can be theorized into a framework with or without the classifier. IS practitioners and executives can make use of the SCLAVOEM process and classifier approach in designing and implementing predictive detection systems. Policy makers can leverage the process and classifier approach in construing, drafting and issuing directives and guidelines. It is also noteworthy that such an approach may be integrated into advanced Internet of Things (IoT) systems aimed at different purposes, such as direct medical analysis (Dourado et al. 2020), automatic diagnosis (especially of COVID-19) (Ohata et al. 2020), and secure medical analysis (Parah et al. 2020). Because the future world is a combination of social communication channels and IoT, such systems may serve as good examples of massive intelligent solutions. Here, the use of feature creator methods such as Bag-of-Words may also be an effective way of supporting feature extraction from images (considering the use of Bag-of-Words in computer vision) and of improving image segmentation outcomes more broadly (Reboucas Filho et al. 2017; Rebouças Filho et al. 2014).

6 Conclusion and recommendation

This research paper describes an ensemble classification approach, SCLAVOEM, for detecting COVID-19 infodemic tweets, as well as a sentiment analysis of the invalid tweets regarded as fake news, using four Bag-of-Words sets containing the vocabulary of known words extracted through the tokenization of tweets. The implementation results show how hyper parameter optimization increases output performance during the application of synthetic minority oversampling to address class imbalance. Through attribute ranking, the information gain score helps to discover the tweet words with the most significant entropy values, which improves infodemic classification. The proposed methodology was evaluated by comparing five Machine Learning algorithms identified in the literature for text classification: Naïve Bayes (NB), SMO, Voted Perceptron (VP), kNN, and Random Forest (RF), which form the base learners deployed for SCLAVOEM in the prediction phase. The main objective of the study was to examine trends in classifiers for detecting fake news and to utilize minority oversampling for class imbalance, through parameter optimization, to improve the discriminative tendencies of the model. The paper carried out a purposeful reverse citation search to outline the trend and identify a gap in the literature. The results obtained from structured experiments show that RF outperformed SMO, kNN and VP after minority oversampling and feature selection, while the classifier vote ensemble also outperformed the individual classifiers. SCLAVOEM's process approach provides a framework not only for detecting ‘fake tweets’ but also for classifying tweets as ‘positive’, ‘negative’ and ‘click-trap’ (click-bait). This three-way classification would be critical in an infodemic period, where prevention and curtailment are of the essence, and generally helpful across normal social media engagement.

A limitation of this paper is the concentration of tweets from a single country (Nigeria). However, given the global nature and reach of tweets, this does not severely impact the results. Since the associated tweets were in English, there is the advantage of reaching a wider understanding of fake/real information on COVID-19; of course, this may be a disadvantage when country-specific research in the natural language of a particular country is wanted. The designed algorithmic deception detection system can be utilized in other countries and across the globe. Furthermore, region-based research covering Europe, Asia, Africa, and America may be realized by including the countries with more cases and more manipulative information regarding COVID-19. In future work, the Twitter API will be employed to stream tweets towards real-time implementation, evaluation and scalability of the proposed infodemic model. The SCLAVOEM infodemic model will also be implemented as a modular system that can automatically flag (and trap) fake information on Twitter and other social media platforms, hence protecting the public from inaccurate information and information overload. Future iterations of SCLAVOEM can be ‘plugged in’ as a service to existing systems, whilst maintaining its process-flow viewpoint. The parameters of the associated components of SCLAVOEM will also be updated with alternative values to see whether the findings can be further improved. Another line of future work is running SCLAVOEM on alternative tweet topics, to see whether it is still effective in detecting fake news, misinformation and other dangerous data flowing around Twitter; that strategy may also be extended to alternative social media environments such as Facebook and LinkedIn. On the technical side of SCLAVOEM, the employment of alternative Machine Learning techniques, such as Support Vector Machines (SVM), Q-Learning oriented models (considering reinforcement learning against the target data state), and alternative Neural Network models, may be another future work to add value to the current method and the associated research flow.