Abstract
Beyond expanding opportunities for citizens to participate in society, participatory online journalistic platforms also open avenues for the dissemination of online propaganda through fake accounts and social bots. Community managers are expected to separate genuine expressions of opinion from statements manipulated via fake accounts and social bots. However, little is known about the criteria by which these managers distinguish between “real” and “fake” users. The present study addresses this gap with a series of expert interviews. The results show that community managers have extensive experience with fake accounts, but they have difficulty assessing the degree of automation. The criteria by which an account is classified as “fake” can be described along a micro-meso-macro structure: recourse to indicators at the macro level is rare and partly stereotyped, whereas impression-forming processes at the micro and meso levels predominate. We discuss the results with a view to possible long-term consequences for collective participation.
Copyright information
© 2020 Springer Nature Switzerland AG
Cite this paper
Boberg, S., Frischlich, L., Schatto-Eckrodt, T., Wintterlin, F., Quandt, T. (2020). Between Overload and Indifference: Detection of Fake Accounts and Social Bots by Community Managers. In: Grimme, C., Preuss, M., Takes, F., Waldherr, A. (eds.) Disinformation in Open Online Media. MISDOOM 2019. Lecture Notes in Computer Science, vol. 12021. Springer, Cham. https://doi.org/10.1007/978-3-030-39627-5_2
Print ISBN: 978-3-030-39626-8
Online ISBN: 978-3-030-39627-5