key: cord-0646572-hraa49xr
authors: Alsmadi, Izzat; Alazzam, Iyad; AlRamahi, Mohammad A.
title: An ontological analysis of misinformation in online social networks
date: 2021-02-22
journal: nan
DOI: nan
sha: a5984e608827104622a11a9dcce0548390631a6f
doc_id: 646572
cord_uid: hraa49xr

The internet, Online Social Networks (OSNs) and smart phones enable users to create a tremendous amount of information. Users who search for general or specific knowledge these days may face a problem not of information scarcity but of misinformation. Misinformation nowadays can refer to a continuous spectrum between what can be seen as "facts" or "truth", if humans agree on the existence of such, and false information that everyone agrees is false. In this paper, we look at this spectrum of information/misinformation and compare some of the major relevant concepts. While a few fact-checking websites exist to evaluate news articles or some of the popular claims people exchange, this remains a small effort in the mission to tag online information with its "proper" category or label. The continuous fear of the spread of misinformation risks sacrificing the value of freedom of speech as a core element of democracy. Many efforts to propose solutions to the spread of misinformation are based on enforcing some form of censorship on who can post and what can be posted. This is already implemented in many authoritarian countries around the world, which try to control public media under information censorship/accuracy claims. As one possible compromise, OSNs should encourage their users to avoid re-posting misinformation and should help them identify it. In a previous work, a model was proposed for OSNs to use and promote engagement metrics that motivate and promote positive publicity and engagement. "Facts" and "truth" are examples of terms used to refer to something that is certain or indisputable.
"Fact" is more often used in science, while "truth" can have belief or religious aspects. As such, what is seen as the "truth" by someone from a certain religion can be seen as "false truth" by someone from another religion. Religions and political orientations or beliefs can easily make two persons disagree completely on a specific subject, news item, or claim. This implies an important aspect of misinformation: in many cases, people unintentionally spread misinformation, thinking and believing that it is not misinformation. The saying "people hear what they want to hear" shows that any type of information or misinformation will find some audience who will listen to or believe it. "Belief in fake news is associated with delusionality, dogmatism, religious fundamentalism, and reduced analytic thinking", Bronstein et al. [2019]. But why is misinformation such a big subject and concern these days? What changes in the last few years may have triggered this issue? There is no doubt that one of the main factors is the growth of Online Social Networks (OSNs) such as Twitter, Facebook, YouTube, Instagram, LinkedIn, Google+, Reddit, Snapchat, Tiktok, etc. On those platforms, literally all humans around the world became content generators or producers. To a large extent, there is no control over who can post and what. This is a significant change compared with 50 or 100 years ago, when governments and specific news agencies or groups controlled the media of TV, newspapers, websites, etc. To a large extent, OSNs are driven by marketing and encourage users to interact more and generate more content, regardless of the credibility of such content. OSNs rely on the merchandising of click-and-share engagement that, as a result, encourages the circulation of content that is sticky and "spreadable", Gerbaudo [2012], Venturini [2019].
For OSN celebrities, even negative publicity can have some positive impacts, Berger et al. [2010]. Research has shown that people are vulnerable to the spread of and exposure to misinformation because of psychological and sociological factors; factors such as age and political or religious orientation can affect their vulnerability, Frenda et al. [2011], Karduni et al. [2019]. In this paper we evaluated different misinformation perspectives and classifications. For example, misinformation can be broadly classified, based on creator intention, into intentional versus unintentional, Forbes [2002], Wu et al. [2016], Brown [2018], Rapti [2019]. Some misinformation has a transient, temporary effect, while other misinformation has longer-lasting impacts; one major example of misinformation with a large impact is that which surrounded the 2020 US elections. Another way to evaluate the misinformation literature is to investigate actions towards misinformation (e.g. detection, correction, prevention, etc.), Chen et al. [2015a]. The role of machine learning is usually related to automation and the replacement of human effort, and machine learning algorithms have been proposed in many papers for the automatic detection of misinformation (e.g. Alsmadi and OBrien [2019], Vicario et al. [2019], Al-Ramahi and , Alsmadi and OBrien [2020]). The prevalence of misinformation in online social networks in several domains, such as science, politics and health, raises new challenges. For example, the propagation of rumors or misinformation versus authentic information during the COVID-19 pandemic was statistically significant, Tiwari et al. [2020]. Therefore, a key challenge is to come up with good mechanisms/techniques to correct misinformation on social media. Such correction mechanisms are crucial before misinformation becomes firmly established as accurate information and fact in the receiver's mind.
Examples of those mechanisms include:

• Crowd-sourcing, with volunteer fact-checkers acting as social media observers or agents Vraga and Bode [2017]. In this context, Vraga et al. [2019] explored whether observational correction on social media, by emphasizing and contradicting the logical myths in misinformation, can lead people to change their minds about controversial issues in domains such as health, science, and politics.
• Expert fact-checkers who validate posted information Collins et al. [2020]. Results also showed that adding a "Questioned" or "misleading" tag to misleading information on social media, such as false headlines, makes that information perceived as less accurate Clayton et al. [2020].
• Machine learning and text mining to automatically detect and filter fake and insincere content in social media Al-Ramahi and .
• Hybrid expert-machine approaches that blend crowds and machines, which showed satisfactory results in detecting fake news Collins et al. [2020].

Due to its sensitivity, many researchers have been attracted to examine mechanisms to identify health misinformation on social media. In this regard, 1) Kim et al. [2020] proposed an eye-tracking approach to accurately measure how much attention people give to a piece of misinformation as well as to an adjustment message, and how that is affected by the adjustment approach adopted. 2) Kim and Walker [2020] found that tracing replies that deliver correct information is more effective than using keywords to search for COVID-19 misinformation regarding a cure and antibiotics.
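As a very rough illustration of the machine-learning mechanism above, the sketch below trains a toy Naive Bayes bag-of-words classifier. The labels, training headlines, and decision rule are invented here purely for illustration; none of them come from the surveyed papers.

```python
from collections import Counter
import math

def tokenize(text):
    return text.lower().split()

def train(examples):
    """examples: list of (text, label) pairs.
    Returns per-label word counts and per-label document totals."""
    counts, totals = {}, Counter()
    for text, label in examples:
        counts.setdefault(label, Counter()).update(tokenize(text))
        totals[label] += 1
    return counts, totals

def predict(text, counts, totals):
    """Naive Bayes with add-one smoothing: pick the most likely label."""
    vocab = {w for c in counts.values() for w in c}
    n_docs = sum(totals.values())
    best_label, best_lp = None, float("-inf")
    for label, c in counts.items():
        lp = math.log(totals[label] / n_docs)  # class prior
        denom = sum(c.values()) + len(vocab)   # smoothed denominator
        for w in tokenize(text):
            lp += math.log((c[w] + 1) / denom)
        if lp > best_lp:
            best_label, best_lp = label, lp
    return best_label

# Hypothetical labeled headlines, invented purely for illustration.
data = [
    ("miracle cure doctors hate this secret", "misinformation"),
    ("shocking secret cure they hide from you", "misinformation"),
    ("study published in peer reviewed journal", "credible"),
    ("official report released by health agency", "credible"),
]
counts, totals = train(data)
print(predict("secret miracle cure", counts, totals))  # -> misinformation
```

A real detector would use richer features (n-grams, source metadata, propagation patterns) and far more data; this sketch only shows the bag-of-words core shared by many text-classification approaches.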
To tackle the problem of the proliferation of health misinformation in online social networks, Trethewey [2020] suggests the following strategies:

• Careful dissemination of medical information: it is very important that medical research findings be presented in a precise, impartial and appropriate manner, so that audiences such as news media reporters and the general public can understand and act on them properly.
• Expert fact-checking: experts could verify posted tweets and approve them with a supporting reply supplemented with evidence.
• Social media campaigns: there is a need to cooperate with social media influencers such as 'mommy bloggers' to conduct campaigns promoting specific health information, which can help spread correct, data-driven medical information to the target audiences.
• Greater public engagement: public or expert health organizations can promote and manage public health campaigns led by experts in different fields to engage, advise and educate the public, as well as highlight misinformation in topics within their areas of expertise.
• Fostering a fact-checking culture: it is critical to develop and encourage a habit of fact-checking among the public, motivating them to question the health information communicated on social networks and check whether it is supported by scientific evidence.
• Doctors as advocates: doctors and healthcare providers should be motivated to proactively share information supported by scientific evidence with the public through social network outlets such as Facebook and Twitter Wahbeh et al. [2020].

Attempts to address misinformation in social media need to carefully consider the misinformation context as well as the target audience when establishing effective interventions that could be adopted by the public Vraga et al. [2019].
For example, identifying emerging health misinformation using volunteer fact-checkers necessitates proper context-specific keywords to acquire a sufficient number of related posts, along with adequate advice from official healthcare experts to reduce the variation of responses Kim and Walker [2020]. Two distinct tracks are present within popular dictionaries Buckland [1991] and journalistic literature Budd [2011] on disinformation and misinformation. Disinformation and misinformation may either be viewed as synonyms or differentiated in terms of meaning and deception: misinformation can be described as accidentally untruthful, imprecise or deceptive information, while disinformation can be defined as untruthful, imprecise or deceptive information intended to misinform. Within journalism, the general tendency appears to be to treat the two terms as synonyms, and mostly to stretch the definition of misinformation to cover all kinds of fake, ambiguous, incorrect, and misleading content Thorson [2016], Wardle et al. [2018]. Rather than covering all incorrect or inaccurate material (i.e. planned, unintentional, deceptive, misleading, and so forth), the usage of "misinformation" underpins an appreciation of the distinction between reality and falsity, between information and misinformation. Information is the real component to be maintained, protected, strengthened and disseminated; misinformation is the incorrect component, to be prevented, combated, hidden and stopped. When disinformation and misinformation are regarded as synonyms, no difference is made between the deliberately deceptive and the accidentally mis-representative, for example honest errors or imprecision resulting from unawareness. Therefore, all kinds of falsehood are viewed equally, and the aim is to defend against all of them.
It is more popular to consider misinformation and disinformation as two different terms, rather than as synonyms, in the conceptual and analytical accounts of disinformation and misinformation Fallis [2015], Floridi [2013]. In these accounts the difference between disinformation and misinformation is cast in terms of motives and intended deception: in general, misinformation is described as incorrect content, and disinformation as the segment of misinformation that is deliberately false, imprecise, or ambiguous. Notice that if disinformation is described as the deliberately deceptive component of misinformation, then misinformation itself carries no criteria regarding motives and intention. For example, it is then not possible to specify that misinformation is unintentionally misleading while disinformation, as deliberate misleading, remains part of it: intentional deception should not be a subcategory of unintentional deception. Additionally, since misinformation is frequently mentioned in the sense of honest errors, prejudice, or unintended imprecision, and a distinction is maintained between misinformation and disinformation, it is fair to describe the two as entirely different concepts, where disinformation is not a segment of misinformation Burrell [2016]. Misinformation is characterized as accidental misleading, inaccuracy, or falsehood, whereas deliberate misleading, inaccuracy, or falsehood is defined as disinformation. Intention is thus the distinguishing characteristic between misinformation and disinformation: unintentional versus deliberate (non-accidental) misleading, imprecision, and/or falsehood Capurro and Hjørland [2003]. Misinformation is a false assertion that leads individuals astray by concealing the truth; deception, misunderstanding, and falsehoods are often referred to in this sense Zhang et al. [2016]. This causes feelings of distrust that ultimately disrupt relations and negatively affect perceptions, Wu et al.
[2019], Marshall and Drieschova [2018]. Disinformation is an incorrect part of information which is purposely circulated to confuse viewers Galitsky [2015]. While disinformation and misinformation both refer to faulty or forged facts, the major difference between them is intention: misinformation is spread without the intention to mislead, whereas disinformation is spread with that intention Kumar et al. [2016]. Disinformation, as inaccurate or misleading information, is a subset of misinformation; it is purposely distributed to trick others online, and its effect has continued to expand Galitsky [2015]. Misinformation is communicated in the honest yet incorrect conviction that the imprecise claims being spread are real. Disinformation, however, describes incorrect claims that purposefully mislead viewers and listeners: incorrect information designed to confuse, particularly rhetoric provided to a competitor force or to the press through a government department. People's acceptance of misinformation or misleading facts depends on their past convictions and views Libicki [2007]. The key characteristics of disinformation highlighted by Fallis [2009] are:

• Disinformation is often the result of a deception operation that is carefully orchestrated and technically advanced.
• Disinformation may not come directly from the source who aims to mislead.
• Disinformation is frequently written to contain doctored photographs.
• Disinformation may be very broadly spread or aimed at particular individuals or organizations.
• The ultimate target is often a person or a group of individuals.
• It can be totally wrong.
• It can be a distributed perception with no genuine evidence; or
• It can carry misleading responses where targeted truths are provided in order to achieve some rhetoric of deception.

Biased information is used when partial data is published in order to promote one side of a debate. The study Vosoughi et al. [2018] observed that unreliable social media information spreads much more rapidly than fact-based content. It has been noted that content truthfulness is not a motivating factor for the dissemination of information; rather, individuals prefer to spread news based on their community favoritism, prejudice, or attention. Previous research showed that the distribution of unreliable social media information has a substantial effect on terrorism Oh et al. [2013], political campaigning Bovet and Makse [2019], Shao et al. [2018], and crisis management Lukasik et al. [2016]. Information bias, already a widely recognized problem in the field of media and social sciences, has also inspired many empirical studies in recent years. Many media inquiries concentrate on identifying information bias in specific topics such as elections, immigration, conflicts, or racism Harrison [2006]. In Leban et al. [2014], nine categories of news bias are identified. Researchers in the area of automated news bias detection often turn their emphasis to the study of sentiment and the mining of opinions in the news. The authors in Leban et al. [2014] indicated that the geographical differences/similarities between the examined news publishers are related to most forms of detected bias. For example, European news outlets put more emphasis on explaining events happening in Europe, while US publishers put more emphasis on events happening in the U.S. A similar trend can be found in news agency citations: the Associated Press is often quoted by US news outlets, while European publishers tend to cite European news agencies.
Among tabloid outlets such as the Daily Mail or Stern Magazine, the authors also found a clear bias toward longer titles, shorter articles, and more descriptive language using multiple adjectives and adverbs. A larger percentage of adverbs and adjectives was also noticed on such websites, as expected. Can we see different patterns in how misinformation spreads versus credible information? One problem related to misinformation correction is that, in many cases, the reach of a fact-check about a claim will be less than the reach of the original claim. It has been found that in many cases bots, rather than humans, spread misinformation, Shao et al. [2017], Schlitzer [2018], Alsmadi and O'Brien [2020], while at the same time the effort of human-based fact-checking websites is limited by factors such as the availability of expertise and resources. Some references indicate that users who spread misinformation may even refer to fact-checking links that falsify such claims; nonetheless, those users still distribute the misinformation, Shao et al. [2018]. Online Social Networks (OSNs) are rich platforms for spreading misinformation fast, as a result of the tension between the aggregation of information and the spread of misinformation, Acemoglu et al. [2010], Alsmadi and OBrien [2019], Amoruso et al. [2020]. The type of information/misinformation is a major factor in its spread: by far, misinformation related to politics spreads much faster than any other type. Political participation is associated with sharing or spreading misinformation when it conforms to individuals' beliefs, Valenzuela et al. [2019]. In this scope, the US elections in 2016 and 2020 were milestones where the subject of misinformation in OSNs evolved rapidly.
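The stylistic signal noted above, tabloids using more adjectives and adverbs, can be crudely approximated even without a part-of-speech tagger by counting common descriptive suffixes. The suffix list and the ratio below are our own simplification for illustration, not the method used by Leban et al.:

```python
import re

# Crude heuristic, for illustration only: words ending in common
# adjective/adverb suffixes are counted as "descriptive". A real
# system would use a proper part-of-speech tagger.
DESCRIPTIVE_SUFFIXES = ("ly", "ous", "ful", "ive", "able", "ible")

def descriptive_ratio(text):
    """Fraction of words that look like adjectives or adverbs."""
    words = re.findall(r"[a-z]+", text.lower())
    if not words:
        return 0.0
    hits = sum(w.endswith(DESCRIPTIVE_SUFFIXES) for w in words)
    return hits / len(words)

tabloid = "an absolutely incredible and truly sensational celebrity scandal"
wire = "the committee approved the budget on tuesday after a vote"
print(descriptive_ratio(tabloid) > descriptive_ratio(wire))  # -> True
```

Thresholding such a ratio per outlet would give only a weak bias indicator; in practice it would be one feature among many in a bias-detection model.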
Aside from the political domain, large-scale or worldwide crises such as the Coronavirus pandemic are usually surrounded by a great deal of misinformation, driven by a lack of credible information. For COVID-19 in particular, misinformation touched several aspects, including the virus origin, how it reaches and spreads among humans, possible treatments, etc. Some papers investigated users' cognitive decisions to spread misinformation and the amount of effort they may expend to fact-check such information before spreading it, Greenberg et al. [2013], Castillo et al. [2011], Lupia [2013], Swire et al. [2017]. Researchers have also studied the impact of top OSN influencers in spreading misinformation. Misinformation distribution agents need not be top influencers; they can be social bots or forceful agents who emerge as dominant voices in a dispute over claims, Acemoglu et al. [2010], Groshek et al. [2018], Shao et al. [2018]. The spread of misinformation can also be investigated from the perspective of users' possible intentions or goals (e.g. unintentional sharing, misleading readers, inciting clicks for revenue, or supporting/manipulating public opinion). People's responses to misinformation vary between:

• Positive response: supporting and participating in spreading the misinformation.
• Neutral response: ignoring the misinformation without any further personal analysis or response.
• Negative response: responding back to those who spread or originate the misinformation.

In terms of diffusion and survival, how long can the spread of misinformation survive? Can misinformation be persistent despite the fact that many fact-checking posts exist to respond to and falsify it with proof or evidence? Observations from the US elections in 2016 and 2020 show that some misinformation continues to evolve and find its audience despite the response from fact-checking websites or posts.
Nonetheless, researchers have investigated persistence as one factor to differentiate between the spread of information and misinformation. What is the percentage of people who share misinformation in good faith, unaware that it is not accurate? And if they share such misinformation intentionally, what motivates them to do so? Understanding the motivations behind sharing misinformation may help us evaluate its impact as well as methods to detect and mitigate it. Most studies assume that people do not realize the information they share is false, Metzger et al. [2021], Talwar et al. [2019], Duffy et al. [2020], Chen et al. [2015b], Chadwick and Vaccari [2019]. Four main reasons are identified in the literature behind information sharing in general, Chen et al. [2015b]: entertainment, socializing, information seeking, and self-expression and status seeking. We can look at those motivations along two dimensions: intentional and unintentional.

• Unintentional sharing of misinformation. The following reasons can be observed in people's unintentional spread of misinformation:
  - Sharing of information is a social activity: users in Online Social Networks (OSNs) share their self-generated content or re-share content from others as part of their social reach and interaction with their networks.
  - Publicity and social reach: people's eagerness to share more and more information may override their willingness to vet it for credibility; they are looking for more publicity, impact and attention.
  - Lack of resources: users in OSNs receive large volumes of information daily and may re-share it with their networks as part of their social interactions, with little time and few resources to fact-check it. Additionally, the continuous sharing of information among social networks makes it hard in many cases to trace and verify the content originator.
  - Emotions: people may share misinformation when they are angry or upset, as part of a reaction to a crisis or negative news Han et al. [2020].

• Intentional sharing of misinformation. This has been associated with political or religious orientations, self-disclosure, online trust, and social media fatigue, Talwar et al. [2019]. People may also share misinformation intentionally in order to discuss it, neutralize it or correct it, Rossini et al. [2020]. Political participation is positively associated with misinformation sharing, especially among misinformed users, Valenzuela et al. [2019], Boulianne [2019], who receive information only through their political party's media or channels. While different references indicate that in the U.S. misinformation is more correlated with right-leaning partisanship, it comes from the left as well, Nikolov et al. [2020].

Most research projects and applications would like to treat information/misinformation as a binary problem. However, in many cases, especially in the OSN context, judging information/misinformation from a binary perspective is not realistic, since in most cases the subject content cannot be explicitly classified as absolutely true or absolutely false; it is generally a combination of misinformation and true news. For such reasons, many research publications (e.g. Rasool et al. [2019], Kaliyar et al. [2019]) argued that multi-class classification is more realistic. Another issue related to machine-based classification of misinformation is the different meanings and interpretations of misinformation. We see a wide range of terms used to label kinds of misinformation, such as: fake news, rumors, hoaxes, insincere questions, satire, click-baits, stance, etc.
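One lightweight way to operationalize the multi-class, non-binary view described above is to let a classifier emit per-label scores and refuse to commit when no label dominates. The label set and threshold below are illustrative assumptions, not taken from Rasool et al. or Kaliyar et al.:

```python
def pick_label(scores, threshold=0.5):
    """Map per-label probabilities to a final tag.

    If no single label clears the threshold, return 'mixed' rather
    than forcing an absolute true/false verdict. Labels and threshold
    are hypothetical, chosen only to illustrate the idea."""
    label, score = max(scores.items(), key=lambda kv: kv[1])
    return label if score >= threshold else "mixed"

print(pick_label({"clickbait": 0.80, "credible": 0.20}))             # -> clickbait
print(pick_label({"fake": 0.45, "satire": 0.35, "credible": 0.20}))  # -> mixed
```

The "mixed" outcome reflects the observation that OSN content is often a blend of accurate news and misinformation, so a forced binary verdict would misrepresent it.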
For each of those different classifications of misinformation, a machine learning algorithm will have to classify the subject content based on its specific target, regardless of whether the information is correct or not. For example, in the Quora classification, an insincere question is one raised not to seek an answer but rather to pass a message (e.g. mocking a person, faith or culture). Similarly, a click-bait is web content whose title has little to do with the actual content. In those examples, the machine learning algorithm has to decide whether the content is "relevant" to the target, rather than whether the content is correct. In this paper we evaluated the evolving terminologies and perspectives of misinformation. Users with OSNs and smart phones can now create their own content and respond to or re-share other users' content; each user can literally be a news channel through their OSN page. There is a need to redefine terms related not only to information/misinformation but also to news and news outlets. In many cases, user-generated content mixes a piece of news (which can be correct) with the user's own personal reflection or annotation. Classically, TV channels and newspapers were the main sources of news; nowadays news outlets are innumerable, and credibility is at serious risk. People rely heavily on search engines to search for information, yet search engines have no built-in methods to check information credibility. Even worse, they rank retrieved results based on popularity rather than on credibility-related metrics. Without building reliable methods to detect, tag and warn against inaccurate information, we will be risking the integrity of our recorded history.
• Interaction-based reputation model in online social networks
• Privacy and social capital in online social networks
• Belief in fake news is associated with delusionality, dogmatism, religious fundamentalism, and reduced analytic thinking
• Tommaso Venturini. From fake to junk news, the data politics of online virality
• Positive effects of negative publicity: When negative reviews increase sales
• Current issues and advances in misinformation research
• Vulnerable to misinformation?
• Web of deception: Misinformation on the Internet
• Mining misinformation in social media. Big data in complex and social networks
• Propaganda, misinformation, and the epistemic value of democracy
• Fake news in the era of online intentional misinformation; a review of existing approaches
• Deterring the spread of misinformation on social network sites: A social cognitive theory-guided intervention
• Rating news claims: Feature selection and evaluation
• Polarization and fake news: Early warning of potential misinformation targets
• Using data analytics to filter insincere posts from online social networks. A case study: Quora insincere questions
• Toward autonomous and collaborative information-credibility-assessment systems
• Prevalence of authentic versus false information in general population during covid 19 pandemic
• Using expert sources to correct health misinformation in social media
• Testing logic-based and humor-based corrections for science, health, and political misinformation on social media
• Fake news types and detection models on social media: a state-of-the-art survey
• Real solutions for fake news? Measuring the effectiveness of general warnings and fact-check tags in reducing belief in false stories on social media
• An eye tracking approach to understanding misinformation and correction strategies on social media: the mediating role of attention and credibility to reduce hpv vaccine misperceptions
• Leveraging volunteer fact checking to identify misinformation about covid-19 in social media
• Strategies to combat medical misinformation on social media
• Mining physicians opinions on social media to obtain insights into covid-19: mixed methods analysis. JMIR public health and surveillance
• Information as thing
• Meaning, truth, and information: prolegomena to a theory
• Belief echoes: The persistent effects of corrected misinformation
• Thinking about information disorder: formats of misinformation, disinformation, and mal-information
• What is disinformation? Library trends
• The philosophy of information
• How the machine thinks: Understanding opacity in machine learning algorithms
• The concept of information. Annual review of information science and technology
• Misinformation in social media: definition, manipulation, and detection
• Post-truth politics in the uk's brexit referendum
• Detecting rumor and disinformation by web mining
• Disinformation on the web: Impact, characteristics, and detection of wikipedia hoaxes
• Conquest in cyberspace: national security and information warfare
• Beyond misinformation: Understanding and coping with the post-truth era
• Online misinformation about climate change
• Polarize and conquer: Russian influence operations in the united states
• Social media and the post-truth world order
• Media manipulation and disinformation online
• The command of the trend: Social media as a weapon in the information age
• Commanding the trend: Social media as information warfare
• How france successfully countered russian interference during the presidential election. Policy Planning Staff (CAPS) of the Ministry for Europe and Foreign Affairs and the Institute for Strategic Research (IRSEM) of the Ministry for the Armed Forces
• Fake news, disinformation, manipulation and online tactics to undermine democracy
• Religion and fake news: Faith-based alternative information ecosystems in the us and europe. The Review of Faith & International Affairs
• The brexit botnet and user-generated hyperpartisan news
• Russian involvement and junk news during brexit. The computational propaganda project: Algorithms, automation and digital politics
• Contesting #stopislam: The dynamics of a counter-narrative against right-wing populism
• Ten years after the estonian cyberattacks: Defense and adaptation in the age of digital insecurity
• Information warfare as a continuation of politics: An analysis of cyber incidents
• Inoculating against fake news about covid-19
• Fake news and covid-19: modelling the predictors of fake news sharing among social media users
• Infodemic and the spread of fake news in the covid-19-era
• Impact of unreliable content on social media users during covid-19 and stance detection system
• The spread of true and false news online
• Community intelligence and social media services: A rumor theoretic analysis of tweets during social crises
• Influence of fake news in twitter during the 2016 us presidential election
• Anatomy of an online misinformation network
• Hawkes processes for continuous time sequence classification: an application to rumour stance classification in twitter
• Local government public relations and the local press
• News reporting bias detection prototype
• The spread of fake news by social bots
• The spread of top misinformation articles on twitter in 2017: Social bot influence and misinformation trends
• How many bots in russian troll tweets? Information Processing & Management
• Spread of (mis)information in social networks
• Contrasting the spread of misinformation in online social networks
• The paradox of participation versus misinformation: Social media, political engagement, and the spread of misinformation
• Social computing, behavioral-cultural modeling and prediction
• Information credibility on twitter
• Communicating science in politicized environments
• Processing political misinformation: comprehending the trump phenomenon
• Media use and antimicrobial resistance misinformation and misuse: survey evidence of information channels and fatalism in augmenting a global health threat
• Relevant document discovery for fact-checking articles
• From dark to light: The many shades of sharing misinformation online
• Why do people share fake news? Associations between the dark side of social media use and fake news sharing behavior
• Too good to be true, too good not to share: the social utility of fake news. Information
• Why students share misinformation on social media: Motivation, gender, and study-level differences. The journal of academic librarianship
• News sharing on uk social media: Misinformation, disinformation, and correction
• Anger contributes to the spread of covid-19 misinformation
• Dysfunctional information sharing on whatsapp and facebook: The role of political talk, cross-cutting exposure and social corrections
• Revolution in the making? Social media effects across the globe. Information, communication & society
• Right and left, partisanship predicts vulnerability to misinformation
• Multi-label fake news detection using multi-layered supervised learning
• Multiclass fake news detection using ensemble machine learning