A Survey on Echo Chambers on Social Media: Description, Detection and Mitigation

Faisal Alatawi, Lu Cheng, Anique Tahir, Mansooreh Karami, Bohan Jiang, Tyler Black, Huan Liu

2021-12-09

Abstract. Echo chambers on social media are a significant problem that can elicit a number of negative consequences, most recently affecting the response to COVID-19. Echo chambers promote conspiracy theories about the virus and have been linked to vaccine hesitancy, lower compliance with mask mandates, and reduced practice of social distancing. Moreover, the problem of echo chambers is connected to other pertinent issues like political polarization and the spread of misinformation. An echo chamber is defined as a network of users in which users only interact with opinions that support their pre-existing beliefs and opinions, and in which they exclude and discredit other viewpoints. This survey aims to examine the echo chamber phenomenon on social media from a social computing perspective and to provide a blueprint for possible solutions. We survey the related literature to understand the attributes of echo chambers and how they affect the individual and society at large. Additionally, we show the mechanisms, both algorithmic and psychological, that lead to the formation of echo chambers. These mechanisms manifest in two forms: (1) the bias of social media's recommender systems and (2) internal biases such as confirmation bias and homophily. While it is immensely challenging to mitigate internal biases, there have been great efforts to mitigate the bias of recommender systems. These recommender systems take advantage of our own biases to personalize content recommendations and keep us engaged so that we watch more ads. Therefore, we further investigate different computational approaches for echo chamber detection and prevention, mainly centered on recommender systems.

Having access to verified and trusted information is crucial in the midst of the COVID-19 pandemic, one of the most significant health crises in recent history (Mallah et al. 2021). Exposure to misinformation on social media has been linked to COVID-19 vaccine hesitancy (Loomba et al. 2021), the belief that 5G towers spread the virus (Ahmed et al. 2020b), the misconception that a COVID-19 vaccine candidate caused the death of trial participants, and a widely held view that the virus is a conspiracy or a bioweapon (Douglas 2021). These beliefs threaten the response to the pandemic and promote actions that can lead to the spread of the virus. In this regard, the Alan Turing Institute (Seger et al. 2020) classified epistemic security as a fundamental challenge for society when facing situations that require collective action to respond to crises (e.g., global pandemics) or complex challenges (e.g., climate change). They define epistemic security as reliably preventing threats to the production, distribution, consumption, and assessment of reliable information within a society. Echo chambers on social media are identified (Seger et al. 2020) as one of the core threats to epistemic security, as they can drastically increase the spread and even the creation of misinformation on social media (Del Vicario et al. 2019, 2016a; Zollo et al. 2017; Zollo and Quattrociocchi 2018). The presence of misinformation on social media is a well-documented problem (Wu et al. 2019).
Social media is a prominent source of news and information about COVID-19 and other current events for most of us: more than half of adults in the US currently say that they get their news from social media (Shearer and Mitchell 2021). The misinformation percolating in social media echo chambers can have dangerous ramifications in the physical world. For instance, misinformation on social media regarding COVID-19 has been shown to be associated with lower compliance with social distancing policies (Bridgman et al. 2020). This behavior and others led to the spread of the virus on a large scale. This is in part because social media provided a climate for this type of misinformation to spread: social media sites lack the editorial supervision that traditional news media outlets have (Shu et al. 2017). In addition, the spread of misinformation has been linked to the echo chamber phenomenon on social media (Törnberg 2018; Chiou and Tucker 2018; Del Vicario et al. 2016a). Many of the widespread COVID-19 rumors and mass misinformation campaigns, for instance, have been linked to social media echo chambers. As shown in a study of conservative media users by Romer and Jamieson (Romer and Jamieson 2021), 61% of participants believe that the CDC (Centers for Disease Control and Prevention) "are exaggerating the danger posed by the coronavirus to damage the Trump presidency." All of this leads us to conclude that echo chambers are a severe problem that needs to be understood and addressed.

To study echo chambers from a computational perspective, the first step is to have a definition that highlights the core features of echo chambers on social media. This definition should avoid conflating echo chambers with filter bubbles and political polarization, both of which are often incorrectly used interchangeably with echo chambers. While political polarization is an attribute of social media echo chambers (as we will explain in Section 2), the difference between echo chambers and filter bubbles is more subtle; this subtlety is why many echo chamber definitions fail to capture the complete concept of social media echo chambers. The most common definition (Dubois and Blank 2018; Sindermann et al. 2020; Choi et al. 2020; van Eck, Mulder, and van der Linden 2021) of echo chambers comes from Jamieson and Cappella (Jamieson and Cappella 2008). They define an echo chamber as a metaphor that captures the ways messages are amplified and reverberate through a media platform. Jamieson and Cappella describe the echo chamber as a "bounded, enclosed media space that has the potential to both magnify the messages delivered within it and insulate them from rebuttal" (Jamieson and Cappella 2008). To be more specific, we focus on three crucial features unique to echo chambers on social media: (1) they are a network of users; (2) the content shared in that network is one-sided and very similar in stance and opinion across different topics (Garimella et al. 2018); and (3) outside voices are discredited and actively excluded from the discussion (Nguyen 2020). Therefore, in this survey, we define an echo chamber as a network of users in which users only interact with opinions that support their pre-existing beliefs and opinions, and in which they exclude and discredit other viewpoints.
This definition helps us differentiate echo chambers from filter bubbles based on three aspects (Nguyen 2020): (1) the reason behind the lack of access to dissenting opinions, (2) how outside sources are viewed, and (3) the effect of exposure to debunking and counterevidence. First, echo-chamber members actively exclude and discredit voices that share dissenting opinions, while in filter bubbles these voices are left out, most likely unintentionally, due to over-personalization (Pariser 2011). Second, echo-chamber members distrust all outside sources, while users trapped in a filter bubble merely lack exposure to relevant information and arguments, and once they are exposed to these opinions, they might agree with them. Finally, exposure to counterevidence can break a filter bubble but may actually backfire and reinforce an echo chamber.

In this survey, we focus on echo chambers on social media platforms. Social media creates an environment where we can communicate with anyone worldwide and share our ideas and opinions about various topics. On the other hand, social media facilitates the spread of mass misinformation and disinformation. The manner in which social media recommends both content and people to follow has helped form filter bubbles and echo chambers that exclude users from exposure to others' opinions.

The research interest in online echo chambers is primarily due to fear of the negative influence of online social media on society. It started with the 2000 US presidential election and Sunstein's 2001 book "Republic.com" (Sunstein 2001), in which he discussed the effect of the internet on group polarization. The second wave coincided with the 2008 US election of President Obama, when Jamieson and Cappella (Jamieson and Cappella 2008) published their seminal book "Echo Chamber," which ignited most of the current work on echo chambers. The third wave happened around 2011 and coincided with the development of AI and machine learning, especially in content personalization and recommendation. In 2011, Pariser (Pariser 2011) published his book about how social media creates information bubbles, coining the term "filter bubble." In 2016, the age of post-truth and mass misinformation started the fourth wave of interest in echo chambers. The focus was on the spread of fake news and rumors, and much research cited echo chambers as a possible explanation for fake news (Shu et al. 2017). Finally, the fifth and current wave coincided with the rise of new echo chambers like QAnon, the events of the 2020 US election, and the global COVID-19 pandemic.

Social media recommender systems recommend content by exploiting users' psychological biases such as confirmation bias, cognitive dissonance, and homophily. Their goal is to recommend content that keeps users engaged with the platform so that they spend more time watching more ads. We do not criticize the financial motive behind social media. However, we argue that social media has a social responsibility not to harm society by promoting polarized content that leads to the formation of echo chambers. Although echo chambers are not unique to social media, social media accelerates their formation and sustains them.
Social media sites have three main features that make them a perfect environment for echo chambers: (1) there is no geographical limitation on joining an echo chamber; (2) users pay no social price for sharing fringe beliefs; and (3) no matter how fringe the beliefs are, users will most likely find someone who shares them. To illustrate this, consider the "flat earth" echo chamber. Before social media, if someone said that they believed the earth was flat or questioned the shape of the earth, they would have been mocked and their level of education would have been questioned. When social media is added to the equation, professing belief in the flat earth "theory" bears no social cost. Furthermore, believers will most likely find someone who shares their view, especially in the absence of geographical limitations; after all, there are 7 billion people on earth, and certainly some of them believe in fringe ideas. Of course, these fringe ideas existed before social media, but we argue that social media makes them spread faster and on a larger scale. It took 40 years for the Flat Earth Society to reach 3,500 members. In a fraction of that time, their Twitter account has gained 94,000 followers.

Our survey focuses on the social computing perspective of echo chamber research. Our goal is to understand the echo chamber phenomenon, its mechanisms and attributes, how to detect it, and how to mitigate it, or even better, prevent it from happening in the first place. We hope to provide a blueprint for a solution to the echo chamber problem. Although individual interventions have a limited effect, we believe that a solution based on modifying the way social media recommends content is a promising direction. This solution depends on the detection of echo chambers: without knowing whether a platform hosts an echo chamber, we cannot mitigate or prevent its effects. Detecting echo chambers can also help us understand how their members interact with each other and with outsiders, and how echo chambers form and grow, information that could help us prevent future echo chambers from forming. Based on this observation, we structured our paper as shown in Figure 1.

Intended Audience. This work is mainly intended for researchers in the fields of social computing, machine learning, data mining, artificial intelligence, and ethical artificial intelligence. Additionally, we hope that this work will interest social media providers in overcoming the echo chamber problem and its negative effects (see Section 2 for more).

Differences from Existing Surveys. The main difference between our survey and others arises from the fact that the field of echo chambers on social media lacks a comprehensive survey that covers the topic from the perspective of social computing. For instance, Nguyen (Nguyen 2020) provides an excellent review of the echo chamber and filter bubble phenomena from the perspective of philosophy. Levy and Razin (Levy and Razin 2019) survey the economics literature on echo chambers, focusing on the mechanisms that create echo chambers. In addition, there are a number of surveys on related topics such as filter bubbles (Bruns 2019), political polarization, and misinformation (Shu et al. 2017; Zhou and Zafarani 2020; Sharma et al. 2019). Our work adds to the field of echo chambers by highlighting the work done in social computing and focusing on possible methods to address the echo chamber problem and other related issues.

The structure of the paper and our contributions.
Figure 1 shows the structure of our survey and outlines the research interests and topics related to the research on echo chambers on social media. Our contributions can be summarized as follows:
• We define the echo chamber phenomenon on social media, and we clarify how it differs from similar social media-related phenomena such as filter bubbles and political polarization (Section 1).
• We discuss attributes of social media echo chambers (Section 2). Specifically, we focus on the social impacts and potential risks related to echo chambers. We highlight the interaction between echo chamber members and society regarding misinformation, conspiracy theories, social trends, political polarization, and emotional contagion.
• We list the mechanisms that lead to the formation of echo chambers from the perspectives of recommender systems, human psychology, and biases (Section 3). Our goal is to explore the mechanisms that cause echo chambers in the first place in order to understand the phenomenon, which is the first step toward solving it.
• We review the methods and features that can be used to detect echo chambers on social media platforms (Section 4). Additionally, we examine methods to model echo chambers in order to study their formation and their interaction with people outside the echo chamber. Our goal is to explore and exploit the methods and features used to detect echo chambers, which is crucial for echo chamber prevention.
• We discuss methods to prevent echo chambers from forming and, in cases where they have already formed, how to mitigate their negative effects (Section 5).
• We discuss the open problems related to echo chambers and the challenges that could be encountered while working on them (Section 6).
• We document some of the datasets that could be used in future work on echo chambers (Appendix A).

In this section, we illustrate five common attributes of echo chambers: the diffusion of misinformation, the spread of conspiracy theories, the creation of social trends, political polarization, and the emotional contagion of users. We introduce these common attributes at the very beginning to give a preliminary insight into what echo chambers look like on social media. We also discuss their different outcomes, social impacts, and potential risks.

Misinformation refers to false information that is spread, regardless of intent to mislead. Nowadays, mainstream social media platforms are used by people due to easy access, low cost, and fast dissemination of news pieces (Shu et al. 2017). However, the quality and credibility of content spread on social media are considered lower than in traditional news media because of a lack of regulatory authority. Thus, people manipulate the public by leveraging echo chambers to propagate misinformation (Törnberg 2018). Echo chambers exclude dissenting opinions, reinforce users' confirmation bias, and let misinformation go viral. The effect of echo chambers on the spread of rumors (Choi et al. 2020) and misinformation (Törnberg 2018; Chou, Oh, and Klein 2018; Cota et al. 2019) has been demonstrated by many researchers. Although early efforts were undertaken to mitigate online misinformation (Gottlieb and Dyer 2020; Shu and Liu 2019), COVID-19-related misinformation spread widely on social media as a global crisis (Li et al. 2020).
Existing methods have been ineffective against the COVID-19 disinfodemic because: (1) the contents are novel and highly deceptive; (2) the dissemination is rapid; and (3) fact-checking requires experts with domain knowledge. Echo chambers on social media turn not only influencers but also ordinary people into misinformation spreaders. They enable users to intentionally or unintentionally disseminate misinformation faster (Törnberg 2018). Misinformation spread in echo chambers usually has three characteristics: (1) similar misinformation is frequently surfaced and repeated to users; (2) the content is inflammatory and emotional; and (3) it is meant to mislead people by exploiting social cognition and cognitive biases. Because the diffusion of misinformation can cause rampant negative effects and is the most common attribute of echo chambers on social media, echo chamber detection methods should take it into consideration.

A common definition of a conspiracy theory is a belief that some covert but influential organization meets in secret agreement with the purpose of attaining some malevolent goal (Bale 2007). Echo chambers on social media have provided fertile ground for conspiracy theories to spread more quickly (Metaxas and Finn 2017). Existing research illustrates that various conspiracy theories have been circulating through mainstream media (Grimes 2020; Juhász and Szicherle 2017; Van Raemdonck 2019). Conspiracy theories are attempts to explain the ultimate causes of significant social and political events and circumstances (Douglas et al. 2019). Conspiracy believers use social media to find each other, disseminate conspiracy content, and share fringe viewpoints. Conspiracy theories express and amplify anxieties and fears about losing control of religious, political, or social order (Marwick and Lewis 2017). Unlike ordinary misinformation, conspiracy theories are sometimes strongly believed even by governments, which can have a catastrophic impact on society. For example, AIDS denial by the government of South Africa was estimated to have resulted in the deaths of 333,000 people (Simelela et al. 2015). During the COVID-19 pandemic, despite the fact that many COVID-19 vaccines have been shown to be safe and effective for generating an immune response (Jackson et al. 2020), they need to be accepted by at least 55% to 85% of the population, depending on the country, to provide herd immunity (Kwok et al. 2020). However, COVID-19 vaccine-related conspiracy theories have been widely circulating on social media. For example, viral social media posts claim that Bill Gates intends to implant microchips in people through the coronavirus vaccine. Other conspiracy theories claim that the pandemic is a bioweapon (Pennycook et al. 2020b) or that 5G towers and cellphones cause the coronavirus pandemic (Meese, Frith, and Wilken 2020). Such information can spread fear through social media, marking a shift toward declining societal confidence and trust, and limiting public uptake of COVID-19 vaccines.

Social media platforms present currently popular topics as social trends on their main pages to attract users' attention. Top trends are usually summarized in several trending words or hashtags. For example, "#JohnsonVariant" was among Twitter's top trending topics when Britain's Prime Minister Boris Johnson announced the lifting of most remaining COVID-19 precautions in England on July 19, 2021.
People used the "Johnson Variant" to refer to the more contagious Delta variant of COVID-19 and to express anger, concern, and frustration toward the administration. The creation of such trending topics is one of the common attributes of echo chambers on social media. Many studies have tried to discover the important factors that cause trending topics (Mathioudakis and Koudas 2010; Romero et al. 2011; Wu and Huberman 2007). Asur et al. (Asur et al. 2011) found that the resonance of the content with the users of the social network is crucial. They further define the measurement of "resonance" in three parts: (1) the novelty of the content; (2) the influence of members of the diffusion network; and (3) the impact of media outlets when the topics originate in standard news media. Information with high "novelty," "influence," and "impact" can capture huge attention in a short time. Thus, information spread from echo chambers on social media has the capability to create trending topics due to its large scope, like-minded stance, and social influence. Even when social trends contain misinformed statements and false claims, they are presented by social media platforms and reported by mainstream news media (Marwick and Lewis 2017). Nowadays, social media companies are making use of this attribute for profit: they guide their algorithms to select news for trending topics in order to keep users spending more time on the platform (Carlson 2018). Social media companies, influencers, and news media outlets should take responsibility for carefully displaying and reporting social trends. Moreover, social trends can be monitored to detect echo chambers engaged in malicious activities.

Political polarization is the divergence of political attitudes toward ideological extremes (DiMaggio, Evans, and Bryson 1996; Fiorina and Abrams 2008). According to evidence from recent events (https://www.cfr.org/blog/2020-election-numbers), it is clear that the United States experienced record levels of voter engagement, but it also means the country is extremely polarized. Other examples can be found during the COVID-19 pandemic. Based on US vaccination data from the CDC (https://covid.cdc.gov/covid-data-tracker/#vaccinations), as of July 13, 2021, 55.6% of the total population had received at least one dose of a COVID-19 vaccine. However, a recent study indicates that the anti-vaccination movement is currently on the rise (Germani and Biller-Andorno 2021). It demonstrates that anti-vaccination supporters are more engaged in discussions on Twitter and share their content from a pool of strong influencers. As echo chambers on social media drive this process, the community becomes polarized. As shown in Figure 2, the gap between the two major parties in the US has increased while the overlap has decreased significantly over the past two decades. On social media, we can observe two giant partisan echo chambers representing the two major political groups with opposing political opinions and stances (Colleoni, Rozza, and Arvidsson 2014). Given that individuals tend to align with those who are like-minded, politicians and parties intentionally reinforce partisan bias inside echo chambers, leading to an increasing level of political polarization (Levy and Razin 2019). For example, Levy and Razin (Levy and Razin 2019) illustrated that politicians made decisions and policies motivated by political purposes rather than social benefits.
Political polarization can cause extreme selective exposure, cognitive bias, and correlation neglect (Sears and Freedman 1967). However, Dubois and Blank (Dubois and Blank 2018) found that only a small segment of the population is likely to find themselves in an echo chamber; essentially, they argue, the impact of partisan echo chambers is overstated. They suggest that echo chamber researchers should test the theory in the realistic context of a multi-media environment.

Emotional states can be transferred to others via emotional contagion, leading people to suffer from the same emotions without their awareness (Fowler and Christakis 2008; Rosenquist, Fowler, and Christakis 2011; Karami, Nazer, and Liu 2021). A recent study showed that extreme emotions are exposed and amplified by echo chambers (Wollebaek et al. 2019). This manifestation is usually caused by users continually receiving misleading content and conspiracy theories. For example, in a COVID-19 case study of China, Ahmed et al. (Ahmed et al. 2020a) illustrated that young people, aged 21-40, were suffering from psychological problems during the COVID-19 epidemic. This is because young people who frequently participate in social media repeatedly receive broadcasts of fatality rates, confirmed cases, and misleading information via echo chambers. Moreover, Del Vicario et al. found that inside the echo chamber, active users appear to become highly emotional relative to less active users (Del Vicario et al. 2016b). Their analysis indicated that higher involvement in the echo chamber is associated with more negative emotional behavior. Kramer et al. (Kramer, Guillory, and Hancock 2014) provided experimental evidence that emotional contagion can occur without direct interaction between people, and in the complete absence of nonverbal cues. These types of echo chambers are difficult to detect on social media via content-based or network-based methods.

In this section, we discuss the primary mechanisms underlying echo chambers, as shown in Figure 3. Specifically, the mechanisms behind echo chambers consist of three aspects that are connected in a feedback loop: recommender algorithms, related to automatic systems; confirmation bias and cognitive dissonance, related to human psychology; and homophily, related to social networks.

Figure 3: Primary mechanisms underlying the echo chamber effect, related to three main factors: automatic systems such as recommender algorithms (Section 3.1), human psychology such as confirmation bias and cognitive dissonance (Sections 3.2 and 3.3), and social networks such as homophily (Section 3.4). These mechanisms are not mutually independent but highly correlated, in a way that ultimately creates feedback loops that further reinforce one another.

Recommender algorithms trap users into personalized information by using their past behaviors to tailor recommendations to their preferences (Rastegarpanah, Gummadi, and Crovella 2019). These prediction engines "constantly create and refine a theory of who you are and what you will do and want next" (Pariser 2011), which then forms a unique universe of information around each of us. For example, when clicking on a news article, we show our interest in articles on this topic. The recommender algorithms take note of our behavior and will present more articles about similar topics in the future.
As the process evolves, we get more and more personalized information, which ultimately leaves us: (1) the only person in the formed universe, (2) unaware of how information is recommended, and (3) unable to choose whether to enter this process (Pariser 2011). This self-reinforcing pattern of narrow exposure and concentrated user interest caused by recommender algorithms is an important mechanism behind the echo chamber effect. Among the many outcomes of such recommender algorithms (e.g., narrower self-interest, overconfidence, and decreased motivation to learn), the likely exacerbation of polarization has the most negative impact. For this reason, many researchers have criticized recommender algorithms for the increase in societal polarization (Rastegarpanah, Gummadi, and Crovella 2019; Hannak et al. 2013; Ge et al. 2020). For example, Dandekar et al. (Dandekar, Goel, and Lee 2013) showed how many traditional recommender algorithms used on internet platforms can lead to the polarization of user opinions in society. Therefore, an important line of research studies how to diversify recommendation results.

Confirmation bias is the tendency to seek, interpret, favor, and recall information adhering to preexisting opinions (Nickerson 1998). According to selective exposure research (Frey 1986), we tend to seek supporting information while avoiding challenging information. Echo chambers are one of the many outcomes of confirmation bias, and the rampant use of social media further amplifies the effect of confirmation bias on echo chambers. There are three types of confirmation bias: biased search for information (Mynatt, Doherty, and Tweney 1978), biased interpretation of information (Lord, Ross, and Lepper 1979), and biased memory recall of information (Hastie and Park 1986). In the context of social media, for example, users not only actively seek news that is consistent with their current hypothesis but also interpret information in their own ways. Even if both the collection and interpretation are neutral, they probably remember information selectively to reinforce their expectations, i.e., the selective recall effect (Hastie and Park 1986). Confirmation bias and recommender algorithms together create a self-reinforcing spiral. As described in Figure 3, on one hand, recommender algorithms provide users with more of the same content based on their past behaviors, shaping their future preferences; on the other hand, users accept and even actively seek such information due to confirmation bias. The feedback loop between recommender algorithms and human psychology eventually leads to an echo chamber that shifts users' world view.

In the field of social psychology, cognitive dissonance refers to an internal contradiction between two opinions, beliefs, or items of knowledge (Festinger 1957), for example, when someone eats meat but at the same time cares about animals' lives (Loughnan, Bastian, and Haslam 2014). Because people strive toward consistency, they feel psychological pressure to reduce or eliminate the distress caused by dissonance. Festinger (Festinger 1957) introduced three major strategies for dissonance reduction: (1) change one or more of the beliefs, opinions, or behaviors; (2) increase consonance by acquiring new information or beliefs; (3) forget or reduce the importance of the dissonant cognitions. Echo chambers can be considered one such practice for reducing dissonance.
People seek out ideologically consonant platforms and interactions to avoid contact with individuals who confront their ideas (Evans and Fu 2018). Moreover, ideological homogeneity in online echo chambers can encourage extremism. There are two aspects to this stimulation: (1) one's commitment to a thought increases dramatically once it has been written down and disseminated to a public audience (Cialdini and Cialdini 2007), as in the act of tweeting or posting content on social media websites; and (2) discussion with like-minded individuals, along with their social support, reinforces the perceived correctness of that belief (Frey 1986); for instance, liking tweets/posts and retweeting/reposting boosts attitude extremity (Bright et al. 2020). All of this serves to decrease individuals' cognitive dissonance.

Homophily, also known as love of the same, is the process by which similar individuals become friends or connected due to their high similarity (Zafarani, Abbasi, and Liu 2014). This similarity can be of two types: (1) status homophily and (2) value homophily (McPherson, Smith-Lovin, and Cook 2001). Status homophily concerns people who are connected due to similar ascribed characteristics (sex, race, or ethnicity) or acquired characteristics (education or religion). Value homophily involves grouping similar people based on their values, attitudes, and beliefs, regardless of their differences in status characteristics (McPherson, Smith-Lovin, and Cook 2001). Depending on the echo chamber's ideology, the echo chamber can be formed due to status homophily, value homophily, or both. Social media and other online technologies have loosened the basic sources of homophily, such as geography, and allowed users to form homophilous relationships along other dimensions like race, ethnicity, sex, gender, and religion. Moreover, homophily has predictive and analytic power on social media: it can be measured by how the assortativity, also known as social similarity, of the network changes over time, and modeled using independent cascade models (ICM) (Zafarani, Abbasi, and Liu 2014). By measuring political homophily on Twitter, we can investigate whether the structure of communication is echo chamber-like or public sphere-like (Colleoni, Rozza, and Arvidsson 2014), or whether there is a homophilous difference between the echo chambers of conservative individuals and liberal ones (Boutyline and Willer 2017). As illustrated in Figure 3, current recommender systems, especially collaborative filtering methods, use the concept of homophily, tracking the influence of a user's friends or of a crowd of users with the same interests to provide useful recommendations (Beigi and Liu 2018). On the other hand, to support their preexisting opinions (i.e., confirmation bias), people tend to follow like-minded individuals and create homophilous relationships. The feedback loop between these three components creates echo chamber behavior on social media.

Detection of echo chambers is important for several reasons. First, people involved in echo chambers are ignorant of information outside them, which may facilitate the spread of misinformation. Second, detecting echo chambers may be instrumental in finding communities with extremist and harmful ideologies. There are many instances where opinions inside echo chambers lead to adverse consequences for society as a whole; an example is the issue of COVID-19 vaccine hesitancy.
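Before turning to specific detection strategies, note that the homophily signal described above can be quantified directly through assortativity. The following is a minimal sketch on a hypothetical interaction graph; the toy edges, the "leaning" labels, and the use of NetworkX are our own illustrative assumptions, not a setup taken from the surveyed papers.

```python
import networkx as nx

# Toy interaction network: nodes are users, an edge means two users interact.
G = nx.Graph()
G.add_edges_from([
    (1, 2), (2, 3), (1, 3),   # a tightly knit "left"-leaning cluster
    (4, 5), (5, 6), (4, 6),   # a tightly knit "right"-leaning cluster
    (3, 4),                   # a single cross-cutting tie
])

# Hypothetical political leanings attached as a node attribute.
leanings = {1: "left", 2: "left", 3: "left",
            4: "right", 5: "right", 6: "right"}
nx.set_node_attributes(G, leanings, "leaning")

# Attribute assortativity approaches +1 when ties form almost exclusively
# between like-minded users (strong homophily) and is negative when ties
# are mostly cross-cutting.
r = nx.attribute_assortativity_coefficient(G, "leaning")
print(f"leaning assortativity: {r:.2f}")  # strongly positive here
```

Tracking such a coefficient over time, as suggested above, is one way to observe homophilous structure, and thus potential echo chambers, emerging in a network.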
If, for instance, individuals inside an echo chamber are unwilling to be vaccinated, they are more likely to contract and suffer severe harm from the virus than those who are vaccinated. Simply put, some beliefs circulating in online echo chambers can have dangerous and lethal real-world consequences. Cossard et al. look at Italian Twitter to analyze the debate surrounding vaccinations, concluding that both ardent supporters and critics of these vaccines are at the center of the detected echo chambers (Cossard et al. 2020).

There are several different strategies employed in the computer science and social science literature to detect echo chambers. Since individuals inside an echo chamber have high intra-community interactions and low inter-community interactions, as mentioned in Section 3.4, community detection algorithms are widely used. Our literature review suggests that the main approaches to detecting echo chambers are content-based detection and network-based detection. Finally, since ground truth data about echo chambers is scarce, researchers try to model echo chambers using various techniques based on assumptions about the ground truth.

Here we look into methods that utilize the content produced by users to detect echo chambers. Users interact with social media platforms in many ways: posting, liking posts, reblogging posts, or commenting on them. These interactions provide valuable information about the beliefs and opinions of users that can be used to detect echo chambers.

Stance and Opinion Detection. We can use social media users' opinions (e.g., the tweets they write) to measure the similarity (or dissimilarity) between users. There are many ways to mine users' opinions, such as simple textual similarity measures like TF-IDF (Ramos et al. 2003), Word Mover's Distance (Kusner et al. 2015), and Doc2vec (Le and Mikolov 2014), or stance detection. N-gram analysis and TF-IDF are used to generate features by Nguyen et al. (Nguyen, Huyi, and Warren 2017) in conjunction with k-means clustering to find echo chambers. The authors propose that using this approach to find echo chambers, and cross-linking across them, might help slow their growth. Stance detection is widely used in fake news and rumor detection work (Shu et al. 2017; Kumar and Carley 2019). Traditionally, this task involved finding word-based similarities; with the popularity of neural networks, embedding-based similarities are seeing more usage. In word-based embeddings, the embedding of the narrative in question is determined by aggregating the embeddings of individual words, where the aggregate function can be, for example, the mean or the maximum. Neural networks, on the other hand, are capable of embedding the entire narrative (Orbach et al. 2020). Once the embeddings are created, they can be compared to identify whether two narratives agree or disagree with each other. Measuring user sentiment across communities with differing views might also help us gain insights into echo chambers. Del Vicario et al. use a sentiment analysis API to measure the emotional distance between communities (Del Vicario et al.): given the topic, the greater the community distance, the more polarizing it is. Commonly used word-based embeddings can either be taken from a source where they are pre-generated or be generated from a training corpus. Some of the commonly used pre-generated embeddings are Stanford's GloVe (Pennington, Socher, and Manning 2014) and Google's Word2Vec (Mikolov et al. 2013) embeddings.
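To make the simplest of these content-based pipelines concrete, here is a minimal sketch of TF-IDF features combined with k-means, in the spirit of the approach of Nguyen, Huyi, and Warren (2017); the toy posts, the bigram setting, and the scikit-learn implementation are our own illustrative assumptions rather than the authors' exact setup.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# One hypothetical document per user, e.g., a concatenation of their posts.
user_docs = [
    "vaccines are safe and effective, trust the science",
    "get vaccinated, the science is clear and the data is strong",
    "the vaccine is a government plot, do not trust big pharma",
    "big pharma and the government are hiding the truth about the vaccine",
]

# Represent each user by TF-IDF features over unigrams and bigrams.
vectorizer = TfidfVectorizer(ngram_range=(1, 2), stop_words="english")
X = vectorizer.fit_transform(user_docs)

# Cluster users; opinion-homogeneous clusters are echo chamber candidates.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # e.g., [0 0 1 1]: two one-sided groups of users
```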
In Word2Vec, the embedding of a word is created by either a Continuous Bag of Words (CBOW) or a Skip-gram neural network model. These models learn the embeddings of words by either predicting a word from its surrounding words (CBOW) or predicting the surrounding words from the word (Skip-gram). For embedding an entire narrative, recurrent neural networks (RNNs) and Transformer-based models have more recently been used extensively. Apart from vanilla RNNs, which have recurrent cells that share parameters across time steps, there are several other types of RNNs; the most common among them are LSTMs (Hochreiter and Schmidhuber 1997) and GRUs (Cho et al. 2014). Among Transformer-based models, BERT (Devlin et al. 2018) and its variations are used most prominently in echo chamber research (Han et al. 2019). At a high level, Transformer models have two components, an encoder and a decoder; BERT uses only the encoder part and is commonly used for generating the encodings of an input narrative.

Opinion Distance. The goal of measuring opinion distance is to find out whether two opinions are similar. We input text opinions from two different users, and the output of the algorithm is a value between 0 and 1: a distance of 0 indicates that the two opinions are identical regarding the topics they discuss, and 1 represents the opposite. Compared to other opinion mining methods, this method can capture the nuances between opinions. In other words, it can differentiate between two opinions that have the same stance on a topic (say, Brexit) but differ in their justification (e.g., one user believes that Brexit is good for the economy, while the other thinks it is bad for the economy but necessary to reduce immigration). Gurukar et al. (Gurukar et al. 2020) developed a method to compute the distance between opinions. This method consists of three steps: (1) opinion representation, (2) mapping the opinion subjects between the two opinions, and (3) computing the opinion distance. The goal of opinion representation is to identify the opinion topics the users talked about. This step is dynamic, meaning that the user's opinion topics are based only on their opinion text, not on a preset target topic. Therefore, we need a way to map the different topics between opinions. After the mapping step, we compute the distance between opinions using this formula (Gurukar et al. 2020):

$$d(O^1, O^2) = \frac{1}{|M|} \sum_{(S^1_i, S^2_j) \in M} f\left(pol(S^1_i, O^1),\, pol(S^2_j, O^2)\right) \quad (1)$$

Here, f is a distance function, S^1_i and S^2_j represent subjects i and j in opinions O^1 and O^2, respectively, and pol is a function that measures the polarity of a subject within an opinion. Finally, M is the set of mapped subject pairs.

Polarization Detection. To detect echo chambers, one could utilize methods used to detect polarization and polarized speech. There are generally two opposing sides for most public discourse topics, one that supports it and one that opposes it. Usually, the two sides use different language to describe the same idea (Garimella and Weber 2017). For example, one person might use the term "global warming" while someone else uses "Climategate"; both opinions deal with the same subject, but the language they use and their attitudes differ. To analyze political polarization on Twitter, Garimella and Weber (Garimella and Weber 2017) looked for signs of polarization in the way users interact with each other and with content. Specifically, they studied "follow" patterns, retweet behavior, and how partisan the hashtags used in tweets are.
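Returning to the opinion-distance method, the sketch below implements its final step, Eq. 1, over an already-mapped pair of opinions. The subject polarities and the mapping are hypothetical inputs that steps (1) and (2) would produce, and the choice of f as a scaled absolute difference is our own assumption.

```python
# Polarity of each subject within each opinion, in [-1, 1] (hypothetical
# output of steps (1) and (2): subject extraction and subject mapping).
opinion1 = {"economy": -0.8, "immigration": 0.6}  # pro-Brexit, economy-sceptic
opinion2 = {"economy": 0.7, "immigration": 0.5}   # pro-Brexit, economy-optimist

# Mapped subject pairs M: subject i in opinion 1 <-> subject j in opinion 2.
mapping = [("economy", "economy"), ("immigration", "immigration")]

def opinion_distance(o1, o2, mapping):
    """Average the distance f over mapped subject polarities (Eq. 1)."""
    # f: absolute difference of polarities, divided by 2 so it lies in [0, 1].
    dists = [abs(o1[i] - o2[j]) / 2.0 for i, j in mapping]
    return sum(dists) / len(dists)

# Same overall stance, but disagreement on the economy raises the distance.
print(round(opinion_distance(opinion1, opinion2, mapping), 3))  # 0.4
```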
The same follow, retweet, and hashtag features could be used to detect echo chambers. The leanings of individuals can also help with the detection of echo chambers (Cinelli et al. 2021). Let the set of content produced by a user i be represented by the set C_i = {c_1, c_2, ..., c_{a_i}}, where a_i represents the number of activities by the user. Cinelli et al. (Cinelli et al. 2021) define the leaning of the user as:

$$x_i = \frac{1}{a_i} \sum_{c \in C_i} x(c) \quad (2)$$

where x(c) denotes the leaning of an individual piece of content c. If the distribution of the leanings, P(x), is bimodal, then it is an indicator of binary polarization. Bessi et al. study the formation of echo chambers by using YouTube links shared on Facebook as a petri dish. The likes, shares, and comments are used as features to measure how polarized the users are, and the authors argue that the bimodal PDF of the users' polarity is evidence for the formation of echo chambers. Another interesting niche approach to echo chamber detection looks at echo chambers from the perspective of the Big Five personality traits: an unsupervised approach is used to infer the personality traits from text, and the author concludes that a certain combination of the Big Five personality traits is responsible for individuals ending up in an echo chamber (Bessi 2016).

Looking at the social network can also give us insights into echo chambers. Because of homophily, people inside an echo chamber are densely connected with each other, while their connections to people outside the echo chamber are weak. Apart from connectedness, one of the common ways to detect echo chambers in the fake news literature is to look at the propagation patterns of the topics being discussed. Network-based detection is one of the most common ways to locate echo chambers. Because of the increasing use of social media platforms, they are becoming habitats for echo chambers. An example of this is demonstrated by Guarino et al., who use community detection to illustrate that high segregation and clustering of communities by political alignment is evidence for the presence of echo chambers in social media (Guarino et al. 2020).

Fast Greedy. The fast greedy algorithm performs hierarchical agglomerative clustering to find a partition of the network that optimizes modularity (Newman 2004). Modularity is a measure of the quality of a partition of a graph into communities. Let A_ij be the weight of the edge between i and j, k_x be the total weight of the edges attached to node x, and c_x be the community of node x. σ(x, y) is an indicator function that takes the value 1 if x = y and 0 otherwise, and m is the total number of edges. Modularity, Q, is defined as:

$$Q = \frac{1}{2m} \sum_{ij} \left[A_{ij} - \frac{k_i k_j}{2m}\right] \sigma(c_i, c_j) \quad (3)$$

Fast greedy is one of the simplest modularity-based algorithms that can be used for detecting communities. Finding the graph partition with optimal modularity is an NP-hard problem; thus, heuristics like the greedy approach are used in the literature. Fast greedy is one of many community detection approaches used by Cossard et al. to find echo chambers of vaccine supporters and skeptics (Cossard et al. 2020).

Louvain. The Louvain algorithm is used to detect communities in a graph (Blondel et al. 2008). The objective function of this algorithm is modularity (Eq. 3). The algorithm starts by assigning a unique community to each node and iteratively combines communities to gain modularity until convergence. First, each node is compared with its neighboring nodes, and its community is changed to match a neighbor's if doing so increases modularity. Each community is then reduced to a single node, where the new edge weights are the sums of the corresponding edge weights in the previous graph.
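Both fast greedy and Louvain optimize the modularity Q of Eq. 3. Below is a minimal sketch using NetworkX's greedy modularity heuristic on a hypothetical retweet graph; the toy graph and the library choice are our own illustrative assumptions.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

# Hypothetical retweet graph: two dense triangles joined by one weak tie.
G = nx.Graph()
G.add_edges_from([("a", "b"), ("b", "c"), ("a", "c"),
                  ("x", "y"), ("y", "z"), ("x", "z"),
                  ("c", "x")])

# Greedy agglomerative merging of communities by largest modularity gain.
communities = greedy_modularity_communities(G)
print([sorted(c) for c in communities])  # e.g., [['a', 'b', 'c'], ['x', 'y', 'z']]

# A high Q indicates well-separated groups; in echo chamber studies such
# groups would then be inspected for one-sided content and stances.
print(round(modularity(G, communities), 2))
```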
The popularity of the Louvain algorithm is a testament to its effectiveness. Guarino et al. use Louvain for their DisInfoNet toolbox (Guarino et al. 2020). Cossard et al. use Louvain and other community detection algorithms to detect echo chambers in an Italian subset of Twitter (Cossard et al. 2020). Nourbakhsh et al. use Louvain on a co-linking network to detect echo chambers (Nourbakhsh et al.).

WalkTrap. The WalkTrap algorithm is based on the idea that a short random walk tends to stay within the same community (Pons and Latapy 2005). Let P^t_xy be the probability of walking from x to y in t timesteps, and let d(x) be the degree of x. Then the distance r_ij between i and j is defined as:

$$r_{ij} = \sqrt{\sum_{k=1}^{n} \frac{\left(P^t_{ik} - P^t_{jk}\right)^2}{d(k)}} \quad (4)$$

Similar to the fast greedy algorithm, the algorithm is initialized by assigning a unique community to each node. Hierarchical clustering is used to group similar nodes: the closest communities are merged to form a new graph in which each node is a merged community, and the process is repeated until only one community remains. To deal with the high computational complexity of finding the optimal communities, a Monte Carlo approach can be used to estimate the probabilities of the random walks (a sketch of the distance in Eq. 4 appears at the end of this subsection). An example of the WalkTrap algorithm for detecting echo chambers is given by Del Vicario et al., who studied opinions on Brexit using Facebook data; they created a bipartite graph and used community detection to find echo chambers within the data (Del Vicario et al.).

Infomap. The Infomap algorithm optimizes the map equation (Rosvall, Axelsson, and Bergstrom 2009) to find communities. The map equation tries to find a lower bound on the length of the sequence used to represent a walk on the graph. The representation of the walk can be minimized by using Huffman codes, so that the most frequently visited nodes are represented by fewer bits. To minimize the description length of the walk, the graph can be divided into different modules. Each module has a codebook (module codebook), and there is also a codebook representing movement between the modules (index codebook). The description length for a partition M into m modules is given by the map equation:

$$L(M) = q_{\curvearrowright} H(\mathcal{Q}) + \sum_{i=1}^{m} p_{\circlearrowright}^{i} H(\mathcal{P}^{i}) \quad (5)$$

where the first term accounts for movements between modules (the entropy H(Q) of the index codebook, weighted by the rate at which the walk switches modules) and the sum accounts for movements within each of the m modules (the entropy H(P^i) of module i's codebook, weighted by its rate of use). Communities detected by the map equation may differ from those found by a modularity-based approach, because the map equation optimizes the flow of information while modularity is a connection-based metric. Infomap is another algorithm popular among researchers studying echo chambers. It is one of the algorithms used by Cossard et al. to study echo chambers related to vaccination skepticism (Cossard et al. 2020), and it is the only community detection algorithm used by Du and Gregory in their work studying how echo chambers strengthen over time by comparing detected communities across two time points (Du and Gregory 2016).

Other Approaches. There is a plethora of other approaches for detecting communities. Spectral approaches, for instance, use the eigendecomposition of the adjacency matrix to perform clustering. Heat kernel based detection (Kloster and Gleich 2016) is inspired by heat diffusion. Shi and Chen (Shi and Chen 2020) compare 70 community detection approaches on the basis of their quality and runtime. Brugnoli et al. (Brugnoli et al. 2019) use time series algorithms to study subclusters within echo chambers.

Due to the difficulty of studying echo chambers with real-life data, some researchers instead model echo chambers and study the information propagated in them.
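First, the sketch of the WalkTrap distance (Eq. 4) promised above, computed directly with NumPy on a toy graph; the graph and the walk length t = 3 are our own illustrative choices.

```python
import numpy as np

# Adjacency matrix of a toy graph: two triangles bridged by one edge.
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)

d = A.sum(axis=1)                  # node degrees d(x)
P = A / d[:, None]                 # one-step transition probabilities P_xy
Pt = np.linalg.matrix_power(P, 3)  # t-step probabilities P^t, here t = 3

def walktrap_distance(i, j):
    # Eq. 4: r_ij = sqrt( sum_k (P^t_ik - P^t_jk)^2 / d(k) )
    return np.sqrt((((Pt[i] - Pt[j]) ** 2) / d).sum())

# Nodes in the same triangle are closer than nodes across the bridge,
# which is exactly what the hierarchical merging step exploits.
print(round(walktrap_distance(0, 1), 3))  # small: same community
print(round(walktrap_distance(0, 4), 3))  # larger: different communities
```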
Table 1: Common graph-based community detection algorithms and their usage in the literature for detecting echo chambers.

In Table 2, we list some of these models and explain how they work.

Friedkin-Johnsen Dynamics. The Friedkin-Johnsen (FJ) model can be used to simulate the leanings of different users in a social network (Chitra and Musco 2020). This model assumes that the leaning of each user on each issue can be represented on a spectrum from -1 to 1, where -1 represents one extreme and 1 the opposite extreme, e.g., agreement or disagreement on a particular issue. Each user is represented by a node in a graph G, and the weight w_ij of an edge signifies the influence its endpoints have over each other. Since the model is represented by an undirected graph, the influence is reciprocal. The crucial difference between the FJ model and the leanings defined in Eq. 2 is that the FJ model assumes that users have certain innate opinions that cannot be changed; however, as time passes, each user builds on these innate opinions based on the influence of peers. The FJ model uses a time series in which the leaning z^t_i of a user i in the social network at time t is a function of their leaning z^{t-1}_i at time step t-1 and their innate leaning s_i ∈ [-1, 1]. Let d_i be the degree of user i and N(i) be its set of neighbors. The leaning at time t is formally defined as:

$$z^t_i = \frac{s_i + \sum_{j \in N(i)} w_{ij}\, z^{t-1}_j}{1 + d_i} \quad (6)$$

One of the advantages of using the FJ model is that it is simple to find where the leanings of the users will end up as time progresses. The state at which the leanings no longer change (equilibrium) is given by:

$$z^* = (I + L)^{-1} s \quad (7)$$

where L is the weighted Laplacian of G and s is the vector of innate leanings. The model is capable of representing concepts like polarization, local/global disagreement, and local/global conflict (Chitra and Musco 2020).

Bounded Confidence Model. This model is used to predict the opinions of agents by modeling their interactions (Deffuant et al. 2000). Instead of having binary opinions on an issue, each entity can hold an opinion on the spectrum [0, 1]. The change in opinions is determined by a threshold d and a convergence parameter μ. The idea is that if two entities whose opinions differ by more than the threshold d interact, they are unlikely to change their opinions; if the difference is below the threshold, they adjust their opinions toward the barycentric combination determined by the factor μ. Thus, the model concentrates the opinions of entities around centroids, and the number of centroids might reflect the different echo chambers that have formed.

Stochastic Block Model. The Stochastic Block Model (SBM) creates a graph model of a community in which the parameters of the model define how much members of a community intermingle inside and outside their community (Abbe 2017). This model is useful when combined with the assumption of homophily, i.e., that people with similar leanings tend to be connected; note that the model does not simulate homophily by default. The SBM creates a random graph based on a number of parameters. The basic assumption is that there are two or more communities created from n nodes. For each pair of nodes, let p be the probability that they are connected given that they are in the same community, and let q be the probability that they are connected if they are in different communities. This results in a graph over all the communities. In order to simulate homophily, we set p to be greater than q, as in the sketch below.
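A minimal sketch putting the two models together: an SBM with p > q provides a homophilous initial network, and the FJ equilibrium of Eq. 7 gives the resulting leanings. All sizes, probabilities, and innate leanings below are our own illustrative assumptions.

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(0)

# SBM: two communities of 50 nodes each; p > q simulates homophily.
sizes = [50, 50]
p, q = 0.10, 0.01
G = nx.stochastic_block_model(sizes, [[p, q], [q, p]], seed=0)

# Innate leanings s_i: community 0 leans negative, community 1 positive.
s = np.concatenate([rng.uniform(-1, 0, 50), rng.uniform(0, 1, 50)])

# FJ equilibrium z* = (I + L)^(-1) s, with L the graph Laplacian (Eq. 7).
L = nx.laplacian_matrix(G).toarray()
z = np.linalg.solve(np.eye(sum(sizes)) + L, s)

# With homophilous wiring, the two blocks settle on separated mean leanings.
print(round(z[:50].mean(), 2), round(z[50:].mean(), 2))
```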
This results in nodes with the same leanings/communities being more densely connected within their communities. The SBM can be useful when combined with the FJ model to study the effects of filter bubbles (Chitra and Musco 2020): the FJ model determines the equilibrium leanings of individuals, while the SBM provides a way to model the initial state of the social network. We can then simulate filter bubbles by adding or removing edges.

Cascade Models. Graph models can be used to study echo chambers in social media. Since social media discourse is temporal, the propagation patterns can be encapsulated in a cascade model (Zhou and Zafarani 2018). The cascade model is represented using a tree: each node in the tree is a user in the social network, the root node represents the user who began the discourse, and each level of the tree represents one step of propagation. This could be a share, a reply, or any other kind of platform-specific interaction in which information is passed from one user to another. There are multiple variations of the cascade propagation model. Cascade similarity between several disconnected discourses might help with the detection of echo chambers.

Epidemic Models. Epidemic models are inspired by the spread of disease and have been used in the literature to detect echo chambers (Cinelli et al. 2021). The model represents individuals as nodes in one of three states: Susceptible (S), Infected (I), or Recovered (R). Each individual has a certain probability of moving from one state to another based on the states of their neighbors. There are different variations of this model based on the possible transitions (such as SIR, SIS, SIRD, and SEIR); the latter two variants add additional states to the model, namely Deceased (D) and Exposed (E).

Table 2: Models typically used for echo chambers and filter bubbles.

In this section, we discuss how to prevent echo chambers on social media from forming, and how to mitigate echo chambers in cases where they have already formed. We divide prevention and mitigation strategies into two types: algorithm-focused and human-focused. The algorithm-focused strategies try to address the causes of echo chambers that stem from algorithmic curation and content recommendation. The human-focused strategies include methods designed to give users more power over their information environment by urging them to think about the quality of the information they consume.

In recent years, there has been a focus on the fairness of machine learning algorithms, in response to the realization that some of these algorithms might lead to the unfair representation of some users based on protected attributes such as gender, race, or age. We argue that the echo chamber problem can be thought of as a fairness problem if we reframe it as follows: it is not fair to users to be recommended only content or connections that exclude diverse opinions. As we mentioned in previous sections, the source of many problems related to echo chambers is recommender systems: they can produce a polarized network (echo chamber) of users by recommending homophilic links, or produce the filter bubble effect by suggesting biased content. To solve this problem, we examine the research field of fairness in recommender systems. There are three main types of solutions to the problem of unfair recommender systems: pre-processing, in-processing, and post-processing.
We start with pre-processing solutions, in which we try to address the polarization in the data used to train recommendation algorithms. Taking inspiration from data poisoning attacks, Rastegarpanah et al. (Rastegarpanah, Gummadi, and Crovella 2019) suggest adding additional training data, which they refer to as antidote data. This method has the advantage of modifying neither the training data nor the recommendation algorithm. A small amount of antidote data (1% more new data samples) reduces the polarization of the recommendations by 50%. However, decreasing polarization leads to a decrease in recommendation accuracy, which is an expected result since the "correct" recommendations created the polarization in the first place.

In-processing methods try to modify the way the recommender system works to prevent the formation of echo chambers. One way to prevent creating echo chambers is by modifying the objective function of the recommender system. Chitra and Musco (Chitra and Musco 2020) showed that a simple modification to the objective function mitigates the filter bubble effect: by adding a regularization term, the recommendation algorithm takes into account the whole network instead of focusing only on the user.

Post-processing methods can be thought of as methods applied after the recommendation has been made, with or without any changes to the data or the algorithm. One suggested idea is to detect a counter-argument to each polarized argument. Orbach et al. (Orbach et al. 2020) try to identify, from among a set of speeches on the same topic with an opposing stance, the ones that directly counter a given speech.

Another important body of work that merits mention in the discussion of algorithmic prevention of echo chambers is that done by the Polarization Lab at Duke University. The lab conducted a set of studies on political polarization and echo chambers, summarized in the book Breaking the Social Media Prism (Bail 2021). The studies concluded that exposing users to polar opposite beliefs in order to draw them out of their echo chambers actually further entrenched them in their beliefs. The lab did find an effective approach to depolarizing users: exposing them to viewpoints only slightly less polar than the ones they held, essentially going one bandwidth lower in acceptability for beliefs. These findings have deep implications for algorithmic prevention of echo chambers, should the echo chamber already exist. Algorithms seeking to draw a user out of their echo chamber by depolarizing the user must not only analyze and decide what content to recommend, but must also decide whether the content is within the latitude of acceptability that would slightly depolarize the user. This is certainly a difficult challenge, but the results could bear fruit specifically for the mitigation of echo chambers.

A secondary method of echo chamber prevention and mitigation relies on giving users more power to curate their own information feed with an eye toward less bias. We refer to methods that take this route as human-focused prevention methods. These types of methods have been tried publicly by Facebook and Twitter, with varying degrees of success. The main human-focused methods are currently: labeling misinformation/fake news, fact-checking, and "nudging" the user to think about accuracy. All of these methods, while well intentioned, face certain limitations.
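Before turning to those limitations, the in-processing idea above can be made concrete with a toy sketch: a hypothetical re-ranker whose objective adds a penalty on the slate's political extremity, loosely in the spirit of adding a regularization term to the recommender's objective (Chitra and Musco 2020). All scores, the user leaning, and the penalty form are synthetic assumptions of ours, not the authors' actual formulation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical candidates: a relevance score for one user and a political
# leaning in [-1, 1]. Personalization makes items near the user's own
# leaning (-0.8) look more relevant, mimicking learned click behavior.
leaning = rng.uniform(-1.0, 1.0, 200)
relevance = rng.uniform(0.0, 1.0, 200) + 0.5 * (1 - np.abs(leaning + 0.8) / 2)

def top_k(lmbda, k=10):
    """Greedily pick k items maximizing relevance - lmbda * slate extremity."""
    chosen = []
    for _ in range(k):
        best, best_score = None, -np.inf
        for i in range(len(relevance)):
            if i in chosen:
                continue
            # Regularizer: how far the slate's mean leaning sits from neutral.
            extremity = abs(leaning[chosen + [i]].mean())
            score = relevance[i] - lmbda * extremity
            if score > best_score:
                best, best_score = i, score
        chosen.append(best)
    return chosen

for lam in (0.0, 2.0):
    slate = top_k(lam)
    # With the penalty active, the slate's mean leaning moves toward 0.
    print(lam, round(float(leaning[slate].mean()), 2))
```

The regularization weight makes explicit the same accuracy-versus-polarization trade-off noted above for antidote data.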
Labeling bad information can potentially lead a member of an echo chamber to realize that they are caught in one. Conversely, labeling can trigger the implied truth effect (Pennycook et al. 2020a): attaching labels to fake news or misinformation increases the perceived accuracy of headlines without labels. So, if a user in an echo chamber were subject to the implied truth effect, any piece of bad information that slipped past labeling would be perceived as more accurate by the user.

Fact-checking also runs into resistance from the underlying features of echo chambers, primarily the distrust of help from outside the echo chamber. Indeed, fact-checking can easily backfire and entrench the user more deeply in their beliefs. Mosleh et al. found that fact-checking and debunking led users to abuse the fact-checker with toxic comments. Fact-checking faces a further challenge because it relies primarily on human agents to check a piece of information. Fake news and misinformation spread much faster than true information (Vosoughi, Roy, and Aral 2018), and in an echo chamber, fact-checkers may not be able to keep up with the velocity of information spread. While some studies have found success with fact-checking (Gillani et al. 2018), the results are mixed and require further research before fact-checking can be considered an effective human-focused approach to the prevention and mitigation of echo chambers.

Utilizing the "nudge" approach has shown some promising early results. The basis of the approach is the concept of "nudging" an agent towards a desirable outcome. Pennycook et al. found that susceptibility to fake news can be explained in part by a simple lack of reasoning on the part of the user (Pennycook and Rand 2019). With respect to echo chambers, nudging the user to think about accuracy could potentially reduce the negative effects of the chamber. Two separate studies found that nudging users on social media to think about the accuracy of information can reduce belief in fake news and misinformation (Bago, Rand, and Pennycook 2020; Pennycook et al. 2021). These results could be of use in echo chamber prevention and mitigation because they counteract two features of echo chambers: members' resistance to information that disagrees with their worldview, and their unawareness of being in an echo chamber at all.

Clearly, a human-focused approach to echo chamber prevention and mitigation is a difficult and multifaceted problem. The mixed results from current approaches make it abundantly clear that there are many opportunities for research in this area. Future work can build on the results we have discussed here and take advantage of the wealth of psychological studies to develop an approach that ultimately proves beneficial to the echo chamber member.

Though complicated, echo chambers are a problem worth researching because of their prevalence on social media and their presence across multiple platforms. In our estimation, the source of the challenges stemming from echo chambers is the fact that echo chambers have many stakeholders: (1) the members of the echo chamber, (2) the social media platforms, and (3) the "offline" world. Each of these stakeholders introduces challenges as well as open problems. This section discusses the challenges each stakeholder introduces, along with other challenges and open problems related to echo chambers.
The human element of the echo chamber makes studying and solving it a challenging problem. Any work related to echo chambers and polarization should consider how people inside an echo chamber consume content, how they perceive the world, and how they view people outside their echo chamber. Echo chamber members have four critical features that contribute to this problem: (1) they are not aware of their echo chamber, (2) they select content that adheres to their beliefs, (3) they resist information that disagrees with their worldview, and (4) they distrust help from outside the echo chamber. In the following, we briefly go over each of these features.

Echo chamber members are generally not aware of their echo chamber. Most users would not suspect that the content they consume and their relationships online are part of an echo chamber. Raising awareness about echo chambers and their effects (see Section 2) on the individual and society is an essential step towards solving the problem. The presentation of the information that leads users out of their echo chambers and towards a more civil online presence is critical: it should be done carefully and in a way that avoids belittling users and their beliefs, to avoid the toxic abuse mentioned in Section 5. A possible way to combat echo chambers is to design a tool that helps users find out whether they are in one. One example of such a tool is the Check-my-echo 10 tool by the Polarization Lab at Duke University.

As we mentioned in Section 2, confirmation bias and selective exposure are core factors in the formation of echo chambers. Due to confirmation bias, users will select the information that reinforces their existing beliefs and avoid information that contradicts them. Bakshy et al. (Bakshy, Messing, and Adamic 2015) found that individuals' own choices play a more decisive role than algorithmic ranking in limiting exposure to cross-cutting content that conflicts with the echo chamber's worldview. Therefore, any method that recommends new content from outside the echo chamber should take this into account and not recommend content that could cause echo chamber members to double down on their pre-existing beliefs.

Echo chamber members are more involved in producing (Dubé, Vivion, and MacDonald 2015) and consuming (Schmidt et al. 2018) content related to their ideology than groups outside the echo chamber. This leads us to conclude that echo chamber members' attitudes are rooted more deeply in their social and psychological background (Schmidt et al. 2018) than those of people outside the echo chamber. Their stance on different topics is therefore firm and unlikely to change in a short period of time. We must realize that the echo chamber problem arises from the interaction of the structure of social media with certain elements of human psychology; solving it will take a fundamental change in the way social media works.

Members of echo chambers tend to distrust any voices outside their group that offer information conflicting with their worldview. In addition to this distrust, being confronted with opposing information can reinforce pre-existing beliefs, and even well-intentioned debate can increase opinion polarization (Schmidt et al. 2018). Users who try to bridge echo chambers by sharing diverse content pay a price in network centrality and in the appreciation of their content (Garimella et al. 2018).
Therefore, the type of corrective information and how it gets presented is a critical concern for solving the echo chamber problem.

The main objective of social media platforms is to keep the user engaged, spending more time consuming content and therefore consuming more advertisements. This is the fundamental principle driving revenue and profit for mainstream social media platforms. However, we believe that social media platforms must balance the social good against their financial profit, which is a legitimate interest. Although social media platforms have recently introduced some measures to combat the filter bubble and fake news problems, these measures have found limited success.

We posit that the main way for platforms to reduce the echo chamber problem is to design an echo chamber-aware recommender system. As we mentioned in Section 3, recommender systems are one of the leading perpetrators in creating echo chambers: they exploit users' own biases to personalize recommendations in order to keep them engaged. Therefore, the ethical responsibility lies with recommender systems' designers to consider the effects of their systems on the formation of echo chambers. Accordingly, any solution to echo chambers must ensure the quality of the content (or connections) recommended to users. By quality we mean that recommended content must seek to avoid the formation of echo chambers and opinion polarization, while still allowing social media platforms to meet their obligation of keeping users engaged. To design echo chamber-aware recommender systems, we need to find the right balance between the quality of recommendations and the main objective of such systems, which is to recommend content that interests users. These recommender systems should be aware of the overall polarization of the whole social network and predict how each recommendation affects it. One possible design is to make recommendations that ensure users are informed about viewpoints other than their own. This method is inspired by AllSides 11, a website that shows users a news story together with brief extracts of articles covering it from three political perspectives: right, center, and left (a re-ranking sketch of this idea follows below).

With the deluge of fake news and misinformation and the increase in political polarization, many have called on social media companies to exercise better oversight over their platforms. Consequently, many platforms have started to introduce measures to combat fake news and abusive content. For example, Twitter appended fact-checking notices to President Trump's tweets that it judged to contain misleading or false claims. This type of measure is called content moderation. Although these efforts are important ways to combat echo chambers and misinformation, their effects are not well understood. More work is needed to develop methods and best practices for moderating content on social media platforms.
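The following is a minimal sketch of the AllSides-inspired re-ranking idea above: a candidate list is reordered so that the top of the feed covers left, center, and right sources. The data model (items carrying a "stance" field and a relevance score) is an illustrative assumption, not an existing platform API.

```python
# A hedged sketch: interleave the highest-relevance item from each stance
# bucket so the top of the feed exposes the user to all three perspectives.
from itertools import cycle

def stance_diverse_rerank(candidates, stances=("left", "center", "right")):
    """Re-rank candidates by cycling through stance buckets.

    candidates: list of dicts like {"id": ..., "stance": ..., "score": ...}.
    Returns the re-ranked list; within each bucket, score order is kept.
    """
    buckets = {s: sorted((c for c in candidates if c["stance"] == s),
                         key=lambda c: c["score"], reverse=True)
               for s in stances}
    ranked = []
    for stance in cycle(stances):
        if not any(buckets.values()):   # all buckets exhausted
            break
        if buckets[stance]:
            ranked.append(buckets[stance].pop(0))
    return ranked

# Toy feed: two left items, one center, one right.
feed = [
    {"id": 1, "stance": "left",   "score": 0.9},
    {"id": 2, "stance": "left",   "score": 0.8},
    {"id": 3, "stance": "center", "score": 0.6},
    {"id": 4, "stance": "right",  "score": 0.7},
]
print([item["id"] for item in stance_diverse_rerank(feed)])  # [1, 3, 4, 2]
```

In the taxonomy discussed earlier, this is a post-processing step: the base recommender's relevance scores are untouched, and only the final ordering changes to guarantee stance coverage.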
We identified the following potential concerns: (1) the sheer amount of content posted on social media makes unbiased, objective human moderation nearly impossible; (2) automatic content moderation does not work well enough to stop abusive and polarizing content; (3) subjective and potentially biased human moderation could lead to the formation of a network-wide echo chamber; and (4) excessive moderation could make some users feel unwelcome, leading them to leave the platform, which causes financial losses and might accelerate the formation of a network-wide echo chamber.

We call a social media platform that has only one ideological leaning a network-wide echo chamber. For example, Reddit is a left-leaning echo chamber, while Gab is a right-leaning echo chamber (Cinelli et al. 2021). Although we do not believe that content moderation (especially in Reddit's case) is the sole cause of this phenomenon, content moderation contributes to the feeling that some ideological or political leanings are not welcome on "mainstream" social media. For example, in the United States, some conservatives believe that "mainstream" social media is biased against them. This belief has driven the emergence of conservative-leaning platforms, e.g., Gab, Parler, and Rumble, to name a few. The existence of network-wide echo chambers is a very concerning indicator of how politically polarized society is right now (a rough diagnostic for a network-wide leaning is sketched below).

Society's increasingly polarized environment affects social media, which in turn contributes to the polarization of society, creating a feedback loop of polarization (Jasny, Waggle, and Fisher 2015). In this vicious cycle, echo chambers create and spread polarized content; the polarized content increases the political polarization of society and of political discourse; and this polarized environment helps existing echo chambers grow and accelerates the formation of new ones. Breaking this vicious cycle is challenging, and the social and political climates are very susceptible to this type of interaction. DellaVigna and Kaplan (DellaVigna and Kaplan 2007) concluded that Fox News affected voter turnout and the Republican vote share in the Senate. Moreover, De-Wit et al. (De-Wit, Brick, and Van Der Linden 2019) note that the political base affects politicians' tweets, which reporters use to build their news headlines. Rancorous political discourse contributes to the echo chamber problem and increases opinion polarization, because echo chamber members are more involved with their content and show signs that their beliefs are more deeply rooted (Schmidt et al. 2018). In fact, two-sided neutral arguments have weaker reinforcement effects than one-sided confirming or contradicting arguments (Karlsen et al. 2017).

While the emergence of echo chambers can seem like an unstoppable wave, we must realize that there is hope for a better information ecosystem. We showed that echo chambers are largely a byproduct of recommender systems; what has been manufactured by these systems can likewise be deconstructed by them. Social media may not currently live up to its promise of bringing us closer together and fostering better conversations. The future does not have to be this way: through research and a structured strategy, a less polarized world is possible. The echo chamber effect is tremendous, as seen in the spread of misinformation amid the COVID-19 pandemic, which has caused many people to lose their lives.
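As a rough illustration of the network-wide echo chamber notion above, the sketch below flags a platform whose per-user leaning scores cluster tightly around one non-neutral value. The [-1, 1] leaning scale is in the spirit of the leaning distributions analyzed by Cinelli et al. (2021); the thresholds and the synthetic data are arbitrary assumptions.

```python
# A hedged diagnostic: a platform whose user-leaning distribution has a
# strongly shifted mean and low spread behaves like a network-wide echo
# chamber. Thresholds below are illustrative, not empirically calibrated.
import numpy as np

def network_wide_leaning(leanings, mean_cut=0.3, std_cut=0.25):
    """Label the platform-level leaning distribution."""
    leanings = np.asarray(leanings, dtype=float)
    mean, std = leanings.mean(), leanings.std()
    if abs(mean) > mean_cut and std < std_cut:
        side = "right" if mean > 0 else "left"
        return f"network-wide {side}-leaning echo chamber"
    return "mixed-leaning network"

# Toy example: a skewed (Gab-like) distribution vs. a balanced one.
skewed = np.clip(np.random.default_rng(1).normal(0.6, 0.15, 1000), -1, 1)
balanced = np.clip(np.random.default_rng(2).normal(0.0, 0.5, 1000), -1, 1)
print(network_wide_leaning(skewed))    # network-wide right-leaning echo chamber
print(network_wide_leaning(balanced))  # mixed-leaning network
```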
One of the most significant abettors of the spread of misinformation is social media platforms, whose recommender algorithms compound the problem of echo chambers. Thus, there is a great need for the social computing community to take responsibility for developing effective models and tools to help combat the negative effects of echo chambers. To that end, this survey takes a computational perspective to help readers grasp recent technologies for detecting and preventing echo chambers. We detailed the mechanisms that lead to the formation of echo chambers. In summary, the driver behind the formation and growth of echo chambers is the feedback loop between (1) recommender systems, (2) psychological biases, namely confirmation bias and cognitive dissonance, and (3) users' homophilic networks. We showed that content recommender systems (which are designed to keep the user engaged on social media to watch more ads) are the main reason for the formation of echo chambers. We also showed that users have the requisite biases to fall into an echo chamber: confirmation bias and cognitive dissonance. Recommender systems exploit these biases to "personalize" their recommendations, which traps users in echo chambers. In closing, the echo chamber phenomenon is challenging to tackle because not all stakeholders necessarily want it to be solved. Social media platforms want users to stay engaged so they can show them more personalized ads. Furthermore, it is difficult for people to admit that they live in an echo chamber of users with similar opinions, and harder still to admit that the other side might hold interesting opinions. However, the way out of echo chambers starts with understanding the mechanisms that lead to their formation and using this knowledge to detect and mitigate them.

References.
• Community detection and stochastic block models: recent developments
• Epidemic of COVID-19 in China and associated psychological problems
• COVID-19 and the 5G Conspiracy Theory: Social Network Analysis of Twitter Data
• Trends in social media: Persistence and decay
• Fake news, fast and slow: Deliberation reduces belief in false (but not true) news headlines
• Exposure to ideologically diverse news and opinion on Facebook
• Political paranoia v. political realism: On distinguishing between bogus conspiracy theories and genuine conspiratorial politics
• Tweeting From Left to Right: Is Online Political Communication More Than an Echo Chamber?
• Similar but different: Exploiting users' congruity for recommendation systems
• Personality traits and echo chambers on facebook
• Users polarization on Facebook and Youtube
• Journal of statistical mechanics: theory and experiment
• The social structure of political echo chambers: Variation in ideological homophily in online networks
• The causes and consequences of COVID-19 misperceptions: Understanding the role of news and social media
• Echo Chambers Exist! (But They're Full of Opposing Views)
• Recursive patterns in online echo chambers
• Filter bubble
• Facebook in the news: Social media, journalism, and public responsibility following the 2016 trending topics controversy
• Fake News and Advertising on Social Media: A Study of the Anti-Vaccination Movement
• Analyzing the Impact of Filter Bubbles on Social Network Polarization
• Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation
• Rumor Propagation is Amplified by Echo Chambers in Social Media
• Addressing Health-Related Misinformation on Social Media
• Influence: The psychology of persuasion
• The echo chamber effect on social media
• Echo chamber or public sphere? Predicting political orientation and measuring political homophily in Twitter using big data
• Falling into the echo chamber: the Italian vaccination debate on Twitter
• Quantifying echo chamber effects in information spreading over political communication networks
• Biased assimilation, homophily, and the dynamics of polarization
• Are social media driving political polarization?
• Mixing beliefs among interacting agents
• The spreading of misinformation online
• Polarization and Fake News: Early Warning of Potential Misinformation Targets
• Echo chambers: Emotional contagion and group polarization on facebook
• Mapping social dynamics on Facebook: The Brexit debate
• The Fox News effect: Media bias and voting
• Bert: Pre-training of deep bidirectional transformers for language understanding
• COVID-19 conspiracy theories
• Understanding conspiracy theories
• The Echo Chamber Effect in Twitter: does community polarization increase?
• The echo chamber is overstated: the moderating effect of political interest and diverse media
• Vaccine hesitancy, vaccine refusal and the anti-vaccine movement: influence, impact and implications
• Opinion formation on dynamic networks: identifying conditions for the emergence of partisan echo chambers
• A theory of cognitive dissonance
• Political polarization in the American public
• Dynamic spread of happiness in a large social network: longitudinal analysis over 20 years in the Framingham Heart Study
• Recent research on selective exposure to information
• Political Discourse on Social Media: Echo Chambers, Gatekeepers, and the Price of Bipartisanship
• A Long-Term Analysis of Polarization on Twitter
• Understanding Echo Chambers in E-Commerce Recommender Systems
• The anti-vaccination infodemic on social media: A behavioral analysis
• Me, my echo chamber, and I: introspection on social media polarization
• Information and Disinformation: Social Media in the COVID-19 Crisis
• Health disinformation & social media: The crucial role of information hygiene in mitigating conspiracy theory and infodemics
• Beyond Fact-Checking: Network Analysis Tools for Monitoring Disinformation in Social Media
• Towards Quantifying the Distance between Opinions
• The fallacy of echo chambers: Analyzing the political slants of user-generated news comments in Korean media
• Measuring Personalization of Web Search
• The relationship between memory and judgment depends on whether the judgment task is memory-based or on-line
• Long Short-term Memory
• An mRNA vaccine against SARS-CoV-2 - preliminary report
• Echo chamber: Rush Limbaugh and the conservative media establishment
• An empirical examination of echo chambers in US climate policy networks
• Degenerate feedback loops in recommender systems
• The political effects of migration-related fake news, disinformation and conspiracy theories in Europe. Friedrich Ebert Stiftung, Political Capital Policy Research & Consulting Institute
• Echo chamber and trench warfare dynamics in online debates
• Heat kernel based community detection
• Experimental evidence of massive-scale emotional contagion through social networks
• Tree LSTMs with Convolution Units to Predict Stance and Rumor Veracity in Social Media Conversations. Annual Meeting of the Association for Computational Linguistics
• From word embeddings to document distances
• Herd immunity - estimating the level required to halt the COVID-19 epidemics in affected countries
• Distributed Representations of Sentences and Documents
• Echo Chambers and Their Effects on Economic and Political Outcomes
• MM-COVID: A multilingual and multimodal data repository for combating COVID-19 disinformation
• Measuring the impact of COVID-19 vaccine misinformation on vaccination intent in the UK and USA
• Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence
• The psychology of eating animals. Current Directions in Psychological Science
• COVID-19: breaking down a global health crisis
• Media manipulation and disinformation online
• Twittermonitor: trend detection over the twitter stream
• Birds of a feather: Homophily in social networks
• COVID-19, 5G conspiracies and infrastructural futures
• The infamous #Pizzagate conspiracy theory: Insight from a TwitterTrails investigation
• Efficient estimation of word representations in vector space
• Endogenetic structure of filter bubble in social networks
• Measuring political polarization: Twitter shows the two sides of Venezuela
• Perverse Downstream Consequences of Debunking: Being Corrected by Another User for Posting False Political News Increases Subsequent Sharing of Low Quality, Partisan, and Toxic Content in a Twitter Field Experiment
• Consequences of confirmation and disconfirmation in a simulated research environment
• Fast algorithm for detecting community structure in networks
• Echo Chambers and Epistemic Bubbles
• Mitigating the spread of fake news by identifying and disrupting echo chambers
• Confirmation bias: A ubiquitous phenomenon in many guises
• Mapping the echo-chamber: detecting and characterizing partisan networks on Twitter
• Out of the Echo Chamber: Detecting Countering Debate Speeches
• The filter bubble: How the new personalized web is changing what we read and how we think. Penguin
• Glove: Global vectors for word representation
• The Implied Truth Effect: Attaching Warnings to a Subset of Fake News Headlines Increases Perceived Accuracy of Headlines Without Warnings
• Shifting attention to accuracy can reduce misinformation online
• Fighting COVID-19 misinformation on social media: Experimental evidence for a scalable accuracy-nudge intervention
• Lazy, not biased: Susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning
• Computing communities in large networks using random walks
• Using tf-idf to determine word relevance in document queries
• Fighting fire with fire: Using antidote data to improve polarization and fairness of recommender systems
• Patterns of Media Use, Strength of Belief in COVID-19 Conspiracy Theories, and the Prevention of COVID-19 From March to July 2020 in the United States: Survey Study
• Influence and passivity in social media
• Social network determinants of depression
• The map equation
• Polarization of the vaccination debate on Facebook
• Selective exposure to information: A critical review
• Tackling threats to informed decision-making in democratic societies: Promoting epistemic security in a technologically-advanced world
• Combating Fake News: A Survey on Identification and Mitigation Techniques
• Comparison and Benchmark of Graph Clustering Algorithms
• Combating disinformation in a social media age
• Detecting fake news on social media
• Fake News Detection on Social Media: A Data Mining Perspective
• A political and social history of HIV in South Africa
• Age, gender, personality, ideological attitudes and individual differences in a person's news spectrum: how many and who might be prone to "filter bubbles" and "echo chambers"
• Republic.Com. USA: Princeton University Press. ISBN 0691070253
• Social Media, Political Polarization, and Political Disinformation: A Review of the Scientific Literature
• Echo chambers and viral misinformation: Modeling fake news as complex contagion
• Echo Chamber Effects in the Climate Change Blogosphere
• The echo chamber of anti-vaccination conspiracies: mechanisms of radicalization on Facebook and Reddit. Institute for Policy
• The spread of true and false news online
• Anger, fear, and echo chambers: The emotional basis for online behavior
• Novelty and collective attention
• Misinformation in Social Media: Definition, Manipulation, and Detection. SIGKDD Explor
• Social media mining: an introduction
• What is Gab: A bastion of free speech or an alt-right echo chamber
• Fake news: A survey of research, detection methods, and opportunities
• A Survey of Fake News: Fundamental Theories, Detection Methods, and Opportunities
• Debunking in a world of tribes
• Misinformation Spreading on Facebook

Acknowledgement. We would like to thank Dr. H. Russell Bernard for his insightful input and kind help with this work. He was very generous with his valuable time, and his advice was important for the completion of this work.

In Table 3, we list some of the datasets that could be used in future work on echo chambers or related topics, such as political polarization and filter bubbles.

• Cognitive dissonance: refers to an internal contradiction between two opinions, beliefs, or items of knowledge (Festinger 1957).
• Confirmation bias: the tendency to seek, interpret, favor, and recall information adhering to pre-existing opinions (Nickerson 1998).
• Echo chamber: a network of users in which users only consume opinions that support their pre-existing beliefs and opinions and exclude and discredit other viewpoints.
• Filter bubble: an environment, and especially an online environment, in which people are exposed only to opinions and information that conform to their existing beliefs 12.
• Homophily: also known as love of the same, the process by which similar individuals become friends or connected due to their high similarity (Zafarani, Abbasi, and Liu 2014).
• Political polarization: the divergence of political attitudes to ideological extremes 13.
• Selective exposure: a tendency for people, both consciously and unconsciously, to seek out material that supports their existing attitudes and opinions and to actively avoid material that challenges their views 14.