title: Lumen: A Machine Learning Framework to Expose Influence Cues in Text
authors: Shi, Hanyu; Silva, Mirela; Capecci, Daniel; Giovanini, Luiz; Czech, Lauren; Fernandes, Juliana; Oliveira, Daniela
date: 2021-07-12
* The first two authors have equal contribution.

Phishing and disinformation are popular social engineering attacks with attackers invariably applying influence cues in texts to make them more appealing to users. We introduce Lumen, a learning-based framework that exposes influence cues in text: (i) persuasion, (ii) framing, (iii) emotion, (iv) objectivity/subjectivity, (v) guilt/blame, and (vi) use of emphasis. Lumen was trained with a newly developed dataset of 3K texts comprised of disinformation, phishing, hyperpartisan news, and mainstream news. Evaluation of Lumen in comparison to other learning models showed that Lumen and LSTM presented the best F1-micro score, but Lumen yielded better interpretability. Our results highlight the promise of ML to expose influence cues in text, towards the goal of application in automatic labeling tools to improve the accuracy of human-based detection and reduce the likelihood of users falling for deceptive online content.

The ability to think deliberately and analytically (i.e., "System 2" [11]) is generally associated with the rejection of disinformation, regardless of the participants' political alignment; thus, the activation of this analytical thinking mode may act as an "antidote" to today's selective exposure (a theory akin to confirmation bias, often used in Communication research, pertaining to the idea that individuals favor information that reinforces their prior beliefs [6]). We therefore advocate that interventions should mitigate deceptive content via the exposure of influence cues in texts. Similar to the government and state-affiliated media account labels on Twitter [13], bringing awareness to the influence cues present in misleading texts may, in turn, aid users by providing additional context in the message, thus helping users think analytically, and benefit future work aimed at the automatic detection of deceptive online content. Towards this goal, we introduce Lumen, a two-layer learning framework that exposes influence cues in text using a novel combination of well-known existing methods: (i) topic modeling to extract structural features in text; (ii) sentiment analysis to extract emotional salience; (iii) LIWC to extract dictionary features related to influence cues; and (iv) a classification model to leverage the extracted features to predict the presence of influence cues. To evaluate Lumen's effectiveness, we leveraged our dataset of 2,771 diverse pieces of online text, manually labeled by our research team according to the influence cues in the text using standard qualitative analysis methods. We must, however, emphasize that Lumen is not a consumer-focused end-product; rather, it is a module for application in future user tools, which we shall make publicly available to be leveraged by researchers in future work (as described in Sec. 6.2). Our newly developed dataset is comprised of nearly 3K texts, where 1K were mainstream news articles and 2K were deceptive or misleading content in the form of: Russia's Internet Research Agency's (IRA) propaganda targeting Americans in the 2016 U.S. Presidential Election, phishing emails, and fake and hyperpartisan news articles.
Here, we briefly define these terms, which we argue fall within the same "deceptive text umbrella." Disinformation constitutes any purposefully deceptive content aimed at altering the opinion of or confusing an individual or group. Within disinformation, we find instances of propaganda (facts, rumors, half-truths, or lies disseminated manipulatively for the purpose of influencing public opinion [15]) and fake news (fabricated information that mimics real online news [3] and considerably overlaps with hyperpartisan news [5]). Misinformation's subtler, political form is hyperpartisan news, which entails misleading coverage of factual events through the lens of a strong partisan bias, typically challenging mainstream narratives [3], [5]. Phishing is a social engineering attack aimed at influencing users via deceptive arguments into an action (e.g., clicking on a malicious link) that will go against the user's best interests. Though phishing differs from disinformation in its modus operandi, we argue that it overlaps with misleading media in their main purpose: to galvanize users into clicking a link or button by triggering the victim's emotions [5] and leveraging influence and deception.

We conducted a quantitative analysis of the dataset, which showed that authority and commitment were the most common principles of persuasion in the dataset (71% and 52%, respectively), the latter of which was especially common in news articles. Phishing emails had the largest occurrence of scarcity (65%). Framing was a relatively rare occurrence (13% gain and 7% loss), though gain framing was predominantly prevalent in phishing emails (41%). The dataset invoked an overall positive sentiment (VADER compound score of 0.232), with phishing emails containing the most positive average sentiment (0.635) and fake news the most negative average sentiment (−0.163). Objectivity and subjectivity occurred in over half of the dataset, with objectivity most prevalent in fake news articles (72%) and subjectivity most common in IRA ads (77%). Attribution of blame/guilt was disproportionately frequent for fake and hyperpartisan news (between 38% and 45%). The use of emphasis was much more common in informal texts (e.g., IRA social media ads, 70%) and less common in news articles (e.g., mainstream media, 17%).

We evaluated Lumen in comparison with other traditional ML and deep learning algorithms. Lumen presented the best performance in terms of its F1-micro score (69.23%), performing similarly to LSTM (69.48%). In terms of F1-macro, LSTM (64.20%) performed better than Lumen (58.30%); however, Lumen presented better interpretability, allowing an intuitive understanding of the model, as it provides both the relative importance of each feature and the topic structure of the training dataset without additional computational costs, which cannot be obtained with LSTM as it operates as a black box. Our results highlight the promise of exposing influence cues in text via learning methods.

This paper is organized as follows. Section 2 positions this paper's contributions in comparison to related work in the field. Section 3 details the methodology used to generate our coded dataset. Section 4 describes Lumen's design and implementation, as well as Lumen's experimental evaluation (Lumen and the dataset will be made available upon publication). Section 5 contains a quantitative analysis of our dataset, and Lumen's evaluation and performance.
Section 6 summarizes our findings and discusses the limitations of our work, as well as recommendations for future work. Section 7 concludes the paper.

This section briefly summarizes the extensive body of work on machine learning methods to automatically detect disinformation and hyperpartisan news, and initial efforts to detect the presence of influence cues in text. Most anti-phishing research has focused on automatic detection of malicious messages and URLs before they reach a user's inbox via a combination of blocklists [16], [17] and ML [18], [19]. Despite yielding high filtering rates in practice, these approaches cannot prevent zero-day phishing (i.e., a new, not-yet-reported phishing email) from reaching users because determining the maliciousness of text is an open problem and phishing constantly changes, rendering learning models and blocklists outdated in a short period of time [19]. Unless the same message has been previously reported to an email provider as malicious by a user or the provider has the embedded URL in its blocklist, determining maliciousness is extremely challenging. Furthermore, the traditional approach to automatically detect phishing takes a binary standpoint (phishing or legitimate, e.g., [20], [21], [22]), potentially overlooking distinctive nuances and the sheer diversity of malicious messages. Given the limitations of automated detection in handling zero-day phishing, human detection has been proposed as a complementary strategy. The goal is to either warn users about issues with security indicators in websites, which could be landing pages of malicious URLs [23], [24], or train users to recognize malicious content online [25]. These approaches are not without their own limitations. For example, research on the effectiveness of SSL warnings shows that users either habituate or tend to ignore warnings due to false positives or a lack of understanding about the warning message [26], [27].

The previously known "antidote" to reduce polarization and increase readers' tolerance to selective exposure was the use of counter-dispositional information [5]. However, countering misleading texts with mainstream or high-quality content in the age of rapid-fire social media comes with logistical and nuanced difficulties. Pennycook and Rand [28] provide a thorough review of the three main approaches employed in fighting misinformation: automatic detection, debunking by field experts (which is not scalable), and exposing the publisher of the news source. Similar to zero-day phishing, disinformation is constantly morphing, such that "zero-day" disinformation may thwart already-established algorithms, as was the case with the COVID-19 pandemic [28]. Additionally, the final determination of a fake, true, or hyperpartisan label is fraught with subjectivity. Even fact-checkers are not immune: their agreement rates plummet for ambiguous statements [29], calling into question their efficacy in hyperpartisan news. We posit that one facet of the solution lies within the combination of human and automated detection. Pennycook and Rand [28] conclude that lack of careful reasoning and domain knowledge is linked to poor truth discernment, suggesting (alongside [3], [30]) that future work should aim to trigger users to think slowly and analytically [11] while assessing the accuracy of the information presented.
Lumen aims to fulfill the first step of this goal, as our framework exposes influence cues in texts, which we hypothesize are disproportionately leveraged in deceptive content.

We focus on prior work that has investigated the extent to which Cialdini's principles of persuasion (PoP) [7], [12] (described in Sec. 3) are used in phishing emails [31], [32], [33] and how users are susceptible to them [34], [35]. Lawson et al. [35] leveraged a personality inventory and an email identification task to investigate the relationship between personality and Cialdini's PoP. The authors found that extroversion was significantly correlated with increased susceptibility to commitment, liking, and the pair (authority, commitment), the latter of which was found in 41% of our dataset. Following Cialdini's PoP, after manually labeling ∼200 phishing emails, Akbar [36] found that authority was the most frequent principle in the phishing emails, followed by scarcity, corroborating our finding of a high prevalence of authority. However, in a large-scale phishing email study with more than 2,000 participants, Wright et al. [37] found that liking received the highest phishing response rate, while authority received the lowest. Oliveira et al. [34], [38] unraveled the complicated relationship between PoP, Internet user age, and susceptibility, finding that young users are most vulnerable to scarcity, while older ones are most likely to fall for reciprocation, with authority highly effective for both age groups. These results are promising in highlighting the potential usability of exposing influence cues to users.

Contrary to phishing, few studies have focused on detecting influence cues or analyzing how users are susceptible to them in the context of fake or highly partisan content. Xu et al. [39] stands out, as the authors used a mixed-methods analysis, leveraging both manual analysis of the textual content of 1.2K immigration-related news articles from 17 different news outlets and computational linguistics (including, as we did, LIWC). The authors found that moral frames emphasizing authority/respect were shared/liked more, while the opposite occurred for reciprocity/fairness. Whereas we solely used trained coders, they measured the aforementioned frames by applying the moral foundations dictionary [40]. To the best of our knowledge, no prior work has investigated or attempted to automatically detect influence cues in texts in such a large dataset containing multiple types of deceptive texts. In this work, we go beyond Cialdini's principles to also detect gain and loss framing, emotional salience, subjectivity and objectivity, and the use of emphasis and blame. Further, no prior work has made available to the research community a dataset of deceptive texts labeled according to the influence cues applied in the text.

This section describes the methodology to generate the labeled dataset of online texts used to train Lumen, including the definition of each of the influence cue labels. We composed a diverse dataset by gathering different types of texts from multiple sources, split into three groups: (Deceptive Texts) 1,082 pieces of text containing disinformation and/or deception tactics, (Hyperpartisan News) 1,003 hyperpartisan media news articles from politically right- and left-leaning publications, and (Mainstream News) 974 mainstream center media news articles. Our dataset therefore contained 3,059 pieces of text in total.
For the Deceptive Texts Group, we mixed 492 Facebook ads created by the Russian Internet Research Agency (IRA), 130 known fake news articles, and 460 phishing emails:

Facebook IRA Ads. We leveraged a dataset of 3,517 Facebook ads created by the Russian IRA and made publicly available to the U.S. House of Representatives Permanent Select Committee on Intelligence [41] by Facebook after internal audits. These ads were a small representative sample of over 80K pieces of organic content identified by the Committee and are estimated to have been exposed to over 126M Americans between June 2015 and August 2017. After discarding ads that did not have a text entry, the dataset was reduced to 3,286 ads, which were mostly (52.8%) posted in 2016 (U.S. election year). We randomly selected 492 for inclusion.

Fake News. We leveraged a publicly available dataset (https://ieee-dataport.org/open-access/fnid-fake-news-inference-dataset#files) of nearly 17K news articles labeled as fake or real, collected by Sadeghi et al. [42] from PolitiFact.com, a reputable source of fact-finding. We randomly selected 130 fake news articles ranging from 110-200 words, dated from 2007 to 2020.

Phishing Emails. To gather our dataset, we collected approximately 15K known phishing emails from multiple public sources [43], [44], [45], [46], [47], [48], [49], [50]. The emails were then cleaned and formatted to remove errors, noise (e.g., images, HTML elements), and any extraneous formatting so that only the raw email text remained. We randomly selected 460 of these emails, ranging from 50-150 words, to be included as part of the Deceptive Texts Group.

For the Hyperpartisan News and Mainstream News Groups, we used a public dataset (https://components.one/datasets/all-the-news-2-news-articles-dataset/) comprised of 2.7M news articles and essays from 27 American publications dated from 2013 to early 2020. We first selected articles ranging from 50-200 words and then classified them as left, right, or center news according to the AllSides Bias Rating (https://www.allsides.com/media-bias/media-bias-ratings). For inclusion in the Hyperpartisan News Group, we randomly selected 506 right news and 497 left news articles; the former were dated from 2016 to 2017 and came from two publication sources (Breitbart and National Review), while the latter were dated from 2016 to 2019 and came from six publications (Buzzfeed News, Mashable, New Yorker, People, VICE, and Vox). To compose the Mainstream News Group, we randomly selected 974 center news articles from all seven publications (Business Insider, CNBC, NPR, Reuters, TechCrunch, The Hill, and Wired), dated from 2014 to 2019.

We then developed coding categories and a codebook based on Cialdini's principles of influence [51], subjectivity/objectivity, and gain/loss framing [52]. These categories have been used in prior works (e.g., [32], [34]) and were adapted for the purposes of this study, with the addition of the emphasis and blame/guilt attribution categories. Next, we held an initial training session with nine undergraduate students. The training involved a thorough description of the coding categories, their definitions, and operationalizations, as well as a workshop-style training where coders labeled a small sample of the texts to get acquainted with the coding platform, the codebook, and the texts. Coders were instructed to read each text at least twice before starting the coding to ensure they understood it. After that, coders were asked to share their experiences labeling the texts and to discuss any issues or questions about the process.
After this training session, two intercoder reliability pretests were conducted: in the first pretest, coders independently co-coded a sample of 20 texts, and in the second pretest, coders independently co-coded a sample of 40 texts. After each of these pretests, a discussion and new training session followed to clarify any issues with the categories and codebook. Following these additional discussion and training sessions, coders were instructed to co-code 260 texts, which served as our intercoder reliability sample. To calculate intercoder reliability, we used three indexes. Cohen's kappa and Percent of Agreement ranged from 0.40 to 0.90 and from 66% to 99%, respectively, which was considered moderately satisfactory. Due to the nature of the coding and type of texts, we also opted to use Perrault and Leigh's index because (a) it has been used in similar studies that also use nominal data [53], [54], [55], [56]; (b) it is the most appropriate reliability measure for 0/1 coding (i.e., when coders mark for absence or presence of given categories), as traditional approaches do not count two zeros as agreement and thus penalize reliability even if coders disagree only a few times [57]; and (c) indexes such as Cohen's kappa and Scott's pi have been criticized for being overly conservative and difficult to compare to other indexes of reliability [58]. Perrault and Leigh's index (I_r) returned a range of 0.67 to 0.99, which was considered satisfactory. Finally, the remaining texts were divided equally between all coders, who coded all the texts independently using an electronic coding sheet in Qualtrics. Coders were instructed to distribute their workload equally over the coding period to counteract possible fatigue effects. This coding process lasted three months.
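For readers who wish to reproduce the agreement statistics above, the minimal sketch below computes percent agreement and Cohen's kappa for a single binary influence-cue category coded by two coders. The 0/1 label arrays are hypothetical stand-ins for the actual coding sheets, and Perrault and Leigh's index is not part of scikit-learn, so it would need to be implemented separately.

```python
# Hypothetical illustration: intercoder agreement for one binary influence-cue
# category (1 = cue present, 0 = absent) coded independently by two coders.
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Toy 0/1 labels for the same 20-text pretest sample (placeholder values).
coder_a = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1])
coder_b = np.array([1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0])

percent_agreement = np.mean(coder_a == coder_b)   # raw proportion of matching codes
kappa = cohen_kappa_score(coder_a, coder_b)       # chance-corrected agreement

print(f"Percent agreement: {percent_agreement:.0%}")
print(f"Cohen's kappa:     {kappa:.2f}")
```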
The coding categories were divided into five main concepts: principles of influence, gain/loss framing, objectivity/subjectivity, attribution of guilt, and emphasis. Coders marked for the absence (0) or presence (1) of each of the categories. Definitions and examples for each influence cue are detailed in Appendix A, leveraged from the coding manual we curated to train our group of coders.

Principles of Persuasion (PoP). Persuasion refers to a set of principles that influence how people concede or comply with requests. The principles of influence were based on Cialdini's marketing research work [7], [12], and consist of the following six principles: (i) authority or expertise (e.g., people tend to comply with requests or accept arguments made by figures of authority), (ii) reciprocation, (iii) commitment and consistency, (iv) liking, (v) scarcity, and (vi) social proof. We added subcategories to the principles of commitment (i.e., indignation and call to action) and social proof (i.e., admonition) because an initial perusal of texts revealed consistent usage across texts.

Framing. Framing refers to the presentation of a message (e.g., health message, financial options, and advertisement) as implying a possible gain (i.e., possible benefits of performing the action) vs. implying a possible loss (i.e., costs of not performing a behavior) [10], [11], [59]. Framing can affect decision-making and behavior; work by Kahneman and Tversky [11] on loss aversion supports the human tendency to prefer avoiding losses over acquiring equivalent gains.

Slant. Slant refers to whether a text is written subjectively or objectively; subjective sentences generally refer to a personal opinion/judgment or emotion, whereas objective sentences refer to factual information that is based on evidence, or when evidence is presented. It is important to note that we did not ask our coders to fact-check, instead asking them to rely on sentence structure, grammar, and semantics to determine the label of objective or subjective.

Attribution of Blame/Guilt. Blame or guilt refers to when a text references "another" person/object/idea for wrong or bad things that have happened.

Emphasis. Emphasis refers to the use of all-caps text, exclamation points (either one or multiple), several question marks, bold text, italicized text, or anything that is used to call attention in text.

This section describes the design, implementation, and evaluation of Lumen, our proposed two-level learning-based framework to expose influence cues in texts. Exposing the presence of persuasion and framing is tackled as a multi-label document classification problem, where zero, one, or more labels can be assigned to each document. Owing to recent developments in natural language processing, emotional salience is an input feature that Lumen also exposes by leveraging sentiment analysis. Note that Lumen's goal is not to distinguish deceptive vs. benign texts, but to expose the different influence cues applied in different types of texts. Figure 1 illustrates Lumen's two-level hierarchical learning-based architecture. On the first level, the following features are extracted from the raw text: (i) topical structure inferred by topic modeling, (ii) LIWC features related to influence keywords [14], and (iii) emotional salience features learned via sentiment analysis [60]. On the second level, a classification model is used to identify the influence cues existing in the text.

Probabilistic topic modeling algorithms are often used to infer the topic structure of unstructured text data [61], [62], which in our case are deceptive texts, hyperpartisan news, and mainstream news. Generally, these algorithms assume that a collection of documents (i.e., a corpus) is created following a generative process. Suppose that there are D documents in the corpus C and each document d = 1, ..., D has length m_d. Also suppose that there are in total K different topics in the corpus and the vocabulary includes V unique words. The relations between documents and topics are determined by conditional probabilities P(t|d), which specify the probability of topic t = 1, ..., K given document d. The linkage between topics and unique words is established by conditional probabilities P(w|t), which indicate the probability of word w = 1, ..., V given topic t. According to the generative process, for each token w(i_d), which denotes the i_d-th word in document d, we first obtain the topic of this token, z(i_d) = t, according to P(t|d). With the obtained z(i_d), we then draw a word w(i_d) = w according to P(w|t = z(i_d)). In this work, we leveraged Latent Dirichlet Allocation (LDA), one of the most widely used topic modeling algorithms, to infer topic structure in texts [63]. In LDA, both P(w|t) and P(t|d) are assumed to have Dirichlet prior distributions. Given our dataset, which is the evidence to the probabilistic model, the goal of LDA is to infer the most likely conditional distributions P̂(w|t) and P̂(t|d), which is usually done by either a variational Bayesian approach [63] or Gibbs sampling [64]. In Lumen, the conditional probabilities P̂(t|d) represent the topic structure of the dataset.
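A minimal sketch of this topic-modeling step is shown below, assuming a scikit-learn implementation (the paper does not name a specific LDA library); the two placeholder documents and vectorizer settings are illustrative, while the choice of 10 topics matches the grid-search result reported later in the paper.

```python
# Illustrative LDA topic-structure extraction (assumed scikit-learn implementation).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

corpus = [
    "your account has been suspended click the link to verify your bank details",
    "the senate committee released its report on the election investigation",
]  # placeholder documents; Lumen uses the 2,771-text labeled dataset

# Bag-of-words counts over the cleaned, tokenized documents.
vectorizer = CountVectorizer(stop_words="english")
doc_word_counts = vectorizer.fit_transform(corpus)

# K = 10 topics, matching the grid-search result reported in Section 4.
lda = LatentDirichletAllocation(n_components=10, random_state=0)
topic_features = lda.fit_transform(doc_word_counts)  # per-document P(t|d), shape (D, K)

# Each row of topic_features is the topic-structure feature vector for one document.
print(topic_features.shape)
```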
We use language to convey our thoughts, intentions, and emotions, with words serving as the basic building blocks of language. Thus, the way different words are used in unstructured text data provides meaningful information to streamline our understanding of the use of influence cues in text data. Lumen thus leverages LIWC, a natural language processing framework that connects commonly used words with categories [14], [65], to retrieve influence features of texts to aid ML classification. LIWC includes more than 70 different categories in total, such as Perceptual Processes, Grammar, and Affect, and more than 6K common words. However, not all the categories are related to influence. After careful inspection, we manually selected seven categories as influence-related features for Lumen. For persuasion, we selected the category time (related to scarcity); for emotion, we selected the categories anxiety, anger, and sad; and for framing, we selected the categories reward and money (gain), and risk (loss). We denote the collection of the chosen LIWC categories as the set S. Given a text document d with document length m_d from the corpus C, to build the LIWC feature X_{i,d}^{LIWC}, for all i ∈ S, we first count the number of words in the text d belonging to the LIWC category i, denoted as n_{i,d}, and then normalize the raw word count by the document length: X_{i,d}^{LIWC} = n_{i,d} / m_d. (1)

Emotional salience refers to both valence (positive to negative) and arousal (arousing to calming) of an experience or stimulus [8], [9], [66], and research has shown that deception detection is reduced for emotional compared to neutral stimuli [66]. Similarly, persuasion messages that generate high (compared to low) arousal lead to poorer consumer decision-making [9]. Emotional salience may impair full processing of deceptive content, and high-arousal content may trigger System 1, the fast, shortcut-based brain processing mode [67]. In this work, we used a pre-trained rule-based model, VADER, to extract the emotional salience and valence from a document [60]. Both levels of emotion range from 0 to 1, where a small value means low emotional levels and a large value means high emotional levels. Therefore, emotional salience is both an input feature to the learning model and one of Lumen's outputs (see Fig. 1).

Lumen's second level corresponds to the application of a general-purpose ML algorithm for document classification. Although Lumen is general enough to allow the application of any general-purpose algorithm, in this paper we applied Random Forest (RF) because it can provide the level of importance of each input predictive feature without additional computational cost, which aids in model understanding. Another advantage of RF is its robustness to the magnitudes of input predictive features, i.e., RF does not need feature normalization. We use the grid search approach to fine-tune the parameters in the RF model and use cross-validation to overcome any over-fitting issues of the model.
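The sketch below illustrates how the first-level features (dictionary-based counts normalized as in Eq. 1 and VADER emotional salience) can feed a multi-label Random Forest at the second level. The tiny word lists are hypothetical stand-ins for the proprietary LIWC categories, and the documents and labels are placeholders rather than the study's data.

```python
# Sketch of Lumen's first-level features (dictionary counts + sentiment) feeding the
# second-level Random Forest. The word lists stand in for the proprietary LIWC
# categories, and the labels/columns are placeholders, not the study's actual data.
import numpy as np
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer
from sklearn.ensemble import RandomForestClassifier

nltk.download("vader_lexicon", quiet=True)

# Hypothetical stand-ins for the seven selected LIWC categories (e.g., time, risk).
LIWC_LIKE = {
    "time": {"now", "today", "deadline", "hours"},
    "risk": {"danger", "risk", "threat", "lose"},
    "reward": {"win", "prize", "bonus", "gain"},
}

def dictionary_features(text):
    """Normalized word counts per category: n_{i,d} / m_d, as in Eq. (1)."""
    tokens = text.lower().split()
    m_d = max(len(tokens), 1)
    return [sum(tok in words for tok in tokens) / m_d for words in LIWC_LIKE.values()]

sia = SentimentIntensityAnalyzer()

def sentiment_features(text):
    """VADER positive/negative intensities as emotional-salience features."""
    scores = sia.polarity_scores(text)
    return [scores["pos"], scores["neg"]]

docs = ["act now or risk losing your bonus", "the report was released today"]
X = np.array([dictionary_features(d) + sentiment_features(d) for d in docs])
# In Lumen, the per-document topic distribution from LDA would be concatenated here.

# Multi-label targets, e.g., columns = (scarcity, gain framing); placeholder values.
y = np.array([[1, 1], [0, 0]])

clf = RandomForestClassifier(n_estimators=200, random_state=0)  # 200 trees, per grid search
clf.fit(X, y)                    # scikit-learn RFs handle multi-label indicator targets
print(clf.predict(X))
```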
As described previously, Lumen generates three types of features at its first hierarchical level (emotional salience, LIWC categories, and topic structure), which serve as input for the learning-based prediction algorithm (Random Forest, for this analysis) at Lumen's second hierarchical level (Fig. 1); these features rely on the unstructured texts in the dataset. However, different features need distinct pre-processing procedures. In our work, we used the Natural Language Toolkit (NLTK) [68] to pre-process the dataset. For all three types of features, we first removed all punctuation, special characters, digits, and words with only one or two characters. Next, we tokenized each document into a list of lowercase words. For topic modeling features, we removed stopwords (which provide little semantic information) and applied stemming (replacing a word's inflected form with its stem form) to further clean up the word tokens. For LIWC features, we matched each word in each text with the predetermined word list in each LIWC category; we also performed stemming for LIWC features. We did not need to perform pre-processing for emotional salience because we applied NLTK [60], which has its own tokenization and pre-processing procedures.

(Fig. 1: Lumen's two-level architecture. Pre-processed text undergoes sentiment analysis for extraction of emotional salience, LIWC analysis for extraction of features related to influence keywords, and topic modeling for structural features. These features are inputs to ML analysis for prediction of influence cues applied to the message.)

Additionally, we filtered out documents with fewer than ten words, since topic modeling results for extremely short documents are not reliable [69]. We were then left with 2,771 cleaned documents, with 183,442 tokens across the corpus and 14,938 unique words in the vocabulary. Next, we split the 2,771 documents into a training and a testing set.

In learning models, hyper-parameters are of crucial importance because they control the structure or learning process of the algorithms. Lumen applies two learning algorithms: an unsupervised topic modeling algorithm, LDA, on the first hierarchical level and RF on the second level. Each algorithm introduces its own types of hyper-parameters; for LDA, examples include the number of topics and the concentration parameters of the Dirichlet distributions, whereas for RF they include the number of trees and the maximum depth of a tree. We also used the grid search approach to find a good combination of hyper-parameters. Note that due to time and computational power constraints, it is impossible to search over all hyper-parameters and all their potential values. In this work, we only performed the grid search for the number of topics (LDA) and the number of trees (RF). The results show that the optimal number of topics is 10 and the optimal number of trees in RF is 200. Note also that the optimal result is limited by the grid search space, which only contains a finite number of parameter combinations. If we only trained and tested Lumen on one single pair of training and testing sets, there would be a high risk of overfitting. To lower this risk, we used 5-fold cross-validation, wherein the final performance of the learning algorithm is the average performance over the five training and testing pairs.
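As a concrete reference for this tuning step, the sketch below runs a 5-fold cross-validated grid search over the number of trees; the feature matrix, label vector, and candidate values are placeholders, and Lumen's actual search also covers the LDA topic count.

```python
# Assumed sketch of the second-level hyper-parameter search: 5-fold cross-validated
# grid search over the number of trees in the Random Forest (the LDA topic count
# would be tuned in a similar outer loop; all values shown are illustrative).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X = np.random.rand(100, 15)             # placeholder feature matrix (topics + LIWC + sentiment)
y = np.random.randint(0, 2, size=100)   # placeholder single label; Lumen's task is multi-label

search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100, 200, 400]},
    cv=5,                               # 5-fold cross-validation, as in the paper
    scoring="f1_micro",
)
search.fit(X, y)
print(search.best_params_)              # e.g., {'n_estimators': 200}
```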
To evaluate our results (Sec. 5), we compared Lumen's performance in predicting the influence cues applied to a given document with three other document classification algorithms: (i) Labeled-LDA, (ii) LSTM, and (iii) a naïve algorithm. Labeled-LDA is a semi-supervised variation of the original LDA algorithm [70], [71]. When training the Labeled-LDA, both the raw documents and the human-coded labels for influence cues were input into the model. Compared to Lumen, Labeled-LDA only uses the word frequency information from the raw text data and has a very rigid assumption about the relation between the word frequency information and the coded labels, which limits its flexibility and prediction ability. Long Short-Term Memory (LSTM) takes the input data recurrently, regulates the flow of information, and determines what to pass on to the next processing step and what to forget. Since neural networks mainly deal with vector operations, we used a 50-dimensional word embedding matrix to map each word into a vector space [72]. The main shortcoming of a neural network is that it works as a black box, making it difficult to understand the underlying mechanism. The naïve algorithm served as a baseline for our evaluation. We randomly generated each label for each document according to a Bernoulli distribution with equal probabilities for the two outcomes.

As shown in Table 1, we used the F1-score (following the work by Ramage et al. [70] and van der Heijden et al. [71]) and the accuracy rate to quantify the performance of the algorithms. We note that the comparison of F1-scores is only meaningful under the same experimental setup. It would be uninformative to compare F1-scores from distinct experiments in different pieces of work in the literature due to varying experimental conditions. The F1-score can be easily calculated for single-labeling classification problems, where each document is assigned to only one label. However, in our work, we are dealing with a multi-label classification problem, which means that no limit is imposed on how many labels each document can include. Thus, we employed two variations of the F1-score to quantify the overall performance of the learning algorithms: macro and micro F1-scores.

This section details Lumen's evaluation. First, we provide a quantitative analysis of our newly developed dataset used to train Lumen, followed by the results of Lumen's classification in comparison to other ML algorithms. We begin by quantifying the curated dataset of 2,771 deceptive, hyperpartisan, or mainstream texts, hand-labeled by a group of coders. When considering all influence cues, most texts used between three and six cues per text; only 3% of all texts leveraged a single influence cue, and 2% used zero cues (n = 58). When considering the most common pairs and triplets among all influence cues, slant (i.e., subjectivity or objectivity) and principles of persuasion (PoP) dominated the top 10 most common pairings and triplets. As such, the most common pairs were (authority, objectivity) and (authority, subjectivity), occurring in 48% and 45% of all texts, respectively. The most common (PoP, PoP) pairing was between authority and commitment, co-occurring in 41% of all texts. Emphasis appeared once in the top 10 pairs and twice in the top triplets: (emphasis, subjectivity) occurring in 29% of texts, and (emphasis, authority, subjectivity) and (emphasis, commitment, subjectivity) in 20% and 19% of texts, respectively.
Blame/guilt appeared only once in the top triplets as (authority, blame/guilt, objectivity), representing 19% of all texts. Gain framing appeared only as the 33rd most common pair (gain, subjectivity) and the 18th most common triplet (call to action, scarcity, gain), further emphasizing its scarcity in our dataset. We found that most texts in the dataset contained one to four principles of persuasion, with only 4% containing zero and 3% containing six or more PoP labels; 29% of texts apply two PoP and 23% leverage three PoP. Further, Fig. 2 shows that authority and commitment were the most prevalent principles, appearing, respectively, in 71% and 52% of the texts; meanwhile, reciprocation and indignation were the least common PoP (5% and 9%, respectively). Almost all types of texts contained every PoP to varying degrees; the only exception is reciprocation (the least-used PoP overall), which was not at all present in fake news texts (in the Deceptive Texts Group) and barely present (n = 3, 0.6%) in right-leaning hyperpartisan news. Authority was the most-used PoP for all types of texts, except phishing emails (most: call to action) and IRA ads (most: commitment), both of which are in the Deceptive Texts Group.

Deceptive Texts. Fake news was notably reliant on authority (92% of all fake news leveraged the authority label) compared to phishing emails (45%) and the IRA ads (32%); however, fake news used liking, reciprocation, and scarcity (5%, 0%, 3%, respectively) much less often than phishing emails (27%, 8%, 65%) or IRA ads (41%, 10%, 24%). Interestingly, admonition was most used by fake news (35%), though overall, admonition was only present in 14% of all texts. Phishing emails were noticeably more reliant on call to action (80%) and scarcity (65%) compared to fake news (33%, 3%) and IRA ads (40%, 24%), yet barely used indignation (0.4%) compared to fake news (13%) and IRA ads (17%). The IRA ads relied on indignation, liking, reciprocation, and social proof much more than the others; note again that reciprocation was the least occurring PoP (5% overall), but was most commonly occurring in IRA ads (10%).

Hyperpartisan News. Right-leaning texts had nearly twice as much call to action and indignation as left-leaning texts (61% and 19% vs. 31% and 8%, respectively). Meanwhile, left-leaning hyperpartisan texts had noticeably more liking (30% vs. 13%), reciprocation (8% vs. 0.6%), and scarcity (27% vs. 13%) than right-leaning texts.

Mainstream News. Authority (88%) and commitment (43%) were the most frequently appearing PoP in center news, though this represents the third-highest occurrence of authority and the lowest use of commitment across all six text type groups. Mainstream news also used very little indignation (3%) compared to the other text types except phishing emails (0.4%), and also demonstrated the lowest use of social proof (7%).

Authority and commitment were the most common PoP in the dataset, with the former most common in fake news articles. Phishing emails had the largest occurrence of scarcity.

There were few gain or loss labels in the overall dataset (only 13% and 7%, respectively). Very few texts (18%) were framed exclusively as either gain or loss, 81% did not include any framing at all, and only 1% of the texts used both gain and loss framing in the same message. We also found that gain was much more prevalent than loss across all types of texts, except for fake news, which showed an equal amount (1.5% for both gain and loss).
Notably, phishing emails had significantly more gain and loss framing than any other text type (41% and 29%, respectively); mainstream center news and IRA ads showed some use of gain framing (10% and 13%, respectively) compared to the remaining text types. Next, we investigated how persuasion and framing were used in texts by analyzing the pairs and triplets between the two influence cues. Gain framing most frequently occurred with call to action and commitment, though these represent only 9% of pairings. (Gain, call to action, scarcity) was the most common triplet between PoP and framing, occurring in 7% of all texts; this is notable as phishing emails had call to action and scarcity as their top PoP, and gain framing was also most prevalent in phishing. Also of note is that loss appeared in even fewer common pairs and triplets compared to gain (e.g., loss and call to action appeared in just 5% of texts).

Framing was a relatively rare occurrence in the dataset, though predominantly present in phishing emails, wherein gain was invoked 1.5× more often than loss.

We used VADER's compound sentiment score (E, wherein E ≥ 0.05, E ≤ −0.05, and −0.05 < E < 0.05 denote positive, negative, and neutral sentiment, respectively) and LIWC's positive and negative emotion word count metrics to measure sentiment. Overall, our dataset was slightly positive in terms of average compound sentiment (µ = 0.23), with an average of 4.0 positive emotion words and 1.7 negative emotion words per text. In terms of specific text types, fake news contained the only negative average compound sentiment (−0.163), and right-leaning hyperpartisan news had the only neutral average compound sentiment (0.015); all other text types had, on average, positive sentiment, with phishing emails as the most positive text type (0.635). Left-leaning hyperpartisan news had the highest average positive emotion word count (5.649), followed by phishing emails (4.732), whereas fake news had the highest average negative word count (2.892), followed by left-leaning hyperpartisan news (2.796). We also analyzed whether emotional salience has indicative power to predict the influence cues. Most influence cues and LIWC categories had an average positive sentiment, with liking and gain framing having the highest levels of positive emotion. Anxiety and anger (both LIWC categories) showed the only neutral sentiment, whereas admonition, blame/guilt, and indignation were the only categories with negative sentiment (with the latter being the most negative of all categories). Interestingly, items such as loss framing and LIWC's risk both had positive sentiment.

The dataset invoked an overall positive sentiment, with phishing emails containing the most positive average sentiment and fake news the most negative average sentiment.

The objective and subjective labels were present in 52% and 64% of all texts in the dataset, respectively. This > 50% frequency for both categories was present in all text types except phishing emails and IRA ads, where subjectivity was approximately 2.5× more common than objectivity. The most subjective text type was IRA ads (77%) and the most objective was fake news (72%); inversely, the least objective texts were phishing emails (27%) and the least subjective were mainstream center news (58%). More notably, there was an overlap between the slants, wherein 29% of all texts contained both subjective and objective labels. This could reflect mixing factual (objective) statements with subjective interpretations of them.
Nonetheless, objectivity and subjectivity were independent variables, χ²(4, N = 2,998) = 72.0, p ≈ 0. The pairings (objectivity, authority) and (subjectivity, authority) were the top two most common pairs considering PoP and slant; these pairs occurred at nearly the same frequency within the dataset (48% and 45%, respectively). This pattern repeats itself for other (PoP, slant) pairings and triplets, insofar as (objectivity, subjectivity, authority) is the third most commonly occurring triplet. When comparing just (PoP, slant) triplets, slant is present in 9 of the top 10 triplets, with (subjectivity, authority, commitment) and (objectivity, authority, commitment) as the two most common triplets (30% and 27%, respectively).

Objectivity and subjectivity occurred in over half of the dataset, with the latter much more common in phishing emails and IRA ads, while the former was most common in fake news articles.

Twenty-nine percent of all texts contained the blame/guilt label. Interestingly, nearly the same proportions of fake news (45.4%) and right-leaning hyperpartisan news (45.0%) were labeled with blame/guilt, followed by left-leaning hyperpartisan news (38%). Phishing emails, IRA ads, and mainstream center media used blame/guilt at the lowest frequencies (ranging from 15% to 25%). Blame/guilt was somewhat present in the top 10 pairs with PoP, pairing only with authority (4th most common pairing, at 26% frequency) and commitment (6th most common, 18%). However, blame/guilt appeared more frequently among the top 10 triplets with PoP, co-occurring with authority, commitment, call to action, and admonition.

Blame/guilt was disproportionately frequent for fake and hyperpartisan news, commonly co-occurring with authority or commitment.

Emphasis was used in nearly 35% of all texts in the dataset. Among them, all news sources (fake, hyperpartisan, and mainstream) showed the smallest use of emphasis (range: 17% to 26%). This follows, as news (regardless of veracity) is likely attempting to purport itself as legitimate. On the other hand, phishing emails and IRA ads were both shared in arguably more informal environments of communication (email and social media) and were thus often found to use emphasis (over 54% for both categories). Additionally, similar to previous analyses for other influence cues, emphasis largely co-occurred with authority, commitment, and call to action.

The use of emphasis was much more common in informal text types (phishing emails and IRA social media ads), and less common in news-like sources (fake, hyperpartisan, or mainstream).

We also explored whether LIWC features have indicative power to predict the influence cues. Table 1 in Appendix B shows that indignation and admonition had the highest average anxiety feature, while liking and gain framing had the lowest. Indignation also scored three times above the overall average for the anger feature, as well as for sadness (alongside blame/guilt), whereas gain had the lowest average for both anger and sadness. The reward feature was seen most in liking and in gain, while risk was slightly more common in loss framing. The time category had the highest overall average and was most common in blame/guilt, while money had the second-largest overall average and was most common in loss.
We also saw that left-leaning hyperpartisan news had the highest average anxiety, sadness, reward, and time counts compared to all text types, whereas right-leaning hyperpartisan news averaged slightly higher than left-leaning media only in the risk feature. Note, however, that LIWC is calculated based on word counts and is therefore possibly biased towards longer texts; it should thus be noted that while hyperpartisan left media had the highest averages for four of the seven LIWC features, hyperpartisan media also had the second-largest average text length compared to other text types. For the Deceptive Texts Group, phishing emails had the largest risk and money averages over all text types, while averaging lowest in anxiety, anger, and sadness. Fake news was highest overall in anger, though it was slightly higher in anxiety, sadness, and time compared to phishing emails and IRA ads. On the other hand, the IRA ads were lowest in reward, risk, time, and money compared to their group. Lastly, mainstream center media had no LIWC categories at either extremity; most of its average LIWC values were close to the overall averages for the entire dataset.

LIWC influence features varied depending on the type of text. Left-leaning hyperpartisan news had the highest averages for four features (anxiety, sadness, reward, and time). Phishing evoked risk and money, while fake news evoked anger.

This section describes our results in evaluating Lumen's multi-label prediction using the dataset. We compared Lumen's performance against three other ML algorithms: Labeled-LDA, LSTM, and a naïve algorithm. The former two learning algorithms and Lumen performed much better than the naïve algorithm, which shows that ML is promising for the retrieval of influence cues in texts. From Table 1 we can see that Lumen's performance is as good as the state-of-the-art prediction algorithm LSTM in terms of F1-micro score and overall accuracy (with < 0.25% difference in each metric). On the other hand, LSTM outperformed Lumen in terms of F1-macro, which is an unweighted mean of the metric over labels, thus potentially indicating that Lumen underperforms LSTM for some labels, although both algorithms share similar overall prediction results (accuracy). Nonetheless, Lumen presented better interpretability than LSTM (discussed below). Finally, both Lumen and LSTM presented better performance than Labeled-LDA in both F1-scores and accuracy, further emphasizing that additional features besides topic structure can help improve the performance of the prediction algorithm. To show Lumen's ability to provide better understanding to practitioners (i.e., interpretability), we trained it with our dataset and the optimal hyper-parameter values from the grid search. After training, Lumen provided both the relative importance of each input feature and the topic structure of the dataset without additional computational costs, which LSTM cannot provide because it operates as a black box. Table 2 shows the top five important features in Lumen's prediction decision-making process. Among these features, two are related to sentiment, and the remaining three are topic features (related to bank account security, company profit reports, and current-events tweets), which supports the choice of these types of input features. Positive and negative sentiment had comparable levels of importance to Lumen, alongside the bank account security topic.
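To make the comparison above concrete, the sketch below computes micro and macro F1-scores on hypothetical multi-label predictions and retrieves Random Forest feature importances of the kind reported in Table 2; all arrays and feature names are placeholders rather than the study's results.

```python
# Illustration of the two evaluation views discussed above: micro/macro F1 over
# multi-label predictions, and the Random Forest feature importances that give
# Lumen its interpretability. Arrays and feature names are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

# Hypothetical multi-label ground truth and predictions (columns = influence cues).
y_true = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 0], [0, 0, 1]])
y_pred = np.array([[1, 0, 0], [0, 1, 0], [1, 0, 0], [0, 0, 1]])

print("F1-micro:", f1_score(y_true, y_pred, average="micro"))  # pools all label decisions
print("F1-macro:", f1_score(y_true, y_pred, average="macro"))  # unweighted mean per label

# Interpretability: relative importance of each input feature after training.
X = np.random.rand(4, 3)                       # placeholder feature matrix
feature_names = ["positive sentiment", "negative sentiment", "topic: account security"]
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y_true)
for name, importance in sorted(zip(feature_names, rf.feature_importances_),
                               key=lambda pair: -pair[1]):
    print(f"{name}: {importance:.3f}")
```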
In this paper, we posit that interventions to aid human-based detection of deceptive texts should leverage a key invariant of these attacks: the application of influence cues in the text to increase its appeal to users. The exposure of these influence cues to users can potentially improve their decision-making by triggering their analytical thinking when confronted with suspicious texts that were not flagged as malicious via automatic detection methods. Stepping towards this goal, we introduced Lumen, a learning framework that combines topic modeling, LIWC, sentiment analysis, and ML to expose the following influence cues in deceptive texts: persuasion, gain or loss framing, emotional salience, subjectivity or objectivity, and use of emphasis or attribution of guilt. Lumen was trained and tested on a newly developed dataset of 2,771 texts, comprised of purposefully deceptive texts, and hyperpartisan and mainstream news, all labeled according to influence cues.

Most texts in the dataset applied between three and six influence cues; we hypothesize that these findings may reflect the potential appeal or popularity of texts of moderate complexity. Deceptive or misleading texts constructed without any influence cues are too simple to convince the reader, while texts with too many influence cues might be far too long or complex, which are in turn more time-consuming to write (for attackers) and to read (for receivers). Most texts also applied authority, which is concerning as it has been shown to be one of the most impactful principles in studies of user susceptibility to phishing [34]. Meanwhile, reciprocation was the least used principle at only 5%; this may be an indication that reciprocation does not lend itself well to being applied in text, as it requires giving something to the recipient first and expecting an action in return later. Nonetheless, reciprocation was most common in IRA ads (10%); these ads were posted on Facebook, and social media might be a more natural and intuitive location to give gifts or compliments. We also found that the application of the PoP was highly imbalanced, with reciprocation, indignation, social proof, and admonition each being applied in less than 15% of the texts during the coding process. The least used influence cues were gain and loss framing, appearing in only 13% and 7% of all texts. Though Kahneman and Tversky [11] posited that loss is more impactful than the possibility of a gain, our dataset indicates that gain was more prevalent than loss. This is especially the case in phishing emails, wherein the framing frequencies increase to 41% and 29%; this difference suggests that in phishing emails, attackers might be attempting to lure users with potential financial gain. We further hypothesize that phishing emails exhibited these high rates of framing because successful phishing survives only via a direct action from the user (e.g., clicking a link), which may therefore motivate attackers to implement framing as a key influence method. Phishing emails also exhibited the most positive average sentiment (0.635) compared to other text types, possibly related to their large volume of gain labels, which were also strongly positive in sentiment (0.568). Interestingly, texts varied among themselves in terms of influence cues even within their own groups. For example, within the Deceptive Texts Group, fake news used notably more authority, objectivity, and blame/guilt compared to phishing emails and IRA ads, and was much lower in sentiment compared to the latter two.
Though phishing emails and IRA ads were more similar, phishing was nonetheless different in its use of higher positive sentiment, gain framing, and scarcity, and lower blame/guilt. This was also evident within the Hyperpartisan News Group: while right-leaning news had a higher frequency of commitment, call to action, and admonition than left-leaning news, the opposite was true for liking, reciprocation, and scarcity. Even comparing among all news types (fake, hyperpartisan, and mainstream), this diversity of influence cues still prevailed, with the only resounding agreement being a relative lack of use of emphasis. This diversity across text types gives evidence of the highly imbalanced application of influence cues in real deceptive or misleading campaigns. We envision the use of Lumen (and ML methods in general) to expose influence cues as a promising direction for application tools to aid human detection of cyber-social engineering and disinformation. Lumen presented comparable performance to LSTM in terms of the F1-micro score. Lumen's interpretability can allow a better understanding of both the dataset and the decision-making process Lumen undergoes, consequently providing invaluable insights for feature selection.

Dataset. One of the limitations of our work is that the dataset is unbalanced. For example, our coding process revealed that some influence cues (e.g., authority) were disproportionately more prevalent than others (e.g., reciprocation, framing). Even though an unbalanced dataset is not ideal for ML analyses, we see this as part of the phenomenon. Attackers and writers might find it more difficult to construct certain concepts via text, thus favoring other more effective and direct influence cues such as authority. Ultimately, our dataset is novel in that each of the nearly 3K items was coded according to 12 different variables; this was a time-expensive process, and we shall test the scalability of Lumen in future work. Nevertheless, we plan to alleviate this dataset imbalance in our future work by curating a larger, high-quality labeled dataset, by reproducing our coding methodology, and/or with the generation of synthetic, balanced datasets. Though we predict that a larger dataset will still have varying proportions of certain influence cues, it will facilitate machine learning with a larger volume of data points. Additionally, our dataset is U.S.-centric, identified as a limitation in some prior work (e.g., [4], [73], [74]). All texts were ensured to be in the English language and all three groups of data were presumably aimed at an American audience. Therefore, we plan future work to test Lumen in different cultural contexts.

ML Framework. Lumen, as a learning framework, has three main limitations. First, although the two-level architecture provides a high degree of flexibility and is general enough to include other predictive features in the future, it also introduces complexity and overhead because tuning the hyper-parameters and training the model become more computationally expensive. Second, topic modeling, a key component of Lumen, generally requires a large number of documents of a certain length (usually thousands of documents with hundreds of words in each document, such as a collection of scientific paper abstracts) for topic inference. This will limit Lumen's effectiveness on short texts or when the training data is limited.
Third, some overlap between the LIWC influence features and emotional salience might exist (e.g., the sad LIWC category may correlate with negative emotional salience), which may negatively impact the prediction performance of the machine learning algorithm used in Lumen. In other words, correlation among input features generally makes machine learning algorithms harder to train and converge.

In this paper, we introduced Lumen, a learning-based framework to expose influence cues in text by combining topic modeling, LIWC, sentiment analysis, and machine learning in a two-layer hierarchical architecture. Lumen was trained and tested with a newly developed dataset of 2,771 total texts manually labeled according to the influence cues applied to the text. Quantitative analysis of the dataset showed that authority was the most prevalent influence cue, followed by subjectivity and commitment; gain framing was most prevalent in phishing emails, and use of emphasis was less common in fake, partisan, and mainstream news articles than in informal texts. Lumen presented comparable performance with LSTM in terms of F1-micro score, but better interpretability, providing insights into feature importance. Our results highlight the promise of ML to expose influence cues in text with the goal of application in tools to improve the accuracy of human detection of cyber-social engineering threats, potentially triggering users to think analytically. We advocate that the next generation of interventions to mitigate deception expose influence cues to users, complementing automatic detection to address new deceptive campaigns and improve user decision-making when confronted with potentially suspicious text.

Persuasion constitutes a series of influence principles based on Robert Cialdini's work, split into the following categories: 1) Authority or Expertise/Source Credibility, 2) Reciprocation, 3) Commitment (sub-categories: Indignation, Call to Action), 4) Liking, 5) Scarcity/Urgency/Opportunity, and 6) Social Proof (sub-category: Admonition).

Authority or Expertise/Source Credibility. Humans tend to comply with requests made by figures of authority and/or with expertise/credibility. The text can include:
• Literal authority (e.g., law enforcement personnel, lawyers, judges, politicians)
• A reputable/credible entity that could exert some power over people (e.g., a bank)
• Indirect authority (especially a fictitious company/person) that builds a setting of authority
Examples:
• "Tupac Shakur was indeed not just one of the greatest rappers of all time but a worldly icon whose status in hiphop culture can never be replaced. His revolutionary knowledge mixed with street experience made him powerful unstoppable force that spoke to the hearts of millions of people."
• "According to data from Mapping Police Violence"
• "Autopsy says"
• "Fox & Friends hosts declare"

Reciprocation. Humans tend to repay, in kind, what another person has provided them. Text might first give/offer something, expecting that the person/user will reciprocate. Even if the person does not reciprocate, s/he will still keep the "gift." Therefore, if the user thinks they received a gift, they may reciprocate the kindness (and may only find out later that the "gift" was fake).
Example:
• "Aww! Because you need such a cutie on your timeline!"

Commitment. Once humans have taken a stand, they will feel pressured to behave in line with their commitment. Text leverages a role assumed by the target and their commitment to that role.
Examples include petitions and donations/charity (gun control, animal abuse, children's issues, political issues), or engagements with political affiliations.
Examples:
• "But let's remember Tupac and his ability to question the social order. Changes, one of his popular songs, asks everyone to change their lifestyles for better society. He always asked people to share with each other and to learn to love each other."
• "Patriotism comes from your heart... follow its dictates and don't live a false life. Join!"
• "We will stand for our right to keep and bear arms!"
• "Black Matters"

Indignation. Still within the definition of commitment, text employing indignation also focuses on anger or annoyance provoked by what is perceived as unfair, unjust, unworthy, or mean treatment.
Examples:
• "Why should we be a target for police violence and harassment?"
• "Why the pool party in Georgia is a silent story? Why the police was not aware of a large party? Why this story has no national outrage? Is it ok when a black teenager dies?"
• "Obama never tried to protect blacks from police pressure"

Call to Action. Still within the definition of commitment, ads/text employing a call to action represent an exhortation or stimulus to do something in order to achieve an aim or deal with a problem: a piece of content intended to induce a viewer, reader, or listener to perform a specific act, typically taking the form of an instruction or directive (e.g., buy now or click here).
Examples:
• "Stop racism! We all belong to ONE HUMAN RACE."
• "We really can change the world if we stay united"
• "We can be heard only when we stand together"
• "White House must reduce the unemployment rates of black population"
• "If this is a war against police -we're joining this war on the cop's side!"
• "If we want to stop it, we should fight as our ancestors did it for centuries."

Liking. Humans tend to comply with requests from people they like or with whom they share similarities. Forms of liking:
• Physical attractiveness: good looks suggest other favorable traits, e.g., honesty, humor, trustworthiness
• Similarity: we like people similar to us in terms of interests, opinions, personality, background, etc.
• Compliments: we love to receive praise, and tend to like those who give it
• Contact and cooperation: we feel a sense of commonality when working with others to fulfill a common goal
• Conditioning and association: we like looking at models, and thus become more favorable towards the cars behind them
Liking may also come in the form of establishing familiarity or rapport with the object of liking.
Example:
• "What a beautiful and intelligent child she is. How magnificent is her mind..."

Scarcity/Urgency/Opportunity. Opportunities seem more valuable when their availability is limited. Text can leverage this principle by tricking/asking an Internet user into clicking on a link to avoid missing out on a "once-in-a-lifetime" opportunity, creating a sense of urgency.
Examples:
• "Is it time to call out the national guard?" (Urgency)
• "Free Figure's Black Power Rally at VCU" (Opportunity)
• "CLICK TO GET LIVE UPDATES ON OUR PAGE" (Opportunity)

Social Proof. People tend to mimic what the majority of people do or seem to be doing. People let their guard and suspicion down when everyone else appears to share the same behaviors and risks; in this way, they will not be held solely responsible for their actions (i.e., herd mentality). The actions of the group drive the decision-making process.
Examples: • "More riots are coming this summer" • "America is deceased. Islamic terror has penetrated our homeland and now spreads at a threw. Remember Victims Of Islamic Terror" Admonition. Within the definition of social proof, admonition pertains to texts that may include the following: • Caution, advise, or counsel against something. • Reprove or scold, especially in a mild and goodwilled manner: The teacher admonished him about excessive noise. • Urge to a duty or admonish them about their obligations. Examples: • "More riots are coming this summer" • "America is deceased. Islamic terror has penetrated our homeland and now spreads at a threw Remember Victims Of Islamic Terror" Slant Slant encompasses subjectivity and objectivity. Subjectivity. Subjective sentences generally refer to personal opinion, emotion or judgment. The use of popular adverbs (e.g, very, actually), upper case, exclamation and interrogation marks, hash tags, indicates subjectivity. Examples: • "I doubt that it's true" • "A beautiful message was seen on the streets of the capitol," • "A timely message for today." • "No matter what Defense Secretary or POTUS are saying they don't fool me with promises of gay military equality as key to the nation's agenda." • "This is something that America has a serious issue with -RACISM!" "Is it time to call out the national guard?" • "This makes me ANGRY!" Objectivity. Objective sentences refers to factual information, based on evidence, or when evidence is presented. May or may not include statistics. Examples: • "It has been discovered that" • "According to data from Mapping Police Violence" • "The McKinney Police Department, Chief Of Police Greg Conley said" Gain/Loss Framing refers to the presentation of a message (e.g., health message, financial options, advertisement etc.) as implying a possible gain (e.g., refer to possible benefits of performing a behavior) vs. implying a possible loss (e.g., refer to the costs of not performing a behavior). Gain. People are likely to act in ways that benefit them in some way. A reward will increase the probability of a behavior A promise that a product (or something else) can provide some form of self-improvement or benefit to the user. This product can come in the form of an ad, job offer, joining a group, etc. Examples: • "Think about the benefits of recycling." • "Think about what you can gain if you join." Loss. People are likely to act in ways that reduce loss/harm to them. Avoiding loss will increase the probability of a behavior A promise that a product (or something else) can help avoid some behavior/outcome. Examples: • "Think about the costs of recycling." • "Think about what you can lose if you don't join." When the text references an "another" (who/what) for the wrong/bad things happening. "Who" can be a person, organization, etc., and "what" can be a cause, object, etc. Example: • "...Hillary is a Satan, and her crimes and lies had proved just how evil she is." Emphasis refers to the use of all caps text, several exclamation points, several question marks, or anything used to call attention. Example: • "Our women are the most powerful!!" 
• "LATIN WOMEN CAN DO THINGS TO MEN WITH THERE EYES" Report on the investigation into Russian interference in the 2016 presidential election Fact check: Courts have dismissed multiple lawsuits of alleged electoral fraud presented by Trump campaign Beyond "fake news": Analytic thinking and the detection of false and hyperpartisan news headlines Facebook News Use During the 2017 Norwegian Elections-Assessing the Influence of Hyperpartisan News A third wave of selective exposure research? The challenges posed by hyperpartisan news on social media The Oxford Handbook of Political Communication Influence: The Psychology of Persuasion A Circumplex Model of Affect Emotional Arousal May Increase Susceptibility to Fraud in Older and Younger Adults Shaping Perceptions to Motivate Healthy Behavior: The Role of Message Framing Prospect Theory: An Analysis of Decision under Risk The Science of Persuasion Government and state-affiliated media account labels The Development and Psychometric Properties of LIWC Beyond the Lock Icon: Real-time Detection of Phishing Websites Using Public Key Certificates PhishFarm: A Scalable Framework for Measuring the Effectiveness of Evasion Techniques against Browser Phishing Blacklists Detecting Phishing Attacks Using Natural Language Processing and Machine Learning Deconstructing the Phishing Campaigns that Target Gmail Users Detection of Phishing Attacks: A Machine Learning Approach A Multi-Classifier Based Prediction Model for Phishing Emails Detection Using Topic Modeling, Named Entity Recognition and Image Processing Phishing email detection based on structural properties Improving SSL Warnings: Comprehension and Adherence Crying Wolf: An Empirical Study of SSL Warning Effectiveness Anti-Phishing Phil: The Design and Evaluation of a Game That Teaches People Not to Fall for Phish What Do We Really Know About How Habituation to Warnings Occurs Over Time?: A Longitudinal fMRI Study of Habituation and Polymorphic Warnings Alice in Warningland: A Large-scale Field Study of Browser Security Warning Effectiveness The Psychology of Fake News Checking how fact-checkers check Fake news, fast and slow: Deliberation reduces belief in false (but not true) news headlines Understanding Scam Victims: Seven Principles for Systems Security Empirical Analysis of Weapons of Influence, Life Domains, and Demographic-Targeting in Modern Spam -An Age-Comparative Perspective Persuasion: How phishing emails can influence users and bypass security measures Dissecting Spear Phishing Emails for Older vs Young Adults: On the Interplay of Weapons of Influence and Life Domains in Predicting Susceptibility to Phishing Interaction of Personality and Persuasion Tactics in Email Phishing Attacks Analysing persuasion principles in phishing emails Influence techniques in phishing attacks: An examination of vulnerability and resistance Susceptibility to Spear-Phishing Emails: Effects of Internet User Demographics and Email Content What Drives Hyper-Partisan News Sharing: Exploring the Role of Source, Style, and Content Liberals and conservatives rely on different sets of moral foundations Social Media Advertisements Fnid: Fake news inference dataset Phishing scam reports archive Alerts & Notifications, Information Technology Phishing Scams Targeting the UMN Phish Bowl/Phishing Scams Recent Phishing Examples The psychology of persuasion Prospect theory Lights, camera, conflict: Newspaper framing of the 2008 screen actors guild negotiations Newspaper portrayals of child abuse: Frequency of coverage and 
frames of the issue Measures of political talk frequency: Assessing reliability and meaning Frequent but accurate: A closer look at uncertainty and opinion divergence in climate change print news Reliability of nominal data based on qualitative judgments Content analysis in mass communication: Assessment and reporting of intercoder reliability The Influence of Framing on Risky Decisions: A Meta-analysis Vader: A Parsimonious Rule-based Model for Sentiment Analysis of Social Media Text Handbook of latent semantic analysis Topic models Latent Dirichlet Allocation Finding Scientific Topics The Psychological Meaning of Words: LIWC and Computerized Text Analysis Methods Cold-blooded Lie Catchers? An Investigation of Psychopathy, Emotional Processing, and Deception Detection: Psychopathy and Deception Detection Large Stakes and Big Mistakes Natural Language Toolkit A New Evaluation Framework for Topic Modeling Algorithms Based on Synthetic Corpora Labeled LDA: A Supervised Topic Model for Credit Attribution in Multilabeled Corpora Cognitive Triaging of Phishing Attacks Comparative study of word embedding methods in topic segmentation Measuring the reach of "fake news" and online disinformation in Europe The authors would like to thank the coders for having helped with the labeling of the influences cues in our dataset. This work was support by the University