Tagline: A venue for research on using advanced NLP to detect, understand, and defend against current and future threats in online social platforms.
Description: Social threats to individuals and organizations are prevalent in online conversations, where human vulnerabilities pave the way for phishing, propaganda, scams, disinformation campaigns, and social engineering tactics (Bakhshi, Papadaki, and Furnell 2008; Karakasiliotis, Furnell, and Papadaki 2007). For example, over 80% of cyber penetrations start with a social engineering attack (Verizon/TM 2014), often through manipulative language use, costing unsuspecting victims money, sensitive information, or control of resources. Detection techniques based on metadata have yielded minimal success against rising personalized attacks, especially those involving impersonation and power relationships (e.g., a spoofed dean requesting a gift card purchase from a department faculty member). The implications are potentially more dire for disinformation campaigns, which are implemented on a much larger scale. Natural language processing (NLP) and computational sociolinguistics techniques, in conjunction with metadata analysis, can provide a better means for detecting and countering attacks and disinformation campaigns in a wide variety of online conversational contexts (Dalton et al. 2019; Kim et al. 2018; Dalton et al. 2017; Sawa et al. 2016). The STOC (Social Threats in Online Conversations: Understanding and Management) workshop is the first dedicated to gleaning the actions and intentions of adversaries in online conversations from their language use coupled with communication content.
Topics: The topics of the workshop include but are not limited to the following:
- Development and evaluation of corpora to study social engineering threats and attacks in various forms of online communication, such as email, SMS, Slack, WhatsApp, and LinkedIn
- Development and evaluation of corpora to study large-scale influence and disinformation campaigns targeting specific communities via social media
- Challenges in developing corpora for social engineering attacks and disinformation campaigns
- Advances in NLP for understanding online conversations and social engineering contexts, e.g., semantic parsing, information retrieval, and question answering
- Detection of social threats at different scales, e.g., from mass phishing attacks to targeted social engineering against individuals and businesses to sophisticated disinformation campaigns against entire populations
- NLP-based mitigation techniques for social engineering attacks (e.g., verification of provenance) and for disinformation campaigns (e.g., counter messaging)
- Dialogue/narrative understanding and generation for bots to counter social engineering attacks
- Strategies for countering unfolding disinformation campaigns to slow and stop their progress
- Automatic detection of actions and intentions of participants in online conversations, e.g., the implied “ask” in the sentence “... will result in eligibility for a $500 prize”
- Automatic detection of the “provocation” underlying a disinformation campaign and the socio-cognitive vulnerabilities of the target population it aims to exploit
- Natural language generation techniques to enable bot development for controlled, goal-directed, and yet natural-sounding conversations with potential adversaries and their followers
- Active and passive defense mechanisms used for development of conversational bots
- Risk and trust models for operating NLP bots with discretion and autonomy to engage with an adversary or an adversary’s followers
- Persuasion techniques used in dialogue/narrative in social engineering contexts and in disinformation campaigns
- Identification of attitudes that adversaries attempt to induce in targets for compliance
- Techniques to induce attitudes in adversaries or their followers through a range of different countermeasures
- Social impact of disinformation campaigns, social engineering attacks, persuasion techniques, etc., that exploit language and communication strategies
- Evaluation of the impact of different types of social engineering attacks and disinformation campaigns
Submissions: Both long (8pp) and short (4pp) papers must follow the LREC Stylesheet and can be submitted here. For the STOC workshop, all submissions need to be anonymized.
Ethics statement: Submissions must include a brief statement about “ethical considerations” that addresses the potential for malicious use of the technology or other possible negative impacts and how these are mitigated.
Important dates:
- First call for workshop papers: Dec 20, 2019
- Second call for workshop papers: Jan 15, 2020
- Final call for workshop papers: Feb 1, 2020
- Workshop papers due: Feb 21, 2020
- Notification of acceptance: March 12, 2020
- Camera-ready papers due: April 7, 2020
Invited speakers:
- Dr. Rosanna E. Guadagno, Director of the Information Warfare Working Group at the Center for International Security, Stanford; former NSF Program Director. Her research focuses on social influence and persuasion, mediated communication, and gender roles. Forthcoming book: Psychological Processes in Social Media: Why We Click.
- Title: Information warfare in the social media age
- Abstract: Misinformation – false information passed off as factual – is an effective weapon in the information age and has become widely used to influence people’s attitudes and behavior on social media. For instance, the Russian Internet Research Agency’s information warfare campaign supported Donald Trump’s successful candidacy in the 2016 United States presidential election, with some arguing that these efforts were effective in facilitating Trump’s Electoral College victory (Jamieson, 2018). In this talk, I examine the complex relationship between people’s social media use and their susceptibility to information warfare campaigns intended to sow mis- and disinformation. To accomplish this, I review the literature on social influence and persuasion via social media, focusing on the role that perceived social norms, cognitive dissonance, obedience to authority, and the viral nature of social media content play in people’s willingness to believe mis- and disinformation attempts. The talk concludes with a discussion of potential strategies that individuals, policy makers, and technology companies could adopt to help protect unsuspecting people from these types of influence operations.
- Bio: here
- Dr. Ian Harris, Professor of Computer Science, University of California, Irvine. His research focuses on social engineering attack detection, embedded systems security, and electronic design automation from natural language. Author of Catch me, Yes we can! - Pwning Social Engineers using Natural Language Processing Techniques in Real-Time.
- Title: Social engineering and how to detect it
- Abstract: Social engineering is a modern name for the actions of a con artist: the manipulation of people for profit. Social engineering has probably existed as long as people have communicated with one another, but with the advent of computer-mediated communication, the reach and impact of social engineers have grown to tremendous proportions. Training regimes have been used to prepare people to detect and resist these attacks, but a clever social engineer can alter the victim's state of mind to make them more vulnerable. Fortunately, advances in NLP have made it possible to automatically infer not just the semantics, but also aspects of the pragmatics, of a sequence of dialog acts. That is, we can determine not only the meaning of an individual utterance, but also the intent of the speaker in making it. In this talk I will present previous work in social engineering detection and discuss my group's work on detecting social engineering attacks by identifying utterances with malicious intent. I will also discuss how these approaches can be applied more generally to identify other interesting aspects of meaning in conversations.
- Bio: here
Organizers:
- Research Scientist at the Florida Institute for Human and Machine Cognition. Her research interests are in natural language understanding and improving NLP systems using deep linguistic knowledge. She was involved in organizing the SpLU-RoboNLP 2019 and SpLU 2018 workshops and led an effort to develop Hindi resources for the PARSEME shared task.
- Research Associate at the Florida Institute for Human and Machine Cognition. His research focuses on using natural language processing to improve cyber threat intelligence. He has developed tools to improve open-source intelligence gathering using information foraging and to improve cyber attack detection through social signals such as outrage.
- Senior Research Scientist and Associate Director of the Florida Institute for Human and Machine Cognition. Her research focuses on semantics, MT, language understanding, and cyber-NLP. She has organized multiple conferences and workshops, e.g., ACL 1999 (local organization) and the AAAI Symposium Series (director, 1994-1999), and has co-chaired multiple AMTA workshops. She has served multiple times as an ACL area chair, and is PI or co-I on CAUSE, SocialSim, and ASED.
- Assistant Professor at the University of North Carolina at Charlotte. Her research interests are in using natural language processing and cognitive science to program intelligent agents. She is co-PI on DARPA’s ASED program (PANACEA team) and co-chair of Widening NLP (WiNLP), a two-year position; this year the WiNLP workshop was co-located with ACL.
- Professor of Cognitive Science and Computer Science at Rensselaer Polytechnic Institute. Previously Professor of Computer Science at SUNY Albany and Director of Albany’s ILS Institute. His research focuses on computational linguistics, sociolinguistics, and socio-behavioral computing. He was involved in IBM’s Jeopardy! Challenge and has organized, and served as an invited speaker at, several workshops for ACL, LREC, ACM, and ISAT.
Program committee:
- Ehab Al-Shaer (UNCC)
- Genevieve Bartlett (USC-ISI)
- Emily Bender (UWash)
- Larry Bunch (IHMC)
- Esteban Castillo (UAlbany)
- David DeAngelis (ISI)
- Mona Diab (GWU/Amazon)
- Sreekar Dhaduvai (UAlbany)
- Min Du (Berkeley)
- Maxine Eskenazi (CMU)
- William Ferguson (Raytheon)
- Mark Finlayson (FIU)
- Marjorie Freedman (ISI)
- Bryanna Hebenstreit (UAlbany)
- Christopher Hidey (Columbia)
- Scott Langevin (Uncharted)
- Christian Lebiere (CMU)
- Kristina Lerman (USC/ISI)
- Fei Liu (UCF)
- Amir Masoumzadeh (UAlbany)
- Kathleen McKeown (Columbia)
- Alex Memory (Leidos)
- Chris Miller (SIFT)
- Mark Orr (Virginia Tech)
- Ian Perera (IHMC)
- Alan Ritter (OSU)
- James Ryan (Raytheon)
- Emily G. Saldanha (PNNL)
- Sashank Santhanam (UNCC)
- Sonja Schmer-Galunder (SIFT)
- Svitlana Volkova (PNNL)
- Zhou (Joy) Yu (UC Davis)
- Ning Yu (Leidos)
- Alan Zemel (UAlbany)
Organizing institutions:
- Rensselaer Polytechnic Institute
- The Florida Institute for Human and Machine Cognition
- University of North Carolina at Charlotte