Overview

Tagline: A venue for research on using advanced NLP to detect, understand, and defend against current and future threats in online social platforms.

Description: Social threats to individuals and organizations are prevalent in online conversations, where human vulnerabilities pave the way for phishing, propaganda, scams, disinformation campaigns, and social engineering tactics (Bakhshi, Papadaki, and Furnell 2008; Karakasiliotis, Furnell, and Papadaki 2007). For example, over 80% of cyber penetrations start with a social engineering attack (Verizon/TM 2014), often through manipulative language use, causing unsuspecting victims to lose money, sensitive information, or control of resources. Detection techniques based on metadata have yielded minimal success in the face of rising personalized attacks, especially those involving impersonation and power relationships (e.g., a spoofed dean requesting a gift card purchase from department faculty). The implications are potentially more dire for disinformation campaigns, as these are implemented on a much larger scale. Natural language processing (NLP) and computational sociolinguistics techniques, in conjunction with metadata analysis, can provide a better means for detecting and countering attacks and disinformation campaigns in a wide variety of online conversational contexts (Dalton et al. 2019; Kim et al. 2018; Dalton et al. 2017; Sawa et al. 2016). The STOC (Social Threats in Online Conversations: Understanding and Management) workshop is the first dedicated to gleaning the actions and intentions of adversaries in online conversations, from the adversaries’ language use coupled with communication content.

Topics: Workshop topics include, but are not limited to, the following:

Submissions

Both long (8 pp.) and short (4 pp.) papers must follow the LREC stylesheet and can be submitted here. Ethics statement: Submissions must include a brief statement about “ethical considerations” that addresses the potential for malicious use of the technology, or other possible negative impacts, and how these are mitigated.

Important Dates

First call for workshop papers: Dec 20, 2019
Second call for workshop papers: Jan 15, 2020
Final call for workshop papers: Feb 1, 2020
Workshop papers due: Feb 14, 2020
Notification of acceptance: March 13, 2020
Camera-ready papers due: April 2, 2020
Workshop date: May 11, 2020

Schedule (Proceedings)

Invited Speakers

Accepted Papers

Organizing Committee

Archna Bhatia, Research Scientist at the Florida Institute for Human and Machine Cognition, abhatia@ihmc.us. Her research interests are in natural language understanding and in improving NLP systems using deep linguistic knowledge. She was involved in organizing the SpLU-RoboNLP 2019 and SpLU 2018 workshops, and she led an effort to develop Hindi for the PARSEME shared task.
Adam Dalton, Research Associate at the Florida Institute for Human and Machine Cognition, adalton@ihmc.us. His research focuses on using natural language processing to improve cyber threat intelligence. He has developed tools to improve open-source intelligence gathering using information foraging and to improve cyber attack prediction through social signals such as outrage.
Bonnie Dorr, Senior Research Scientist and Associate Director of the Florida Institute for Human and Machine Cognition, bdorr@ihmc.us. Her research focuses on semantics, machine translation, language understanding, and cyber-NLP. She has organized multiple conferences and workshops, e.g., ACL (local organization, 1999), the AAAI Symposium Series (director, 1994–1999), and multiple AMTA workshops (co-chair). She has served multiple times as an ACL area chair and is PI or co-I on CAUSE, SocialSim, and ASED.
Samira Shaikh, Assistant Professor at the University of North Carolina at Charlotte, samirashaikh@uncc.edu. Her research interests are in using natural language processing and cognitive science to program intelligent agents. She is co-PI on DARPA’s ASED program (PANACEA team) and co-chair of the Widening NLP workshop (WiNLP), a two-year position. This year the WiNLP workshop was co-located with ACL.
Tomek Strzalkowski, Professor of Cognitive Science and Computer Science at Rensselaer Polytechnic Institute, tomek@rpi.edu. Previously Professor of Computer Science at SUNY Albany and Director of Albany’s ILS Institute. His research focuses on computational linguistics, sociolinguistics, and socio-behavioral computing. He was involved in IBM’s Jeopardy! Challenge, and he has organized, and served as an invited speaker at, several workshops for ACL, LREC, ACM, and ISAT.

Program Committee

Sponsors