
On Tuesday, April 21, The Westport Library welcomes NYU researcher Dr. Matthias Becker to reveal how hate, bias, and hidden messaging spread across social media — often in ways you may not recognize.
Presented by the Library's Common Ground Initiative, this informative seminar invites participants of all ages to Decode Hate, equipping them with the tools to identify harmful discourse and recognize how it shapes our worldview both online and offline.
A consistent thread running through Dr. Becker's research is the question of how implicit hate speech is constructed and the conditions under which it is produced. He recently shared some insights with us that give important context to his work.
***
Common Ground Q&A
Westport Library: What is the work you are currently doing?
Matthias Becker: Think about the last time you scrolled through your feed and something made you uneasy — a comment that seemed off, a meme that landed wrong, a phrase you couldn't quite place but felt was doing something. Maybe it was about the war in Iran. Maybe it was about an election. You probably kept scrolling. That moment — that flicker of recognition before you moved on — is exactly where my research begins.
I lead a research project called Decoding Hate at New York University's Center for the Study of Antisemitism, where I study how hate speech, conspiracy narratives, and mis- and disinformation spread on social media — and how we can detect and counter them. We use a combination of linguistic analysis and AI-supported tools to examine hundreds of thousands of online comments, looking at not just what people say, but how they say it: the coded language, the irony, the strategic ambiguity that allows hateful ideas to circulate without ever sounding overtly extreme. Together with AddressHate, we're building detection systems that don't just flag risky content but identify what kind of harm is present and why — in language that's legible to educators, policymakers, and courts.
Westport Library: Why is it such important work?
Matthias Becker: Because hate doesn't announce itself — and neither does the AI that's spreading it.
Most of what circulates online doesn't look like the crude hatred of decades past. It looks like irony, insinuation, strategic ambiguity — ideas traveling in plain sight, just below the threshold of what most people would call extreme. The distinction between free speech and hate speech matters enormously here — and it's precisely this coded, ambiguous nature of modern hate that makes drawing that line so difficult, and so consequential. That also makes these expressions extraordinarily hard to detect, for humans and AI systems alike.
My research addresses three interconnected drivers of this problem. First, coordinated bad actors who deliberately exploit divisive issues and manufacture disinformation at scale. Second, platform algorithms that reward outrage and amplify the most emotionally charged content, regardless of whether it's true or harmful. Third, the conditions of online communication itself — anonymity, mutual reinforcement, constant exposure to extremity — which turn ordinary users into unwitting amplifiers of hate. If we don't understand these mechanics, we can't build tools that actually work — and communities, educators, and platforms remain one step behind.
And here's the deeper problem: most public debate about AI and hate focuses on what AI produces — offensive outputs, extremist content. That's real. But it's downstream of a harder issue: what AI absorbs. Every major model shows consistent bias toward hateful associations — not because engineers are hateful, but because models were trained on centuries of human text in which those associations are already embedded. You can add guardrails. The underlying associations remain.
Westport Library: How does this work affect those who come to the talk?
Matthias Becker: Everyone in that room uses social media — or lives with someone who does. The talk is designed to give people practical insight into what's actually happening in the digital spaces they inhabit every day: why certain content keeps showing up in their feeds, how ordinary-seeming posts can normalize extreme ideas over time, and what they can do about it.
But it goes further than awareness. We'll look closely at how irony, coded language, and strategic ambiguity allow hate speech, conspiracy narratives, and disinformation to spread while evading both human recognition and automated detection — and how algorithms and coordinated actors actively accelerate that process. You'll leave with a sharper eye for what you're seeing online, a clearer understanding of the structural forces shaping it, and concrete tools to act — because recognizing how manipulation works is the first step toward refusing it.
The talk is designed for a general adult audience, but the core questions — why do people share harmful content, how do algorithms shape what we see, what does coded language actually do — translate directly into a school-facing format as well. I'd be delighted to work with the library on a version tailored for students, whether as a classroom visit, a youth program, or a separate evening event. Digital literacy around hate, disinformation, and algorithmic influence is arguably most urgent for the generation that has grown up entirely inside these systems — and there is no more important investment we can make than equipping young people to see clearly, think critically, and push back.
***
Tuesday, April 21, 6 pm
Decode Hate Video Challenge for Teens
Calling all teens — Make the internet a better place, one video at a time! Join us in Brooks Place before Dr. Becker's seminar to find out how you can win up to $1000 by creating a compelling video that challenges hate and bias on social media.
Tuesday, April 21, 7 pm
Decode Hate on Social Media with Matthias J. Becker
As social media transcends the boundaries of the digital world, how do we differentiate between free speech and hate speech online — and how do we combat the latter's harmful effects? Dr. Becker will deliver an informative seminar for an intergenerational audience, emphasizing practical, research-informed insight into understanding and navigating contemporary online discourse and its real-world consequences.
Thursday, May 28, 6 pm
Teens' Decode Hate Video Challenge Awards Ceremony & Follow-Up Discussion
Join the top five finalists of our Decode Hate Video Challenge for a LIVE judging panel and awards ceremony to celebrate the winners with cash prizes! Dr. Matthias Becker will be in attendance as one of the judges and will hold a public Q&A forum for participants who would like to debrief on his April 21 event.