Dangerous by Design
Two Studies: Social Media Algorithms Fuel Online Hate
New studies from the Anti-Defamation League and Tech Transparency Project show how some of the biggest tech platforms directly contribute to the proliferation of online antisemitism, hate, and extremism.

The question of who is accountable for the proliferation of antisemitism, hate, and extremism online has been hotly debated for years. Are our digital feeds really a reflection of society, or do social media platforms and tech companies actually amplify virulent content themselves?

The companies argue that users are primarily responsible for the corrosive content soaring to the top of news feeds and reverberating between platforms. This argument serves to absolve these multi-billion-dollar companies from responsibility for any role their own products play in exacerbating hate.

A new pair of studies from ADL (the Anti-Defamation League) and TTP (Tech Transparency Project) shows how some of the biggest social media platforms and search engines at times directly contribute to the proliferation of online antisemitism, hate, and extremism through their own tools and, in some cases, by creating content themselves. While many variables contribute to online hate, including individual users’ own behavior, our research demonstrates how these companies are taking things from bad to worse.

For these studies, we created male, female, and teen personas (without a specified gender) who searched for a basket of terms related to conspiracy theories, as well as popular internet personalities, commentators, and video games, across four of the biggest social media platforms to test how these companies’ algorithms would respond. In the first study, three of the four platforms recommended even more extreme, contemptuously antisemitic, and hateful content. One platform, YouTube, did not take the bait: it was responsive to the persona but resisted recommending antisemitic and extremist content, proving that this is not just a problem of scale or capability.

In our second study, we tested search functions at three companies, all of which made finding hateful content and groups a frictionless experience by autocompleting terms and, in some cases, even auto-generating content to fill hate-related data voids. Notably, the companies didn’t autocomplete terms or auto-generate content for other forms of offensive material, such as pornography, proving, again, that this is not just a problem of scale or capability.

What these investigations ultimately revealed is that tech companies’ hands aren’t tied. Companies have a choice in what to prioritize, including when it comes to tuning algorithms and refining design features to either exacerbate or help curb antisemitism and extremism.

As debates rage between legislators, regulators, and judges on AI, platform transparency, and intermediary liability, these investigations underscore the urgency for both platforms and governments to do more. Based on our findings, here are three recommendations for industry and government:

1. Tech companies need to fix the product features that currently escalate antisemitism and auto-generate hate and extremism. Tech companies should tune their algorithms and recommendation engines to ensure they are not leading users down paths riddled with hate and antisemitism. They should also improve predictive autocomplete features and stop auto-generation of hate and antisemitism altogether.

2. Congress must update Section 230 of the Communications Decency Act to fit the reality of today’s internet. Section 230 was enacted before social media and search platforms as we know them existed, yet it continues to be interpreted to provide those platforms with near-blanket legal immunity for online content, even when their own tools are exacerbating hate, harassment, and extremism. We believe that by updating Section 230 to better define what type of online activity should remain covered and what type of platform behavior should not, we can help ensure that social media platforms more proactively address how recommendation engines and surveillance advertising practices are exacerbating hate and extremism, which lead to online harms and potential offline violence. With the advent of social media, the use of algorithms, and the surge of artificial intelligence, tech companies are more than merely static hosting services. When there is a legitimate claim that a tech company played a role in enabling hate crimes, civil rights violations, or acts of terror, victims deserve their day in court.

3. We need more transparency. Users deserve to know how platform recommendation engines work. This does not need to be an exercise in revealing trade secrets, but tech companies should be transparent with users about what they are seeing and why. The government also has a role to play. We’ve seen some success on this front in California, where transparency legislation was passed in 2022. Still, there’s more to do. Congress must pass federal transparency legislation so that stakeholders (the public, researchers, and civil society) have access to the information necessary to truly evaluate how tech companies’ own tools, design practices, and business decisions impact society.

Hate is on the rise. Antisemitism both online and offline is becoming normalized. A politically charged U.S. presidential election is already under way. This is a pressure cooker we cannot afford to ignore, and tech companies need to take accountability for their role in the ecosystem.

Whether you work in government or industry, or are a concerned digital citizen or tech advocate, we hope you find this pair of reports informative. There is no single fix to the scourge of online hate and antisemitism, but we can and must do more to create a safer and less hate-filled internet.

Yaël Eisenstat, Vice President, ADL Center for Technology and Society
Katie Paul, Director, Tech Transparency Project

Read the reports here.

August 17, 2023