My Personal Fight Against Online Hate
by Christopher Wolf

Chris Wolf is Senior Counsel Emeritus at Hogan Lovells US LLP in Washington, DC and the co-author of Viral Hate: Containing Its Spread on the Internet. This essay is adapted from a talk he gave at Berkeley Law on February 4, 2021.
In 1995, I became one of the first lawyers to focus on the Internet as an area of law. The Internet was just then being broadly introduced to consumers. People were coming to recognize it as a magnificent tool for communication, entertainment and education. But as with every mode of communication throughout history, the Internet was being used for good and for ill, and it was being adopted by hatemongers, including homophobes.
As a leader at the Anti-Defamation League (ADL), which has been fighting hate since its founding in 1913, I urged that ADL focus on Internet hate. In 1995, I offered to lead the effort, which has continued and grown to this day. My personal fight against online hate has continued unabated.
The dark side of the Internet that was emerging in 1995 has gotten worse, by several orders of magnitude, with the expansion of social media tools. The ease with which information is shared comes with a price. Every day, individuals and groups use the power of the Internet as a weapon to spread vitriol aimed at racial, ethnic, religious and sexual minorities, and other targets. Calls for violence, bigoted rants, lies, bullying, and conspiracy theories circulate openly on the Internet, with effects on individuals and society that are profound and dangerous.
ADL quickly came to lead the focus on Internet hate here and abroad. I became Chair of the International Network Against Cyber Hate and represented ADL at programs in Israel and across Europe.
One of my more memorable trips was to a Paris conference hosted by the Government of France. At one panel, I was asked why the US Congress didn’t criminalize certain hate speech as France and Germany had done. I explained that most hate speech was not and could not be made illegal because of the broad license granted by the First Amendment. That was why speech illegal in Europe was showing up on web sites hosted in the US. I was not exactly going out on a limb with that answer as a matter of Constitutional law. Still, a former French Minister of Justice yelled out from the back of the room, “Stop hiding behind the First Amendment.” I didn’t realize that’s what I was doing. In any event, I didn’t think an amendment to the Constitution was likely, or that I could play any role in its passage.
Recently, in an article for the New York Times Magazine, Emily Bazelon reiterated that the First Amendment sets a high bar for punishing inflammatory words. She cited University of Chicago law professor Geoffrey R. Stone, who has written that Supreme Court precedent “wildly overprotects free speech from any logical standpoint,” but that “the court learned from experience to guard against a worse evil: the government using its power to silence its enemies.”
Back in the US, my work with the now-retired ADL National Director Abe Foxman culminated in our writing a book about online hate in 2013. In the book, we said that the online hate we are witnessing can fairly be labeled “Viral Hate,” which happens to be the title of the book – Viral Hate: Containing Its Spread on the Internet. If we were to write an update to the book today, we would observe that as with the virus causing the Covid-19 pandemic, viral hate online is mutating. Unlike its physical cousin, however, there is no vaccine to combat the virus of online hate.
I’m still involved with the League today as a member of the national ADL Board of Directors. And I’m proud to say that ADL’s Center for Technology and Society is a leader in the fight against online hate.
Several weeks ago, Adam Clark Estes wrote a piece for Recode on Vox entitled “How neo-Nazis used the internet to instigate a right-wing extremist crisis.” In his article, Estes wrote, quoting now, “White supremacists have historically been early to technological trends, sometimes even shaping how mainstream Americans experienced them. Consider that The Birth of a Nation, an influential 1915 film by D.W. Griffith based on a 1905 novel called The Clansman and credited with reviving the Ku Klux Klan, was the first film to be shown at the White House. One could argue that almost a century later, tech-savvy white supremacists played a critical role in putting Trump in the White House. From the beginning, they seemed to know just how powerful and transformative the internet would be.”
I agree. It always has seemed that haters have been early adopters.
The Vox Recode piece makes this chilling observation:
“Th[e] communication structure has evolved dramatically since a few ambitious neo-Nazis plugged their computers into dial-up modems and built the early networks of hate. Being an extremist is a mobile, multimedia experience now, thanks to smartphones, social media, podcasts, and livestreaming. And it’s not just the leaderless resistance strategy that has endured among right-wing extremists. A number of neo-Nazi themes — namely those drawn from a racist dystopian novel from the 1970s called The Turner Diaries — have also transcended the decades of technological advancement to crop up again during the Capitol riot in January.”
If we are going to combat the advance of online hate, it makes sense, doesn’t it, that we should define our terms. If advocates are going to ask Facebook and Twitter to take down hate-filled content, there needs to be some agreement on terminology, right?
Some readers of our book asked us why we didn’t specify a definition of online hate, just as some listeners to my talks ask the same question. It’s not that it’s impossible to come up with a definition. I could say the following: “Online hate is a form of expression – words, pictures, music and/or videos — that takes place online — on the Internet and in social media — with the purpose of attacking a person or a group on the basis of attributes such as race, religion, ethnic origin, sexual orientation, disability, or gender.” That dry definition sounds a lot like the terms of service used by online platforms. But the definition does not begin to convey the scope of hate on the Internet.
So I was thinking. Maybe this is the way to define online hate – and bear with me as this list is long:
Online hate is anti-Semitism. Online hate is cyberbullying. Online hate is cyberstalking. Online hate is cyber-harassment. Online hate is doxing. Online hate is swatting. Online hate uses deep fakes. Online hate is Holocaust denial. Online hate is the pollution of mean-spirited and off-topic comments appended to legitimate online articles. Online hate includes the promotion of white supremacist merchandise on shopping sites. Online hate is right-wing, anti-immigrant sentiment on social-media sites that leads to violent crimes against immigrants. Online hate is speech that spurs attacks on minorities in civil wars around the globe. Online hate is the radicalization of young people. Online hate is recruitment by extremist groups. Online hate is the product of anonymity, where people do and say things they never would do or say with attribution. Online hate is harassment and intimidation that affects lives and sometimes silences people, online and offline. Online hate is the propagation of stereotypes and lies about LGBTQ and other minority groups. Online hate includes falsehoods that poison the minds of young people for all of their lives. Online hate scares and scars people, with lifelong effects. And as we learned on January 6, online hate includes the propagation of lies that leads to violent insurrection.
That definition of online hate is a mouthful. And even as comprehensive as it may seem, it does not begin to encompass the breadth of online hate. For example, I didn’t mention the online conduct that drove college student Tyler Clementi to jump off the George Washington Bridge. Nor did I mention the epidemic of revenge porn. Sadly, the examples go on and on.
One of the roadblocks to precisely defining online hate is that hate speech often depends on context. For example, Nazi propaganda posted by an anti-Semitic white supremacist group easily fits into the category of hate speech while the same propaganda posted by a college professor teaching World War II and the Holocaust would not be classified as hate speech.
Deborah Lipstadt, the preeminent scholar on anti-Semitism, has observed that there is no reason to be frustrated by the fact that you can’t quite define anti-Semitism. She said: “Much of the general public can’t define it. Even scholars in the field can’t agree on a precise definition. In fact, there are people, particularly Jews, who eschew definitions and argue that Jews can feel anti-Semitism in their bones, the same way that African Americans can recognize racism and gays recognize homophobia.” She likened this to the famous comment by Justice Stewart about hard core pornography, “I know it when I see it.” Of course, that will not suffice as a definition that online platforms can use in their Community Standards or Terms of Use.
The fact that it is difficult to precisely and comprehensively define online hate, and that it often depends on context, is why policing it is so difficult. There is no hate filter that can be downloaded to one’s computer to keep out offensive material on the Internet. Unlike copyrighted content and child pornography, which can be hashed for detection online, identifying hate speech very often is a subjective judgment. That is why Facebook employs more than 50,000 monitors worldwide to evaluate user complaints about improper content.
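To make that contrast concrete, here is a minimal sketch in Python, of my own construction rather than any platform’s actual system, showing why hash matching works for known files but not for hate speech. An exact fingerprint catches a byte-for-byte re-upload of previously identified content; a hateful post that is reworded, translated, or dependent on context produces a different hash entirely, which is why human review is still required.

```python
import hashlib


def fingerprint(data: bytes) -> str:
    """Exact cryptographic fingerprint of a piece of content."""
    return hashlib.sha256(data).hexdigest()


# Hypothetical blocklist: fingerprints of content already identified and reviewed.
# Real systems use large databases of hashes (often perceptual hashes for images);
# plain SHA-256 is used here for illustration only.
known_item = b"a previously identified piece of prohibited content"
KNOWN_BAD_HASHES = {fingerprint(known_item)}


def is_known_bad(data: bytes) -> bool:
    """Matches only byte-for-byte re-uploads of already-identified content."""
    return fingerprint(data) in KNOWN_BAD_HASHES


print(is_known_bad(known_item))                  # True: exact re-upload is caught
print(is_known_bad(b"the same idea, reworded"))  # False: a hash cannot capture
                                                 # meaning or context
```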
Beyond defining online hate is the issue of what kind of impact hate is having today. We have seen how hate groups use the Internet to indoctrinate, recruit and rally their followers. Again, the January 6th insurrection at the Capitol is a fresh and dramatic example.
But beyond use of the Internet as a collaborative tool for haters, a recent ADL survey showed that nearly 30 percent of Americans reported experiencing severe online hate and harassment, including sexual harassment, stalking, physical threats, swatting, doxing or sustained harassment. Individuals from marginalized groups reported feeling less safe online last year than in the past. This was particularly evident on Facebook: 77 percent of those who were harassed reported that it happened there, more than on any other platform.
It probably is stating the obvious, but a piece of online hate hidden in obscurity on the Internet does little to no harm. But that same content, recommended in search results and otherwise syndicated across the Internet, has far greater harmful potential. And so it is heartening to learn of experiments by the platforms involving AI and other techniques to alter the prominence or trending nature of posts, so that posts remain online but are not easily found by the ordinary user.
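The details of those ranking experiments are not public, but the general idea can be sketched. The toy Python example below is my own illustration under assumed inputs (a hypothetical classifier score and a demotion factor), not any platform’s actual code: a feed score is multiplied down when a post is flagged as likely borderline, so the post stays online but is rarely surfaced.

```python
from dataclasses import dataclass


@dataclass
class Post:
    text: str
    engagement: float        # e.g., predicted likes and shares
    borderline_score: float  # hypothetical classifier output in [0, 1]


def feed_score(post: Post, demotion_strength: float = 0.9) -> float:
    """Rank posts by engagement, but demote likely-borderline content.

    A post flagged with high confidence keeps only a small fraction of the
    score it would otherwise earn, so it remains online but is rarely shown.
    """
    demotion = 1.0 - demotion_strength * post.borderline_score
    return post.engagement * demotion


posts = [
    Post("benign post", engagement=100.0, borderline_score=0.0),
    Post("likely borderline post", engagement=100.0, borderline_score=0.95),
]

for p in sorted(posts, key=feed_score, reverse=True):
    print(f"{p.text}: score={feed_score(p):.1f}")
```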
But as the platforms step up their roles as online moderators to eliminate online hate, an old concern has re-emerged. What process are the platforms using in applying their terms of use and community standards?
More than ten years ago, Professor Jeff Rosen wrote a cover story for the New York Times Magazine entitled “Google’s Gatekeepers,” in which he dubbed Deputy General Counsel Nicole Wong “The Decider.” He gave her that name in light of her role leading a relatively small team of reviewers and lawyers in deciding what content remained on the Google platform and what would be blocked under the Google Terms of Service.
Back then, Professor Rosen and Nicole Wong never could have imagined the role online companies would play today in reviewing, and potentially removing, online content to protect users from foreign interference in elections, to remove content supporting terrorism, to remove certain sexually-related content and to remove online hate.
Perhaps a concern that the platforms are too inclined to censor is tempered by the fact, noted by Professor Kate Klonick, that the major online platforms, Facebook, YouTube and Twitter, have rules that are protective of free speech, in large part because the companies are American and the people making the judgments are lawyers steeped in First Amendment principles of free expression. Still, in 2017 Professor Klonick authored a New York Times op-ed entitled “The Terrifying Power of Internet Censors.”
And Professor Jack Balkin has called online companies modern-day sovereigns. He complains about substantive problems of private speech regulation as well as procedural problems. He says:
“Currently, speech platforms do not govern in the same way that liberal democratic states do. Enforcement of community norms often lacks notice, due process and transparency. Platform operators may behave like absolutist monarchs, who claim to exercise power benevolently, but who make arbitrary exceptions and judgments in governing online speech.”
Professor Balkin goes on to observe that “procedural values may be as important if not more important than substantive values.” He also said “as online speech platforms govern, and increasingly resemble governments, it is hardly surprising that end-users expect them to abide by the basic obligations of those who govern populations in democratic societies – transparency, notice and fair procedures; reasoned explanations for decisions or changes of policy; the ability of end users to complain and demand reforms; and the ability of end users to participate, even in the most limited ways, in the governance of the institution.”
In the more than ten years since Jeff Rosen focused on the “deciders” at Google, we have come a long way toward understanding how decisions are made at the platforms. Facebook, for example, has published materials describing its decision-making, and other materials have been leaked. But we are nowhere near the transparency hoped for by Professor Balkin. Much more can be done to inform the public of how the process works at the platforms when it comes to deciding what is hate speech subject to removal.
Going forward, greater transparency would promote greater confidence in the process through which hate speech is defined and removal occurs (or is denied).
One way to understand how decisions are made on what constitutes hate speech at Facebook comes from a review of recent decisions by Facebook’s court-like Oversight Board. The board, which has been called Facebook’s version of a Supreme Court, announced last Thursday that it overturned Facebook’s decisions in four out of the five cases before it.
Two of the cases involved hate speech removals by Facebook. The panel upheld Facebook’s removal of a Russian-language post that demeaned Azerbaijani people. But it overturned a decision involving a post in Myanmar (where a military coup currently is taking place), conceding that the post could be considered offensive but concluding that, in its view, it did not reach the level of hate speech. Admittedly, the decision seemed to turn on some fine differences in translation, but since Myanmar is in the grips of an ongoing genocide against the Rohingya Muslim minority, in which inflammatory Facebook posts have played a role, it is hard to fathom the Board’s conclusions that “statements referring to Muslims as mentally unwell or psychologically unstable are not a strong part of this rhetoric,” and that “while the post might be considered pejorative or offensive towards Muslims, it did not advocate hatred or intentionally incite any form of imminent harm.” The Board seems to miss the point about the cumulative effect of offensive content in a genocide against minorities, as the Holocaust Museum exhibit described in our book demonstrated. So the Board’s interpretation of the Facebook Community Standards was not particularly helpful and did little to advance the elimination of hate.
I recognize how unsatisfying it is to read about a problem but not to be presented with proposed solutions. But my assignment in connection with the Berkeley talk from which this paper is adapted was to define the problem, leaving it to a future session to discuss remedies. So, stay tuned for a future discussion of ways to address online hate, including through public policy, education and technology.