Back in May, it seemed fairly obvious how all of this was going to go down. Following on the horrific mass murder carried out at a supermarket in Buffalo, we saw NY’s top politicians all agree that the real blame… should fall on the internet and Section 230. It had quickly become clear that NY’s own government officials had screwed up royally multiple times in the lead-up to the massacre. The suspect had made previous threats, which law enforcement mostly brushed off. And then, most egregiously, the 911 dispatcher who answered the call about the shooting hung up on the caller. And we won’t even get into a variety of other societal failings that resulted in all of this. No, the powers that be have decided to pin all the blame on the internet and Section 230.
To push this narrative, and to avoid taking any responsibility themselves, NY’s governor Kathy Hochul had NY Attorney General Letitia James kick off a highly questionable “investigation” into how much blame they could pin on social media. The results of that “investigation” are now in, and would you believe it? AG James is pretty sure that social media and Section 230 are to blame for the shooting! Considering the entire point of this silly exercise was to deflect blame and put it towards everyone’s favorite target, it’s little surprise that this is what the investigation concluded.
Hochul and James are taking victory laps over this. Here’s Hochul:
“For too long, hate and division have been spreading rampant on online platforms — and as we saw in my hometown of Buffalo, the consequences are devastating,” Governor Hochul said. “In the wake of the horrific white supremacist shooting this year, I issued a referral asking the Office of the Attorney General to study the role online platforms played in this massacre. This report offers a chilling account of factors that contributed to this incident and, importantly, a road map toward greater accountability.”
Hochul is not concerned about the failings of law enforcement officials, nor the failings of mental health efforts. Nor the failings of efforts to keep unwell people from accessing weapons for mass murder. Nope. It’s the internet that’s to blame.
James goes even further in her statement, flat out blaming freedom of speech for mass murder.
“The tragic shooting in Buffalo exposed the real dangers of unmoderated online platforms that have become breeding grounds for white supremacy,” said Attorney General James.
The full 49-page report is full of hyperbole, insisting that the use of forums by people doing bad things is somehow proof that the forums themselves caused the people to be bad. The report puts tremendous weight on the claims of the shooter himself, an obviously troubled individual, who insists that he was “radicalized” online. The report’s authors simply assume that this is accurate, and that it wasn’t just the shooter trying to push off responsibility for his own actions.
Incredibly, the report has an entire section that highlights how residents of Buffalo feel that social media should be held responsible. But, that belief that social media is to blame is… mostly driven by misleading information provided by the very same people creating this report in order to offload their own blame. Like, sure, if you keep telling people that social media is to blame, don’t be surprised when they parrot back your talking points. But that doesn’t mean those are meaningful or accurate.
There are many other oddities in the report. The shooter apparently set up a Discord server, with himself as the only member, where he wrote out a sort of “diary” of his plans and thinking. The report seems to blame Discord for this, even though this is no different than opening a local notepad and keeping notes there, or writing them down by hand on a literal notepad. I mean, what is this nonsense:
By restricting access to the Discord server only to himself until shortly before the attack, he ensured to near certainty that his ability to write would not be impeded by Discord’s content moderation.
Discord’s content moderation operates dually at the individual user and server level, and generally across the platform. The Buffalo shooter had no incentive to operate any server-level moderation tools to moderate his own writing. But the platform’s scalable moderation tools also did not stop him from continuing to plan his mass violence down to every last detail.
But without users or moderators apart from the shooter himself to view his writings, there could be no reports to the platform’s Trust and Safety Team. In practice, he mocked the Community Guidelines, writing in January 2022, “Looks like this server may be in violation of some Discord guidelines,” quoting the policy prohibiting the use of the platform for the organization, promotion, or support of violent extremism, and commenting with evident sarcasm, “uh oh.” He continued to write for more than three and a half more months in the Discord server, filling its virtual pages with specific strategies for carrying out his murderous actions.
He used it as a scratchpad. How do you blame Discord for that?!? If he’d done the same thing in a physical notebook, would AG James be blaming Moleskine for selling him a notebook? This just all seems wholly disconnected from reality.
The report also blames YouTube, because the shooter watched a video on how to comply with NY gun laws. As if that could be a basis for blame?
One of the videos actually demonstrates the use of an attachment to convert a rifle to use only a fixed magazine in order to comply with New York and other states’ assault weapons bans. The presenter just happens to mention that the product box itself notes that the device can be removed with a drill.
The more you read in the report, the more it becomes obvious just how flimsy James’/Hochul’s argument is that social media is to blame. Here’s the report admitting that he didn’t do anything obviously bad on Reddit:
Like the available Discord comments, the content of most of these Reddit posts is largely exchanging information about the pros and cons of certain brands and types of body armor and ammunition. They generally lack context from which it could have been apparent to a reader that the writer was planning a murderous rampage. One comment, posted about a year ago, is chilling in retrospect; he asks with respect to dark-colored tactical gear, “in low light situations such as before dusk after dawn and at nighttime it would provide good camouflage, also maybe it would be also good for blending in in a city?” It is difficult to say, however, that this comment should have been flagged at the time it was made
The report also notes how all these social media sites sprang into action after the shooting (something enabled in part by Section 230), and then acts as if this is a reason to reform 230. Indeed, while the report complains that investigators were still able to find a few images and video clips from the attack, the numbers were tiny and clearly suggest that barely any slipped through. But this report (again, prepared by a NY state gov’t whose law enforcement checked on the shooter and did nothing about it) suggests that not being perfect in their moderation is a cause for alarm:
For the period May 20, 2022 to June 20, 2022, OAG investigators searched a number of mainstream social networks and related sites for the manifesto and video of the shooting. Despite the efforts these platforms made at moderating this content, we repeatedly found copies of the video and manifesto, and links to both, on some of the platforms even weeks after the shooting. The OAG’s findings most likely represent a mere fraction of the graphic content actually posted, or attempted to be posted, to these platforms. For example, during the course of nine weeks immediately following the attacks, Meta automatically detected and removed approximately 1 million pieces of content related to the Buffalo shooting across its Facebook and Instagram platforms. Similarly, Twitter took action on approximately 5,500 Tweets in the two weeks following the attacks that included still images or videos of the Buffalo shooting, links to still images and videos, or the shooter’s manifesto. Of those, Twitter took action on more than 4,600 Tweets within the first 48 hours of the attack
When we found graphic content as part of these efforts, we reported it through user reporting tools as a violation of the platform’s policy. Among large, mainstream platforms, we found the most content containing video of the shooting, or links to video of the shooting, on Reddit (17 instances), followed by Instagram (7 instances) and Twitter (2 instances) during our review period. We also found links to the manifesto on Reddit (19 instances), the video sharing site Rumble (14 instances), Facebook (5 instances), YouTube (3 instances), TikTok (1 instance), and Twitter (1 instance). Response time varied from a maximum of eight days for Reddit to take down violative content to a minimum of one day for Facebook and YouTube to do so.
We did not find any of this content on the other popular online platforms we examined for such content, which included Pinterest, Quora, Twitch, Discord, Snapchat, and Telegram, during our review period. That is not to say, however, that it does not exist on those platforms.
In other words, sites like Twitter and Facebook took down thousands to millions of reposts of this content, while only a few dozen instances across all the mainstream platforms slipped through the content moderation systems… and NY’s top politicians think this is a cause for concern?
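To put the report’s own numbers in perspective, here is a rough back-of-the-envelope calculation (a sketch, not anything from the report itself) comparing what the OAG investigators found still online against what the platforms say they removed. The per-platform counts come from the report excerpts above; combining Meta’s and Twitter’s removal figures across different time windows is admittedly crude, so treat the result as an order-of-magnitude estimate only.

```python
# Figures quoted in the OAG report (see excerpts above).
removed_by_meta = 1_000_000      # pieces auto-removed across FB/IG, ~9 weeks
actioned_by_twitter = 5_500      # tweets actioned in the first two weeks

# Instances the OAG investigators found still up during their review:
video_instances = 17 + 7 + 2         # Reddit, Instagram, Twitter
manifesto_links = 19 + 14 + 5 + 3 + 1 + 1  # Reddit, Rumble, FB, YT, TikTok, Twitter
found_by_oag = video_instances + manifesto_links

total_removed = removed_by_meta + actioned_by_twitter
slip_rate = found_by_oag / (total_removed + found_by_oag)

print(f"{found_by_oag} instances found vs. {total_removed:,} removed")
print(f"Roughly {slip_rate:.4%} of known content slipped through")
```

Even with generous rounding, the content that slipped through amounts to well under a hundredth of a percent of what the platforms caught.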
I mean, honestly, it is difficult to read this report and think that social media is a problem. What the report actually shows is that social media was, at best, tangential to all of this, and when the shooter and his supporters tried to share and repost content associated with the attack, the sites were pretty good (if not absolutely perfect) about getting most of it off the platform. So it’s absolutely bizarre to read all of that and then jump to the “recommendations” section, where they act as if the report showed that social media is the main cause of the shooting, and just isn’t taking responsibility.
It’s almost as if the “recommendations” section was written prior to the actual investigation.
The report summary from Hochul leaves out how flimsy the actual report is, and insists it proves four things the report absolutely does not prove:
- Fringe platforms fuel radicalization: this is entirely based on the claims of the shooter himself, who has every reason to blame others for his actions. The report provides no other support for this.
- Livestreaming has become a tool for mass shooters: again, the “evidence” here is that this guy did it… and so did the Christchurch shooter in 2019. Of course (tragically, and unfortunately) there have been a bunch of mass shootings since then, and the vast, vast majority of them did not involve livestreaming. To argue that there’s any evidence that livestreaming is somehow connected to mass shootings is beyond flimsy.
- Mainstream platforms’ moderation policies are inconsistent and opaque. Again, the actual report suggests otherwise. It shows (as we highlighted above) that the mainstream platforms are pretty aggressive in taking down content associated with a mass shooting, and relatively quick at doing so.
- Online platforms lack accountability. What does accountability even mean here? This prong is used to attack Section 230, ignoring that it’s Section 230 that enabled these companies to build up tools and processes in their trust & safety departments to react to tragedies like this one.
The actual recommendations bounce back and forth between “obviously unconstitutional restrictions on speech” and “confused and nonsensical” (some are both). Let’s go through each of them:
- Create Liability for the Creation and Distribution of Videos of Homicides: This is almost certainly problematic under the 1st Amendment. You may recall that law enforcement types have been calling for this sort of thing for ages, going back over a decade. Hell, we have a story from 2008 with NY officials calling for this very same thing. It’s all nonsense. Videos of homicides are… actual evidence. Criminalizing the creation and distribution of evidence of a crime seems like a weird thing for law enforcement to be advocating for. It’s almost as if they don’t want to take responsibility. Relatedly, this would also criminalize taking videos of police shooting people. Which, you know, probably is not such a good idea.
- Add Restrictions to Livestreaming: I remind you that the report mentions exactly two cases of livestreamed mass murders: this one in Buffalo and the one in 2019 in Christchurch, New Zealand. That is not exactly proof that livestreaming is deeply connected with mass murder. The suggestion is completely infeasible, demanding “tape delays” on livestreaming, so that… it is no longer livestreaming. They also demand ways to “identify first-person violence before it can be widely disseminated.” And I’d like a pony too.
- Reform Section 230: Again, the actual report shows how the various platforms did a ton to get rid of content glorifying the shooter. Yes, a few tiny things slipped through… just as the shooter slipped through New York police review when he was previously reported for threatening violence. But, Hochul and James are sure that 230 is a problem. They demand that “an online platform has the initial burden of establishing that its policies and practices were reasonably designed.” This is effectively a repeal of 230 (as I’ll explain below).
- Increase Transparency and Strengthen Moderation: As we’ve discussed at length, many of these transparency mandates are actually censorship demands in disguise. Also, reforming Section 230 as they want would not strengthen moderation, it would weaken it by making it that much more difficult to actually adapt to bad actors on the site. The same is likely true of most transparency mandates, which make it more difficult to adapt to changing threats, because the transparency requirements slow everyone down.
I want to call out, again, why the “reasonably designed” bit of the “reform 230” issue is so problematic. Again, this requires people to actually understand how Section 230 works. Section 230’s main benefit is procedural: getting frivolous, vexatious cases tossed out early. If you condition 230 protections on proving “reasonableness,” you take away that entire benefit. Because now, every time there’s a lawsuit, you first have to go through the expensive and time-consuming process of proving your policies are reasonable. And, thus, you lose all of the procedural benefits of 230 and are left fighting nuisance lawsuits constantly. The idea makes no sense at all.
Worse, it again greatly limits the ability of sites to adapt and improve their moderation efforts, because now every single change that they make needs to go through a careful legal review before it will get approved, and then every single change will open them up to a new legal challenge that these new policies are somehow “unreasonable.” The entire “reasonableness” scheme incentivizes companies to not fix moderation and to not adapt and strengthen moderation, because any change to your policies creates the risk of liability, and the need to fight long and expensive lawsuits.
So, to sum all this up: we have real evidence that NY state failed in major ways with regards to the Buffalo shooter. Instead of owning that, NY leadership decided to blame social media, initiating this “investigation.” The actual details of the investigation show that social media had very, very little to do with this shooting at all, and where it was used, it was used in very limited ways. It also shows that social media sites were actually extremely fast and on the ball in removing content regarding the shooting; while a very, very tiny bit of content may have slipped through, the filtering process was hugely successful.
And yet… the report still blames social media, insists a bunch of false things are true, and then makes a bunch of questionable (unconstitutional) recommendations, along with recommendations to effectively take away all of Section 230’s benefits… which would actually make it that much more difficult for websites to respond to future events and future malicious actors.
It’s all garbage. But, of course, it’s just politicians grandstanding and deflecting from their own failings. Social media and Section 230 are a convenient scapegoat, so that’s what we get.