Think about this scenario: A mother of college-age kids decides to pursue her passion for interior design and her love of the European modernist aesthetic, and simply starts posting her ideas on her own Instagram and Twitter accounts. As her following grows, she begins to promote videos of her design tips on her own YouTube channel. As a result of her efforts, she develops a substantial YouTube subscriber base (millions of subscribers) and a similar following on Twitter (not to mention Facebook and Instagram). By all accounts, she is the epitome of 21st-century online success — she not only obtains significant ad revenue from her YouTube presence and now-thriving design business, but has built an incredible reputation as an online influencer. So what does Section 230 immunity have to do with this scenario? More than you may think (or want to imagine).
How? With such success, she suddenly starts dealing with the unthinkable: some “fans” on her Facebook page object to her use of certain Native American fabrics in her European designs as “improper cultural appropriation.” She counters that she is celebrating these patterns by juxtaposing them against more modern lines and designs, to no avail. Facebook (and Instagram) promptly take down a large number of her posts. The Twitterverse jumps on the bandwagon, calls for her removal, and her account is suspended. YouTube soon follows suit. She is now watching not only her entire social media presence crater, but her business suffer to the brink of collapse. What’s worse, she has zero recourse against any of the platforms. Why? Not just because of terms of service that heavily favor the platforms but, more importantly, because of — you guessed it — Section 230 immunity.
As you probably know, there has been a lot of attention this year on Section 230 of the Communications Decency Act, most of it driven by disputes over political speech on social media platforms and some fairly strong feelings on both sides of the issue. I have written on this subject most recently here, and I find this issue is not an easy one to address. On one hand, many advocates contend that Section 230 is essential to free speech on the internet and that such immunity cannot be curtailed. On the other hand, a significant number of voices (many in Washington, D.C.) insist that Section 230 immunity should not only be severely limited but should cease altogether. It shouldn’t surprise you that I think neither of these approaches works, but that is because the issue is more nuanced than politicians would have you believe and more important than a political talking point.
I won’t recount the basic structure of Section 230 immunity (I have already written on that here), but suffice it to say that there are two main parts to the protections afforded to interactive service providers. First, Section 230 shields online service providers from civil liability for defamatory, tortious, and even illegal content that their users post onto the platform (such as third-party comments posted in response to an article on a social media platform). I believe that the vast majority of people would agree with this proposition to a point — to the extent the interactive service provider does not know the activity is defamatory or illegal, most people would reasonably agree it should not be held liable for it. Despite this point, current Section 230 jurisprudence goes much further. Second, and the bigger issue, is what such providers do (or don’t do) with respect to restricting access to such content on their platforms, and whether they are doing so in good faith. Here are three reasons that the status quo on Section 230 immunity is no longer acceptable:
The Internet Is Not a Baby Anymore. In the early days of the internet, it made sense to create a statutory protection for online service providers that chose to moderate content. Some providers (like CompuServe) operated like an online newsstand (i.e., a distributor) and did not moderate content, while others (like Prodigy) operated more like a newspaper editor (i.e., a publisher) and chose to do so. Both got sued … but only Prodigy was held liable. Section 230 came about in large part in an effort to address this disparity. That made sense for the mid-1990s, but what about 25 years later? Given the sheer reach of many of these platforms and the massive amount of content and news disseminated on them now, at the very least the issue needs to be revisited.
Broad-Based Immunity Has Had Its Day. Shortly after Section 230 was enacted, a court interpreted the statute very broadly. In Zeran v. America Online, Incorporated, the Fourth Circuit Court of Appeals affirmed the trial court’s dismissal of the case based upon Section 230. The suit stemmed from offensive jokes about the 1995 Oklahoma City bombing posted online using the plaintiff’s first name and home telephone number; the plaintiff sued AOL seeking damages for harm to his business. Rather than treat AOL as a distributor under the First Amendment (holding it liable if it knew or had reason to know of the illegal content), the presiding judge on the panel (J. Harvie Wilkinson) cited Congress’ desire to protect free speech in reading Section 230 broadly. He found distributors of online content to be a “subset” of publishers and therefore deserving of very broad protection against liability. No question, Judge Wilkinson’s broad interpretation shaped subsequent Section 230 case law — the question is, should this interpretation stand in 2020 and beyond? Examples like the one in the introductory paragraphs suggest otherwise.
Good Faith Matters. A cursory review of articles online will uncover numerous instances of online service providers restricting content under the guise of “fact checking” and “combating fraud and misinformation” (to name a few). In an election year, it seems as if accounts are being suspended and content is being removed at a feverish pace. From my perspective, moderating content is not per se problematic — the problem is where such providers should draw the line. Perhaps the answer lies within Section 230 itself: “action voluntarily taken in good faith to restrict access to or availability” of certain content. Under the present broad interpretation of Section 230, such good faith has little “punch”; however, rethinking the breadth of Section 230 immunity can bring this critical language back to the fore.
Based on the legislative history, Congress expressly sought to encourage online platforms to offer a “forum for a true diversity of political discourse.” It is deeply ironic that, in this day and age, Section 230 jurisprudence has induced such providers not only to lose sight of Congress’ express intent here, but ostensibly to abandon it. From my perspective, the argument is not about the neutrality of online service providers, but about their methodology and the consistency of its application. It’s high time that we rethink Section 230 consistent with this original intent, and we should accept nothing less.
Tom Kulik is an Intellectual Property & Information Technology Partner at the Dallas-based law firm of Scheef & Stone, LLP. In private practice for over 20 years, Tom is a sought-after technology lawyer who uses his industry experience as a former computer systems engineer to creatively counsel and help his clients navigate the complexities of law and technology in their business. News outlets reach out to Tom for his insight, and he has been quoted by national media organizations. Get in touch with Tom on Twitter (@LegalIntangibls) or Facebook (www.facebook.com/technologylawyer), or contact him directly at [email protected]