
Final thoughts on Section 230: The good, the bad, and the unapologetically ugly

Unless you’ve been hiding under a rock lately, you know the CEOs of Twitter, Facebook, and Google were grilled on Capitol Hill last week over their “fact-checking” of content and their purported fight against disinformation on their platforms. At best, it was an opportunity for the American public to hear, so to speak, straight from the horse’s mouth about these platforms’ actions. At worst, it was a tone-deaf display of arrogance at the highest level, one that showed a startling indifference to the inconsistency with which these platforms apply their own policies and terms of use. In any event, their testimony did not dispel any concerns about Big Tech’s actions. The question is no longer whether Section 230 should be revised, but how and by how much. Unfortunately, the answer isn’t straightforward, but the approach to the answer may be easier than you think.

I first addressed why social media platforms need Section 230 here, then reasons why we need to rethink and reevaluate that protection here, followed by arguments in favor of fixing Section 230 for the 21st century. Unfortunately, the testimony before the Senate last week showed that Twitter, Facebook, and Google either fail to understand the depth of concern about their actions, or simply don’t care. As a result, it seemed appropriate to offer some final thoughts on Section 230 from a different angle – that of its good points, its bad points, and the absolutely ugly points that result from the law not keeping pace with the platforms.

The good. Regardless of the criticism, Section 230 has real value. Although the largest social media platforms could probably weather the loss of immunity for content moderation, there are a variety of other platforms competing for market share (like Parler and Rumble) that absolutely benefit from such immunity (and need it). Section 230 not only shields online service providers from civil liability for defamatory, illicit, and even illegal content that users post on a platform (i.e., comments on posts), but also provides immunity from civil liability for moderating or restricting content posted on the platform (e.g., removing obscene content or content threatening violence against a person or group). This is a reasonable approach to an ongoing problem. For this reason, calls for Section 230 to be repealed entirely, without an adequate replacement, are simply wrongheaded. Complete repeal would drive not only social media platforms but a large number of websites to remove information wholesale in order to avoid liability, resulting in even more censored content. Online service providers (including social media platforms) offer valuable channels for the exchange of information, and they should enjoy some protection from liability to ensure the free exchange of ideas and speech.

The bad. The biggest social media platforms, at least, don’t practice what they preach. Notwithstanding claims that their “purpose is to serve the public conversation” (Twitter) or community standards that claim to “create a place for expression and give people a voice” (Facebook), these platforms have continued a march toward expanded content removal, reaching a fever pitch this election season and culminating in actions under the guise of “fact-checking” to correct “misinformation” (and that’s putting it mildly). There are a number of reasons why this has happened, but one of the biggest is the disastrously expansive case law under Section 230. Since Zeran v. America Online, Inc. in 1997 (where the Fourth Circuit Court of Appeals upheld the district court’s dismissal on Section 230 immunity grounds), courts have treated distributors of online content as a “subset” of publishers entitled to far broader liability protection. This interpretation made sense in 1997, but 23 years later the reasoning no longer holds. The Internet has matured, and its reach has increased dramatically thanks to the proliferation of mobile devices and the expansion of cellular and wireless networks. This development helped create social media platforms whose reach has grown exponentially and changed the way news and other information is disseminated online. The result, however, has been a slow and steady drift by these platforms away from their own guidelines supporting public discourse and toward donning the cloak of “information gatekeeper.”

The unapologetically ugly. In the already highly polarized political atmosphere of this presidential election cycle, some of the largest social media platforms have made it their business to censor content without, apparently, thinking through the consequences of their actions. For example, Twitter blocked the New York Post’s account over tweets linking to the newspaper’s exposé on the dealings of Hunter Biden (son of presidential candidate Joe Biden), drawn from a laptop he reportedly abandoned at a computer repair shop. The reason for the removal? An alleged violation of Twitter’s hacked materials policy, even though other sources and the reporting itself indicated the information was not hacked. In fact, within 24 hours of the removal, Twitter changed the policy, claiming that “feedback” had exposed concerns about inappropriate censorship of journalists and whistleblowers. For similar reasons, Facebook limited the reach of the same NY Post article on its platform. These actions resulted in the CEOs of Facebook and Twitter (as well as Google) being called to testify before the Senate to explain actions that appeared, at first glance, politically motivated. Twitter’s CEO Jack Dorsey attempted to justify his platform’s actions, even claiming that the content in question could now be shared, although it could not be and remained blocked for a time during Dorsey’s testimony. While measured, their statements failed to address the inconsistencies in enforcing their own policies or otherwise allay concerns about perceived political bias. You can watch it for yourself here (ironically, on YouTube), but let’s just say that these CEOs’ testimony didn’t exactly help the case for their platforms given what is at stake.

I want to make this point completely clear: social media platforms, as private commercial endeavors, can determine what content they want to allow on their platforms. That is their right. What they need to understand, however, is that Section 230 is a privilege that they must respect. Through their own inconsistent actions, they have abused that privilege, due in part to the fact that the case law under Section 230 has not kept pace with the times, and revision is long overdue. More than anything, my greatest concern about their actions is the deviation from their stated purpose and the blatant inconsistency in applying their own policies and procedures, which is a disservice to users and the internet community as a whole. Consistent, good-faith enforcement supports the intent of Section 230 while removing any perception of bias. They would also avoid the pitfalls of addressing “misinformation” by ceasing to be gatekeepers and returning to being referees. Simply put: apply the stated policies, let the conversation happen, and stop being a part of it. Unfortunately, their actions have consequences – in this case, whether they like it or not, the loss of Section 230 protection in one form or another.

Tom Kulik is an intellectual property and information technology partner at the Dallas-based law firm Scheef & Stone, LLP. In private practice for over 20 years, Tom is a sought-after technology lawyer who uses his industry experience as a former computer systems engineer to creatively counsel his clients and help them navigate the complexities of law and technology in their business. News outlets reach out to Tom for insight, and he has been quoted by national media organizations. Contact Tom on Twitter (@LegalIntangibls) or Facebook (www.facebook.com/technologylawyer), or contact him directly at [email protected]
