The Internet Will Not Break: Denying Bad Samaritans Section 230 Immunity

From AcaWiki

Citation: Danielle Keats Citron, Benjamin Wittes (2017) The Internet Will Not Break: Denying Bad Samaritans Section 230 Immunity. Fordham Law Review (RSS)
Download: https://ssrn.com/abstract=3007720
Tagged: CDA230 (RSS), intermediary liability (RSS)

Summary

The CDA was part of a broad campaign to restrict sexually explicit material online, parts of which were struck down on First Amendment grounds. Section 230 was devised as a safe harbor for online providers engaged in self-regulation, particularly in response to Stratton Oakmont v. Prodigy, in which Prodigy was held liable as a publisher because it had tried to remove objectionable material but did so incompletely.

230(c)(1) addresses under-screening: "no provider or user of interactive computer services shall be held liable as the publisher or speaker of any information provided by an information content provider."

230(c)(2) addresses over-screening: "no provider or user of an interactive computer service shall be held liable on any action voluntarily taken in good faith to restrict access to material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected."

Since 1996:

  • The judiciary has interpreted section 230 with near unanimity as conferring very broad immunity, not limited to speech or to Good Samaritan moderation. In the tech industry, section 230 "is a kind of sacred cow"
  • The digital marketplace is far different: from 12 million subscribers to "commercial service providers" to billions of users; service providers are now huge companies, many offering services that have little to do with speech and that could not escape liability for failing to meet reasonable duties of care if they operated offline; and large-scale abuse has emerged.

The authors suggest courts could shift interpretation so that the safe harbor applies only to claims arising from publication of user-generated content, and could limit the safe harbor to Good Samaritans, in particular not extending it to Bad Samaritans who design their services to promote abusive or illegal behavior.

Authors additionally propose legislative changes:

"sites that encourage destructive online abuse or which are principally used for that purpose should not enjoy immunity from liability. Mirroring section 230’s current exemption of federal law and intellectual property, the amendment could state, “Nothing in section 230 shall be construed to limit or expand the application of civil or criminal liability for any website or other content host that purposefully encourages cyber stalking, nonconsensual pornography, sex trafficking, child sexual exploitation, or that principally hosts such material.”"

"A broader though still balanced approach would be to clarify the reach of section 230(c)(1), which could be revised as follows: “No provider or user of an interactive computer service that takes reasonable steps to prevent unlawful uses of its services shall be treated as the publisher or speaker of any information provided by another information content provider in any action arising out of the publication of content provided by that information content provider.”"

Such recalibration would bring incentives in line with the role of online services in the contemporary world; it would not completely remove immunity (newspapers, hotels, and many other non-digital businesses survive without such immunity; the authors point out that the sky will not fall); and it might encourage more reasonable, less reactionary regulation outside the US (e.g., the EU's call to remove hate speech within 24 hours).

Finally, the authors note three stages in the liability treatment of new technologies:

  1. courts recognize the new harm, but not the benefit, and adapt existing theories of liability to reach the new harm
  2. courts and legislatures recognize the technology's benefits and view prior liability awards as a threat to progress, so they grant sweeping protection to firms in the new industry
  3. both benefits and harms are considered analytically, and new liability rules separate the indispensable parts of the new industry from the parts that cause unnecessary harms

Authors argue their proposal seeks to move internet law to third stage: "establish a reasonable standard of care that will reduce opportunities for abuses without interfering with the further development of a vibrant internet or unintentionally turning innocent platforms into involuntary insurers for those injured through their sites."