Considerations for governing open foundation models
Citation: Rishi Bommasani, Sayash Kapoor, Kevin Klyman, Shayne Longpre, Ashwin Ramaswami, Daniel Zhang, Marietje Schaake, Daniel E. Ho, Arvind Narayanan, Percy Liang (2024/10/11) Considerations for governing open foundation models.
DOI (original publisher): 10.1126/science.adp1848
Semantic Scholar (metadata): 10.1126/science.adp1848
Sci-Hub (fulltext): 10.1126/science.adp1848
Internet Archive Scholar (search for fulltext): Considerations for governing open foundation models
Wikidata (metadata): Q135644890
Download: https://www.science.org/doi/full/10.1126/science.adp1848
Summary
Argues that policy for foundation models should separate the benefits of openness (competition, innovation, dispersion of power, accountability) from claims about its risks, and should evaluate those risks marginally, i.e., relative to closed models and to preexisting technologies. Applying this “marginal risk” lens, the authors find that evidence of higher risk from openness is limited overall (with documented exceptions such as CSAM/NCII produced with open image models) and that some popular proposals (e.g., compute-based licensing thresholds) are mismatched to those harms. They also show how several regulatory ideas (downstream content liability for developers, mandatory content provenance and watermarking requirements, and data-disclosure duties) can disproportionately burden open-model developers even when not aimed at them, and they urge policymakers to consult the open-model community directly. A gradient of release modes (fully closed → hosted/API access → downloadable weights → fully open releases) is used to situate these choices.
Theoretical and Practical Relevance
Provides a policy framing and checklist: (1) assess marginal risk (open vs. closed vs. status-quo technology) rather than treating openness as intrinsically riskier; (2) avoid interventions that incidentally penalize openness, e.g., developer liability for user conduct or provenance mandates that presuppose centralized control; (3) when disclosure is needed, design it so it does not create perverse incentives against releasing data or documentation; and (4) engage open-model developers when drafting rules. For implementers, the gradient of release modes combined with this lens supports evidence-based release decisions and targeted safeguards, rather than collapsing the choice into a binary “open vs. closed.”