The Gradient of Generative AI Release: Methods and Considerations
Citation: Irene Solaiman (2023/06/23). The Gradient of Generative AI Release: Methods and Considerations. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency (FAccT '23).
DOI (original publisher): 10.1145/3593013.3593981
Semantic Scholar (metadata): 10.1145/3593013.3593981
Sci-Hub (fulltext): 10.1145/3593013.3593981
Internet Archive Scholar (search for fulltext): The Gradient of Generative AI Release: Methods and Considerations
Wikidata (metadata): Q135644345
Download: https://dl.acm.org/doi/abs/10.1145/3593013.3593981
Summary
Introduces a six-level gradient of generative AI release options (fully closed → gradual/staged → hosted → cloud/API → downloadable → fully open) and uses it to map trade-offs among auditability, community research, and risk control. The paper also charts release trends from 2018 to 2022, finding greater closedness among the most capable systems from large labs and relatively more openness from research collectives oriented toward open science, and it enumerates pre- and post-release guardrails (e.g., documentation, model/data cards, structured access, rate limits, output filtering, red-teaming) that complement any chosen release point.
Theoretical and Practical Relevance
Provides a shared vocabulary for release design that separates how available a system is from how it is safeguarded, enabling developers, platforms, and regulators to choose a point on the gradient while layering on appropriate controls and transparency artifacts. Practically, the gradient supports evidence-based comparison of releases across modalities and over time, and it offers a structured way to justify or contest release decisions without collapsing the debate into “open vs. closed.”