<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://acawiki.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Mike+Linksvayer</id>
	<title>AcaWiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://acawiki.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Mike+Linksvayer"/>
	<link rel="alternate" type="text/html" href="https://acawiki.org/Special:Contributions/Mike_Linksvayer"/>
	<updated>2026-04-16T18:49:55Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.31.12</generator>
	<entry>
		<id>https://acawiki.org/index.php?title=A_Formal_Model_of_the_Economic_Impacts_of_AI_Openness_Regulation&amp;diff=14086</id>
		<title>A Formal Model of the Economic Impacts of AI Openness Regulation</title>
		<link rel="alternate" type="text/html" href="https://acawiki.org/index.php?title=A_Formal_Model_of_the_Economic_Impacts_of_AI_Openness_Regulation&amp;diff=14086"/>
		<updated>2025-08-29T22:44:40Z</updated>

		<summary type="html">&lt;p&gt;Mike Linksvayer: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Summary&lt;br /&gt;
|title=A Formal Model of the Economic Impacts of AI Openness Regulation&lt;br /&gt;
|authors=Tori Qiu, Benjamin Laufer, Jon Kleinberg, Hoda Heidari&lt;br /&gt;
|url=https://arxiv.org/abs/2507.14193&lt;br /&gt;
|summary=The authors build a formal economic model to analyze how AI openness regulations shape developer behavior and downstream innovation. In their framework, a developer (the generalist) chooses how open to make a model, while a regulator sets (1) an openness threshold, the minimum level required for a model to count as “open source,” and (2) a penalty, a cost imposed if the model falls short of that threshold. The analysis shows how these levers interact with the model’s baseline strength to determine equilibrium outcomes for openness and fine-tuning.&lt;br /&gt;
|relevance=The paper’s contribution is analytic: it formalizes the idea that regulatory definitions of “open” and the costs of non-compliance matter differently across contexts. This clarifies how such levers could be studied, but it does not yet provide policy lessons, since the relative scaling of costs and benefits of openness is assumed rather than validated. For policy use, the framework would need empirical grounding to determine whether thresholds, penalties, or other mechanisms truly influence developer behavior at different capability levels.&lt;br /&gt;
|wikidata=Q136001528&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Mike Linksvayer</name></author>
		
	</entry>
	<entry>
		<id>https://acawiki.org/index.php?title=A_Formal_Model_of_the_Economic_Impacts_of_AI_Openness_Regulation&amp;diff=14085</id>
		<title>A Formal Model of the Economic Impacts of AI Openness Regulation</title>
		<link rel="alternate" type="text/html" href="https://acawiki.org/index.php?title=A_Formal_Model_of_the_Economic_Impacts_of_AI_Openness_Regulation&amp;diff=14085"/>
		<updated>2025-08-29T22:44:15Z</updated>

		<summary type="html">&lt;p&gt;Mike Linksvayer: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Summary&lt;br /&gt;
|title=A Formal Model of the Economic Impacts of AI Openness Regulation&lt;br /&gt;
|authors=Tori Qiu, Benjamin Laufer, Jon Kleinberg, Hoda Heidari&lt;br /&gt;
|url=https://arxiv.org/abs/2507.14193&lt;br /&gt;
|summary=The authors build a formal economic model to analyze how AI openness regulations shape developer behavior and downstream innovation. In their framework, a developer (the generalist) chooses how open to make a model, while a regulator sets (1) an openness threshold, the minimum level required for a model to count as “open source,” and (2) a penalty, a cost imposed if the model falls short of that threshold. The analysis shows how these levers interact with the model’s baseline strength to determine equilibrium outcomes for openness and fine-tuning.&lt;br /&gt;
|relevance=Qiu et al. develop a game-theoretic framework to analyze how regulations defining “open source” in AI might affect developer choices and downstream innovation. In their setup, regulators can adjust two levers: a threshold that specifies how open a model must be to qualify for benefits, and a penalty imposed if the threshold is not met. The results show that the effectiveness of these levers depends on model capability, with higher baseline performance making thresholds more salient and lower performance making penalties more salient. These findings are illustrative of how regulation could shape openness, but they rest on stylized assumptions about costs and benefits rather than empirical evidence.&lt;br /&gt;
|wikidata=Q136001528&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Mike Linksvayer</name></author>
		
	</entry>
	<entry>
		<id>https://acawiki.org/index.php?title=A_Formal_Model_of_the_Economic_Impacts_of_AI_Openness_Regulation&amp;diff=14084</id>
		<title>A Formal Model of the Economic Impacts of AI Openness Regulation</title>
		<link rel="alternate" type="text/html" href="https://acawiki.org/index.php?title=A_Formal_Model_of_the_Economic_Impacts_of_AI_Openness_Regulation&amp;diff=14084"/>
		<updated>2025-08-29T20:40:20Z</updated>

		<summary type="html">&lt;p&gt;Mike Linksvayer: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Summary&lt;br /&gt;
|title=A Formal Model of the Economic Impacts of AI Openness Regulation&lt;br /&gt;
|authors=Tori Qiu, Benjamin Laufer, Jon Kleinberg, Hoda Heidari&lt;br /&gt;
|url=https://arxiv.org/abs/2507.14193&lt;br /&gt;
|summary=The authors build a formal economic model to analyze how AI openness regulations shape developer behavior and downstream innovation. In their framework, a developer (the generalist) chooses how open to make a model, while a regulator sets (1) an openness threshold, the minimum level required for a model to count as “open source,” and (2) a penalty, a cost imposed if the model falls short of that threshold. The analysis shows how these levers interact with the model’s baseline strength to determine equilibrium outcomes for openness and fine-tuning.&lt;br /&gt;
|relevance=The paper’s contribution is to replace vague discussions of “open” with a rigorous account of how regulatory definitions and penalties change incentives. For theory, it provides a tractable game-theoretic model of openness that clarifies when regulation can actually shift behavior. For practice, it warns policymakers that the same lever may work differently depending on context: thresholds matter more when models are strong, penalties matter more when they are weak. This helps ground debates on openness regulation in formal analysis rather than intuition.&lt;br /&gt;
|wikidata=Q136001528&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Mike Linksvayer</name></author>
		
	</entry>
	<entry>
		<id>https://acawiki.org/index.php?title=A_Formal_Model_of_the_Economic_Impacts_of_AI_Openness_Regulation&amp;diff=14083</id>
		<title>A Formal Model of the Economic Impacts of AI Openness Regulation</title>
		<link rel="alternate" type="text/html" href="https://acawiki.org/index.php?title=A_Formal_Model_of_the_Economic_Impacts_of_AI_Openness_Regulation&amp;diff=14083"/>
		<updated>2025-08-29T20:39:44Z</updated>

		<summary type="html">&lt;p&gt;Mike Linksvayer: Created page with &amp;quot;{{Summary |title=A Formal Model of the Economic Impacts of AI Openness Regulation |authors=Tori Qiu, Benjamin Laufer, Jon Kleinberg, Hoda Heidari |url=https://arxiv.org/abs/25...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Summary&lt;br /&gt;
|title=A Formal Model of the Economic Impacts of AI Openness Regulation&lt;br /&gt;
|authors=Tori Qiu, Benjamin Laufer, Jon Kleinberg, Hoda Heidari&lt;br /&gt;
|url=https://arxiv.org/abs/2507.14193&lt;br /&gt;
|summary=The authors build a formal economic model to analyze how AI openness regulations shape developer behavior and downstream innovation. In their framework, a developer (the generalist) chooses how open to make a model, while a regulator sets (1) an openness threshold, the minimum level required for a model to count as “open source,” and (2) a penalty, a cost imposed if the model falls short of that threshold. The analysis shows how these levers interact with the model’s baseline strength to determine equilibrium outcomes for openness and fine-tuning.&lt;br /&gt;
|relevance=The paper’s contribution is to replace vague discussions of “open” with a rigorous account of how regulatory definitions and penalties change incentives. For theory, it provides a tractable game-theoretic model of openness that clarifies when regulation can actually shift behavior. For practice, it warns policymakers that the same lever may work differently depending on context: thresholds matter more when models are strong, penalties matter more when they are weak. This helps ground debates on openness regulation in formal analysis rather than intuition.&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Mike Linksvayer</name></author>
		
	</entry>
	<entry>
		<id>https://acawiki.org/index.php?title=Institutional_Books_1.0:_A_242B_token_dataset_from_Harvard_Library%27s_collections,_refined_for_accuracy_and_usability&amp;diff=14082</id>
		<title>Institutional Books 1.0: A 242B token dataset from Harvard Library's collections, refined for accuracy and usability</title>
		<link rel="alternate" type="text/html" href="https://acawiki.org/index.php?title=Institutional_Books_1.0:_A_242B_token_dataset_from_Harvard_Library%27s_collections,_refined_for_accuracy_and_usability&amp;diff=14082"/>
		<updated>2025-08-29T06:10:11Z</updated>

		<summary type="html">&lt;p&gt;Mike Linksvayer: Created page with &amp;quot;{{Summary |title=Institutional Books 1.0: A 242B token dataset from Harvard Library's collections, refined for accuracy and usability |authors=Matteo Cargnelutti, Catherine Br...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Summary&lt;br /&gt;
|title=Institutional Books 1.0: A 242B token dataset from Harvard Library's collections, refined for accuracy and usability&lt;br /&gt;
|authors=Matteo Cargnelutti, Catherine Brobston, John Hess, Jack Cushman, Kristi Mukk, Aristana Scourtas, Kyle Courtney, Greg Leppert, Amanda Watson, Martha Whitehead, Jonathan Zittrain&lt;br /&gt;
|url=https://arxiv.org/abs/2506.08300&lt;br /&gt;
|summary=Introduces Institutional Books 1.0, a large, provenance-tracked corpus of public-domain books from Harvard Library (originally digitized via Google Books): 983,004 volumes across 250+ languages totaling ~242B tokens selected from ~1.08M scans. The release includes OCR text (original + post-processed) and rich metadata (bibliographic IDs, language detection, topic labels mapped to LoC classes, OCR quality scores, tokenizability metrics, duplicate flags, and HathiTrust rights determinations). The paper and companion resources document the refining pipeline and dataset diagnostics to make historical texts accurate and model-ready for training and analysis. Distribution is via Hugging Face under early-access, noncommercial, no-redistribution terms while institutions work toward longer-term sharing norms.&lt;br /&gt;
|relevance=The contribution is a lawful, well-documented large-scale textual corpus with institutional provenance—addressing the scarcity of public, rights-clear training data and enabling replicable pretraining, evaluation, and text mining on historical books. Practically, the release provides ready-to-use signals (language/topic labels, OCR/quality scores, token counts) and open pipeline code for reuse and auditing; policy-wise, it exemplifies how public-domain collections + metadata curation can expand access while navigating rights and stewardship constraints (with current usage limits noted).&lt;br /&gt;
|arxiv=2506.08300&lt;br /&gt;
|wikidata=Q135233876&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Mike Linksvayer</name></author>
		
	</entry>
	<entry>
		<id>https://acawiki.org/index.php?title=The_Gradient_of_Generative_AI_Release:_Methods_and_Considerations&amp;diff=14081</id>
		<title>The Gradient of Generative AI Release: Methods and Considerations</title>
		<link rel="alternate" type="text/html" href="https://acawiki.org/index.php?title=The_Gradient_of_Generative_AI_Release:_Methods_and_Considerations&amp;diff=14081"/>
		<updated>2025-08-27T05:18:14Z</updated>

		<summary type="html">&lt;p&gt;Mike Linksvayer: url&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Summary&lt;br /&gt;
|title=The Gradient of Generative AI Release: Methods and Considerations&lt;br /&gt;
|authors=Irene Solaiman&lt;br /&gt;
|url=https://arxiv.org/abs/2302.04844&lt;br /&gt;
|summary=Introduces a six-level gradient for releasing generative AI—fully closed → gradual/staged → hosted → cloud/API → downloadable → fully open—and uses it to map trade-offs between auditability, community research, and risk control. The paper also charts 2018–2022 release trends (greater closedness for more capable, big-lab systems; relatively more openness from openness-oriented collectives) and enumerates pre-/post-release guardrails (e.g., documentation, model/data cards, structured access, rate-limits, filtering, red-teaming) as complements to any chosen release mode.&lt;br /&gt;
|relevance=Provides a shared release design vocabulary that separates how available a system is from how it is safeguarded, enabling developers, platforms, and regulators to choose a release point on the gradient while layering appropriate controls and transparency artifacts. Practically, the gradient supports evidence-based comparisons of releases across modalities and over time, and offers a structured way to justify or contest release decisions without collapsing everything into “open vs. closed.”&lt;br /&gt;
|pub_date=2023/06/23&lt;br /&gt;
|doi=10.1145/3593013.3593981&lt;br /&gt;
|wikidata=Q135644345&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Mike Linksvayer</name></author>
		
	</entry>
	<entry>
		<id>https://acawiki.org/index.php?title=Analyzing_the_Evolution_and_Maintenance_of_ML_Models_on_Hugging_Face&amp;diff=14080</id>
		<title>Analyzing the Evolution and Maintenance of ML Models on Hugging Face</title>
		<link rel="alternate" type="text/html" href="https://acawiki.org/index.php?title=Analyzing_the_Evolution_and_Maintenance_of_ML_Models_on_Hugging_Face&amp;diff=14080"/>
		<updated>2025-08-26T03:43:18Z</updated>

		<summary type="html">&lt;p&gt;Mike Linksvayer: Created page with &amp;quot;{{Summary |title=Analyzing the Evolution and Maintenance of ML Models on Hugging Face |authors=Joel Castaño, Silverio Martínez-Fernández, Xavier Franch, Justus Bogner |url=...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Summary&lt;br /&gt;
|title=Analyzing the Evolution and Maintenance of ML Models on Hugging Face&lt;br /&gt;
|authors=Joel Castaño, Silverio Martínez-Fernández, Xavier Franch, Justus Bogner&lt;br /&gt;
|url=https://dl.acm.org/doi/10.1145/3643991.3644898&lt;br /&gt;
|summary=The authors mine ~380k Hugging Face model repos to characterize ecosystem growth, author groups, model-card topics, and—centrally—maintenance. They classify commit messages as perfective, corrective, or adaptive, and cluster repos into high- vs. low-maintenance groups using k-means on commit frequency, intervals, authors, etc. Findings: commit activity is right-skewed (many models have few commits); the average commit edits ~5 files (median 2); perfective commits dominate (~89%), with adaptive (~6%) and corrective (~2.5%) far smaller; and only ~16.5% of models fall into a high-maintenance cluster (the rest are low-maintenance). High-maintenance repos are larger and more popular (downloads/likes) and tend to have longer model cards; popularity is concentrated in a small number of author groups.&lt;br /&gt;
|relevance=The study provides ecosystem-level evidence on how open-model projects are actually maintained: most are low-maintenance and dominated by incremental/perfective updates, while a small, active core anchors the platform’s maintained models. This sharpens our understanding of open-weight development by distinguishing lineage-level reuse from within-repo upkeep, and by linking maintenance intensity to popularity, size, and documentation depth. Practically, the features they surface (commit cadence, author count, card length) are useful signals of model health for users; for platform and tool designers, the results motivate ML-specific maintenance tools (better versioning for data/models, automated monitoring for drift) and transparent maintenance logs to improve selection and trust.&lt;br /&gt;
|doi=10.1145/3643991.3644898&lt;br /&gt;
|wikidata=Q135972655&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Mike Linksvayer</name></author>
		
	</entry>
	<entry>
		<id>https://acawiki.org/index.php?title=Anatomy_of_a_Machine_Learning_Ecosystem:_2_Million_Models_on_Hugging_Face&amp;diff=14079</id>
		<title>Anatomy of a Machine Learning Ecosystem: 2 Million Models on Hugging Face</title>
		<link rel="alternate" type="text/html" href="https://acawiki.org/index.php?title=Anatomy_of_a_Machine_Learning_Ecosystem:_2_Million_Models_on_Hugging_Face&amp;diff=14079"/>
		<updated>2025-08-26T01:17:09Z</updated>

		<summary type="html">&lt;p&gt;Mike Linksvayer: Created page with &amp;quot;{{Summary |title=Anatomy of a Machine Learning Ecosystem: 2 Million Models on Hugging Face |authors=Benjamin Laufer, Hamidah Oderinwale, Jon Kleinberg |url=https://arxiv.org/a...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Summary&lt;br /&gt;
|title=Anatomy of a Machine Learning Ecosystem: 2 Million Models on Hugging Face&lt;br /&gt;
|authors=Benjamin Laufer, Hamidah Oderinwale, Jon Kleinberg&lt;br /&gt;
|url=https://arxiv.org/abs/2508.06811&lt;br /&gt;
|summary=The paper empirically maps the open ML ecosystem on Hugging Face by analyzing ~1.86 million models and reconstructing “family trees” that link fine-tuned descendants to their base models. Using metadata and model-card text, the authors quantify trait drift along lineages and find that (i) siblings resemble each other more than parent–child pairs (fast, directed mutations), (ii) licenses tend to drift from restrictive/commercial toward permissive or copyleft (sometimes misaligned with upstream terms), (iii) models drift from multilingual to English-only compatibility, and (iv) model cards shorten and standardize, with increased use of templated/auto-generated text.&lt;br /&gt;
|relevance=The work contributes ecosystem-level evidence on how open-weight development evolves, adapting an evolutionary/phylogenetic lens to quantify lineage structure and mutation in model traits. Practically, the findings bear on governance and platform design: license drift highlights compliance and stewardship challenges for derivative models; language and documentation drift inform evaluation reliability and discoverability; and the observed lineage dynamics suggest that norms and instrumentation at the base-model level can shape downstream behavior across large families.&lt;br /&gt;
|pub_date=2025/08/09&lt;br /&gt;
|wikidata=Q135972441&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Mike Linksvayer</name></author>
		
	</entry>
	<entry>
		<id>https://acawiki.org/index.php?title=On_the_Societal_Impact_of_Open_Foundation_Models&amp;diff=14078</id>
		<title>On the Societal Impact of Open Foundation Models</title>
		<link rel="alternate" type="text/html" href="https://acawiki.org/index.php?title=On_the_Societal_Impact_of_Open_Foundation_Models&amp;diff=14078"/>
		<updated>2025-08-05T02:07:29Z</updated>

		<summary type="html">&lt;p&gt;Mike Linksvayer: Created page with &amp;quot;{{Summary |title=On the Societal Impact of Open Foundation Models |authors=Sayash Kapoor, Rishi Bommasani, Kevin Klyman, Shayne Longpre, Ashwin Ramaswami, Peter Cihon, Aspen H...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Summary&lt;br /&gt;
|title=On the Societal Impact of Open Foundation Models&lt;br /&gt;
|authors=Sayash Kapoor, Rishi Bommasani, Kevin Klyman, Shayne Longpre, Ashwin Ramaswami, Peter Cihon, Aspen Hopkins, Kevin Bankston, Stella Biderman, Miranda Bogen, Rumman Chowdhury, Alex Engler, Peter Henderson, Yacine Jernite, Seth Lazar, Stefano Maffulli, Alondra Nelson, Joelle Pineau, Aviya Skowron, Dawn Song, Victor Storchan, Daniel Zhang, Daniel E. Ho, Percy Liang, Arvind Narayanan&lt;br /&gt;
|url=https://arxiv.org/abs/2403.07918&lt;br /&gt;
|summary=Defines open foundation models as those with broadly available model weights and frames their societal impact via five distinctive properties—broader access, greater customizability, potential for local inference, irreversibility of release, and weaker monitoring—which jointly explain both benefits (innovation, competition, dispersion of decision-making, transparency) and risks. The paper contributes a six-point framework for assessing marginal risk (risk beyond closed models or preexisting tech) and surveys seven misuse vectors (e.g., biosecurity, cybersecurity, disinformation, NCII/CSAM), concluding that current evidence is generally insufficient to quantify marginal risk in most cases and clarifying why past debates often talk past each other.&lt;br /&gt;
|relevance=Offers policy-usable guidance: keep benefits and marginal-risk analysis distinct; require assumptions and evidence when claiming harm; and avoid obligations that presume centralized control (e.g., developer liability or strict watermarking duties) that open-model developers cannot realistically meet. The authors recommend: (i) developers publish which responsible-AI practices they implement vs. delegate downstream; (ii) researchers prioritize empirical tests of marginal risk; (iii) policymakers assess regulatory burden on open-model developers and target interventions to specific misuse vectors; and (iv) competition authorities measure openness-linked benefits (choice, costs) rather than assume them.&lt;br /&gt;
|pub_date=2024/02/07&lt;br /&gt;
|wikidata=Q135645242&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Mike Linksvayer</name></author>
		
	</entry>
	<entry>
		<id>https://acawiki.org/index.php?title=On_the_Standardization_of_Behavioral_Use_Clauses_and_Their_Adoption_for_Responsible_Licensing_of_AI&amp;diff=14077</id>
		<title>On the Standardization of Behavioral Use Clauses and Their Adoption for Responsible Licensing of AI</title>
		<link rel="alternate" type="text/html" href="https://acawiki.org/index.php?title=On_the_Standardization_of_Behavioral_Use_Clauses_and_Their_Adoption_for_Responsible_Licensing_of_AI&amp;diff=14077"/>
		<updated>2025-08-05T01:58:03Z</updated>

		<summary type="html">&lt;p&gt;Mike Linksvayer: Created page with &amp;quot;{{Summary |title=On the Standardization of Behavioral Use Clauses and Their Adoption for Responsible Licensing of AI |authors=Daniel McDuff, Tim Korjakow, Scott Cambo, Jesse J...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Summary&lt;br /&gt;
|title=On the Standardization of Behavioral Use Clauses and Their Adoption for Responsible Licensing of AI&lt;br /&gt;
|authors=Daniel McDuff, Tim Korjakow, Scott Cambo, Jesse Josua Benjamin, Jenny Lee, Yacine Jernite, Carlos Muñoz Ferrandis, Aaron Gokaslan, Alek Tarkowski, Joseph Lindley, A. Feder Cooper, Danish Contractor&lt;br /&gt;
|url=https://arxiv.org/abs/2402.05979&lt;br /&gt;
|summary=Mixed-methods study of behavioral-use clause (BUC) licenses for AI—combining qualitative interviews, clustering/summary of common clauses, and a quantitative adoption analysis—to explain why/how BUCs are used and how they’re being adapted. The authors document large-scale uptake on Hugging Face (e.g., ~41,700 RAIL, ~3,566 LLaMA-2–licensed repos as of Jan 2024) and a recurring pattern of overlapping but inconsistent clause sets across licenses. They consolidate clause families (mis/disinformation, privacy, health, military, etc.; 21 behavioral restrictions) and compare derivative/distribution conditions across popular licenses (e.g., OpenRAIL, LLaMA-2, ImpACT, Falcon). The paper argues for “standardized customization”—a common core plus domain-specific add-ons—and prototypes a community-oriented license generator and license-compatibility scan to help authors assemble consistent, enforceable terms.&lt;br /&gt;
|relevance=Positions behavioral-use clause (BUC) licenses as a pragmatic middle path between “open with no restrictions” and “don’t release,” showing adoption at scale and arguing that standardization can reduce confusion and better align terms with intent. It delivers actionable components—a canonical clause taxonomy, a community license generator, and a license-compatibility scan—that platforms, funders, and regulators can directly use. Practical levers include preferring standardized BUC bundles in grants/procurement, requiring machine-readable attestations of selected clauses, encouraging OpenRAIL-style behavioral limits where needed (while avoiding non-behavioral restrictions like “research-only”), and using compatibility checks to prevent conflicts (e.g., with GPL). The paper cautions that standardization encodes value judgments, recommending configurable clauses to handle domain-specific risks while preserving a shared core.&lt;br /&gt;
|pub_date=2024/02/07&lt;br /&gt;
|wikidata=Q135645219&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Mike Linksvayer</name></author>
		
	</entry>
	<entry>
		<id>https://acawiki.org/index.php?title=New_Tools_are_Needed_for_Tracking_Adherence_to_AI_Model_Behavioral_Use_Clauses&amp;diff=14076</id>
		<title>New Tools are Needed for Tracking Adherence to AI Model Behavioral Use Clauses</title>
		<link rel="alternate" type="text/html" href="https://acawiki.org/index.php?title=New_Tools_are_Needed_for_Tracking_Adherence_to_AI_Model_Behavioral_Use_Clauses&amp;diff=14076"/>
		<updated>2025-08-05T01:52:54Z</updated>

		<summary type="html">&lt;p&gt;Mike Linksvayer: Created page with &amp;quot;{{Summary |title=New Tools are Needed for Tracking Adherence to AI Model Behavioral Use Clauses |authors=Daniel McDuff, Tim Korjakow, Kevin Klyman, Danish Contractor |url=http...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Summary&lt;br /&gt;
|title=New Tools are Needed for Tracking Adherence to AI Model Behavioral Use Clauses&lt;br /&gt;
|authors=Daniel McDuff, Tim Korjakow, Kevin Klyman, Danish Contractor&lt;br /&gt;
|url=https://arxiv.org/abs/2505.22287v1&lt;br /&gt;
|summary=Analyzes real-world use of behavioral-use clause (BUC; RAIL-style) licenses via two studies: (i) deployment of an open-source RAIL License Generator (licenses.ai), yielding 308+ customized licenses in ~12 months and revealing clause-selection patterns (e.g., heavy emphasis on disinformation/misuse prohibitions; fewer selections for bodily-harm/warfare clauses); and (ii) a large-scale scan of 1.7M Hugging Face model repos, finding RAIL licenses on 12.1% of models versus 61.5% under OSI open-source licenses, with a convergence toward a short list (~25) of canonical clauses. The paper argues the field now needs tools to track both adoption and adherence to these licenses, and sketches socio-technical avenues: provenance/fingerprinting, output watermarking, and community reporting. It also documents licensing flux (e.g., Stable Diffusion and DeepSeek changes; Dolma dataset’s ImpACT→ODC-By-2.0 switch) and situates OSAID 1.0 and “open-weights” as alternative release philosophies in tension with behavioral-use licensing.&lt;br /&gt;
|relevance=Shifts the conversation from authoring BUCs to verifying compliance: the evidence base (generator telemetry + 1.7M-repo scan) motivates building adherence tooling and attestation processes that smaller labs can use—not just major providers. For policy and platform governance, the results support explicit monitoring mandates (or incentives) for BUC compliance, investment in traceability infrastructure, and clear labeling of open-weights vs. BUC-restricted releases so obligations are intelligible to downstream users; they also highlight that, absent such tools, standardized BUCs may not deliver intended public-interest outcomes.&lt;br /&gt;
|wikidata=Q135645210&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Mike Linksvayer</name></author>
		
	</entry>
	<entry>
		<id>https://acawiki.org/index.php?title=Measuring_the_openness_of_AI_foundation_models:_competition_and_policy_implications&amp;diff=14075</id>
		<title>Measuring the openness of AI foundation models: competition and policy implications</title>
		<link rel="alternate" type="text/html" href="https://acawiki.org/index.php?title=Measuring_the_openness_of_AI_foundation_models:_competition_and_policy_implications&amp;diff=14075"/>
		<updated>2025-08-05T01:47:00Z</updated>

		<summary type="html">&lt;p&gt;Mike Linksvayer: Created page with &amp;quot;{{Summary |title=Measuring the openness of AI foundation models: competition and policy implications |authors=Thibault Schrepel, Jason Potts |url=https://www.tandfonline.com/d...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Summary&lt;br /&gt;
|title=Measuring the openness of AI foundation models: competition and policy implications&lt;br /&gt;
|authors=Thibault Schrepel, Jason Potts&lt;br /&gt;
|url=https://www.tandfonline.com/doi/full/10.1080/13600834.2025.2461953&lt;br /&gt;
|summary=Most “openness” yardsticks for foundation models focus on technical artifacts; this paper instead measures license-level openness as a driver of the AI innovation commons. It defines an 18-variable index, clustered around three economic problems a license must solve for commons to work—knowledge (e.g., accessibility, transparency, documentation), implicit contracting (e.g., contribution policies, exit rights, anti-opportunism), and collective-action governance (e.g., access &amp;amp; use rights, participatory governance, interoperability). The authors hand-score 11 prominent models (GPT-4, Gemini Ultra, Llama 3, Midjourney V6, Claude 3, xAI, Mistral 8×7B, BLOOM, Cohere Aya, Cohere Command R, Falcon 180B) across these variables (0/1/2 per variable; 198 scored items total) and release the evidence table. Headline findings: today’s models score highest on documentation/support, collaboration platforms, and derivative works, and lowest on exit rights, costs of maintenance, participatory governance, and credit/revenue sharing; openness rankings diverge from technical-only indices, with Cohere Aya and BLOOM-560M leading on “knowledge accessibility,” while GPT-4, Gemini, Midjourney V6, Command R, and Claude 3 score 0/2 on knowledge accessibility and (except Command R) on interoperability. Aggregate scores are 64/132 for knowledge vs. 35/132 for each of implicit contracting and governance—pinpointing where “open” claims break down.&lt;br /&gt;
|relevance=Turns “open vs. closed” into a license-design checklist that antitrust and policy actors can act on. For policymakers, it proposes a minimum audit (review how existing rules affect the 18 variables) and a maximal, three-pillar program: (1) legal exemptions for genuinely open systems (not just technical definitions), (2) economic measures (tax incentives, procurement preference, funding, institutional support) to tilt toward open models, and (3) technical support (transparency duties when public funds are used; open data repositories; interoperability standards). For enforcers, it argues the closed end of the spectrum deserves priority scrutiny (licenses that restrict forking, interoperability, or access), whereas more-open licenses reduce anticompetitive leverage by enabling scrutiny, forking, and entry. The authors publish the 198-item evidence sheet so others can reuse the rubric and recompute weights for procurement, exemptions, or oversight.&lt;br /&gt;
|pub_date=2025/03/05&lt;br /&gt;
|doi=10.1080/13600834.2025.2461953&lt;br /&gt;
|wikidata=Q135645204&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Mike Linksvayer</name></author>
		
	</entry>
	<entry>
		<id>https://acawiki.org/index.php?title=The_2024_Foundation_Model_Transparency_Index&amp;diff=14074</id>
		<title>The 2024 Foundation Model Transparency Index</title>
		<link rel="alternate" type="text/html" href="https://acawiki.org/index.php?title=The_2024_Foundation_Model_Transparency_Index&amp;diff=14074"/>
		<updated>2025-08-05T01:38:14Z</updated>

		<summary type="html">&lt;p&gt;Mike Linksvayer: Created page with &amp;quot;{{Summary |title=The 2024 Foundation Model Transparency Index |authors=Rishi Bommasani, Kevin Klyman, Sayash Kapoor, Shayne Longpre, Betty Xiong, Nestor Maslej, Percy Liang |u...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Summary&lt;br /&gt;
|title=The 2024 Foundation Model Transparency Index&lt;br /&gt;
|authors=Rishi Bommasani, Kevin Klyman, Sayash Kapoor, Shayne Longpre, Betty Xiong, Nestor Maslej, Percy Liang&lt;br /&gt;
|url=https://arxiv.org/abs/2407.12929&lt;br /&gt;
|summary=Updates the Foundation Model Transparency Index six months after its launch by scoring 14 developers on the same 100 indicators across upstream, model, and downstream domains, but with a key methodological shift: developers submit itemized reports (not just public web evidence). Average transparency rises from 37 → 58/100, with developers disclosing new, previously non-public information on ~16.6 indicators on average; yet sustained, systemic opacity remains around copyright status, data access, data labor, and downstream impact. The team also releases a public transparency report for each developer, arguing the Index itself likely contributed to improved disclosure.&lt;br /&gt;
|relevance=Establishes a defensible, repeatable yardstick for model-developer transparency that regulators, procurers, and funders can operationalize (e.g., set minimum indicator baselines, require developer-submitted attestations, and reference per-developer reports). The results support policy interventions targeted at lagging areas (e.g., external data access, mitigations/evaluations) while preserving a comparison-ready dashboard that separates where transparency improved from where it hasn’t—useful for eligibility rules, oversight, and monitoring over time.&lt;br /&gt;
|pub_date=2024/07/17&lt;br /&gt;
|wikidata=Q135645196&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Mike Linksvayer</name></author>
		
	</entry>
	<entry>
		<id>https://acawiki.org/index.php?title=Opening_up_ChatGPT:_Tracking_openness,_transparency,_and_accountability_in_instruction-tuned_text_generators&amp;diff=14073</id>
		<title>Opening up ChatGPT: Tracking openness, transparency, and accountability in instruction-tuned text generators</title>
		<link rel="alternate" type="text/html" href="https://acawiki.org/index.php?title=Opening_up_ChatGPT:_Tracking_openness,_transparency,_and_accountability_in_instruction-tuned_text_generators&amp;diff=14073"/>
		<updated>2025-08-05T01:20:55Z</updated>

		<summary type="html">&lt;p&gt;Mike Linksvayer: Created page with &amp;quot;{{Summary |title=Opening up ChatGPT: Tracking openness, transparency, and accountability in instruction-tuned text generators |authors=Andreas Liesenfeld, Alianda Lopez, Mark...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Summary&lt;br /&gt;
|title=Opening up ChatGPT: Tracking openness, transparency, and accountability in instruction-tuned text generators&lt;br /&gt;
|authors=Andreas Liesenfeld, Alianda Lopez, Mark Dingemanse&lt;br /&gt;
|url=https://dl.acm.org/doi/10.1145/3571884.3604316&lt;br /&gt;
|summary=Introduces an evidence-based openness audit for instruction-tuned text generators and applies it to an early cohort of 15 projects, scoring 13 features across three areas—availability (e.g., code, model/LLM weights, RLHF data, licensing, package), documentation (architecture, preprint/paper, datasheets), and access methods (API, download)—with a tri-level rubric (open / partial / closed). A living dataset [since superseded by https://osai-index.eu/] accompanies the paper; headline findings show many projects marketed as “open source” inherit undocumented training data, rarely share instruction-tuning (RLHF) data, and provide sparse scientific documentation.&lt;br /&gt;
|relevance=Offers one of the first structured yardsticks for “open” in chat-style LLMs, turning the label into verifiable feature checks that researchers, platforms, and funders can adopt for disclosure, badging, or procurement. The checklist and public, updatable repository support consistent comparisons over time and help detect open-washing (open weights but closed data/docs/RLHF), providing a baseline many later audits build on.&lt;br /&gt;
|pub_date=2023/07/19&lt;br /&gt;
|wikidata=Q135645109&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Mike Linksvayer</name></author>
		
	</entry>
	<entry>
		<id>https://acawiki.org/index.php?title=MusGO:_A_Community-Driven_Framework_For_Assessing_Openness_in_Music-Generative_AI&amp;diff=14072</id>
		<title>MusGO: A Community-Driven Framework For Assessing Openness in Music-Generative AI</title>
		<link rel="alternate" type="text/html" href="https://acawiki.org/index.php?title=MusGO:_A_Community-Driven_Framework_For_Assessing_Openness_in_Music-Generative_AI&amp;diff=14072"/>
		<updated>2025-08-05T01:11:09Z</updated>

		<summary type="html">&lt;p&gt;Mike Linksvayer: Created page with &amp;quot;{{Summary |title=MusGO: A Community-Driven Framework For Assessing Openness in Music-Generative AI |authors=Roser Batlle-Roca, Laura Ibáñez-Martínez, Xavier Serra, Emilia G...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Summary&lt;br /&gt;
|title=MusGO: A Community-Driven Framework For Assessing Openness in Music-Generative AI&lt;br /&gt;
|authors=Roser Batlle-Roca, Laura Ibáñez-Martínez, Xavier Serra, Emilia Gómez, Martín Rocamora&lt;br /&gt;
|url=https://www.arxiv.org/abs/2507.03599&lt;br /&gt;
|summary=Adapts an evidence-based openness audit from LLMs to music-generative AI and, via a survey of 110 MIR community members, distills a domain-specific framework—MusGO—with 13 categories: 8 essential (graded closed / partial / open; e.g., source code, training data, model weights, code docs, training procedure, evaluation procedure, research paper, licensing) and 5 desirable (binary; model card, datasheet, package, user-oriented application, supplementary material page). The paper publishes a public leaderboard and open repository so assessments (and their evidence) can be scrutinized and updated. Applying MusGO to 16 music models shows training procedure is often the most open, training data the most closed (only one model reaches “fully open” there), and that openness in code ↔ weights ↔ documentation ↔ licensing tends to co-occur, while datasheets are rare. A weighting scheme (doubling the three most-valued essentials) orders models but is expressly not a single-score definition.&lt;br /&gt;
|relevance=Provides a sector-specific, evidence-verifiable yardstick for “open model” claims in music—useful for audits, funding, and platform policy—and a community workflow (leaderboard + repo) that can catch open-washing and track improvements. By treating training data disclosure pragmatically (counting detailed source disclosure as “fully open” when raw sharing is unlawful) and adding evaluation procedure and user-facing artifacts, MusGO shows how openness can be operationalized under IP constraints while still enabling scrutiny and reproducibility—complementing general schemes (e.g., MOF/OSAID) with music-specific criteria.&lt;br /&gt;
|wikidata=Q135644933&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Mike Linksvayer</name></author>
		
	</entry>
	<entry>
		<id>https://acawiki.org/index.php?title=Considerations_for_governing_open_foundation_models&amp;diff=14071</id>
		<title>Considerations for governing open foundation models</title>
		<link rel="alternate" type="text/html" href="https://acawiki.org/index.php?title=Considerations_for_governing_open_foundation_models&amp;diff=14071"/>
		<updated>2025-08-05T00:59:42Z</updated>

		<summary type="html">&lt;p&gt;Mike Linksvayer: Created page with &amp;quot;{{Summary |title=Considerations for governing open foundation models |authors=Rishi Bommasani, Sayash Kapoor, Kevin Klyman, Shayne Longpre, Ashwin Ramaswami, Daniel Zhang, Mar...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Summary&lt;br /&gt;
|title=Considerations for governing open foundation models&lt;br /&gt;
|authors=Rishi Bommasani, Sayash Kapoor, Kevin Klyman, Shayne Longpre, Ashwin Ramaswami, Daniel Zhang, Marietje Schaake, Daniel E. Ho, Arvind Narayanan, Percy Liang&lt;br /&gt;
|url=https://www.science.org/doi/full/10.1126/science.adp1848&lt;br /&gt;
|summary=Argues that policy for foundation models should separate the benefits of openness (competition, innovation, power dispersion, accountability) from claims about its marginal risks, and then evaluate risk relative to closed models and preexisting technologies. Using that “marginal risk” lens, the authors note that evidence for higher risk from openness is limited overall (with documented exceptions like CSAM/NCII via image models) and that some popular proposals (e.g., compute-licensing thresholds) are mismatched to those harms. They also show how several regulatory ideas—downstream content liability for developers, mandatory content provenance/watermarking guarantees, and data-disclosure duties—can disproportionately burden open-model developers even when not targeted at them, and urge direct consultation with the open-model community. A gradient of release modes (fully closed → hosted/API → downloadable weights → full releases) is used to situate these choices.&lt;br /&gt;
|relevance=Provides a policy framing and checklist: (1) assess marginal risk (open vs. closed vs. status-quo tech) rather than treating openness as intrinsically riskier; (2) avoid interventions that incidentally penalize openness—e.g., developer liability for user conduct or provenance mandates that presuppose centralized control; (3) when disclosure is needed, design it to not create perverse incentives against releasing data or documentation; and (4) engage open-model developers when crafting rules. For implementers, the gradient of release plus this lens supports evidence-based release decisions and targeted safeguards without collapsing “open vs. closed.”&lt;br /&gt;
|pub_date=2024/10/11&lt;br /&gt;
|doi=10.1126/science.adp1848&lt;br /&gt;
|wikidata=Q135644890&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Mike Linksvayer</name></author>
		
	</entry>
	<entry>
		<id>https://acawiki.org/index.php?title=A_Different_Approach_to_AI_Safety:_Proceedings_from_the_Columbia_Convening_on_Openness_in_Artificial_Intelligence_and_AI_Safety&amp;diff=14070</id>
		<title>A Different Approach to AI Safety: Proceedings from the Columbia Convening on Openness in Artificial Intelligence and AI Safety</title>
		<link rel="alternate" type="text/html" href="https://acawiki.org/index.php?title=A_Different_Approach_to_AI_Safety:_Proceedings_from_the_Columbia_Convening_on_Openness_in_Artificial_Intelligence_and_AI_Safety&amp;diff=14070"/>
		<updated>2025-08-05T00:33:25Z</updated>

		<summary type="html">&lt;p&gt;Mike Linksvayer: Created page with &amp;quot;{{Summary |title=A Different Approach to AI Safety: Proceedings from the Columbia Convening on Openness in Artificial Intelligence and AI Safety |authors=Camille François, Lu...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Summary&lt;br /&gt;
|title=A Different Approach to AI Safety: Proceedings from the Columbia Convening on Openness in Artificial Intelligence and AI Safety&lt;br /&gt;
|authors=Camille François, Ludovic Péran, Ayah Bdeir, Nouha Dziri, Will Hawkins, Yacine Jernite, Sayash Kapoor, Juliet Shen, Heidy Khlaaf, Kevin Klyman, Nik Marda, Marie Pellat, Deb Raji, Divya Siddarth, Aviya Skowron, Joseph Spisak, Madhulika Srikumar, Victor Storchan, Audrey Tang, Jen Weedon&lt;br /&gt;
|url=https://arxiv.org/abs/2506.22183&lt;br /&gt;
|summary=Reports the outcomes of the Columbia Convening on AI Openness and Safety (Nov 19, 2024) and its six-week prep program: (i) a community-informed research agenda on the intersection of openness and safety; (ii) a workflow-based map of post-training technical interventions and open-source tooling for deploying open foundation models safely; and (iii) a survey of the content-safety filter ecosystem with a development roadmap. It argues that openness—transparent weights, interoperable tooling, and public governance—can strengthen safety via independent scrutiny and decentralized mitigation, but highlights gaps (multimodal/multilingual benchmarks, defenses against prompt-injection/compositional attacks in agentic systems, and participatory mechanisms). The paper closes with five priority directions (participatory inputs; future-proof content filters; ecosystem-wide safety infrastructure; rigorous agentic safeguards; expanded harm taxonomies).&lt;br /&gt;
|relevance=Turns “openness helps safety” into an operational program: a post-training safety workflow developers can map against, tables of tools and interventions (with identified gaps) to target investment, and a consolidated content-filter landscape with trade-offs and a roadmap. It also points to ROOST—a new open, scalable safety-infrastructure effort—as a vehicle for building shared tooling, and notes the agenda’s influence on the Feb 2025 French AI Action Summit. For practitioners and policymakers, this yields a concrete way to specify where to build/open tooling, how to choose and evaluate filters, and where to prioritize research on agentic-system safeguards.&lt;br /&gt;
|doi=10.48550/ARXIV.2506.22183&lt;br /&gt;
|wikidata=Q135644843&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Mike Linksvayer</name></author>
		
	</entry>
	<entry>
		<id>https://acawiki.org/index.php?title=Risks_and_Opportunities_of_Open-Source_Generative_AI&amp;diff=14069</id>
		<title>Risks and Opportunities of Open-Source Generative AI</title>
		<link rel="alternate" type="text/html" href="https://acawiki.org/index.php?title=Risks_and_Opportunities_of_Open-Source_Generative_AI&amp;diff=14069"/>
		<updated>2025-08-05T00:21:30Z</updated>

		<summary type="html">&lt;p&gt;Mike Linksvayer: Created page with &amp;quot;{{Summary |title=Risks and Opportunities of Open-Source Generative AI |authors=Francisco Eiras, Aleksandar Petrov, Bertie Vidgen, Christian Schroeder, Fabio Pizzati, Katherine...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Summary&lt;br /&gt;
|title=Risks and Opportunities of Open-Source Generative AI&lt;br /&gt;
|authors=Francisco Eiras, Aleksandar Petrov, Bertie Vidgen, Christian Schroeder, Fabio Pizzati, Katherine Elkins, Supratik Mukhopadhyay, Adel Bibi, Aaron Purewal, Csaba Botos, Fabro Steibel, Fazel Keshtkar, Fazl Barez, Genevieve Smith, Gianluca Guadagni, Jon Chun, Jordi Cabot, Joseph Imperial, Juan Arturo Nolazco, Lori Landay, Matthew Jackson, Phillip H. S. Torr, Trevor Darrell, Yong Lee, Jakob Foerster&lt;br /&gt;
|url=https://arxiv.org/abs/2405.08597&lt;br /&gt;
|summary=Introduces an openness taxonomy for generative-AI pipelines that scores each component (pre-training data, SFT/alignment data, evaluation code/data, inference, architecture/weights) on a license-based scale (C1–C5 for code, D1–D5 for data) and applies it to 45 widely used LLMs. The analysis surfaces two empirical patterns: (O1) providers commonly open weights while keeping training and safety-evaluation data/code closed, and (O2) currently more-open models underperform leading closed systems on public leaderboards. The paper also frames risk discussion across near/mid/long-term horizons and argues that—with appropriate practices—opening more of the pipeline (including safety evals and logs) yields net benefits. An accompanying website https://open-source-llms.github.io/ tracks updates to the model taxonomy.&lt;br /&gt;
|relevance=Gives researchers, platforms, and policymakers a component-level measurement of “open” they can operationalize today (license rubric + per-component scoring + model matrix) to compare releases and target where to open next (e.g., safety-evaluation code/data). The recommended practices—release training &amp;amp; safety-evaluation datasets, training/eval/inference code, intermediate checkpoints, and training logs; publish thorough documentation; and keep openness conditions largely voluntary—offer a concrete path to expand transparency without new hard legal constraints; the long-term section posits that greater openness can aid technical alignment and reduce extreme-risk scenarios.&lt;br /&gt;
|arxiv=2405.08597&lt;br /&gt;
|wikidata=Q135644817&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Mike Linksvayer</name></author>
		
	</entry>
	<entry>
		<id>https://acawiki.org/index.php?title=The_Gradient_of_Generative_AI_Release:_Methods_and_Considerations&amp;diff=14068</id>
		<title>The Gradient of Generative AI Release: Methods and Considerations</title>
		<link rel="alternate" type="text/html" href="https://acawiki.org/index.php?title=The_Gradient_of_Generative_AI_Release:_Methods_and_Considerations&amp;diff=14068"/>
		<updated>2025-08-04T23:47:49Z</updated>

		<summary type="html">&lt;p&gt;Mike Linksvayer: Created page with &amp;quot;{{Summary |title=The Gradient of Generative AI Release: Methods and Considerations |authors=Irene Solaiman |url=https://dl.acm.org/doi/abs/10.1145/3593013.3593981 |summary=Int...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Summary&lt;br /&gt;
|title=The Gradient of Generative AI Release: Methods and Considerations&lt;br /&gt;
|authors=Irene Solaiman&lt;br /&gt;
|url=https://dl.acm.org/doi/abs/10.1145/3593013.3593981&lt;br /&gt;
|summary=Introduces a six-level gradient for releasing generative AI—fully closed → gradual/staged → hosted → cloud/API → downloadable → fully open—and uses it to map trade-offs between auditability, community research, and risk control. The paper also charts 2018–2022 release trends (greater closedness for more capable, big-lab systems; relatively more openness from openness-oriented collectives) and enumerates pre-/post-release guardrails (e.g., documentation, model/data cards, structured access, rate-limits, filtering, red-teaming) as complements to any chosen release mode.&lt;br /&gt;
|relevance=Provides a shared release design vocabulary that separates how available a system is from how it is safeguarded, enabling developers, platforms, and regulators to choose a release point on the gradient while layering appropriate controls and transparency artifacts. Practically, the gradient supports evidence-based comparisons of releases across modalities and over time, and offers a structured way to justify or contest release decisions without collapsing everything into “open vs. closed.”&lt;br /&gt;
|pub_date=2023/06/23&lt;br /&gt;
|doi=10.1145/3593013.3593981&lt;br /&gt;
|wikidata=Q135644345&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Mike Linksvayer</name></author>
		
	</entry>
	<entry>
		<id>https://acawiki.org/index.php?title=The_Model_Openness_Framework:_Promoting_Completeness_and_Openness_for_Reproducibility,_Transparency,_and_Usability_in_Artificial_Intelligence&amp;diff=14067</id>
		<title>The Model Openness Framework: Promoting Completeness and Openness for Reproducibility, Transparency, and Usability in Artificial Intelligence</title>
		<link rel="alternate" type="text/html" href="https://acawiki.org/index.php?title=The_Model_Openness_Framework:_Promoting_Completeness_and_Openness_for_Reproducibility,_Transparency,_and_Usability_in_Artificial_Intelligence&amp;diff=14067"/>
		<updated>2025-08-04T23:36:04Z</updated>

		<summary type="html">&lt;p&gt;Mike Linksvayer: Created page with &amp;quot;{{Summary |title=The Model Openness Framework: Promoting Completeness and Openness for Reproducibility, Transparency, and Usability in Artificial Intelligence |authors=Matt Wh...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Summary&lt;br /&gt;
|title=The Model Openness Framework: Promoting Completeness and Openness for Reproducibility, Transparency, and Usability in Artificial Intelligence&lt;br /&gt;
|authors=Matt White, Ibrahim Haddad, Cailean Osborne, Xiao-Yang Yanglet Liu, Ahmed Abdelmonsef, Sachin Varghese, Arnaud Le Hors&lt;br /&gt;
|url=https://arxiv.org/abs/2403.13784&lt;br /&gt;
|summary=Responding to inconsistent “open model” claims, the paper introduces the Model Openness Framework (MOF)—a three-class scheme (III Open Model → II Open Tooling → I Open Science) defined by 17 required components across code, data, and documentation, each to be released under acceptable open licenses. It pairs the framework with a Model Openness Tool (MOT) and a machine-readable MOF.JSON inventory for attestation, badging, and community dispute/validation—shifting openness from a label to a verifiable bill of materials. Notably, it treats model parameters as data (recommending open data licenses) and clarifies that MOF measures completeness when open (not a scalar “degree of openness”), with higher classes adding training/eval code, data, and datasets to enable reproducibility.&lt;br /&gt;
|relevance=MOF provides actionable release criteria and a checkable disclosure bundle (MOT + MOF.JSON) so producers can ship reproducible models and consumers/regulators can verify what can be used, studied, modified, and redistributed—reducing open-washing and standardizing expectations. The class ladder (III→I) operationalizes a path from “open weights + docs” to tooling-complete and finally data-complete releases, aligning openness with reproducibility, transparency, and usability for audits and downstream use.&lt;br /&gt;
|doi=10.48550/ARXIV.2403.13784&lt;br /&gt;
|wikidata=Q135644324&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Mike Linksvayer</name></author>
		
	</entry>
	<entry>
		<id>https://acawiki.org/index.php?title=The_Brief_and_Wondrous_Life_of_Open_Models&amp;diff=14066</id>
		<title>The Brief and Wondrous Life of Open Models</title>
		<link rel="alternate" type="text/html" href="https://acawiki.org/index.php?title=The_Brief_and_Wondrous_Life_of_Open_Models&amp;diff=14066"/>
		<updated>2025-08-04T23:26:31Z</updated>

		<summary type="html">&lt;p&gt;Mike Linksvayer: Created page with &amp;quot;{{Summary |title=The Brief and Wondrous Life of Open Models |authors=Madiha Zahrah Choksi, Ilan Mandel, Sebastian Benthall |url=https://dl.acm.org/doi/10.1145/3715275.3732206...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Summary&lt;br /&gt;
|title=The Brief and Wondrous Life of Open Models&lt;br /&gt;
|authors=Madiha Zahrah Choksi, Ilan Mandel, Sebastian Benthall&lt;br /&gt;
|url=https://dl.acm.org/doi/10.1145/3715275.3732206&lt;br /&gt;
|summary=Studies how “open model” communities actually form and persist on Hugging Face, using interaction/participation traces plus qualitative case studies. It finds a characteristic pattern—model obsolescence, nomadic communities that migrate across releases, and a small number of persistent communities—and argues models, as largely static artifacts, rarely support the kind of iterative, co-creative collaboration familiar from OSS. The paper also evaluates licensing/governance against OSAID 1.0 and introduces a simple dual lens on openness—process openness vs. use openness—to clarify where current practices succeed or fail.&lt;br /&gt;
|relevance=Reorients “open model” policy and platform design away from assuming OSS-like community dynamics: sustaining governance requires community scaffolding (maintainers, documentation, licensing clarity) that models alone don’t produce. The process/use openness split offers a compact rubric for audits, funding, and platform rules—e.g., weighting projects with durable communities and transparent processes, not just released weights—while cautioning against directly porting OSS norms to model-centric ecosystems.&lt;br /&gt;
|pub_date=2025/06/23&lt;br /&gt;
|doi=10.1145/3715275.3732206&lt;br /&gt;
|wikidata=Q135644296&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Mike Linksvayer</name></author>
		
	</entry>
	<entry>
		<id>https://acawiki.org/index.php?title=Rethinking_open_source_generative_AI:_open_washing_and_the_EU_AI_Act&amp;diff=14065</id>
		<title>Rethinking open source generative AI: open washing and the EU AI Act</title>
		<link rel="alternate" type="text/html" href="https://acawiki.org/index.php?title=Rethinking_open_source_generative_AI:_open_washing_and_the_EU_AI_Act&amp;diff=14065"/>
		<updated>2025-08-04T23:10:56Z</updated>

		<summary type="html">&lt;p&gt;Mike Linksvayer: Created page with &amp;quot;{{Summary |title=Rethinking open source generative AI: open washing and the EU AI Act |authors=Andreas Liesenfeld, Mark Dingemanse |url=https://dl.acm.org/doi/10.1145/3630106....&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Summary&lt;br /&gt;
|title=Rethinking open source generative AI: open washing and the EU AI Act&lt;br /&gt;
|authors=Andreas Liesenfeld, Mark Dingemanse&lt;br /&gt;
|url=https://dl.acm.org/doi/10.1145/3630106.3659005&lt;br /&gt;
|summary=Identifies open-washing risks under the EU AI Act’s open-model exemptions and introduces an evidence-based openness assessment for generative AI built on 14 dimensions (each rated open / partial / closed). The framework is demonstrated by a BloomZ–vs–Llama 2 audit and then applied in a systematic sweep of 40 text generators and 6 text-to-image models, with all judgements tied to public evidence. Findings show the dominant “open-weights, closed everything else” pattern, widespread evasion of training-data disclosure, and a “release-by-blogpost” tactic that borrows the aura of scientific openness without the underlying documentation.&lt;br /&gt;
|relevance=Turns “openness” from a label into an auditable, multi-dimensional measure that regulators, funders, and platforms can operationalize (including weighted scores or energy-label-style categories) while avoiding single-metric shortcuts that enable open-washing. Provides a reusable, public auditing infrastructure (feature matrix, leaderboard, evidence links) to compare releases, set which forms of openness should count under policy, and steer incentives toward complete, reproducible disclosures rather than marketing claims.&lt;br /&gt;
|pub_date=2024/06/03&lt;br /&gt;
|doi=10.1145/3630106.3659005&lt;br /&gt;
|wikidata=Q135644214&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Mike Linksvayer</name></author>
		
	</entry>
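The 14-dimension open/partial/closed ratings above can feed the weighted scores or energy-label-style categories the authors mention. A minimal sketch, assuming a simple open = 1, partial = 0.5, closed = 0 mapping and uniform default weights; both the mapping and the label thresholds are assumptions, not the paper's published scheme.

```typescript
type Rating = "open" | "partial" | "closed";

// Numeric mapping is an assumption: open = 1, partial = 0.5, closed = 0.
const SCORE: Record<Rating, number> = { open: 1, partial: 0.5, closed: 0 };

// Weighted openness score in [0, 1] over the 14 dimensions; weights default
// to uniform, but a policy could weight e.g. training-data disclosure higher.
function opennessScore(ratings: Rating[], weights?: number[]): number {
  const w = weights ?? ratings.map(() => 1);
  const total = w.reduce((a, b) => a + b, 0);
  return ratings.reduce((sum, r, i) => sum + SCORE[r] * w[i], 0) / total;
}

// Energy-label-style bucket; the thresholds are illustrative.
function label(score: number): string {
  return score >= 0.8 ? "A" : score >= 0.6 ? "B" : score >= 0.4 ? "C" : score >= 0.2 ? "D" : "E";
}

// The dominant "open-weights, closed everything else" pattern scores poorly.
const openWeightsOnly: Rating[] = ["open", ...Array<Rating>(13).fill("closed")];
const s = opennessScore(openWeightsOnly);
console.log(s.toFixed(2), label(s)); // "0.07 E"
```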
	<entry>
		<id>https://acawiki.org/index.php?title=Unlocking_AI_Transparency:_The_Access_Methodology_Framework_for_Classifying_AI_System_Openness&amp;diff=14064</id>
		<title>Unlocking AI Transparency: The Access Methodology Framework for Classifying AI System Openness</title>
		<link rel="alternate" type="text/html" href="https://acawiki.org/index.php?title=Unlocking_AI_Transparency:_The_Access_Methodology_Framework_for_Classifying_AI_System_Openness&amp;diff=14064"/>
		<updated>2025-08-04T22:52:20Z</updated>

		<summary type="html">&lt;p&gt;Mike Linksvayer: Created page with &amp;quot;{{Summary |title=Unlocking AI Transparency: The Access Methodology Framework for Classifying AI System Openness |authors=Michele Herman |url=https://ssrn.com/abstract=5152494...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Summary&lt;br /&gt;
|title=Unlocking AI Transparency: The Access Methodology Framework for Classifying AI System Openness&lt;br /&gt;
|authors=Michele Herman&lt;br /&gt;
|url=https://ssrn.com/abstract=5152494&lt;br /&gt;
|summary=Positions current “openness” schemes (e.g., MOF, OSAID-inspired approaches) as equating transparency with open-source licensing of components, then introduces an Access Methodology Framework (AMF) that classifies openness by the kind of access granted for evaluation rather than by redistribution rights. AMF prioritizes black-box conformity testing where feasible; when artifact access is needed, it requires licenses that permit testing/analysis to understand system behavior but needn’t grant reuse or commercial rights. It operationalizes two licensing modalities—Private Access (PA) under confidentiality for defined stakeholders and Open Access (OA) without confidentiality—allowing terms (e.g., no redistribution/commerce) tailored to risk and use case.&lt;br /&gt;
|relevance=Provides a compliance and auditability path that decouples “transparency” from mandatory open-source release, offering regulators, platforms, and developers a use-case-specific access design (black-box tests first; PA/OA when necessary) that can preserve trade secrets while enabling scrutiny. As a complement (and counterpoint) to MOF/OSAID-style regimes, AMF gives institutions a way to specify testing rights, stakeholder scope, and confidentiality conditions without over- or under-opening components—potentially reducing open-washing debates while supporting innovation and risk mitigation.&lt;br /&gt;
|pub_date=2025&lt;br /&gt;
|doi=10.2139/SSRN.5152494&lt;br /&gt;
|wikidata=Q135644161&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Mike Linksvayer</name></author>
		
	</entry>
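The AMF flow summarized above (black-box conformity testing first; PA or OA only when artifact access is needed) reads as a small decision procedure. A hedged TypeScript sketch; the inputs and the confidentiality criterion are illustrative assumptions, not the paper's exact test.

```typescript
// Illustrative encoding of the AMF flow: black-box conformity testing is
// preferred; artifact access, when unavoidable, is licensed as Private
// Access (PA, under confidentiality) or Open Access (OA, without).
type AccessMode = "black-box-testing" | "private-access" | "open-access";

// Inputs are assumptions for the sketch, not the paper's exact criteria.
interface EvaluationNeed {
  requiresArtifactAccess: boolean; // can the question be answered black-box?
  confidentialityNeeded: boolean;  // e.g. trade secrets or risk-sensitive details
}

function chooseAccess(need: EvaluationNeed): AccessMode {
  if (!need.requiresArtifactAccess) return "black-box-testing";
  return need.confidentialityNeeded ? "private-access" : "open-access";
}

// A regulator auditing behavior that black-box tests cannot reach, against
// a system with trade secrets, gets PA terms: testing rights, no reuse.
console.log(chooseAccess({ requiresArtifactAccess: true, confidentialityNeeded: true }));
```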
	<entry>
		<id>https://acawiki.org/index.php?title=Public_AI_White_Paper_%E2%80%93_A_Public_Alternative_to_Private_AI_Dominance&amp;diff=14063</id>
		<title>Public AI White Paper – A Public Alternative to Private AI Dominance</title>
		<link rel="alternate" type="text/html" href="https://acawiki.org/index.php?title=Public_AI_White_Paper_%E2%80%93_A_Public_Alternative_to_Private_AI_Dominance&amp;diff=14063"/>
		<updated>2025-08-04T22:12:55Z</updated>

		<summary type="html">&lt;p&gt;Mike Linksvayer: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Summary&lt;br /&gt;
|title=Public AI White Paper – A Public Alternative to Private AI Dominance&lt;br /&gt;
|authors=Felix Sieker, Alek Tarkowski, Lea Gimpel, Cailean Osborne&lt;br /&gt;
|summary=This white paper defines Public AI and turns it into a policy framework with three characteristics—public attributes, public functions, and public control—that together determine a system’s “degree of publicness.” It proposes a gradient of publicness spanning commercial systems with some public attributes, through public provision of individual components (e.g., datasets, tooling), up to full-stack public AI infrastructure that integrates compute, data, and models with high public attributes/functions/control.&lt;br /&gt;
&lt;br /&gt;
To operationalize the vision, it lays out three pathways (compute, data, model) under an orchestrating public institution:&lt;br /&gt;
&lt;br /&gt;
(i) public compute access—especially for fully open-source projects—to ensure at least one fully open model with capabilities near state-of-the-art;&lt;br /&gt;
&lt;br /&gt;
(ii) data commons and stewardship models that treat key inputs as digital public goods; and&lt;br /&gt;
&lt;br /&gt;
(iii) support for open-source model development ecosystems.&lt;br /&gt;
&lt;br /&gt;
The paper also articulates governance principles (commons-based governance; open release of models and components; conditional compute tying public resources to openness; protection of digital rights; environmental sustainability; and reciprocity to prevent privatization of public value).&lt;br /&gt;
|relevance=The framework gives policymakers a map and a metric: use the gradient to situate any AI initiative, then move it “up” by boosting public attributes/functions/control—preferably via full-stack strategies rather than isolated components. It reframes public investment away from racing private labs and toward reducing dependencies, building independent capacity, and setting openness-linked conditions on compute and other inputs. For practitioners (research institutions, platforms, funders), it points to actionable levers: allocate compute to truly open projects, stand up data commons under commons-based governance, and publish models/components openly with strong documentation and oversight.&lt;br /&gt;
|pub_date=2025/05&lt;br /&gt;
|wikidata=Q135644012&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Mike Linksvayer</name></author>
		
	</entry>
	<entry>
		<id>https://acawiki.org/index.php?title=Public_AI_White_Paper_%E2%80%93_A_Public_Alternative_to_Private_AI_Dominance&amp;diff=14061</id>
		<title>Public AI White Paper – A Public Alternative to Private AI Dominance</title>
		<link rel="alternate" type="text/html" href="https://acawiki.org/index.php?title=Public_AI_White_Paper_%E2%80%93_A_Public_Alternative_to_Private_AI_Dominance&amp;diff=14061"/>
		<updated>2025-08-04T22:12:00Z</updated>

		<summary type="html">&lt;p&gt;Mike Linksvayer: Created page with &amp;quot;{{Summary |title=Public AI White Paper – A Public Alternative to Private AI Dominance |authors=Felix Sieker, Alek Tarkowski, Lea Gimpel, Cailean Osborne |summary=This white...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Summary&lt;br /&gt;
|title=Public AI White Paper – A Public Alternative to Private AI Dominance&lt;br /&gt;
|authors=Felix Sieker, Alek Tarkowski, Lea Gimpel, Cailean Osborne&lt;br /&gt;
|summary=This white paper defines Public AI and turns it into a policy framework with three characteristics—public attributes, public functions, and public control—that together determine a system’s “degree of publicness.” It proposes a gradient of publicness spanning commercial systems with some public attributes, through public provision of individual components (e.g., datasets, tooling), up to full-stack public AI infrastructure that integrates compute, data, and models with high public attributes/functions/control.&lt;br /&gt;
&lt;br /&gt;
To operationalize the vision, it lays out three pathways (compute, data, model) under an orchestrating public institution:&lt;br /&gt;
&lt;br /&gt;
(i) public compute access—especially for fully open-source projects—to ensure at least one fully open model with capabilities near state-of-the-art;&lt;br /&gt;
&lt;br /&gt;
(ii) data commons and stewardship models that treat key inputs as digital public goods; and&lt;br /&gt;
&lt;br /&gt;
(iii) support for open-source model development ecosystems.&lt;br /&gt;
&lt;br /&gt;
The paper also articulates governance principles (commons-based governance; open release of models and components; conditional compute tying public resources to openness; protection of digital rights; environmental sustainability; and reciprocity to prevent privatization of public value).&lt;br /&gt;
|relevance=The framework gives policymakers a map and a metric: use the gradient to situate any AI initiative, then move it “up” by boosting public attributes/functions/control—preferably via full-stack strategies rather than isolated components. It reframes public investment away from racing private labs and toward reducing dependencies, building independent capacity, and setting openness-linked conditions on compute and other inputs. For practitioners (research institutions, platforms, funders), it points to actionable levers: allocate compute to truly open projects, stand up data commons under commons-based governance, and publish models/components openly with strong documentation and oversight.&lt;br /&gt;
|pub_date=2025/05&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Mike Linksvayer</name></author>
		
	</entry>
	<entry>
		<id>https://acawiki.org/index.php?title=Towards_a_Framework_for_Openness_in_Foundation_Models:_Proceedings_from_the_Columbia_Convening_on_Openness_in_Artificial_Intelligence&amp;diff=14060</id>
		<title>Towards a Framework for Openness in Foundation Models: Proceedings from the Columbia Convening on Openness in Artificial Intelligence</title>
		<link rel="alternate" type="text/html" href="https://acawiki.org/index.php?title=Towards_a_Framework_for_Openness_in_Foundation_Models:_Proceedings_from_the_Columbia_Convening_on_Openness_in_Artificial_Intelligence&amp;diff=14060"/>
		<updated>2025-08-04T21:57:21Z</updated>

		<summary type="html">&lt;p&gt;Mike Linksvayer: Created page with &amp;quot;{{Summary |title=Towards a Framework for Openness in Foundation Models: Proceedings from the Columbia Convening on Openness in Artificial Intelligence |authors=Adrien Basdevan...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Summary&lt;br /&gt;
|title=Towards a Framework for Openness in Foundation Models: Proceedings from the Columbia Convening on Openness in Artificial Intelligence&lt;br /&gt;
|authors=Adrien Basdevant, Camille François, Victor Storchan, Kevin Bankston, Ayah Bdeir, Brian Behlendorf, Merouane Debbah, Sayash Kapoor, Yann LeCun, Mark Surman, Helen King-Turvey, Nathan Lambert, Stefano Maffulli, Nik Marda, Govind Shivkumar, Justine Tunney&lt;br /&gt;
|url=https://arxiv.org/abs/2405.15802&lt;br /&gt;
|summary=Authors propose a descriptive framework for analyzing “openness” across the AI model stack and the broader AI system stack (not just model weights). They articulate four tenets: (1) consider both stacks, (2) expect multiple forms of openness (e.g., open data/weights/tooling), (3) clarify benefits, risks, and modalities of opening different components, and (4) treat safety as a core consideration. The paper sequences the framework from a model-stack view to a system-stack view, adds cross-cutting attributes, and then composes a final map; it also overlays normative schemes such as the Model Openness Framework (MOF) and the Open Source AI Definition (OSI).&lt;br /&gt;
|relevance=Provides a shared vocabulary to specify what is open, where in the stack, to whom/under what terms, and with what safeguards, enabling more precise comparisons of openness claims and reducing the field’s over-focus on weights. The overlay with MOF and OSI’s Open Source AI Definition shows how the framework can bridge description and policy, helping developers, platforms, and regulators operationalize openness choices while balancing safety.&lt;br /&gt;
|pub_date=2024/05/17&lt;br /&gt;
|arxiv=2405.15802&lt;br /&gt;
|wikidata=Q135643879&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Mike Linksvayer</name></author>
		
	</entry>
	<entry>
		<id>https://acawiki.org/index.php?title=Opening_the_Scope_of_Openness_in_AI&amp;diff=14059</id>
		<title>Opening the Scope of Openness in AI</title>
		<link rel="alternate" type="text/html" href="https://acawiki.org/index.php?title=Opening_the_Scope_of_Openness_in_AI&amp;diff=14059"/>
		<updated>2025-08-04T21:32:02Z</updated>

		<summary type="html">&lt;p&gt;Mike Linksvayer: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Summary&lt;br /&gt;
|title=Opening the Scope of Openness in AI&lt;br /&gt;
|authors=Tamara Paris, AJung Moon, Jin L.C. Guo&lt;br /&gt;
|url=https://dl.acm.org/doi/10.1145/3715275.3732087&lt;br /&gt;
|summary=Authors surface 98 openness concepts (via topic modeling over a multidisciplinary corpus) and qualitatively group them into three themes and three approaches to defining openness, then discuss how these map to AI. The themes are:&lt;br /&gt;
&lt;br /&gt;
* Interactivity (sub-themes: access, inspectability, distribution, reuse, collaboration)&lt;br /&gt;
* Freedom (no obstacle, organic, non-isolation, broader boundaries, undetermined, autonomy)&lt;br /&gt;
* Inclusiveness (fairness, diversity, democratization)&lt;br /&gt;
&lt;br /&gt;
The approaches are:&lt;br /&gt;
* properties&lt;br /&gt;
* afforded actions (what openness enables or prevents)&lt;br /&gt;
* desired effects&lt;br /&gt;
&lt;br /&gt;
Authors note that widely cited definitions and frameworks, e.g., the Open Source AI Definition (OSI) and the Model Openness Framework, primarily operationalize openness via afforded actions (who can access/use/study/modify/share, under what conditions, and for which components). They encourage future work on under-explored parts of the taxonomy (e.g., collaboration, non-isolation, diversity) and on making the taxonomy actionable for AI governance and practice.&lt;br /&gt;
|relevance=The taxonomy gives a shared vocabulary to specify what is open, to whom, how, and to what end, helping compare openness claims (open weights/data/eval) and revealing gaps where policy or practice may need to balance openness with other objectives (e.g., safety, privacy, equity).&lt;br /&gt;
|doi=10.1145/3715275.3732087&lt;br /&gt;
|wikidata=Q135643660&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Mike Linksvayer</name></author>
		
	</entry>
	<entry>
		<id>https://acawiki.org/index.php?title=Opening_the_Scope_of_Openness_in_AI&amp;diff=14056</id>
		<title>Opening the Scope of Openness in AI</title>
		<link rel="alternate" type="text/html" href="https://acawiki.org/index.php?title=Opening_the_Scope_of_Openness_in_AI&amp;diff=14056"/>
		<updated>2025-08-04T20:53:44Z</updated>

		<summary type="html">&lt;p&gt;Mike Linksvayer: Created page with &amp;quot;{{Summary |title=Opening the Scope of Openness in AI |authors=Tamara Paris, AJung Moon, Jin L.C. Guo |url=https://dl.acm.org/doi/10.1145/3715275.3732087 |summary=Authors ident...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Summary&lt;br /&gt;
|title=Opening the Scope of Openness in AI&lt;br /&gt;
|authors=Tamara Paris, AJung Moon, Jin L.C. Guo&lt;br /&gt;
|url=https://dl.acm.org/doi/10.1145/3715275.3732087&lt;br /&gt;
|summary=Authors identify 98 topics on &amp;quot;openness&amp;quot; in the literature, group them into 3 themes and 3 approaches to defining openness, and discuss each in the context of openness and AI.&lt;br /&gt;
&lt;br /&gt;
The themes and their sub-themes are:&lt;br /&gt;
&lt;br /&gt;
1. Interactivity (sub-themes: access, inspectability, distribution, reuse, collaboration)&lt;br /&gt;
2. Freedom (no obstacle, organic, non-isolation, broader boundaries, undetermined, autonomy)&lt;br /&gt;
3. Inclusiveness (fairness, diversity, democratization)&lt;br /&gt;
&lt;br /&gt;
The 3 approaches are:&lt;br /&gt;
1. properties&lt;br /&gt;
2. afforded actions&lt;br /&gt;
3. desired effects&lt;br /&gt;
&lt;br /&gt;
Authors argue existing definitions such as the OSAID and Model Openness Framework focus on afforded actions.&lt;br /&gt;
|relevance=Authors encourage future work on underexplored themes and approaches of AI openness.&lt;br /&gt;
|doi=10.1145/3715275.3732087&lt;br /&gt;
|wikidata=Q135643660&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Mike Linksvayer</name></author>
		
	</entry>
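The taxonomy in the summaries above is, in effect, a small data structure: themes with sub-themes, plus an approach axis. A minimal TypeScript encoding, useful for tagging openness claims with what is open, to whom, how, and to what end; the field names and the example claim are illustrative assumptions, not the paper's formal notation.

```typescript
// Minimal encoding of the taxonomy; field names and the example claim are
// illustrative assumptions.
type Theme = "interactivity" | "freedom" | "inclusiveness";
type Approach = "properties" | "afforded actions" | "desired effects";

const SUB_THEMES: Record<Theme, string[]> = {
  interactivity: ["access", "inspectability", "distribution", "reuse", "collaboration"],
  freedom: ["no obstacle", "organic", "non-isolation", "broader boundaries", "undetermined", "autonomy"],
  inclusiveness: ["fairness", "diversity", "democratization"],
};

// An openness claim tagged with what is open, to whom, and how it is framed.
interface OpennessClaim {
  component: string; // e.g. "weights", "training data"
  audience: string;  // e.g. "anyone", "vetted researchers"
  theme: Theme;
  subTheme: string;
  approach: Approach;
}

// "Open weights" claims are typically afforded-actions claims about access,
// leaving sub-themes like collaboration or diversity unaddressed.
const openWeights: OpennessClaim = {
  component: "weights",
  audience: "anyone",
  theme: "interactivity",
  subTheme: "access",
  approach: "afforded actions",
};
console.log(SUB_THEMES[openWeights.theme].includes(openWeights.subTheme)); // true
```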
	<entry>
		<id>https://acawiki.org/index.php?title=User:Infotainment&amp;diff=12705</id>
		<title>User:Infotainment</title>
		<link rel="alternate" type="text/html" href="https://acawiki.org/index.php?title=User:Infotainment&amp;diff=12705"/>
		<updated>2024-10-14T21:56:44Z</updated>

		<summary type="html">&lt;p&gt;Mike Linksvayer: Creating user page for new user.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;I am a Human-Computer Interaction and Information Studies graduate student with a past life in cognitive science, educational psychology, and learning sciences. My current research interests focus on using psychology, design, and technology to improve the quality and reach of science. I've studied metascience, tools for thought, philosophy, data visualization, and am broadly interested in the psychological sciences. Someday, this may earn me a liveable wage.&lt;/div&gt;</summary>
		<author><name>Mike Linksvayer</name></author>
		
	</entry>
	<entry>
		<id>https://acawiki.org/index.php?title=User_talk:Infotainment&amp;diff=12706</id>
		<title>User talk:Infotainment</title>
		<link rel="alternate" type="text/html" href="https://acawiki.org/index.php?title=User_talk:Infotainment&amp;diff=12706"/>
		<updated>2024-10-14T21:56:44Z</updated>

		<summary type="html">&lt;p&gt;Mike Linksvayer: Welcome!&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Welcome to ''AcaWiki''!'''&lt;br /&gt;
We hope you will contribute much and well.&lt;br /&gt;
You will probably want to read the [https://www.mediawiki.org/wiki/Special:MyLanguage/Help:Contents help pages].&lt;br /&gt;
Again, welcome and have fun! [[User:Mike Linksvayer|Mike Linksvayer]] ([[User talk:Mike Linksvayer|talk]]) 21:56, 14 October 2024 (UTC)&lt;/div&gt;</summary>
		<author><name>Mike Linksvayer</name></author>
		
	</entry>
	<entry>
		<id>https://acawiki.org/index.php?title=User:Varunnrao&amp;diff=12703</id>
		<title>User:Varunnrao</title>
		<link rel="alternate" type="text/html" href="https://acawiki.org/index.php?title=User:Varunnrao&amp;diff=12703"/>
		<updated>2024-10-14T21:56:24Z</updated>

		<summary type="html">&lt;p&gt;Mike Linksvayer: Creating user page for new user.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Rao is a Ph.D. student in the Department of Computer Science and is advised by Professor Andrés Monroy-Hernández. As a social computing researcher, he aims to understand and measure the societal impacts of AI systems in the context of labor. His current research, part of the Workers' Algorithm Observatory (WAO), involves understanding the concerns and needs of gig workers in the context of algorithmic rideshare platform decisions, and building tools to investigate black-box algorithmic systems. In the past, his research defined a new form of discrimination arising from the selective use of people in job ad images, revealing, through quantitative analysis, how this practice, coupled with ad delivery algorithms, circumvents existing non-discrimination protections on platforms like Facebook. &lt;br /&gt;
&lt;br /&gt;
Previously, he has worked as an applied scientist and returned as an intern at Amazon. He helped protect customers and build seller trust by developing explainable and retrieval augmented vision-language models. Rao was also a computer vision intern on the Core Recognition team at Apple, helping design OCR (optical character recognition) solutions applied to handwritten text. &lt;br /&gt;
&lt;br /&gt;
He graduated with a master’s from Carnegie Mellon University where his research involved computer vision and understanding the privacy needs and attitudes of incidental home IoT device users.&lt;br /&gt;
&lt;br /&gt;
Rao can be reached at varunrao@princeton.edu.&lt;/div&gt;</summary>
		<author><name>Mike Linksvayer</name></author>
		
	</entry>
	<entry>
		<id>https://acawiki.org/index.php?title=User_talk:Varunnrao&amp;diff=12704</id>
		<title>User talk:Varunnrao</title>
		<link rel="alternate" type="text/html" href="https://acawiki.org/index.php?title=User_talk:Varunnrao&amp;diff=12704"/>
		<updated>2024-10-14T21:56:24Z</updated>

		<summary type="html">&lt;p&gt;Mike Linksvayer: Welcome!&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Welcome to ''AcaWiki''!'''&lt;br /&gt;
We hope you will contribute much and well.&lt;br /&gt;
You will probably want to read the [https://www.mediawiki.org/wiki/Special:MyLanguage/Help:Contents help pages].&lt;br /&gt;
Again, welcome and have fun! [[User:Mike Linksvayer|Mike Linksvayer]] ([[User talk:Mike Linksvayer|talk]]) 21:56, 14 October 2024 (UTC)&lt;/div&gt;</summary>
		<author><name>Mike Linksvayer</name></author>
		
	</entry>
	<entry>
		<id>https://acawiki.org/index.php?title=User:Saeedqaredaqi&amp;diff=12249</id>
		<title>User:Saeedqaredaqi</title>
		<link rel="alternate" type="text/html" href="https://acawiki.org/index.php?title=User:Saeedqaredaqi&amp;diff=12249"/>
		<updated>2022-07-21T05:36:26Z</updated>

		<summary type="html">&lt;p&gt;Mike Linksvayer: Creating user page for new user.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;I am a PhD student at the University of Isfahan (Iran), working in geology (tectonics).&lt;/div&gt;</summary>
		<author><name>Mike Linksvayer</name></author>
		
	</entry>
	<entry>
		<id>https://acawiki.org/index.php?title=User_talk:Saeedqaredaqi&amp;diff=12250</id>
		<title>User talk:Saeedqaredaqi</title>
		<link rel="alternate" type="text/html" href="https://acawiki.org/index.php?title=User_talk:Saeedqaredaqi&amp;diff=12250"/>
		<updated>2022-07-21T05:36:26Z</updated>

		<summary type="html">&lt;p&gt;Mike Linksvayer: Welcome!&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Welcome to ''AcaWiki''!'''&lt;br /&gt;
We hope you will contribute much and well.&lt;br /&gt;
You will probably want to read the [https://www.mediawiki.org/wiki/Special:MyLanguage/Help:Contents help pages].&lt;br /&gt;
Again, welcome and have fun! [[User:Mike Linksvayer|Mike Linksvayer]] ([[User talk:Mike Linksvayer|talk]]) 05:36, 21 July 2022 (UTC)&lt;/div&gt;</summary>
		<author><name>Mike Linksvayer</name></author>
		
	</entry>
	<entry>
		<id>https://acawiki.org/index.php?title=User_talk:Caleb&amp;diff=12248</id>
		<title>User talk:Caleb</title>
		<link rel="alternate" type="text/html" href="https://acawiki.org/index.php?title=User_talk:Caleb&amp;diff=12248"/>
		<updated>2022-04-16T05:50:25Z</updated>

		<summary type="html">&lt;p&gt;Mike Linksvayer: Welcome!&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Welcome to ''AcaWiki''!'''&lt;br /&gt;
We hope you will contribute much and well.&lt;br /&gt;
You will probably want to read the [https://www.mediawiki.org/wiki/Special:MyLanguage/Help:Contents help pages].&lt;br /&gt;
Again, welcome and have fun! [[User:Mike Linksvayer|Mike Linksvayer]] ([[User talk:Mike Linksvayer|talk]]) 05:50, 16 April 2022 (UTC)&lt;/div&gt;</summary>
		<author><name>Mike Linksvayer</name></author>
		
	</entry>
	<entry>
		<id>https://acawiki.org/index.php?title=User:Caleb&amp;diff=12247</id>
		<title>User:Caleb</title>
		<link rel="alternate" type="text/html" href="https://acawiki.org/index.php?title=User:Caleb&amp;diff=12247"/>
		<updated>2022-04-16T05:50:24Z</updated>

		<summary type="html">&lt;p&gt;Mike Linksvayer: Creating user page for new user.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;My main research interests focus on scientometrics, patentometrics, and science and technology policy.&lt;/div&gt;</summary>
		<author><name>Mike Linksvayer</name></author>
		
	</entry>
	<entry>
		<id>https://acawiki.org/index.php?title=User:Oikonom&amp;diff=12245</id>
		<title>User:Oikonom</title>
		<link rel="alternate" type="text/html" href="https://acawiki.org/index.php?title=User:Oikonom&amp;diff=12245"/>
		<updated>2022-04-16T05:50:06Z</updated>

		<summary type="html">&lt;p&gt;Mike Linksvayer: Creating user page for new user.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;MSc. in Economics and Finance&lt;br /&gt;
Aspiring PhD student in economics, in a personal quest for wisdom and knowledge, enticed by questions of development.&lt;/div&gt;</summary>
		<author><name>Mike Linksvayer</name></author>
		
	</entry>
	<entry>
		<id>https://acawiki.org/index.php?title=User_talk:Oikonom&amp;diff=12246</id>
		<title>User talk:Oikonom</title>
		<link rel="alternate" type="text/html" href="https://acawiki.org/index.php?title=User_talk:Oikonom&amp;diff=12246"/>
		<updated>2022-04-16T05:50:06Z</updated>

		<summary type="html">&lt;p&gt;Mike Linksvayer: Welcome!&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Welcome to ''AcaWiki''!'''&lt;br /&gt;
We hope you will contribute much and well.&lt;br /&gt;
You will probably want to read the [https://www.mediawiki.org/wiki/Special:MyLanguage/Help:Contents help pages].&lt;br /&gt;
Again, welcome and have fun! [[User:Mike Linksvayer|Mike Linksvayer]] ([[User talk:Mike Linksvayer|talk]]) 05:50, 16 April 2022 (UTC)&lt;/div&gt;</summary>
		<author><name>Mike Linksvayer</name></author>
		
	</entry>
	<entry>
		<id>https://acawiki.org/index.php?title=User:Naderaleebrahim&amp;diff=12235</id>
		<title>User:Naderaleebrahim</title>
		<link rel="alternate" type="text/html" href="https://acawiki.org/index.php?title=User:Naderaleebrahim&amp;diff=12235"/>
		<updated>2022-03-03T06:47:57Z</updated>

		<summary type="html">&lt;p&gt;Mike Linksvayer: Creating user page for new user.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Nader Ale Ebrahim currently works as a “Research Visibility and Impact” freelance consultant. Nader is also an adjunct lecturer at Alzahra University. He worked as a visiting research fellow with the Institute of Management and Research Services (IPPP), University of Malaya, from 2013 to November 2017. Nader holds a PhD in Technology Management from the Faculty of Engineering, University of Malaya. He has over 25 years of experience in the field of technology management and new product development in different companies. His current research interests are university rankings, open access, research visibility, Research Tools, and bibliometrics. Dr. Nader provides assistance and guidance for researchers in disseminating and promoting their research work in order to enhance their research visibility, impact, and citations. He believes that the research cycle does not end with publication alone, and thus encourages proactive dissemination of research outputs.&lt;br /&gt;
Nader is well known as the creator of the “Research Tools” Box. Dr. Nader won the Refer-a-Colleague Competition and has received prizes from renowned establishments such as Thomson Reuters (now known as Clarivate Analytics). He has so far conducted over 400 workshops within and outside of Malaysia. Over 100 of his papers and articles have been published and presented in journals and at conferences.&lt;/div&gt;</summary>
		<author><name>Mike Linksvayer</name></author>
		
	</entry>
	<entry>
		<id>https://acawiki.org/index.php?title=User_talk:Naderaleebrahim&amp;diff=12236</id>
		<title>User talk:Naderaleebrahim</title>
		<link rel="alternate" type="text/html" href="https://acawiki.org/index.php?title=User_talk:Naderaleebrahim&amp;diff=12236"/>
		<updated>2022-03-03T06:47:57Z</updated>

		<summary type="html">&lt;p&gt;Mike Linksvayer: Welcome!&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Welcome to ''AcaWiki''!'''&lt;br /&gt;
We hope you will contribute much and well.&lt;br /&gt;
You will probably want to read the [https://www.mediawiki.org/wiki/Special:MyLanguage/Help:Contents help pages].&lt;br /&gt;
Again, welcome and have fun! [[User:Mike Linksvayer|Mike Linksvayer]] ([[User talk:Mike Linksvayer|talk]]) 06:47, 3 March 2022 (UTC)&lt;/div&gt;</summary>
		<author><name>Mike Linksvayer</name></author>
		
	</entry>
	<entry>
		<id>https://acawiki.org/index.php?title=That_%22S%22_Word:_Sovereignty,_and_Globalization,_and_Human_Rights,_Et_Cetera&amp;diff=12211</id>
		<title>That &quot;S&quot; Word: Sovereignty, and Globalization, and Human Rights, Et Cetera</title>
		<link rel="alternate" type="text/html" href="https://acawiki.org/index.php?title=That_%22S%22_Word:_Sovereignty,_and_Globalization,_and_Human_Rights,_Et_Cetera&amp;diff=12211"/>
		<updated>2021-11-29T03:04:46Z</updated>

		<summary type="html">&lt;p&gt;Mike Linksvayer: Created page with &amp;quot;{{Summary |title=That &amp;quot;S&amp;quot; Word: Sovereignty, and Globalization, and Human Rights, Et Cetera |authors=Louis Henkin |url=https://ir.lawnet.fordham.edu/cgi/viewcontent.cgi?articl...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Summary&lt;br /&gt;
|title=That &amp;quot;S&amp;quot; Word: Sovereignty, and Globalization, and Human Rights, Et Cetera&lt;br /&gt;
|authors=Louis Henkin&lt;br /&gt;
|url=https://ir.lawnet.fordham.edu/cgi/viewcontent.cgi?article=3595&amp;amp;context=flr&lt;br /&gt;
|summary=Argues that state sovereignty was inappropriately extended from the notion of a sovereign within a country (e.g., a king) and that, prior to WWII, it primarily meant that a country could run its affairs within its borders as it liked. Hitler changed that, and post-WWII trends that limit state sovereignty have included making war illegal, human rights, and international cooperation. However, the pre-WWII version continually resurfaces, particularly in response to new trends around the international market, cyberspace, and globalization. Advocates for reforming the state system to support human rights globally.&lt;br /&gt;
|wikidata=Q109797117&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Mike Linksvayer</name></author>
		
	</entry>
	<entry>
		<id>https://acawiki.org/index.php?title=Sovereignty_2.0&amp;diff=12210</id>
		<title>Sovereignty 2.0</title>
		<link rel="alternate" type="text/html" href="https://acawiki.org/index.php?title=Sovereignty_2.0&amp;diff=12210"/>
		<updated>2021-11-29T00:08:03Z</updated>

		<summary type="html">&lt;p&gt;Mike Linksvayer: Created page with &amp;quot;{{Summary |title=Sovereignty 2.0 |authors=Anupam Chander, Haochen Sun |url=https://scholarship.law.georgetown.edu/cgi/viewcontent.cgi?article=3422&amp;amp;context=facpub |summary=Clai...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Summary&lt;br /&gt;
|title=Sovereignty 2.0&lt;br /&gt;
|authors=Anupam Chander, Haochen Sun&lt;br /&gt;
|url=https://scholarship.law.georgetown.edu/cgi/viewcontent.cgi?article=3422&amp;amp;context=facpub&lt;br /&gt;
|summary=Claims to be the &amp;quot;first comprehensive account of digital or data sovereignty&amp;quot; and surveys various ways states (China, US, EU, Global South) are asserting it. Argues that digital sovereignty is not merely an extension of sovereignty needed to control corporations and competitor states, but is also well suited to hijacking by states seeking to control their own citizens.&lt;br /&gt;
&lt;br /&gt;
Digital sovereignty is new because it is:&lt;br /&gt;
* always global (involves foreign actors, or cuts off local actors from global internet)&lt;br /&gt;
* asserted against corporations (in addition to other states)&lt;br /&gt;
* considerably enlarges citizens' legibility to the state&lt;br /&gt;
* enables protectionism [comment: unclear how this makes digital different/new]&lt;br /&gt;
&lt;br /&gt;
Historically &amp;quot;sovereign&amp;quot; is most often paired with &amp;quot;immunity&amp;quot;; provides examples of speech, privacy, and security controls being used to insulate the state or control citizens.&lt;br /&gt;
&lt;br /&gt;
Concludes that the power of both corporations and regulators must be regulated.&lt;br /&gt;
|wikidata=Q109796645&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Mike Linksvayer</name></author>
		
	</entry>
	<entry>
		<id>https://acawiki.org/index.php?title=Protecting_the_global_internet_from_technology_cold_wars&amp;diff=12209</id>
		<title>Protecting the global internet from technology cold wars</title>
		<link rel="alternate" type="text/html" href="https://acawiki.org/index.php?title=Protecting_the_global_internet_from_technology_cold_wars&amp;diff=12209"/>
		<updated>2021-11-28T23:15:28Z</updated>

		<summary type="html">&lt;p&gt;Mike Linksvayer: Created page with &amp;quot;{{Summary |title=Protecting the global internet from technology cold wars |authors=Anupam Chander |url=https://cacm.acm.org/magazines/2021/9/255030-protecting-the-global-inter...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Summary&lt;br /&gt;
|title=Protecting the global internet from technology cold wars&lt;br /&gt;
|authors=Anupam Chander&lt;br /&gt;
|url=https://cacm.acm.org/magazines/2021/9/255030-protecting-the-global-internet-from-technology-cold-wars/fulltext&lt;br /&gt;
|summary=Argues the Trump administration's TikTok ban was a departure from US internet policy, but also similar to actions taken elsewhere in the world. Suggests the national security rationale for the TikTok ban was overblown, and that the US should not cede its advocacy for a global internet.&lt;br /&gt;
|wikidata=Q109796570&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Mike Linksvayer</name></author>
		
	</entry>
	<entry>
		<id>https://acawiki.org/index.php?title=SugarCoat:_Programmatically_Generating_Privacy-Preserving,_Web-Compatible_Resource_Replacements_for_Content_Blocking&amp;diff=12208</id>
		<title>SugarCoat: Programmatically Generating Privacy-Preserving, Web-Compatible Resource Replacements for Content Blocking</title>
		<link rel="alternate" type="text/html" href="https://acawiki.org/index.php?title=SugarCoat:_Programmatically_Generating_Privacy-Preserving,_Web-Compatible_Resource_Replacements_for_Content_Blocking&amp;diff=12208"/>
		<updated>2021-11-27T03:42:22Z</updated>

		<summary type="html">&lt;p&gt;Mike Linksvayer: Created page with &amp;quot;{{Summary |title=SugarCoat: Programmatically Generating Privacy-Preserving, Web-Compatible Resource Replacements for Content Blocking |authors=Michael Smith, Pete Snyder, Benj...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Summary&lt;br /&gt;
|title=SugarCoat: Programmatically Generating Privacy-Preserving, Web-Compatible Resource Replacements for Content Blocking&lt;br /&gt;
|authors=Michael Smith, Pete Snyder, Benjamin Livshits, Deian Stefan&lt;br /&gt;
|summary=SugarCoat allows filter list authors to automatically generate privacy-preserving replacements for arbitrary JavaScript scripts, eliminating the need to manually analyze scripts and implement resource replacements. The key idea is to focus on and intercept accesses to the sources of privacy-sensitive data (e.g., Web APIs like document.cookie and localStorage). SugarCoat instruments JavaScript resources and the resources they create to restrict their access to sensitive data sources according to a custom policy (e.g., “load script U, but prevent it from accessing storage”). SugarCoat generates resource replacements in two steps. First, it uses dynamic analysis to identify code points where JavaScript code uses Web APIs (e.g., functions, constructors, objects, and object properties) that expose sensitive data sources. Then, it repairs the code at these points to instead use “mock” implementations of the same APIs, which expose the same interfaces but enforce privacy policies specified by filter list authors.&lt;br /&gt;
|wikidata=Q109764075&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Mike Linksvayer</name></author>
		
	</entry>
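The “mock implementation” step in the SugarCoat summary above can be illustrated directly. A minimal TypeScript sketch of a policy-enforcing stand-in for localStorage: it exposes the same interface but backs it with an in-memory shadow store gated by a filter-list-author policy. This is a toy illustration of the idea, not SugarCoat's actual generated code.

```typescript
// Policy a filter list author might attach to a script, e.g. "load script U,
// but prevent it from accessing storage". The shape is illustrative.
interface StoragePolicy {
  allowRead: boolean;
  allowWrite: boolean;
}

// A mock Storage: same interface as window.localStorage, but reads and
// writes go to an in-memory shadow map gated by the policy, so the
// instrumented script keeps running without touching real user data.
function makeMockStorage(policy: StoragePolicy) {
  const shadow = new Map<string, string>();
  return {
    getItem: (key: string): string | null =>
      policy.allowRead ? shadow.get(key) ?? null : null,
    setItem: (key: string, value: string): void => {
      if (policy.allowWrite) shadow.set(key, value); // otherwise silently drop
    },
    removeItem: (key: string): void => {
      if (policy.allowWrite) shadow.delete(key);
    },
    get length(): number {
      return policy.allowRead ? shadow.size : 0;
    },
  };
}

// The rewritten resource would reference this instead of window.localStorage.
const storage = makeMockStorage({ allowRead: false, allowWrite: false });
storage.setItem("tracking-id", "abc123");
console.log(storage.getItem("tracking-id")); // null: nothing leaks, nothing breaks
```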
	<entry>
		<id>https://acawiki.org/index.php?title=How_Law_Made_Silicon_Valley&amp;diff=12207</id>
		<title>How Law Made Silicon Valley</title>
		<link rel="alternate" type="text/html" href="https://acawiki.org/index.php?title=How_Law_Made_Silicon_Valley&amp;diff=12207"/>
		<updated>2021-11-27T00:35:06Z</updated>

		<summary type="html">&lt;p&gt;Mike Linksvayer: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Summary&lt;br /&gt;
|title=How Law Made Silicon Valley&lt;br /&gt;
|authors=Anupam Chander&lt;br /&gt;
|url=https://scholarlycommons.law.emory.edu/elj/vol63/iss3/3/&lt;br /&gt;
|summary=Notes that in the 1990s the US executive, legislative, and judicial branches haltingly and inconsistently, but importantly, made decisions that effectively subsidized Internet development, analogous to nineteenth-century US judges' alteration of law to subsidize industrial development. (&amp;quot;Silicon Valley&amp;quot; in this paper really means U.S.-based internet platforms; it does not attempt to explain previous iterations of Silicon Valley.)&lt;br /&gt;
&lt;br /&gt;
Provides examples in the areas of intermediary liability, copyright, privacy.&lt;br /&gt;
&lt;br /&gt;
Contrasts with European Union, Japan, and South Korea, which made lesser, spottier, or later decisions in favor of Internet development in each of these areas.&lt;br /&gt;
&lt;br /&gt;
Notes subsidies have dispersed costs which regulators might seek to mitigate while also not threatening innovators.&lt;br /&gt;
|relevance=&amp;quot;Even as late as 2008, European lawyers could only advise, “[T]he scope of liability of Web 2.0 websites is an unsettled point of law.”175 It was only at the end of 2011 and the beginning of 2012 that the European Court of Justice made clear that Internet intermediaries could not be required to affirmatively filter their entire networks for copyright infringement. In cases brought by the Belgian collecting rights society, SABAM, against Internet access provider Scarlet and online social network Netlog, the court held that enjoining these companies to filter on behalf of copyright owners uploads by all users would violate the privacy and speech rights of users, and would be unduly costly and burdensome to the Internet enterprise.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Approximately 10 years later, Article 13 of the new copyright directive might swing back to effectively requiring copyright filters.&lt;br /&gt;
|journal=Emory Law Journal&lt;br /&gt;
|pub_date=2014&lt;br /&gt;
|wikidata=Q109765008&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Mike Linksvayer</name></author>
		
	</entry>
	<entry>
		<id>https://acawiki.org/index.php?title=Detecting_Filter_List_Evasion_with_Event-Loop-Turn_Granularity_JavaScript_Signatures&amp;diff=12205</id>
		<title>Detecting Filter List Evasion with Event-Loop-Turn Granularity JavaScript Signatures</title>
		<link rel="alternate" type="text/html" href="https://acawiki.org/index.php?title=Detecting_Filter_List_Evasion_with_Event-Loop-Turn_Granularity_JavaScript_Signatures&amp;diff=12205"/>
		<updated>2021-11-26T22:01:23Z</updated>

		<summary type="html">&lt;p&gt;Mike Linksvayer: Created page with &amp;quot;{{Summary |title=Detecting Filter List Evasion with Event-Loop-Turn Granularity JavaScript Signatures |authors=Quan Chen, Peter Snyder, Ben Livshits, Alexandros Kapravelos |ur...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Summary&lt;br /&gt;
|title=Detecting Filter List Evasion with Event-Loop-Turn Granularity JavaScript Signatures&lt;br /&gt;
|authors=Quan Chen, Peter Snyder, Ben Livshits, Alexandros Kapravelos&lt;br /&gt;
|url=https://www.doc.ic.ac.uk/~livshits/papers/pdf/oakland21.pdf&lt;br /&gt;
|summary=Current URL-targeting blocks of harmful JavaScript can be circumvented by moving to an unmatched URL or by mixing harmful JavaScript with necessary or common JavaScript. The latter circumvention causes some known harmful JavaScript to be unblocked by popular blocklists. Authors use known harmful JavaScript and code analysis to identify additional harmful JavaScript and have proposed additions to popular blocklists as a result. Similar analysis could be used to block harmful JavaScript delivered at unmatched URLs or bundled with necessary or common JavaScript.&lt;br /&gt;
|wikidata=Q109655725&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Mike Linksvayer</name></author>
		
	</entry>
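The signature idea in the last summary (fingerprinting what a script does rather than where it is hosted) can be sketched as hashing the ordered sequence of Web API calls a script makes during one event-loop turn. The trace format and the SHA-256 hashing below are simplifications of the paper's per-turn instrumentation, assumed for illustration.

```typescript
import { createHash } from "node:crypto";

// One recorded Web API use from an instrumented execution trace. Argument
// values are deliberately excluded so the signature survives superficial
// changes such as rotated URLs or renamed variables.
interface ApiCall {
  api: string; // e.g. "document.cookie (get)", "XMLHttpRequest.open"
}

// Collapse the ordered API calls of one event-loop turn into a signature;
// matching signatures flag known-harmful behavior even when the script
// moves to an unmatched URL or is bundled with benign code.
function turnSignature(trace: ApiCall[]): string {
  const canonical = trace.map((c) => c.api).join("|");
  return createHash("sha256").update(canonical).digest("hex");
}

// Signatures built once from known-harmful scripts...
const harmful = new Set([
  turnSignature([
    { api: "document.cookie (get)" },
    { api: "XMLHttpRequest.open" },
    { api: "XMLHttpRequest.send" },
  ]),
]);

// ...still match when the same behavior is served from a different URL.
const observed = turnSignature([
  { api: "document.cookie (get)" },
  { api: "XMLHttpRequest.open" },
  { api: "XMLHttpRequest.send" },
]);
console.log(harmful.has(observed)); // true
```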
</feed>