The word “open” can mean many things with respect to AI. A new paper outlines the variations and ranks popular models for openness.
What’s new: Researchers at Radboud University evaluated dozens of models billed as open by their developers. They plan to keep their analysis of language models updated online.
How it works: The authors assessed 40 large language models and six text-to-image generators, adding OpenAI’s closed models ChatGPT and DALL·E 2 as reference points. They evaluated 14 characteristics, scoring each as open (1 point), partially open (0.5 points), or closed (0 points). For example, an API counted as partially open if it required users to register. They divided the characteristics into three categories (a sketch of the scoring scheme follows the list):
- Availability with respect to source code, pretraining data, base weights, fine-tuning data, fine-tuning weights, and licensing under a recognized open-source license
- Documentation of code, architecture, preprint paper, published peer-reviewed paper, model card, and datasheets that describe how the developer collected and curated the data
- Access to a downloadable package and open API
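The rubric is simple enough to express in a few lines. Below is a minimal Python sketch of the 14-characteristic scoring scheme as summarized above; the characteristic labels and the sample assessment are our own illustrative stand-ins, not the authors’ exact terms or published judgments.

```python
# Point values for each judgment, per the paper's scheme.
SCORES = {"open": 1.0, "partial": 0.5, "closed": 0.0}

# 14 characteristics in three categories. Labels are paraphrased
# from the article, not copied from the paper.
CHARACTERISTICS = {
    "availability": [
        "source code", "pretraining data", "base weights",
        "fine-tuning data", "fine-tuning weights", "open-source license",
    ],
    "documentation": [
        "code docs", "architecture", "preprint", "peer-reviewed paper",
        "model card", "datasheets",
    ],
    "access": ["downloadable package", "open API"],
}

def openness_score(assessment: dict[str, str]) -> float:
    """Sum 1 / 0.5 / 0 points over all 14 characteristics."""
    total = 0.0
    for traits in CHARACTERISTICS.values():
        for trait in traits:
            # Anything not explicitly assessed counts as closed.
            total += SCORES[assessment.get(trait, "closed")]
    return total

# Hypothetical profile: 12 open and 1 partial characteristic yield
# 12.5 of 14 points. Which trait is partial is our assumption.
example = {t: "open" for traits in CHARACTERISTICS.values() for t in traits}
example["peer-reviewed paper"] = "closed"  # no published paper
example["open API"] = "partial"            # e.g., registration required
print(openness_score(example))             # 12.5
```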
Results: Of the language models, OLMo 7B Instruct from Allen Institute for AI scored highest with 12 open characteristics and 1 partially open characteristic, or 12.5 of 14 possible points (it lacked a published, peer-reviewed paper).
- OLMo 7B Instruct and AmberChat (LLM360’s chat model, which uses the Llama-7B architecture) were the only language models for which availability was fully open. BigScience’s BLOOMZ was the only language model whose documentation was fully open.
- Some prominent “open” models fared worse. Alibaba’s Qwen 1.5, Cohere’s Command R+, and Google’s Gemma-7B Instruct were judged closed or partially open on most characteristics. TII’s Falcon-40B-Instruct scored 2 open and 5 partially open characteristics. Neither Meta’s Llama 2 Chat nor Llama 3 Instruct earned an open mark on any characteristic.
- Among text-to-image generators, Stability AI’s Stable Diffusion was far and away the most open. The authors deemed it fully open with respect to availability and documentation, and partially open with respect to access.
Behind the news: The Open Source Initiative (OSI), a nonprofit organization that maintains standards for open-source software licenses, is leading a process to establish a firm definition of “open-source AI.” The current draft holds that an open-source model must include parameters, source code, and information on training data and methodologies under an OSI-recognized license.
Why it matters: Openness is a cornerstone of innovation: It enables developers to build freely on one another’s work. It also facilitates business, since developers can sell products built upon fully open software. And it has growing regulatory implications. For example, the European Union’s AI Act regulates models released under an open-source license less strictly than closed models. All these factors raise the stakes for clear, consistent definitions, and the authors’ framework offers detailed guidelines for developers and policymakers in search of clarity.
We’re thinking: We’re grateful to AI developers who open their work to any degree, and we especially appreciate fully open availability, documentation, and access. We encourage model builders to release their work as openly as they can manage.