An icon of idealism in AI stands accused of letting its ambition eclipse its principles.
What’s new: Founded in 2015 to develop artificial general intelligence for the good of humankind, OpenAI swapped its ideals for cash, according to MIT Technology Review.
The critique: OpenAI began as a nonprofit committed to sharing its research, code, and patents. Despite a $1 billion initial commitment from Elon Musk, Peter Thiel, and others, the organization soon found it needed far more money to keep pace with corporate rivals like Google’s DeepMind. The pursuit of funding and talent led it ever further from its founding principles, writes reporter Karen Hao.
- In early 2019, OpenAI set up a for-profit arm. It capped investors’ returns at 100 times their investment, a limit critics called a ploy to stoke expectations of outsized profits.
- Soon after, the organization accepted a $1 billion investment from Microsoft. In return, OpenAI said Microsoft was its “preferred partner” for commercializing its research.
- Critics accuse the company of overstating its accomplishments and, in the case of the GPT-2 language model, dramatizing them by withholding the code on the grounds that it might be misused.
- Even as it courts publicity, OpenAI has grown secretive. The company provided full access to early projects such as the Gym reinforcement learning toolkit, released in 2016. More recently, though, it has kept some projects quiet, requiring employees to sign non-disclosure agreements and forbidding them to talk to the press.
Behind the news: Musk seconded the critique, adding that he doesn’t trust the company’s leadership to develop safe AI. The Tesla chief resigned from OpenAI’s board last year, saying that Tesla’s autonomous driving research posed a conflict of interest.
Why it matters: OpenAI aimed to counterbalance corporate AI, promising a public-spirited approach to developing the technology. As the cost of basic research rises, that mission becomes increasingly important — and difficult to maintain.
We’re thinking: OpenAI’s team has been responsible for several important breakthroughs. We would be happy to see its employees and investors enjoy a great financial return. At the same time, sharing knowledge is crucial for developing beneficial applications, and exaggerated claims contribute to unrealistic expectations that can lead to public backlash. We hope that all AI organizations will support openness in research and keep hype to a minimum.