A new report documents the interplay of powerful forces that drove AI over the past year: open versus proprietary technology, public versus private financing, innovation versus caution.
What’s new: Drawn from research papers, news articles, earnings reports, and the like, the seventh annual State of AI Report recaps the highlights of 2024.
Looking back: AI’s rapid advance in 2024 was marked by groundbreaking research, a surge of investment, international regulations, and a shift in safety concerns from hypothetical risks to real-world issues, according to investors Nathan Benaich and Ian Hogarth.
- Top models: Anthropic’s Claude, Google’s Gemini, and Meta’s Llama largely closed the gap with OpenAI’s top multimodal model, GPT-4o, before OpenAI’s o1 raised the bar for reasoning. Meanwhile, models built in China such as DeepSeek, Qwen, and Kling challenged the leaders despite the United States’ restrictions on exports of the most powerful AI chips. The year also saw a proliferation of models small enough to run on local devices, such as Gemini Nano (3.25 billion parameters) and the smaller of Apple’s AFM family (3 billion parameters).
- Research: Model builders settled on mixtures of curated natural and synthetic data to train larger models (Microsoft’s Phi family, Anthropic’s Claude 3.5 Sonnet, Meta’s Llama 3.1) and knowledge distillation to train smaller ones (Flux.1, Gemini 1.5 Flash, Mistral-NeMo-Minitron, and numerous others); a minimal distillation sketch appears after this list. Meanwhile, researchers established benchmarks to measure new capabilities like video understanding and agentic problem-solving. They also built new benchmarks to replace older tests on which new models consistently achieve high scores, possibly because test data had contaminated their training data.
- Finance: Investment boomed. The chip designer Nvidia accounted for nearly one-third of the AI industry’s $9 trillion total value across public and private companies, and the combined value of public AI companies alone surpassed the entire industry’s value a year earlier. The most dramatic trend in AI finance was tech giants’ shift from acquisitions to acquisition-like transactions, in which they took on talent from top startups, sometimes in exchange for licensing fees, without buying the companies outright: notably Amazon-Covariant, Google-Character.AI, and Microsoft-Inflection. In venture investment, robotics now accounts for nearly 30 percent of all funding. Standouts included the humanoid startup Figure, which raised a $675 million round at a $2.6 billion valuation, and its competitor 1X, which raised a $125 million round.
- Regulation: Regulation of AI remains fragmented globally. The U.S. issued executive orders that mainly relied on new interpretations or implementations of existing laws. Europe’s AI Act sought to balance innovation and caution by declaring that large models pose a special risk and banning applications such as predictive policing, but some observers have deemed it heavy-handed. China focused on enforcement of its more restrictive laws, requiring companies to submit models for government review. Widespread fears that AI would disrupt 2024’s many democratic elections proved unfounded.
- Safety: While anxieties in 2023 focused on abstract threats such as the risk that AI would take over the world, practical concerns came to the fore in 2024. Model makers worked to improve transparency, interpretability, and security against external attacks. Actual harms occurred on a more personal scale: bad actors used widely available tools to harass and impersonate private citizens, notably by generating fake pornographic images of them, a problem that remains unsolved.
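Knowledge distillation, mentioned in the Research item above, trains a small “student” model to mimic a larger “teacher” by matching its softened output distribution. Here is a minimal sketch in PyTorch, assuming hypothetical `teacher` and `student` classifiers that share an output vocabulary; it illustrates the general technique, not the specific recipe behind any model named in this report.

```python
# Minimal knowledge-distillation sketch (PyTorch). `teacher`, `student`,
# `batch`, `labels`, and `optimizer` are hypothetical placeholders.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend soft-target KL divergence (teacher -> student) with hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),   # student log-probabilities at temperature T
        F.softmax(teacher_logits / T, dim=-1),       # teacher probabilities at temperature T
        reduction="batchmean",
    ) * (T * T)                                      # standard rescaling for temperature T
    hard = F.cross_entropy(student_logits, labels)   # supervision from ground-truth labels
    return alpha * soft + (1.0 - alpha) * hard

# One training step (teacher frozen, student updated):
# with torch.no_grad():
#     teacher_logits = teacher(batch)
# loss = distillation_loss(student(batch), teacher_logits, labels)
# loss.backward()
# optimizer.step()
```

The temperature `T` softens both distributions so the student learns from the teacher’s relative preferences among classes, not just its top prediction; `alpha` balances imitation against the ordinary supervised loss.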
Looking forward: The authors reviewed predictions they made in last year’s report — among them, that regulators would investigate the Microsoft/OpenAI partnership (accurate) and that a model builder would spend over $1 billion on training (not yet) — and forecast key developments in 2025:
- An open source model will outperform OpenAI’s proprietary o1 on reasoning benchmarks.
- European lawmakers, fearing that the AI Act overreaches, will refrain from strict enforcement.
- Generative AI will score mainstream hits. A viral app or website built by a noncoder, or a video game with interactive generative AI elements, will achieve breakout success. An AI-generated research paper will be accepted at a major machine learning conference.
Why it matters: The authors examined AI from the point of view of investors, keen to spot shifts and trends that will play out in significant ways. Their report dives deep into the year’s research findings as well as business deals and political currents, making for a well-rounded snapshot of AI at the dawn of a new year.
We’re thinking: The authors are bold enough to make clear predictions and self-critical enough to evaluate their own accuracy one year later. We appreciate their principled approach!