Graph showing cross-validation accuracy vs. number of features for raw and whitened inputs.
Technical Insights

Focus on the Future, Learn From the Past: 15 years ago, the idea of scaling up deep learning was controversial — but it was right. Keep your eyes open for such ideas in 2025.

I’m thrilled that former students and postdocs of mine won both of this year’s NeurIPS Test of Time Paper Awards.
Cartoon showing people stuck in wet concrete, with a person saying ‘You asked for a concrete idea!’
Technical Insights

Best Practices for AI Product Management: Generative AI is making it possible to build new kinds of applications in new ways. Here are emerging best practices for AI product managers.

AI Product Management is evolving rapidly. The growth of generative AI and AI-based developer tools has created numerous opportunities to build AI applications.
AI ecosystem layers: applications, orchestration, foundational models, cloud, and semiconductors.
Technical Insights

The Falling Cost of Building AI Applications: Big AI’s huge investments in foundation models enable developers to build AI applications at very low cost.

There’s a lingering misconception that building with generative AI is expensive.
Two people reading in bed, one with a book on library functions and a head labeled with AI layers.
Technical Insights

AI Is Part of Your Online Audience: Some webpages are written not for humans but for large language models to read. Developers can benefit by keeping the LLM audience in mind.

A small number of people are posting text online that’s intended for direct consumption not by humans, but by LLMs (large language models).
Man with tools says, “I optimized for tool use!” Woman at computer replies, “Should’ve optimized for computer use!”
Technical Insights

From Optimizing for People to Optimizing for Machines: Why large language models are increasingly fine-tuned to fit into agentic workflows.

Large language models (LLMs) are typically optimized to answer people’s questions.
Two cheetahs in a savannah, with one saying ‘Move fast and be responsible!’ in a speech bubble.
Technical Insights

How to Get User Feedback on Your AI Products - Fast!: Your ability to prototype AI capabilities fast affects all parts of the product development cycle, starting with getting user feedback.

Startups live or die by their ability to execute at speed. For large companies, too, the speed with which an innovation team is able to iterate has a huge impact on its odds of success.
Comic where a robot is hiding in a closet during a game of hide-and-seek.
Technical Insights

Why Science-Fiction Scenarios of AI’s Emergent Behavior Are Likely to Remain Fictional: The sudden appearance of “emergent” AI capabilities may be an artifact of the metrics you study.

Over the weekend, my two kids colluded in a hilariously bad attempt to mislead me into looking in the wrong place during a game of hide-and-seek.
Technical Insights

Welcoming Diverse Approaches Keeps Machine Learning Strong: What technology counts as an “agent”? Instead of arguing, let's consider a spectrum along which various technologies are “agentic.”

One reason for machine learning’s success is that our field welcomes a wide range of work.
Technical Insights

We Need Better Evals for LLM Applications: It’s hard to evaluate AI applications built on large language models. Better evals would accelerate progress.

A barrier to faster progress in generative AI is evaluations (evals), particularly of custom AI applications that generate free-form text.
Technical Insights

Project Idea — A Car for Dinosaurs: AI projects don’t need to have a meaningful deliverable. Lower the bar and do something creative.

A good way to get started in AI is to start with coursework, which gives a systematic way to gain knowledge, and then to work on projects.
Technical Insights

From Prompts to Mega-Prompts: Best practices for developers of LLM-based applications in the era of long context and faster, cheaper token generation.

In the last couple of days, Google announced a doubling of Gemini Pro 1.5's input context window from 1 million to 2 million tokens, and OpenAI released GPT-4o, which generates tokens 2x faster and 50% cheaper than GPT-4 Turbo and natively accepts and generates multimodal tokens.
Technical Insights

Building Models That Learn From Themselves: AI developers are hungry for more high-quality training data. The combination of agentic workflows and inexpensive token generation could supply it.

Inexpensive token generation and agentic workflows for large language models (LLMs) open up intriguing new possibilities for training LLMs on synthetic data.
Technical Insights

Why We Need More Compute for Inference: Today, large language models produce output primarily for humans. But agentic workflows produce lots of output for the models themselves — and that will require much more compute for AI inference.

Much has been said about many companies’ desire for more compute (as well as data) to train larger foundation models.
Proposed ChatDev architecture, illustrated.
Technical Insights

Agentic Design Patterns Part 5, Multi-Agent Collaboration: Prompting an LLM to play different roles for different parts of a complex task summons a team of AI agents that can do the job more effectively.

Multi-agent collaboration is the last of the four key AI agentic design patterns that I’ve described in recent letters.
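The core of this pattern is that a single model can be prompted with different personas, and the resulting “agents” hand work off to one another. The letter does not prescribe an implementation; below is a minimal sketch under that assumption, where `call_llm` is a hypothetical placeholder standing in for any real LLM client.

```python
def call_llm(system: str, message: str) -> str:
    """Placeholder for a real LLM call; swap in any API client."""
    return f"[{system}] reply to: {message}"

class Agent:
    """One role in the collaboration: the same model, a different persona."""
    def __init__(self, role: str):
        self.role = role

    def respond(self, message: str) -> str:
        return call_llm(f"You are a {self.role}", message)

# One model plays two roles: a coder drafts, a reviewer critiques the draft.
coder = Agent("software engineer")
reviewer = Agent("code reviewer")

draft = coder.respond("Write a function that sorts a list.")
critique = reviewer.respond(draft)
```

In a real system, each agent would carry its own conversation history and the critique would be fed back to the coder for revision; the hand-off structure stays the same.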
Agentic Design Patterns Part 4, Planning: Large language models can drive powerful agents to execute complex tasks if you ask them to plan the steps before they act.
Technical Insights

Agentic Design Patterns Part 4, Planning: Large language models can drive powerful agents to execute complex tasks if you ask them to plan the steps before they act.

Planning is a key agentic AI design pattern in which we use a large language model (LLM) to autonomously decide on what sequence of steps to execute to accomplish a larger task.
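The pattern described above — ask the model for a step list first, then execute the steps — can be sketched in a few lines. This is an illustrative skeleton, not a prescribed implementation: `call_llm` is a hypothetical stub standing in for any LLM client, and the numbered-list plan format is an assumption.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call; returns a canned plan or step result."""
    if "List the steps" in prompt:
        return "1. Search the web\n2. Summarize findings\n3. Draft the answer"
    return f"[completed] {prompt}"

def plan_and_execute(task: str) -> list[str]:
    # Planning phase: ask the model for an explicit sequence of steps.
    plan = call_llm(f"List the steps needed to: {task}")
    steps = [line.split(". ", 1)[1] for line in plan.splitlines()]
    # Execution phase: carry out each planned step in order.
    return [call_llm(step) for step in steps]

results = plan_and_execute("answer a research question")
print(len(results))  # → 3, one result per planned step
```

A production version would also let the model revise the plan when a step fails, but the plan-then-act structure is the essence of the pattern.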