Technical Insights

AI Is Part of Your Online Audience: Some webpages are written not for humans but for large language models to read. Developers can benefit by keeping the LLM audience in mind.

A small number of people are posting text online that’s intended for direct consumption not by humans, but by LLMs (large language models).

From Optimizing for People to Optimizing for Machines: Why large language models are increasingly fine-tuned to fit into agentic workflows

Large language models (LLMs) are typically optimized to answer people’s questions.

How to Get User Feedback to Your AI Products - Fast!: Your ability to prototype AI capabilities fast affects all parts of the product development cycle, starting with getting user feedback.

Startups live or die by their ability to execute at speed. For large companies, too, the speed with which an innovation team is able to iterate has a huge impact on its odds of success.

Why Science-Fiction Scenarios of AI’s Emergent Behavior Are Likely to Remain Fictional: The sudden appearance of “emergent” AI capabilities may be an artifact of the metrics you study

Over the weekend, my two kids colluded in a hilariously bad attempt to mislead me to look in the wrong place during a game of hide-and-seek.

Welcoming Diverse Approaches Keeps Machine Learning Strong: What technology counts as an “agent”? Instead of arguing, let's consider a spectrum along which various technologies are “agentic.”

One reason for machine learning’s success is that our field welcomes a wide range of work.

We Need Better Evals for LLM Applications: It’s hard to evaluate AI applications built on large language models. Better evals would accelerate progress.

A barrier to faster progress in generative AI is evaluations (evals), particularly of custom AI applications that generate free-form text.
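To make the evals problem concrete, here is a toy harness for scoring free-form text outputs. The keyword-matching scorer and the test cases are illustrative stand-ins; real evals for custom applications are far richer (for example, using an LLM as a judge), but the scaffolding of cases, a scorer, and an aggregate metric looks like this.

```python
# Toy eval harness for free-form outputs: score each response by the
# fraction of required keywords it contains, then average over the cases.
def keyword_eval(response: str, required: list[str]) -> float:
    hits = sum(1 for k in required if k.lower() in response.lower())
    return hits / len(required)

# Hypothetical test cases: (model response, keywords a good answer must contain)
cases = [
    ("Paris is the capital of France.", ["Paris"]),
    ("The capital is Lyon.", ["Paris"]),
]

scores = [keyword_eval(response, keywords) for response, keywords in cases]
print(sum(scores) / len(scores))  # mean score across cases: 0.5
```

Keyword matching is brittle for free-form text, which is exactly the point of the article: better scorers for this slot are what would accelerate progress.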

Project Idea — A Car for Dinosaurs: AI projects don’t need to have a meaningful deliverable. Lower the bar and do something creative.

A good way to get started in AI is to start with coursework, which gives a systematic way to gain knowledge, and then to work on projects.

From Prompts to Mega-Prompts: Best practices for developers of LLM-based applications in the era of long context and faster, cheaper token generation

In the last couple of days, Google announced a doubling of Gemini Pro 1.5's input context window from 1 million to 2 million tokens, and OpenAI released GPT-4o, which generates tokens 2x faster and 50% cheaper than GPT-4 Turbo and natively accepts and generates multimodal tokens.

Building Models That Learn From Themselves: AI developers are hungry for more high-quality training data. The combination of agentic workflows and inexpensive token generation could supply it.

Inexpensive token generation and agentic workflows for large language models (LLMs) open up intriguing new possibilities for training LLMs on synthetic data.
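One common shape for this idea is a generate-then-filter pipeline: one model call drafts candidate training examples, and a cheaper check keeps only the ones that pass. The sketch below stubs out both model calls with hypothetical `fake_*` functions; it shows the pipeline's structure, not any particular system.

```python
# Generate-then-filter sketch for synthetic training data. Both "model"
# calls are stubs standing in for real LLM requests.
def fake_generate_pair(topic: str) -> tuple[str, str]:
    # Stub: a real LLM would draft a question/answer pair about the topic.
    return (f"What is 2 + 2? (topic: {topic})", "4")

def fake_verify(question: str, answer: str) -> bool:
    # Stub: a real verifier might re-solve the problem, run the code,
    # or ask a judge model — the agentic part of the workflow.
    return answer == "4"

def build_dataset(topics: list[str]) -> list[tuple[str, str]]:
    candidates = [fake_generate_pair(t) for t in topics]
    return [(q, a) for q, a in candidates if fake_verify(q, a)]  # keep verified pairs

print(len(build_dataset(["arithmetic", "algebra"])))  # 2 pairs survive the filter
```

Cheap tokens matter here because the filter can afford to discard most of what the generator produces.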

Why We Need More Compute for Inference: Today, large language models produce output primarily for humans. But agentic workflows produce lots of output for the models themselves — and that will require much more compute for AI inference.

Much has been said about many companies’ desire for more compute (as well as data) to train larger foundation models.

Agentic Design Patterns Part 5, Multi-Agent Collaboration: Prompting an LLM to play different roles for different parts of a complex task summons a team of AI agents that can do the job more effectively.

Multi-agent collaboration is the last of the four key AI agentic design patterns that I’ve described in recent letters.
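The core of the pattern is that the same underlying model, prompted with different roles, exchanges messages until the task is done. The sketch below uses a hypothetical `fake_llm` stub in place of real role-prompted model calls; it shows the coder/reviewer message loop, not a production framework.

```python
# Multi-agent collaboration sketch: one stubbed "model" plays two roles,
# a coder and a reviewer, whose messages alternate until approval.
def fake_llm(role: str, message: str) -> str:
    # Stub: a real LLM would be called with a role-specific system prompt.
    if role == "coder":
        return "def add(a, b): return a + b"
    # Reviewer role: approve drafts that contain a return statement.
    return "LGTM" if "return" in message else "Please add a return statement"

def collaborate(task: str, max_turns: int = 4) -> str:
    draft = fake_llm("coder", task)
    for _ in range(max_turns):
        review = fake_llm("reviewer", draft)
        if review == "LGTM":               # reviewer approves; stop
            return draft
        draft = fake_llm("coder", review)  # coder revises using the feedback
    return draft

print(collaborate("Write an add function"))
```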

Agentic Design Patterns Part 4, Planning: Large language models can drive powerful agents to execute complex tasks if you ask them to plan the steps before they act.

Planning is a key agentic AI design pattern in which we use a large language model (LLM) to autonomously decide on what sequence of steps to execute to accomplish a larger task.
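A minimal way to picture the pattern: a planner call decomposes the task into a step list, and a loop executes the steps in order. The `fake_planner` stub and step handlers below are hypothetical placeholders for real model calls and tools; the point is the plan-then-execute control flow.

```python
# Planning-pattern sketch: a stubbed "planner" proposes a step sequence,
# and the agent loop executes each step against shared state.
def fake_planner(task: str) -> list[str]:
    # Stub: a real LLM would propose this step list from the task description.
    return ["fetch", "summarize", "format"]

# Hypothetical step handlers; in a real agent these would call tools or the LLM.
STEP_HANDLERS = {
    "fetch":     lambda state: state + ["raw data"],
    "summarize": lambda state: state + ["summary"],
    "format":    lambda state: state + ["report"],
}

def run_plan(task: str) -> list[str]:
    state: list[str] = []
    for step in fake_planner(task):        # execute the planned steps in order
        state = STEP_HANDLERS[step](state)
    return state

print(run_plan("Write a status report"))  # ['raw data', 'summary', 'report']
```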

Agentic Design Patterns Part 3, Tool Use: How large language models can act as agents by taking advantage of external tools for search, code execution, productivity, ad infinitum

Tool use, in which an LLM is given functions it can request to call for gathering information, taking action, or manipulating data, is a key design pattern of AI agentic workflows.
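In code, the pattern boils down to a dispatch loop: the model emits a structured tool request, and the application looks up and executes the corresponding function. The `fake_llm` stub below stands in for a real model call that would emit the JSON itself given the tool descriptions; the tool registry and dispatch are the part the pattern actually specifies.

```python
import json

# Tool-use sketch: the application registers callable tools, the (stubbed)
# model requests one by name with arguments, and the app executes it.
def calculator(expression: str) -> str:
    """A tool the model can request: evaluate a simple arithmetic expression."""
    return str(eval(expression, {"__builtins__": {}}))  # restricted eval for the demo

TOOLS = {"calculator": calculator}

def fake_llm(prompt: str) -> str:
    # Stub: a real LLM, shown the tool descriptions, would emit this JSON.
    return json.dumps({"tool": "calculator", "args": {"expression": "17 * 23"}})

def run_with_tools(user_query: str) -> str:
    request = json.loads(fake_llm(user_query))
    tool = TOOLS[request["tool"]]     # look up the requested tool by name
    return tool(**request["args"])    # execute it with the model's arguments

print(run_with_tools("What is 17 times 23?"))  # 391
```

In a real system the tool result is fed back to the model for another turn; this sketch stops after one dispatch.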

Agentic Design Patterns Part 2, Reflection: Large language models can become more effective agents by reflecting on their own behavior.

Last week, I described four design patterns for AI agentic workflows that I believe will drive significant progress this year: Reflection, Tool use, Planning, and Multi-agent collaboration.
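Reflection has the simplest loop of the four: generate a draft, ask the model to critique it, revise, and repeat until the critique passes. The `fake_*` functions below are hypothetical stubs for real model calls (here the "critique" is just a length check); the draft-critique-revise loop is the pattern itself.

```python
# Reflection-pattern sketch: draft, self-critique, revise until the
# critique passes or a round limit is hit. All model calls are stubs.
def fake_generate(prompt: str) -> str:
    return "a draft answer that is much too wordy for the limit"

def fake_critique(draft: str) -> str:
    # Stub: a real LLM would return free-form feedback on the draft.
    return "OK" if len(draft) <= 20 else "Shorten the answer"

def fake_revise(draft: str, critique: str) -> str:
    return draft[:20].rstrip()  # stub revision: truncate to the limit

def reflect(prompt: str, max_rounds: int = 3) -> str:
    draft = fake_generate(prompt)
    for _ in range(max_rounds):
        critique = fake_critique(draft)
        if critique == "OK":                   # critique passes; stop iterating
            break
        draft = fake_revise(draft, critique)   # revise using the feedback
    return draft

print(reflect("Answer briefly"))
```

The round limit matters in practice: reflection loops that never satisfy their own critic must still terminate.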

Agentic Design Patterns Part 1: Four AI agent strategies that improve GPT-4 and GPT-3.5 performance

I think AI agent workflows will drive massive AI progress this year — perhaps even more than the next generation of foundation models. This is an important trend, and I urge everyone who works in AI to pay attention to it.
