The Benefits of Lazy Prompting

You don’t always need to provide context when prompting a large language model. A quick prompt can be enough.

Cartoon of a relaxed man saying “Relax! I’m lazy prompting!” while lounging under a beach umbrella near a stressed coworker at a desk.

Dear friends,

Contrary to standard prompting advice that you should give LLMs the context they need to succeed, I find it’s sometimes faster to be lazy and dash off a quick, imprecise prompt and see what happens. The key to whether this is a good idea is whether you can quickly assess the output quality, so you can decide whether to provide more context. In this post, I’d like to share when and how I use “lazy prompting.”

When debugging code, many developers copy-paste error messages — sometimes pages of them — into an LLM without further instructions. Most LLMs are smart enough to figure out that you want help understanding the error and proposing fixes, so you don’t need to tell them explicitly. With brief instructions like “Edit this: …” or “sample dotenv code” (to remind you how to write code that uses Python’s dotenv package), an LLM will often generate a good response. Further, if the response is flawed, you can hopefully spot any problems and refine the prompt, for example to steer how the LLM edits your text.
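For instance, a prompt as terse as “sample dotenv code” typically elicits something along these lines. (A minimal sketch of my own, using the real python-dotenv API; the environment variable name is hypothetical.)

    import os
    from dotenv import load_dotenv

    load_dotenv()  # read key=value pairs from a .env file into the environment
    api_key = os.getenv("MY_API_KEY")  # hypothetical variable name
    print("Key loaded." if api_key else "Key missing.")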

At the other end of the spectrum, sometimes I spend 30 minutes carefully writing a 2-page prompt to get an AI system to help me solve a problem (for example, writing many pages of code) that otherwise would have taken me much longer.

I don’t try a lazy prompt if (i) I’m confident there’s no chance the LLM will provide a good solution without additional context. For example, given only a partial program spec, would even a skilled human developer have a chance of understanding what you want? If I absolutely want to use a particular piece of PDF-to-text conversion software (like my team LandingAI’s Agentic Doc Extraction!), I should say so in the prompt, since otherwise it’s very hard for the LLM to guess my preference. I also wouldn’t use a lazy prompt if (ii) a buggy implementation would take a long time to detect. For example, if the only way to find out whether the output is incorrect is to laboriously run the code and check its functionality, it’s better to spend the time up front to give context that increases the odds of the LLM generating what I want.

By the way, lazy prompting is an advanced technique. On average, I see more people giving too little context to LLMs than too much. Laziness is a good technique only after you’ve learned how to provide enough context, so that you can deliberately step back and see how little context you can get away with and still have it work. Also, lazy prompting applies only when you can iterate quickly using an LLM’s web or app interface. It doesn’t apply to prompts written in code for the purpose of repeatedly calling an API, since presumably you won’t be examining every output to clarify and iterate when the output is poor.
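To make the contrast concrete, here’s a minimal sketch (with hypothetical names and data throughout) of how a prompt written in code differs from a lazy one typed into a chat window:

    # In a chat UI, a lazy prompt is fine: you'll read the reply and iterate.
    lazy_prompt = "Edit this: ..."

    # In code that calls an API repeatedly, nobody inspects each output,
    # so the prompt should carry full context up front.
    TEMPLATE = (
        "You are debugging Python code. Given the error log below, state the "
        "likely root cause in one sentence and propose a concrete fix.\n"
        "Log: {log}"
    )

    error_logs = ["TypeError: 'NoneType' object is not iterable"]  # hypothetical data
    for log in error_logs:
        prompt = TEMPLATE.format(log=log)
        # response = client.chat.completions.create(...)  # hypothetical LLM call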

Thank you to Rohit Prasad, who has been collaborating with me on the open-source package aisuite, for suggesting the term lazy prompting. There is an analogy to lazy evaluation in computer science, where you call a function at the latest possible moment and only when a specific result is needed. In lazy prompting, we add details to the prompt only when they are needed.
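If the analogy is unfamiliar: a Python generator is a simple form of lazy evaluation, since each value is computed only at the moment it’s requested — just as a lazy prompter adds context only when it turns out to be needed.

    def numbers():
        n = 0
        while True:
            print(f"computing {n}")  # runs only when a value is requested
            yield n
            n += 1

    gen = numbers()  # nothing is computed yet
    next(gen)        # only now is the first value produced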

Keep building!

Andrew 
