LLMs as Operating Systems: Agent Memory
Instructors: Charles Packer, Sarah Wooders
- Intermediate
- 1 Hour 19 Minutes
- 8 Video Lessons
- 6 Code Examples
What you'll learn
Build agents with long-term, persistent memory using Letta to manage and edit context efficiently.
Learn how an LLM agent can act as an operating system to manage memory, autonomously optimizing context use.
Apply memory management to create adaptive, collaborative AI agents for real-world tasks like research and HR.
About this course
Learn how to build agentic memory into your applications in this short course, LLMs as Operating Systems: Agent Memory, created in partnership with Letta, and taught by its founders Charles Packer and Sarah Wooders.
An LLM can use any information placed in its input context window, but that window has limited space, and longer contexts cost more and are slower to process. Managing the context window, and deciding what goes into it, is therefore critical.
In the research paper “MemGPT: Towards LLMs as Operating Systems,” its authors, among them Charles and Sarah, proposed using an LLM agent to manage this context window, building a memory-management system that provides applications with managed, persistent memory.
Examples of managing agent memory include:
- Controlling conversation memory: as a conversation grows beyond defined limits, move information out of context into a persistent, searchable database; summarize it to keep relevant facts in context; and restore relevant conversation elements as the conversation flow requires.
- Persisting and editing facts such as names, dates, and preferences, and keeping them available in context.
- Persisting and tracking task-specific information. For example, a research agent needs to keep its current research in context memory, swapping in the most relevant information from a searchable database as older information is moved out.
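The conversation-memory pattern in the first example can be sketched in plain Python. This is a minimal illustration, not Letta's implementation: the `ConversationMemory` class and its methods are hypothetical, and the summarizer is a stub standing in for an LLM summarization call.

```python
# Minimal sketch of the conversation-memory pattern above.
# Class and method names are illustrative, not Letta's API;
# the summarizer is a stub standing in for an LLM call.

class ConversationMemory:
    def __init__(self, max_in_context=4):
        self.max_in_context = max_in_context
        self.in_context = []   # messages currently in the context window
        self.archive = []      # persistent, searchable store for evicted messages
        self.summary = ""      # running summary keeping key facts in context

    def add(self, message):
        self.in_context.append(message)
        # Past the limit, evict the oldest message to the archive
        # and fold it into the running summary.
        while len(self.in_context) > self.max_in_context:
            evicted = self.in_context.pop(0)
            self.archive.append(evicted)
            self.summary = (self.summary + " | " + evicted)[-200:]

    def search(self, query):
        # Restore relevant archived messages by simple keyword match.
        return [m for m in self.archive if query.lower() in m.lower()]


mem = ConversationMemory(max_in_context=2)
for msg in ["My name is Ada.", "I live in Paris.", "I like climbing."]:
    mem.add(msg)
print(mem.search("Ada"))  # → ['My name is Ada.']
```

A real system would replace the keyword match with semantic (embedding-based) search and the summary stub with an LLM call, but the flow of evicting, summarizing, and restoring is the same.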
In this course, you’ll learn:
- How to build, from scratch, an agent with self-editing memory that uses tool calling and multi-step reasoning.
- Letta, an open-source framework that adds memory to your LLM agents, giving them advanced reasoning capabilities and transparent long-term memory.
- The key ideas behind the MemGPT paper, the two tiers of memory inside and outside the context window, and how an agent's state, composed of memory, tools, and messages, is turned into prompts.
- How to create and interact with a MemGPT agent using the Letta framework, and how to build and edit its core and archival memory.
- How core memory is designed and implemented with an example of how to customize it with blocks and memory tools.
- How to implement multi-agent collaboration both by sending messages and by sharing memory blocks.
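The self-editing memory described above can be illustrated with a toy Python sketch. The tool names echo MemGPT-style core-memory tools, but this is a simplified stand-in, not Letta's implementation; in a real agent, the LLM itself decides when to invoke these tools.

```python
# Toy version of self-editing core memory via tool calls.
# The "human"/"persona" block names mirror MemGPT-style core memory;
# in a real agent, the LLM decides when to invoke these tools.

core_memory = {
    "human": "Name: unknown",
    "persona": "I am a helpful assistant.",
}

def core_memory_replace(block, old, new):
    """Tool the agent calls to edit a fact in its in-context memory."""
    core_memory[block] = core_memory[block].replace(old, new)

def core_memory_append(block, content):
    """Tool the agent calls to add a new fact to a memory block."""
    core_memory[block] += "\n" + content

# The agent, via tool calling, records what it just learned:
core_memory_replace("human", "Name: unknown", "Name: Ada")
core_memory_append("human", "Preference: short answers")

print(core_memory["human"])
# Name: Ada
# Preference: short answers
```

Because the memory blocks live inside the prompt, every edit the agent makes is immediately visible in its next context window, which is what makes the memory "self-editing."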
By the end of this course, you will have the tools to build LLM applications that can leverage virtual context, extending memory beyond the finite context window of LLMs.
Who should join?
Anyone who has basic Python skills and is curious about how autonomous agents can manage their own memory.
Course Outline
8 Lessons・6 Code Examples
Introduction
Video・6 mins
Editable memory
Video with code examples・12 mins
Understanding MemGPT
Video・15 mins
Building Agents with Memory
Video with code examples・10 mins
Programming Agent Memory
Video with code examples・10 mins
Agentic RAG and External Memory
Video with code examples・11 mins
Multi-agent Orchestration
Video with code examples・12 mins
Conclusion
Video・1 min
Appendix - Tips, Help, and Download
Code examples・1 min
Course access is free for a limited time during the DeepLearning.AI learning platform beta!