Short Course

Improving Accuracy of LLM Applications

Instructors: Sharon Zhou, Amit Sangani

Lamini, Meta
  • Intermediate
  • 1 Hour 41 Minutes
  • 7 Video Lessons
  • 4 Code Examples

What you'll learn

  • Understand the development steps, from evaluation through prompting, self-reflection, and fine-tuning, that improve your model’s reliability and accuracy.

  • Learn how memory tuning can improve your model’s performance by embedding facts into the model to reduce hallucinations.

  • Use the Llama 3 8B model to build an LLM application that converts text to SQL with a custom schema.

About this course

Join our new short course, Improving Accuracy of LLM Applications with Lamini and Meta. Learn from Sharon Zhou, Co-founder & CEO of Lamini, and Amit Sangani, Senior Director of Partner Engineering at Meta.

Many developers have experienced frustration with inconsistent results when working with LLM applications. This course offers a systematic approach to enhance the accuracy and reliability of your LLM applications.

You will build an SQL agent, add evaluation metrics to measure performance, and use prompt engineering and self-reflection to make the model perform better. Finally, you will fine-tune the model with techniques like LoRA and memory tuning, which embeds facts in model weights to reduce hallucinations.

In this course, you’ll use Llama’s family of open-source models.
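To make the task concrete, here is a minimal text-to-SQL sketch in the spirit of the course, written with Hugging Face transformers and Llama 3 8B Instruct. The schema, prompt wording, and question are illustrative assumptions, not the course's own notebook code (the course itself uses Lamini's tooling):

```python
# Minimal text-to-SQL sketch (illustrative; not the course's notebook code).
# The schema and question below are made-up examples.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3-8B-Instruct",  # gated model; requires HF access
)

schema = """CREATE TABLE players (
    name TEXT, team TEXT, position TEXT, height_inches INTEGER, salary_usd INTEGER
);"""

messages = [
    {"role": "system",
     "content": f"You write SQLite queries. Use only this schema:\n{schema}"},
    {"role": "user",
     "content": "Who is the tallest player on the Golden State Warriors?"},
]

result = generator(messages, max_new_tokens=128)
print(result[0]["generated_text"][-1]["content"])  # the generated SQL
```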

What you’ll do: 

  • Build a text-to-SQL agent and simulate situations where it hallucinates to begin the evaluation process.
  • Build an evaluation framework to systematically measure performance, including criteria for good evaluations, best practices, and how to develop an evaluation score (a minimal scoring sketch follows this list).
  • Learn how instruction fine-tuning enhances pre-trained LLMs to follow instructions, and how memory fine-tuning embeds facts to reduce hallucinations.
  • Break fine-tuning myths and see how Parameter-Efficient Fine-Tuning (PEFT) techniques like Low-Rank Adaptation (LoRA) reduce training time by 100x, and how Mixture of Memory Experts (MoME) reduces it even further (see the LoRA sketch after this list).
  • Go through an iterative process of generating training data and fine-tuning, learning practical tips such as adding examples, generating variations, and filtering generated data to increase model accuracy.
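As a rough illustration of the evaluation score mentioned above, one common metric for text-to-SQL is execution accuracy: run the generated query and a hand-written reference query against the same database and check that they return the same rows. A minimal sketch follows; the database path and case format are hypothetical, and the course's framework may score differently.

```python
# Execution-accuracy sketch for a text-to-SQL eval set (illustrative).
import sqlite3

def execution_match(db_path: str, generated_sql: str, reference_sql: str) -> bool:
    """True if both queries return the same rows, ignoring row order."""
    conn = sqlite3.connect(db_path)
    try:
        got = set(conn.execute(generated_sql).fetchall())
        want = set(conn.execute(reference_sql).fetchall())
        return got == want
    except sqlite3.Error:
        return False  # invalid or failing SQL counts as a miss
    finally:
        conn.close()

def accuracy(db_path: str, cases: list[tuple[str, str, str]]) -> float:
    """cases: (question, generated_sql, reference_sql) triples."""
    hits = sum(execution_match(db_path, gen, ref) for _, gen, ref in cases)
    return hits / len(cases)
```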
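And since LoRA itself is tooling-agnostic, here is a minimal sketch of attaching LoRA adapters to a causal LM with Hugging Face's peft library. The hyperparameters are common illustrative defaults, not the course's settings:

```python
# LoRA sketch with the peft library: freeze the base model and train only
# small low-rank adapter matrices. Hyperparameters are illustrative.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

config = LoraConfig(
    r=8,                                  # rank of the low-rank update
    lora_alpha=16,                        # scaling factor for the update
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically well under 1% of all parameters
```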

Start improving the accuracy of LLM applications today! 

Who should join?

This course is ideal for anyone with intermediate Python knowledge and familiarity with large language models (LLMs) looking to build more factual and precise LLM applications.

Course Outline

7 Lessons・4 Code Examples
  • Introduction (Video, 5 mins)
  • Overview (Video with code examples, 17 mins)
  • Create an SQL Agent (Video with code examples, 10 mins)
  • Create an Evaluation (Video with code examples, 23 mins)
  • Finetuning, PEFT, & Memory Tuning (Video, 12 mins)
  • Generate Data & Finetune (Video with code examples, 31 mins)
  • Conclusion (Video, 1 min)

Instructors

Sharon Zhou

Co-Founder and CEO of Lamini

Amit Sangani

Senior Director of Partner Engineering at Meta

Course access is free for a limited time during the DeepLearning.AI learning platform beta!
