Short CourseBeginner1 Hour 19 Minutes

Red Teaming LLM Applications

Instructors: Matteo Dora, Luca Martial

Giskard
  • Beginner
  • 1 Hour 19 Minutes
  • 7 Video Lessons
  • 5 Code Examples
  • Instructors: Matteo Dora, Luca Martial
    • Giskard
    Giskard

What you'll learn

  • Learn to identify and evaluate vulnerabilities in large language model (LLM) applications.

  • Apply red teaming techniques from cybersecurity to ensure the safety and reliability of your LLM application.

  • Use an open source library from Giskard to help automate LLM red-teaming methods.

About this course

Learn how to test and find vulnerabilities in your LLM applications to make them safer. In this course, you’ll attack various chatbot applications using prompt injections to see how the system reacts and understand security failures. LLM failures can lead to legal liability, reputational damage, and costly service disruptions. This course helps you mitigate these risks proactively. Learn industry-proven red teaming techniques to proactively test, attack, and improve the robustness of your LLM applications.

In this course:

  • Explore the nuances of LLM performance evaluation, and understand the differences between benchmarking foundation models and testing LLM applications.
  • Get an overview of fundamental LLM application vulnerabilities and how they affect real-world deployments.
  • Gain hands-on experience with both manual and automated LLM red-teaming methods.
  • See a full demonstration of red-teaming assessment, and apply the concepts and techniques covered throughout the course.

After completing this course, you will have a fundamental understanding of how to experiment with LLM vulnerability identification and evaluation on your own applications.

Who should join?

Red Teaming LLM Applications is a beginner-friendly course. Basic Python knowledge is recommended to get the most out of this course.

Course Outline

7 Lessons・5 Code Examples
  • Introduction

    Video4 mins

  • Overview of LLM Vulnerabilities

    Video with code examples18 mins

  • Red Teaming LLMs

    Video with code examples13 mins

  • Red Teaming at Scale

    Video with code examples17 mins

  • Red Teaming LLMs with LLMs

    Video with code examples10 mins

  • A Full Red Teaming Assessment

    Video with code examples15 mins

  • Conclusion

    Video1 min

Instructors

Matteo Dora

Matteo Dora

Lead LLM Safety Researcher at Giskard

Luca Martial

Luca Martial

Product Lead at Giskard

Course access is free for a limited time during the DeepLearning.AI learning platform beta!

Want to learn more about Generative AI?

Keep learning with updates on curated AI news, courses, and events, as well as Andrew’s thoughts from DeepLearning.AI!