Short Course

Safe and Reliable AI via Guardrails

Instructor: Shreya Rajpal

GuardrailsAI
  • Beginner
  • 1 Hour 25 Minutes
  • 10 Video Lessons
  • 6 Code Examples

What you'll learn

  • Learn the common failure modes of LLM-powered applications that guardrails can help mitigate, including hallucinations and revealing sensitive information.

  • Understand how AI guardrails validate and verify your applications with input and output guards, ensuring reliable and controlled interactions.

  • Add guardrails to a RAG-powered customer service chatbot to create a new layer of control over the application's behavior.

About this course

Join our new short course, Safe and Reliable AI via Guardrails, and learn how to build production-ready applications with Shreya Rajpal, co-founder & CEO of GuardrailsAI.

The output of LLMs is fundamentally probabilistic: it is impossible to know a model's response in advance or to guarantee the same response twice. This makes it difficult to put LLM-powered applications into production for industries with strict regulations or for clients who require highly consistent application behavior.

Fortunately, installing guardrails on your system gives you an additional layer of control in creating safe and reliable applications. Guardrails are safety mechanisms and validation tools built into AI applications, acting as a protective framework that prevents your application from revealing incorrect, irrelevant, or sensitive information. 
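
As an illustration of the idea (not the GuardrailsAI API itself), the sketch below wraps a hypothetical call_llm function with an input guard that redacts e-mail addresses and phone numbers and an output guard that blocks a banned topic. Every name, pattern, and message in it is an assumption made for this example.

    import re

    # Hypothetical stand-in for the LLM call; real code would call your model or API.
    def call_llm(prompt: str) -> str:
        return "Our unannounced Project Colosseum pizza launches next month."

    # Input guard: redact obvious PII before the prompt reaches the model.
    # Real guardrails use far more robust detectors than these two patterns.
    PII_PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    }

    def input_guard(prompt: str) -> str:
        for label, pattern in PII_PATTERNS.items():
            prompt = pattern.sub(f"<{label}_REDACTED>", prompt)
        return prompt

    # Output guard: block responses that mention a sensitive internal topic.
    BANNED_TOPICS = ["project colosseum"]

    def output_guard(response: str) -> str:
        if any(topic in response.lower() for topic in BANNED_TOPICS):
            return "Sorry, I can't discuss that topic."
        return response

    def guarded_chat(user_message: str) -> str:
        safe_prompt = input_guard(user_message)   # validate and sanitize the input
        raw_response = call_llm(safe_prompt)      # call the model
        return output_guard(raw_response)         # validate and filter the output

    print(guarded_chat("My email is jane@example.com. What's new on the menu?"))

In the course you would swap these hand-rolled checks for pre-built validators from the Guardrails hub, but the shape stays the same: validate the input, call the model, validate the output.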

This course will show you how to build robust guardrails from scratch that mitigate common failure modes of LLM-powered applications, such as hallucinations or revealing personally identifiable information (PII). You’ll also learn how to access a variety of pre-built guardrails on the GuardrailsAI hub that are ready to integrate into your projects.

You’ll implement these guardrails in the context of a RAG-powered customer service chatbot for a small pizzeria.

In detail, you’ll: 

  • Explore the common failure modes of LLM-powered applications, including hallucinations, going off-topic, revealing sensitive information, and generating responses that can harm your reputation.
  • Learn to mitigate these failure modes with input and output guards that validate and verify your application’s responses.
  • Create a guardrail to prevent the chatbot from discussing sensitive topics, such as a confidential project at the pizza shop.
  • Develop a guardrail to detect hallucinations using a Natural Language Inference (NLI) model, ensuring responses are grounded in your trusted documents (a minimal sketch of this check follows this list).
  • Add a Personally Identifiable Information (PII) guardrail to detect and redact sensitive information in user prompts and LLM outputs, using tools from the Guardrails hub.
  • Set up a guardrail to limit the chatbot’s responses to topics relevant to the pizza shop, keeping interactions focused and appropriate.
  • Configure a guardrail that prevents your chatbot from mentioning any competitors, using a name-detection pipeline whose conditional logic routes to either an exact-match check or a threshold check with a named entity recognition (NER) model (a routing sketch appears at the end of this section).
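
To make the NLI check in the list above concrete, here is a minimal, hedged sketch rather than the course's implementation. It assumes the Hugging Face transformers text-classification pipeline with the roberta-large-mnli model (and that the pipeline accepts premise/hypothesis pairs as dicts); the pizzeria details, naive sentence splitting, and threshold are stand-ins for the example.

    from transformers import pipeline  # assumes the transformers library is installed

    # Assumption for this sketch: an off-the-shelf NLI model from the Hugging Face hub.
    # The course's guardrail may use a different model or interface.
    nli = pipeline("text-classification", model="roberta-large-mnli")

    def is_grounded(source_text: str, sentence: str, threshold: float = 0.5) -> bool:
        """Return True if the trusted source text entails the response sentence."""
        # The pipeline accepts a premise/hypothesis pair as a {"text", "text_pair"} dict.
        result = nli([{"text": source_text, "text_pair": sentence}])[0]
        return result["label"] == "ENTAILMENT" and result["score"] >= threshold

    def hallucination_guard(source_text: str, response: str) -> list[str]:
        """Return the response sentences that are NOT supported by the source text."""
        sentences = [s.strip() for s in response.split(".") if s.strip()]
        return [s for s in sentences if not is_grounded(source_text, s)]

    source = "The pizzeria is open from 11am to 10pm and offers a gluten-free crust."
    answer = "We are open until 10pm. We also deliver by drone within five minutes."
    print(hallucination_guard(source, answer))  # likely flags the drone claim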

The tools in this course will unlock more opportunities for you to build and deploy safe, reliable LLM-powered applications ready for real-world use.
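
The competitor-mention guardrail described in the list above can be pictured as a small routing pipeline. The sketch below is an illustrative assumption, not the course's exact code: it first tries an exact match against a hypothetical competitor list and otherwise falls back to named entity recognition, here spaCy's en_core_web_sm model, combined with a fuzzy-match threshold from Python's difflib.

    from difflib import SequenceMatcher

    import spacy  # assumes spaCy and the en_core_web_sm model are installed

    nlp = spacy.load("en_core_web_sm")

    COMPETITORS = ["pizza by alfredo", "luigi's pizzeria"]  # hypothetical competitor list
    FUZZY_THRESHOLD = 0.85

    def mentions_competitor(text: str) -> bool:
        lowered = text.lower()

        # Route 1: cheap exact substring match against the competitor list.
        if any(name in lowered for name in COMPETITORS):
            return True

        # Route 2: NER extracts organization names; each is compared to the list
        # with a similarity threshold to catch misspellings and variants.
        for ent in nlp(text).ents:
            if ent.label_ == "ORG":
                for name in COMPETITORS:
                    if SequenceMatcher(None, ent.text.lower(), name).ratio() >= FUZZY_THRESHOLD:
                        return True
        return False

    print(mentions_competitor("Is your crust better than Pizza by Alfredo's?"))  # True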

Who should join?

Anyone with basic Python knowledge who wants to enhance the safety and reliability of LLM-powered applications with practical, hands-on guardrail techniques.

Course Outline

10 Lessons・6 Code Examples
  • Introduction

    Video・6 mins

  • Failure modes in RAG applications

    Video with code examples・13 mins

  • What are guardrails

    Video・6 mins

  • Building your first guardrail

    Video with code examples・7 mins

  • Checking for hallucinations with Natural Language Inference

    Video with code examples・11 mins

  • Using hallucination guardrail in a chatbot

    Video with code examples・4 mins

  • Keeping a chatbot on topic

    Video with code examples・9 mins

  • Ensuring no personally identifiable information (PII) is leaked

    Video with code examples・13 mins

  • Preventing competitor mentions

    Video・9 mins

  • Conclusion

    Video・1 min

Instructor

Shreya Rajpal

Founder of GuardrailsAI

Course access is free for a limited time during the DeepLearning.AI learning platform beta!
