Short Course

Building Multimodal Search and RAG

Instructor: Sebastian Witalec

Weaviate
  • Intermediate
  • 1 Hour 23 Minutes
  • 8 Video Lessons
  • 7 Code Examples
  • Instructor: Sebastian Witalec
    • Weaviate

What you'll learn

  • Learn how multimodality works by implementing contrastive learning, and see how it can be used to build modality-independent embeddings for seamless any-to-any retrieval (a minimal code sketch follows this list).

  • Build multimodal RAG systems that retrieve multimodal context and reason over it to generate more relevant answers.

  • Implement industry applications of multimodal search and build multi-vector recommender systems.
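
Here is the minimal sketch referenced in the first bullet: a CLIP-style contrastive objective, assuming PyTorch, with the image and text encoders replaced by random placeholder embeddings. It is only an illustration of the idea; the course's own notebooks may differ.

    # Minimal sketch of CLIP-style contrastive learning (PyTorch assumed).
    # Real image/text encoders are replaced by random placeholder embeddings.
    import torch
    import torch.nn.functional as F

    def contrastive_loss(img_emb, txt_emb, temperature=0.07):
        """Symmetric InfoNCE: matching image-text pairs (the diagonal) are
        pulled together, all other pairs in the batch are pushed apart."""
        img_emb = F.normalize(img_emb, dim=-1)
        txt_emb = F.normalize(txt_emb, dim=-1)
        logits = img_emb @ txt_emb.t() / temperature     # (batch, batch) similarities
        targets = torch.arange(logits.size(0))           # i-th image matches i-th caption
        loss_i2t = F.cross_entropy(logits, targets)      # image -> text direction
        loss_t2i = F.cross_entropy(logits.t(), targets)  # text -> image direction
        return (loss_i2t + loss_t2i) / 2

    # Toy usage with stand-in embeddings.
    image_vectors = torch.randn(8, 512)
    caption_vectors = torch.randn(8, 512)
    print(contrastive_loss(image_vectors, caption_vectors))

Because both encoders are trained into the same vector space, a text query can later be compared directly against image vectors, which is what makes any-to-any retrieval possible.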

About this course

Learn how to build multimodal search and RAG systems. RAG systems enhance an LLM by incorporating proprietary data into the prompt context. Typically, RAG applications use text documents, but what if the desired context includes multimedia like images, audio, and video? This course covers the technical aspects of implementing RAG with multimodal data to accomplish this.
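
As a rough illustration of that idea, the sketch below packs retrieved multimodal context into a single prompt payload for a large multimodal model. The RetrievedItem type, the build_multimodal_prompt helper, and the commented-out vector_search call are hypothetical placeholders, not the course's or Weaviate's actual API.

    # Hypothetical sketch of assembling a multimodal RAG prompt.
    from dataclasses import dataclass

    @dataclass
    class RetrievedItem:
        modality: str   # "text", "image", "video", ...
        content: str    # a text snippet, or a path/URI for media

    def build_multimodal_prompt(question, context):
        """Pack retrieved text and media references into one prompt payload
        that a large multimodal model can reason over."""
        parts = [{"type": "text",
                  "text": f"Answer using the context below.\nQuestion: {question}"}]
        for item in context:
            if item.modality == "text":
                parts.append({"type": "text", "text": item.content})
            else:
                parts.append({"type": "media", "uri": item.content})
        return parts

    # Hypothetical usage: vector_search() would query a multimodal index.
    # prompt = build_multimodal_prompt("What does this chart show?",
    #                                  vector_search("quarterly revenue chart"))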

  • Learn how multimodal models are trained through contrastive learning and implement it on a real dataset.
  • Build any-to-any multimodal search to retrieve relevant context across different data types.
  • Learn how LLMs are trained to understand multimodal data through visual instruction tuning, and use them on multiple image reasoning examples.
  • Implement an end-to-end multimodal RAG system that analyzes retrieved multimodal context to generate insightful answers.
  • Explore industry applications like visually analyzing invoices and flowcharts to output structured data.
  • Create a multi-vector recommender system that suggests relevant items by comparing their similarities across multiple modalities, as sketched in the code below.
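
A minimal sketch of that idea: each item carries one vector per modality, and candidates are ranked by a weighted sum of per-modality cosine similarities. NumPy is assumed, and the random vectors stand in for real image and text embeddings.

    # Minimal sketch of a multi-vector recommender (NumPy assumed).
    import numpy as np

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def recommend(query_vecs, items, weights, top_k=3):
        """Rank items by a weighted sum of per-modality cosine similarities."""
        scores = {
            name: sum(weights[m] * cosine(query_vecs[m], vecs[m]) for m in weights)
            for name, vecs in items.items()
        }
        return sorted(scores, key=scores.get, reverse=True)[:top_k]

    # Toy catalog: every item has an image vector and a text vector.
    rng = np.random.default_rng(0)
    items = {f"item_{i}": {"image": rng.normal(size=64), "text": rng.normal(size=64)}
             for i in range(10)}
    query = {"image": rng.normal(size=64), "text": rng.normal(size=64)}
    print(recommend(query, items, weights={"image": 0.6, "text": 0.4}))

Weighting the modalities separately lets an application emphasize, say, visual similarity over textual similarity without retraining any model.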

As AI systems increasingly need to process and reason over multiple data modalities, learning how to build such systems is an important skill for AI developers.

This course equips you with the key skills to embed, retrieve, and generate across different modalities. By gaining a strong foundation in multimodal AI, you’ll be prepared to build smarter search, RAG, and recommender systems.

Who should join?

This course is for anyone who wants to start building their own multimodal applications. Basic Python knowledge and familiarity with RAG are recommended to get the most out of this course.

Course Outline

8 Lessons・7 Code Examples
  • Introduction

    Video · 3 mins

  • Overview of Multimodality

    Video with code examples · 23 mins

  • Multimodal Search

    Video with code examples · 15 mins

  • Large Multimodal Models (LMMs)

    Video with code examples · 9 mins

  • Multimodal RAG (MM-RAG)

    Video with code examples · 9 mins

  • Industry Applications

    Video with code examples · 7 mins

  • Multimodal Recommender System

    Video with code examples · 14 mins

  • Conclusion

    Video · 1 min

  • Appendix - Tips and Help

    Code examples · 1 min

Instructor

Sebastian Witalec

Head of Developer Relations at Weaviate

Course access is free for a limited time during the DeepLearning.AI learning platform beta!
