Learning center"Four features of the Assistant API you aren't using - but should"Learn more
Preview Mode ()

Retrieval Augmented Generation

By James Briggs

Retrieval Augmented Generation (RAG) has become an essential component of the AI stack. RAG helps us reduce hallucinations, fact-check, provide domain-specific knowledge, and much more. Here, we will learn how to make the most of this powerful technology.


Introduction

Retrieval Augmented Generation (RAG) has become the go-to method for grounding Large Language Models (LLMs) in external knowledge. It helps us reduce hallucinations, fact-check, provide domain-specific knowledge, and much more.

When we start with LLMs and RAG, it is easy to view the retrieval pipeline as nothing more than plugging a vector database into our LLM, and this can be enough for prototypes or simple use cases. However, there is far more we can do with retrieval: we can build much more powerful and sophisticated retrieval systems. Our LLMs require good input to produce good output, and retrieval is an essential part of providing that input.
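To make the "vector database plugged into an LLM" setup concrete, here is a minimal sketch of that naive pipeline. It assumes documents have already been embedded and stored in a Pinecone index; the index name, the "text" metadata field, and the model names are placeholders, not prescriptions.

```python
# Minimal "vector DB + LLM" RAG sketch. Assumes documents were embedded and
# upserted into a Pinecone index, with the raw text stored in metadata["text"].
from openai import OpenAI
from pinecone import Pinecone

openai_client = OpenAI()                 # reads OPENAI_API_KEY from the environment
pc = Pinecone(api_key="YOUR_API_KEY")    # placeholder key
index = pc.Index("docs")                 # placeholder index name

def naive_rag(query: str) -> str:
    # 1. Embed the user query with the same model used for the documents.
    query_vector = openai_client.embeddings.create(
        model="text-embedding-3-small",
        input=query,
    ).data[0].embedding

    # 2. Retrieve the most similar document chunks from the vector database.
    results = index.query(vector=query_vector, top_k=3, include_metadata=True)
    context = "\n\n".join(match.metadata["text"] for match in results.matches)

    # 3. Feed the retrieved context to the LLM alongside the question.
    response = openai_client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": f"Answer using only this context:\n{context}"},
            {"role": "user", "content": query},
        ],
    )
    return response.choices[0].message.content
```

Everything beyond this point in the ebook is about improving on this baseline: retrieving better candidates, ranking them more accurately, and giving the LLM richer, more relevant context.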

In this ebook, we will learn how to build better RAG systems using advanced techniques such as two-stage retrieval with reranking, hybrid search, multi-query, and much more.
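As a small preview of the two-stage idea covered in Chapter 1, the sketch below over-fetches candidates with a first-stage retriever and then reranks them with a cross-encoder. The retrieve_candidates helper is a hypothetical stand-in for a vector-database query like the one above, and the model name is just an example.

```python
# Sketch of two-stage retrieval: over-fetch with vector search (recall-oriented),
# then rerank the candidates with a cross-encoder (precision-oriented).
from sentence_transformers import CrossEncoder

def retrieve_candidates(query: str, top_k: int = 25) -> list[str]:
    # Hypothetical first-stage retriever: plug in your vector-database query here.
    raise NotImplementedError

reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")  # example model

def two_stage_retrieve(query: str, final_k: int = 3) -> list[str]:
    candidates = retrieve_candidates(query)
    # Score each (query, document) pair jointly, then keep the best few.
    scores = reranker.predict([(query, doc) for doc in candidates])
    ranked = sorted(zip(candidates, scores), key=lambda pair: pair[1], reverse=True)
    return [doc for doc, _ in ranked[:final_k]]
```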

Chapter 1: Rerankers for RAG
Explore how reranking can supercharge RAG performance.

Chapter 2: Embedding Models
How we decide which embedding model to use.

Chapter 3: Agent Evaluation
Metrics-driven AI agent evaluation.

New chapters coming soon!


Chapter 4: Hybrid Search
Chapter 5: Enhance Search Scope with Multi-Query
Chapter 6: Metadata-Enhanced Generation
Chapter 7: Optimizing Agents for Search
Chapter 8: Small Model Agents with Grammars