
Understanding RAG

From Zero to Production-Ready

An interactive, visual guide to Retrieval-Augmented Generation. Learn how to build AI systems that don't hallucinate, stay up-to-date, and work reliably at scale.

20 min read · Beginner → Advanced · Interactive

The Problem RAG Solves

Large Language Models are incredibly powerful, but they have three fundamental limitations:

Hallucinations

LLMs confidently generate false information when they don't know the answer, making them unreliable for factual queries.

Outdated Knowledge

Models are frozen at their training cutoff date. They can't answer questions about recent events or updated information.

No Private Data

LLMs don't have access to your company's internal documents, databases, or proprietary knowledge.

The Solution: Retrieval-Augmented Generation (RAG)

RAG solves these problems by giving the LLM access to external knowledge. Instead of relying solely on the model's training data, we:

  1. Retrieve relevant information from your knowledge base
  2. Augment the user's query with this retrieved context
  3. Generate a response grounded in real, up-to-date information

Without RAG

User: "What's our Q4 revenue?"

LLM: "I don't have access to your company's financial data."

With RAG

User: "What's our Q4 revenue?"

Retrieves: Q4_Financial_Report.pdf

LLM: "According to the Q4 financial report, revenue was $2.4M, up 18% from Q3."

RAG in 30 Seconds

Here's the core idea: instead of hoping the LLM knows the answer, we find relevant information first and give it to the model as context.

  1. User asks: "What's our refund policy?"
  2. Search docs: find relevant chunks from your knowledge base
  3. Add context: include those chunks in the prompt
  4. Generate: the LLM answers using the retrieved info

The Key Insight

A basic RAG implementation uses semantic search, not keyword matching. It converts text into numbers (embeddings) that capture meaning, so "refund policy" finds content about "money back guarantee" even without matching words.

Why It Works

The LLM doesn't need to memorize your data. It just needs to be good at reading and synthesizing. You provide the facts, it provides the reasoning.

How Semantic Search Works

The "search docs" step above is where the magic happens. Traditional search matches keywords, but RAG uses embeddings to match meaning. Let's see how.

What are Embeddings?

Embeddings are a way to convert text (words, sentences, or documents) into numbers that capture their meaning. Think of them as coordinates in a high-dimensional space where similar concepts are close together.

Text → Numbers

Input Text:

"The cat sat on the mat"

Embedding (1536 dimensions):

[0.023, -0.891, 0.412, 0.067, -0.234, ...]

Input Text:

"A feline rested on the rug"

Embedding (1536 dimensions):

[0.019, -0.887, 0.408, 0.071, -0.229, ...]

Key insight: These two sentences mean almost the same thing, and their embeddings are very similar (even though they share no common words!).

Why This Matters

Traditional keyword search would fail to match "cat" with "feline" or "mat" with "rug". Embeddings capture semantic meaning, allowing us to find relevant content even when exact words don't match.

Embedding Space Visualization

This 2D projection shows how similar concepts cluster together in embedding space.

[Interactive scatter plot: terms cluster by topic in the 2D projection. AI/ML (Machine Learning, Artificial Intelligence, Neural Networks, Deep Learning), Food (Pizza, Pasta, Sushi, Burger), Sports (Soccer, Basketball, Tennis), and Animals (Cat, Dog, Bird) each form their own group.]

Notice: Similar concepts (e.g., "Cat" and "Dog") are close together, while unrelated concepts (e.g., "Pizza" and "Basketball") are far apart.

Measuring Similarity

Once we have embeddings, we can measure how similar two pieces of text are by calculating the distance between their embedding vectors. The closer the vectors, the more similar the meaning.

High Similarity (0.95)

"Machine learning algorithms"

"AI and ML techniques"

Very similar meaning

Low Similarity (0.23)

"Machine learning algorithms"

"Banana smoothie recipe"

Completely different topics

How It Works: Cosine Similarity

The most common metric is cosine similarity, which measures the angle between two vectors. It returns a score from -1 to 1, where:

  • 1.0 = Identical meaning (vectors point in same direction)
  • 0.0 = Unrelated (vectors are perpendicular)
  • -1.0 = Opposite meaning (rarely used in practice)

Cosine Similarity: Geometric View

Cosine similarity measures the angle between two vectors. For example, at an angle of 30° the similarity is cos(30°) ≈ 0.866, which counts as very similar.

Formula: cos(θ) = (A · B) / (||A|| × ||B||)

At 0° the vectors point in the same direction (similarity = 1.0). At 90° they're perpendicular (similarity = 0.0). At 180° they point in opposite directions (similarity = -1.0).
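If you want to see the formula in code, here's a minimal sketch using NumPy (the two toy 3-dimensional vectors are made up for illustration; real embeddings have hundreds or thousands of dimensions):

# Cosine similarity sketch with NumPy
import numpy as np

def cosine_similarity(a, b):
    # cos(θ) = (A · B) / (||A|| × ||B||)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

a = np.array([0.023, -0.891, 0.412])
b = np.array([0.019, -0.887, 0.408])
print(cosine_similarity(a, b))  # close to 1.0, i.e. very similar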

How This Enables RAG

  1. Convert your knowledge base into embeddings (documents, paragraphs, sentences)
  2. Convert the user's question into an embedding
  3. Find similar vectors using cosine similarity; these are the most relevant chunks
  4. Feed those chunks to the LLM as context for generating the answer
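Putting those four steps into code, here's a hedged sketch. The embed() function is a hypothetical stand-in for whatever embedding model you choose later, and chunks is simply a list of strings:

# Minimal retrieval sketch: embed, compare, keep the top-k chunks
# (embed() is a hypothetical stand-in for your embedding model)
import numpy as np

def cosine(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def retrieve(question, chunks, embed, k=3):
    q_vec = embed(question)
    scored = [(cosine(q_vec, embed(chunk)), chunk) for chunk in chunks]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [chunk for _, chunk in scored[:k]]

In a real system the chunk embeddings are computed once at indexing time and stored in a vector database, not re-embedded on every query.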

The Complete RAG Pipeline

Now let's put it all together. A RAG system has two phases that work together:

1. Indexing (Offline)

Prepare your documents before any user asks a question. This happens once (or when documents change).

  • Load documents from your sources (files, databases, APIs)
  • Chunk them into smaller pieces so each contains a focused idea
  • Embed each chunk into a vector that captures its meaning
  • Store vectors in a database optimized for similarity search

2. Retrieval (Online)

Answer user questions in real-time by finding and using relevant context.

  • Embed the user's question using the same model
  • Search for chunks with similar vectors (similar meaning)
  • Rerank results for higher precision (optional but recommended)
  • Generate an answer by sending the question + retrieved chunks to the LLM

The Generation Step

The final step combines everything. You construct a prompt that includes:

# Typical RAG prompt structure
System: You are a helpful assistant. Answer based only on the provided context.
Context: [retrieved chunks inserted here]
User: [original question]

The LLM reads the context and synthesizes an answer. By grounding the response in retrieved documents, we reduce hallucinations and ensure the answer reflects your actual data.
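As a hedged sketch, this is roughly how that prompt could be assembled and sent with the OpenAI Python client; the model name is illustrative, and retrieve() refers to the helper sketched earlier:

# Generation step sketch with the OpenAI Python client
# (model name is illustrative; retrieve() is the helper sketched earlier)
from openai import OpenAI

client = OpenAI()

def answer(question, chunks, embed):
    context = "\n\n".join(retrieve(question, chunks, embed, k=5))
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You are a helpful assistant. "
                                          "Answer based only on the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content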

The key insight: by embedding both documents and questions into the same vector space, we can find relevant content based on meaning, not just matching words. Use the interactive view below to see how data flows through each stage.

RAG Pipeline Architecture

[Interactive diagram: step through 11 stages grouped into the Indexing, Retrieval, and Generation phases. Step 1 (Indexing: Documents) shows raw documents being collected from various sources in formats such as PDF, TXT, HTML, and DOCX, e.g. an HR policy, a product manual, and a customer FAQ.]

Building Your RAG Stack

You've seen how RAG works. Now it's time to build one. At each stage of the pipeline, you'll need to choose tools and make tradeoffs. There's no one-size-fits-all solution.

We'll walk through each component in the order you'll encounter them: chunking (how to split documents), embedding models (how to convert text to vectors), vector databases (where to store and search), and rerankers (how to improve result quality).

1. Chunking Strategies

Before you can search your documents, you need to break them into smaller pieces called chunks. This is one of the most impactful decisions you'll make. Get it wrong and your retrieval will suffer no matter how good your embedding model is.

The goal is to create chunks that are semantically coherent (each chunk should contain a complete idea) and appropriately sized (small enough to be specific, large enough to have context).

1. Fixed-Size Chunking

Split by character/token count with overlap.

chunk_size=512, overlap=50
Simple · May break context

2. Recursive Chunking

Split by paragraphs, then sentences, then words.

separators=["\n\n", "\n", ". ", " "]
Better context · Recommended

3Semantic Chunking

Use embeddings to find natural breakpoints.

similarity_threshold=0.85
Best quality · Slower

4. Document-Aware

Respect document structure (headers, sections).

MarkdownHeaderTextSplitter()
Structure-aware · Format-specific

Pro tip: Start with recursive chunking (512-1024 tokens, 10-20% overlap). Test retrieval quality and adjust based on your specific documents.
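As a hedged example of that recommended default, here's how recursive chunking might look with LangChain's RecursiveCharacterTextSplitter (package layout and parameters can differ across versions; document_text stands in for your raw string):

# Recursive chunking sketch with LangChain's text splitter
from langchain_text_splitters import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(
    chunk_size=512,                          # target chunk size in characters
    chunk_overlap=64,                        # roughly 10-20% overlap between chunks
    separators=["\n\n", "\n", ". ", " "],    # paragraphs, then lines, sentences, words
)
chunks = splitter.split_text(document_text)  # document_text: the raw document string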

2. Embedding Models

Once you have your chunks, you need to convert them into embeddings, dense vectors of numbers that capture semantic meaning. This is where the magic happens: similar concepts end up close together in the vector space, even if they use different words.

The embedding model you choose directly impacts retrieval quality. A good model understands synonyms, context, and even cross-lingual similarity. The tradeoffs are typically between quality, speed, cost, and whether your data can leave your infrastructure.

Understanding Embedding Dimensions

Dimensions = the length of the vector (e.g., 1536 numbers). Higher dimensions can capture more nuance but:

  • More dimensions → Better semantic precision, but more storage & slower search
  • Fewer dimensions → Faster search, less storage, but may lose subtle differences
  • Sweet spot: 768-1536 dimensions works well for most use cases
Model | Dims | Best For | Cost
OpenAI text-embedding-3-large | 3072 | Highest quality, supports dimension reduction | $$
OpenAI text-embedding-3-small | 1536 | Great balance, 5x cheaper than large | $
Cohere embed-v3 | 1024 | 100+ languages, int8/binary quantization | $
Voyage AI voyage-3 | 1024 | Top MTEB scores, domain-specific variants | $$
Google text-embedding-004 | 768 | Good if already on GCP, task-specific | $
Jina embeddings-v3 | 1024 | 8K context, task-specific LoRA | $
Nomic embed-text-v1.5 | 768 | Open source, 8K context, Matryoshka | Free
BGE-M3 | 1024 | Multilingual, multi-granularity, hybrid | Free

When to use API-based

  • Quick to start, no infrastructure
  • Best quality (OpenAI, Voyage, Cohere)
  • Cost scales with usage
  • Data leaves your environment

When to self-host

  • Data privacy requirements
  • High volume (millions of embeds)
  • Need to fine-tune on your domain
  • Models: BGE-M3, Nomic, E5
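To make the API route concrete, here's a hedged sketch using the OpenAI Python client with text-embedding-3-small from the table above (batching limits and error handling omitted):

# Embedding a batch of chunks via the OpenAI API (sketch)
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def embed_chunks(chunks):
    # Returns one 1536-dimensional vector (a list of floats) per input chunk
    response = client.embeddings.create(
        model="text-embedding-3-small",
        input=chunks,
    )
    return [item.embedding for item in response.data]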

3. Vector Databases

Now you need somewhere to store your embeddings and search them efficiently. Vector databases are specialized for this. They use algorithms like HNSW or IVF to find the most similar vectors in milliseconds, even with millions of documents.

The choice here often comes down to operational requirements: Do you want a fully managed service or prefer to self-host? Do you need to integrate with an existing database? How many vectors will you store? Different options excel in different scenarios.

Cloud & Managed Solutions

Pinecone

Fully managed, serverless. Best for production without ops overhead.

Managed · Serverless · Fast

Weaviate

Open source with cloud option. Built-in hybrid search.

Hybrid · GraphQL · Modules

Qdrant

High performance, Rust-based. Excellent filtering capabilities.

Fast · Filtering · Quantization

Self-Hosted & Local Options

FAISS

Meta's library. Extremely fast, great for local/research use.

Very Fast · In-Memory · GPU

Chroma

Developer-friendly, easy setup. Perfect for prototyping.

Simple · Python · Local

Milvus

Enterprise-grade, billion-scale vectors. Kubernetes-native.

Scale · GPU · Distributed

LanceDB

Serverless, embedded. Great for edge and local apps.

Embedded · Zero Config · Disk

pgvector

PostgreSQL extension. Keep vectors with your relational data.

PostgreSQL · SQL · ACID

MongoDB Atlas

Vector search in MongoDB. Combine with your document data.

MongoDB · Managed · Hybrid

Elasticsearch

Vector search in ES 8.x. Great for existing ES users.

Full-text · Hybrid · Mature
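As a hedged local example, here's how you might index and search embeddings with FAISS. chunk_embeddings is assumed to come from the earlier embed_chunks() sketch, and vectors are normalized so that inner product equals cosine similarity:

# Indexing and searching embeddings locally with FAISS (sketch)
import numpy as np
import faiss

dim = 1536                                             # must match the embedding model
vectors = np.array(chunk_embeddings, dtype="float32")  # shape: (num_chunks, dim)
faiss.normalize_L2(vectors)                            # normalized, so inner product == cosine

index = faiss.IndexFlatIP(dim)                         # exact inner-product search
index.add(vectors)

query = np.array(embed_chunks(["What's our refund policy?"]), dtype="float32")
faiss.normalize_L2(query)
scores, ids = index.search(query, 5)                   # ids of the 5 most similar chunks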

4. Rerankers

Even with great embeddings, your initial retrieval isn't perfect. Rerankers add a second pass that looks at each candidate document more carefully. Instead of relying on vector similarity alone, they use cross-attention between the query and document to score relevance.

The typical pattern is to retrieve 20-50 candidates with fast vector search, then rerank to get the top 3-5 most relevant results. This two-stage approach gives you both speed and accuracy, the best of both worlds.

Cross-Encoders

Process query and document together through a transformer. Most accurate but slowest.

cross-encoder/ms-marco-MiniLM-L-12-v2
Most Accurate · Slow

Cohere Rerank

API-based reranking. Easy to use, great quality, handles multiple languages.

cohere.rerank(model="rerank-v3.5")
Easy API · Multilingual
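Here's a hedged sketch of the two-stage pattern using the sentence-transformers CrossEncoder with the model named above: retrieve a few dozen candidates with vector search, then keep only the best after reranking.

# Rerank vector-search candidates with a cross-encoder (sketch)
from sentence_transformers import CrossEncoder

reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-12-v2")

def rerank(query, candidates, top_n=5):
    # Score each (query, document) pair jointly, then keep the best top_n
    scores = reranker.predict([(query, doc) for doc in candidates])
    ranked = sorted(zip(scores, candidates), key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in ranked[:top_n]]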

Improving Your RAG System

A basic RAG setup will get you started, but real-world performance often requires optimization. These techniques address common failure modes and can significantly improve retrieval quality.

Hybrid Search

Combine semantic search with traditional keyword search (BM25) for better results. Semantic search understands meaning; keyword search catches exact matches.

  • Semantic: "car" finds "automobile"
  • Keyword: "BMW X5" finds the exact match
  • Combined: the best of both worlds

final_score = α × semantic_score + (1-α) × bm25_score

Typical α = 0.5 to 0.7
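A hedged sketch of that weighted blend, assuming both retrievers have already produced scores normalized to the 0-1 range:

# Hybrid scoring sketch: blend semantic and BM25 scores per document
def hybrid_scores(semantic_scores, bm25_scores, alpha=0.6):
    # semantic_scores / bm25_scores: dicts mapping doc_id -> normalized score
    doc_ids = set(semantic_scores) | set(bm25_scores)
    return {
        doc_id: alpha * semantic_scores.get(doc_id, 0.0)
                + (1 - alpha) * bm25_scores.get(doc_id, 0.0)
        for doc_id in doc_ids
    }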

Query Transformation

Transform user queries to improve retrieval. User questions are often vague or poorly phrased.

Query Expansion

Use an LLM to generate multiple search queries from one question.

Original: "How does pricing work?"
Expanded:
• "pricing plans and tiers"
• "cost structure"
• "subscription pricing"

HyDE (Hypothetical Document)

Generate a hypothetical answer, then search for similar real documents.

Query: "vacation policy"
Hypothetical: "Employees receive 15 days of paid vacation per year..."

Step-Back Prompting

Abstract the question to a higher level before searching.

Original: "Why is my Python code slow?"
Step-back: "Python performance optimization techniques"

Query Decomposition

Break complex questions into simpler sub-questions.

Complex: "Compare Q3 vs Q4 revenue by region"
Sub-queries:
• "Q3 revenue by region"
• "Q4 revenue by region"

Advanced Retrieval Patterns

These patterns go beyond simple retrieve-and-generate. They add intelligence to when and how retrieval happens, often using the LLM itself to evaluate and improve the retrieval process.

Parent Document Retriever

Index small chunks for precision, but return the larger parent document for context.

Search: Small chunks (256 tokens)
Return: Parent document (2048 tokens)

Self-RAG

LLM decides when to retrieve, evaluates relevance, and critiques its own output.

1. Decide if retrieval is needed
2. Score retrieved passages
3. Generate with reflection tokens

CRAG (Corrective RAG)

Evaluate retrieval quality and fall back to web search if documents are irrelevant.

Correct → Use retrieved docs
Ambiguous → Refine + retry
Incorrect → Web search fallback

Multi-hop Retrieval

Answer complex questions requiring information from multiple documents.

Q: "What's the CEO's alma mater's founding year?"
→ Find CEO: "John Smith"
→ Find education: "Stanford"
→ Find founding: "1885"
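As a hedged sketch of the simplest of these patterns, parent document retrieval only needs a mapping from each small chunk back to its parent. Here, retrieve() is the helper from the earlier sketch, and chunk_to_parent / parents are hypothetical mappings built at indexing time:

# Parent document retrieval sketch: search small chunks, return large parents
def retrieve_parents(question, chunks, chunk_to_parent, parents, embed, k=3):
    hits = retrieve(question, chunks, embed, k=k)      # precise small-chunk search
    parent_ids = {chunk_to_parent[chunk] for chunk in hits}
    return [parents[pid] for pid in parent_ids]        # broader context for the LLM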

Running RAG in Production

Once you've built your RAG system, you need to measure its performance and keep it running reliably. This section covers the metrics that matter, common pitfalls, and observability best practices.

Evaluation Metrics

You can't improve what you can't measure. Here are the key metrics for RAG systems:

Retrieval Metrics

  • Recall@K (higher is better): % of relevant docs in the top K results
  • MRR (Mean Reciprocal Rank): how high is the first relevant result?
  • NDCG (Normalized DCG): quality considering position and relevance
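Recall@K and MRR are simple enough to compute by hand. Here's a hedged sketch for a single query, where retrieved is the ordered list of returned doc ids and relevant is the set of ids labeled relevant:

# Recall@K and MRR sketch for a single query
def recall_at_k(retrieved, relevant, k):
    # Fraction of the relevant docs that appear in the top-k results
    hits = len(set(retrieved[:k]) & set(relevant))
    return hits / len(relevant) if relevant else 0.0

def mrr(retrieved, relevant):
    # Reciprocal rank of the first relevant result (0 if none was retrieved)
    for rank, doc_id in enumerate(retrieved, start=1):
        if doc_id in relevant:
            return 1.0 / rank
    return 0.0

Average these over your full test set to track retrieval quality as you change chunking, models, or parameters.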

Generation Metrics

  • Faithfulness (critical): is the answer grounded in the retrieved docs?
  • Relevance (important): does the answer address the question?
  • Context Precision (efficiency): how much of the retrieved context was used?

LLM-as-a-Judge

Using a powerful LLM to evaluate your RAG system's outputs. This is the most flexible and scalable approach to evaluation.

How it works

1. Collect question + context + generated answer

2. Send to judge LLM with evaluation criteria

3. LLM returns binary pass/fail judgment

4. Aggregate scores across test set

Best practices

• Use binary (pass/fail) over numeric scales

• Include few-shot examples in prompt

• Require reasoning before the verdict

# Example judge prompt (binary verdict)
prompt = """
Is the answer faithful to the context?
PASS = Every claim is supported by the context
FAIL = Contains information not in context
Context: {context}
Answer: {answer}
Reasoning: [explain step by step]
Verdict: PASS or FAIL
"""

Evaluation Tools & Frameworks

RAGAS

LLM-based RAG evaluation. Measures faithfulness, relevance, context precision.

TruLens

Feedback functions for LLM apps with built-in judges.

DeepEval

Unit testing for LLM outputs. CI/CD integration.

Phoenix

Observability + evals by Arize. Visual debugging.

Performance Optimization

RAG latency adds up quickly: embedding the query, searching vectors, reranking, and generating the response. Here are the main levers you can pull to speed things up without sacrificing quality.

Caching

  • Cache embedding results (see the sketch after these lists)
  • Cache frequent queries
  • Semantic cache for similar questions

Async Processing

  • Parallel retrievals
  • Stream LLM responses
  • Background indexing

Vector DB Tuning

  • Choose the right index (HNSW)
  • Tune ef_search parameters
  • Use filtering wisely
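For the caching lever, here's a minimal embedding-cache sketch: an in-memory dict keyed by the exact text, where embed() stands in for your embedding function. A production system would typically use Redis or a similar shared store instead.

# Minimal embedding cache sketch: never embed the same text twice
_embedding_cache = {}

def cached_embed(text, embed):
    # embed() is your embedding function; results are memoized by exact text
    if text not in _embedding_cache:
        _embedding_cache[text] = embed(text)
    return _embedding_cache[text]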

Monitoring & Observability

RAG systems have many moving parts, and problems can hide anywhere in the pipeline. Good observability means you can trace a request from query to response and quickly identify where things went wrong.

What to Track

  • Latency (P50, P95, P99)
  • Retrieval quality scores
  • Token usage / costs
  • User feedback (thumbs up/down)
  • Error rates by component

Tools

  • LangSmith: full LangChain tracing
  • Weights & Biases: experiment tracking
  • Phoenix: open source observability
  • Helicone: LLM proxy + analytics

Common Pitfalls to Avoid

❌ Don't

  • Skip evaluation because "it seems to work"
  • Use the same chunking for all doc types
  • Ignore retrieval failures silently
  • Always stuff in the maximum context
  • Deploy without monitoring

✓ Do

  • Build an evaluation dataset early
  • Test different chunking strategies
  • Add fallback responses
  • Use reranking to filter context
  • Log everything, analyze weekly

Need Help Building Production RAG?

I help startups and scale-ups build production-ready RAG systems with the evaluation rigor and operational discipline it takes to ship reliably at scale.

Get in Touch