Retrieval-Augmented Generation (RAG) and LLM

Cevher Dogan
Published in Self Study Notes
4 min read · Dec 10, 2024

Following up on my recent post, Participation in GitHub Projects.

Below is a structured guide for a deep dive into Retrieval-Augmented Generation (RAG) and Large Language Models (LLMs). The focus is on foundational understanding, implementation practices, and hands-on experimentation.

Photo by Kilian Seiler on Unsplash

1. Understanding RAG (Retrieval-Augmented Generation)

Key Concept:

RAG combines:

  • Retrievers (to fetch relevant data from external sources like knowledge bases or documents).
  • Generators (to create coherent and context-aware outputs using retrieved information).

This bridges the gap between retrieval-based and generative systems, making RAG suitable for knowledge-intensive tasks.
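The retrieve-then-generate flow can be sketched in a few lines. This is a toy illustration, not a production implementation: the keyword-overlap `retrieve` stands in for a learned dense retriever, and `generate` merely builds the augmented prompt that a real LLM would receive (all names here are illustrative).

```python
# Toy RAG loop: retrieve relevant documents, then feed them to a generator.
# In a real system the retriever is a dense model (e.g. DPR) and the
# generator is an LLM; both are replaced by simple stand-ins below.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Score documents by word overlap with the query; return the top k."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def generate(query: str, context: list[str]) -> str:
    """Stand-in for an LLM call: assemble the augmented prompt it would see."""
    ctx = "\n".join(f"- {c}" for c in context)
    return f"Context:\n{ctx}\n\nQuestion: {query}\nAnswer:"

docs = [
    "RAG combines a retriever with a generator.",
    "Dense Passage Retrieval encodes queries and passages into vectors.",
    "An unrelated note about gardening.",
]
query = "What does RAG combine?"
prompt = generate(query, retrieve(query, docs))
print(prompt)
```

The key design point is that the generator never answers from parametric memory alone: every response is conditioned on the retrieved passages, which is what makes RAG suitable for knowledge-intensive tasks.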

A. Key Papers to Read

  1. Lewis et al. (2020): Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks.
  • Introduces RAG. Uses Dense Passage Retrieval (DPR) with BART to generate evidence-based responses. Paper
  2. Karpukhin et al. (2020): Dense Passage Retrieval for Open-Domain Question Answering.
  • Focuses on the retriever component, optimizing retrieval through dense embeddings…
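The core idea behind DPR can be illustrated with a small sketch: queries and passages are encoded into vectors, and passages are ranked by dot-product similarity. The tiny hand-written 3-d vectors below are hypothetical stand-ins for learned embeddings (real DPR uses 768-d BERT encodings).

```python
# Dense-retrieval sketch: rank passages by dot-product similarity with the
# query vector, as DPR does with its learned query and passage encoders.

def dot(a: list[float], b: list[float]) -> float:
    """Dot product of two equal-length vectors."""
    return sum(x * y for x, y in zip(a, b))

def rank_passages(query_vec: list[float],
                  passage_vecs: list[list[float]]) -> list[int]:
    """Return passage indices sorted by similarity, most similar first."""
    scores = [(dot(query_vec, p), i) for i, p in enumerate(passage_vecs)]
    return [i for _, i in sorted(scores, reverse=True)]

# Hypothetical embeddings; in DPR these come from two trained BERT encoders.
query = [0.9, 0.1, 0.0]
passages = [
    [0.1, 0.9, 0.0],  # off-topic
    [0.8, 0.2, 0.1],  # relevant to the query
    [0.0, 0.0, 1.0],  # off-topic
]
print(rank_passages(query, passages))  # the relevant passage ranks first
```

In practice the passage vectors are precomputed and stored in an approximate nearest-neighbor index (e.g. FAISS) so that retrieval over millions of passages stays fast.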


