GraphRAG vs Vector-Based RAG: Who Wins? An In-Depth Comparison Using RAGAS Metrics

deepak kumar
16 min read · Oct 27, 2024
An LLM-generated knowledge graph built using GPT-4 Turbo. Source: Microsoft

With the advent of LLMs, RAG has become the go-to method for applying LLMs to datasets they were not trained on. Whether the data is private or domain-specific, RAG lets us solve many problems with accuracy and promising results.

Let's briefly understand Vector Retrieval-Augmented Generation (RAG):

Traditional RAG relies on the power of vector embeddings and semantic similarity. Let's look at the key factors that impact vector RAG accuracy.

1. User Query Embedding: The user query is converted into a vector embedding using an embedding model, which can be open source (e.g., BAAI General Embedding) or closed source (e.g., OpenAI's GPT-based embeddings). The embedding size is crucial because the text's information is compressed into this fixed dimensionality, so a larger embedding size generally means less information loss.

2. Vector Search: The query embedding is used to retrieve relevant data from a vector database, where documents have already been embedded and stored, ranked by semantic similarity to the query.
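The two steps above can be sketched in a few lines of Python. Note this is a toy illustration, not a production setup: the `embed` function below is a hashing bag-of-words stand-in for a real embedding model (such as BAAI/bge or OpenAI embeddings), and the in-memory list stands in for a real vector database. The document texts and the similarity function are illustrative assumptions.

```python
import math
import zlib


def embed(text: str, dim: int = 64) -> list[float]:
    """Toy hashing bag-of-words embedding, L2-normalized.
    A real system would call a trained embedding model here."""
    vec = [0.0] * dim
    for token in text.lower().split():
        vec[zlib.crc32(token.encode()) % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]


def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are already unit-length, so the dot product is cosine similarity.
    return sum(x * y for x, y in zip(a, b))


# Stand-in "vector database": documents embedded and stored ahead of time.
docs = [
    "RAG retrieves documents to ground LLM answers",
    "Knowledge graphs model entities and relations",
    "Vector databases store embeddings for similarity search",
]
index = [(doc, embed(doc)) for doc in docs]

# Step 1: embed the user query.
query = "how does similarity search over embeddings work"
q_vec = embed(query)

# Step 2: rank stored vectors by semantic similarity to the query.
ranked = sorted(index, key=lambda pair: cosine(q_vec, pair[1]), reverse=True)
print(ranked[0][0])  # most relevant document
```

In practice the hashing trick above would be replaced by a model call, and the linear scan by an approximate-nearest-neighbor index (e.g., HNSW) inside the vector database, but the query-embed-then-rank flow is the same.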
