3 notes tagged with "llm"

Chunking in RAG

We'd get the best answer to our question in RAG if we input the entire document collection as the context for the prompt. But this is expensive (a large token count for the prompt). So to optimize this, we use only the parts of the documents that are relevant to the question as the context.
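A minimal sketch of one common chunking strategy: fixed-size chunks with overlap, so each chunk stays small enough for a prompt while neighboring chunks share some context. The sizes here are illustrative assumptions, not values from the note.

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping fixed-size chunks.

    The overlap keeps sentences that straddle a chunk boundary
    from losing their surrounding context entirely.
    """
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
    return chunks
```

At query time, only the chunks most relevant to the question are sent as context, keeping the prompt's token count small.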

Tagged With: #permanent-notes #rag #llm

Published on Jun 28, 2024

RAG: Retrieval-Augmented Generation

If you want to use LLMs with your own data, you can do it using RAG. You'll have to create a RAG Pipeline...
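A toy sketch of the retrieve-then-augment steps of a RAG pipeline. Every helper here is a hypothetical stand-in: a real pipeline would use an embedding model and a vector store for retrieval and an LLM for generation, but naive word overlap is enough to show the shape.

```python
def retrieve(question: str, documents: list[str], top_k: int = 1) -> list[str]:
    """Rank documents by naive word overlap with the question (toy retriever)."""
    q_words = set(question.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]


def build_prompt(question: str, context: list[str]) -> str:
    """Augment the prompt with the retrieved context before generation."""
    return "Context:\n" + "\n".join(context) + f"\n\nQuestion: {question}"
```

The augmented prompt is then sent to the LLM, which answers using your own data rather than only its training data.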

Tagged With: #permanent-notes #llm #rag

Published on Jun 24, 2024

Vector Database

In the context of LLMs, a vector is an array of numbers. It represents a point in an n-dimensional space, where n is the length of the array.
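A vector database finds points that are close to each other in that n-dimensional space. A minimal sketch of one common distance measure, cosine similarity, using plain lists as vectors (the vectors here are illustrative, not real embeddings):

```python
import math


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Similarity of two points in n-dimensional space:
    1.0 = same direction, 0.0 = orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)
```

In RAG, document chunks are embedded as vectors, and the chunks whose vectors are most similar to the question's vector are retrieved as context.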

Tagged With: #permanent-notes #llm #rag

Published on Jun 23, 2024