WORKSHOP: MY AI IS BETTER THAN YOURS WS25
MODELS: RAG
Lecturers: Kim Albrecht, Lars Christian Schmidt, Yağmur Uçkunkaya
Winter 2025
What it does: Grounds an LLM’s responses in your curated library of text documents.
Media: Text
RAG (Retrieval-Augmented Generation) is a framework that connects an existing large language model (LLM) to a knowledge base you curate. Instead of retraining or fine-tuning, you hand it your own resources and the LLM grounds every response in your curated library.
Workings
1. Collect documents
- Choose what you would like to ground your LLM in: research notes, dream diaries, film scripts, poems, archives, etc.
2. Build a knowledge base
- Embed them into a searchable index so the system can retrieve relevant passages.
3. Connect to an LLM
- Run it locally (e.g., with Ollama).
4. Generate responses
- The LLM draws only from your curated library, grounding its answers in your sources.
5. Extend creatively
- Use the responses as inputs for other text-based generative processes or interactive events.
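The steps above can be sketched in a few lines of Python. This is a minimal, self-contained illustration: the toy bag-of-words embedding and the example documents are stand-ins for a real embedding model (e.g. one served by Ollama), and the final prompt would be sent to your locally running LLM rather than printed.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding" (word counts). A real RAG setup
    # would call an embedding model instead, e.g. via Ollama.
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Steps 1-2: collect documents and build a searchable index.
library = [
    "The dream diary entry describes a city made of glass.",
    "The film script opens on a rainy harbor at dawn.",
    "Research notes on coral bleaching in warming oceans.",
]
index = [(doc, embed(doc)) for doc in library]

def retrieve(query, k=2):
    # Rank passages by similarity to the query; return the top k.
    q = embed(query)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

def build_prompt(query):
    # Steps 3-4: the retrieved passages become the only context the
    # local LLM is allowed to answer from.
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("What happens in the film script?"))
```

In practice you would swap `embed` for a real embedding model and pass the assembled prompt to a local LLM (for example through Ollama's HTTP API); the retrieval and prompt-assembly logic stays the same.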
Why work with RAG?
- Shape the LLM by curating its library without fine-tuning.
- You get grounded answers tied directly to your chosen resources.
- It's private and offline when connected with a local LLM.
- Experiment with curating your own “mini-LLM” from personal or artistic archives.
You get:
- Your own continuously updatable, locally running mini-LLM, shaped by the collection of documents you curate.