WORKSHOP: MY AI IS BETTER THAN YOURS WS25
MODELS: FINE-TUNING
Lecturers: Kim Albrecht, Lars Christian Schmidt, Yağmur Uçkunkaya
Winter 2025
Model Type: Transfer Learning
What it does: Customizes a pre-trained AI model for a new task using your own small dataset.
Media: Image, Text, Sound (depending on the chosen model)
Fine-tuning is like taking a model that already knows a lot, and teaching it something new. Imagine a model trained on millions of images — it already understands shapes, objects, styles. With fine-tuning, you can show it your own images and make it specialize in your unique style, subject, or task.
How It Works (Simple Version)
You start with a pretrained model — for example:
- An image model that knows how to recognize 1,000 everyday objects
- A text model that knows how to generate general English sentences
- A diffusion model that knows how to generate images from text
 
You feed it your own small dataset (10–100 examples), and it updates just enough to specialize in your data — while still relying on everything it already knows.
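As a concrete sketch of what this looks like in code (Python with PyTorch and torchvision, which are assumed to be installed; the folder data/my_classes/ with one subfolder per class is a made-up example), the snippet below freezes a model pretrained on ImageNet and trains only a small new final layer on your own images:

```python
# Minimal transfer-learning sketch: reuse a pretrained MobileNetV2, train only a new head.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Load a model pretrained on ImageNet -- it already "knows" everyday objects.
model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)

# Freeze the existing knowledge; only the new classification layer will learn.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer so it predicts *your* classes instead of 1,000 ImageNet ones.
num_my_classes = 2  # hypothetical: "my doodles" vs. "everything else"
model.classifier[1] = nn.Linear(model.last_channel, num_my_classes)

# A small, consistent dataset: resized and normalized the same way as the pretraining data.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
dataset = datasets.ImageFolder("data/my_classes", transform=preprocess)  # assumed folder layout
loader = torch.utils.data.DataLoader(dataset, batch_size=8, shuffle=True)

# Train only the new layer for a few passes over the small dataset.
optimizer = torch.optim.Adam(model.classifier[1].parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```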
To-dos
1. Pick a model to fine-tune
Choose what kind of thing you want to teach it (a quick way to load and baseline-test a model is sketched after this list):
- An image generator (like Stable Diffusion)
- A text model (like GPT-2)
- A classifier (like MobileNet)
 
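Before committing to a model, it helps to load it and look at its default behavior, so you have a baseline to compare against after fine-tuning. A minimal sketch with Hugging Face transformers (assumed installed; "gpt2" is the small public checkpoint, and the prompt is just an example):

```python
# Load a pretrained text model and generate with it *before* fine-tuning,
# so you have a baseline to compare the fine-tuned version against.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = tokenizer("My drawings are about", return_tensors="pt")  # example prompt
output = model.generate(**prompt, max_new_tokens=30, do_sample=True)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```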
2. Collect your dataset
- Typically 10–100 images or text examples are enough
- The examples should be as consistent as possible in size, format, and labeling (one way to keep a small text dataset consistent is sketched after this list)
- Example: 20 portraits of yourself, or 50 scanned doodles
 
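One possible way to keep a small text dataset consistent is a single JSONL file with one example per line; the file name my_dataset.jsonl and the "text" field below are illustrative choices, not a required format:

```python
# Write ~10-100 short, similarly formatted examples to one JSONL file,
# then load it back with Hugging Face `datasets` to check that everything parses.
import json
from datasets import load_dataset

examples = [
    "A doodle of a cat sleeping on a laptop.",
    "A doodle of a plant growing out of a teacup.",
    # ... more short lines in the same style
]

with open("my_dataset.jsonl", "w", encoding="utf-8") as f:
    for text in examples:
        f.write(json.dumps({"text": text}) + "\n")

dataset = load_dataset("json", data_files="my_dataset.jsonl", split="train")
print(dataset[0])  # {'text': 'A doodle of a cat sleeping on a laptop.'}
```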
3. Fine-tune the model
- You’ll use a Colab notebook or tools like Hugging Face AutoTrain (a rough code sketch of what happens under the hood follows this list)
- You don’t need to code, but you’ll follow setup steps
- Training usually takes 20–60 minutes
 
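If you do want to see what the notebook is doing under the hood, here is a rough sketch of a GPT-2 fine-tuning run with the Hugging Face Trainer (transformers and datasets assumed installed, as in a typical Colab; the output folder, batch size, and epoch count are placeholder values):

```python
# Rough fine-tuning sketch: GPT-2 on the small JSONL dataset from the previous step.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no padding token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

dataset = load_dataset("json", data_files="my_dataset.jsonl", split="train")
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
    remove_columns=dataset.column_names,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-finetuned",
                           num_train_epochs=3,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("gpt2-finetuned")  # the fine-tuned copy, saved alongside the original
```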
4. Test and reflect
- What changed in the model’s behavior?
- What did it learn? What did it forget?
- Try different prompts or inputs to explore the result (a before/after comparison is sketched after this list)
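One simple way to see what changed is to run the same prompt through the original checkpoint and your fine-tuned copy side by side (this assumes the gpt2-finetuned folder produced by the training sketch above):

```python
# Side-by-side check: same prompt, original model vs. fine-tuned model.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
prompt = tokenizer("A doodle of", return_tensors="pt")  # example prompt

for name in ["gpt2", "gpt2-finetuned"]:
    model = AutoModelForCausalLM.from_pretrained(name)
    out = model.generate(**prompt, max_new_tokens=30, do_sample=True)
    print(name, "->", tokenizer.decode(out[0], skip_special_tokens=True))
```

Whatever the fine-tuned model now does differently is what it learned; whatever it can no longer do as well is what it forgot.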