Training

Fine-tuning

Updating a model's weights on your own data so it specializes in your task.

Fine-tuning takes a pre-trained model and continues training it on your task-specific data. In 2026 this almost always means parameter-efficient methods such as LoRA or QLoRA, which update a small fraction of the weights and ship as lightweight adapters rather than full model copies. Full fine-tunes are reserved for cases that need deep adaptation: specialty domains, new languages, or compliance requirements. Distillation from a frontier model into a smaller production target is the most common production fine-tuning pattern.
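The core idea behind LoRA can be sketched in a few lines: the pre-trained weight matrix stays frozen, and only a pair of small low-rank factors is trained. A minimal NumPy sketch, with hypothetical layer sizes and rank (the names `W`, `A`, `B`, and the dimensions are illustrative, not from any specific library):

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, r = 512, 512, 8  # hypothetical layer sizes and LoRA rank

# Frozen pre-trained weight: never updated during fine-tuning.
W = rng.normal(size=(d_out, d_in))

# LoRA adapter: low-rank factors A and B. B starts at zero, so the
# adapted layer initially matches the base layer exactly.
A = rng.normal(size=(r, d_in)) * 0.01
B = np.zeros((d_out, r))
scaling = 2.0  # alpha / r, a common scaling convention

def base_forward(x):
    return x @ W.T

def lora_forward(x):
    # Base path plus the low-rank update (B @ A) applied to x.
    return x @ W.T + (x @ A.T) @ B.T * scaling

x = rng.normal(size=(4, d_in))

# With B zeroed, the adapter is a no-op at initialization.
assert np.allclose(base_forward(x), lora_forward(x))

full_params = W.size
adapter_params = A.size + B.size
print(f"full: {full_params:,}  adapter: {adapter_params:,} "
      f"({adapter_params / full_params:.1%} of the layer)")
```

Only `A` and `B` receive gradients during training, which is why the trainable footprint is a few percent of the layer and the result can ship as an adapter file alongside the unchanged base model.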
