A Guide to Cost-Effectively Fine-Tuning Mistral

[Image: Mistral vs. Llama 2 performance benchmarks]

In this guide, we run the newly released Mistral 7B, which outperforms Llama 2 13B on all tested benchmarks, and then use QLoRA (Quantized Low-Rank Adaptation) to cost-effectively fine-tune Mistral 7B on a Hugging Face dataset.
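To give a rough picture of what the notebook does, here is a minimal sketch of QLoRA setup using the Hugging Face transformers, bitsandbytes, and peft libraries. The model ID is the public Mistral 7B checkpoint; the specific hyperparameters (rank, alpha, target modules, etc.) are illustrative assumptions, not necessarily the guide's exact settings.

```python
# Minimal QLoRA sketch: load Mistral 7B in 4-bit and attach LoRA adapters.
# Hyperparameters here are illustrative, not the guide's exact settings.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "mistralai/Mistral-7B-v0.1"

# 4-bit NF4 quantization config -- the "Q" in QLoRA
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Low-rank adapters -- the "LoRA" part; only these small matrices are trained
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # trains only a small fraction of the 7B weights
```

From here, the quantized base model stays frozen while the adapters are trained with a standard Trainer loop, which is what keeps the GPU memory and cost low.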

Click here for the Jupyter Notebook. Click here to read the full guide on Brev.dev, which explains QLoRA and how it reduces the computational cost of fine-tuning!