Fast Fine Tuning with Unsloth
Description
Discover how to fine-tune LLMs at blazing speeds on Windows and Linux! If you've been jealous of MLX's performance on Mac, Unsloth is the game-changing solution you've been waiting for.
In this video, you'll learn:
• How to set up Unsloth for lightning-fast model fine-tuning
• Step-by-step tutorial from Colab notebook to production script (a minimal code sketch follows this list)
• Tips for efficient fine-tuning on NVIDIA GPUs
• How to export your models directly to Ollama (see the export sketch after the requirements list)
• Common pitfalls and how to avoid them
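To make that concrete, here is a minimal fine-tuning sketch in the spirit of Unsloth's official Colab notebooks. The base model, dataset, prompt template, and hyperparameters below are placeholder assumptions, so swap in your own.

# Minimal Unsloth fine-tuning sketch (assumes: pip install unsloth, plus the
# placeholder model and dataset names below -- replace them with your own).
import torch
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

# Load a 4-bit quantized base model through Unsloth's patched loader.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",  # assumed model; use any supported base
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of the weights are trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    use_gradient_checkpointing="unsloth",
)

# Assumed instruction dataset; flatten each row into a single "text" field.
dataset = load_dataset("yahma/alpaca-cleaned", split="train")
def to_text(row):
    return {"text": f"### Instruction:\n{row['instruction']}\n\n### Response:\n{row['output']}"}
dataset = dataset.map(to_text)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=60,  # tiny run just to verify the pipeline works
        learning_rate=2e-4,
        fp16=not torch.cuda.is_bf16_supported(),
        bf16=torch.cuda.is_bf16_supported(),
        logging_steps=1,
        output_dir="outputs",
    ),
)
trainer.train()

Running the same script outside Colab only requires a local Python environment and a CUDA-capable GPU that meet the requirements below.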
Requirements:
• NVIDIA GPU (CUDA Compute Capability 7.0+)
• Python 3.10-3.12
• 8GB+ VRAM
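Once training finishes, the fine-tuned model from the sketch above can be exported for Ollama. The quantization method and the generated .gguf filename here are assumptions; check Unsloth's docs and the output folder for the exact name your version produces.

# Export the merged model as GGUF for Ollama (continues from the sketch above).
# Quantization method and output filename are assumptions; verify locally.
model.save_pretrained_gguf("model", tokenizer, quantization_method="q4_k_m")

# Then point an Ollama Modelfile at the generated .gguf and create the model:
#   echo 'FROM ./model/unsloth.Q4_K_M.gguf' > Modelfile
#   ollama create my-finetune -f Modelfile
#   ollama run my-finetune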
Links Mentioned:
#MachineLearning #LLM #AIEngineering
My Links
Subscribe (free): https://www.youtube.com/technovangelist
Join and Support: https://www.youtube.com/channel/UCHaF9kM2wn8C3CLRwLkC2GQ/join
Newsletter: https://technovangelist.substack.com/subscribe
Twitter: https://www.twitter.com/technovangelist
Discord: https://discord.gg/uS4gJMCRH2
Patreon: https://patreon.com/technovangelist
Instagram: https://www.instagram.com/technovangelist/
Threads: https://www.threads.net/@technovangelist?xmt=AQGzoMzVWwEq8qrkEGV8xEpbZ1FIcTl8Dhx9VpF1bkSBQp4
LinkedIn: https://www.linkedin.com/in/technovangelist/
All Source Code: https://github.com/technovangelist/videoprojects
Want to sponsor this channel? Let me know what your plans are here: https://www.technovangelist.com/sponsor