Training and fine-tuning LLMs

Learn to harness the power of LLMs with our comprehensive course. Discover the importance and history of LLMs, and explore their architecture, training techniques, and fine-tuning methods. Gain hands-on experience with practical recipes from Jonathan Frankle (MosaicML) and other industry leaders, and learn cutting-edge techniques like LoRA and Prefix Tuning. Perfect for machine learning engineers, data scientists, researchers, and NLP enthusiasts. Stay ahead of the curve and become an expert in LLMs.
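As a taste of the LoRA technique mentioned above, here is a minimal sketch of the core idea: the pretrained weight matrix stays frozen, and only a low-rank update is trained. All shapes and names below are illustrative assumptions, not the course's actual code.

```python
import numpy as np

# LoRA (Low-Rank Adaptation) sketch: instead of updating the full weight
# matrix W (d_out x d_in), train two small factors A (r x d_in) and
# B (d_out x r) with rank r << min(d_out, d_in). The effective weight
# becomes W + (alpha / r) * B @ A.

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 64, 64, 4, 8   # toy sizes, chosen for illustration

W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable low-rank factor
B = np.zeros((d_out, r))                # zero-initialized, so the adapter
                                        # starts as a no-op

def lora_forward(x):
    """Forward pass with the low-rank update folded into the weight."""
    return x @ (W + (alpha / r) * B @ A).T

x = rng.normal(size=(1, d_in))
y = lora_forward(x)

# Trainable parameters drop from d_out*d_in to r*(d_in + d_out):
full_params = d_out * d_in        # 4096
lora_params = r * (d_in + d_out)  # 512
```

Because B starts at zero, the adapted model initially behaves exactly like the frozen one, and fine-tuning only has to learn the small A and B matrices.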
4 Hours
Free

Learnings & outcomes

  • Learn the fundamentals of large language models
  • Curate a dataset and establish an evaluation approach
  • Master training and fine-tuning techniques

Curriculum

  • Foundations
  • Evaluation
  • Data
  • Training & fine-tuning techniques
  • Course assessment and next steps
In partnership with
Very interesting course!
I liked the course overall; it contains a lot of information about LLMs. I'm an NLP researcher and want to know more about the details of LLMs, and this course definitely helped me. Unfortunately, I was unable to run the code locally due to an out-of-memory issue.
Great
Amazing Free Course.
I learned a lot of invaluable insights from this course. In particular, Jonathan Frankle from MosaicML gives excellent tips about training LLMs. The two things I agree with the most are: start small, and don't start training if you don't have an evaluation dataset. Thanks a lot for offering this amazing course!
Course instructors

Darek Kłeczek

MLE, Weights & Biases
Darek Kłeczek is a Machine Learning Engineer at Weights & Biases, where he leads the W&B education program. Previously, he applied machine learning across supply chain, manufacturing, legal, and commercial use cases. He also worked on operationalizing machine learning at P&G. Darek contributed the first Polish versions of BERT and GPT language models and is a Kaggle Competition Grandmaster.