Intro

Hello, I am Thomas Capelle 😎

I am an ML Engineer @ Weights and Biases, making machine learning better for everyone!


🗺 Location

I live in a city called Chambéry, in the French Alps. I like the mountains and enjoy the lake in summer.

Reports
Training Tiny Llamas for Fun—and Science
Exploring how the softmax implementation can impact model performance, using Karpathy's Tiny Llama implementation.
3557 views
Last edit 2 years ago
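As a quick illustration of why softmax implementation details matter (a minimal sketch of my own, not code from the report), here is the naive form next to the numerically stable max-subtraction form in PyTorch:

```python
import torch

def naive_softmax(x: torch.Tensor, dim: int = -1) -> torch.Tensor:
    # Direct exponentiation can overflow for large logits.
    e = torch.exp(x)
    return e / e.sum(dim=dim, keepdim=True)

def stable_softmax(x: torch.Tensor, dim: int = -1) -> torch.Tensor:
    # Subtracting the per-row max keeps exp() in a safe range
    # without changing the result.
    x = x - x.max(dim=dim, keepdim=True).values
    e = torch.exp(x)
    return e / e.sum(dim=dim, keepdim=True)

logits = torch.tensor([[1000.0, 1001.0, 1002.0]])
print(naive_softmax(logits))   # nan, nan, nan (overflow)
print(stable_softmax(logits))  # roughly [0.09, 0.24, 0.67]
```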
How to Run LLMs Locally With llama.cpp and GGML
This article explores how to run LLMs locally on your computer using llama.cpp, a repository that lets you run a model on consumer hardware in no time.
19182 views
Last edit 1 year ago
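For a taste of what running a model locally looks like, here is a minimal sketch using the llama-cpp-python bindings; the model path below is a placeholder for whatever quantized GGUF/GGML file you have downloaded:

```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Placeholder path: point it at any quantized model file on disk.
llm = Llama(model_path="./models/llama-2-7b.Q4_K_M.gguf", n_ctx=2048)

out = llm("Q: Name the planets in the solar system. A:", max_tokens=64, stop=["Q:"])
print(out["choices"][0]["text"])
```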
PyTorch Runs On the GPU of Apple M1 Macs Now! - Announcement With Code Samples
Let's try PyTorch's new Metal backend on Apple Macs equipped with M1 processors!
20833 views
Last edit 2 years ago
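A minimal sketch of what the Metal backend looks like from PyTorch (1.12 or later): select the "mps" device when it is available and tensor ops run on the Apple GPU:

```python
import torch

# The "mps" device targets Apple's Metal backend on M1/M2 Macs.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

x = torch.randn(1024, 1024, device=device)
w = torch.randn(1024, 1024, device=device)
y = x @ w  # runs on the GPU when the MPS backend is available
print(y.device)
```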
How To Train a Conditional Diffusion Model From Scratch
In this article, we look at how to train a conditional diffusion model and find out what you can learn by doing so, using W&B to log and track our experiments.
90473 views
Last edit 2 years ago
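As a rough illustration of class conditioning (a toy block of my own, not the report's actual architecture), the idea is to embed the timestep and the class label and add them to the feature maps:

```python
import torch
import torch.nn as nn

class ConditionedBlock(nn.Module):
    """Toy block: condition on the timestep and a class label by adding
    their embeddings channel-wise to the feature map."""
    def __init__(self, channels: int, num_classes: int, emb_dim: int = 128):
        super().__init__()
        self.time_emb = nn.Sequential(
            nn.Linear(1, emb_dim), nn.SiLU(), nn.Linear(emb_dim, channels)
        )
        self.class_emb = nn.Embedding(num_classes, channels)
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x, t, y):
        cond = self.time_emb(t[:, None].float()) + self.class_emb(y)
        return self.conv(x + cond[:, :, None, None])

block = ConditionedBlock(channels=64, num_classes=10)
x = torch.randn(8, 64, 32, 32)          # noisy images
t = torch.randint(0, 1000, (8,))        # diffusion timesteps
y = torch.randint(0, 10, (8,))          # class labels
print(block(x, t, y).shape)             # torch.Size([8, 64, 32, 32])
```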
Testing GPT-3.5 vs. GPT-4: Which Model Writes Better Code?
In this article, we compare outputs from GPT-3.5-turbo and GPT-4, and explore how to use GPT-4 as a code assistant, using a simple CLI, termGPT, to access the models.
11843 views
Last edit 2 years ago
Translating Weights & Biases' Documentation with GPT-4
In this article, we explore how to create an automated translation tool powered by LangChain and GPT-4 to help bring your website to international audiences.
1602 views
Last edit 2 years ago
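A minimal sketch of this kind of translation chain, assuming a recent LangChain release (langchain-openai plus LCEL); the prompt wording and model name are placeholders, not the report's exact setup:

```python
# pip install langchain-openai langchain-core
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

# Placeholder prompt: keep code blocks and links untouched, translate the rest.
prompt = ChatPromptTemplate.from_messages([
    ("system",
     "You are a technical translator. Translate the user's Markdown to "
     "{language}, keeping code blocks and links unchanged."),
    ("user", "{text}"),
])

chain = prompt | ChatOpenAI(model="gpt-4", temperature=0)

doc = "# Quickstart\nInstall the library with `pip install wandb`."
print(chain.invoke({"language": "French", "text": doc}).content)
```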
termGPT: Interacting with OpenAI's ChatGPT in your terminal
Let's build a minimal app to interact with ChatGPT without leaving the terminal.
597 views
Last edit 2 years ago
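A minimal sketch of such a terminal loop using the current openai Python client (v1+); termGPT itself predates this client, so the actual code may look different:

```python
# pip install openai  (expects OPENAI_API_KEY in the environment)
from openai import OpenAI

client = OpenAI()
history = [{"role": "system", "content": "You are a helpful assistant."}]

while True:
    user = input("you> ")
    if user.strip() in {"exit", "quit"}:
        break
    history.append({"role": "user", "content": user})
    resp = client.chat.completions.create(model="gpt-3.5-turbo", messages=history)
    answer = resp.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    print(answer)
```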
GTC: Diffusion on the Clouds
Here you will find all the relevant information to get you started on training a diffusion model for solar energy forecasting.
1841 views
Last edit 10 months ago
Is the New M2 Pro Mac Mini a Deep Learning Workstation?
In this article, we explore whether the recent addition of the M2 Pro chip to the Apple Mac Mini family works as a replacement for your power-hungry workstation.
26986 views
Last edit 2 years ago
How to Fine-tune an LLM Part 3: The HuggingFace Trainer
Exploring how to get the best out of the Hugging Face Trainer and its subclasses.
39158 views
Last edit 1 year ago
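A minimal sketch of wiring the Trainer to W&B; the gpt2 model and the toy dataset below are placeholders, not what the report actually fine-tunes:

```python
# pip install transformers datasets wandb
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # placeholder model
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")

def tokenize(batch):
    out = tokenizer(batch["text"], padding="max_length",
                    truncation=True, max_length=64)
    out["labels"] = out["input_ids"].copy()  # causal LM: labels are the inputs
    return out

train_ds = Dataset.from_dict(
    {"text": ["Hello world.", "Weights & Biases tracks ML experiments."]}
).map(tokenize, batched=True, remove_columns=["text"])

args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=2,
    num_train_epochs=1,
    logging_steps=1,
    report_to="wandb",  # stream training metrics to Weights & Biases
)

Trainer(model=model, args=args, train_dataset=train_ds).train()
```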
How to Fine-Tune an LLM Part 1: Preparing a Dataset for Instruction Tuning
Learn how to fine-tune an LLM on an instruction dataset! We'll cover how to format the data and train a model like Llama 2 or Mistral in this minimal example in (almost) pure PyTorch.
72444 views
Last edit 9 months ago
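A minimal sketch of the formatting step, using an Alpaca-style template as an example; the template and field names here are illustrative, not necessarily the ones used in the report:

```python
# Alpaca-style prompt template (illustrative).
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n{response}"
)

def format_example(example: dict) -> str:
    # Turn one instruction/response pair into a single training string.
    return ALPACA_TEMPLATE.format(
        instruction=example["instruction"], response=example["output"]
    )

example = {"instruction": "Translate 'bonjour' to English.", "output": "hello"}
print(format_example(example))
```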
How to Fine-Tune an LLM Part 2: Instruction Tuning Llama 2
In Part 1, we prepped our dataset. In Part 2, we train our model.
26103 views
Last edit 1 year ago
Projects

Activity
(contribution calendar: Nov–Oct)

Runs
(runs table: Name, Project, State, Created; no rows found)