Intro
Hello, I am Thomas Capelle 😎
🗺 Location
I live in Chambéry, a city in the French Alps. I like the mountains and enjoy the lake in summer.

Reports
Training Tiny Llamas for Fun—and Science
Exploring how the softmax implementation can impact model performance, using Karpathy's tiny Llama implementation.
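A hedged illustration of the kind of detail at stake, not the report's actual experiment: a naive softmax overflows for large logits, while the standard max-shifted version stays finite.

```python
import torch

def naive_softmax(x):
    # exp() overflows to inf for large logits, giving inf/inf = nan
    e = torch.exp(x)
    return e / e.sum(dim=-1, keepdim=True)

def stable_softmax(x):
    # subtracting the row max leaves the result unchanged but avoids overflow
    e = torch.exp(x - x.max(dim=-1, keepdim=True).values)
    return e / e.sum(dim=-1, keepdim=True)

logits = torch.tensor([1000.0, 1001.0, 1002.0])
print(naive_softmax(logits))   # tensor([nan, nan, nan])
print(stable_softmax(logits))  # tensor([0.0900, 0.2447, 0.6652])
```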
How to Run LLMs Locally With llama.cpp and GGML
This article explores how to run LLMs locally on your own computer using llama.cpp, a project that gets a model running on consumer hardware in no time.
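For a taste, here is a minimal sketch using the llama-cpp-python bindings rather than the raw CLI the article centers on; the model path is a placeholder for whatever quantized checkpoint you download.

```python
from llama_cpp import Llama

# Placeholder path: point this at any quantized GGML/GGUF checkpoint you have
llm = Llama(model_path="./models/llama-2-7b.Q4_0.gguf")

out = llm("Q: Name the planets in the solar system. A:", max_tokens=64, stop=["Q:"])
print(out["choices"][0]["text"])
```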
PyTorch Runs On the GPU of Apple M1 Macs Now! - Announcement With Code Samples
Let's try PyTorch's new Metal backend on Apple Macs equipped with M1 processors!
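The gist, as a minimal check (PyTorch 1.12+ on an Apple-silicon Mac):

```python
import torch

# The Metal backend shows up as the "mps" device
assert torch.backends.mps.is_available(), "MPS backend not available"

device = torch.device("mps")
x = torch.randn(1024, 1024, device=device)
y = x @ x  # this matmul runs on the Apple GPU
print(y.device)  # mps:0
```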
How To Train a Conditional Diffusion Model From Scratch
In this article, we look at how to train a conditional diffusion model and find out what you can learn by doing so, using W&B to log and track our experiments.
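The tracking side boils down to a few W&B calls; a hedged sketch, with a synthetic loss standing in for the elided diffusion training loop and a placeholder project name:

```python
import math
import wandb

# Log hyperparameters once, then metrics every epoch
run = wandb.init(project="conditional-diffusion", config={"epochs": 10, "lr": 1e-4})
for epoch in range(run.config.epochs):
    loss = math.exp(-epoch)  # synthetic stand-in for the real training loss
    wandb.log({"epoch": epoch, "loss": loss})
run.finish()
```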
Testing GPT-3.5 vs. GPT-4: Which Model Writes Better Code?
In this article, we compare outputs from GPT-3.5-turbo and GPT-4, and explore how to use GPT-4 as a code assistant, using termGPT, a simple CLI, to access the models.
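A sketch of the comparison using the current OpenAI Python client (in the spirit of what termGPT does; the article's exact prompts and client version differ):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
prompt = "Write a Python function that reverses a linked list."

for model in ["gpt-3.5-turbo", "gpt-4"]:
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {model} ---\n{resp.choices[0].message.content}\n")
```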
Translating Weights & Biases' Documentation with GPT-4
In this article, we explore how to create an automated translation tool powered by LangChain and GPT-4 to help bring your website to international audiences.
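The core of such a tool fits in a few lines with LangChain's classic chat interface (a sketch assuming a LangChain version from the article's era; the real pipeline also chunks the docs and protects code blocks, and the target language here is only an example):

```python
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage, SystemMessage

chat = ChatOpenAI(model_name="gpt-4", temperature=0)
messages = [
    SystemMessage(content="Translate the user's markdown from English to Japanese. "
                          "Leave code blocks and product names unchanged."),
    HumanMessage(content="# Quickstart\nInstall the library with `pip install wandb`."),
]
print(chat(messages).content)
```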
termGPT: Interacting with OpenAI's ChatGPT in your terminal
Let's build a minimal app to interact with ChatGPT without leaving the terminal.
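In the same spirit, a minimal chat loop (a sketch, not termGPT's actual code):

```python
from openai import OpenAI

client = OpenAI()
history = []

while True:
    user = input("you> ")
    if user.strip() in {"quit", "exit"}:
        break
    history.append({"role": "user", "content": user})
    resp = client.chat.completions.create(model="gpt-3.5-turbo", messages=history)
    answer = resp.choices[0].message.content
    history.append({"role": "assistant", "content": answer})  # keep the context
    print(f"gpt> {answer}")
```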
GTC: Diffusion on the Clouds
Here you will find all the relevant information to get started training a diffusion model for solar energy forecasting.
Is the New M2 Pro Mac Mini a Deep Learning Workstation?
In this article, we explore whether the recent addition of the M2 Pro chip to the Apple Mac Mini family works as a replacement for your power-hungry workstation.
How to Fine-Tune an LLM Part 3: The Hugging Face Trainer
Exploring how to get the best out of the Hugging Face Trainer and its subclasses.
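The skeleton looks roughly like this (a hedged sketch; model, dataset, and hyperparameters are placeholders, not the ones from the article):

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"  # placeholder model
tok = AutoTokenizer.from_pretrained(model_name)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Placeholder dataset, tokenized for causal language modeling
ds = load_dataset("imdb", split="train[:1%]")
ds = ds.map(lambda ex: tok(ex["text"], truncation=True, max_length=256),
            batched=True, remove_columns=ds.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out",
                           per_device_train_batch_size=2,
                           num_train_epochs=1,
                           logging_steps=10,
                           report_to="wandb"),  # stream metrics to W&B
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()
```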
How to Fine-Tune an LLM Part 1: Preparing a Dataset for Instruction Tuning
Learn how to fine-tune an LLM on an instruction dataset! We'll cover how to format the data and train a model like Llama 2 or Mistral in this minimal example in (almost) pure PyTorch.
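Formatting usually means packing each record into a single prompt string; a sketch using the standard Alpaca template (the article's exact template may differ):

```python
def format_example(example: dict) -> str:
    # Standard Alpaca-style template: instruction, optional input, response
    if example.get("input"):
        return ("Below is an instruction that describes a task, paired with an "
                "input that provides further context. Write a response that "
                "appropriately completes the request.\n\n"
                f"### Instruction:\n{example['instruction']}\n\n"
                f"### Input:\n{example['input']}\n\n"
                f"### Response:\n{example['output']}")
    return ("Below is an instruction that describes a task. Write a response "
            "that appropriately completes the request.\n\n"
            f"### Instruction:\n{example['instruction']}\n\n"
            f"### Response:\n{example['output']}")

print(format_example({"instruction": "Name the capital of France.",
                      "input": "",
                      "output": "Paris."}))
```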
How to Fine-Tune an LLM Part 2: Instruction Tuning Llama 2
In part 1, we prepped our dataset. In part 2, we train our model.
Projects
fine_tune_timm
pytorch-M1Pro
ft_pets_planet
alpaca_ft
otto
mini_llm
mixtral
axolotl_debug
shearllama
tinyllama
wandbot_llm
Activity
[Contribution activity heatmap, Nov–Oct]