
Announcing a new W&B Weave course, a climate RAG tutorial, and more

We're constantly improving W&B Weave. Here's what we've been up to lately
Welcome to the latest edition of the W&B Weave newsletter. This week, we’re thrilled to share a new course, a pair of fresh cookbooks, and a video covering how to build confidence in your RAG app. But first:

LLM tip of the week

We love using the Instructor library to get consistent structured outputs from LLMs and use it extensively internally at Weights & Biases. In fact, we like it so much we worked with Jason Liu, the creator of Instructor, to build a free course about how to best use it.
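Here's a minimal sketch of the pattern Instructor enables, assuming the OpenAI SDK; the Pydantic model, model id, and prompt below are illustrative rather than taken from the course:

```python
# A minimal sketch of structured outputs with Instructor (illustrative, not from the course).
import instructor
from openai import OpenAI
from pydantic import BaseModel

class PaperSummary(BaseModel):
    title: str
    key_findings: list[str]

# Wrap the OpenAI client so responses are parsed and validated against the schema.
client = instructor.from_openai(OpenAI())

summary = client.chat.completions.create(
    model="gpt-4o",  # assumed model id
    response_model=PaperSummary,  # Instructor retries until the output matches this schema
    messages=[{"role": "user", "content": "Summarize this abstract: ..."}],
)
print(summary.key_findings)
```

Because the result is a typed Pydantic object rather than free-form text, downstream code can rely on its shape.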

New course 🎓

Our new W&B Weave 101 course is live.
Learn to log, debug, and evaluate language model workflows, ensuring accurate and consistent results across your projects. The course includes 18 lessons from Alex, our AI evangelist and host of the ThursdAI podcast. You’ll leave the sessions with a thorough understanding of how you can leverage Weave to build better LLM-powered apps and products.

Product news 🚀

Build the confidence to ship your RAG app to production through prototyping, qualitative assessment and user feedback, benchmarking, and model optimization. Nicolas, one of our ML engineers, walks you through a tangible climate RAG use case with lessons you can bring to whatever you’re building now. You can see the cookbook in action here or watch the video below!
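If you're curious what the instrumentation underneath looks like, here's a hedged sketch of tracing a RAG step with W&B Weave; the project name and the retrieval/answer logic are stand-ins, not the cookbook's code:

```python
# A hedged sketch of tracing a RAG step with W&B Weave (stand-in logic, not the cookbook's).
import weave

weave.init("climate-rag-demo")  # hypothetical project name

@weave.op()
def retrieve_docs(question: str) -> list[str]:
    # Stand-in retriever; the cookbook uses a real document index.
    return ["Global mean surface temperature has risen roughly 1.1 °C since pre-industrial times."]

@weave.op()
def answer_question(question: str) -> str:
    docs = retrieve_docs(question)
    # Stand-in for the LLM call; every @weave.op call is logged with inputs, outputs, and latency.
    return f"Based on {len(docs)} retrieved passage(s): {docs[0]}"

answer_question("How much has the global mean temperature risen?")
```

Once calls are traced like this, the qualitative review and benchmarking steps happen on top of the logged inputs and outputs.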


We also have a new cookbook, LLM Judge: Detecting hallucinations in language models. Learn how to fine-tune and evaluate a Mistral AI language model to detect factual inconsistencies and hallucinations in text summaries.
PDF summarization using Claude. Keeping up with ML research is a full-time job. Thankfully, it’s one LLMs are well equipped to help with. Read how to use chain-of-density prompting to build an app that summarizes arXiv papers so you can stay on top of all the new research (a rough sketch of the prompting loop follows below).
YOLOv9 object detection. Learn how to use one of the world’s fastest and most accurate object detectors to run inference on images captured from your webcam with OpenCV (a short webcam-inference sketch follows as well).
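To make the chain-of-density idea concrete, here's a rough sketch of the prompting loop using the Anthropic SDK; the model id, prompt wording, and number of rounds are assumptions rather than the report's exact implementation:

```python
# A rough sketch of chain-of-density summarization with Claude (assumptions noted in comments).
from anthropic import Anthropic

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def chain_of_density_summary(paper_text: str, rounds: int = 3) -> str:
    summary = ""
    for _ in range(rounds):
        prompt = (
            "You are summarizing a research paper.\n\n"
            f"Paper:\n{paper_text}\n\n"
            f"Previous summary:\n{summary or '(none yet)'}\n\n"
            "Rewrite the summary at roughly the same length, folding in 1-3 specific "
            "entities from the paper that are missing, without dropping existing detail."
        )
        response = client.messages.create(
            model="claude-3-5-sonnet-20240620",  # assumed model id
            max_tokens=500,
            messages=[{"role": "user", "content": prompt}],
        )
        summary = response.content[0].text
    return summary
```

Each round keeps the summary length fixed while packing in more entities, which is the core of the chain-of-density technique.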
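And for the YOLOv9 report, here's one possible shape of the webcam loop, assuming the ultralytics package (which distributes pretrained YOLOv9 weights); the report may load the model or draw results differently:

```python
# One possible webcam-inference loop for YOLOv9; assumes the ultralytics package.
import cv2
from ultralytics import YOLO

model = YOLO("yolov9c.pt")  # downloads pretrained YOLOv9 weights on first use
cap = cv2.VideoCapture(0)   # default webcam

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame)            # run detection on the BGR frame
    annotated = results[0].plot()     # draw boxes, labels, and confidences
    cv2.imshow("YOLOv9 detections", annotated)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```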

Events 🏢

We are hosting our inaugural GenAI Salon next week, and we’d love it if you stopped by. We’re kicking off the series with Shreya Shankar, who will focus on evaluating and deploying GenAI models before taking questions. Come to learn about LLMs, stay to network at happy hour.
You can register here to join us on August 15 at 5pm PT.

Community 💡

One of our favorite recent examples of developers using W&B Weave is op-ai-tools, a Q/A RAG system that you can deploy and chat with locally.
Another favorite is from Ayush, an ML Engineer at Weights & Biases. He wrote his own MixEval evaluation script using W&B Weave Evaluations to compare Llama 3.1 405b against GPT-4o and Sonnet 3.5.
Finally, check out our Streamlit app about the Llama 3 paper. The premise is simple: you ask questions about the paper and the app answers them. You can click into the responses to see your output (and everyone else’s) in W&B Weave.
Need help getting started with W&B Weave?
