
New Weave features: PIL support, export to Python, and Cerebras integration

All the recent Weave activity, wrapped up in one place
Welcome to the first W&B Weave newsletter of September. This week’s new features span PIL support, a Python export option, and yet another integration, this time Cerebras.
Let’s kick off with the tip of the week:

LLM tip of the week ⭐

When debugging agentic workflows, start with a simple task that a small, fast model can solve quickly. Then validate the data types of your outputs, and wrap LLM API calls in try-except blocks with sensible fallback values in case a call fails.
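
Here’s a minimal sketch of that pattern in Python; the model, prompt, expected output shape, and fallback value are all illustrative placeholders:

```python
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

FALLBACK = {"sentiment": "neutral"}  # safe default if the call or parsing fails


def classify_sentiment(text: str) -> dict:
    """Call a small, fast model and validate the output, falling back on error."""
    try:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # a small, fast model for a simple task
            messages=[{
                "role": "user",
                "content": f'Classify the sentiment of this text as JSON {{"sentiment": ...}}: {text}',
            }],
        )
        result = json.loads(response.choices[0].message.content)
        # Data type validation: reject outputs that don't match the expected shape
        if not isinstance(result, dict) or "sentiment" not in result:
            return FALLBACK
        return result
    except Exception:
        # API errors, timeouts, and malformed JSON all fall back to a safe default
        return FALLBACK
```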

Product news 🚀

Image support

W&B Weave now renders images. You can use images from Pillow, the maintained fork of the Python Imaging Library (PIL), as inputs and outputs, as well as within dataset entries. To try it out, simply update to the latest Weave SDK with pip install weave --upgrade.
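
Here’s a quick sketch of what that looks like; the project name is a placeholder, and the op simply resizes an image so that both the PIL input and output show up in the Weave UI:

```python
import weave
from PIL import Image

weave.init("pil-demo")  # placeholder project name


@weave.op()
def make_thumbnail(image: Image.Image, size: int) -> Image.Image:
    # PIL images passed in and returned here are rendered in the Weave UI
    thumb = image.copy()
    thumb.thumbnail((size, size))
    return thumb


img = Image.new("RGB", (512, 512), color="steelblue")
make_thumbnail(img, 128)
```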


Export to Python code

In addition to exporting calls from W&B Weave as CSV, TSV, JSONL, or JSON, you can now export them as Python code: when exporting, you'll see an option to fetch the current view using our Python SDK.
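
The generated snippet reflects whatever filters your current view has. As a rough sketch of the shape of an SDK export (the project name is a placeholder, and get_calls is an assumption based on the current SDK, not the exact generated code):

```python
import weave

# Initialize the client against your project (entity/project is a placeholder)
client = weave.init("my-entity/my-project")

# Fetch logged calls through the SDK; the snippet exported from the UI
# includes filters matching whatever view you were looking at
calls = client.get_calls()
for call in calls:
    print(call.id, call.inputs, call.output)
```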


Cerebras integration

Our list of integrations keeps growing. We've added Cerebras, currently the fastest Llama 3.1 API available, to an already hefty roster that includes OpenAI, Anthropic, Cohere, Mistral AI, Google Gemini, Groq, and many more.
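
As with our other integrations, initializing Weave should automatically trace calls made through the supported client library. A minimal sketch using the Cerebras Python SDK, with placeholder project and model names:

```python
import weave
from cerebras.cloud.sdk import Cerebras

weave.init("cerebras-demo")  # init patches supported SDKs so calls are traced

client = Cerebras()  # assumes CEREBRAS_API_KEY is set in the environment

response = client.chat.completions.create(
    model="llama3.1-8b",
    messages=[{"role": "user", "content": "Why does fast inference matter for agents?"}],
)
print(response.choices[0].message.content)
```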

AI Hacker Cup lightning competition

We just launched a lightning 7-day competition to try to solve 5 of the 2023 programming challenges from the AI Hacker Cup. We’re providing Mistral AI API access to get people started, and Meta Ray-Ban smart glasses for the winners.

Story illustration with FLUX.1-dev

Creating a short story illustrator is an interesting problem: you need to consider both the broader context of the story and the single scene or paragraph you’re actually illustrating. In this piece, we use GPT-4o and FLUX.1-dev to produce images based on a famous O. Henry story.
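
As a rough sketch of that two-step idea (the prompts, file path, and scene selection are illustrative; the report walks through the full workflow):

```python
import torch
from openai import OpenAI
from diffusers import FluxPipeline

story = open("the_gift_of_the_magi.txt").read()  # placeholder path
scene = story[:1000]  # the paragraph we want to illustrate

# Step 1: ask GPT-4o for an image prompt that blends the scene with story context
llm = OpenAI()
prompt = llm.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": "Write a one-sentence illustration prompt for this scene, "
                   f"consistent with the full story.\nStory: {story}\nScene: {scene}",
    }],
).choices[0].message.content

# Step 2: render the scene with FLUX.1-dev via diffusers
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
image = pipe(prompt).images[0]
image.save("illustration.png")
```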


Building the world's fastest chatbot

With the launch of Cerebras inference and verified speeds of 370 tokens per second, we break down how to use the Cerebras API, and how you can build and serve a lightning-fast chatbot using Flask.
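
For a taste, here’s a minimal sketch of that setup: a single Flask endpoint that forwards a message to Cerebras and returns the reply. The route and model name are illustrative; the full walkthrough is in the report.

```python
from flask import Flask, jsonify, request
from cerebras.cloud.sdk import Cerebras

app = Flask(__name__)
client = Cerebras()  # assumes CEREBRAS_API_KEY is set in the environment


@app.route("/chat", methods=["POST"])
def chat():
    # Expects a JSON body like {"message": "..."}
    user_message = request.json["message"]
    response = client.chat.completions.create(
        model="llama3.1-70b",  # illustrative model name
        messages=[{"role": "user", "content": user_message}],
    )
    return jsonify({"reply": response.choices[0].message.content})


if __name__ == "__main__":
    app.run(port=5000)
```

Run it and POST a JSON body like {"message": "hello"} to http://localhost:5000/chat to get a reply back.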

Events 🏢

Judgment day hackathon: building LLM judges

Evaluating LLMs quickly is vital to improving their performance. Since human annotation is slow and expensive, at our hackathon we’ll build LLMs to judge LLMs. It’s a two-day event in San Francisco starting September 21st, and we hope you can join us. (And yes, there will be prizes.)

Quickly build production-ready LLM-powered applications

See how you can confidently move your LLM-powered proof of concept to production faster with Weights & Biases and Amazon Web Services on September 24th at 10am PT.

Community 💡

Alongside the public release of their new developer API, SambaNova has published a suite of starter kits covering everything from data ingestion and processing to information retrieval and agents.

Need help getting started with W&B Weave?


Iterate on AI agents and models faster. Try Weights & Biases today.