New W&B Weave features: Integrations, feedback, cost calculations, and more
We're excited to share a host of new integrations and features for W&B Weave. Come see what we've been working on.
Created on July 9 | Last edited on July 9
Today, we’re thrilled to share several significant enhancements to W&B Weave, our suite of tools for developing and productionizing AI applications. Since we launched Weave earlier this year, we’ve added support for human feedback through the UI or API, LLM cost estimates, code-change tracking between commits, and the ability to rename or delete calls.
These Weave updates are all designed to make you more productive. Here’s a quick run-through of what we’ve been building:
Integrations
Weave seamlessly integrates with popular LLM libraries to automatically track prompts, responses, and token usage. And we’ve been building a slew of new Weave integrations. In addition to logging calls to OpenAI, Weave now also automatically logs LLM calls made to Anthropic, MistralAI, LiteLLM, and LlamaIndex.
To get automatic tracing and token usage, just import your LLM library and call weave.init().
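To make the mechanism concrete, here is a toy sketch of how import-time auto-tracing can work: the integration wraps the client library's completion method so every call's prompt and response are recorded. FakeLLMClient, patch_for_tracing, and the record shape are illustrative names for this sketch, not Weave's internals.

```python
# Toy sketch of an auto-tracing integration: wrap the client's completion
# method so each call's inputs and outputs are logged to a trace store.
traced_calls = []  # in-memory stand-in for a hosted trace store

class FakeLLMClient:
    """Stand-in for an LLM SDK client such as OpenAI's or Anthropic's."""
    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"

def patch_for_tracing(cls) -> None:
    """Replace cls.complete with a wrapper that records prompt and response."""
    original = cls.complete
    def traced(self, prompt: str) -> str:
        response = original(self, prompt)
        traced_calls.append({"prompt": prompt, "response": response})
        return response
    cls.complete = traced

patch_for_tracing(FakeLLMClient)
client = FakeLLMClient()
client.complete("Hello")
```

After patching, every call goes through the tracing wrapper transparently, which is why a single weave.init() is enough to start capturing calls.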
Product updates
Outside of our growing catalog of integrations, we've been hard at work building new features for Weave. Here are our most recent improvements:
Human feedback
Evaluating LLM applications is challenging. Developers often rely on direct end-user feedback and on domain experts who assess quality with simple indicators such as a thumbs up or down, while also actively identifying and correcting content issues.
Now you can combine the power of Weave with human feedback directly in the Weave UI or through the API. You can attach emoji reactions, textual notes, and structured data to calls. This feedback helps you compile evaluation datasets, monitor application performance, and collect examples for fine-tuning.
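As a minimal sketch of the idea, the snippet below attaches reactions and notes to logged calls and then filters positively rated calls into an evaluation dataset. The in-memory store and record shapes here are illustrative, not Weave's actual schema or API.

```python
# Minimal sketch of call-level feedback, assuming a simple in-memory store.
calls = [
    {"id": "call-1", "input": "Translate: hola", "output": "hello", "feedback": []},
    {"id": "call-2", "input": "Translate: adios", "output": "hi", "feedback": []},
]

def add_feedback(call: dict, kind: str, payload) -> None:
    """Attach an emoji reaction, a textual note, or structured data to a call."""
    call["feedback"].append({"kind": kind, "payload": payload})

add_feedback(calls[0], "reaction", "👍")
add_feedback(calls[1], "reaction", "👎")
add_feedback(calls[1], "note", "Wrong translation.")

# Compile an evaluation dataset from positively rated calls.
eval_dataset = [
    {"input": c["input"], "expected": c["output"]}
    for c in calls
    if any(f["kind"] == "reaction" and f["payload"] == "👍" for f in c["feedback"])
]
```

The same pattern extends naturally to collecting fine-tuning examples: filter on whatever feedback signal marks a call as a good exemplar.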

LLM cost calculations
Weave now automatically calculates costs and estimated spend based on the token usage we log from top model vendors like OpenAI and Anthropic. These token and cost calculations help you understand and monitor spend so you can stay on budget. We’ll add more model vendors soon.
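The arithmetic behind token-based cost estimation is simple; here is a hedged sketch. The model names and per-token prices below are placeholder numbers for illustration, not real vendor pricing.

```python
# Hypothetical per-1,000-token prices in USD: (prompt, completion).
# These numbers are placeholders, not actual vendor rates.
PRICES_PER_1K = {
    "vendor-model-a": (0.01, 0.03),
    "vendor-model-b": (0.008, 0.024),
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate spend for one call from its logged token counts."""
    price_in, price_out = PRICES_PER_1K[model]
    return (prompt_tokens / 1000) * price_in + (completion_tokens / 1000) * price_out

cost = estimate_cost("vendor-model-a", prompt_tokens=1200, completion_tokens=400)
# 1.2 * 0.01 + 0.4 * 0.03 = 0.024 USD
```

Because prompt and completion tokens are usually priced differently, the two counts are tracked and billed separately.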

Code tracking and integrated diff view
Weave Traces lets you track the inputs and outputs of functions. Now you can also track code changes between commits, saving time and enabling reproducibility. This makes it easier to keep track of ad hoc, experimental changes to your prompts or model configurations.
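A diff view like this can be sketched with the standard library's difflib; the prompt snippets and commit labels below are illustrative.

```python
import difflib

# Two versions of a prompt, as they might look at different commits.
v1 = "You are a helpful assistant.\nAnswer briefly.\n"
v2 = "You are a helpful assistant.\nAnswer in detail, citing sources.\n"

diff = list(difflib.unified_diff(
    v1.splitlines(keepends=True),
    v2.splitlines(keepends=True),
    fromfile="prompt@commit1",  # illustrative labels
    tofile="prompt@commit2",
))
print("".join(diff))
```

The output marks removed lines with `-` and added lines with `+`, which is exactly the information an integrated diff view surfaces between two tracked versions.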

Deleting and renaming calls
Whether it's a failed evaluation or a crash on your latest prompt iteration, you can now clean up your calls table by removing the records you no longer need. You also have the flexibility to organize and label calls to make better sense of your data.

Get started with Weave now to try out the new features. Just pip install weave and decorate your Python functions with @weave.op() to begin tracing, experimenting, and evaluating your LLM applications.
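To illustrate what an op-style decorator records, here is a toy stand-in built only on the standard library. It is a sketch of the pattern, not Weave's implementation; the function and store names are hypothetical.

```python
import functools
import time

trace_log = []  # in-memory stand-in for a hosted trace store

def op(fn):
    """Toy stand-in for a tracing decorator like weave.op(): records each
    call's function name, inputs, output, and latency."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        trace_log.append({
            "name": fn.__name__,
            "inputs": {"args": args, "kwargs": kwargs},
            "output": result,
            "latency_s": time.perf_counter() - start,
        })
        return result
    return wrapper

@op
def extract_first_word(text: str) -> str:
    return text.split()[0]

extract_first_word("Ada Lovelace")
```

The decorated function behaves exactly as before; the decorator only adds the bookkeeping, which is why adding tracing to existing code is a one-line change per function.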
For more details, follow the Weave Quickstart instructions or check out the Weave GitHub repo. We're excited to hear what you think.