Uncover granular insights about your LLMs with W&B Traces
Understand and debug your LLM chains, and drill into your model architecture.
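For example, each step of a chain can be logged as a span with the wandb SDK’s Trace class. The sketch below assumes a recent wandb release that ships wandb.sdk.data_types.trace_tree; the project, span, and payload names are illustrative:

```python
import wandb
from wandb.sdk.data_types.trace_tree import Trace

run = wandb.init(project="llm-chain-debugging")  # hypothetical project name

# Root span representing the whole chain.
root = Trace(
    name="qa_chain",
    kind="chain",
    status_code="success",
    inputs={"question": "What does W&B Traces do?"},
    outputs={"answer": "It visualizes and debugs LLM chains."},
)

# Child span for the LLM call inside the chain; a failing step would be
# logged with status_code="error" and a status_message explaining why.
llm_step = Trace(
    name="llm_call",
    kind="llm",
    status_code="success",
    inputs={"prompt": "What does W&B Traces do?"},
    outputs={"response": "It visualizes and debugs LLM chains."},
)
root.add_child(llm_step)

# Log the trace tree so each step can be inspected in the Traces UI.
root.log(name="trace")
run.finish()
```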
Run OpenAI evaluations with W&B Launch
Use W&B Launch to run any evaluation from OpenAI Evals, a fast-growing repository of dozens of evaluation suites for LLMs, with a single click. Launch packages up everything you need to run the evaluation, logs the results in W&B Tables for easy visualization and analysis, and generates a Report for seamless collaboration. You can also use the one-line OpenAI integration to log your OpenAI model inputs and outputs.
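As a minimal sketch of that one-line integration (assuming the pre-1.0 openai Python SDK; the project name is illustrative):

```python
import openai
from wandb.integration.openai import autolog

# One line turns on W&B autologging; the dict is forwarded to wandb.init
# ("openai-logging" is a hypothetical project name).
autolog({"project": "openai-logging"})

# Subsequent OpenAI calls are logged to W&B, inputs and outputs included.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Summarize what OpenAI Evals is."}],
)
print(response["choices"][0]["message"]["content"])
```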


Visualize and analyze text data with W&B Tables
To better support prompt engineering practitioners working with text data, we’ve made several improvements to how Tables display text. You can now render Markdown and view the diff between two strings, making it easier to understand the impact of changes to your LLM prompts. Long-text fields also include tooltips and string previews.
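As a sketch, prompt variants and responses can be logged side by side in a Table so the Markdown and diff views apply (the project, column, and row values below are illustrative):

```python
import wandb

run = wandb.init(project="prompt-engineering")  # hypothetical project name

# Each row pairs two prompt versions with a model response; in the W&B UI,
# string columns can be rendered as Markdown or diffed against each other.
table = wandb.Table(columns=["prompt_v1", "prompt_v2", "response"])
table.add_data(
    "Summarize the following article:",
    "Summarize the following article in **one sentence**:",
    "The article introduces W&B tools for LLM development.",
)
run.log({"prompt_comparison": table})
run.finish()
```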
W&B is trusted by the teams building state-of-the-art LLMs

“The challenge with GCP is you’re trying to parse terminal output. What I really like about Traces is that when I get an error, I can see which step in the chain broke and why. Trying to get this out [otherwise] is such a pain.”
VP of Product, OpenAI
