Debug GenAI applications with Traces

Spend less time painstakingly debugging errors in your GenAI applications. Add one line of code to log the behavior of your applications so you can pinpoint exactly what went wrong. Understand how data flows through your application by easily capturing the inputs and outputs of any function of interest. 

A system of record for experimental software development

Build advanced RAG applications efficiently, with full observability into which documents were retrieved, which functions were called, and which chat messages were sent to the LLM. Trace unexpected results directly to the calls they came from, and get the answers you need to deliver powerful GenAI applications.

Capture and debug the behavior of LLMs with data-rich trace trees

Want to know the exact inputs and outputs of every call? Curious exactly what was passed to the LLM, from raw content to JSON outputs? Wondering why a chain was slower than anticipated? Traces captures all of these details and presents them in an easy-to-access UI for painless debugging.

Examine complex edge cases to find your next innovation

Drill down into complex examples and follow the exact execution flow to uncover the root cause of problems. Identify specific failure modes and malformed responses, or analyze how different inputs lead to different behavior in your GenAI application.

The Weights & Biases platform helps you streamline your workflow from end to end

Models

Experiments: Track and visualize your ML experiments
Sweeps: Optimize your hyperparameters
Registry: Publish and share your ML models and datasets
Automations: Trigger workflows automatically
Launch: Package and run your ML workflow jobs

Weave

Traces: Explore and debug LLMs
Evaluations: Rigorous evaluations of GenAI applications

Core

Artifacts: Version and manage your ML pipelines
Tables: Visualize and explore your ML data
Reports: Document and share your ML insights
SDK: Log ML experiments and artifacts at scale