Quickstart: Creating your first trace in W&B Weave
Weave is a toolkit for developing AI-powered applications. Use Weave to:
- Log and debug language model inputs, outputs, and traces
- Build rigorous, apples-to-apples evaluations for language model use cases
- Organize all the data generated across the LLM workflow, from experimentation to evaluation to production
In this quickstart, you will:
- Set up the Weave library
- See traces for your application in your project
- Get started with Playground
Install the CLI and Python library for interacting with Weave and W&B.
pip install wandb weave
Next, log in to W&B and paste your API key when prompted.
wandb login
You can also set your API key via an environment variable:
import os
os.environ['WANDB_API_KEY'] = 'your_api_key'
Start tracking the inputs and outputs of functions by decorating them with weave.op().
Run this sample code to see the new trace.
In this example, we're using the OpenAI API, so you'll need an OpenAI API key.
Using another provider? We support all major clients and frameworks.
# Ensure your OpenAI client is available with:
# pip install openai
# Ensure that your OpenAI API key is available at:
# os.environ['OPENAI_API_KEY'] = "<your_openai_api_key>"
import os
import weave
from openai import OpenAI
weave.init('ablateit/model-registry')  # 🐝 Initialize your Weave project

@weave.op()  # 🐝 Decorator to track requests
def create_completion(message: str) -> str:
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-5",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": message},
        ],
    )
    return response.choices[0].message.content

message = "Tell me a joke."
create_completion(message)
🎉 Congrats! Now, every time you run your code, Weave automatically captures the input and output data and builds a trace tree to help you understand how data flows through your application.
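To build an intuition for why a single decorator is enough to produce that tree, it helps to picture what a tracing decorator does. The sketch below is not Weave's implementation — just a minimal, hypothetical tracer (the names `traced`, `call_tree`, and `make_prompt` are all invented for illustration) that records nested calls the same way:

```python
import functools

# A minimal, hypothetical tracer -- NOT Weave's actual implementation --
# illustrating how a decorator can capture inputs/outputs as a call tree.
call_tree = []  # finished top-level (root) calls
_stack = []     # calls currently in progress

def traced(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        node = {"op": fn.__name__, "inputs": args, "children": []}
        if _stack:
            _stack[-1]["children"].append(node)  # nest under the caller
        _stack.append(node)
        try:
            node["output"] = fn(*args, **kwargs)
            return node["output"]
        finally:
            _stack.pop()
            if not _stack:
                call_tree.append(node)  # completed root call
    return wrapper

@traced
def make_prompt(message: str) -> str:
    return f"User says: {message}"

@traced
def create_completion(message: str) -> str:
    prompt = make_prompt(message)  # child call, nested in the trace
    return prompt.upper()          # stand-in for a real LLM call

create_completion("Tell me a joke.")
root = call_tree[0]
print(root["op"], "->", root["children"][0]["op"])
# create_completion -> make_prompt
```

Because every decorated function pushes itself onto a shared stack before running, calls made from inside another decorated function land as children of the caller's node, which is what turns a flat log into a tree.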
You can interactively develop, review, and test your prompts in the LLM Playground, which supports all major model providers.