Quickstart: Creating your first trace in W&B Weave

Weave is a toolkit for developing AI-powered applications.

You can use Weave to:
  • Log and debug language model inputs, outputs, and traces
  • Build rigorous, apples-to-apples evaluations for language model use cases
  • Organize all the data generated across the LLM workflow, from experimentation to evaluation to production
Start by logging a trace, either from the playground or right from your code. Visit our documentation to learn more.
  1. Set up the Weave library

    Install the CLI and Python library for interacting with Weave and W&B.

    pip install wandb weave

    Next, log in to W&B and paste your API key when prompted.

    wandb login

    You can also set your API key with the following environment variable.

    import os
    os.environ['WANDB_API_KEY'] = 'your_api_key'
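Equivalently, you can export the key from your shell before launching Python, so every process in that session picks it up (the value shown is a placeholder; substitute your real key):

```shell
# Placeholder value; replace with your actual W&B API key
export WANDB_API_KEY="your_api_key"
echo "$WANDB_API_KEY"
```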
  2. Log a trace with code or the playground

    Start tracking inputs and outputs of functions by decorating them with weave.op().
    Run this sample code to see the new trace.

    In this example, we're using an OpenAI API key.
    Using another provider? We support all major clients and frameworks.

    # Ensure your OpenAI client is available with:
    # pip install openai
    # Ensure that your OpenAI API key is available at:
    # os.environ['OPENAI_API_KEY'] = "<your_openai_api_key>"
    import os
    import weave
    from openai import OpenAI

    weave.init('ablateit/model-registry')  # 🐝

    @weave.op()  # 🐝 Decorator to track requests
    def create_completion(message: str) -> str:
        client = OpenAI()
        response = client.chat.completions.create(
            model="gpt-5",
            messages=[
                {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": message}
            ],
        )
        return response.choices[0].message.content

    message = "Tell me a joke."
    create_completion(message)
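To see what a tracing decorator does conceptually, here is a minimal stand-in (not Weave's actual implementation, which also handles logging to your project, nesting, and more) that records each call's inputs and outputs:

```python
import functools

def trace(fn):
    """Minimal illustrative tracing decorator: records each call's
    inputs and output on the wrapped function itself."""
    calls = []

    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        calls.append({
            "inputs": {"args": args, "kwargs": kwargs},
            "output": result,
        })
        return result

    wrapper.calls = calls
    return wrapper

@trace
def add(a: int, b: int) -> int:
    return a + b

add(2, 3)
print(add.calls[0]["output"])  # 5
```

Weave's weave.op() works the same way from the caller's perspective: decorated functions behave normally, while each call is captured and sent to your project so you can inspect the trace tree in the UI.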
  3. See traces for your application in your project

    🎉 Congrats! Now, every time you run your code, Weave will automatically capture the input & output data and build a tree to help you understand how data flows through your application.

  4. Get started with Playground

    You can interactively develop, review, and test your prompts using our LLM playground, which supports all major model providers.