
KBRA Weave Onboarding Guide

One-stop shop for everything you need to test out during the W&B Pilot.
Access Weights & Biases here: https://wandb.ai/kbra
💡 For any questions, please reach out via the Slack channel: #wandb-kbra


Weights and Biases (W&B) 💫

Weights & Biases is an MLOps platform built to facilitate collaboration and reproducibility across the machine learning development lifecycle. Machine learning projects can quickly become a mess without best practices in place to aid developers and scientists as they iterate on models and move them to production.
W&B is lightweight enough to work with whatever framework or platform teams are currently using, but enables teams to quickly start logging their important results to a central system of record. On top of this system of record, W&B has built visualization, automation, and documentation capabilities for better debugging, model tuning, and project management.

PoC Workshop Sessions

Date | Session | Recording Link | Topics Discussed
Nov 20, 2024 | W&B Weave Intro Call | https://us-39259.app.gong.io/e/c-share/?tkn=1hweycb2vfc0y11e1qf4up0tpn | W&B Overview with demo for W&B Weave


W&B Installation & Authentication

To start using W&B, you first need to install the Python packages (if they're not already installed):
pip install wandb weave
Once they're installed, authenticate your user account by logging in through the CLI or SDK. You should have received an email to sign up for the platform, after which you can obtain your API token (the API token is in the "Settings" section under your profile):
wandb login --host <YOUR W&B HOST URL> <YOUR API TOKEN>
Or through Python:
import os
import wandb

wandb.login(host=os.getenv("WANDB_BASE_URL"), key=os.getenv("WANDB_API_KEY"))
In headless environments, you can instead define the WANDB_API_KEY environment variable.
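For example, a minimal sketch assuming WANDB_API_KEY (and, for dedicated deployments, WANDB_BASE_URL) is already exported in the job's environment, so no explicit login call should be needed:
import weave

# Assumes WANDB_API_KEY (and optionally WANDB_BASE_URL) are set in the environment,
# e.g. exported in your CI/job configuration, so no interactive login prompt appears.
weave.init("kbra/<project-name>")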
Once you are logged in, you are ready to track your workflows!

Use Cases / Test Cases

S No | Capability & Success Criteria
1 | Get started with W&B Weave
2 | Track Traces with LlamaIndex



Test Case 1: Get started with W&B Weave

Once you have authenticated with W&B, you can start by creating a Weave project with the following commands:
import weave
weave.init('kbra/<project-name>') # this ensures the project is created in the kbra team that has been created for the PoC
Now you can decorate the functions you want to track by adding the one-line decorator @weave.op() to them.
Here's what an example script looks like (feel free to copy-paste it into your IDE and run it):
import weave
from openai import OpenAI

client = OpenAI()

# Weave will track the inputs, outputs and code of this function
@weave.op()
def extract_dinos(sentence: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "system",
                "content": """In JSON format extract a list of `dinosaurs`, with their `name`,
                their `common_name`, and whether its `diet` is a herbivore or carnivore"""
            },
            {
                "role": "user",
                "content": sentence
            }
        ],
        response_format={"type": "json_object"}
    )
    return response.choices[0].message.content


# Initialise the weave project
weave.init('kbra/jurassic-park')

sentence = """I watched as a Tyrannosaurus rex (T. rex) chased after a Triceratops (Trike), \
both carnivore and herbivore locked in an ancient dance. Meanwhile, a gentle giant \
Brachiosaurus (Brachi) calmly munched on treetops, blissfully unaware of the chaos below."""

result = extract_dinos(sentence)
print(result)

Test Case 2: Track traces with LlamaIndex

Weave's LlamaIndex integration is designed to simplify the tracking and logging of all calls made through the LlamaIndex Python library. The integration automatically captures traces for your LlamaIndex applications.
Here's an example script to get started with LlamaIndex:
import weave
from llama_index.core.chat_engine import SimpleChatEngine

# Initialize Weave with your project name
weave.init("kbra/llamaindex_demo")

chat_engine = SimpleChatEngine.from_defaults()
response = chat_engine.chat(
    "Say something profound and romantic about fourth of July"
)
print(response)
This doc walks through more details on using W&B Weave with LlamaIndex.


Track and evaluate GenAI applications via W&B Weave



Weave is a lightweight toolkit for tracking and evaluating GenAI applications.
The goal is to bring rigor, best practices, and composability to the inherently experimental process of developing GenAI applications, without introducing cognitive overhead.


Weave can be used to:
  • Log and debug model inputs, outputs, and traces
  • Build rigorous, apples-to-apples evaluations for language model use cases (see the sketch after this list)
  • Capture valuable feedback that can be used to build new training and evaluation sets
  • Organize all the information generated across the LLM workflow, from experimentation to evaluations to production
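As an illustration of the evaluation piece, here is a minimal sketch using weave.Evaluation. The dataset, model, and scorer below are toy placeholders, and in recent Weave versions the model result is passed to scorers via an output argument:
import asyncio

import weave
from weave import Evaluation

weave.init("kbra/<project-name>")

# Toy dataset: each row's keys are passed as keyword arguments to the model and scorers.
examples = [
    {"question": "What is the capital of France?", "expected": "Paris"},
    {"question": "What is 2 + 2?", "expected": "4"},
]

# Placeholder "model" -- in practice this would call an LLM.
@weave.op()
def answer_question(question: str) -> str:
    return "Paris" if "France" in question else "4"

# A simple scorer that compares the model output against the expected answer.
@weave.op()
def exact_match(expected: str, output: str) -> dict:
    return {"correct": expected == output}

evaluation = Evaluation(dataset=examples, scorers=[exact_match])
asyncio.run(evaluation.evaluate(answer_question))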
A quick-start guide to Weave can be found here.

FAQs

W&B Weave

1. How does tracing with W&B Weave work?
This Loom video (~4 min) walks through how tracing works with W&B Weave.
2. How can I add a custom cost for my GenAI model?
You can add a custom cost by using the add_cost method. This guide walks you through the steps of adding a custom cost. Additionally, we also have this cookbook on Setting up a custom cost model, with an associated notebook.
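As a rough sketch (the model identifier and per-token prices below are placeholder assumptions; see the linked guide for the full signature):
import weave

# weave.init returns a client; its add_cost method registers per-token prices
# for a given model identifier so Weave can compute costs for those calls.
client = weave.init("kbra/<project-name>")

client.add_cost(
    llm_id="my-custom-model",          # placeholder model identifier
    prompt_token_cost=0.000001,        # placeholder price per prompt token
    completion_token_cost=0.000002,    # placeholder price per completion token
)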
3. How can I create my own custom Scorers with W&B Weave?
W&B Weave has its own predefined scorers that you can use, and you can also create your own. This documentation walks through creating your own scorers with W&B Weave.
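For instance, a class-based scorer might look roughly like this sketch (the LengthScorer name and word-limit logic are made up for illustration, and the score method receives the model result as output per the current documentation):
import weave

# A custom scorer: subclass weave.Scorer and implement a score method.
class LengthScorer(weave.Scorer):
    max_words: int = 100

    @weave.op()
    def score(self, output: str) -> dict:
        # Receives the model output (plus any dataset columns it names as parameters).
        return {"within_limit": len(output.split()) <= self.max_words}

# Usage: pass an instance alongside any other scorers in an evaluation, e.g.
# weave.Evaluation(dataset=examples, scorers=[LengthScorer(max_words=50)]).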
4. Can I control/customize the data that is logged?
Yes. If you want to change the data that is logged to Weave without modifying the original function (e.g., to hide sensitive data), you can pass postprocess_inputs and postprocess_output to the op decorator.
Here are more details on how to do so.
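A minimal sketch of what that might look like (the function and field names here are illustrative assumptions):
import weave

def hide_api_key(inputs: dict) -> dict:
    # Drop sensitive fields from the inputs before they are logged.
    return {k: v for k, v in inputs.items() if k != "api_key"}

def truncate_output(output: str) -> str:
    # Log only the first 200 characters of the result.
    return output[:200]

@weave.op(postprocess_inputs=hide_api_key, postprocess_output=truncate_output)
def call_model(prompt: str, api_key: str) -> str:
    # The real LLM call would go here; only what gets logged to Weave changes.
    return f"response to: {prompt}"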
5. How do I publish prompts to W&B Weave?
W&B Weave supports Prompts as first-class objects. You can use weave.publish() to log prompts or any other object (e.g. Datasets, Models, etc.) to Weave. This guide goes into the details of publishing prompts to W&B Weave.
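For example, a minimal sketch using weave.StringPrompt (the prompt text and object name are placeholders):
import weave

weave.init("kbra/<project-name>")

# Publish a simple string prompt as a versioned Weave object.
prompt = weave.StringPrompt("Summarize the following credit report in three sentences.")
weave.publish(prompt, name="credit_report_summary_prompt")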