
Welocalize Onboarding Guide

All you need to know to get started with Weights & Biases
Created on July 24|Last edited on July 30
Access Weights & Biases at https://welocalize.wandb.io/ 
💡 Questions? Post them to the #wandb-welocalize Slack channel

What is Weights and Biases (W&B)? 💫

Weights & Biases is an MLOps platform built to facilitate collaboration and reproducibility across the machine learning development lifecycle. Machine learning projects can quickly become a mess without best practices in place to aid developers and scientists as they iterate on models and move them to production.
W&B is lightweight enough to work with whatever framework or platform teams are currently using, but enables teams to quickly start logging their important results to a central system of record. On top of this system of record, W&B has built visualization, automation, and documentation capabilities for better debugging, model tuning, and project management.
Here's a YouTube video with an overview of Weights & Biases.



W&B Installation & Authentication

To start using W&B, you first need to install the Python package (if it's not already available):
pip install wandb
Once it's installed, authenticate your user account by logging in through the CLI or SDK. You should have received an email inviting you to sign up for the platform, after which you can obtain your API token (the API token is in the "Settings" section under your profile):
wandb login <YOUR API TOKEN>
OR through Python:
wandb.login(host=os.getenv("WANDB_BASE_URL"), key=os.getenv("WANDB_API_KEY"))
In headless environments, you can instead define the WANDB_API_KEY environment variable.
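For example, here's a minimal sketch of authenticating in a headless job via environment variables (the base URL below is the one from this guide; the key itself would be injected by your scheduler or CI, e.g. export WANDB_API_KEY=<YOUR API TOKEN>):
import os
import wandb

# Assumes WANDB_API_KEY is already set in the environment by your scheduler/CI;
# wandb.login() picks it up without prompting for interactive input.
os.environ.setdefault("WANDB_BASE_URL", "https://welocalize.wandb.io")
wandb.login()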
Once you are logged in, you are ready to track your workflows!

Track any Python process or experiment with W&B's Experiment Tracking 🍽

At the core of W&B is a Run, which is a logged unit of execution of Python code. A Run captures the entire execution context of that unit: Python library versions, hardware info, system metrics, git state, etc. To create a run, call wandb.init(). There are a number of important arguments you can pass to wandb.init() to provide additional context for the run and make it possible to better organize your runs, thus facilitating navigation on the W&B front end:
import wandb

wandb.init(project="my-sample-project",
           entity="<enter team name>",  # Team
           group='my_group',            # for organizing runs (e.g. distributed training)
           job_type='training',         # for organizing runs (e.g. preprocessing vs. training)
           config={'hyperparam1': 24,   # Hyperparams and other config
                   'hyperparam2': 'resnet'})
See the full documentation for wandb.init for other arguments to customize its behavior.

What Can I Log and How Do I Log It?

Within a run context, you can log all sorts of useful info such as metrics, visualizations, charts, and interactive data tables explicitly with wandb.log. Here is a comprehensive guide to wandb.log and its API docs.

Scalar Metrics

Scalar metrics can be logged by passing them to wandb.log as a dictionary keyed by name.
wandb.log({"my_metric": some_scalar_value})
Each time wandb.log is called, W&B increments the run's intrinsic _step variable, which is used by default as the x-axis of all the run's metrics charts.
💡 If you call wandb.log once per epoch, the intrinsic _step value will represent the epoch count, but if you call wandb.log at other times (e.g., in validation or testing loops) the meaning of _step will not be clear. In these cases, you can pass a step manually by adding the step=my_int_variable parameter to your wandb.log call. This gives you full control over the resolution of your charts.
In PyTorch Lightning modules, for example, you may want to set step=trainer.global_step. The best practice is to pack all your step metrics into a single dictionary and log them in one go rather than making multiple wandb.log calls per step.
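As a minimal sketch (train_one_step, validate, and num_steps are placeholders for your own training code), pack each step's metrics into one dictionary and pass the step explicitly:
for global_step in range(num_steps):
    train_loss = train_one_step()   # placeholder training call
    val_loss = validate()           # placeholder validation call
    # One wandb.log call per step, with all metrics in a single dictionary
    wandb.log({"train/loss": train_loss, "val/loss": val_loss}, step=global_step)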


You will notice that if you log a scalar metric multiple times in a run, it will not only appear as a line chart with the _step as the x-axis, but it will also appear in the Runs Table. The value shown in the Runs Table is the summary metric, which defaults to the last value logged during the course of the run. You can change this behavior by explicitly setting the summary metric of the run using the run.summary object (i.e., run.summary["my_metric_name"]=some_value). This is useful if you want to compare runs according to different aggregations (e.g. mean, max, min) as opposed to simply using the last value logged:
wandb.init()

for i in range(5):
    wandb.log({"my_metric": i})

wandb.summary["my_metric"] = 2 # 2 instead of the default 4

wandb.finish()
The W&B experiment tracking dashboard offers easier comparisons across different runs with the Run Comparer visualization, where you can use the "diff only" toggle to easily look at the rows with different values across runs.



Distributed Training

W&B supports logging distributed training experiments. In distributed training, models are trained using multiple GPUs in parallel. W&B supports two patterns to track distributed training experiments:
  1. One process: Initialize W&B (wandb.init) and log experiments (wandb.log) from a single process. This is a common solution for logging distributed training experiments with the PyTorch Distributed Data Parallel (DDP) Class. In some cases, users funnel data over from other processes using a multiprocessing queue (or another communication primitive) to the main logging process.
  2. Many processes: Initialize W&B (wandb.init) and log experiments (wandb.log) in every process. Each process is effectively a separate experiment. Use the group parameter when you initialize W&B (wandb.init(group='group-name')) to define a shared experiment and group the logged values together in the W&B App UI.
Here's the detailed guide on how to log distributed training experiments.
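As a rough sketch of the "many processes" pattern (assuming a hypothetical RANK variable set by your launcher, e.g. torchrun), each process starts its own run but shares a group:
import os
import wandb

rank = int(os.environ.get("RANK", 0))  # hypothetical: set by your distributed launcher

wandb.init(project="my-sample-project",
           group="ddp-experiment-1",   # all processes share one experiment group
           job_type="training",
           name=f"rank-{rank}")        # optional: distinguish processes by rank

wandb.log({"rank": rank})
wandb.finish()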

Visualize and query dataframes via W&B Tables

Tables are a special wandb Data Type that allows you to log data, including other wandb Data Types, into an interactive dataframe in the workspace. This is especially useful for logging model predictions in order to filter them and inspect errors. To log a table, you can add data row by row, or pass a pandas dataframe or Python lists. The elements of the dataframe can be any wandb Data Type (e.g. wandb.Image, wandb.Html, wandb.Plotly) or simple scalar or text values:
# Add data as a list of lists or pandas dataframe
my_data = [
    [0, wandb.Image("img_0.jpg"), 0, 0],
    [1, wandb.Image("img_1.jpg"), 8, 0],
    [2, wandb.Image("img_2.jpg"), 7, 1],
    [3, wandb.Image("img_3.jpg"), 1, 1],
]
# Create a wandb.Table() with corresponding columns
columns = ["id", "image", "prediction", "truth"]
test_table = wandb.Table(data=my_data, columns=columns)

# Add data incrementally
for img_id, img in enumerate(mnist_test_data):
    true_label = mnist_test_data_labels[img_id]
    guess_label = my_model.predict(img)
    test_table.add_data(img_id, wandb.Image(img),
                        guess_label, true_label)

wandb.log({"test_table": test_table})
Use tables to log validation results, sample predictions, or model errors, not entire training datasets. They can handle up to 200k rows, but UI performance will vary depending on how many rich media types you have embedded. Here is a comprehensive guide to logging tables.
Note on Tables: when logging tables, you will see wandb.summary["my_table_name"] in the workspace, as below. This is a weave expression used to query the logged data in W&B and render it appropriately. Read more about weave here. The upshot is that W&B by default only renders the last version of a table (the summary one) logged in a run. So if you log tables multiple times throughout a run, you will only see the last one by default.


Track and version any serialized data via Artifact Tracking and Versioning

Artifacts enable you to track and version any serialized data as the inputs and outputs of runs. This can be datasets (e.g. image files), evaluation results (e.g. heatmaps), or model checkpoints. W&B is agnostic to the formats or structure of the data you want to log as an artifact.

Logging Artifacts

To log an artifact, you first create an Artifact object with a name, type, and optionally a description and metadata dictionary. You can then add any of these to the artifact object:
  • local files
  • local directories
  • wandb Data Types (e.g. wandb.Plotly or wandb.Table), which will render alongside the artifact in the UI
  • remote files and directories (e.g. s3 buckets)
# 1. Log a dataset version as an artifact
import wandb
import os

# Initialize a new W&B run to track this job
run = wandb.init(project="artifacts-quickstart", job_type="dataset-creation")

# Create a sample dataset to log as an artifact
with open('my-dataset.txt', 'w') as f:
    f.write('Imagine this is a big dataset.')

# Create a new artifact, which is a sample dataset
dataset = wandb.Artifact('my-dataset', type='dataset')
# Add files to the artifact, in this case a simple text file
dataset.add_file('my-dataset.txt')
# Log the artifact to save it as an output of this run
run.log_artifact(dataset)

wandb.finish()
Each time you log this artifact, W&B will checksum the file assets you add to it and compare them to previous versions of the artifact. If there is a difference, a new version will be created, indicated by the version alias v0, v1, v2, etc. Users can optionally add or remove additional aliases through the UI or API. Aliases are important because they uniquely identify an artifact version, so you can use them to pull down your best model, for example.
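For example, here's a minimal sketch of attaching extra aliases when logging (the alias names "best" and "staging" are arbitrary examples):
# Custom aliases are added on top of the automatic version alias (v0, v1, ...) and "latest"
run.log_artifact(dataset, aliases=["best", "staging"])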


Consuming Artifacts

To consume an artifact, execute the following:
import wandb
run = wandb.init()
# Indicate we are using a dependency
artifact = run.use_artifact('dummy-team/that_was_easy/my-dataset:v3', type='dataset')
artifact_dir = artifact.download()

Tracking Artifacts By Reference

You may already have large datasets sitting in a cloud object store like s3 and just want to track which versions of those datasets your runs are using, plus any other metadata associated with those datasets. You can do so by logging these artifacts by reference, in which case W&B only tracks the checksums and metadata of an artifact and does not copy the entire data asset to W&B. Here are some more details on tracking artifacts by reference.
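Here is a minimal sketch of logging a reference artifact (the s3 path is a placeholder):
import wandb

run = wandb.init(project="artifacts-quickstart", job_type="dataset-creation")
dataset = wandb.Artifact('my-reference-dataset', type='dataset')
# W&B stores only checksums and metadata; the data itself stays in your bucket
dataset.add_reference('s3://my-bucket/datasets/v1/')  # placeholder bucket path
run.log_artifact(dataset)
run.finish()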
With artifacts you can now refer to arbitrary data assets through durable, simple names and aliases (similar to how you deal with Docker containers). This makes it really easy to hand off these assets between people and processes and to see the lineage of all data, models, and results.

House staged/candidate models via W&B's Model Registry

Model Registry allows you to:
  • Bookmark your best model versions for each machine learning task.
  • Automate downstream processes and model CI/CD.
  • Move model versions through the ML lifecycle, from staging to production.
  • Track a model's lineage and audit the history of changes to production models.
Here's a getting started guide for Model Registry
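As a rough sketch (assuming a registered model named "My Classifier" has already been created in your team's registry; the names and checkpoint path are placeholders), you can link a logged model version into the registry from code:
import wandb

run = wandb.init(project="my-sample-project", job_type="training")

# Log a model checkpoint as an artifact (placeholder file path)
model_art = wandb.Artifact('my-model', type='model')
model_art.add_file('model.ckpt')
run.log_artifact(model_art)

# Link this version into the Model Registry entry "My Classifier"
run.link_artifact(model_art, '<enter team name>/model-registry/My Classifier')
run.finish()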

Tune Hyperparameters via W&B Sweeps

Anything logged to wandb.config appears as a column in the runs table and is treated as a hyperparameter in W&B. These hyperparameters can be viewed dynamically in a Parallel Coordinates Chart, which you can add and manipulate in a workspace. You can edit this chart to display different hyperparameters or different metrics. The lines in the chart are different runs which have "swept" through the hyperparameter space. You can also plot a Parameter Importance chart to get a sense of which hyperparameters are most important or correlated with the target metric. These importances are calculated using a random forest trained in your browser! Here are docs on the Parallel Coordinates Plot and the Parameter Importance Plot.


W&B provides a mechanism for automating hyperparameter search through W&B Sweeps. Sweeps allow you to configure a large set of experiments across a pre-specified hyperparameter space. To implement a sweep, you just need to:
  1. Add wandb.init() to your training script, ensuring that all hyperparameters are passed to your training logic via wandb.config.
  2. Write a YAML file specifying your hyperparameter search, i.e. the search method and the hyperparameter distributions and values to search over.
  3. Run the sweep controller, which runs in W&B, through wandb.sweep or through the UI. The controller will delegate new hyperparameter values to the wandb.config of the running agents.
  4. Run agents with wandb.agent on however many machines you want to use for the experiments.
The agents will execute the training script, replacing wandb.config with the queued hyperparameter values that the controller keeps track of.
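As a minimal sketch (the metric and parameter names are placeholders, and train stands in for your actual training function), the YAML from step 2 can equivalently be expressed as a Python dictionary:
import wandb

sweep_config = {
    "method": "bayes",
    "metric": {"name": "val/loss", "goal": "minimize"},
    "parameters": {
        "learning_rate": {"min": 1e-5, "max": 1e-2},
        "batch_size": {"values": [16, 32, 64]},
    },
}

def train():
    # wandb.agent injects the chosen hyperparameters into wandb.config
    with wandb.init() as run:
        lr = wandb.config.learning_rate
        bs = wandb.config.batch_size
        # ... your training and wandb.log calls go here ...

sweep_id = wandb.sweep(sweep_config, project="my-sample-project")
wandb.agent(sweep_id, function=train, count=10)  # run 10 trials on this machine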
If you prefer to use other hyperparameter optimization frameworks, W&B has integrations with Ray Tune, Optuna, and others.

Organize visualizations and share your findings with collaborators via W&B Reports

Reports are flexible documents you can build on top of your W&B projects. You can easily embed any asset (chart, artifact, table) logged in W&B into a report alongside markdown, LaTeX, code blocks, etc. You can create rich documentation from your logged assets without copy-pasting static figures into Word docs or managing Excel spreadsheets. Reports are live: as new experiments run, they will update accordingly. This report you are viewing is a good example of what you can put into them.

Programmatic Reports

It may be useful to programmatically generate a report, such as for a standard model comparison analysis you repeat when retraining models, or after a large hyperparameter search. The W&B Python SDK provides a means of programmatically generating reports under wandb.apis.reports. Check out the docs and this quickstart notebook.
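Here is a rough sketch using the wandb.apis.reports interface (the project, title, and block contents are placeholders; exact class names can vary between SDK versions, so check the docs linked above):
import wandb.apis.reports as wr

report = wr.Report(
    project="my-sample-project",
    title="Weekly retraining comparison",
    description="Auto-generated after each retraining job.",
)
report.blocks = [
    wr.H1("Results"),
    wr.P("This report was generated programmatically."),
]
report.save()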

Track and evaluate LLM applications via W&B Weave

Weave is a lightweight toolkit for tracking and evaluating LLM applications.
The goal is to bring rigor, best-practices, and composability to the inherently experimental process of developing AI applications, without introducing cognitive overhead.
Weave can be used to:
  • Log and debug language model inputs, outputs, and traces
  • Build rigorous, apples-to-apples evaluations for language model use cases
  • Organize all the information generated across the LLM workflow, from experimentation to evaluations to production
A quickstart guide to Weave can be found here.
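Here is a minimal sketch of tracing a function with Weave (the project name and function are placeholders):
import weave

weave.init("my-llm-project")  # placeholder project name

@weave.op()
def build_prompt(question: str) -> str:
    # Inputs and outputs of decorated functions are logged as traces
    return f"Answer concisely: {question}"

build_prompt("What does W&B Weave do?")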

Other Useful Resources

Import/Export API

All data logged to W&B can be accessed programmatically through the import/export API (also called the public API). This lets you pull down run and artifact data and filter and manipulate it however you please in Python.
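For example, here is a minimal sketch of pulling run data with the public API (the entity/project path and metric name are placeholders):
import wandb

api = wandb.Api()
runs = api.runs("<enter team name>/my-sample-project")  # placeholder path
for run in runs:
    # Summary metrics and config are available per run
    print(run.name, run.summary.get("my_metric"), run.config)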

Slack Alerts

You can set Slack alerts within a run to trigger when things happen in your training/evaluation scripts. For example, you may want to be notified when training is done or when a metric exceeds a certain value.
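For example, here is a minimal sketch using wandb.alert (the metric and threshold are placeholders; the alert goes to whatever Slack channel or email you configure in your W&B settings):
import wandb

run = wandb.init(project="my-sample-project")

accuracy = 0.95   # placeholder metric value
threshold = 0.9   # placeholder threshold

if accuracy > threshold:
    wandb.alert(
        title="Accuracy threshold crossed",
        text=f"Accuracy {accuracy} exceeded threshold {threshold}",
    )

run.finish()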

FAQ

1. Why can't I log in to W&B?

Make sure you have been properly authenticated to W&B or have defined the correct API key before making any calls to wandb.init().


2. I didn't name my run. Where is the run name coming from?
If you do not explicitly name your run, a random run name will be assigned to the run to help identify the run in the UI. For instance, random run names will look like "pleasant-flower-4" or "misunderstood-glade-2".
3. How can I configure the name of the run in my training code?
At the top of your training script when you call wandb.init, pass in an experiment name, like this:
wandb.init(name="my_awesome_run")
4. If wandb crashes, will it crash my run?
It is extremely important to us that we never interfere with your training runs. We run wandb in a separate process to make sure that if wandb somehow crashes, your training will nevertheless continue to run.
5. Does W&B support Distributed training?
Yes, W&B supports distributed training, here's the detailed guide on how to log distributed training experiments.
6. Can I use PyTorch profiler with W&B?
Here's a detailed report that walks through using the PyTorch profiler with W&B along with this associated Colab notebook.
7. What is a service account and why is it useful?
A service account is an API key that has permissions to write to your team, but is not associated with a specific user.
Among other things, service accounts are useful for tracking automated jobs logged to wandb, like periodic retraining, nightly builds, and so on.
Here are more details on service accounts and how to create one.
8. How do I get the list of the teams I am a part of and the list of all projects in a team?
You can use the W&B API to fetch the list of all teams and projects.
This script fetches the list of teams a user is a part of:
import wandb

# Ensure you are logged in
wandb.login()

# Get the current user
user = wandb.api.viewer()

# Access the teams the user is part of
teams = user['teams']
print("Teams:", teams)
The following script gives the list of projects in a particular team:
import wandb

api = wandb.Api()
projects = api.projects(entity='your-entity')
for p in projects:
    print(p.name)
9. How do I stop wandb from writing to my terminal or my jupyter notebook output?
Set the environment variable WANDB_SILENT to true.
In Python
os.environ["WANDB_SILENT"] = "true"
Within Jupyter Notebook
%env WANDB_SILENT=true
With Command Line
WANDB_SILENT=true
