
Form Bio - PoV Guide

One stop shop for everything you need to test out during the W&B Pilot.


Weights and Biases (W&B) 💫

Weights & Biases (W&B) is an MLOps platform built to facilitate collaboration and reproducibility across the machine learning development lifecycle. Machine learning projects can quickly become a mess without some best practices in place to aid developers and scientists as they iterate on models and move them to production.
W&B is lightweight enough to work with whatever framework or platform teams are currently using, while enabling teams to quickly start logging their important results to a central system of record. On top of this system of record, W&B has built visualization, automation, and documentation capabilities for better debugging, model tuning, and project management.

PoC Workshop Sessions (Customize this section)

Date | Time | Session | Recording Link | Topics Discussed
     |      | Models Demo | https://us-39259.app.gong.io/e/c-share/?tkn=1h3ad09x0zx0j1kaes9bwgt5dl | W&B Demo
     |      | W&B Onboarding | "Coming soon" | W&B Getting Started and PoC Overview Guide


W&B Installation & Authentication

Track any Python process or experiment with W&B's Experiment Tracking 🍽 (a minimal end-to-end sketch follows this list)

Visualize and query dataframes via W&B Tables

Track and version any serialized data via W&B Artifacts Tracking and Versioning

House staged/candidate models via W&B's Registry

Tune Hyperparameters via W&B Sweeps

Organize visualizations and share your findings with collaborators via W&B Reports
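
If you're starting from zero, here's a minimal end-to-end sketch of the install, authenticate, and track loop that the sections above build on. The project name and logged metrics below are placeholders, not part of any existing setup:

# pip install wandb
import random

import wandb

wandb.login()  # prompts for an API key on first use, or reads WANDB_API_KEY

# Start a run; the project name here is a placeholder
run = wandb.init(project="my-pov-project", config={"lr": 1e-3, "epochs": 5})

for epoch in range(run.config["epochs"]):
    # Stand-in for a real training step: log a dummy loss per epoch
    wandb.log({"epoch": epoch, "loss": random.random()})

run.finish()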

Other Useful Resources

Import/Export API

All data logged to W&B can be accessed programmatically through the import/export API (also called the public API). This lets you pull down run and artifact data, then filter and manipulate it however you please in Python.
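
As a minimal sketch (the entity, project, and metric names are placeholders), pulling down and filtering run data looks roughly like this:

import wandb

api = wandb.Api()

# Fetch finished runs from a project; filters use MongoDB-style queries
runs = api.runs("my-entity/my-project", filters={"state": "finished"})
for run in runs:
    # summary holds final metric values; config holds the hyperparameters
    print(run.name, run.summary.get("loss"), run.config.get("lr"))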

Slack Alerts

You can set Slack alerts within a run that trigger when things happen in your training / evaluation scripts. For example, you may want to be notified when training is done or when a metric exceeds a certain value.
Details on enabling these alerts on your dedicated deployment can be found here.
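
As a sketch, triggering an alert from a script looks roughly like this (the metric value and threshold are placeholders):

import wandb
from wandb import AlertLevel

run = wandb.init(project="my-pov-project")

accuracy = 0.92  # stand-in for a value computed by your evaluation code
threshold = 0.9
if accuracy > threshold:
    # Fires the alert to the Slack channel configured in your W&B settings
    wandb.alert(
        title="Accuracy threshold exceeded",
        text=f"Accuracy {accuracy} crossed the threshold {threshold}",
        level=AlertLevel.INFO,
    )

run.finish()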

FAQs

W&B Models

1. I didn't name my run. Where is the run name coming from?
If you do not explicitly name your run, a random name is assigned to help identify it in the UI. For instance, random run names look like "pleasant-flower-4" or "misunderstood-glade-2".
2. How can I configure the name of the run in my training code?
At the top of your training script, when you call wandb.init, pass in a run name, like this:
import wandb

wandb.init(name="my_awesome_run")
3. If wandb crashes, will it possibly crash my training run?
It is extremely important to us that we never interfere with your training runs. We run wandb in a separate process to make sure that if wandb somehow crashes, your training will nevertheless continue to run.
4. Why is a run marked crashed in W&B when it’s training fine locally?
This is likely a connection problem — if your server loses internet access and data stops syncing to W&B, we mark the run as crashed after a short period of retrying.
5. Does W&B support Distributed training?
Yes, W&B supports distributed training; here's the detailed guide on how to log distributed training experiments.
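The guide covers the options in depth; one common pattern, sketched below under the assumption of PyTorch DDP launched with torchrun (the project and group names are placeholders), is to log from the rank-0 process only:

import os

import wandb

# Under torchrun, every process is assigned a RANK environment variable
rank = int(os.environ.get("RANK", "0"))

if rank == 0:
    # Only the main process creates a run; the other ranks stay silent
    wandb.init(project="my-pov-project", group="ddp-experiment")

for step in range(100):
    loss = 1.0 / (step + 1)  # stand-in for your real training step
    if rank == 0:
        wandb.log({"loss": loss})

if rank == 0:
    wandb.finish()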
6. Can I use PyTorch profiler with W&B?
Yes. Here's a detailed report that walks through using the PyTorch profiler with W&B, along with an associated Colab notebook.
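The report and notebook are the authoritative walkthrough; as a rough sketch of one possible approach (our assumption, not necessarily what the notebook does), you can capture a trace with torch.profiler and version the trace directory as a W&B artifact:

import torch
from torch.profiler import ProfilerActivity, profile, tensorboard_trace_handler

import wandb

run = wandb.init(project="my-pov-project")

# Capture a CPU trace; the handler writes trace files into ./trace
with profile(
    activities=[ProfilerActivity.CPU],
    on_trace_ready=tensorboard_trace_handler("./trace"),
):
    # Stand-in workload; replace with your training step
    torch.matmul(torch.randn(512, 512), torch.randn(512, 512))

# Version the captured trace files alongside the run
artifact = wandb.Artifact("profiler-trace", type="profile")
artifact.add_dir("./trace")
run.log_artifact(artifact)
run.finish()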
7. How do I stop wandb from writing to my terminal or my jupyter notebook output?
Set the environment variable WANDB_SILENT to true.
In Python
os.environ["WANDB_SILENT"] = "true"
Within Jupyter Notebook
%env WANDB_SILENT=true
From the command line
export WANDB_SILENT=true
8. How do I track data not stored in W&B?
Track files saved outside of W&B, such as in an Amazon S3 bucket, a GCS bucket, an HTTP file server, or even an NFS share, by using reference artifacts. A short sketch follows below.
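
As a sketch (the bucket path and artifact names are placeholders), tracking an S3 prefix by reference looks like this:

import wandb

run = wandb.init(project="my-pov-project")

# add_reference records checksums and metadata but leaves the bytes in S3
artifact = wandb.Artifact("training-data", type="dataset")
artifact.add_reference("s3://my-bucket/datasets/train/")
run.log_artifact(artifact)
run.finish()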
9. How can W&B be used in my CI/CD pipeline, such as GitHub Actions?
Check out the webinar for a deeper dive: Optimizing CI/CD model management and evaluation workflows.
You can trigger downstream CI/CD jobs with webhook automations. First, store the credentials the receiving service expects as a team secret:
  • If the receiving service requires authentication for incoming webhooks, generate the required token or API key. If necessary, save the sensitive string securely, such as in a password manager.
  • Log in to W&B and go to the team's Settings page.
  • In the Team Secrets section, click New secret.
  • Provide a name for the secret using letters, numbers, and underscores (_).
  • Paste the sensitive string into the Secret field.
  • Click Add secret.
Then, create the webhook (a hypothetical receiving endpoint is sketched after these steps):
  • Log in to W&B and go to the team's Settings page.
  • In the Webhooks section, click New webhook.
  • Provide a name for the webhook.
  • Provide the endpoint URL for the webhook.
  • Click Test. W&B attempts to connect to the webhook's endpoint using the credentials you configured. If you provided a payload, W&B sends it.
  • Now you can create an automation that uses the webhook.
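
For illustration only, here's a hypothetical sketch of the receiving endpoint side (using Flask, and assuming the secret configured above arrives as a bearer token; the route and variable names are made up):

import os

from flask import Flask, request

app = Flask(__name__)
EXPECTED_TOKEN = os.environ["WEBHOOK_TOKEN"]  # the same secret stored in W&B

@app.route("/wandb-webhook", methods=["POST"])
def handle_webhook():
    # Reject requests that don't carry the shared token
    auth = request.headers.get("Authorization", "")
    if auth != f"Bearer {EXPECTED_TOKEN}":
        return "unauthorized", 401
    payload = request.get_json(silent=True) or {}
    # Kick off the downstream CI/CD job here, e.g. trigger a workflow
    print("Received W&B event:", payload)
    return "ok", 200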
10. Are we limited in what charts we can add to Reports?
On the demo call you asked whether you can create Bokeh and other custom-code charts in Reports. There is a way to do so, highlighted in the docs under "Log custom HTML to W&B Tables": W&B supports logging interactive charts from Plotly and Bokeh as HTML and adding them to Tables. A sketch follows below.