Deloitte PoC Guide
One-stop shop for everything you need to test out during the PoC.
Created on September 6|Last edited on September 6
Contents:
- Weights and Biases (W&B) 💫
- Workshop Sessions
- Use Cases / Test Cases
- Environment
- Getting Started (SDK Installation and Login)
- Test Cases
  - Test Case 1: End to End Artifacts Tracking
  - Test Case 2: Track Model Training Metrics
  - Test Case 3: Automate the end to end process
  - Test Case 4: Observability view for portfolio of models
- Experiment Tracking 🍽
- Artifact Tracking and Versioning
- W&B Registry
- Interactive Tables
- Hyperparameter Sweeps
- Reports
- FAQs
Weights and Biases (W&B) 💫
Weights and Biases is an MLOps platform built to facilitate collaboration and reproducibility across the machine learning development lifecycle. Machine learning projects can quickly become a mess without some best practices in place to aid developers and scientists as they iterate on models and move them to production.
W&B is lightweight enough to work with whatever framework or platform teams are currently using, but enables teams to quickly start logging their important results to a central system of record. On top of this system of record, W&B has built visualization, automation, and documentation capabilities for better debugging, model tuning, and project management.
Workshop Sessions
| Date | Session | Recording Link | Topics Discussed |
|---|---|---|---|
| Future workshop sessions will be added here | | | |
Use Cases / Test Cases
Environment
The Weights & Biases trial account is hosted here, and everyone should have access. Let us know if you haven't received an invite.
Getting Started (SDK Installation and Login)
To start using W&B, first install the Python package (if it isn't already installed):
pip install wandb
Once it's installed, authenticate your user account by logging in through the CLI or SDK. You should have received an email to sign up to the platform, after which you can obtain your API token (the API token is in the "Settings" section under your profile):
wandb login --host <YOUR W&B HOST URL> <YOUR API TOKEN>
OR through Python:
wandb.login(host=os.getenv("WANDB_BASE_URL"), key=os.getenv("WANDB_API_KEY"))
Once you are logged in, you are ready to track your workflows!
Test Cases
| S No | Capability & Success Criteria | W&B Product Area |
|---|---|---|
| 1 | End-to-End Model Tracking | W&B Artifacts |
| 2 | Track Model Training Metrics | W&B Experiment Tracking |
| 3 | Automate the process by triggering downstream tasks automatically | W&B Automations, W&B Launch, W&B Reports |
| 4 | Observability view for a portfolio of models | W&B Reports |
Test Case 1: End to End Artifacts Tracking
Test Case 2: Track Model Training Metrics
Test Case 3: Automate the end to end process
Test Case 4: Observability view for portfolio of models
W&B Reports help contextualize and document the system of record built by logging diagnostics and results from different pieces of your pipeline. Reports are interactive and dynamic, reflecting filtered run sets logged in W&B. You can add all sorts of assets to a report, including plots, tables, images, code, and nested reports.
More details are in the Reports section below.
Experiment Tracking 🍽
Artifact Tracking and Versioning
W&B Registry
Interactive Tables
Hyperparameter Sweeps
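Sweeps are driven by a configuration that declares the search method, the metric to optimize, and the parameter ranges. The config below is illustrative (parameter names and ranges are examples); creating and running the sweep with `wandb.sweep(...)` and `wandb.agent(...)` requires a logged-in session, so those calls are shown commented out.

```python
# Example sweep configuration; keys follow the wandb sweep config schema,
# but the parameter names and ranges here are placeholders.
sweep_config = {
    "method": "bayes",  # options: "grid", "random", "bayes"
    "metric": {"name": "val_loss", "goal": "minimize"},
    "parameters": {
        "learning_rate": {"min": 1e-4, "max": 1e-1},
        "batch_size": {"values": [16, 32, 64]},
    },
}

# With a logged-in session and a train() function defined:
# sweep_id = wandb.sweep(sweep_config, project="poc-demo")
# wandb.agent(sweep_id, function=train, count=10)
```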
Reports
FAQs
1. I didn't name my run. Where is the run name coming from?
Ans: If you do not explicitly name your run, a random, human-readable name is assigned to help you identify the run in the UI. For instance, random run names look like "pleasant-flower-4" or "misunderstood-glade-2".
2. How can I configure the name of the run in my training code?
Ans: At the top of your training script when you call wandb.init, pass in an experiment name, like this:
wandb.init(name="my_awesome_run")
3. If wandb crashes, will it possibly crash my training run?
Ans: It is extremely important to us that we never interfere with your training runs. We run wandb in a separate process to make sure that if wandb somehow crashes, your training will continue to run. If the internet goes out, wandb will continue to retry sending data to wandb.ai.
4. Why is a run marked crashed in W&B when it’s training fine locally?
Ans: This is likely a connection problem. If your server loses internet access and data stops syncing to W&B, we mark the run as crashed after a short period of retrying.
5. How do I stop wandb from writing to my terminal or my Jupyter notebook output?
Ans: Set the environment variable WANDB_SILENT to true.
In Python
os.environ["WANDB_SILENT"] = "true"
Within Jupyter Notebook
%env WANDB_SILENT=true
With Command Line
WANDB_SILENT=true