Build cutting-edge models in half the time
Tier 1: $50/user/mo from 250 to 5,000 cumulative tracked hours*
Tier 2: $100/user/mo from 5,000 to 10,000 cumulative tracked hours*
Tier 3: $150/user/mo from 10,000 to 15,000 cumulative tracked hours*
One team, up to 10 users
Email and chat support
100 GB storage and artifacts tracking included. For additional storage, see prices.
For personal projects only. Corporate use not allowed.
Unlimited tracked hours*
10 GB storage and artifacts tracking included. For additional storage, see prices.
Run a W&B server locally on any machine with Docker and Python installed
Unlimited tracked hours*
Dedicated Technical Account Manager to ensure success
Dedicated support channel
Custom storage plan
View-only seats available
Service account for CI workflows
Run a W&B server locally on your own infrastructure with a free enterprise trial license
Tired of pasting results into a spreadsheet? Track models automatically.
Debug model issues quickly with logs, charts, and tables of tracked results.
Quickly share findings and discuss model results with your team.
Use W&B’s reliable system of record to make all models reproducible.
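At its core, "tracking models automatically" means recording, for every run, the configuration once and a time series of metrics as training proceeds. A minimal, library-free sketch of that record (illustrative only; this is not the W&B API, and the class and field names here are invented for the example):

```python
import time

class Run:
    """Minimal stand-in for an experiment-tracking run (illustrative,
    not the W&B API): stores config once and appends metric rows."""

    def __init__(self, config):
        self.config = dict(config)   # hyperparameters, frozen at run start
        self.history = []            # one row per logged step

    def log(self, metrics):
        # Append a timestamped row, like logging a step to a dashboard.
        self.history.append({"_timestamp": time.time(), **metrics})

    def summary(self):
        # Last logged value per metric -- what a results table would show.
        out = {}
        for row in self.history:
            out.update({k: v for k, v in row.items() if k != "_timestamp"})
        return out

run = Run({"lr": 0.01, "batch_size": 32})
for step in range(3):
    run.log({"loss": 1.0 / (step + 1)})

print(run.summary())
```

The point of centralizing records like this is that the summary table and the full step-by-step history both survive the run, so nothing needs to be pasted into a spreadsheet by hand.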
Storage and Artifacts are free up to 100 GB, then usage is billed monthly.
You have complete control of what is logged in W&B. You can curate and delete data from the system at any time.
The hosting costs of files saved to W&B servers
100 GB included free
$0.08 per GB up to 10 TB
$0.06 per GB up to 100 TB
$0.05 per GB up to 1000 TB
Files explicitly tracked with artifacts for reproducibility
100 GB included free
$0.05 per GB up to 10 TB
$0.03 per GB up to 100 TB
$0.02 per GB up to 1000 TB
Custom plans available. Contact sales
How our lightweight, interoperable tools empower your ML team
Develop better models faster
Use the single, central system of record to save all the relevant metadata for your models automatically, so you can focus on model training. Spend more time in a flow state.
Use live dashboards, with system metrics and terminal logs for each experiment, to understand bottlenecks and debug model training quickly, without hopping between tools.
Quickly understand what architecture and hyperparameter choices work, and focus on training new models. Avoid slowly digging through scattered files of manually tracked results.
How are the latest experiments doing, compared to previous model versions?
Is this model running out of GPU memory? What are the system bottlenecks?
How does this hyperparameter affect accuracy across different classes?
What do sample predictions look like from this model?
Capture valuable insights centrally
Every time you run a new experiment, W&B captures the changes you made and gives you a place to jot down notes. Quickly compare your latest results with previous baselines.
Automatically track the exact version of the code, hyperparameters, and even the dataset your model trained on. Trace back exactly where your resulting models came from.
Annotate findings inside the central W&B system, so your models can speak for themselves. Use interactive notes, comments, and reports to clearly explain your research.
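Mechanically, capturing "the exact version of the code, hyperparameters, and even the dataset" comes down to storing three identifiers with every run: a commit hash, the config, and a content hash of the data. A hedged sketch of that bookkeeping (the function and field names are illustrative, not W&B's schema):

```python
import hashlib
import json

def fingerprint(data_bytes):
    """Content hash -- identifies a dataset regardless of its filename."""
    return hashlib.sha256(data_bytes).hexdigest()[:12]

def run_record(git_commit, config, dataset_bytes):
    """Everything needed to trace a model back to its inputs."""
    return {
        "code_version": git_commit,               # e.g. output of `git rev-parse HEAD`
        "config": config,                         # hyperparameters as passed
        "dataset_id": fingerprint(dataset_bytes), # content-addressed data version
    }

rec = run_record("a1b2c3d", {"lr": 3e-4}, b"dog,cat\n1,0\n")
print(json.dumps(rec, indent=2))
```

Because the dataset identifier is derived from content rather than a filename, two runs trained on byte-identical data get the same `dataset_id` even if the files were stored in different places.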
Where are all the model files stored for this project?
What have we tried already, and what avenues should we explore next for improving this model?
What are the key findings from this research?
Move quickly on big projects, and make handoffs seamless
Train on whatever compute is available: AWS, GCP, Azure, or the GPU box in the office. All results are organized in a central place.
Pick the tools that solve your problems best. With W&B, you're not locked into an end-to-end platform. We pride ourselves on making our tools play well with other systems.
Spend more time in a flow state, and less time doing tedious tracking. With W&B, you can focus on the hard machine learning problems, and let us take care of tracking all the details to make the models reproducible.
How should new team members get started contributing to this ML project?
When someone leaves the team, how is their research saved and communicated?
How are all our projects doing, across the organization? Are any particular projects roadblocked or stuck? Are certain projects performing better than expected?
Version and track every artifact in your model pipeline
Know where a model comes from end to end, and what is running where.
Reliably capture all changes to the data you're using to train your models.
Save datasets in any system: GCP, AWS, Azure, or even uploaded directly to W&B servers.
Clearly identify what downstream steps in your pipeline were affected by a change in data.
Capture all of your trained models in one central system.
Maintain a clear picture of all the data, preprocessing, and evaluation that was done on each model.
Trace back the lineage of any production model to the exact code and data that it was trained on.
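The lineage tracing described above is, under the hood, a directed graph from dataset versions through training runs to models: record which inputs produced which outputs, then walk the graph in either direction. A minimal sketch (illustrative data structures, not W&B's artifact API):

```python
class Lineage:
    """Tiny artifact-lineage graph: record which inputs produced which
    outputs, then walk it to answer 'what was affected?' questions."""

    def __init__(self):
        self.produced_by = {}  # artifact -> set of input artifacts

    def record(self, output, inputs):
        self.produced_by[output] = set(inputs)

    def upstream(self, artifact):
        """Everything this artifact was (transitively) derived from --
        e.g. the exact data a production model was trained on."""
        seen, stack = set(), [artifact]
        while stack:
            for parent in self.produced_by.get(stack.pop(), ()):
                if parent not in seen:
                    seen.add(parent)
                    stack.append(parent)
        return seen

    def downstream(self, artifact):
        """Everything (transitively) derived from this artifact --
        e.g. models affected by a corrupted dataset."""
        seen, frontier = set(), {artifact}
        while frontier:
            frontier = {out for out, ins in self.produced_by.items()
                        if ins & frontier and out not in seen}
            seen |= frontier
        return seen

g = Lineage()
g.record("dataset:v2", ["dataset:v1"])           # relabeled chunk of data
g.record("model:a", ["dataset:v2", "code:abc"])
g.record("model:b", ["dataset:v1", "code:abc"])

print(g.downstream("dataset:v1"))  # v2 plus both models trained on it
```

Walking `downstream` from a corrupted dataset answers "what models were affected?"; walking `upstream` from a production model answers "what exact code and data produced this?".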
What data was this production model trained on?
How did relabeling a chunk of data affect the accuracy of this model?
This data was corrupted — what downstream models were affected by this issue?
How did changing this preprocessing step affect model accuracy?