
Indeed / W&B Commonly Asked Questions

Answers to questions from Indeed's butterfly team.
Hello Indeed team! We wanted to share answers to some of the questions Mario relayed to us ahead of the enablement session, so you can focus on the material at hand and any follow-up questions. Please send any follow-ups or feedback to us here in the comments or in Slack through our joint channel, #wandb-indeed.

Questions

How can I create an account and get started?
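Create an account on your W&B instance (wandb.ai for the SaaS offering, or your team's SSO if one is set up), copy your API key, and you can start logging in a few lines. A minimal sketch, where the entity and project names (indeed / indeed-demo) are placeholders you would replace with your own:

```python
# One-time setup:
#   pip install wandb
#   wandb login      # paste the API key from your W&B profile page

import wandb

# "indeed" / "indeed-demo" are placeholder entity and project names.
run = wandb.init(
    project="indeed-demo",
    entity="indeed",
    config={"lr": 1e-3, "epochs": 5},   # hyper-parameters to track
)

# Log metrics as training progresses; they stream to the run page live.
for epoch in range(run.config["epochs"]):
    wandb.log({"epoch": epoch, "train_loss": 1.0 / (epoch + 1)})

run.finish()
```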

Are there any restrictions or usage limits?

How do I find a specific model in Weights & Biases?
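One way to do this from code, assuming the model was logged as a W&B Artifact of type "model"; the entity/project/artifact names below are placeholders:

```python
import wandb

api = wandb.Api()

# Fetch a specific model artifact by name and alias ("latest", "v3", or a custom alias).
# "indeed/indeed-demo/ctr-model:latest" is a hypothetical entity/project/artifact path.
artifact = api.artifact("indeed/indeed-demo/ctr-model:latest", type="model")
model_dir = artifact.download()          # local directory containing the model files
print(artifact.name, artifact.version, model_dir)

# Alternatively, search runs by name (or config) to locate where a model was produced.
runs = api.runs("indeed/indeed-demo", filters={"display_name": {"$regex": "ctr"}})
for r in runs:
    print(r.name, r.id, r.url)
```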

How do I find the metadata, configuration & hyper-parameters of a model?
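A sketch using the public API; the run and artifact paths are placeholders. Hyper-parameters and configuration live on the run that trained the model, and any metadata attached to a model artifact travels with the artifact itself:

```python
import wandb

api = wandb.Api()

# Hyper-parameters and other config passed to wandb.init(config=...):
run = api.run("indeed/indeed-demo/<run_id>")   # hypothetical path; use the real run ID
print(run.config)        # dict of hyper-parameters / configuration
print(run.summary)       # final metric values
print(run.metadata)      # system metadata (host, git state, CLI args), if available

# If the model is an artifact, its metadata is attached to the artifact:
artifact = api.artifact("indeed/indeed-demo/ctr-model:latest")
print(artifact.metadata)                 # arbitrary dict attached when the artifact was logged
producer = artifact.logged_by()          # the run that produced this artifact
print(producer.config if producer else None)
```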

How do I visualize/compare the results of two separate models?
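In the UI, selecting both runs in the project workspace overlays their charts automatically. Programmatically, a minimal sketch that pulls both runs' summary metrics side by side (run paths and metric names are placeholders, assumed to exist on both runs):

```python
import wandb
import pandas as pd

api = wandb.Api()

# Hypothetical run paths for the two models being compared.
run_a = api.run("indeed/indeed-demo/baseline_run_id")
run_b = api.run("indeed/indeed-demo/candidate_run_id")

# Side-by-side summary metrics as a small DataFrame.
metrics = ["accuracy", "auc", "loss"]   # assumed metric names
df = pd.DataFrame(
    {r.name: {m: r.summary.get(m) for m in metrics} for r in (run_a, run_b)}
)
print(df)

# Full metric histories can also be pulled for custom plots.
history_a = run_a.history(keys=["loss"])
```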

How do I visualize/compare the results of an offline evaluation?
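Offline evaluation results can be logged to their own run (even well after training), so they sit next to the training runs and can be compared in the same workspace. A sketch with hypothetical metric and segment names:

```python
import wandb

# A dedicated run for an offline evaluation; job_type keeps it filterable in the UI.
run = wandb.init(project="indeed-demo", entity="indeed", job_type="offline-eval")

# Scalar metrics appear on the run page and can be compared across runs.
wandb.log({"offline/auc": 0.87, "offline/precision_at_10": 0.42})

# Per-segment results as a sortable, filterable table.
table = wandb.Table(columns=["segment", "auc", "support"])
table.add_data("new_users", 0.83, 12000)
table.add_data("returning_users", 0.89, 48000)
wandb.log({"offline/eval_table": table})

run.finish()
```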

How do I visualize/compare the results of a single model's multiple hyper-parameter tuning runs?
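W&B Sweeps group the tuning runs for you, and the sweep workspace adds parallel-coordinates and parameter-importance panels automatically. A minimal sketch, assuming a stand-in train() function and a val_loss metric:

```python
import wandb

def train():
    # Stand-in training function; the sweep injects hyper-parameters via wandb.config.
    run = wandb.init()
    lr = wandb.config.lr
    batch_size = wandb.config.batch_size
    val_loss = 1.0 / (lr * batch_size)          # placeholder for a real evaluation
    wandb.log({"val_loss": val_loss})
    run.finish()

sweep_config = {
    "method": "bayes",
    "metric": {"name": "val_loss", "goal": "minimize"},
    "parameters": {
        "lr": {"min": 1e-4, "max": 1e-1},
        "batch_size": {"values": [32, 64, 128]},
    },
}

sweep_id = wandb.sweep(sweep_config, project="indeed-demo", entity="indeed")
wandb.agent(sweep_id, function=train, entity="indeed", project="indeed-demo", count=10)
```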

Can we check the test results for multiple ongoing hyper-parameter tuning runs in a way that makes it easy to tell which parameters seem promising?
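Yes: the sweep's parallel-coordinates and parameter-importance panels update while runs are still in flight. If you also want this programmatically, a sketch that polls a sweep (the path is a placeholder) and prints each run's parameters next to its latest test metric:

```python
import wandb

api = wandb.Api()

# Hypothetical sweep path: entity/project/sweep_id.
sweep = api.sweep("indeed/indeed-demo/abc123xy")

rows = []
for run in sweep.runs:                      # includes running and finished runs
    rows.append({
        "name": run.name,
        "state": run.state,
        "lr": run.config.get("lr"),
        "batch_size": run.config.get("batch_size"),
        "val_loss": run.summary.get("val_loss"),   # latest logged value for ongoing runs
    })

# Sort so the most promising parameter settings appear first.
for row in sorted(rows, key=lambda r: (r["val_loss"] is None, r["val_loss"])):
    print(row)
```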

Can we obtain the "percentage change" for a model's metrics against a chosen baseline? E.g., prod model baseline vs. incumbent model, showing a 5% better accuracy metric.
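One way to get this programmatically is to compute it from the two runs' summary metrics via the public API; the run paths and metric names below are placeholders:

```python
import wandb

api = wandb.Api()

# Hypothetical run paths for the production baseline and the candidate model.
baseline = api.run("indeed/indeed-demo/prod_baseline_run_id")
candidate = api.run("indeed/indeed-demo/candidate_run_id")

metrics = ["accuracy", "auc"]              # assumed metric names
for m in metrics:
    base = baseline.summary.get(m)
    cand = candidate.summary.get(m)
    if base is not None and cand is not None and base != 0:
        pct = 100.0 * (cand - base) / base
        print(f"{m}: {base:.4f} -> {cand:.4f} ({pct:+.1f}%)")
```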

Feedback

I want functionality to share offline test results with others in an easy-to-understand way.

I want to see AUCs per target, with individual up/down percentages for each.
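As an interim approach, per-target AUCs can be logged under separate metric keys so each target gets its own chart and can be compared run to run (and fed into the percentage-change sketch above); the target names here are placeholders:

```python
import wandb

run = wandb.init(project="indeed-demo", entity="indeed", job_type="offline-eval")

# Hypothetical per-target AUCs; one metric key per target keeps them plottable
# and comparable across runs target by target.
per_target_auc = {"apply": 0.81, "click": 0.77, "save": 0.73}
wandb.log({f"auc/{target}": auc for target, auc in per_target_auc.items()})

run.finish()
```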