Weave panel: `runs.summary["eval_results"]` (12 of 2531 rows shown)
What does the `validate` function in `ImageMask` do?
```python
validate(val: dict) -> bool
```
**Warning: Falling back to gpt-3.5.** These results are sometimes not as good as gpt-4's. The `validate` function in `ImageMask` validates the input data for an image mask. It checks that the input dictionary contains the required key `"mask_data"`, a 2D array representing the predictions; if `"mask_data"` is missing, it raises a `TypeError`. If `"mask_data"` is present, it checks that it is a 2D array whose values are integers between 0 and 255. If the dictionary also contains a `"class_labels"` key, it checks that its value is a dictionary mapping numbers to strings. If everything passes validation, the function returns `True`.
['https://github.com/wandb/wandb/blob/main/wandb/sdk/data_types/helper_types/image_mask.py', 'https://github.com/wandb/wandb/blob/main/wandb/apis/reports/validators.py', 'https://docs.wandb.ai/ref/python/data-types/imagemask', 'https://github.com/wandb/wandb/blob/main/wandb/apis/reports/validators.py', 'https://docs.wandb.ai/ref/python/data-types/boundingboxes2d']
[0.39795262, 0.47428107, 0.4888879, 0.49892056, 0.5175265]
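The checks described above can be sketched as a standalone function. This is a hypothetical reimplementation for illustration, not the actual wandb source (the real `validate` lives in `image_mask.py` and operates on numpy-style arrays):

```python
def validate(val: dict) -> bool:
    """Sketch of the ImageMask validation rules described above (hypothetical)."""
    if "mask_data" not in val:
        raise TypeError('missing required key "mask_data"')
    mask = val["mask_data"]
    # mask_data must be a 2D array (here: a list of rows) of ints in [0, 255]
    if not isinstance(mask, list) or not all(isinstance(row, list) for row in mask):
        return False
    for row in mask:
        for v in row:
            if not isinstance(v, int) or not (0 <= v <= 255):
                return False
    # optional class_labels: a dict mapping numbers to strings
    labels = val.get("class_labels")
    if labels is not None:
        if not isinstance(labels, dict):
            return False
        if not all(isinstance(k, int) and isinstance(v, str) for k, v in labels.items()):
            return False
    return True
```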
How do I configure a command key in a YAML file to use custom CLI commands with sweeps?
```yaml
program: script.py
command:
  - ${env}
  - python3
  - ${program}
  - ${args}
```
To use custom commands with W&B Sweeps, edit the `command` key in your YAML file. For example, if you use command line arguments to configure some aspects of training, you can include those arguments in the `command` key. Here is an example YAML configuration file:

```yaml
program: train.py
method: grid
parameters:
  batch_size:
    value: 8
  lr:
    value: 0.0001
command:
  - ${env}
  - python
  - ${program}
  - "-b"
  - your-training-config
  - ${args}
```

In this example, `${env}` and `${args}` are variables that expand to environment variables and to the command line arguments specified in the sweep configuration file, respectively. Note that if you have extra arguments that you don't want to specify with `argparse`, you can use `argparse.ArgumentParser().parse_known_args()` instead. For more information, see the "How do I use custom CLI commands with sweeps?" section of the wandb documentation at https://docs.wandb.ai/guides/sweeps/faq.
['https://docs.wandb.ai/guides/sweeps/add-w-and-b-to-your-code', 'https://docs.wandb.ai/guides/sweeps/faq', 'https://docs.wandb.ai/guides/sweeps/define-sweep-configuration', 'https://docs.wandb.ai/guides/launch/launch-jobs']
[0.26089323, 0.28160694, 0.2905783, 0.29943165]
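The `parse_known_args()` pattern mentioned in the answer lets a training script tolerate extra flags a sweep agent injects without declaring every one of them. A minimal sketch (the flag names here are illustrative, not from the sweep config above):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("-b", "--batch-size", type=int, default=8)

# parse_known_args() returns (known, unknown) instead of erroring on
# arguments the parser does not recognize, so sweep-injected flags
# such as --lr simply pass through in the `unknown` list
args, unknown = parser.parse_known_args(["-b", "16", "--lr", "0.0001"])

print(args.batch_size)  # 16
print(unknown)          # ['--lr', '0.0001']
```

By contrast, `parse_args()` would exit with an error on the unrecognized `--lr` flag.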
What is the basic structure for logging metrics to wandb within a training loop?
```python
for epoch in range(NUM_EPOCHS):
    for input, ground_truth in data:
        prediction = model(input)
        loss = loss_fn(prediction, ground_truth)
        metrics = {"loss": loss}
        wandb.log(metrics)
```
Here is an example of the basic structure for logging metrics to wandb within a training loop:

```python
import wandb

# Initialize W&B
wandb.init(project="my-project")

# Initialize your model and data

# Train your model in a loop
for epoch in range(num_epochs):
    for input, target in data:
        # Forward pass
        output = model(input)

        # Calculate loss
        loss = loss_fn(output, target)

        # Update model parameters
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        # Log metrics inside your training loop to visualize model performance
        wandb.log({"loss": loss})

# Optional: log an artifact to W&B
wandb.log_artifact(model)
```

This structure initializes wandb and logs metrics inside the training loop to visualize model performance. You can also log an artifact to W&B at the end of the loop.
['https://docs.wandb.ai/guides/integrations/add-wandb-to-any-library', 'https://docs.wandb.ai/guides/track/launch', 'https://docs.wandb.ai/guides/integrations/tensorflow', 'https://docs.wandb.ai/guides/integrations/huggingface']
[0.21295944, 0.22214356, 0.2480681, 0.24943809]
What is the process for embedding media into a report using the Weights & Biases SDK?
```python
import wandb
import wandb.apis.reports as wr

report = wr.Report(project='report-editing')
report.blocks = [
    wr.Video(url="https://www.youtube.com/embed/6riDJMI-Y8U"),
    wr.Twitter(
        embed_html='<blockquote class="twitter-tweet"><p lang="en" dir="ltr">The voice of an angel, truly. <a href="https://twitter.com/hashtag/MassEffect?src=hash&ref_src=twsrc%5Etfw">#MassEffect</a> <a href="https://t.co/nMev97Uw7F">pic.twitter.com/nMev97Uw7F</a></p>— Mass Effect (@masseffect) <a href="https://twitter.com/masseffect/status/1428748886655569924?ref_src=twsrc%5Etfw">August 20, 2021</a></blockquote>\n'
    ),
]
report.save()
```
To embed media into a report using the Weights & Biases SDK, you can log images, videos, audio, and more directly from NumPy arrays, PIL images, or from the filesystem. Here is an example of how to log images to track inputs, outputs, filter weights, activations, and more:

```python
import numpy as np
from PIL import Image
import wandb

# Log images directly from numpy arrays
image_array = np.zeros((224, 224, 3), dtype=np.uint8)
images = wandb.Image(image_array, caption="My image caption")
wandb.log({"examples": images})

# Provide PIL images directly
pil_image = Image.new(mode='RGB', size=(256, 256), color=0)
wandb.log({"examples": wandb.Image(pil_image)})

# Log images from a filepath
wandb.log({"examples": wandb.Image("path/to/image.png")})
```

You can also log videos with `wandb.Video()` and audio with `wandb.Audio()`, as well as other media types. Once logged, the artifacts can be found in the Artifacts section of Weights & Biases and can be referenced with aliases, generated automatically or specified manually when logging. For more information, see the documentation on [logging media and objects](https://docs.wandb.ai/guides/track/log/media).
['https://docs.wandb.ai/guides/track/log/media', 'https://docs.wandb.ai/guides/artifacts/quickstart', 'https://docs.wandb.ai/guides/integrations/add-wandb-to-any-library', 'https://docs.wandb.ai/guides/track/log/media']
[0.24875498, 0.27283782, 0.2777209, 0.28682038]
Table columns: query, orig_response, orig_document, response, documents, scores, retrieval_match, string_distance, model_score
https://wandb.ai/wandbot/wandbbot/reports/Weave-eval_results-23-04-07-23-52-13---Vmlldzo0MDA0NDc2