Morg's workspace: Runs (showing 1–20 of 31)

| Tags  | Runtime        | User   | State    |
| :---- | :------------- | :----- | :------- |
|       | 56m 2s         | ayut   | Crashed  |
|       | 4h 9m 2s       | ayut   | Finished |
|       | 1h 18m         | morgan | Finished |
| Final | 10d 22h 25m 7s | morg   | Finished |
|       | 12m 43s        | morg   | Crashed  |
|       | 2h 53m 9s      | morg   | Crashed  |
|       | 5h 37m 5s      | morg   | Crashed  |
|       | 42m 33s        | morg   | Finished |
|       | 3m 55s         | morg   | Crashed  |
|       | 3m 30s         | morg   | Finished |
|       | 10s            | morg   | Crashed  |
|       | 18s            | morg   | Crashed  |
|       | 10s            | morg   | Crashed  |
|       | 12s            | morg   | Crashed  |
|       | 7m 37s         | morg   | Finished |
|       | 5m 50s         | morg   | Crashed  |
|       | 1h 10m 1s      | morg   | Crashed  |
|       | 14h 15m 47s    | morg   | Crashed  |
|       | 1h 26m 13s     | morg   | Crashed  |
|       | 6m             | morg   | Crashed  |

runs.summary["logs/qa_with_eval_pt1-2-3"] (rows 1–5 of 7203)
Row 1
request_timestamp: 2023-03-01 00:59:14
request_id: s17gs93ou5jlaxjm
question: Can one run use another run's output Artifact as input?
question_id: 3280
wandbot_answer: Yes, you can use another run's output Artifact as input. Use the `use_artifact` method to indicate the artifact you want to use; it returns an artifact object. Here is an example of how to use another run's output Artifact as input:

```python
import wandb

# Create a new run
run = wandb.init(project="my-project", job_type="my-job")

# Use an artifact from another run as input
artifact = run.use_artifact('other-run/artifact:latest')

# Download the contents of the artifact
artifact_dir = artifact.download()
```

For more information, see the documentation on [Download and Use Artifacts](https://docs.wandb.ai/guides/artifacts/download-and-use-an-artifact).
retrived_source_documents:
{"source_doc_0": {"page_content": "description: Download and use Artifacts from multiple projects.\n\nDownload and use artifacts\n\nDownload and use an artifact that is already stored on the Weights & Biases server or construct an artifact object and pass it in to be deduplicated as necessary.\n\n:::note\nTeam members with view-only seats cannot download artifacts.\n:::\n\nDownload and use an artifact stored on Weights & Biases\n\nDownload and use an artifact that is stored in Weights & Biases either inside or outside of a W&B Run. Use the Public API (wandb.Api) to export (or update data) already saved in Weights & Biases. For more information, see the Weights & Biases Public API Reference guide.\n\nimport Tabs from '@theme/Tabs';\nimport TabItem from '@theme/TabItem';\n\nFirst, import the W&B Python SDK. Next, create a W&B Run:\n\n```python\nimport wandb\n\nrun = wandb.init(project=\"\", job_type=\"\")\n```\n\nIndicate the artifact you want to use with the use_artifact method. This returns a run object. In the proceeding code snippet we specify an artifact called 'bike-dataset' with alias 'latest':\n\npython\nartifact = run.use_artifact('bike-dataset:latest')\n\nUse the object returned to download all the contents of the artifact:\n\npython\ndatadir = artifact.download()\n\nYou can optionally pass a path to the root parameter to download the contents of the artifact to a specific directory. For more information, see the Python SDK Reference Guide.\n\nUse the get_path method to download only subset of files:\n\npython\npath = artifact.get_path(name)\n\nThis fetches only the file at the path name. It returns an Entry object with the following methods:\n\nEntry.download: Downloads file from the artifact at path name\n\nEntry.ref: If the entry was stored as a reference using add_reference, returns the URI\n\nReferences that have schemes that Weights & Biases knows how to handle can be downloaded just like artifact files. For more information, see Track external files.\n\nFirst, import the Weights & Biases SDK. Next, create an artifact from the Public API Class. Provide the entity, project, artifact, and alias associated with that artifact:\n\n```python\nimport wandb\n\napi = wandb.Api()\nartifact = api.artifact('entity/project/artifact:alias')\n```\n\nUse the object returned to download the contents of the artifact:\n\npython\nartifact.download()\n\nYou can optionally pass a path the root parameter to download the contents of the artifact to a specific directory. For more information, see the API Reference Guide.\n\nUse the wandb artifact get command to download an artifact from the Weights & Biases server.\n\n$ wandb artifact get project/artifact:alias --root mnist/\n\nUse an artifact from a different project\n\nSpecify the name of artifact along with its project name to reference an artifact. You can also reference artifacts across entities by specifying the name of the artifact with its entity name.\n\nThe following code example demonstrates how to query an artifact from another project as input to our current W&B run.\n\n```python\nimport wandb\n\nrun = wandb.init(project=\"\", job_type=\"\")\n\nQuery W&B for an artifact from another project and mark it\n\nas an input to this run.\n\nartifact = run.use_artifact('my-project/artifact:alias')\n\nUse an artifact from another entity and mark it as an input\n\nto this run.\n\nartifact = run.use_artifact('my-entity/my-project/artifact:alias')\n```\n\nConstruct and use an artifact simultaneously\n\nSimultaneously construct and use an artifact. 
Create an artifact object and pass it to use_artifact. This will create an artifact in Weights & Biases if it does not exist yet. The use_artifact API is idempotent, so you can", "metadata": "https://docs.wandb.ai/guides/artifacts/download-and-use-an-artifact", "lookup_index": 0, "lookup_str": ""}, "source_doc_1": {"page_content": "description: Artifacts quickstart shows how to create, track, and use a dataset artifact with W&B.\n\nQuickstart\n\nThe proceeding quick start demonstrates how to create, track, and use a dataset artifact. Ensure you have a Weights & Biases account before you begin.\n\nThe following procedure lists how to construct and use an artifact. Steps 1 and 2 are not unique to W&B Artifacts.\n\nLog into Weights & Biases.\n\nInitialize a Run.\n\nCreate an artifact object.\n\nAdd the dataset to the artifact.\n\nLog the dataset.\n\nDownload and use the artifact.\n\nLog into Weights & Biases\n\nImport the Weights & Biases library and log in to W&B. You will need to sign up for a free W&B account if you have not done so already.\n\n```python\nimport wandb\n\nwandb.login()\n```\n\nInitialize a run\n\nUse the wandb.init() API to generate a background process to sync and log data as a W&B Run. Provide a project name and a job type:\n\n```python\n\nCreate a W&B Run. Here we specify 'dataset' as the job type since this example\n\nshows how to create a dataset artifact.\n\nrun = wandb.init(project=\"artifacts-example\", job_type='upload-dataset')\n```\n\nCreate an artifact object\n\nCreate an artifact object with the wandb.Artifact() API. Provide a name for the artifact and a description of the file type for the name and type parameters, respectively.\n\nFor example, the following code snippet demonstrates how to create an artifact called \u0098bicycle-dataset' with a \u0098dataset' label:\n\npython\nartifact = wandb.Artifact(name='bicycle-dataset', type='dataset')\n\nFor more information about how to construct an artifact, see Construct artifacts.\n\nAdd the dataset to the artifact\n\nAdd a file to the artifact. Common file types include models and datasets. The following example adds a dataset named dataset.h5 that is saved locally on our machine to the artifact:\n\n```python\n\nAdd a file to the artifact's contents\n\nartifact.add_file(local_path='dataset.h5')\n```\n\nReplace the filename dataset.h5 in the preceding code snippet with the path to the file you want to add to the artifact.\n\nLog the dataset\n\nUse the W&B run objects log_artifact() method to both save your artifact version and declare the artifact as an output of the run.\n\n```python\n\nSave the artifact version to W&B and mark it as the output of this run\n\nrun.log_artifact(artifact)\n```\n\nA 'latest' alias is created by default when you log an artifact. For more information about artifact aliases and versions, see Create a custom alias and Create new artifact versions, respectively.\n\nDownload and use the artifact\n\nThe following code example demonstrates the steps you can take to use an artifact you have logged and saved to the Weights & Biases servers.\n\nFirst, initialize a new run object with wandb.init().\n\nSecond, use the run objects use_artifact() method to tell Weights & Biases what artifact to use. This returns an artifact object.\n\nThird, use the artifacts download() method to download the contents of the artifact.\n\n```python\n\nCreate a W&B Run. 
Here we specify 'training' for 'type' because\n\nwe will use this run to track training.\n\nrun = wandb.init(project=\"artifacts-example\", job_type='training')\n\nQuery W&B for an artifact and mark it as input to this run\n\nartifact = run.use_artifact('bicycle-dataset:latest')\n\nDownload the artifact's contents\n\nartifact_dir = artifact.download()\n```\n\nAlternatively, you can use the Public API (wandb.Api) to export (or update data) data already saved in a Weights & Biases outside of a Run.", "metadata": "https://docs.wandb.ai/guides/artifacts/quickstart", "lookup_index": 0, "lookup_str": ""}, "source_doc_2": {"page_content": "For information on how to create a custom alias, see Create a Custom Alias\n\nYou can log output Artifacts at any frequency (for example, every epoch, every 500 steps and so on) and are automatically versioned.\n\nLog And Track Pre-trained Models Or Datasets\n\nYou can log artifacts that are used as inputs to your training such as pre-trained models or datasets. The following snippet demonstrates how to log an Artifact and add it as an input to the ongoing Run as shown in the graph above.\n\npython\nartifact_input_data = wandb.Artifact(name=\u009dflowers\u009d, type=\u009ddataset\u009d)\nartifact_input_data.add_file(\u009cflowers.npy\u009d)\nwandb.use_artifact(artifact_input_data)\n\nDownload A W&B Artifact\n\nYou re-use an Artifact (dataset, model\u2026) and wandb will download a copy locally (and cache it):\n\npython\nartifact = wandb.run.use_artifact(\u009cuser/project/artifact:latest\u009d)\nlocal_path = artifact.download(\u009c./tmp\u009d)\n\nArtifacts can be found in the Artifacts section of W&B and can be referenced with aliases generated automatically (\u009clatest\u009d, \u009cv2\u009d, \u009cv3\u009d) or manually when logging (\u009cbest_accuracy\u009d\u2026).\n\nTo download an Artifact without creating a wandb run (through wandb.init), for example in distributed environments or for simple inference, you can instead reference the artifact with the wandb API:\n\npython\nartifact = wandb.Api().artifact(\u009cuser/project/artifact:latest\u009d)\nlocal_path = artifact.download()\n\nFor more information, see Download and Use Artifacts.\n\nHyper-parameter Tuning\n\nIf your library would like to leverage W&B hyper-parameter tuning, W&B Sweeps can also be added to your library\n\nAdvanced Integrations\n\nYou can also see what an advanced W&B integrations look like in the following integrations. Note most integrations will not be as complex as these:\n\nHugging Face Transformers WandbCallback\n\nPyTorch Lightning WandbLogger", "metadata": "https://docs.wandb.ai/guides/integrations/add-wandb-to-any-library", "lookup_index": 0, "lookup_str": ""}, "source_doc_3": {"page_content": "4) Visualize your results\n\nOnce you have logged your training results you can explore your results dynamically in the W&B Dashboard. It's easy to compare across dozens of runs at once, zoom in on interesting findings, and coax insights out of complex data with flexible, interactive visualizations.\n\nHighlighted Articles\n\nBelow are 6 Transformers and W&B related articles you might enjoy\n\nAdvanced features\n\nTurn on model versioning\n\nWeights & Biases' Artifacts, you can store up to 100GB of models and datasets. 
Logging your Hugging Face model to W&B Artifacts can be done by setting a W&B environment variable called\n\nsave_steps in the\n\nTrainingArguments.\n\nbash\nWANDB_LOG_MODEL='end'\n\npython\n%env WANDB_LOG_MODEL='end'\n\nrun_name in your\n\nAny Trainer you initialize from now on will upload models to your W&B project. Your model file will be viewable through the W&B Artifacts UI. See the Weights & Biases' Artifacts guide for more about how to use Artifacts for model and dataset versioning.\n\nHow do I save the best model?\n\nIf load_best_model_at_end=True is passed to Trainer, then W&B will save the best performing model to Artifacts.\n\nLoading a saved model\n\nIf you saved your model to W&B Artifacts with WANDB_LOG_MODEL, you can download your model weights for additional training or to run inference. You just load them back into the same Hugging Face architecture that you used before.\n\n```python\n\nCreate a new run\n\nwith wandb.init(project=\"amazon_sentiment_analysis\") as run:\n\n# Connect an Artifact to the run\n my_model_name = \"model-bert-base-high-lr:latest\"\n my_model_artifact = run.use_artifact(my_model_name)\n\n# Download model weights to a folder and return the path\n model_dir = my_model_artifact.download()\n\n# Load your Hugging Face model from that folder\n # using the same model class\n model = AutoModelForSequenceClassification.from_pretrained(\n model_dir, num_labels=num_labels)\n\n# Do additional training, or run inference\n```\n\nResume training from a checkpoint\n\n```python\nlast_run_id = \"xxxxxxxx\" # fetch the run_id from your wandb workspace\n\nresume the wandb run from the run_id\n\nwith wandb.init(\n project=os.environ[\"WANDB_PROJECT\"],\n id=last_run_id,\n resume=\"must\",) as run:\n\n# Connect an Artifact to the run\n my_checkpoint_name = f\"checkpoint-{last_run_id}:latest\"\n my_checkpoint_artifact = run.use_artifact(my_model_name)\n\n# Download checkpoint to a folder and return the path\n checkpoint_dir = my_checkpoint_artifact.download()\n\n# reinitialize your model and trainer\n model = AutoModelForSequenceClassification.from_pretrained(\n , num_labels=num_labels)\n # your awesome training arguments here.\n training_args = TrainingArguments(...)\n\ntrainer = Trainer(\n model=model,\n args=training_args,\n ...)\n\n# make sure use the checkpoint dir to resume training from the checkpoint\n trainer.train(resume_from_checkpoint=checkpoint_dir)", "metadata": "https://docs.wandb.ai/guides/integrations/huggingface", "lookup_index": 0, "lookup_str": ""}, "source_doc_4": {"page_content": "In the following cell, start a new W&B run and using the `use_artifact` function, we tell W&B that the given artifact is being used as input to this run.', '', \"**Note:** Usage of Weights & Biases for training is optional, in case you don't wish to use it, simply uncheck `use_wandb`.\"]'\n\n 'code' cell: '['wandb_project = \"intro_to_pyg\" #@param {type:\"string\"}', 'wandb_run_name = \"upload_and_analyze_dataset\" #@param {type:\"string\"}', '', '# Initialize W&B run for training', 'if use_wandb:', ' wandb.init(project=\"intro_to_pyg\")', ' wandb.use_artifact(\"manan-goel/intro_to_pyg/MUTAG:v0\")']'", "metadata": "https://github.com/wandb/examples/blob/master/colabs/pyg/Graph_Classification_with_PyG_and_W&B.ipynb", "lookup_index": 0, "lookup_str": ""}}
answer_elapsed_time_s: 35.045
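The answer in Row 1 covers an artifact produced by another run in the same project; the retrieved documentation also describes referencing artifacts across projects and entities. A minimal sketch of that variant, with placeholder entity, project, and artifact names:

```python
import wandb

# Placeholder project/entity/artifact names, purely for illustration.
run = wandb.init(project="my-project", job_type="analysis")

# Artifact logged by a run in another project of the same entity.
artifact = run.use_artifact("other-project/bike-dataset:latest")

# Artifact logged under a different entity.
artifact = run.use_artifact("other-entity/other-project/bike-dataset:latest")

# Download the referenced version locally.
artifact_dir = artifact.download()

run.finish()
```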
Row 2
request_timestamp: 2023-03-01 01:02:05
request_id: 699plqtuu28raj4g
question: How can you specify the project you want to upload to using wandb sync?
question_id: 8227
wandbot_answer: To specify the project you want to upload to using wandb sync, you can use the `-p` or `--project` option followed by the project name. For example, `wandb sync -p my_project` will upload the specified folder to the `my_project` project. You can find more information about the `wandb sync` command and its options in the documentation [here](https://docs.wandb.ai/ref/cli/wandb-sync).
retrived_source_documents:
{"source_doc_0": {"page_content": "Here's a sample terminal output:\n\nAnd once you're ready, just run a sync command to send that folder to the cloud.\n\npython\nwandb sync wandb/dryrun-folder-name\n\nWhat is the difference between wandb.init modes?\n\nModes can be \"online\", \"offline\" or \"disabled\", and default to online.\n\nonline(default): In this mode, the client sends data to the wandb server.\n\noffline: In this mode, instead of sending data to the wandb server, the client will store data on your local machine which can be later synced with the wandb sync command.\n\ndisabled: In this mode, the client returns mocked objects and prevents all network communication. The client will essentially act like a no-op. In other words, all logging is entirely disabled. However, stubs out of all the API methods are still callable. This is usually used in tests.\n\nMy run's state is \"crashed\" on the UI but is still running on my machine. What do I do to get my data back?\n\nYou most likely lost connection to your machine while training. You can recover your data by running wandb sync [PATH_TO_RUN]. The path to your run will be a folder in your wandb directory corresponding to the Run ID of the run in progress.\n\nLaunchError: Permission denied\n\nIf you're getting the error message Launch Error: Permission denied, you don't have permissions to log to the project you're trying to send runs to. This might be for a few different reasons.\n\nYou aren't logged in on this machine. Run wandb login on the command line.\n\nYou've set an entity that doesn't exist. \"Entity\" should be your username or the name of an existing team. If you need to create a team, go to our Subscriptions page.\n\nYou don't have project permissions. Ask the creator of the project to set the privacy to Open so you can log runs to this project.\n\nDoes W&B uses the multiprocessing library?\n\nYes, W&B uses the multiprocessing library. If you see an error message such as:\n\nAn attempt has been made to start a new process before the current process \nhas finished its bootstrapping phase.\n\nThis might mean that you might need to add an entry point protection if name == main. Note that you would only need to add this entry point protection in case you're trying to run W&B directly from the script.", "metadata": "https://docs.wandb.ai/guides/track/tracking-faq", "lookup_index": 0, "lookup_str": ""}, "source_doc_1": {"page_content": "wandb sync\n\nUsage\n\nwandb sync [OPTIONS] [PATH]...\n\nSummary\n\nUpload an offline training directory to W&B\n\nOptions\n\n| Option | Description |\n| :--- | :--- |\n| --id | The run you want to upload to. |\n| -p, --project | The project you want to upload to. |\n| -e, --entity | The entity to scope to. |\n| --sync-tensorboard / --no-sync-tensorboard | Stream tfevent files to wandb. |\n| --include-globs | Comma seperated list of globs to include. |\n| --exclude-globs | Comma seperated list of globs to exclude. |\n| --include-online / --no-include-online | Include online runs |\n| --include-offline / --no-include-offline | Include offline runs |\n| --include-synced / --no-include-synced | Include synced runs |\n| --mark-synced / --no-mark-synced | Mark runs as synced |\n| --sync-all | Sync all runs |\n| --clean | Delete synced runs |\n| --clean-old-hours | Delete runs created before this many hours. To be used alongside --clean flag. |\n| --clean-force | Clean without confirmation prompt. |\n| --show | Number of runs to show |\n| --append | Append run |\n| --help | Show this message and exit. 
|", "metadata": "https://docs.wandb.ai/ref/cli/wandb-sync", "lookup_index": 0, "lookup_str": ""}, "source_doc_2": {"page_content": "Can I just set the run name to the run ID?\n\nIf you'd like to overwrite the run name (like snowy-owl-10) with the run ID (like qvlp96vk) you can use this snippet:\n\npython\nimport wandb\nwandb.init()\nwandb.run.name = wandb.run.id\nwandb.run.save()\n\nI didn't name my run. Where is the run name coming from?\n\nIf you do not explicitly name your run, a random run name will be assigned to the run to help identify the run in the UI. For instance, random run names will look like \"pleasant-flower-4\" or \"misunderstood-glade-2\".\n\nHow can I save the git commit associated with my run?\n\nWhen wandb.init is called in your script, we automatically look for git information to save, including a link to a remote repo and the SHA of the latest commit. The git information should show up on your run page. If you aren't seeing it appear there, make sure that your shell's current working directory when executing your script is located in a folder managed by git.\n\nThe git commit and command used to run the experiment are visible to you but are hidden to external users, so if you have a public project, these details will remain private.\n\nIs it possible to save metrics offline and sync them to W&B later?\n\nBy default, wandb.init starts a process that syncs metrics in real time to our cloud hosted app. If your machine is offline, you don't have internet access, or you just want to hold off on the upload, here's how to run wandb in offline mode and sync later.\n\nYou will need to set two environment variables.\n\nWANDB_API_KEY=$KEY, where $KEY is the API Key from your settings page\n\nWANDB_MODE=\"offline\"\n\nAnd here's a sample of what this would look like in your script:\n\n```python\nimport wandb\nimport os\n\nos.environ[\"WANDB_API_KEY\"] = YOUR_KEY_HERE\nos.environ[\"WANDB_MODE\"] = \"offline\"\n\nconfig = {\n \"dataset\": \"CIFAR10\",\n \"machine\": \"offline cluster\",\n \"model\": \"CNN\",\n \"learning_rate\": 0.01,\n \"batch_size\": 128,\n}\n\nwandb.init(project=\"offline-demo\")\n\nfor i in range(100):\n wandb.log({\"accuracy\": i})", "metadata": "https://docs.wandb.ai/guides/track/tracking-faq", "lookup_index": 0, "lookup_str": ""}, "source_doc_3": {"page_content": "import Tabs from '@theme/Tabs';\nimport TabItem from '@theme/TabItem';\n\nSetup\n\nHow can I configure the name of the run in my training code?\n\nAt the top of your training script when you call wandb.init, pass in an experiment name, like this: wandb.init(name=\"my_awesome_run\").\n\nCan I run wandb offline?\n\nIf you're training on an offline machine and want to upload your results to our servers afterwards, we have a feature for you!\n\nSet the environment variable WANDB_MODE=offline to save the metrics locally, no internet required.\n\nWhen you're ready, run wandb init in your directory to set the project name.\n\nRun wandb sync YOUR_RUN_DIRECTORY to push the metrics to our cloud service and see your results in our hosted web app.\n\nYou can check via API whether your run is offline by using run.settings._offline or run.settings.mode after your wandb.init().\n\nSome use-cases where you can use wandb sync\n\nIf you don't have internet.\n\nIf you need to fully disable things.\n\nTo sync your run later due to any reason. 
For instance: if you want to avoid using resources on a training machine.\n\nDoes this only work for Python?\n\nCurrently, the library only works with Python 2.7+ & 3.6+ projects. The architecture mentioned above should enable us to integrate with other languages easily. If you have a need for monitoring other languages, send us a note at contact@wandb.com.\n\nIs there an anaconda package?\n\nYes! You can either install with pip or with conda. For the latter, you'll need to get the package from the conda-forge channel.\n\n```bash\n\nCreate a conda env\n\nconda create -n wandb-env python=3.8 anaconda\n\nActivate created env\n\nconda activate wandb-env\n\ninstall wandb with pip in this conda env\n\npip install wandb\n```\n\nconda activate myenv\nconda install wandb --channel conda-forge\n\nIf you run into issues with this install, please let us know. This Anaconda doc on managing packages has some helpful guidance.\n\nHow do I install the wandb Python library in environments without gcc?\n\nIf you try to install wandb and see this error:\n\nunable to execute 'gcc': No such file or directory\nerror: command 'gcc' failed with exit status 1\n\nYou can install psutil directly from a pre-built wheel. Find your Python version and OS here: https://pywharf.github.io/pywharf-pkg-repo/psutil\n\nFor example, to install psutil on Python 3.8 in Linux:\n\nbash\nWHEEL_URL=https://github.com/pywharf/pywharf-pkg-repo/releases/download/psutil-5.7.0-cp38-cp38-manylinux2010_x86_64.whl/psutil-5.7.0-cp38-cp38-manylinux2010_x86_64.whl#sha256=adc36dabdff0b9a4c84821ef5ce45848f30b8a01a1d5806316e068b5fd669c6d\npip install $WHEEL_URL\n\nAfter psutil has been installed, you can install wandb with pip install wandb.\n\nDoes the W&B client support Python 2?\n\nThe W&B client library supported both Python 2.7 and Python 3 through version 0.10. Due to the Python 2 end of life, support for Python 2.7 was discontinued as of version 0.11. Users who runpip install --upgrade wandb on a Python 2.7 system will get new releases of the 0.", "metadata": "https://docs.wandb.ai/guides/technical-faq/setup", "lookup_index": 0, "lookup_str": ""}, "source_doc_4": {"page_content": "Next steps: Visualize your experiment\n\nUse the W&B Dashboard as a central place to organize and visualize results from your machine learning models. With just a few clicks, construct rich, interactive charts like parallel coordinates plots, parameter importance analyzes, and more.\n\nFor more information on how to view experiments and specific runs, see Visualize results from experiments.\n\nBest Practices\n\nThe following are some suggested guidelines to consider when you create experiments:\n\nConfig: Track hyperparameters, architecture, dataset, and anything else you'd like to use to reproduce your model. These will show up in columns\u2013 use config columns to group, sort, and filter runs dynamically in the app.\n\nProject: A project is a set of experiments you can compare together. Each project gets a dedicated dashboard page, and you can easily turn on and off different groups of runs to compare different model versions.\n\nNotes: A quick commit message to yourself. The note can be set from your script. You can edit notes at a later time on the Overview section of your project's dashboard on the W&B App.\n\nTags: Identify baseline runs and favorite runs. You can filter runs using tags. 
You can edit tags at a later time on the Overview section of your project's dashboard on the W&B App.\n\nThe following code snippet demonstrates how to define a W&B Experiment using the best practices listed above:\n\n```python\nimport wandb\n\nconfig = dict (\n learning_rate = 0.01,\n momentum = 0.2,\n architecture = \"CNN\",\n dataset_id = \"cats-0192\"\n)\n\nwandb.init(\n project=\"detect-cats\",\n notes=\"tweak baseline\",\n tags=[\"baseline\", \"paper1\"],\n config=config,\n)\n```\n\nFor more more information about available parameters when defining a W&B Experiment, see the wandb.init API docs in the API Reference Guide.", "metadata": "https://docs.wandb.ai/guides/track/launch", "lookup_index": 0, "lookup_str": ""}, "source_doc_5": {"page_content": "Default project privacy in your personal account: This is set to 'Private' by default. In other words, your projects will be private and can only be accessed by you.\n\nEnable code saving in your personal account: This is turned off by default. You can turn this on to save the main script or notebook to W&B.\n\n:::note\nThese settings can also be specified by passing arguments to \nwandb.init.\n:::\n\nFrequently Asked Questions\n\nHow can I delete projects?\n\nYou can delete your project by clicking the three dots on the right of the overview tab.\n\nIf the project is empty (i.e. it has no runs), you can delete it by clicking the dropdown menu in the top-right and selecting \"Delete project\".\n\nWhere are the privacy settings for projects? How can I make a project public or private?\n\nClick the lock in the navigation bar at the top of the page to change project privacy settings. You can edit who can view or submit runs to your project. These settings include all runs and reports in the project. If you'd like to share your results with just a few people, you can create a private team.\n\nHow do I reset my workspace?\n\nIf you see an error like the one below on your project page, here's how to reset your workspace.\"objconv: \"100000000000\" overflows the maximum values of a signed 64 bits integer\"\n\nAdd ?workspace=clear to the end of the URL and press enter. This should take you to a cleared version of your project page workspace.", "metadata": "https://docs.wandb.ai/guides/app/pages/project-page", "lookup_index": 0, "lookup_str": ""}}
answer_elapsed_time_s: 30.081
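To make the `-p`/`--project` flag in Row 2 concrete, here is a hedged sketch of the offline-then-sync workflow described in the retrieved FAQ and CLI reference; the project and entity names and the run directory are placeholders:

```python
import os
import wandb

# Log offline first; the run is written to a local ./wandb/offline-run-* directory
# instead of being streamed to the server.
os.environ["WANDB_MODE"] = "offline"

run = wandb.init(project="my-project")  # placeholder project name
for step in range(10):
    run.log({"loss": 1.0 / (step + 1)})
run.finish()

# Later, from a machine with network access, upload that directory to a
# specific project (and optionally entity):
#   wandb sync -p my-project -e my-entity ./wandb/offline-run-<timestamp>-<id>/
```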
Row 3
request_timestamp: 2023-03-01 01:09:13
request_id: 86ygtbcoa14234vk
question: What is the maximum number of authentication keys that can be passed with the wandb login command?
question_id: 8322
wandbot_answer: The documentation for the `wandb login` command does not specify a maximum number of authentication keys that can be passed. It only mentions that you can pass a key or keys as arguments to the command.
retrived_source_documents:
{"source_doc_0": {"page_content": "User Login\n\nThere are a few ways for your users to log in to W&B:\n\nLog into W&B with a bash command in a terminal\n\nbash\nwandb login $MY_WANDB_KEY\n \n \nIf they're in a Jupyter or Colab notebook, log into W&B like so\n\npython\nimport wandb\nwandb.login\n\nSet a W&B environment variable for the API key\n\nbash\nexport WANDB_API_KEY=$YOUR_API_KEY\n\nor\n\nos.environ['WANDB_API_KEY'] = \"abc123...\"\n\nIf a user is using wandb for the first time without following any of the steps mentioned above, they will automatically be prompted to login when your script calls wandb.init\n\nStarting A wandb Run\n\nA W&B Run is a unit of computation logged by Weights & Biases. Typically you associate a single W&B Run per training experiment.\n\nInitialize W&B and start a Run within your code with:\n\npython\nwandb.init()\n\nOptionally you can provide a name for their project, or let the user set it themselves with parameter such as wandb_project in your code along with the username or team name, such as wandb_entity , for the entity parameter:\n\npython\nwandb.init(project=wandb_project, entity=wandb_entity)\n\nWhere To Place wandb.init?\n\nYour library should create W&B Run as early as possible because any output in your console, including error messages, are logged as part of the W&B Run. This makes debugging easier.\n\nRun The Library With wandb As Optional\n\nIf you want to make wandb optional when your users use your library, you can either:\n\nDefine a wandb flag such as:\n\npython\ntrainer = my_trainer(..., use_wandb=True)\n\nbash\npython train.py ... --use-wandb\n\nOr, set wandb to be disabled in wandb.init\n\npython\nwandb.init(mode=\u009cdisabled\u009d)\n\nbash\nexport WANDB_MODE=disabled\nor\n\nbash\nwandb disabled\n\nOr, set wandb to be offline - note this will still run wandb, it just won't try and communicate back to Weights & Biases over the internet\n\nbash\nexport WANDB_MODE=offline\n\nor\n\npython\nos.environ['WANDB_MODE'] = 'offline'\n\nwandb offline\n\nDefining A wandb Run Config\n\nWith a wandb run config you can provide metadata about your model, dataset, and so on when you create a W&B Run. You can use this information to compare different experiments and quickly understand what are the main differences.\n\nTypical config parameters you can log include:\n\nModel name, version, architecture parameters etc\n\nDataset name, version, number of train/val examples etc\n\nTraining parameters such as learning rate, batch size, optimizer etc\n\nThe following code snippet shows how to log a config:\n\npython\nconfig = {\u009cbatch_size\u009d:32, \u2026}\nwandb.init(\u2026, config=config)\n\nUpdating The wandb config\n\nUse wandb.config.update to update the config. Updating your configuration dictionary is useful when parameters are obtained after the dictionary was defined, for example you might want to add a model's parameters after the model is instantiated.\n\npython\nwandb.config.update({\u009cmodel_parameters\u009d = 3500})\n\nFor more information on how to define a config file, see Configure Experiments with wandb.config\n\nLogging To Weights & Biases\n\nLog Metrics\n\nCreate a dictionary where the key value is the name of the metric. 
Pass this dictionary object to wandb.log:\n\npython\nfor epoch in range(NUM_EPOCHS):\n for input, ground_truth in data: \n prediction = model(input) \n loss = loss_fn(prediction, ground_truth", "metadata": "https://docs.wandb.ai/guides/integrations/add-wandb-to-any-library", "lookup_index": 0, "lookup_str": ""}, "source_doc_1": {"page_content": "wandb login\n\nUsage\n\nwandb login [OPTIONS] [KEY]...\n\nSummary\n\nLogin to Weights & Biases\n\nOptions\n\n| Option | Description |\n| :--- | :--- |\n| --cloud | Login to the cloud instead of local |\n| --host | Login to a specific instance of W&B |\n| --relogin | Force relogin if already logged in. |\n| --anonymously | Log in anonymously |\n| --help | Show this message and exit. |", "metadata": "https://docs.wandb.ai/ref/cli/wandb-login", "lookup_index": 0, "lookup_str": ""}, "source_doc_2": {"page_content": "Tools for collaboration: Use W&B to organize complex machine learning projects. It's easy to share a link to W&B, and you can use private teams to have everyone send results to a shared project. We also support collaboration via reports\u2013 add interactive visualizations and describe your work in markdown. This is a great way to keep a work log, share findings with your supervisor, or present findings to your lab.\n\nGet started with a free personal account \u00e2\u0086\u2019\n\nHow does wandb stream logs and writes to disk?\n\nW&B queues in memory but also write the events to disk asynchronously to handle failures and for the WANDB_MODE=offline case where you can sync the data after it's been logged.\n\nIn your terminal, you can see a path to the local run directory. This directory will contain a .wandb file that is the datastore above. If you're also logging images, we write them to media/images in that directory before uploading them to cloud storage.\n\nHow to get multiple charts with different selected runs?\n\nWith wandb reports the procedure is as follows:\n\nHave multiple panel grids.\n\nAdd filters to filter the run sets of each panel grid. This will help in selecting the runs that you want to portray in the respective panels.\n\nCreate the charts you want in the panel grids.\n\nHow is access to the API controlled?\n\nFor simplicity, W&B uses API keys for authorization when accessing the API. You can find your API keys in your settings. Your API key should be stored securely and never checked into version control. In addition to personal API keys, you can add Service Account users to your team.\n\nDoes W&B support SSO for SaaS?\n\nYes, W&B supports setting up Single Sign-On (SSO) for the SaaS offering via Auth0. W&B support SSO integration with any OIDC compliant identity provider(ex: Okta, AzureAD etc.). If you have an OIDC provider, please follow the steps below:\n\nCreate a Single Page Application (SPA) on your Identity Provider.\n\nSet grant_type to implicit flow.\n\nSet the callback URI to https://wandb.auth0.com/login/callback.\n\nWhat W&B needs?\n\nOnce you have the above setup, contact your customer success manager(CSM) and let us know the Client ID and Issuer URL associated with the application.\n\nWe'll then set up an Auth0 connection with the above details and enable SSO.\n\nWhat is a service account, and why is it useful?\n\nA service account is an API key that has permissions to write to your team, but is not associated with a particular user. Among other things, service accounts are useful for tracking automated jobs logged to wandb, like periodic retraining, nightly builds, and so on. 
If you'd like, you can associate a username with one of these machine-launched runs with the environment variable WANDB_USERNAME.\n\nYou can get the API key in your Team Settings page /teams/<your-team-name> where you invite new team members. Select service and click create to add a service account.\n\nHow can I rotate or revoke access?\n\nBoth personal and service account keys can be rotated or revoked. Simply create a new API Key or Service Account user and reconfigure your scripts to use the new key. Once all processes are reconfigured, you can remove the old API key from your profile or team.\n\nHow do I switch between accounts on the same machine?\n\nIf you have two W&B accounts working from the same machine, you'll need a nice way to switch between your different API keys. You can store both API keys in a file on your machine then add code like the following to your repos. This is to avoid checking your secret key into a source control system, which is potentially dangerous.\n\npython\nif os.path.exists(\"~/keys.json\"):\n os.environ[\"WANDB_API_KEY", "metadata": "https://docs.wandb.ai/guides/technical-faq/general", "lookup_index": 0, "lookup_str": ""}, "source_doc_3": {"page_content": " API key will be on the Authorize page.\n\nHow do I use W&B in an automated environment?\nIf you are training models in an automated environment where it's inconvenient to run shell commands, such as Google's CloudML, you should look at our guide to configuration with Environment Variables.\n\nDo you offer local, on-prem installs?\nYes, you can privately host W&B locally on your own machines or in a private cloud, try this quick tutorial notebook to see how. Note, to login to wandb local server you can set the host flag to the address of the local instance. ****", "metadata": "https://docs.wandb.ai/quickstart", "lookup_index": 0, "lookup_str": ""}, "source_doc_4": {"page_content": "Command Line Interface\n\nUsage\n\nwandb [OPTIONS] COMMAND [ARGS]...\n\nOptions\n\n| Option | Description |\n| :--- | :--- |\n| --version | Show the version and exit. |\n| --help | Show this message and exit. |\n\nCommands\n\n| Command | Description |\n| :--- | :--- |\n| agent | Run the W&B agent |\n| artifact | Commands for interacting with artifacts |\n| controller | Run the W&B local sweep controller |\n| disabled | Disable W&B. |\n| docker | Run your code in a docker container. |\n| docker-run | Wrap docker run and adds WANDB_API_KEY and WANDB_DOCKER... |\n| enabled | Enable W&B. |\n| init | Configure a directory with Weights & Biases |\n| launch | Launch or queue a job from a uri (Experimental). |\n| launch-agent | Run a W&B launch agent (Experimental). |\n| login | Login to Weights & Biases |\n| offline | Disable W&B sync |\n| online | Enable W&B sync |\n| pull | Pull files from Weights & Biases |\n| restore | Restore code, config and docker state for a run |\n| scheduler | Run a W&B launch sweep scheduler (Experimental) |\n| server | Commands for operating a local W&B server |\n| status | Show configuration settings |\n| sweep | Create a sweep |\n| sync | Upload an offline training directory to W&B |\n| verify | Verify your local instance |", "metadata": "https://docs.wandb.ai/ref/cli", "lookup_index": 0, "lookup_str": ""}}
answer_elapsed_time_s: 28.475
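For the question in Row 3: in typical usage a single API key is supplied. A minimal sketch of the equivalent login from Python, with a placeholder key value:

```python
import os
import wandb

# Either export the key before the script runs...
os.environ["WANDB_API_KEY"] = "<your-api-key>"  # placeholder; never hard-code real keys

# ...or pass it directly (equivalent to `wandb login <key>` on the CLI).
wandb.login(key=os.environ["WANDB_API_KEY"])
```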
Row 4
request_timestamp: 2023-03-01 01:16:56
request_id: 1xslj0xaxc3ylldv
question: Is Wandb Pull only used for downloading files from a project or are there other use cases?
question_id: 2683
wandbot_answer: Yes, `wandb pull` is primarily used for downloading files from a project. It allows you to pull files from Weights & Biases. However, there are other use cases for Wandb, such as logging and tracking pre-trained models or datasets, hyper-parameter tuning, and visualizing your results. You can also use Wandb to create and track artifacts, which can be used to store up to 100GB of models and datasets. Additionally, you can use Wandb to track external files, such as those stored on a mounted filesystem, and download them as reference artifacts.
retrived_source_documents:
{"source_doc_0": {"page_content": "For information on how to create a custom alias, see Create a Custom Alias\n\nYou can log output Artifacts at any frequency (for example, every epoch, every 500 steps and so on) and are automatically versioned.\n\nLog And Track Pre-trained Models Or Datasets\n\nYou can log artifacts that are used as inputs to your training such as pre-trained models or datasets. The following snippet demonstrates how to log an Artifact and add it as an input to the ongoing Run as shown in the graph above.\n\npython\nartifact_input_data = wandb.Artifact(name=\u009dflowers\u009d, type=\u009ddataset\u009d)\nartifact_input_data.add_file(\u009cflowers.npy\u009d)\nwandb.use_artifact(artifact_input_data)\n\nDownload A W&B Artifact\n\nYou re-use an Artifact (dataset, model\u2026) and wandb will download a copy locally (and cache it):\n\npython\nartifact = wandb.run.use_artifact(\u009cuser/project/artifact:latest\u009d)\nlocal_path = artifact.download(\u009c./tmp\u009d)\n\nArtifacts can be found in the Artifacts section of W&B and can be referenced with aliases generated automatically (\u009clatest\u009d, \u009cv2\u009d, \u009cv3\u009d) or manually when logging (\u009cbest_accuracy\u009d\u2026).\n\nTo download an Artifact without creating a wandb run (through wandb.init), for example in distributed environments or for simple inference, you can instead reference the artifact with the wandb API:\n\npython\nartifact = wandb.Api().artifact(\u009cuser/project/artifact:latest\u009d)\nlocal_path = artifact.download()\n\nFor more information, see Download and Use Artifacts.\n\nHyper-parameter Tuning\n\nIf your library would like to leverage W&B hyper-parameter tuning, W&B Sweeps can also be added to your library\n\nAdvanced Integrations\n\nYou can also see what an advanced W&B integrations look like in the following integrations. Note most integrations will not be as complex as these:\n\nHugging Face Transformers WandbCallback\n\nPyTorch Lightning WandbLogger", "metadata": "https://docs.wandb.ai/guides/integrations/add-wandb-to-any-library", "lookup_index": 0, "lookup_str": ""}, "source_doc_1": {"page_content": "description: Artifacts quickstart shows how to create, track, and use a dataset artifact with W&B.\n\nQuickstart\n\nThe proceeding quick start demonstrates how to create, track, and use a dataset artifact. Ensure you have a Weights & Biases account before you begin.\n\nThe following procedure lists how to construct and use an artifact. Steps 1 and 2 are not unique to W&B Artifacts.\n\nLog into Weights & Biases.\n\nInitialize a Run.\n\nCreate an artifact object.\n\nAdd the dataset to the artifact.\n\nLog the dataset.\n\nDownload and use the artifact.\n\nLog into Weights & Biases\n\nImport the Weights & Biases library and log in to W&B. You will need to sign up for a free W&B account if you have not done so already.\n\n```python\nimport wandb\n\nwandb.login()\n```\n\nInitialize a run\n\nUse the wandb.init() API to generate a background process to sync and log data as a W&B Run. Provide a project name and a job type:\n\n```python\n\nCreate a W&B Run. Here we specify 'dataset' as the job type since this example\n\nshows how to create a dataset artifact.\n\nrun = wandb.init(project=\"artifacts-example\", job_type='upload-dataset')\n```\n\nCreate an artifact object\n\nCreate an artifact object with the wandb.Artifact() API. 
Provide a name for the artifact and a description of the file type for the name and type parameters, respectively.\n\nFor example, the following code snippet demonstrates how to create an artifact called \u0098bicycle-dataset' with a \u0098dataset' label:\n\npython\nartifact = wandb.Artifact(name='bicycle-dataset', type='dataset')\n\nFor more information about how to construct an artifact, see Construct artifacts.\n\nAdd the dataset to the artifact\n\nAdd a file to the artifact. Common file types include models and datasets. The following example adds a dataset named dataset.h5 that is saved locally on our machine to the artifact:\n\n```python\n\nAdd a file to the artifact's contents\n\nartifact.add_file(local_path='dataset.h5')\n```\n\nReplace the filename dataset.h5 in the preceding code snippet with the path to the file you want to add to the artifact.\n\nLog the dataset\n\nUse the W&B run objects log_artifact() method to both save your artifact version and declare the artifact as an output of the run.\n\n```python\n\nSave the artifact version to W&B and mark it as the output of this run\n\nrun.log_artifact(artifact)\n```\n\nA 'latest' alias is created by default when you log an artifact. For more information about artifact aliases and versions, see Create a custom alias and Create new artifact versions, respectively.\n\nDownload and use the artifact\n\nThe following code example demonstrates the steps you can take to use an artifact you have logged and saved to the Weights & Biases servers.\n\nFirst, initialize a new run object with wandb.init().\n\nSecond, use the run objects use_artifact() method to tell Weights & Biases what artifact to use. This returns an artifact object.\n\nThird, use the artifacts download() method to download the contents of the artifact.\n\n```python\n\nCreate a W&B Run. Here we specify 'training' for 'type' because\n\nwe will use this run to track training.\n\nrun = wandb.init(project=\"artifacts-example\", job_type='training')\n\nQuery W&B for an artifact and mark it as input to this run\n\nartifact = run.use_artifact('bicycle-dataset:latest')\n\nDownload the artifact's contents\n\nartifact_dir = artifact.download()\n```\n\nAlternatively, you can use the Public API (wandb.Api) to export (or update data) data already saved in a Weights & Biases outside of a Run.", "metadata": "https://docs.wandb.ai/guides/artifacts/quickstart", "lookup_index": 0, "lookup_str": ""}, "source_doc_2": {"page_content": "wandb pull\n\nUsage\n\nwandb pull [OPTIONS] RUN\n\nSummary\n\nPull files from Weights & Biases\n\nOptions\n\n| Option | Description |\n| :--- | :--- |\n| -p, --project | The project you want to download. |\n| -e, --entity | The entity to scope the listing to. |\n| --help | Show this message and exit. |", "metadata": "https://docs.wandb.ai/ref/cli/wandb-pull", "lookup_index": 0, "lookup_str": ""}, "source_doc_3": {"page_content": "4) Visualize your results\n\nOnce you have logged your training results you can explore your results dynamically in the W&B Dashboard. It's easy to compare across dozens of runs at once, zoom in on interesting findings, and coax insights out of complex data with flexible, interactive visualizations.\n\nHighlighted Articles\n\nBelow are 6 Transformers and W&B related articles you might enjoy\n\nAdvanced features\n\nTurn on model versioning\n\nWeights & Biases' Artifacts, you can store up to 100GB of models and datasets. 
Logging your Hugging Face model to W&B Artifacts can be done by setting a W&B environment variable called\n\nsave_steps in the\n\nTrainingArguments.\n\nbash\nWANDB_LOG_MODEL='end'\n\npython\n%env WANDB_LOG_MODEL='end'\n\nrun_name in your\n\nAny Trainer you initialize from now on will upload models to your W&B project. Your model file will be viewable through the W&B Artifacts UI. See the Weights & Biases' Artifacts guide for more about how to use Artifacts for model and dataset versioning.\n\nHow do I save the best model?\n\nIf load_best_model_at_end=True is passed to Trainer, then W&B will save the best performing model to Artifacts.\n\nLoading a saved model\n\nIf you saved your model to W&B Artifacts with WANDB_LOG_MODEL, you can download your model weights for additional training or to run inference. You just load them back into the same Hugging Face architecture that you used before.\n\n```python\n\nCreate a new run\n\nwith wandb.init(project=\"amazon_sentiment_analysis\") as run:\n\n# Connect an Artifact to the run\n my_model_name = \"model-bert-base-high-lr:latest\"\n my_model_artifact = run.use_artifact(my_model_name)\n\n# Download model weights to a folder and return the path\n model_dir = my_model_artifact.download()\n\n# Load your Hugging Face model from that folder\n # using the same model class\n model = AutoModelForSequenceClassification.from_pretrained(\n model_dir, num_labels=num_labels)\n\n# Do additional training, or run inference\n```\n\nResume training from a checkpoint\n\n```python\nlast_run_id = \"xxxxxxxx\" # fetch the run_id from your wandb workspace\n\nresume the wandb run from the run_id\n\nwith wandb.init(\n project=os.environ[\"WANDB_PROJECT\"],\n id=last_run_id,\n resume=\"must\",) as run:\n\n# Connect an Artifact to the run\n my_checkpoint_name = f\"checkpoint-{last_run_id}:latest\"\n my_checkpoint_artifact = run.use_artifact(my_model_name)\n\n# Download checkpoint to a folder and return the path\n checkpoint_dir = my_checkpoint_artifact.download()\n\n# reinitialize your model and trainer\n model = AutoModelForSequenceClassification.from_pretrained(\n , num_labels=num_labels)\n # your awesome training arguments here.\n training_args = TrainingArguments(...)\n\ntrainer = Trainer(\n model=model,\n args=training_args,\n ...)\n\n# make sure use the checkpoint dir to resume training from the checkpoint\n trainer.train(resume_from_checkpoint=checkpoint_dir)", "metadata": "https://docs.wandb.ai/guides/integrations/huggingface", "lookup_index": 0, "lookup_str": ""}, "source_doc_4": {"page_content": "By default, W&B imposes a 10,000 file limit when adding a reference to a directory. You can adjust this limit by specifying max_objects= in calls to add_reference.\n\nNote the triple slash in the URL. The first component is the file:// prefix that denotes the use of filesystem references. The second is the path to our dataset, /mount/datasets/mnist/.\n\nThe resulting artifact mnist:latest looks and acts just like a regular artifact. The only difference is that the artifact only consists of metadata about the files, such as their sizes and MD5 checksums. The files themselves never leave your system.\n\nYou can interact with this artifact just as you would a normal artifact. In the UI, you can browse the contents of the reference artifact using the file browser, explore the full dependency graph, and scan through the versioned history of your artifact. However, the UI will not be able to render rich media such as images, audio, etc. 
as the data itself is not contained within the artifact.\n\nDownloading a reference artifact is simple:\n\n```python\nimport wandb\n\nrun = wandb.init()\nartifact = run.use_artifact('entity/project/mnist:latest', type='dataset')\nartifact_dir = artifact.download()\n```\n\nFor filesystem references, a download() operation copies the files from the referenced paths to construct the artifact directory. In the above example, the contents of /mount/datasets/mnist will be copied into the directory artifacts/mnist:v0/. If an artifact contains a reference to a file that was overwritten, then download() will throw an error as the artifact can no longer be reconstructed.\n\nPutting everything together, here's a simple workflow you can use to track a dataset under a mounted filesystem that feeds into a training job:\n\n```python\nimport wandb\n\nrun = wandb.init()\n\nartifact = wandb.Artifact('mnist', type='dataset')\nartifact.add_reference('file:///mount/datasets/mnist/')\n\nTrack the artifact and mark it as an input to\n\nthis run in one swoop. A new artifact version\n\nis only logged if the files under the directory\n\nchanged.\n\nrun.use_artifact(artifact)\n\nartifact_dir = artifact.download()\n\nPerform training here...\n\n```\n\nTo track models, we can log the model artifact after the training script writes the model files to the mount point:\n\n```python\nimport wandb\n\nrun = wandb.init()\n\nTraining here...\n\nwith open('/mount/cnn/my_model.h5') as f:\n # Output our model file.\n\nmodel_artifact = wandb.Artifact('cnn', type='model')\nmodel_artifact.add_reference('file:///mount/cnn/my_model.h5')\nrun.log_artifact(model_artifact)\n```", "metadata": "https://docs.wandb.ai/guides/artifacts/track-external-files", "lookup_index": 0, "lookup_str": ""}}
answer_elapsed_time_s: 42.099
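As a companion to the `wandb pull` answer in Row 4, a hedged sketch of fetching a run's files through the Public API; the entity, project, and run id are placeholders:

```python
import wandb

# CLI form from the retrieved reference: wandb pull -p my-project -e my-entity <run-id>
# A comparable download via the Public API:
api = wandb.Api()
run = api.run("my-entity/my-project/abc123")

for run_file in run.files():
    run_file.download(root="./pulled-files", replace=True)
```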
Row 5
request_timestamp: 2023-03-01 01:30:03
request_id: tuhnbmh2waj3rzfx
question: What kind of behavior can be altered in the Edit panel of a W&B visualization?
question_id: 6993
wandbot_answer: In the Edit panel of a W&B visualization, the dependent and independent values (x and y axis) can be altered to use custom expressions, and the behavior of the graph can be modified.
retrived_source_documents:
{"source_doc_0": {"page_content": "Customize the chart\n\nAdd a new custom chart to get started, then edit the query to select data from your visible runs. The query uses GraphQL to fetch data from the config, summary, and history fields in your runs.\n\nCustom visualizations\n\nSelect a Chart in the upper right corner to start with a default preset. Next, pick Chart fields to map the data you're pulling in from the query to the corresponding fields in your chart. Here's an example of selecting a metric to get from the query, then mapping that into the bar chart fields below.\n\nHow to edit Vega\n\nClick Edit at the top of the panel to go into Vega edit mode. Here you can define a Vega specification that creates an interactive chart in the UI. You can change any aspect of the chart, from the visual style (e.g. change the title, pick a different color scheme, show curves as a series of points instead of as connected lines) to the data itself (use a Vega transform to bin an array of values into a histogram, etc.). The panel preview will update interactively, so you can see the effect of your changes as you edit the Vega spec or query. The Vega documentation and tutorials are an excellent source of inspiration.\n\nField references\n\nTo pull data into your chart from W&B, add template strings of the form \"${field:<field-name>}\" anywhere in your Vega spec. This will create a dropdown in the Chart Fields area on the right side, which users can use to select a query result column to map into Vega.\n\nTo set a default value for a field, use this syntax: \"${field:<field-name>:<placeholder text>}\"\n\nSaving chart presets\n\nApply any changes to a specific visualization panel with the button at the bottom of the modal. Alternatively, you can save the Vega spec to use elsewhere in your project. To save the reusable chart definition, click Save as at the top of the Vega editor and give your preset a name.\n\nArticles and guides\n\nThe W&B Machine Learning Visualization IDE\n\nVisualizing NLP Attention Based Models\n\nVisualizing The Effect of Attention on Gradient Flow\n\nLogging arbitrary curves\n\nFrequently asked questions\n\nComing soon\n\nPolling: Auto-refresh of data in the chart\n\nSampling: Dynamically adjust the total number of points loaded into the panel for efficiency\n\nGotchas\n\nNot seeing the data you're expecting in the query as you're editing your chart? It might be because the column you're looking for is not logged in the runs you have selected. Save your chart and go back out to the runs table, and select the runs you'd like to visualize with the eye icon.\n\nHow to show a \"step slider\" in a custom chart?\n\nThis can be enabled on the \u009cOther settings\u009d page of the custom chart editor. If you change your query to use a historyTable instead of the summaryTable, you'll get an option to \u009cShow step selector\u009d in the custom chart editor. This gives you a slider that lets you select the step.\n\nHow to delete a custom chart preset?\n\nYou can do this by going into the custom chart editor. Then click on the currently selected chart type, this will open up a menu with all your presets. 
Hover the mouse on a preset you want to delete and then click on the Trash icon.\n\nCommon use cases\n\nCustomize bar plots with error bars\n\nShow model validation metrics which require custom x-y coordinates (like precision-recall curves)\n\nOverlay data distributions from two different models/experiments as histograms\n\nShow changes in a metric via snapshots at multiple points during training\n\nCreate a unique visualization not yet available in W&B (and hopefully share it with the world)", "metadata": "https://docs.wandb.ai/guides/app/features/custom-charts", "lookup_index": 0, "lookup_str": ""}, "source_doc_1": {"page_content": "description: Create centralized, interactive, visual dashboards for your ML experiments\n\nView results from experiments\n\nUse the Weights & Biases Dashboard as a central place to organize and visualize results from your machine learning models.\n\nPersistent, centralized projects\n\nAnywhere you train your models, whether on your local machine, your lab cluster, or spot instances in the cloud, we give you the same centralized dashboard. You don't need to spend your time copying outputs from your terminal into a spreadsheet or organizing TensorBoard files from different machines.\n\nAutomatic organization\n\nIf you hand off a project to a collaborator or take a vacation, W&B makes it easy to see all the models your team has already tried so you're not wasting hours re-running old experiments.\n\nPowerful, queryable tables\n\nCompare each training run and see what hyperparameters changed. Search, filter, sort, and group results from different models. It's easy to look over thousands of model versions and find the best performing models for different tasks.\n\nReproducible models\n\nWeights & Biases is good for experimentation, exploration, and reproducing models later. We capture not just the metrics, but also the hyperparameters and version of the code, and we can save your model checkpoints for you so your project is reproducible.\n\nFast, flexible integration\n\nAdd W&B to your project in 5 minutes. Install our free open-source Python package and add a couple of lines to your code, and every time you run your model you'll have nice logged metrics and records.\n\nTools for Collaboration\n\nUse W&B to organize complex machine learning projects. It's easy to share a link to W&B, and you can use private teams to have everyone sending results to a shared project. We also support collaboration via reports\u2013 add interactive visualizations and describe your work in markdown. This is a great way to keep a work log, share findings with your supervisor, or present findings to your lab.\n\nDashboard FAQ\n\nI accidentally deleted a panel in W&B, how do I undo it?\n\nTo undo a change in your workspace, click the \"undo\" button at the bottom of the page.\n\nHow to sort by more than one column in the runs table?\n\nTo sort by more than 1 column in your runs table, click on \"Sort\" and then \"Add another field\".", "metadata": "https://docs.wandb.ai/guides/track/app", "lookup_index": 0, "lookup_str": ""}, "source_doc_2": {"page_content": "description: Visualize the results of your Weights & Biases Sweeps with the W&B App UI.\n\nVisualize sweep results\n\nVisualize the results of your Weights & Biases Sweeps with the W&B App UI. Navigate to the W&B App UI at https://wandb.ai/home. Choose the project that you specified when you initialized a W&B Sweep. You will be redirected to your project workspace. Select the Sweep icon on the left panel (broom icon). 
From the Sweep UI, select the name of your Sweep from the list.\n\nBy default, Weights & Biases will automatically create a parallel coordinates plot, a parameter importance plot, and a scatter plot when you start a W&B Sweep job.\n\nParallel coordinates charts summarize the relationship between large numbers of hyperparameters and model metrics at a glance. For more information on parallel coordinates plots, see Parallel coordinates.\n\nThe scatter plot(left) compares the W&B Runs that were generated during the Sweep. For more information about scatter plots, see Scatter Plots.\n\nThe parameter importance plot(right) lists the hyperparameters that were the best predictors of, and highly correlated to desirable values of your metrics. For more information parameter importance plots, see Parameter Importance.\n\nYou can alter the dependent and independent values (x and y axis) that are automatically used. Within each panel there is a pencil icon called Edit panel. Choose Edit panel. A model will appear. Within the modal, you can alter the behavior of the graph.\n\nFor more information on all default W&B visualization options, see Panels. See the Data Visualization docs for information on how to create plots from W&B Runs that are not part of a W&B Sweep.", "metadata": "https://docs.wandb.ai/guides/sweeps/visualize-sweep-results", "lookup_index": 0, "lookup_str": ""}, "source_doc_3": {"page_content": "Expressions\n\nY Axis Expressions: Add calculated metrics to your graph. You can use any of the logged metrics as well as configuration values like hyperparameters to calculate custom lines.\n\nX Axis Expressions: Rescale the x-axis to use calculated values using custom expressions. Useful variables include**_step** for the default x-axis, and the syntax for referencing summary values is ${summary:value}\n\nVisualize average values on a plot\n\nIf you have several different experiments and you'd like to see the average of their values on a plot, you can use the Grouping feature in the table. Click \"Group\" above the run table and select \"All\" to show averaged values in your graphs.\n\nHere is what the graph looks like before averaging:\n\nHere I have grouped the lines to see the average value across runs.\n\nVisualize NaN value on a plot\n\nYou can also plot NaN values including PyTorch tensors on a line plot with wandb.log. For example:\n\npython\nwandb.log({'test': [..., float(\"nan\"), ...]})\n\nCompare two metrics on one chart\n\nClick on a run to go to the run page. Here's an example run from Stacey's Estuary project. The auto-generated charts show single metrics.\n\nClick the plus sign at the top right of the page, and select the Line Plot.\n\nIn the Y variables field, select a few metrics you'd like to compare. They'll show up together on the line graph.\n\nChanging the color of the line plots\n\nSometimes the default color of runs is not helpful for comparison. To help overcome this, wandb provides two instances with which one can manually change the colors.\n\nFrom the run table\n\nEach run is given a random color by default upon initialization.\n\nUpon clicking any of the colors, a color palette appears from which we can manually choose the color we want.\n\nFrom the chart legend settings\n\nOne can also change the color of the runs from the chart legend settings.\n\nVisualize on different x axes\n\nIf you'd like to see the absolute time that an experiment has taken, or see what day an experiment ran, you can switch the x axis. 
Here's an example of switching from steps to relative time and then to wall time.\n\nArea plots\n\nIn the line plot settings, in the advanced tab, click on different plot styles to get an area plot or a percentage area plot.\n\nZoom\n\nClick and drag a rectangle to zoom vertically and horizontally at the same time. This changes the x-axis and y-axis zoom.\n\nHide chart legend\n\nTurn off the legend in the line plot with this simple toggle:", "metadata": "https://docs.wandb.ai/guides/app/features/panels/line-plot", "lookup_index": 0, "lookup_str": ""}, "source_doc_4": {"page_content": "description: Tutorial of using the custom charts feature in the Weights & Biases UI\n\nCustom Charts Walkthrough\n\nTo go beyond the built-in charts in Weights & Biases, use the new Custom Charts feature to control the details of exactly what data you're loading in to a panel and how you visualize that data.\n\nOverview\n\nLog data to W&B\n\nCreate a query\n\nCustomize the chart\n\n1. Log data to W&B\n\nFirst, log data in your script. Use wandb.config for single points set at the beginning of training, like hyperparameters. Use wandb.log() for multiple points over time, and log custom 2D arrays with wandb.Table(). We recommend logging up to 10,000 data points per logged key.\n\n```python\n\nLogging a custom table of data\n\nmy_custom_data = [[x1, y1, z1], [x2, y2, z2]]\nwandb.log({\"custom_data_table\": wandb.Table(data=my_custom_data,\n columns = [\"x\", \"y\", \"z\"])})", "metadata": "https://docs.wandb.ai/guides/app/features/custom-charts/walkthrough", "lookup_index": 0, "lookup_str": ""}}
answer_elapsed_time_s: 51.06
Columns in logs/qa_with_eval_pt1-2-3 (only the first seven appear with values in this excerpt):
request_timestamp, request_id, question, question_id, wandbot_answer, retrived_source_documents, answer_elapsed_time_s, synth_grader_feedback, grader_justification, prompt_tokens, completion_tokens, total_tokens, answer_cost_usd, successful_requests, answer_system_prompt_version, eval_elapsed_time_s, eval_cost_usd, eval_total_tokens, eval_tokens_for_source_docs, eval_prompt_tokens, eval_completion_tokens, is_source_documents_truncated, wandb_run_id, wandbot_model, wandbot_temperature, hyde_llm_temperature, eval_model
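For completeness, a minimal sketch of how a table with the schema above could be logged so that it appears under `runs.summary["logs/qa_with_eval_pt1-2-3"]`; the project name and row values are placeholders, not the actual logging code used for this data:

```python
import wandb

# Column names mirror the eval table above; values here are dummy placeholders.
columns = [
    "request_timestamp", "request_id", "question", "question_id",
    "wandbot_answer", "retrived_source_documents", "answer_elapsed_time_s",
]

with wandb.init(project="wandbot-eval") as run:  # placeholder project name
    table = wandb.Table(columns=columns)
    table.add_data(
        "2023-03-01 00:59:14", "s17gs93ou5jlaxjm",
        "Can one run use another run's output Artifact as input?", 3280,
        "Yes, ...", "{...}", 35.045,
    )
    # Logging a Table stores it with the run and exposes it via the run's
    # summary, which is what the workspace panel expression above reads from.
    run.log({"logs/qa_with_eval_pt1-2-3": table})
```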