
Migrating your data from Neptune to Weights & Biases

A primer on migrating your data from Neptune to Weights & Biases
On December 3, Neptune announced its acquisition by OpenAI and the upcoming shutdown of its services. We’re excited for the Neptune team on this milestone, and we also know this news may feel stressful if you rely on Neptune every day.
If you’re a Neptune customer affected by the shutdown, Weights & Biases is here to help keep your ML experiments and workflows running smoothly. You can follow the step-by-step instructions below to migrate your data from Neptune to Weights & Biases, or contact us and our AI engineers will work with you directly to migrate your data and experiment history.
This guide shows you how to export your data from Neptune and load it into Weights & Biases using the neptune-exporter tool.

Before you start

Here are the prerequisites:
  • Python 3.13+ (required by neptune-exporter)
  • The uv package manager
  • A Neptune API token (base64-encoded JWT format)
  • A Weights & Biases account (run wandb login to authenticate)
  • W&B entity: set it with the environment variable WANDB_ENTITY or on the command line with --wandb-entity
  • W&B API key: set it with the environment variable WANDB_API_KEY or on the command line with --wandb-api-key
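The exact setup varies by environment, but a minimal shell sketch might look like the following (the token, entity, and key values are placeholders you should replace with your own):
```bash
# Install uv if you don't already have it (one common method; see the uv documentation for alternatives)
curl -LsSf https://astral.sh/uv/install.sh | sh

# Make your Neptune and W&B credentials available to the exporter (placeholder values)
export NEPTUNE_API_TOKEN="<your-neptune-api-token>"
export WANDB_ENTITY="your-entity"
export WANDB_API_KEY="<your-wandb-api-key>"

# Alternatively, authenticate with W&B interactively instead of setting WANDB_API_KEY
wandb login
```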

Step 1: Export your data from Neptune

Run:
```bash
uv run neptune-exporter export \
--exporter neptune2 \
-p "workspace/project-name" \
-d ./exports/data \
-f ./exports/files
```
Note: Use neptune2 for most cases. Try neptune3 if you have a newer Neptune account.
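For example, if you have a newer Neptune account, the same export with the neptune3 exporter would look like this (only the --exporter flag changes):
```bash
uv run neptune-exporter export \
--exporter neptune3 \
-p "workspace/project-name" \
-d ./exports/data \
-f ./exports/files
```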

Step 2: Validate export

Before loading the data into Weights & Biases, inspect what was exported. This command reports the number of exported projects and runs, the attribute types, and basic step statistics:
```bash
uv run neptune-exporter summary --data-path ./exports/data
```
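As an additional sanity check, you can list the export directories to confirm that data and files were actually written (these paths match the export command in Step 1):
```bash
# Expect per-project data under ./exports/data and downloaded files under ./exports/files
ls -R ./exports/data ./exports/files
```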

Step 3: Load exported data to Weights & Biases

The following command loads the exported data into Weights & Biases, using the data and file paths from Step 1:
```bash
uv run neptune-exporter load \
--loader wandb \
--wandb-entity your-entity \
--name-prefix "project-name" \
-d ./exports/data \
-f ./exports/files
```

Common options

Export options
- `-p, --project-ids`: Neptune project (format: `workspace/project`)
- `-r, --runs`: Filter runs (e.g., `"RUN-*"`)
- `-c, --classes`: Include specific types: `parameters`, `metrics`, `series`, `files`
- `--exclude`: Exclude specific types
- `--api-token`: Neptune API token (or set `NEPTUNE_API_TOKEN` env var)

Load options
- `--wandb-entity`: Your W&B organization/username
- `--name-prefix`: Prefix for W&B project name
- `--wandb-api-key`: W&B API key (or use `wandb login`)

Examples

The steps above migrate an entire project. If you only want to migrate specific data, or want to load it under a modified project name, follow the examples below.

Export specific runs

```bash
uv run neptune-exporter export \
--exporter neptune2 \
-p "workspace/project" \
-r "EXP-10*" \
-d ./exports/data
```

Export only parameters and metrics

```bash
uv run neptune-exporter export \
--exporter neptune2 \
-p "workspace/project" \
-c parameters -c metrics \
--exclude files
```

Load with custom project name

```bash
uv run neptune-exporter load \
--loader wandb \
--wandb-entity my-org \
--name-prefix "migrated-project" \
-d ./exports/data
```

Troubleshooting

  • Python version error: neptune-exporter requires Python 3.13 or newer. Make sure a compatible interpreter is active (see the sketch after this list).
  • Authentication errors with neptune3: Try --exporter neptune2 instead.
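Here is a minimal sketch for checking and fixing the Python version, assuming you manage interpreters with uv:
```bash
# Check which Python version is currently active
python3 --version

# If it is older than 3.13, have uv install and use a compatible interpreter
uv python install 3.13
uv run --python 3.13 neptune-exporter summary --data-path ./exports/data
```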

What gets migrated

The neptune-exporter tool preserves the following:
  • Parameters, metrics, and time series data
  • Files and artifacts
  • System monitoring data
  • Run metadata (names, IDs, timestamps)
  • W&B project naming: imported runs land in a project named {name-prefix}_{workspace}_{project} (for example, migrating workspace/project-name with --name-prefix "migrated" produces migrated_workspace_project-name)

Resources

Official migration guide: https://docs.neptune.ai/migration/to_wandb/



About Weights & Biases

Weights & Biases is trusted by over one million AI practitioners and 1,500 organizations—including Meta and AstraZeneca—to train, fine-tune, and manage frontier models, applications, and agents at scale. We continue to invest in building the best cloud- and framework-agnostic tools for AI development, and we’re committed to raising the bar on performance, scale, innovation, and flexibility across our MLOps platform.
Recent releases include synchronized video playback, semantic color schemes, full-screen image modals, starred registries, automations history, and a new terminal interface. To learn more about these features, check out our product newsletter.

Iterate on AI agents and models faster. Try Weights & Biases today.