Getting started with the Gemini 2.5 Pro reasoning model API
A quickstart tutorial on how to set up the Gemini 2.5 Pro API with W&B logging in Python.
Created on March 25|Last edited on March 25
This tutorial is a quickstart for anyone getting started with the Gemini 2.5 Pro Experimental API—especially in Python. After seeing the response to our GPT-4.5 quickstart, we figured now’s a good time to walk through the setup for Google’s most capable model yet.
We’ll follow up with task-specific guides in the near future, but this should get you exploring right away.
And, if you’d prefer to try it out before going step-by-step, you can run the full example in this Colab:
Here’s what we’ll cover:
Table of contents
- A bit about Gemini 2.5 Pro
- Getting your Gemini 2.5 Pro API key
- Quickstart tutorial for Gemini 2.5 Pro in Python
  - Step 1: Installing the Gemini library and W&B Weave
  - Step 2: Import the libraries and pass the Gemini 2.5 API key
  - Step 3: Add your API Key
  - Step 4: Add the query
  - Step 5: Choose the Gemini 2.5 Pro model
  - Step 6: Name your Weights & Biases project
  - Step 7: Generate text with Gemini 2.5 Pro (and view reasoning output)
- Conclusion
If you're REALLY just getting started and don't yet have your machine set up to run Python, I've created a quick tutorial here that will have you up and running in just a few minutes.
💡
All ready? Let's dive in.
A bit about Gemini 2.5 Pro
Gemini 2.5 Pro Experimental is Google’s most advanced model yet, with strong results across reasoning, coding, and complex task solving. It’s a “thinking model”—able to reason through its responses and handle layered instructions across multiple modalities.
Here are the key updates and capabilities:
- Advanced reasoning: State-of-the-art performance in math, science, and logic benchmarks.
- Code generation: Excels at building web apps, refactoring code, and completing agent-style tasks.
- Multimodal input: Accepts text, images, audio, and video as input.
- Text output: Supports structured outputs, function calling, and code execution.
- Tool integration: Can call tools like Google Search, execute functions, and support complex workflows.
- Context window: 1 million input tokens, 64,000 output tokens.
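Those context limits are generous, but it can still help to sanity-check prompt sizes before sending them. Below is a minimal sketch using a rough heuristic of ~4 characters per token for English text; `estimate_tokens` and `fits_input_window` are hypothetical helpers, and for exact counts the google-generativeai library provides `model.count_tokens(...)`.

```python
# Gemini 2.5 Pro's documented limits.
INPUT_TOKEN_LIMIT = 1_000_000
OUTPUT_TOKEN_LIMIT = 64_000

def estimate_tokens(text: str) -> int:
    """Very rough estimate: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

def fits_input_window(text: str) -> bool:
    """Check the rough estimate against the 1M-token input limit."""
    return estimate_tokens(text) <= INPUT_TOKEN_LIMIT

print(estimate_tokens("Explain quantum entanglement in simple terms."))
```

This is only a ballpark check; tokenizers vary, so use the API's own token counting when the numbers matter.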
It launched on March 25, 2025, at the top of the LMArena leaderboard:

Getting your Gemini 2.5 Pro API key

Head to Google's Gemini developer page and click "Explore models in Google AI Studio." That will take you to:

From here, click "Get API key" at the top left.
And then "Create API key."

Choose an existing project or create one, and you'll be ready to go:

With that in hand, let's jump into using the API.
Quickstart tutorial for Gemini 2.5 Pro in Python
We’ll use Jupyter Notebooks for this. If you’ve never used one, there’s a quick guide here to get you going.
Once set up, open a new notebook and follow along below.
Step 1: Installing the Gemini library and W&B Weave
In this project, we’ll use W&B Weave to track and visualize the performance of our Gemini 2.5 API calls—especially the model’s reasoning process.
Weave is a powerful logging and observability tool built by Weights & Biases. It captures inputs, outputs, and metadata from your Python functions and lets you inspect them in a rich UI. This is especially helpful for LLM workflows, where understanding why a model responded a certain way is just as important as what it responded with.
By simply decorating your function with @weave.op(), Weave will automatically log each call: the prompt you sent, the model’s response, and any intermediate steps in between. You can then open the Weave interface in your W&B project to explore results, compare runs, and evaluate the reasoning behind each output.
This makes Weave particularly useful when iterating on prompts, debugging model behavior, or tracking how Gemini 2.5 performs over time in a production setting.
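Here is a minimal sketch of the decorator pattern described above. To keep it runnable on its own, it falls back to a no-op when weave isn't installed; `build_reasoning_prompt` is a hypothetical helper, not part of the tutorial's final code.

```python
try:
    import weave
    track = weave.op()  # Weave logs inputs and outputs of decorated calls
except ImportError:
    def track(f):  # no-op fallback so this sketch runs without weave installed
        return f

@track
def build_reasoning_prompt(query: str) -> str:
    """Hypothetical helper: wrap a user query with a step-by-step instruction."""
    return query.strip() + "\n\nPlease think step-by-step before answering."

print(build_reasoning_prompt("Why is the sky blue?"))
```

After `weave.init("my-project")`, each call to a decorated function like this would appear as a trace in your W&B project.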
This is a good time to sign up for a free Weights & Biases account. It will save you from having to interrupt your workflow a couple of minutes from now, when you're further along.
💡
The code:
!pip install -q -U google-generativeai
!pip install --upgrade weave
When you run the Jupyter cell, you'll notice that the space between the brackets ([ ]) fills with an asterisk ([*]). This means the cell is running; wait for the * to turn into a number before continuing.
Now that we've installed what we need, let's import the libraries so we can actually use our tools.
If you're new to Python: when we install a library, we simply download the code; when we import it, we make that library available for use.
💡
Step 2: Import the libraries and pass the Gemini 2.5 API key
The next step is to import the required libraries, as well as pass the API key across to our friends at Google so we have access to Gemini 2.5 Pro.
In this block, we'll import the libraries we've just installed and define a small helper for formatting output. The imports are:
- weave: For logging our prompts and responses to W&B Weave.
- generativeai: The Google Generative AI library. The reason you're here, we assume!
Here's the code:
import google.generativeai as genai
import weave

def format_res(text):
    return text.replace('•', ' *')
Step 3: Add your API Key
Now we just need to configure Gemini 2.5 to recognize your API key.
To keep things secure, we’ll prompt you to paste your key manually instead of hardcoding it. It’ll be printed back for confirmation—but feel free to remove that line if you’re working in a shared or public environment.
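If you'd rather not echo the full secret at all, one option is to print only a masked version for confirmation. This is a small sketch under that assumption; `mask_key` is a hypothetical helper, and Python's standard `getpass` hides the key as you type it.

```python
from getpass import getpass  # hides the key as you type it (not echoed to screen)

def mask_key(key: str, visible: int = 4) -> str:
    """Hypothetical helper: show only the last few characters of a secret."""
    if len(key) <= visible:
        return "*" * len(key)
    return "*" * (len(key) - visible) + key[-visible:]

# In a notebook you might collect the key like this:
# gemini_api_key = getpass("What is your Gemini API key? : ")
print(mask_key("AIzaSyExampleKey1234"))
```

Printing the masked form gives you the confirmation without leaving the full key in your notebook output.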
gemini_api_key = input("What is your Gemini API key? : ")
genai.configure(api_key=gemini_api_key)
print(gemini_api_key)
Step 4: Add the query
Let’s add the content you want Gemini to generate:
query = input("What content would you like me to produce? : ")
print(query)
This makes it easy to experiment with new prompts without editing the code directly.
Step 5: Choose the Gemini 2.5 Pro model
We’ll use the latest experimental model, as of this writing:
model = genai.GenerativeModel('gemini-2.5-pro-exp-03-25')
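If you want more control over the output, the model constructor also accepts generation settings. The sketch below uses a plain dict with illustrative values; the specific numbers are assumptions you should tune for your task.

```python
# Illustrative generation settings (values are assumptions, not recommendations).
generation_config = {
    "temperature": 0.7,         # higher = more varied output
    "max_output_tokens": 8192,  # cap well under the 64K output limit
}

# The dict would be passed when constructing the model, e.g.:
# model = genai.GenerativeModel('gemini-2.5-pro-exp-03-25',
#                               generation_config=generation_config)
print(sorted(generation_config))
```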
Step 6: Name your Weights & Biases project
Time to name your Weights & Biases project, which lets us organize all our traces and logs.
project_name = input("What would you like to name your Weave project? : ")
wandb_name = project_name.replace(" ", "-")  # Replace spaces with dashes in the name
weave.init(wandb_name)
Wait for the cell to finish executing (look for the asterisk to change to a number).
You'll end up with something that looks like:

Step 7: Generate text with Gemini 2.5 Pro (and view reasoning output)
Now it’s time to see Gemini 2.5 Pro in action—and inspect how it thinks through your prompt.
You have to explicitly instruct the model to output the reasoning steps, which we'll add to the prompt.
We’ll take the query we entered earlier in Step 4 and enhance it by appending an instruction to think step-by-step before responding. This helps expose Gemini’s internal logic, which you’ll be able to view directly in Weave.
@weave.op()
def generate_text_with_gemini(model, query):
    # Append reasoning prompt
    enhanced_query = (
        query.strip()
        + "\n\nPlease think step-by-step and explain your reasoning before producing the final output."
    )
    response = model.generate_content(enhanced_query)
    return response.text

num_responses = 1  # Set how many completions you'd like
for i in range(num_responses):
    response_text = generate_text_with_gemini(model, query)
    print("=== Reasoning Trace ===\n")
    print(response_text.split("\n\n")[0])  # Likely where reasoning begins
    print("\n=== Full Output ===\n")
    print(format_res(response_text))
...you'll get an output like this:

Each response is automatically tracked in Weave. You'll see both your prompt and Gemini's full response, including its internal reasoning, logged under the function tab in your project.
This is especially useful for understanding how Gemini 2.5 structures its answers, and for comparing prompt variations or failures.
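One simple way to compare prompt variations in Weave is to build a few phrasings of the same query and call your traced function on each, so every variant shows up as its own trace. This is a sketch under that assumption; `prompt_variants` is a hypothetical helper.

```python
def prompt_variants(query: str) -> list:
    """Hypothetical helper: build a few phrasings of the same query to compare."""
    base = query.strip()
    return [
        base,
        base + "\n\nThink step-by-step before answering.",
        base + "\n\nAnswer concisely, then justify your answer.",
    ]

# Each call below would appear as a separate trace in Weave:
# for variant in prompt_variants(query):
#     generate_text_with_gemini(model, variant)
print(len(prompt_variants("Why is the sky blue?")))
```

Side-by-side traces like these make it easy to spot which phrasing produces the clearest reasoning.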

If you're interested, you can also easily dig into the "Summary" section inside Weave to view more metrics on the function:

Conclusion
I hope you've reached the end of this Gemini 2.5 Pro tutorial with more confidence in how to work with the API in Python, and that you can see how useful W&B Weave is for analyzing model behavior.
We’ll be adding more task-specific tutorials soon, including step-by-step workflows for reasoning chains, code generation, and multimodal prompts.
In the meantime, feel free to leave questions or ideas in the comments.