
o1 model Python quickstart using the OpenAI API

Getting set up and running the new o1 models in Python using the OpenAI API. We'll be working with o1-preview.
Created on September 17 | Last edited on December 17
Getting started with the new o1 models from OpenAI via the API is surprisingly easy, and it offers far more flexibility than you'll find through the ChatGPT interface, or even with GPT-4.
In this quickstart using the o1-preview model, we'll have you up-and-running in about 5 minutes.
If you'd prefer to get your hands dirty faster, you can test the script out in this Colab.


Note: Currently, the o1-series models are only accessible via the API if you have a Tier 5 account.
Unfortunately, if you don't, you'll have to access o1-preview through the ChatGPT interface.
We recommend bookmarking this article for when the models open up more generally.
If you'd like to get started with GPT-4o, and upgrade your code later, we have a tutorial for that here.
💡
Here's what we're covering in this quickstart tutorial:

Table Of Contents

  • W&B Weave
  • o1 models in Python quickstart
  • Step 1: The OpenAI o1-preview API key
  • Step 2: Installing OpenAI and W&B Weave
  • Step 3: Import libraries and pass the OpenAI API key
  • Step 4: Setting up your prompt
  • Step 5: Generating content with an o1 model
  • Viewing your o1 model output in W&B Weave

If you're just getting started and don't yet have your machine set up to run Python, we've created a quick tutorial here that will have you up and running in just a few minutes.

Additionally, if you need a walkthrough on getting the OpenAI API, you'll find that here.
💡

W&B Weave

W&B Weave simplifies the process of tracking and analyzing model outputs in your project. To get started with Weave, you'll first import it and initialize it with your project name.
One of its standout features is the @weave.op decorator. In Python, a decorator is a powerful tool that extends the behavior of a function. By placing @weave.op above any function in your code, you’re telling Weave to automatically log that function’s inputs and outputs. This makes it incredibly easy to keep track of what data goes in and what comes out.
Once your code runs, these logs appear in the Weave dashboard, where you’ll find detailed visualizations and traces of the function calls. This not only aids in debugging but also helps structure your experimental data, making it much easier to develop and fine-tune models like o1-preview.
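For example, here's a minimal sketch of Weave in action (the project name and function below are placeholders for illustration, not part of this tutorial's script):

import weave

# Placeholder project name; calls will be logged under this project in W&B
weave.init('weave-decorator-demo')

@weave.op()
def shout(text: str) -> str:
    # Weave automatically records this call's input and output
    return text.upper() + "!"

shout("hello")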

o1 models in Python quickstart

Let's jump right in. This tutorial assumes you're working in a Jupyter notebook but, of course, the code will work in other applications.
We're going to be working with the o1-preview model specifically.

Step 1: The OpenAI o1-preview API key

The first thing we need to do is define our o1-preview API key.
The code to do this is:
%env OPENAI_API_KEY=KEY
You'll want to replace "KEY" with your OpenAI API key.
When you run the cell, you'll get your key back as the output, confirming that it's been successfully set as an environment variable.
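If you're working from a plain Python script rather than a notebook, the %env magic won't be available. A minimal alternative (a sketch, not part of the notebook flow) is to set the variable in-process:

import os

# Equivalent of the %env magic for non-notebook environments.
# Replace "KEY" with your actual OpenAI API key.
os.environ["OPENAI_API_KEY"] = "KEY"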


Step 2: Installing OpenAI and W&B Weave

To get started with the o1-preview model, all you need to install is OpenAI. However, we’ll also show you how to simplify reviewing multiple outputs using W&B Weave, which makes the process much more efficient.
This is a great time to sign up for Weights & Biases, if you haven't already. This will save you from interrupting your workflow later.
The code to do this is:
!pip install openai weave
Run the Jupyter cell after entering this code.

When you execute the cell, you'll notice an asterisk ([*]) appear between the brackets [ ]. This indicates that the cell is running, and you’ll need to wait until the asterisk turns into a number before proceeding.
💡
Now that they're installed, we still need to import the libraries for use.
If you're new to Python, think of it this way: Installing a library is like uploading an image to a server—you’ve got the resource. Importing is like embedding that image on a webpage, making it accessible in your code.

Step 3: Import libraries and pass the OpenAI API key

The next step is to import the libraries we'll need to make magic, as well as pass the API key across to OpenAI so we have access to the o1-preview model.
In this block, we'll import the OpenAI and W&B Weave libraries we've already installed. We're also importing re to give us access to regular expressions and os to fetch environment variables (in this case, our OpenAI API key).
import os
import weave
from openai import OpenAI
import re

# Initialize Weave and W&B
weave.init('o1-preview-setup')

# Initialize OpenAI client
client = OpenAI(api_key=os.getenv('OPENAI_API_KEY'))
In the line: weave.init('o1-preview-setup')
You can change o1-preview-setup to whatever you would like. This is going to be the project name inside Weights & Biases.

Step 4: Setting up your prompt

The following code will generate inputs you can fill in. They will ask for:
  • Who the model should act as. This ties to the assistant role and is used to define traits the model should take on. For example, I might respond to the question "Who should I be, as I answer your prompt?" with "an SEO."
  • The prompt of what the model should do. This ties to the user role and is used to define what's expected of it.
o1_assistant_prompt = "You are " + input("Who should I be, as I answer your prompt?")
o1_user_prompt = input("What prompt do you want me to do?")
o1_prompt = (o1_assistant_prompt, o1_user_prompt)
print(o1_prompt)
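For example, if you answered "an SEO" and "create a content strategy for a coffee blog" (hypothetical inputs), the final line would print the tuple ('You are an SEO', 'create a content strategy for a coffee blog').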
Currently, the model only supports the assistant and user roles, though the system role should be available once it's out of beta.
💡

Step 5: Generating content with an o1 model

Hopefully, this hasn't been too painful because now we're at the fun part.
I’m breaking the content generation and logging step into two parts to make the script a bit easier to digest. In the first cell, we see the functions, and in the second, we're simply running them.
The functions essentially do two things:
  • Request that o1-preview generate a list of steps it would take to complete the request you entered above. These steps will be used as inputs for the reply. They will also appear as inputs in Weave.
  • Request that GPT-4o generate a response to your request, based in part on the list generated by o1-preview.
I could have easily used o1-preview for both of these steps, but at present o1-preview doesn't offer the same functionality: several parameters available with GPT-4o (such as temperature and max_tokens) are missing. It's also a good illustration of how we can use one model to feed another. GPT-4o is less expensive and less limited, so in production this setup would make sense.
# Function to parse steps_text into a list of steps
def parse_steps(steps_text):
    # Split on numbered ("1. "), dashed ("- "), or starred ("* ") list markers
    steps = re.split(r'\n\s*\d+\.\s+|\n\s*-\s+|\n\s*\*\s+', steps_text.strip())
    steps = [step.strip() for step in steps if step.strip()]
    return steps

def generate_steps(o1_assistant_prompt: str, o1_user_prompt: str) -> list:
    steps_prompt = f"You want to {o1_user_prompt}. Before generating the strategy, please outline the reasoning steps you would take to develop the strategy. The steps should be given as a simple list, avoiding commentary or unnecessary detail. Make your best guess at what you are being asked to create a list of reasoning steps for. List the steps numerically."
    steps_messages = [
        {"role": "assistant", "content": o1_assistant_prompt},
        {"role": "user", "content": steps_prompt}
    ]
    # API call to get steps
    steps_response = client.chat.completions.create(
        model="o1-preview",
        messages=steps_messages
    )
    steps_text = steps_response.choices[0].message.content
    # Parse the numbered response into a clean list of steps
    steps_list = parse_steps(steps_text)
    return steps_list


@weave.op()
def generate_content_with_steps(o1_assistant_prompt: str, o1_user_prompt: str, steps_list: list) -> str:
    # Step 2: Use the steps as input to generate the final content strategy
    steps_text = "\n".join(f"{i+1}. {step}" for i, step in enumerate(steps_list))

    strategy_prompt = (
        f"Using the following steps, please generate a content strategy to {o1_user_prompt}:\n{steps_text}\n"
        f"Please proceed directly without requesting additional information."
    )
    strategy_messages = [
        {"role": "assistant", "content": o1_assistant_prompt},
        {"role": "user", "content": strategy_prompt}
    ]
    # API call to generate the strategy
    strategy_response = client.chat.completions.create(
        model="gpt-4o",
        messages=strategy_messages,
        temperature=0.2,
        max_tokens=1000,
        frequency_penalty=0.0
    )
    strategy_text = strategy_response.choices[0].message.content
    # Only return the strategy text
    return strategy_text
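
Before running them, here's a quick sanity check of parse_steps using made-up model output, so you can see how a numbered response becomes a clean list (note that any text before the first numbered item is kept as its own entry):

# Hypothetical o1-style response, purely for illustration
sample_steps = "Here are the steps:\n1. Research keywords\n2. Outline topics\n3. Draft posts"
print(parse_steps(sample_steps))
# ['Here are the steps:', 'Research keywords', 'Outline topics', 'Draft posts']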

and finally ...
# First, get the steps
steps = generate_steps(o1_assistant_prompt, o1_user_prompt)

# Then, pass the steps to generate the content strategy
strategy = generate_content_with_steps(o1_assistant_prompt, o1_user_prompt, steps)

print(strategy)
When run, the input prompts and the generated strategy appear in your notebook's cell output.

Viewing your o1 model output in W&B Weave

From the final cell, you can access the output and copy it from there. If you want to dig deeper into what's going on or review past requests (and I recommend you do), you can visit your Weights & Biases dashboard or click the links printed with the response.
You can see the output from one of my Traces at:
Here is an investigation of that trace:

And of course, it's handy when you just want to review how past data was collected, but it's especially valuable when you hit errors:

That's where some of the biggest value of logging with Weave comes from. Here, I can see that the output did not have access to the list from the o1 model, and as such is likely not as complete or informed as it should be.
Hopefully, you've found this o1 model quickstart tutorial helpful. Let us know in the comments what you end up working on!

Iterate on AI agents and models faster. Try Weights & Biases today.