GPT OSS models on W&B Inference

OpenAI GPT OSS 20B inference overview

Price per 1M tokens

$0.05 (input)
$0.20 (output)

Parameters

3.6B (Active)
21B (Total)

Context window

131K

Release date

Aug 2025
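At the listed rates, request cost scales linearly with token counts. A minimal sketch of the arithmetic, using the per-1M-token prices quoted above (the helper name is ours, for illustration only):

```python
# Estimate the cost of a single request at the GPT OSS 20B rates above:
# $0.05 per 1M input tokens, $0.20 per 1M output tokens.
INPUT_PRICE_PER_M = 0.05
OUTPUT_PRICE_PER_M = 0.20

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the request cost in US dollars."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a 2,000-token prompt with a 500-token completion.
print(f"${request_cost(2_000, 500):.6f}")  # → $0.000200
```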

OpenAI GPT OSS 20B inference details

OpenAI GPT OSS 20B is an open-weight, 21B-parameter model released by OpenAI. It uses a Mixture-of-Experts (MoE) architecture that activates 3.6B parameters per forward pass, optimized for lower-latency inference. The model is trained in OpenAI’s Harmony response format and supports configurable reasoning levels and structured outputs.
 
Instantly access GPT OSS 20B at some of the industry’s lowest token costs, all running on CoreWeave’s purpose-built AI cloud. Rapidly evaluate, monitor, and iterate on your agentic AI applications using integrated W&B Weave tracing, readily available through W&B Inference.
 
Created by: OpenAI
License: Apache 2.0
🤗 model card: openai/gpt-oss-20b
 
 
import openai
import weave

# Weave autopatches OpenAI to log LLM calls to W&B
weave.init("<team>/<project>")

client = openai.OpenAI(
    # The custom base URL points to W&B Inference
    base_url="https://api.inference.wandb.ai/v1",

    # Get your API key from https://wandb.ai/authorize
    # Consider setting it in the environment as OPENAI_API_KEY instead for safety
    api_key="<your-apikey>",

    # Team and project are required for usage tracking
    project="<team>/<project>",
)

response = client.chat.completions.create(
    model="openai/gpt-oss-20b",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Tell me a joke."}
    ],
)

print(response.choices[0].message.content)
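The reasoning-level configuration mentioned above is set through the system prompt in OpenAI’s Harmony format, where a `Reasoning: low|medium|high` line steers how much chain-of-thought the model spends before answering. A minimal sketch (the helper name is ours, not part of any SDK; pass the result as `messages=` in the call above):

```python
# Sketch: build a message list that sets a Harmony reasoning level.
# GPT OSS models read "Reasoning: low|medium|high" from the system prompt;
# the helper below is illustrative, not part of the openai or weave SDKs.
def build_messages(user_prompt: str, reasoning: str = "high") -> list[dict]:
    assert reasoning in ("low", "medium", "high")
    return [
        {
            "role": "system",
            "content": f"You are a helpful assistant.\nReasoning: {reasoning}",
        },
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("Summarize the tradeoffs of MoE models.", reasoning="medium")
```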

OpenAI GPT OSS 120B inference overview

Price per 1M tokens

$0.15 (input)
$0.60 (output)

Parameters

5.1B (Active)
117B (Total)

Context window

131K

Release date

Aug 2025

OpenAI GPT OSS 120B inference details

OpenAI GPT OSS 120B is an open-weight, 117B-parameter Mixture-of-Experts (MoE) language model from OpenAI designed for high-reasoning, agentic, and general-purpose production use cases. It activates 5.1B parameters per forward pass. The model supports configurable reasoning depth, full chain-of-thought access, and structured output generation.
 
Instantly access GPT OSS 120B at some of the industry’s lowest token costs, all running on CoreWeave’s purpose-built AI cloud. Rapidly evaluate, monitor, and iterate on your agentic AI applications using integrated W&B Weave tracing, readily available through W&B Inference.
 
Created by: OpenAI
License: Apache 2.0
🤗 model card: openai/gpt-oss-120b
 
 
import openai
import weave

# Weave autopatches OpenAI to log LLM calls to W&B
weave.init("<team>/<project>")

client = openai.OpenAI(
    # The custom base URL points to W&B Inference
    base_url="https://api.inference.wandb.ai/v1",

    # Get your API key from https://wandb.ai/authorize
    # Consider setting it in the environment as OPENAI_API_KEY instead for safety
    api_key="<your-apikey>",

    # Team and project are required for usage tracking
    project="<team>/<project>",
)

response = client.chat.completions.create(
    model="openai/gpt-oss-120b",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Tell me a joke."}
    ],
)

print(response.choices[0].message.content)
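The structured-output generation mentioned above is typically exercised through the OpenAI-compatible `response_format` parameter. Whether W&B Inference accepts the `json_schema` variant for this model is an assumption here, so treat this as a sketch of the request shape rather than a confirmed API:

```python
import json

# Sketch: a JSON Schema constraining the model's output. Pass it as
# response_format=... in client.chat.completions.create (OpenAI-compatible
# "json_schema" variant; support on a given endpoint is an assumption).
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "joke",
        "schema": {
            "type": "object",
            "properties": {
                "setup": {"type": "string"},
                "punchline": {"type": "string"},
            },
            "required": ["setup", "punchline"],
        },
    },
}

# The model's reply arrives as a JSON string in message.content.
# `reply` below is a made-up illustration of such a string:
reply = ('{"setup": "Why do MoE models tell short jokes?", '
         '"punchline": "Only a few experts are active."}')
joke = json.loads(reply)
print(joke["punchline"])
```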