Qwen3 Coder 480B A35B inference overview

Price

$1.00 / 1M tokens (input)
$1.50 / 1M tokens (output)
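
A minimal cost sketch, taking the rates above as USD per million tokens; the token counts here are illustrative only:

# Rough per-request cost estimate (rates assumed to be USD per 1M tokens)
INPUT_RATE = 1.00   # $ per 1M input tokens
OUTPUT_RATE = 1.50  # $ per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    return input_tokens / 1_000_000 * INPUT_RATE + output_tokens / 1_000_000 * OUTPUT_RATE

# Example: a 20K-token repository prompt plus a 2K-token completion
print(f"${request_cost(20_000, 2_000):.4f}")  # -> $0.0230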

Parameters

35B (active)
480B (total)

Context Window

262K tokens

Release Date

Jul 2025

Qwen3 Coder 480B A35B inference details

Qwen3-Coder-480B-A35B-Instruct is a Mixture-of-Experts (MoE) code generation model developed by the Qwen team. It is optimized for agentic coding tasks such as function calling, tool use, and long-context reasoning over repositories. The model features 480 billion total parameters, with 35 billion active per forward pass (8 out of 160 experts).

Created by: 

Alibaba

License: 

apache-2.0

import openai
import weave

# Weave autopatches OpenAI to log LLM calls to W&B
weave.init("<team>/<project>")

client = openai.OpenAI(
    # The custom base URL points to W&B Inference
    base_url='https://api.inference.wandb.ai/v1',

    # Get your API key from https://wandb.ai/authorize
    # Consider setting it in the environment as OPENAI_API_KEY instead for safety
    api_key="",

    # Team and project are required for usage tracking
    project="<team>/<project>",
)

response = client.chat.completions.create(
    model="Qwen/Qwen3-Coder-480B-A35B-Instruct",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Tell me a joke."}
    ],
)

print(response.choices[0].message.content)
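
Because the model is tuned for function calling and tool use, you can also pass OpenAI-style tool definitions through the same client. The sketch below is a minimal illustration, assuming the endpoint accepts the standard tools parameter of the Chat Completions API; the get_file tool is hypothetical.

# Hedged sketch: OpenAI-style tool calling, reusing the client defined above.
# "get_file" is a made-up tool for illustration; define whatever tools your agent needs.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_file",
            "description": "Read a file from the repository",
            "parameters": {
                "type": "object",
                "properties": {
                    "path": {"type": "string", "description": "Repository-relative file path"}
                },
                "required": ["path"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="Qwen/Qwen3-Coder-480B-A35B-Instruct",
    messages=[{"role": "user", "content": "Summarize what src/main.py does."}],
    tools=tools,
)

# When the model decides to call a tool, the arguments arrive as a JSON string.
message = response.choices[0].message
if message.tool_calls:
    for call in message.tool_calls:
        print(call.function.name, call.function.arguments)
else:
    print(message.content)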

Qwen3 Coder 480B A35B resources

Course
AI engineering course: Agents
Guide
W&B Inference powered by CoreWeave
Whitepaper
A primer on building successful AI agents