Qwen3 235B A22B Thinking-2507
Text
Model overview
Price: $0.10 input / $0.10 output
Parameters: 22B active / 235B total
Context window: 262K
Release date: Jul 2025
Qwen3-235B-A22B-Thinking-2507 is a high-performance, open-weight Mixture-of-Experts (MoE) language model optimized for complex reasoning tasks. It activates 22B of its 235B parameters per forward pass. This "thinking-only" variant enhances structured logical reasoning, mathematics, science, and long-form generation, and shows strong benchmark performance on AIME, SuperGPQA, LiveCodeBench, and MMLU-Redux. It operates exclusively in a dedicated reasoning mode: the chat template opens the thinking block automatically, so generated text typically contains only the closing </think> tag before the final answer, and the model is designed for long, high-token outputs in challenging domains.
The model is instruction-tuned and excels at step-by-step reasoning, tool use, agentic workflows, and multilingual tasks. This release represents the most capable open-source variant in the Qwen3-235B series, surpassing many closed models in structured reasoning use cases as of July 2025.
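Because the reasoning trace and the final answer arrive in the same text stream, client code usually splits on the closing </think> tag. The sketch below is a minimal illustration of that post-processing step, not an official utility; the example string is invented.

```python
# Minimal sketch: separate the reasoning trace from the final answer.
# The Thinking-2507 chat template opens the reasoning block automatically,
# so generated text typically contains only the closing </think> tag.
def split_reasoning(completion_text: str) -> tuple[str, str]:
    """Return (reasoning, answer) from raw generated text."""
    marker = "</think>"
    if marker in completion_text:
        reasoning, answer = completion_text.split(marker, 1)
        return reasoning.strip(), answer.strip()
    # No closing tag found: treat the whole output as the answer.
    return "", completion_text.strip()

reasoning, answer = split_reasoning(
    "First, check small factors of 84... </think> The answer is 42."
)
print(answer)  # -> "The answer is 42."
```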
Supported features
Reasoning
JSON mode
Structured output
Tool calling
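On most hosts these features are exposed through the OpenAI-style chat-completions convention. The sketch below shows tool calling under that assumption; the base URL, API key handling, model identifier, and the get_weather tool are all placeholders rather than values from this card.

```python
# Hypothetical sketch: tool calling through an OpenAI-compatible endpoint.
from openai import OpenAI

client = OpenAI(
    base_url="https://example-provider.com/v1",  # placeholder endpoint
    api_key="YOUR_API_KEY",
)

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # illustrative tool, not part of the model card
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="qwen3-235b-a22b-thinking-2507",  # provider-specific model ID
    messages=[{"role": "user", "content": "What's the weather in Lisbon?"}],
    tools=tools,
)

# If the model decides to call the tool, the arguments arrive as a JSON string.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```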
Use this model
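A minimal usage sketch, again assuming an OpenAI-compatible endpoint; the base URL and model identifier vary by provider and are placeholders here. Thinking models tend to spend many tokens on the reasoning trace, so a generous output budget is usually worthwhile.

```python
# Minimal usage sketch against a hypothetical OpenAI-compatible endpoint.
from openai import OpenAI

client = OpenAI(
    base_url="https://example-provider.com/v1",  # placeholder endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="qwen3-235b-a22b-thinking-2507",  # provider-specific model ID
    messages=[{"role": "user",
               "content": "Prove that the square root of 2 is irrational."}],
    max_tokens=8192,  # leave room for the reasoning trace plus the answer
)

print(response.choices[0].message.content)
```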