OpenAI o3-mini is here
OpenAI has announced the release of o3-mini, a new AI model designed to offer cost-effective, high-performance reasoning with a focus on STEM subjects. Available today in ChatGPT and through OpenAI's API, o3-mini builds on previous mini models while improving speed and accuracy and adding new developer features.
Advancing Small Model Performance
o3-mini represents a major step in OpenAI’s efforts to push the boundaries of small AI models. It delivers strong capabilities in science, technology, engineering, and mathematics (STEM), with improvements in mathematical reasoning, coding, and scientific problem-solving. In evaluations, testers found that o3-mini reduced major errors by 39% compared to its predecessor, OpenAI o1-mini, and provided more precise responses.
New Developer Features and Flexible Reasoning Effort
For the first time, a small OpenAI reasoning model supports function calling, structured outputs, and developer messages, making o3-mini production-ready from launch. Developers can also choose between three levels of reasoning effort (low, medium, and high) depending on the complexity of the task, trading off speed and cost against accuracy. Unlike OpenAI o1, o3-mini does not support vision tasks, so users who need image inputs should continue using o1.
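To make these options concrete, here is a minimal sketch of what a call to o3-mini might look like through the OpenAI Python SDK, assuming the current v1.x client exposes the reasoning_effort parameter, developer messages, and function calling for this model as described above; the tool schema and prompts are purely illustrative, not part of OpenAI's announcement.

```python
# Sketch: calling o3-mini with a chosen reasoning effort, a developer message,
# and a hypothetical tool definition to illustrate function calling.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical tool for illustration only.
tools = [
    {
        "type": "function",
        "function": {
            "name": "solve_quadratic",
            "description": "Return the real roots of ax^2 + bx + c = 0.",
            "parameters": {
                "type": "object",
                "properties": {
                    "a": {"type": "number"},
                    "b": {"type": "number"},
                    "c": {"type": "number"},
                },
                "required": ["a", "b", "c"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="o3-mini",
    reasoning_effort="medium",  # "low", "medium", or "high"
    messages=[
        {"role": "developer", "content": "You are a concise math assistant."},
        {"role": "user", "content": "Find the roots of 2x^2 - 3x - 5 = 0."},
    ],
    tools=tools,
)

message = response.choices[0].message
if message.tool_calls:
    # The model chose to call the tool; its arguments arrive as a JSON string.
    print(message.tool_calls[0].function.arguments)
else:
    print(message.content)
```

Raising reasoning_effort to "high" would generally yield more thorough answers at the cost of latency, while "low" favors speed; which setting is appropriate depends on the task.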
Wider Availability and Higher Limits
ChatGPT Plus, Team, and Pro users can access o3-mini starting today, while Enterprise users will gain access within a week. This model will replace OpenAI o1-mini in the ChatGPT model picker, offering faster responses and higher rate limits. Plus and Team users now receive 150 messages per day with o3-mini, tripling the previous 50-message limit with o1-mini. Additionally, free users can try o3-mini for the first time by selecting the “Reason” option in the ChatGPT message composer.
Optimized for STEM and Competitive Coding
o3-mini has been fine-tuned for technical problem-solving, with notable improvements in competitive math and coding benchmarks. In the AIME 2024 competition math test, o3-mini-high achieved an 83.6% accuracy rate, surpassing previous small models. In coding challenges on Codeforces, the model reached an Elo rating of 2073, outperforming earlier versions. Similar improvements were seen in PhD-level science questions, software engineering tasks, and research-level mathematics evaluations.

Speed and Performance Improvements
o3-mini is designed to be faster than previous models, with a 24% reduction in response time compared to o1-mini. This makes it a practical choice for applications where both speed and accuracy are crucial. The model delivers near-o1 performance on reasoning-intensive tasks while offering lower latency.
Enhanced Safety and Alignment
To ensure safe deployment, OpenAI applied its deliberative alignment technique, which teaches the model to reason about safety specifications before responding. Testing shows that o3-mini performs well on safety and jailbreak evaluations, surpassing GPT-4o in these areas. OpenAI conducted extensive red-teaming and preparedness assessments before release, ensuring the model meets rigorous safety standards.
What’s Next for OpenAI’s Small Models?
The launch of o3-mini continues OpenAI’s effort to provide cost-effective, high-quality AI while expanding access to advanced reasoning capabilities. By improving efficiency and lowering costs, OpenAI is making AI more accessible across different industries and user levels. With ongoing advancements, future mini models may bring even greater performance improvements while maintaining affordability.