Fine-tuning is now available for GPT-4o
After several months, fine-tuning is now available for OpenAI's flagship model!
Fine-tuning is now available for GPT-4o! This feature allows users to customize the model with their datasets, enabling higher performance and accuracy tailored to specific applications. OpenAI is also offering one million training tokens per day for free to each organization until September 23, encouraging widespread adoption and experimentation with this new capability.
Pricing
Fine-tuning is available to all developers on paid usage tiers. To get started, select the GPT-4o-2024-08-06 model from the fine-tuning dashboard. Training costs $25 per million tokens, and inference is priced at $3.75 per million input tokens and $15 per million output tokens. Developers can also fine-tune GPT-4o mini, with two million training tokens available per day for free until September 23.
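For developers who prefer working through the API rather than the dashboard, here is a minimal sketch of starting a fine-tuning job with the OpenAI Python SDK (openai >= 1.0). The dataset filename is a placeholder, and the training file must follow OpenAI's chat-format JSONL for fine-tuning.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload the training dataset (newline-delimited JSON of chat-format examples).
# "training_data.jsonl" is a placeholder path for this sketch.
training_file = client.files.create(
    file=open("training_data.jsonl", "rb"),
    purpose="fine-tune",
)

# Create the fine-tuning job against the GPT-4o snapshot named above.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-2024-08-06",
)

print(job.id, job.status)
```

Once the job completes, the resulting fine-tuned model ID can be passed as the model parameter in ordinary chat completion requests, just like a base model.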
Real-World Success Stories with GPT-4o Fine-Tuning
Over the past few months, select partners have tested GPT-4o fine-tuning, with impressive results in real-world applications. Cosine's AI assistant, Genie, used a fine-tuned version of GPT-4o to achieve a state-of-the-art (SOTA) score of 43.8% on the SWE-bench Verified benchmark. Genie, designed to assist with software engineering tasks such as bug identification and code refactoring, shows how fine-tuning can improve accuracy and efficiency in technical problem-solving. The model was also trained to output in specific formats, such as patches that can be applied directly to a codebase, enhancing its practical utility.
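As an illustration of that kind of format-specific training (not Cosine's actual data), a single chat-format fine-tuning example that nudges the model toward answering with a unified-diff patch might look like the sketch below; the file path, bug, and patch contents are invented for the example.

```python
import json

# One hypothetical training example in OpenAI's chat-format fine-tuning schema:
# a system instruction, a user request, and the desired assistant output (a patch).
example = {
    "messages": [
        {"role": "system", "content": "You are a software engineering assistant. Respond with a unified diff patch."},
        {"role": "user", "content": "Fix the off-by-one error in utils/paginate.py when computing the last page."},
        {"role": "assistant", "content": (
            "--- a/utils/paginate.py\n"
            "+++ b/utils/paginate.py\n"
            "@@ -10,7 +10,7 @@ def last_page(total, per_page):\n"
            "-    return total // per_page\n"
            "+    return (total + per_page - 1) // per_page\n"
        )},
    ]
}

# Fine-tuning datasets are newline-delimited JSON: one example object per line.
with open("patch_format_examples.jsonl", "a") as f:
    f.write(json.dumps(example) + "\n")
```

A real dataset would be built from an organization's own issue-and-patch pairs, but the structure, one messages array per line, is the same.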
Similarly, Distyl, an AI solutions partner for Fortune 500 companies, leveraged a fine-tuned GPT-4o model to secure the top spot on the BIRD-SQL benchmark, a leading text-to-SQL benchmark.
Data Privacy and Safety in Fine-Tuned Models
OpenAI assures developers that fine-tuned GPT-4o models remain fully under their control, with complete ownership of business data, including all inputs and outputs. This means that the data used in fine-tuning is not shared or repurposed for training other models, ensuring confidentiality and compliance with data privacy standards.
To further protect against misuse, OpenAI has implemented layered safety measures for fine-tuned models. These include continuous automated safety evaluations and monitoring to ensure that applications adhere to established usage policies. This commitment to safety and privacy aims to provide developers with peace of mind as they explore the expanded capabilities of GPT-4o through fine-tuning.
Looking Ahead
The introduction of fine-tuning for GPT-4o marks a significant step forward in model customization. OpenAI is eager to see how developers will leverage this new feature to create more powerful and efficient AI applications.