OpenAI has introduced fine-tuning for GPT-4o, enabling developers to enhance performance and accuracy for their applications. This announcement follows the release of fine-tuning for GPT-4o mini late last month, as anticipated. The highly requested feature is now available to all developers on paid usage tiers. Until September 23, organizations can access 1 million training tokens per day for free.
Key Features
- Custom Datasets: Developers can fine-tune GPT-4o with custom datasets to improve performance and reduce costs for specific use cases.
- Customization: Fine-tuning allows for customization of response structure, tone, and adherence to complex domain-specific instructions.
- Ease of Use: Strong results can be achieved with just a few dozen examples in the training dataset; a sketch of the expected data format follows this list.
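To make the dataset point concrete, here is a minimal sketch of what a chat-format training file might look like. The file name `training_data.jsonl`, the company name, and the message contents are placeholders for illustration, not part of the announcement.

```python
import json

# Hypothetical chat-format training examples; each line of the JSONL file is
# one JSON object containing a short system/user/assistant conversation.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a support assistant for Acme Corp."},
            {"role": "user", "content": "How do I reset my password?"},
            {"role": "assistant", "content": "Open Settings > Security and choose 'Reset password'."},
        ]
    },
    # ...a few dozen examples of this shape are often enough for strong results
]

# Write one JSON object per line (JSONL), the format expected for fine-tuning.
with open("training_data.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```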
Getting Started
- Access: Available to all developers on paid usage tiers.
- Dashboard: Visit the fine-tuning dashboard, create a new project, and select `gpt-4o-2024-08-06` from the base model drop-down; a programmatic alternative is sketched after this list.
- Costs: Training costs $25 per million tokens, with inference costs of $3.75 per million input tokens and $15 per million output tokens.
- Mini Version: GPT-4o mini fine-tuning is also available, offering 2M training tokens per day for free through September 23.
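For developers who prefer working programmatically rather than through the dashboard, the sketch below uses the OpenAI Python SDK to upload a training file, start a fine-tuning job on `gpt-4o-2024-08-06`, and call the resulting model. It assumes the hypothetical `training_data.jsonl` file from the earlier example; in practice the job takes time to finish, so the status check would normally be polled or run later.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload the JSONL training file prepared earlier (hypothetical file name).
training_file = client.files.create(
    file=open("training_data.jsonl", "rb"),
    purpose="fine-tune",
)

# Start a fine-tuning job against the GPT-4o snapshot named in the announcement.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-2024-08-06",
)
print("Job started:", job.id)

# Later: retrieve the job; once its status is "succeeded", fine_tuned_model
# holds the identifier of the custom model.
job = client.fine_tuning.jobs.retrieve(job.id)
if job.status == "succeeded":
    completion = client.chat.completions.create(
        model=job.fine_tuned_model,
        messages=[{"role": "user", "content": "How do I reset my password?"}],
    )
    print(completion.choices[0].message.content)
```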
Data Privacy and Safety
- Control: Fine-tuned models remain under the developer's control, with full ownership of business data, including all inputs and outputs.
- Safety: Layered safety mitigations are in place, including automated safety evaluations and usage monitoring to ensure compliance with usage policies.
OpenAI encourages developers to explore its broader model customization options and offers support to those interested.