
Ada (fine tuning) GPT-3

When you fine-tune a GPT model such as Ada, you are fine-tuning the GPT-3 base model, not the instruction-oriented variant of GPT-3. Fine-tuning takes the pre-trained base model and trains it further on your specific dataset or task to improve its performance. This lets OpenAI API customers leverage the power of pre-trained GPT-3 language models such as Ada while tailoring them to their specific needs: the fine-tuning process specializes the model in a particular task or context, making it more efficient and effective for that use case, which can reduce cost and latency for high-volume tasks. You can also continue fine-tuning an already fine-tuned model to add new data without starting from scratch.
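As a sketch of what the training data for such a fine-tune looks like: the legacy GPT-3 fine-tuning endpoint consumed a JSONL file of prompt/completion pairs, one JSON object per line. The examples and filename below are hypothetical, for illustration only.

```python
import json

# Hypothetical prompt/completion pairs in the shape the legacy
# GPT-3 fine-tuning endpoint expected (one JSON object per line).
examples = [
    {"prompt": "Classify sentiment: I love this! ->", "completion": " positive"},
    {"prompt": "Classify sentiment: Terrible service. ->", "completion": " negative"},
]

# Write the dataset as JSONL, ready to upload for fine-tuning.
with open("training_data.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

Each completion conventionally starts with a leading space and the prompt ends with a fixed separator (here "->"), so the model learns a clear boundary between input and output.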
As the smallest GPT-3 model, Ada is the least computationally intensive and the fastest, making it ideal for tasks that don't demand complex language understanding or generation. Note: there are two fine-tuning costs to be aware of: a one-time training cost and a pay-as-you-go usage cost.

Pricing

Model: Ada (fine tuning) GPT-3
Training: $0.0004 / 1k tokens
Usage: $0.0016 / 1k tokens

All prices are in USD per 1,000 tokens.
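To make the two cost components concrete, here is a rough back-of-the-envelope estimator using the rates from the table above. The token counts and the default of 4 training epochs are illustrative assumptions, not measured values; training was billed per token seen, so each epoch's pass over the data counts.

```python
# Rates for Ada fine-tuning, from the pricing table above (USD per 1k tokens).
TRAINING_RATE = 0.0004
USAGE_RATE = 0.0016

def training_cost(training_tokens: int, epochs: int = 4) -> float:
    """One-time training cost: every epoch re-processes the full dataset."""
    return training_tokens * epochs / 1000 * TRAINING_RATE

def usage_cost(tokens: int) -> float:
    """Pay-as-you-go cost for prompt + completion tokens at inference time."""
    return tokens / 1000 * USAGE_RATE

# Example: a 100k-token training file for 4 epochs, then 1M tokens of usage.
print(f"training: ${training_cost(100_000):.2f}")   # $0.16
print(f"usage:    ${usage_cost(1_000_000):.2f}")    # $1.60
```

Even at this scale the one-time training cost is small; for high-volume workloads the per-request usage cost dominates, which is why a cheap, fast model like Ada is attractive when it can handle the task.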