Compare Models

  • Google

    Cloud Platform

    OTHER
    Google Cloud Platform (GCP) is a cloud computing service that includes innovative AI and machine learning products, solutions, and services. Google AI Studio is a low-code development environment that makes it easy to build and deploy applications. It includes a variety of features, such as pre-trained models that can be used to get started quickly, a unified experience for managing the entire ML lifecycle, from data preparation to model deployment, and tools for monitoring the performance of ML models in production. Vertex AI can be used to train and deploy models, and GCP also offers a variety of data storage services, including Cloud Storage, which can be used to store large datasets (a brief setup sketch follows this entry).
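
    As a quick orientation, the following is a minimal sketch (in Python) of how a project might initialize the Vertex AI SDK and stage a dataset in Cloud Storage before making any model calls. The project ID, region, bucket, and file names are hypothetical placeholders, not values from this page.

        # Minimal, illustrative setup sketch; the project, region, bucket, and
        # file names below are hypothetical placeholders.
        import vertexai
        from google.cloud import storage

        PROJECT_ID = "my-gcp-project"   # placeholder
        REGION = "us-central1"          # placeholder

        # Initialize the Vertex AI SDK for subsequent model calls.
        vertexai.init(project=PROJECT_ID, location=REGION)

        # Stage a large training dataset in a Cloud Storage bucket.
        client = storage.Client(project=PROJECT_ID)
        bucket = client.bucket("my-training-data")        # placeholder bucket
        blob = bucket.blob("datasets/train.jsonl")
        blob.upload_from_filename("train.jsonl")
        print(f"Uploaded to gs://{bucket.name}/{blob.name}")
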
  • Google

    code chat (codechat-bison)

    $0.002

    The Codey APIs are based on Google’s PaLM 2 large language model and were specifically trained to handle coding-related prompts, as well as queries related to Google Cloud.

    The code chat API can power a chatbot that assists with code-related questions. For example, you can use it for help with debugging code. The code chat API supports the codechat-bison model.

    The Codey APIs support a wide range of programming languages, including C++, C#, Go, GoogleSQL, Java, JavaScript, Kotlin, PHP, Python, Ruby, Rust, Scala, Swift, and TypeScript. You can use them through the API or in Generative AI Studio (a minimal API call sketch follows this entry).

    Some common use cases for code chat include debugging, where it assists with issues related to code that doesn’t compile or contains a bug; documentation, where it aids in understanding unfamiliar code to ensure accurate representation; and learning, as it provides help in comprehending code that you might not be very familiar with.

    Note: We have converted characters to tokens for the prices (based on the approximation of 4 characters per 1 token).
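
    The following is a minimal sketch of calling codechat-bison through the Vertex AI Python SDK, assuming vertexai.init() has already been run with your project and region; the question is illustrative, and older SDK versions expose the class under vertexai.preview.language_models.

        from vertexai.language_models import CodeChatModel

        # Load the code chat model named in this entry.
        chat_model = CodeChatModel.from_pretrained("codechat-bison")

        # Start a chat session and ask a debugging-style question.
        chat = chat_model.start_chat()
        response = chat.send_message(
            "Why does my_list.sort().reverse() raise AttributeError in Python?"
        )
        print(response.text)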

  • Google

    code completion (code-gecko)

    $0.002

    The Codey APIs are based on Google’s PaLM 2 large language model and were specifically trained to handle coding-related prompts, as well as queries related to Google Cloud. The code completion API provides code autocompletion suggestions as you write code, using the context of the code you’re writing to make its suggestions.

    The code completion API supports the code-gecko model, which helps improve the speed and accuracy of writing code. The Codey APIs support a wide range of programming languages, including C++, C#, Go, GoogleSQL, Java, JavaScript, Kotlin, PHP, Python, Ruby, Rust, Scala, Swift, and TypeScript. You can use them through the API or in Generative AI Studio (a minimal API call sketch follows this entry). Some common use cases for code completion include writing code faster, by leveraging suggested code to speed up the coding process, and minimizing bugs, by using suggestions that are syntactically correct and so reduce the risk of inadvertently introducing errors while writing code.

    Note: We have converted characters to tokens for the prices (based on the approximation of 4 characters per 1 token).
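
    The following is a minimal sketch of requesting a completion from code-gecko with the Vertex AI Python SDK, assuming vertexai.init() has already been run; the partial function is illustrative.

        from vertexai.language_models import CodeGenerationModel

        # code-gecko is served through the CodeGenerationModel class
        # (older SDK versions expose it under vertexai.preview.language_models).
        completion_model = CodeGenerationModel.from_pretrained("code-gecko")

        # Ask for a completion of a partially written function.
        response = completion_model.predict(
            prefix="def reverse_words(sentence: str) -> str:\n    ",
            max_output_tokens=64,
        )
        print(response.text)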

  • Google

    code generation (code-bison)

    $0.002

    The Codey APIs are based on Google’s PaLM 2 large language model and were specifically trained to handle coding-related prompts, as well as queries related to Google Cloud.

    The code generation API (code-bison) generates code based on a natural language description of the desired code. For example, it can generate a unit test for a function. The code generation API supports the code-bison model. The Codey APIs support a wide range of programming languages, including C++, C#, Go, GoogleSQL, Java, JavaScript, Kotlin, PHP, Python, Ruby, Rust, Scala, Swift, and TypeScript. You can use them through the API or in Generative AI Studio (a minimal API call sketch follows this entry).

    Some common use cases for code generation include creating unit tests, where you can design a prompt to request a unit test for a specific function; writing a function, which involves passing a problem to the model and receiving a function that solves the problem; and creating a class, where you can use a prompt to describe the purpose of a class and have the code defining that class returned to you.

    Note: We have converted characters to tokens for the prices (based on the approximation of 4 characters per 1 token).
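
    The following is a minimal sketch of generating code with code-bison through the Vertex AI Python SDK, assuming vertexai.init() has already been run; the unit-test prompt is illustrative.

        from vertexai.language_models import CodeGenerationModel

        # Load the code generation model named in this entry
        # (older SDK versions expose it under vertexai.preview.language_models).
        generation_model = CodeGenerationModel.from_pretrained("code-bison")

        # Describe the desired code in natural language.
        response = generation_model.predict(
            prefix=(
                "Write a pytest unit test for a function is_palindrome(s) that "
                "returns True when s reads the same forwards and backwards."
            ),
            max_output_tokens=512,
            temperature=0.2,
        )
        print(response.text)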

  • OpenAI

    Curie (fine tuning) GPT-3

    $0.012
    When fine-tuning a GPT model like Curie, you are fine-tuning the GPT-3 base model (not the instruction-oriented variant of GPT-3). Fine-tuning involves taking the pre-trained base model and further training it on your specific dataset or task to enhance its performance. Fine-tuning allows OpenAI API customers to leverage the power of pre-trained GPT-3 language models, such as Curie, while tailoring them to their specific needs (the fine-tuning process allows a model to specialize in a specific task or context, making it more efficient and effective for a particular use case, which can help to reduce costs and latency for high-volume tasks). You are also able to continue fine-tuning a fine-tuned model to add additional data without having to start from scratch.
    Curie is a larger variant of GPT-3, offering more sophisticated language capabilities. It is a good choice for tasks requiring a deeper understanding of context or more complex language generation. Note: There are two fine-tuning costs to be aware of: a one-time training cost and a pay-as-you-go usage cost (a minimal sketch of starting a training job follows this entry).
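
    The following is a minimal sketch of starting a Curie fine-tune with the legacy (pre-1.0) OpenAI Python SDK that was current at these prices; the API key and training file name are placeholders, and the file must contain prompt/completion pairs in JSONL format.

        import openai

        openai.api_key = "sk-..."  # placeholder API key

        # Upload a JSONL file of {"prompt": ..., "completion": ...} records.
        training_file = openai.File.create(
            file=open("curie_training_data.jsonl", "rb"),  # placeholder file
            purpose="fine-tune",
        )

        # Start the fine-tuning job against the Curie base model
        # (this incurs the one-time training cost; usage is billed separately).
        job = openai.FineTune.create(
            training_file=training_file.id,
            model="curie",
        )
        print(job.id, job.status)
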
  • OpenAI

    Curie Instruct model

    $0.002

    OpenAI’s Instruct model Curie is very capable, and it is faster and less expensive than Davinci. Curie can understand and generate natural language. InstructGPT models are sibling models to ChatGPT. They are built on GPT-3 models but made to be safer, more helpful, and more aligned with users’ needs using a technique called reinforcement learning from human feedback (RLHF). Instruct models are meant to generate text from a clear instruction, and they are not optimized for conversational chat; they are optimized to follow single-turn instructions (i.e., specifically designed to follow instructions provided in a prompt). Developers can use Instruct models for extracting knowledge, generating text, performing NLP tasks, automating tasks involving natural language, and translating languages. Instruct models also make up facts less often than GPT-3 base models and show slight decreases in toxic output generation. Access is available through a request to OpenAI’s API (a minimal call sketch follows this entry).
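
    The following is a minimal sketch of a single-turn instruction with the legacy (pre-1.0) OpenAI Python SDK; text-curie-001 was the Curie Instruct model ID at the time of these prices, and the API key and prompt are placeholders.

        import openai

        openai.api_key = "sk-..."  # placeholder API key

        # Single-turn instruction against the Curie Instruct model.
        response = openai.Completion.create(
            model="text-curie-001",
            prompt=(
                "Translate the following sentence to French:\n\n"
                "The meeting has been moved to Thursday afternoon."
            ),
            max_tokens=60,
            temperature=0.2,
        )
        print(response["choices"][0]["text"].strip())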

  • OpenAI

    DALL·E 2

    $0.016
    DALL·E 2 is a browser-based AI system that can create realistic images and art from a description in natural language. Given a prompt, it can create a new image at a chosen size, edit an existing image, or create variations of a user-provided image. DALL·E 2 currently charges per image based on pixel resolution.
    For developers, there is also an API available in beta that allows you to integrate state-of-the-art image generation capabilities directly into your product (a minimal call sketch follows this entry). API usage is offered on a pay-as-you-go basis and is billed separately. Note that OpenAI offers large-volume discounts (>$5k/month) through their sales team.
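
    The following is a minimal sketch of generating an image through the legacy (pre-1.0) OpenAI Python SDK’s image endpoint; the API key and prompt are placeholders, and pricing varies with the requested resolution.

        import openai

        openai.api_key = "sk-..."  # placeholder API key

        # Generate one new image at a fixed resolution.
        result = openai.Image.create(
            prompt="A watercolor painting of a lighthouse at sunrise",
            n=1,
            size="512x512",
        )
        print(result["data"][0]["url"])  # temporary URL to the generated image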

  • OpenAI

    Davinci (fine tuning) GPT-3

    $0.12
    When fine-tuning a GPT model like Davinci, you are fine-tuning the GPT-3 base model (not the instruction-oriented variant of GPT-3). Fine-tuning involves taking the pre-trained base model and further training it on your specific dataset or task to enhance its performance. Fine-tuning allows OpenAI API customers to leverage the power of pre-trained GPT-3 language models, such as Davinci, while tailoring them to their specific needs (the fine-tuning process allows a model to specialize in a specific task or context, making it more efficient and effective for a particular use case, which can help to reduce costs and latency for high-volume tasks). You are also able to continue fine-tuning a fine-tuned model to add additional data without having to start from scratch.
    Davinci is the largest and most powerful variant of GPT-3. It’s the best choice for tasks requiring the most sophisticated language capabilities, but it also requires more processing power and time to generate results. Note: There are two fine-tuning costs to be aware of: a one-time training cost and a pay-as-you-go usage cost (a minimal sketch of the usage side follows this entry).
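
    Training a Davinci fine-tune follows the same pattern as the Curie sketch above, so the following is a minimal sketch of the other cost, pay-as-you-go usage, calling a completed fine-tune with the legacy (pre-1.0) OpenAI Python SDK. The fine-tuned model name, API key, and prompt are placeholders.

        import openai

        openai.api_key = "sk-..."  # placeholder API key

        # The model name below is a placeholder in the format returned
        # when a fine-tuning job against the davinci base model finished.
        response = openai.Completion.create(
            model="davinci:ft-your-org-2023-06-01-12-00-00",
            prompt="Classify the sentiment of this review: 'Great battery life.' ->",
            max_tokens=5,
            temperature=0,
        )
        print(response["choices"][0]["text"].strip())
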
  • OpenAI

    Davinci Instruct model

    $0.02
    Davinci is the most capable Instruct model, and it can do any task the other models (Ada, Babbage, and Curie) can do, often with higher quality. InstructGPT models are sibling models to ChatGPT. They are built on GPT-3 models but made to be safer, more helpful, and more aligned with users’ needs using a technique called reinforcement learning from human feedback (RLHF). Instruct models are meant to generate text from a clear instruction, and they are not optimized for conversational chat; they are optimized to follow single-turn instructions (i.e., specifically designed to follow instructions provided in a prompt). Developers can use Instruct models for extracting knowledge, generating text, performing NLP tasks, automating tasks involving natural language, and translating languages. Instruct models make up facts less often than GPT-3 base models and show slight decreases in toxic output generation. Access is available through a request to OpenAI’s API (a minimal call sketch follows this entry).
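
    The following is a minimal sketch of a knowledge-extraction instruction with the legacy (pre-1.0) OpenAI Python SDK; text-davinci-003 was the Davinci Instruct model ID at the time of these prices, and the API key and prompt are placeholders.

        import openai

        openai.api_key = "sk-..."  # placeholder API key

        # Single-turn instruction against the Davinci Instruct model.
        response = openai.Completion.create(
            model="text-davinci-003",
            prompt=(
                "Extract the company names mentioned in the text below as a "
                "comma-separated list.\n\n"
                "Text: OpenAI partnered with Microsoft, while Google announced PaLM 2.\n\n"
                "Companies:"
            ),
            max_tokens=30,
            temperature=0,
        )
        print(response["choices"][0]["text"].strip())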

  • OpenAI

    GPT-3.5-turbo 16k

    $0.004
    GPT-3.5-turbo 16k has the same capabilities as the standard gpt-3.5-turbo (4k) model but with four times the context window, at twice the price. In general, a larger context window can be more powerful because it takes into account more information from the surrounding text, which can lead to better predictions.
    GPT-3.5-turbo was designed to provide better performance and is well known as the model that, by default, powers ChatGPT. However, paying customers who subscribe to ChatGPT Plus can change the model to GPT-4 before starting a chat.
    GPT-3.5-turbo is optimized for conversational formats and is superior to GPT-3 models, and its performance is on par with Instruct Davinci-003. GPT-3.5-turbo was trained on a massive corpus of text data, including books, articles, and web pages from across the internet, and is used for tasks like content and code generation, question answering, translation, and more. Access is available through a request to OpenAI’s API or through the web application (try for free); a minimal call sketch follows this entry.
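
    The following is a minimal sketch of using the 16k-context model through the legacy (pre-1.0) OpenAI Python SDK to summarize a long document; the API key is a placeholder and long_document stands in for text that would not fit the 4k model.

        import openai

        openai.api_key = "sk-..."  # placeholder API key

        long_document = "..."  # placeholder: a transcript or report too long for the 4k model

        # The 16k model accepts the same chat format, just a larger context window.
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo-16k",
            messages=[
                {"role": "system", "content": "You summarize long documents concisely."},
                {"role": "user", "content": f"Summarize the key points:\n\n{long_document}"},
            ],
            max_tokens=300,
        )
        print(response["choices"][0]["message"]["content"])
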
  • OpenAI

    GPT-3.5-turbo 4k

    $0.002
    GPT-3.5-turbo is an upgraded version of the GPT-3 model. It was designed to provide better performance and is well known as the model that, by default, powers ChatGPT (however, paying customers who subscribe to ChatGPT Plus can change the model to GPT-4 before starting a chat).
    GPT-3.5-turbo is optimized for conversational formats and is superior to GPT-3 models, and its performance is on par with Instruct Davinci-003 (while also being ten times cheaper and reportedly around three times faster). GPT-3.5-turbo was trained on a massive corpus of text data, including books, articles, and web pages from across the internet, and is used for tasks like content and code generation, question answering, translation, and more. In some cases, GPT-3.5-turbo results can be too “chatty” or “creative”. Access is available through a request to OpenAI’s API or through the web application (try for free); a minimal call sketch follows this entry.
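
    The following is a minimal sketch of a conversational request with the legacy (pre-1.0) OpenAI Python SDK; the API key and messages are placeholders.

        import openai

        openai.api_key = "sk-..."  # placeholder API key

        # Chat-style request: a system message sets behavior, user messages carry turns.
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system", "content": "You are a concise technical assistant."},
                {"role": "user", "content": "Explain what a context window is in one sentence."},
            ],
            temperature=0.7,
        )
        print(response["choices"][0]["message"]["content"])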

  • OpenAI

    GPT-4 32K context

    $0.12

    GPT-4 is OpenAI’s new design that incorporates additional improvements and advancements, including being multimodal so it can take both text and image inputs. With broad general knowledge and domain expertise, GPT-4 can follow complex instructions in natural language and solve difficult problems with accuracy. GPT-4 has a more diverse range of training data, incorporating additional languages and sources beyond just English. This means that the model is able to process and generate text in multiple languages and better understand the nuances and subtleties of different languages and dialects. This is the extended 32k-token context-length model, which is separate from the 8k model (and is more expensive); a minimal call sketch follows this entry.

    GPT-4 API access is now available.

    Note: At the time of writing, ChatGPT Plus subscribers can access GPT-4 by logging into the web application.
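
    The following is a minimal sketch of calling the 32k-context model through the legacy (pre-1.0) OpenAI Python SDK, assuming your account has GPT-4 API access; the API key, prompt, and contract text are placeholders.

        import openai

        openai.api_key = "sk-..."  # placeholder API key

        contract_text = "..."  # placeholder: a long document that needs the 32k context

        # gpt-4-32k accepts the same chat format as the 8k model,
        # but allows roughly four times as many tokens per request.
        response = openai.ChatCompletion.create(
            model="gpt-4-32k",
            messages=[
                {"role": "system", "content": "You answer questions about long contracts."},
                {"role": "user", "content": f"List the termination clauses in this contract:\n\n{contract_text}"},
            ],
            max_tokens=500,
        )
        print(response["choices"][0]["message"]["content"])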
