
Compare Models

  • Anthropic

    Claude Instant

    $0.00551
    Claude Instant is a faster and less expensive model than Claude-v1 that can handle casual dialog, text analysis and summarization, and document Q&A. Optimized for low latency, it handles high-throughput use cases at lower cost than other models in the Claude family. Anthropic is an AI startup founded by former OpenAI employees. Anthropic specializes in developing general AI systems and language models, with a company ethos of responsible AI usage.
    API access can be gained after application.
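
    Once access is granted, a minimal sketch of calling Claude Instant through Anthropic’s Python SDK might look like the following (the SDK surface, the claude-instant-1 model id, and the prompt are assumptions; check Anthropic’s documentation for current names):

        # Minimal sketch: summarization with Claude Instant via Anthropic's Python SDK.
        # Assumes ANTHROPIC_API_KEY is set in the environment and the legacy
        # text-completions interface; adjust to the current SDK as needed.
        import anthropic

        client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

        response = client.completions.create(
            model="claude-instant-1",   # assumed model id
            max_tokens_to_sample=300,
            prompt=f"{anthropic.HUMAN_PROMPT} Summarize this support ticket in two sentences: ...{anthropic.AI_PROMPT}",
        )
        print(response.completion)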

  • Anthropic

    Claude v1

    $0.03268
    A powerful model, Claude-v1 can handle sophisticated dialog, creative content generation, and detailed instructions. Optimized for superior performance on tasks that require complex reasoning, Claude is Anthropic’s best-in-class offering.
    API access can be gained after application.
  • Google

    Cloud Platform

    OTHER
    Google Cloud Platform (GCP) is a cloud computing service that includes innovative AI and machine learning products, solutions, and services. Google AI Studio is a low-code development environment that makes it easy to build and deploy applications. It offers features such as pre-trained models for getting started quickly, a unified experience for managing the entire ML lifecycle from data preparation to model deployment, and tools for monitoring the performance of ML models in production. Vertex AI can be used to train and deploy models, and GCP also offers a variety of data storage services, including Cloud Storage, which can be used to store large datasets.
  • Google

    code chat (codechat-bison)

    $0.002

    The Codey APIs are based on Google’s PaLM 2 large language model. Google trained them specifically to handle coding-related prompts, and it also trained the model to handle queries related to Google Cloud.

    The code chat API can power a chatbot that assists with code-related questions. For example, you can use it for help debugging code. The code chat API supports the codechat-bison model.

    The Codey APIs support a wide range of programming languages, including C++, C#, Go, GoogleSQL, Java, JavaScript, Kotlin, PHP, Python, Ruby, Rust, Scala, Swift, and TypeScript. You can use them via the API and in Generative AI Studio.

    Some common use cases for code chat include debugging, where it assists with issues related to code that doesn’t compile or contains a bug; documentation, where it aids in understanding unfamiliar code to ensure accurate representation; and learning, as it provides help in comprehending code that you might not be very familiar with.

    Note: We have converted characters to tokens for the prices (based on the approximation of 4 characters per 1 token).
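
    For illustration, a minimal sketch of a code chat call through the Vertex AI Python SDK might look like this (the project id, region, and debugging prompt are placeholders, and the SDK surface is an assumption; consult Google Cloud documentation for current usage):

        # Minimal sketch: asking codechat-bison a debugging question via the
        # Vertex AI SDK (google-cloud-aiplatform). Project and region are placeholders.
        import vertexai
        from vertexai.language_models import CodeChatModel

        vertexai.init(project="your-gcp-project", location="us-central1")

        chat_model = CodeChatModel.from_pretrained("codechat-bison")
        chat = chat_model.start_chat()

        response = chat.send_message(
            "Why does this Python snippet raise IndexError?\n\nxs = [1, 2, 3]\nprint(xs[3])"
        )
        print(response.text)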

  • Google

    code completion (code-gecko)

    $0.002

    The Codey APIs are based on Google’s PaLM 2 large language model. Google trained them specifically to handle coding-related prompts, and it also trained the model to handle queries related to Google Cloud. The code completion API provides code autocompletion suggestions as you write code. The API uses the context of the code you’re writing to make its suggestions.

    The code completion API supports the code-gecko model. Use the code-gecko model to help improve the speed and accuracy of writing code. The Codey APIs support a wide range of programming languages, including C++, C#, Go, GoogleSQL, Java, JavaScript, Kotlin, PHP, Python, Ruby, Rust, Scala, Swift, and TypeScript. You can use them via the API and in Generative AI Studio.

    Some common use cases for code completion include writing code faster, where the code-gecko model helps speed up coding by leveraging suggested code, and minimizing bugs, by relying on suggestions that are known to be syntactically correct, which reduces the risk of inadvertently introducing errors during code creation.

    Note: We have converted characters to tokens for the prices (based on the approximation of 4 characters per 1 token).
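
    A minimal sketch of requesting a completion from code-gecko through the Vertex AI Python SDK might look like this (the project id, region, and code prefix are placeholders, and the SDK surface is an assumption):

        # Minimal sketch: completing a partially written function with code-gecko.
        import vertexai
        from vertexai.language_models import CodeGenerationModel

        vertexai.init(project="your-gcp-project", location="us-central1")

        model = CodeGenerationModel.from_pretrained("code-gecko")

        # The model continues from the context of the code you are writing.
        prefix = "def reverse_words(sentence: str) -> str:\n    "
        response = model.predict(prefix=prefix)
        print(response.text)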

  • Google

    code generation (code-bison)

    $0.002

    The Codey APIs are based on Google’s PaLM 2 large language model. Google trained them specifically to handle coding-related prompts, and it also trained the model to handle queries related to Google Cloud.

    The code generation API (code-bison) generates code based on a natural language description of the desired code. For example, it can generate a unit test for a function. The code generation API supports the code-bison model. The Codey APIs support a wide range of programming languages, including C++, C#, Go, GoogleSQL, Java, JavaScript, Kotlin, PHP, Python, Ruby, Rust, Scala, Swift, and TypeScript. You can use them via the API and in Generative AI Studio.

    Some common use cases for code generation include creating unit tests, where you can design a prompt to request a unit test for a specific function; writing a function, which involves passing a problem to the model and receiving a function that solves the problem; and creating a class, where you can use a prompt to describe the purpose of a class and have the code defining that class returned to you.

    Note: We have converted characters to tokens for the prices (based on the approximation of 4 characters per 1 token).
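
    A minimal sketch of generating a unit test with code-bison through the Vertex AI Python SDK might look like this (the project id, region, and prompt are placeholders, and the SDK surface is an assumption):

        # Minimal sketch: generating a unit test from a natural language description.
        import vertexai
        from vertexai.language_models import CodeGenerationModel

        vertexai.init(project="your-gcp-project", location="us-central1")

        model = CodeGenerationModel.from_pretrained("code-bison")

        response = model.predict(
            prefix="Write a pytest unit test for a Python function is_leap_year(year) "
                   "that returns True for leap years and False otherwise."
        )
        print(response.text)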

  • OpenAI

    Curie (fine tuning) GPT-3

    $0.012
    When fine-tuning a GPT model like Curie, you are fine-tuning the GPT-3 base model (not the instruction-oriented variant of GPT-3). Fine-tuning involves taking the pre-trained base model and further training it on your specific dataset or task to enhance its performance. Fine-tuning allows OpenAI API customers to leverage the power of pre-trained GPT-3 language models, such as Curie, while tailoring them to their specific needs (the fine-tuning process allows a model to specialize in a specific task or context, making it more efficient and effective for a particular use case, which can help to reduce costs and latency for high-volume tasks). You are also able to continue fine-tuning a fine-tuned model to add additional data without having to start from scratch.
    Curie is a larger variant of GPT-3, offering more sophisticated language capabilities. It is a good choice for tasks requiring a deeper understanding of context or more complex language generation. Note: There are two fine-tuning costs to be aware of: a one-time training cost and a pay-as-you-go usage cost.
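
    As an illustration of the two-cost flow, a minimal sketch of fine-tuning Curie with the legacy openai Python package might look like this (the file name, key handling, and prompt are placeholders, and the pre-1.0 endpoints are an assumption; newer SDK versions use different calls):

        # Minimal sketch: legacy fine-tuning flow for the Curie base model.
        import openai

        openai.api_key = "sk-..."  # placeholder API key

        # 1. Upload a JSONL file of {"prompt": ..., "completion": ...} pairs.
        training_file = openai.File.create(
            file=open("training_data.jsonl", "rb"),
            purpose="fine-tune",
        )

        # 2. Start the fine-tuning job (the one-time training cost).
        job = openai.FineTune.create(training_file=training_file.id, model="curie")
        # (Poll openai.FineTune.retrieve(job.id) until its status is "succeeded".)

        # 3. After the job succeeds, call the resulting model (pay-as-you-go usage cost).
        completion = openai.Completion.create(
            model=job.fine_tuned_model,  # populated once the job has finished
            prompt="...",
        )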
  • OpenAI

    Curie Instruct model

    $0.002

    OpenAI’s Instruct model Curie is very capable, and it is faster and less expensive than Davinci. Curie can understand and generate natural language. InstructGPT models are sibling models to ChatGPT. They are built on GPT-3 models but made to be safer, more helpful, and more aligned to users’ needs using a technique called reinforcement learning from human feedback (RLHF). Instruct models are meant to generate text from a clear instruction, and they are not optimized for conversational chat. Instruct models are optimized to follow single-turn instructions (i.e., they are specifically designed to follow instructions provided in a prompt). Developers can use Instruct models for extracting knowledge, generating text, performing NLP tasks, automating tasks involving natural language, and translating languages. Instruct models also make up facts less often than GPT-3 base models and show slight decreases in toxic output generation. Access is available through a request to OpenAI’s API.
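
    A minimal sketch of a single-turn instruction call to the Curie Instruct model via the legacy openai completions endpoint might look like this (the text-curie-001 model id, the prompt, and key handling are assumptions or placeholders):

        # Minimal sketch: knowledge extraction with the Curie Instruct model.
        import openai

        openai.api_key = "sk-..."  # placeholder API key

        response = openai.Completion.create(
            model="text-curie-001",  # assumed Instruct variant of Curie
            prompt="Extract the company names mentioned in the following text:\n\n...",
            max_tokens=100,
            temperature=0,
        )
        print(response.choices[0].text)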

  • OpenAI

    DALL·E 2

    $0.016
    DALL·E 2 is a browser-based AI system that can create realistic images and art from a description in natural language. It currently supports the ability, given a prompt, to create a new image at a certain size, edit an existing image, or create variations of a user-provided image. Currently, DALL·E 2 charges for an image by pixel resolution.
    For developers, there is also an API available for the beta version; it allows you to integrate state-of-the-art image generation capabilities directly into your product. API usage is offered on a pay-as-you-go basis and is billed separately. Note that OpenAI offers large-volume discounts (>$5k/month) through their sales team.
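
    A minimal sketch of the beta image API through the legacy openai Python package might look like this (the prompt, size, and key handling are placeholders):

        # Minimal sketch: generating a new image with the DALL·E image endpoint.
        import openai

        openai.api_key = "sk-..."  # placeholder API key

        result = openai.Image.create(
            prompt="a watercolor painting of a lighthouse at sunrise",
            n=1,
            size="1024x1024",  # pricing varies with pixel resolution
        )
        print(result["data"][0]["url"])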

  • OpenAI

    Davinci (fine tuning) GPT-3

    $0.12
    When fine-tuning a GPT model like Davinci, you are fine-tuning the GPT-3 base model (not the instruction-oriented variant of GPT-3). Fine-tuning involves taking the pre-trained base model and further training it on your specific dataset or task to enhance its performance. Fine-tuning allows OpenAI API customers to leverage the power of pre-trained GPT-3 language models, such as Davinci, while tailoring them to their specific needs (the fine-tuning process allows a model to specialize in a specific task or context, making it more efficient and effective for a particular use case, which can help to reduce costs and latency for high-volume tasks). You are also able to continue fine-tuning a fine-tuned model to add additional data without having to start from scratch.
    Davinci is the largest and most powerful variant of GPT-3. It’s the best choice for tasks requiring the most sophisticated language capabilities, but it also requires more processing power and time to generate results. Note: There are two fine-tuning costs to be aware of: a one-time training cost and a pay-as-you-go usage cost.
  • OpenAI

    Davinci Instruct model

    $0.02
    Davinci is the most capable Instruct model, and it can do any task the other models (Ada, Babbage, and Curie) can, often with higher quality. InstructGPT models are sibling models to ChatGPT. They are built on GPT-3 models but made to be safer, more helpful, and more aligned to users’ needs using a technique called reinforcement learning from human feedback (RLHF). Instruct models are meant to generate text from a clear instruction, and they are not optimized for conversational chat. Instruct models are optimized to follow single-turn instructions (i.e., they are specifically designed to follow instructions provided in a prompt). Developers can use Instruct models for extracting knowledge, generating text, performing NLP tasks, automating tasks involving natural language, and translating languages. Instruct models make up facts less often than GPT-3 base models and show slight decreases in toxic output generation. Access is available through a request to OpenAI’s API.

  • Cohere

    Generate

    $0.015
    Cohere is a Canadian startup that provides high-performance and secure LLMs for the enterprise. Their models work on public, private, or hybrid clouds.
    Cohere Generate can be used for tasks such as copywriting, named entity recognition, paraphrasing, and summarization. It can be particularly useful for automating time-consuming and repetitive copywriting tasks and re-wording text to suit a specific reader or context.
    Cohere Generate is available as an API that can be integrated into applications using the Python, Node, or Go software development kits (SDKs).
    We have shown the price of the Cohere Generate Default version; a Cohere Generate Custom model is also available at double the price ($0.030 per 1K tokens). However, custom models can lead to some of the best-performing NLP models for many tasks.
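
    A minimal sketch of a Cohere Generate call through the Python SDK might look like this (the API key, prompt, and default parameters are placeholders or assumptions):

        # Minimal sketch: a copywriting request with Cohere Generate.
        import cohere

        co = cohere.Client("YOUR_API_KEY")  # placeholder API key

        response = co.generate(
            prompt="Write a one-sentence product description for a reusable water bottle.",
            max_tokens=60,
        )
        print(response.generations[0].text)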
