Compare Models

  • OpenAI

    Davinci (fine tuning) GPT-3

    $0.12
    When fine-tuning a GPT model like Davinci, you are fine-tuning the GPT-3 base model (not the instruction-tuned variant of GPT-3). Fine-tuning takes the pre-trained base model and trains it further on your own dataset or task to improve its performance. This lets OpenAI API customers leverage the power of pre-trained GPT-3 language models, such as Davinci, while tailoring them to their specific needs: a fine-tuned model specializes in a particular task or context, which makes it more effective for that use case and can reduce costs and latency for high-volume workloads. You can also continue fine-tuning an already fine-tuned model to add new data without starting from scratch.
    Davinci is the largest and most powerful variant of GPT-3. It’s the best choice for tasks requiring the most sophisticated language capabilities, but it also requires more processing power and time to generate results. Note: there are two fine-tuning costs to be aware of: a one-time training cost and a pay-as-you-go usage cost.
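    As an illustration, a minimal sketch of starting a fine-tuning job against the Davinci base model with the legacy openai Python library (pre-1.0) is shown below; the API key, training file, and job settings are placeholders, not values from this page.

        # Minimal sketch (legacy openai-python < 1.0): fine-tune the GPT-3 "davinci" base model.
        # The API key, training file path, and settings are illustrative placeholders.
        import openai

        openai.api_key = "YOUR_API_KEY"  # placeholder

        # Upload a JSONL file of {"prompt": ..., "completion": ...} training examples.
        training_file = openai.File.create(file=open("train.jsonl", "rb"), purpose="fine-tune")

        # Start the fine-tuning job against the GPT-3 base model "davinci".
        job = openai.FineTune.create(training_file=training_file.id, model="davinci")
        print(job.id)  # poll this job until it finishes, then call the resulting fine-tuned model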
  • OpenAI

    Davinci Instruct model

    $0.02
    Davinci is the most capable Instruct model and can do any task the other Instruct models (Ada, Babbage, and Curie) can, often with higher quality. InstructGPT models are sibling models to ChatGPT. They are built on GPT-3 models but made to be safer, more helpful, and more aligned to users’ needs using a technique called reinforcement learning from human feedback (RLHF). Instruct models are not optimized for conversational chat; they are designed to follow a single-turn instruction stated clearly in a prompt. Developers can use Instruct models for extracting knowledge, generating text, performing NLP tasks, automating tasks involving natural language, and translating languages. Instruct models make up facts less often than GPT-3 base models and show slight decreases in toxic output generation. Access is available through a request to OpenAI’s API.
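    A minimal sketch of calling the Davinci Instruct model through the legacy Completions endpoint follows; the API key, prompt, and generation settings are placeholders.

        # Minimal sketch (legacy openai-python < 1.0): single-turn instruction following.
        import openai

        openai.api_key = "YOUR_API_KEY"  # placeholder

        response = openai.Completion.create(
            model="text-davinci-003",  # Davinci Instruct model
            prompt="Summarize the following paragraph in one sentence:\n\n<your text here>",
            max_tokens=100,            # illustrative settings
            temperature=0.2,
        )
        print(response.choices[0].text)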

  • Cohere

    Generate

    $0.015
    Cohere is a Canadian startup that provides high-performance and secure LLMs for the enterprise. Their models work on public, private, or hybrid clouds.
    Cohere Generate can be used for tasks such as copywriting, named entity recognition, paraphrasing, and summarization. It can be particularly useful for automating time-consuming and repetitive copywriting tasks and re-wording text to suit a specific reader or context.
    Cohere Generate is available as an API that can be integrated into various libraries using Python, Node, or Go software development kits (SDKs).
    The price shown is for the Cohere Generate Default version; a Cohere Generate Custom model is also available at double the price ($0.030 per 1K tokens). However, custom models can lead to some of the best-performing NLP models for many tasks.
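    A minimal sketch of calling Cohere Generate with the Python SDK is shown below; the API key, prompt, and generation settings are placeholders, and the default model is used.

        # Minimal sketch: Cohere Generate via the Python SDK (cohere package).
        import cohere

        co = cohere.Client("YOUR_API_KEY")  # placeholder key

        response = co.generate(
            prompt="Write a short product description for a reusable water bottle.",
            max_tokens=80,      # illustrative settings
            temperature=0.7,
        )
        print(response.generations[0].text)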
  • OpenAI

    GPT-3.5-turbo 16k

    $0.004
    GPT-3.5-turbo 16k has the same capabilities as the standard gpt-3.5-turbo (4k model) but with four times the context window, at twice the price. In general, a larger context window can be more powerful because it takes into account more information from the surrounding text, which can lead to better predictions.
    GPT-3.5-turbo was designed to provide better performance and is well-known as the model that, by default, powers ChatGPT. However, paying customers who subscribe to ChatGPT Plus can change the model to GPT-4 before starting a chat.
    GPT-3.5-turbo is optimized for conversational formats and is superior to GPT-3 models, and the performance of GPT-3.5-turbo is on par with Instruct Davinci-003. GPT-3.5-turbo was trained on a massive corpus of text data, including books, articles, and web pages from across the internet and is used for tasks like content and code generation, question answering, translation, and more. Access is available through a request to OpenAI’s API or through the web application (try for free).
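    A minimal sketch of calling the 16k model through the Chat Completions endpoint with the legacy openai Python library (pre-1.0) follows; the API key and messages are placeholders, and the standard 4k model in the next entry uses the same call with model="gpt-3.5-turbo".

        # Minimal sketch (legacy openai-python < 1.0): chat completion with the 16k context model.
        import openai

        openai.api_key = "YOUR_API_KEY"  # placeholder

        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo-16k",  # use "gpt-3.5-turbo" for the standard 4k model
            messages=[
                {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": "Summarize this long document: <paste a long document here>"},
            ],
        )
        print(response.choices[0].message.content)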
  • OpenAI

    GPT-3.5-turbo 4k

    $0.002
    GPT-3.5-turbo is an upgraded version of the GPT-3 model. It was designed to provide better performance and is well-known as the model that, by default, powers ChatGPT (however, paying customers who subscribe to ChatGPT Plus can change the model to GPT-4 before starting a chat).
    GPT-3.5-turbo is optimized for conversational formats and is superior to GPT-3 models, and its performance is on par with Instruct Davinci-003 (however, it is also ten times cheaper and has been reported to be around three times faster). GPT-3.5-turbo was trained on a massive corpus of text data, including books, articles, and web pages from across the internet and is used for tasks like content and code generation, question answering, translation, and more. GPT-3.5-turbo results can sometimes be too “chatty” or “creative”. Access is available through a request to OpenAI’s API or through the web application (try for free).

  • OpenAI

    GPT-4 32K context

    $0.12

    GPT-4 is OpenAI’s new design that incorporates additional improvements and advancements, including being multimodal so it can take both text and image inputs. With broad general knowledge and domain expertise, GPT-4 can follow complex instructions in natural language and solve difficult problems with accuracy. GPT-4 has a more diverse range of training data, incorporating additional languages and sources beyond just English. This means that the model can process and generate text in multiple languages and better understand the nuances and subtleties of different languages and dialects. This is the extended 32k token context-length model, which is separate from the 8k model (and is more expensive).

    GPT-4 API access is now available.

    Note: At the time of writing, ChatGPT Plus subscribers can access GPT-4 by logging into the web application.

  • OpenAI

    GPT-4 8K context

    $0.06

    GPT-4 is OpenAI’s new design that incorporates additional improvements and advancements, including being multimodal so it can take both text and image inputs. With broad general knowledge and domain expertise, GPT-4 can follow complex instructions in natural language and solve difficult problems with accuracy. GPT-4 has a more diverse range of training data, incorporating additional languages and sources beyond just English. This means that the model will be able to process and generate text in multiple languages and better understand the nuances and subtleties of different languages and dialects. There are a few different GPT-4 models to choose from. The standard GPT-4 model offers 8k tokens for the context. GPT-4 API access is now available.

    Note: For the ChatGPT web application, ChatGPT is powered by GPT-3.5 turbo by default. However, if you are a paying customer and subscribe to ChatGPT Plus, you can change the model to GPT-4 before you start a chat.
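    A minimal sketch of calling GPT-4 through the Chat Completions endpoint with the legacy openai Python library (pre-1.0) is shown below; the API key and prompt are placeholders, and the 32k-context entry above uses the same call with the extended-context model name (subject to account access).

        # Minimal sketch (legacy openai-python < 1.0): chat completion with GPT-4.
        import openai

        openai.api_key = "YOUR_API_KEY"  # placeholder

        response = openai.ChatCompletion.create(
            model="gpt-4",  # use "gpt-4-32k" for the extended 32k context variant
            messages=[
                {"role": "user", "content": "Explain the trade-offs between an 8k and a 32k context window."},
            ],
        )
        print(response.choices[0].message.content)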

  • NVIDIA

    LaunchPad

    FREE
    NVIDIA LaunchPad provides free access to enterprise NVIDIA hardware and software through an internet browser. NVIDIA customers can experience the power of AI with end-to-end solutions through guided hands-on labs or use NVIDIA-Certified Systems as a sandbox, but you need to fill out an Application Form and wait for approval. Sample labs include training and deploying a support chatbot, deploying an end-to-end AI workload, configuring and deploying a language model on the hardware accelerator, and deploying a fraud detection model.

    *FREE via Application Form
  • Microsoft, NVIDIA

    MT-NLG

    OTHER
    MT-NLG (Megatron-Turing Natural Language Generation) uses the architecture of the transformer-based Megatron to generate coherent and contextually relevant text for a range of tasks, including completion prediction, reading comprehension, commonsense reasoning, natural language inferences, and word sense disambiguation. MT-NLG is the successor to Microsoft Turing NLG 17B and NVIDIA Megatron-LM 8.3B. The MT-NLG model is three times larger than GPT-3 (530B vs 175B). Following the original Megatron work, NVIDIA and Microsoft trained the model on over 4,000 GPUs. NVIDIA has announced an Early Access program for its managed API service to the MT-NLG model for organizations and researchers.
  • NVIDIA

    NeMo

    FREE
    NVIDIA NeMo, part of the NVIDIA AI platform, is an end-to-end, cloud-native enterprise framework to help build, customize, and deploy generative AI models. NeMo makes generative AI model development easy, cost-effective and fast for enterprises. NeMo has separate collections for Automatic Speech Recognition (ASR), Natural Language Processing (NLP), and Text-to-Speech (TTS) models. Each collection consists of prebuilt modules that include everything needed to train on your data. NeMo framework supports both language and image generative AI models. Currently, the workflow for language is in open beta, and the workflow for images is in early access. You must be a member of the NVIDIA Developer Program and logged in with your organization’s email address to access it. It is licensed under the Apache License 2.0, which is a permissive open source license that allows for commercial use.
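    As an illustration of the collection-based workflow, here is a minimal sketch that loads a pretrained model from the NeMo ASR collection and transcribes an audio file; the pretrained model name and audio file are assumptions for illustration.

        # Minimal sketch: load a pretrained ASR model from the NeMo ASR collection and transcribe audio.
        # Requires the nemo_toolkit package; the model and audio file names are placeholders.
        import nemo.collections.asr as nemo_asr

        asr_model = nemo_asr.models.EncDecCTCModel.from_pretrained("QuartzNet15x5Base-En")
        transcripts = asr_model.transcribe(["sample_audio.wav"])
        print(transcripts[0])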
  • Amazon

    SageMaker

    FREE
    Amazon SageMaker enables developers to create, train, and deploy machine-learning (ML) models in the cloud. SageMaker also enables developers to deploy ML models on embedded systems and edge devices. Amazon SageMaker JumpStart helps you quickly and easily get started with machine learning. The solutions are fully customizable and support one-click deployment and fine-tuning of more than 150 popular open source models, such as natural language processing, object detection, and image classification models, that can help with extracting and analyzing data, fraud detection, churn prediction, and personalized recommendations.

    The Hugging Face LLM Inference DLCs on Amazon SageMaker support the following models: BLOOM / BLOOMZ, MT0-XXL, Galactica, SantaCoder, GPT-NeoX 20B (joi, pythia, lotus, rosey, chip, RedPajama, open assistant), FLAN-T5-XXL (T5-11B), Llama (vicuna, alpaca, koala), StarCoder / SantaCoder, and Falcon 7B / Falcon 40B. Hugging Face’s LLM DLC is a purpose-built inference container for deploying LLMs easily in a secure and managed environment.
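    A minimal sketch of deploying one of these models with the Hugging Face LLM DLC via the SageMaker Python SDK follows; the model ID and instance type are assumptions, and you would run this from an environment with a SageMaker execution role.

        # Minimal sketch: deploy an open-source LLM with the Hugging Face LLM DLC on SageMaker.
        # The model ID and instance type are illustrative placeholders; check the docs for supported values.
        import sagemaker
        from sagemaker.huggingface import HuggingFaceModel, get_huggingface_llm_image_uri

        role = sagemaker.get_execution_role()                     # SageMaker execution role
        image_uri = get_huggingface_llm_image_uri("huggingface")  # Hugging Face LLM inference container

        model = HuggingFaceModel(
            image_uri=image_uri,
            env={"HF_MODEL_ID": "tiiuae/falcon-7b-instruct"},     # assumed model ID from the list above
            role=role,
        )
        predictor = model.deploy(initial_instance_count=1, instance_type="ml.g5.2xlarge")
        print(predictor.predict({"inputs": "What is Amazon SageMaker?"}))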
  • Cohere

    Summarize

    $0.015
    Cohere is a Canadian startup that provides high-performance and secure LLMs for the enterprise. Their models work on public, private, or hybrid clouds and are available as an API that can be integrated into various libraries using Python, Node, or Go software development kits (SDKs).
    Cohere Summarize generates a succinct version of a provided text. This summary relays the most important messages of the text, and a user can configure the results with a variety of parameters to support unique use cases. It can instantly encapsulate the key points of a document and provides text summarization capabilities at scale.
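    A minimal sketch of calling Cohere Summarize with the Python SDK is shown below; the API key and input text are placeholders, and the length and format parameters illustrate the kind of configuration mentioned above.

        # Minimal sketch: Cohere Summarize via the Python SDK (cohere package).
        import cohere

        co = cohere.Client("YOUR_API_KEY")  # placeholder key

        response = co.summarize(
            text="<paste a long article or document here>",
            length="medium",      # illustrative configuration
            format="paragraph",
        )
        print(response.summary)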