Compare Models

  • OpenAI

    DALL·E 2

    $0.016
    DALL·E 2 is a browser-based AI system that can create realistic images and art from a description in natural language. Given a prompt, it can create a new image at a chosen size, edit an existing image, or create variations of a user-provided image. DALL·E 2 is currently priced per image, with the cost depending on the pixel resolution.
    For developers, there is also an API (in beta) that lets you integrate state-of-the-art image generation capabilities directly into your product. API usage is offered on a pay-as-you-go basis and is billed separately, and OpenAI offers large-volume discounts (>$5k/month) through their sales team. A minimal API sketch follows below.
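
    A minimal sketch of the image endpoints using the openai Python package (pre-1.0 interface); the API key placeholder, prompt, size, and file name are illustrative.

        import openai

        openai.api_key = "YOUR_API_KEY"  # placeholder; set your own key

        # Create a new image from a natural-language prompt. DALL·E 2 is
        # billed per image, with the price depending on the resolution.
        result = openai.Image.create(
            prompt="a watercolor painting of a lighthouse at dawn",
            n=1,
            size="512x512",
        )
        print(result["data"][0]["url"])

        # Create variations of an existing, user-provided image.
        with open("lighthouse.png", "rb") as f:
            variations = openai.Image.create_variation(image=f, n=2, size="512x512")
            print(variations["data"][0]["url"])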

  • OpenAI

    Davinci (fine tuning) GPT-3

    $0.12
    When fine-tuning a GPT model like Davinci, you are fine-tuning the GPT-3 base model (not the instruction-oriented variant of GPT-3). Fine-tuning takes the pre-trained base model and further trains it on your specific dataset or task to enhance its performance. It lets OpenAI API customers leverage the power of pre-trained GPT-3 language models, such as Davinci, while tailoring them to their specific needs: the model specializes in a particular task or context, which makes it more efficient and effective for that use case and can reduce costs and latency for high-volume tasks. You can also continue fine-tuning a fine-tuned model to add more data without having to start from scratch.
    Davinci is the largest and most powerful variant of GPT-3. It’s the best choice for tasks requiring the most sophisticated language capabilities, but it also requires more processing power and time to generate results. Note: there are two fine-tuning costs to be aware of: a one-time training cost and a pay-as-you-go usage cost, both of which appear in the sketch below.
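
    A minimal sketch of that two-step flow against the legacy fine-tunes endpoint (openai Python package, pre-1.0 interface); the file name and prompt are illustrative.

        import openai

        openai.api_key = "YOUR_API_KEY"  # placeholder; set your own key

        # One-time training cost: upload a JSONL file of
        # {"prompt": ..., "completion": ...} pairs and start a
        # fine-tune job on the Davinci base model.
        upload = openai.File.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
        job = openai.FineTune.create(training_file=upload["id"], model="davinci")

        # Pay-as-you-go usage cost: once the job has succeeded, the
        # resulting model is queried like any other completion model.
        finished = openai.FineTune.retrieve(job["id"])  # run after the job completes
        completion = openai.Completion.create(
            model=finished["fine_tuned_model"],
            prompt="Classify the sentiment of: 'The battery lasts all day.' ->",
            max_tokens=5,
        )
        print(completion["choices"][0]["text"])
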
  • OpenAI

    Davinci Instruct model

    $0.02
    Davinci is the most capable Instruct model; it can do any task the other Instruct models (Ada, Babbage, and Curie) can, often with higher quality. InstructGPT models are sibling models to ChatGPT: they are built on GPT-3 models but made to be safer, more helpful, and more aligned with users’ needs using a technique called reinforcement learning from human feedback (RLHF). Instruct models are designed to follow a clear, single-turn instruction provided in the prompt and are not optimized for conversational chat. Developers can use Instruct models for extracting knowledge, generating text, performing NLP tasks, automating tasks involving natural language, and translating languages. Instruct models make up facts less often than GPT-3 base models and show slight decreases in toxic output generation. Access is available through a request to OpenAI’s API; a minimal call is sketched below.
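
    A minimal single-turn instruction sent to the Davinci Instruct model (text-davinci-003) through the legacy completions endpoint; the instruction and parameters below are illustrative.

        import openai

        openai.api_key = "YOUR_API_KEY"  # placeholder; set your own key

        response = openai.Completion.create(
            model="text-davinci-003",
            prompt=(
                "Extract the company names mentioned in the text below.\n\n"
                "Text: OpenAI and Databricks both released new models this year.\n\n"
                "Companies:"
            ),
            max_tokens=32,
            temperature=0,  # keep the output focused for an extraction task
        )
        print(response["choices"][0]["text"].strip())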

  • Databricks

    Dolly 2.0

    FREE
    Dolly 2.0, by Databricks, is the first open-source, instruction-following large language model fine-tuned on a human-generated instruction dataset and licensed for research and commercial use. This means any organization can create, own, and customize powerful LLMs that can talk to people without paying for API access or sharing data with third parties.

    Dolly 2.0 is a 12B parameter language model based on the EleutherAI pythia model family and fine-tuned exclusively on a new, high-quality, human-generated instruction-following dataset (crowdsourced among Databricks employees – so cool). Dolly-v2-12b is not a state-of-the-art model, but it does exhibit surprisingly high-quality instruction-following behavior not characteristic of the foundation model on which it is based. Dolly v2 is also available in smaller model sizes: dolly-v2-7b, a 6.9 billion parameter model based on pythia-6.9b, and dolly-v2-3b, a 2.8 billion parameter model based on pythia-2.8b.

    Dolly 2.0 can be used for brainstorming, classification, open Q&A, closed Q&A, content generation, information extraction, and summarization. You can access the Dolly 2.0 training code, dataset, and model weights on Hugging Face; a loading sketch follows below.
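
    The sketch below follows the pattern on the dolly-v2-12b Hugging Face model card: the transformers pipeline with trust_remote_code enabled pulls in Dolly’s custom instruction-following pipeline; the prompt is illustrative.

        import torch
        from transformers import pipeline

        # Dolly ships a custom instruction-following pipeline, hence
        # trust_remote_code=True; bfloat16 + device_map="auto" help keep
        # the 12B model within GPU memory where possible.
        generate_text = pipeline(
            model="databricks/dolly-v2-12b",
            torch_dtype=torch.bfloat16,
            trust_remote_code=True,
            device_map="auto",
        )

        result = generate_text("Give me three ideas for a team offsite in winter.")
        print(result[0]["generated_text"])
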
  • OpenAI

    GPT-3.5-turbo 16k

    $0.004
    GPT-3.5-turbo 16k has the same capabilities as the standard gpt-3.5-turbo (4k) model but with four times the context window, at twice the price. In general, a larger context window can be more powerful because the model takes more of the surrounding text into account, which can lead to better predictions.
    GPT-3.5-turbo was designed to provide better performance and is well-known as the model that, by default, powers ChatGPT. However, paying customers who subscribe to ChatGPT Plus can change the model to GPT-4 before starting a chat.
    GPT-3.5-turbo is optimized for conversational formats and is superior to GPT-3 models, and its performance is on par with Instruct Davinci-003. GPT-3.5-turbo was trained on a massive corpus of text data, including books, articles, and web pages from across the internet, and is used for tasks like content and code generation, question answering, translation, and more. Access is available through a request to OpenAI’s API or through the web application (try for free); a minimal 16k call is sketched below.
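
    A minimal sketch of sending a long document to the 16k-context variant through the chat completions endpoint (openai Python package, pre-1.0 interface); the file name and prompts are illustrative.

        import openai

        openai.api_key = "YOUR_API_KEY"  # placeholder; set your own key

        long_document = open("meeting_transcript.txt").read()  # fits thanks to the 16k context

        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo-16k",
            messages=[
                {"role": "system", "content": "You summarize long transcripts."},
                {"role": "user", "content": "Summarize the key decisions:\n\n" + long_document},
            ],
        )
        print(response["choices"][0]["message"]["content"])
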
  • OpenAI

    GPT-3.5-turbo 4k

    $0.002
    GPT-3.5-turbo is an upgraded version of the GPT-3 model. It was designed to provide better performance and is well-known as the model that, by default, powers ChatGPT (however, paying customers who subscribe to ChatGPT Plus can change the model to GPT-4 before starting a chat).
    GPT-3.5-turbo is optimized for conversational formats and is superior to GPT-3 models, and its performance is on par with Instruct Davinci-003 (it is also ten times cheaper and has been observed to be up to three times faster). GPT-3.5-turbo was trained on a massive corpus of text data, including books, articles, and web pages from across the internet, and is used for tasks like content and code generation, question answering, translation, and more. In some cases, GPT-3.5-turbo results can be too “chatty” or “creative”. Access is available through a request to OpenAI’s API or through the web application (try for free); a minimal call is sketched below.
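
    A minimal standard gpt-3.5-turbo chat call (openai Python package, pre-1.0 interface); lowering temperature is one way to rein in overly “chatty” or “creative” answers, and the values shown are illustrative.

        import openai

        openai.api_key = "YOUR_API_KEY"  # placeholder; set your own key

        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            temperature=0.2,   # lower values give more focused, less creative output
            max_tokens=150,    # cap the length of the reply
            messages=[
                {"role": "system", "content": "Answer concisely."},
                {"role": "user", "content": "Translate 'good morning' into French and Spanish."},
            ],
        )
        print(response["choices"][0]["message"]["content"])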

  • OpenAI

    GPT-4 32K context

    $0.12

    GPT-4 is OpenAI’s new design that incorporates additional improvements and advancements, including being multimodal so it can take both text and image inputs. With broad general knowledge and domain expertise, GPT-4 can follow complex instructions in natural language and solve difficult problems with accuracy. GPT-4 has a more diverse range of training data, incorporating additional languages and sources beyond just English. This means the model can process and generate text in multiple languages and better understand the nuances and subtleties of different languages and dialects. This is the extended 32k token context-length model, which is separate from the 8k model (and more expensive).

    GPT-4 API access is now available; a minimal call against the 32k model is sketched after the note below.

     

    Note: At the time of writing, ChatGPT Plus subscribers can access GPT-4 by logging into the web application.
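
    A minimal sketch of calling the 32k model, which is addressed by its own model name (“gpt-4-32k”) on the chat completions endpoint (openai Python package, pre-1.0 interface); availability depends on your account’s API access, and the file name and prompt are illustrative.

        import openai

        openai.api_key = "YOUR_API_KEY"  # placeholder; set your own key

        contract_text = open("contract.txt").read()  # a document too long for the 8k model

        response = openai.ChatCompletion.create(
            model="gpt-4-32k",
            messages=[
                {"role": "user",
                 "content": "List the termination clauses in this contract:\n\n" + contract_text},
            ],
        )
        print(response["choices"][0]["message"]["content"])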

  • OpenAI

    GPT-4 8K context

    $0.06

    GPT-4 is OpenAI’s new design that incorporates additional improvements and advancements, including being multimodal so it can take both text and image inputs. With broad general knowledge and domain expertise, GPT-4 can follow complex instructions in natural language and solve difficult problems with accuracy. GPT-4 has a more diverse range of training data, incorporating additional languages and sources beyond just English. This means the model can process and generate text in multiple languages and better understand the nuances and subtleties of different languages and dialects. There are a few different GPT-4 models to choose from; the standard GPT-4 model offers an 8k-token context. GPT-4 API access is now available; a rough per-request cost estimate is sketched after the note below.

    Note: For the ChatGPT web application, ChatGPT is powered by GPT-3.5 turbo by default. However, if you are a paying customer and subscribe to ChatGPT Plus, you can change the model to GPT-4 before you start a chat.
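
    A rough, back-of-the-envelope cost sketch using the $0.06 per 1K tokens rate listed above; real GPT-4 billing distinguishes prompt and completion tokens, so treat this as an illustration of the arithmetic rather than an exact quote. The tiktoken package provides the tokenizer used by the model.

        import tiktoken

        PRICE_PER_1K_TOKENS = 0.06  # the rate listed above; an assumption for this sketch

        prompt = "Explain the difference between the 8k and 32k context GPT-4 models."
        enc = tiktoken.encoding_for_model("gpt-4")
        n_tokens = len(enc.encode(prompt))

        print(f"{n_tokens} prompt tokens ≈ ${n_tokens / 1000 * PRICE_PER_1K_TOKENS:.4f}")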

  • Meta AI

    Llama

    FREE
    Meta has created Llama (Large Language Model Meta AI), its state-of-the-art foundational large language model designed to help researchers advance their work in this subfield of AI. Smaller, more performant models such as LLaMA enable others in the research community who don’t have access to large amounts of infrastructure to study these models, further democratizing access in this important, fast-changing field.
    Training smaller foundation models like Llama is desirable in the large language model space because it requires far less computing power and resources to test new approaches, validate others’ work, and explore new use cases. Foundation models train on a large set of unlabeled data, which makes them ideal for fine-tuning for a variety of tasks. Meta is making Llama available at several sizes (7B, 13B, 33B, and 65B parameters) and also shares a Llama model card that details how the model was built, in keeping with Meta’s approach to responsible AI practices.

  • Meta AI

    Llama 2

    FREE
    Meta has released Llama 2 under an open license that allows commercial use for businesses. Llama 2 is available through the Hugging Face Transformers library (you will need to sign Meta’s Llama 2 Community License Agreement – https://ai.meta.com/resources/models-and-libraries/llama-downloads/), via the Microsoft Azure cloud computing service, and through Amazon SageMaker JumpStart.
    Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. Llama 2 is intended for commercial and research use in English. It comes in a range of parameter sizes—7 billion, 13 billion, and 70 billion—as well as pre-trained and fine-tuned variations. According to Meta, the tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety. Llama 2 was pre-trained on 2 trillion tokens of data from publicly available sources. The tuned models are intended for assistant-like chat, whereas pre-trained models can be adapted for a variety of natural language generation tasks.
    Link to the live demo of the Llama 2 70B chatbot – https://huggingface.co/spaces/ysharma/Explore_llamav2_with_TGI
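
    A minimal sketch of loading the gated 7B chat variant from the Hugging Face Hub with transformers; you must first accept Meta’s license for the meta-llama repositories and authenticate (for example with huggingface-cli login), and the prompt is illustrative.

        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        model_id = "meta-llama/Llama-2-7b-chat-hf"  # gated repo: requires accepting the license
        tokenizer = AutoTokenizer.from_pretrained(model_id)
        model = AutoModelForCausalLM.from_pretrained(
            model_id, torch_dtype=torch.float16, device_map="auto"
        )

        inputs = tokenizer("Suggest three names for a home-brewing club.", return_tensors="pt").to(model.device)
        outputs = model.generate(**inputs, max_new_tokens=100)
        print(tokenizer.decode(outputs[0], skip_special_tokens=True))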

  • RedPajama

    RedPajama-INCITE-7B-Instruct

    FREE
    The RedPajama project aims to create a set of leading open-source models. RedPajama-INCITE-7B-Instruct was developed by Together and leaders from the open-source AI community. The RedPajama-INCITE-7B-Instruct model is the top-performing open-source entry on the HELM benchmarks, surpassing other cutting-edge open models like LLaMA-7B, Falcon-7B, and MPT-7B. The instruction-tuned model is designed for versatility and shines at few-shot tasks; a few-shot prompting sketch follows below.

     

    The Instruct, Chat, Base Model, and ten interim checkpoints are now available on HuggingFace, and all the RedPajama LLMs come with commercial licenses under Apache 2.0.

     

    Play with the RedPajama chat model version here – https://lnkd.in/g3npSEbg
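
    A minimal few-shot prompting sketch with transformers; the model id follows the Hugging Face listing and, together with the prompt, is an assumption of this example rather than something specified above.

        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        model_id = "togethercomputer/RedPajama-INCITE-7B-Instruct"  # assumed Hub id
        tokenizer = AutoTokenizer.from_pretrained(model_id)
        model = AutoModelForCausalLM.from_pretrained(
            model_id, torch_dtype=torch.float16, device_map="auto"
        )

        # A couple of worked examples, then the case we want completed.
        prompt = (
            "Paris is the capital of France.\n"
            "Ottawa is the capital of Canada.\n"
            "Canberra is the capital of"
        )
        inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
        outputs = model.generate(**inputs, max_new_tokens=10)
        print(tokenizer.decode(outputs[0], skip_special_tokens=True))
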
  • StableLM

    StableLM-Base-Alpha-7B

    FREE

    Stability AI released a new open-source language model, StableLM. The Alpha version of the model is available in 3 billion and 7 billion parameters. StableLM is trained on a new experimental dataset built on The Pile but three times larger, with 1.5 trillion tokens of content. The richness of this dataset gives StableLM surprisingly high performance in conversational and coding tasks, despite its small size. The models are now available on GitHub and on Hugging Face, and developers can freely inspect, use, and adapt the StableLM base models for commercial or research purposes subject to the terms of the CC BY-SA 4.0 license. A loading sketch follows below.
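
    A minimal generation sketch with transformers, using the stabilityai/stablelm-base-alpha-7b checkpoint on the Hugging Face Hub; the code-completion prompt and sampling settings are illustrative.

        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        model_id = "stabilityai/stablelm-base-alpha-7b"
        tokenizer = AutoTokenizer.from_pretrained(model_id)
        model = AutoModelForCausalLM.from_pretrained(
            model_id, torch_dtype=torch.float16, device_map="auto"
        )

        # The base model is a plain completion model, so prompts are
        # continued rather than answered as chat.
        inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to(model.device)
        outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
        print(tokenizer.decode(outputs[0], skip_special_tokens=True))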
