Compare Models
OpenAI
GPT-3.5-turbo 16k
$0.004
GPT-3.5-turbo 16k has the same capabilities as the standard GPT-3.5-turbo (4k model) but with 4 times the context at twice the price. In general, a larger context window can be more powerful because the model takes more of the surrounding text into account, which can lead to better predictions. GPT-3.5-turbo was designed to provide better performance and is well known as the model that, by default, powers ChatGPT (paying customers who subscribe to ChatGPT Plus can switch the model to GPT-4 before starting a chat). GPT-3.5-turbo is optimized for conversational formats, is superior to the GPT-3 models, and performs on par with Instruct Davinci-003. It was trained on a massive corpus of text data, including books, articles, and web pages from across the internet, and is used for tasks like content and code generation, question answering, translation, and more. Access is available through OpenAI’s API or through the web application (try for free).
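To make the "4 times the context at twice the price" trade-off concrete, here is a minimal sketch of per-request cost arithmetic, assuming the listed prices ($0.004 and $0.002) are per 1,000 tokens; the exact prompt/completion price split is an assumption, so check OpenAI's pricing page.

```python
# Rough cost estimate for a single request, assuming the listed prices
# are per 1,000 tokens (assumption; see OpenAI's pricing page for details).
def estimate_cost(prompt_tokens: int, completion_tokens: int, price_per_1k: float) -> float:
    return (prompt_tokens + completion_tokens) / 1000 * price_per_1k

# A request that would overflow the 4k model fits comfortably in the 16k model,
# but every token costs twice as much.
print(estimate_cost(6000, 500, 0.004))  # 16k model: ~$0.026
print(estimate_cost(3000, 500, 0.002))  # 4k model:  ~$0.007
```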
OpenAI
GPT-3.5-turbo 4k
$0.002
GPT-3.5-turbo is an upgraded version of the GPT-3 model. It was designed to provide better performance and is well known as the model that, by default, powers ChatGPT (paying customers who subscribe to ChatGPT Plus can change the model to GPT-4 before starting a chat). GPT-3.5-turbo is optimized for conversational formats, is superior to the GPT-3 models, and performs on par with Instruct Davinci-003 while being roughly ten times cheaper and reportedly up to three times faster. It was trained on a massive corpus of text data, including books, articles, and web pages from across the internet, and is used for tasks like content and code generation, question answering, translation, and more. In some cases, its results can be too “chatty” or “creative”. Access is available through OpenAI’s API or through the web application (try for free).
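As a rough illustration of the conversational format, here is a minimal sketch using the pre-1.0 `openai` Python SDK; the message structure follows OpenAI's chat completions API, but treat the exact parameters as assumptions and check the current API reference, since newer SDK versions expose a different client.

```python
import openai  # pre-1.0 SDK interface

openai.api_key = "YOUR_API_KEY"  # placeholder

# Chat models take a list of role-tagged messages rather than a single prompt string.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a concise technical assistant."},
        {"role": "user", "content": "Summarize the difference between GPT-3.5-turbo and GPT-4."},
    ],
    temperature=0.2,  # a lower temperature reins in the "creative" tendency noted above
)
print(response["choices"][0]["message"]["content"])
```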
OpenAI
GPT-4 32K context
$0.12
GPT-4 is OpenAI’s new design that incorporates additional improvements and advancements, including being multimodal so it can take both text and image inputs. With broad general knowledge and domain expertise, GPT-4 can follow complex instructions in natural language and solve difficult problems with accuracy. GPT-4 has a more diverse range of training data, incorporating additional languages and sources beyond just English, which means the model can process and generate text in multiple languages and better understand the nuances and subtleties of different languages and dialects. This is the extended 32k-token context-length model, which is separate from the 8k model (and more expensive).
GPT-4 API access is now available.
Note: At the time of writing, ChatGPT Plus subscribers can access GPT-4 by logging into the web application.
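Because the 32k variant costs roughly twice as much per token as the 8k variant, one practical pattern is to count tokens first and only fall back to the larger context when needed. Below is a minimal sketch using the `tiktoken` tokenizer; the model identifiers (`gpt-4`, `gpt-4-32k`) are the ones OpenAI documented at launch, but confirm availability on your account.

```python
import tiktoken

def pick_gpt4_variant(prompt: str, reply_budget: int = 1000) -> str:
    """Choose the smallest GPT-4 context window that fits prompt + expected reply."""
    enc = tiktoken.encoding_for_model("gpt-4")
    needed = len(enc.encode(prompt)) + reply_budget
    if needed <= 8192:
        return "gpt-4"       # standard 8k context
    if needed <= 32768:
        return "gpt-4-32k"   # extended 32k context, about twice the per-token price
    raise ValueError("Prompt too long even for the 32k context window")

print(pick_gpt4_variant("Summarize this short paragraph."))  # -> "gpt-4"
```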
OpenAI
GPT-4 8K context
$0.06
GPT-4 is OpenAI’s new design that incorporates additional improvements and advancements, including being multimodal so it can take both text and image inputs. With broad general knowledge and domain expertise, GPT-4 can follow complex instructions in natural language and solve difficult problems with accuracy. GPT-4 has a more diverse range of training data, incorporating additional languages and sources beyond just English, which means the model can process and generate text in multiple languages and better understand the nuances and subtleties of different languages and dialects. There are a few GPT-4 models to choose from; the standard GPT-4 model offers an 8k-token context. GPT-4 API access is now available.
Note: The ChatGPT web application is powered by GPT-3.5-turbo by default. However, if you are a paying customer and subscribe to ChatGPT Plus, you can change the model to GPT-4 before you start a chat.
AI21 Labs
Jurassic-2 Grande (Base & Instruct)
$0.01
J2-Grande offers enhanced text generation capabilities, making it well suited to language tasks with a greater degree of complexity. Its fine-tuning options allow quality to be optimized while maintaining an affordable price and high efficiency (see the site for more details). It is an ideal choice for complex language processing tasks and generative text applications. All of the J2 models support several non-English languages, including Spanish, French, German, Portuguese, Italian and Dutch. All Jurassic foundation models are trained on a massive corpus of text, making them a powerful basis for a wide range of natural language processing applications, capable of understanding and composing human-like text. The models are available through an API, and you can start with a free trial and then pay based on usage.
AI21 Labs
Jurassic-2 Jumbo (Base & Instruct)
$0.015
As the largest and most powerful model in the Jurassic series, J2-Jumbo is an ideal choice for the most complex language processing tasks and generative text applications. Further, the model can be fine-tuned for optimum performance in any custom application. Jurassic-2 improves upon Jurassic-1 (AI21 Studio’s previous generation of models) in every aspect, making it a highly versatile general-purpose text generator capable of composing human-like text and solving complex tasks such as question answering and text classification. All of the J2 models support several non-English languages, including Spanish, French, German, Portuguese, Italian and Dutch. All Jurassic foundation models are trained on a massive corpus of text, making them a powerful basis for a wide range of natural language processing applications, capable of understanding and composing human-like text. The models are available through an API, and you can start with a free trial and then pay based on usage.
AI21 Labs
Jurassic-2 Large (Base & Instruct)
$0.003
Designed for fast responses, the Jurassic-2 Large model can be fine-tuned to optimize performance for relatively simple tasks, making it an ideal choice for language processing tasks that require maximum affordability and less processing power. All of the J2 models support several non-English languages, including Spanish, French, German, Portuguese, Italian and Dutch. All Jurassic foundation models are trained on a massive corpus of text, making them a powerful basis for a wide range of natural language processing applications, capable of understanding and composing human-like text. The models are available through an API, and you can start with a free trial and then pay based on usage.
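As a rough sketch of how the J2 models are reached over AI21 Studio's REST API, here is a minimal `requests` example against the instruct-tuned Grande variant; the endpoint path, field names (`maxTokens`, `completions`) and the `j2-grande-instruct` model id reflect AI21's documented conventions at the time of writing, but treat them as assumptions and verify against the current AI21 Studio docs.

```python
import requests

AI21_API_KEY = "YOUR_API_KEY"  # placeholder

# Completion request against the instruct-tuned J2 Grande model (assumed endpoint shape).
resp = requests.post(
    "https://api.ai21.com/studio/v1/j2-grande-instruct/complete",
    headers={"Authorization": f"Bearer {AI21_API_KEY}"},
    json={
        "prompt": "Translate to French: Where is the nearest train station?",
        "maxTokens": 64,
        "temperature": 0.3,
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["completions"][0]["data"]["text"])
```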
Aleph Alpha
Luminous-base
$0.0055
Aleph Alpha offers the Luminous family of large language models, which vary in size, price and parameters. Luminous-base speaks and writes five languages (English, French, German, Italian and Spanish), and the model can perform information extraction and language simplification and has multimodal image description capabilities. Aleph Alpha is targeting “critical enterprises”: organizations like law firms, healthcare providers and banks, which rely heavily on trustworthy, accurate information. You can try the Aleph Alpha models for free: go to the Jumpstart page on their site and click through the examples on Classification & Labelling, Generation, Information Extraction, Translation & Conversion and Multimodal. Aleph Alpha is based in Europe, which allows customers with sensitive data to process their information in compliance with European regulations for data protection and security on a sovereign, European computing infrastructure.
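For API access, here is a minimal sketch using the `aleph_alpha_client` Python package to send a completion request to Luminous-base; the class names and `complete` call follow that package's documented interface, but treat the details (and the token placeholder) as assumptions and check Aleph Alpha's client docs.

```python
from aleph_alpha_client import Client, CompletionRequest, Prompt

client = Client(token="YOUR_AA_TOKEN")  # placeholder API token

# Plain text completion against the smallest Luminous model.
request = CompletionRequest(
    prompt=Prompt.from_text("Vereinfache diesen Satz: ..."),  # Luminous also handles German
    maximum_tokens=64,
)
response = client.complete(request, model="luminous-base")
print(response.completions[0].completion)
```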
Aleph Alpha
Luminous-extended
$0.0082
Luminous-extended is the second-largest Luminous model and is faster and cheaper than Luminous-supreme. The model can perform information extraction and language simplification and has multimodal image description capabilities. You can try the Aleph Alpha models with predefined examples for free: go to the Jumpstart page on their site and click through the examples on Classification & Labelling, Generation, Information Extraction, Translation & Conversion and Multimodal. Aleph Alpha is based in Europe, which allows customers with sensitive data to process their information in compliance with European regulations for data protection and security on a sovereign, European computing infrastructure.
Aleph Alpha
Luminous-supreme
$0.0319
Luminous-supreme is the largest, but also the most expensive, Aleph Alpha Luminous model. Supreme can do all the tasks of the smaller models (it speaks and writes five languages, English, French, German, Italian and Spanish, and can perform information extraction, language simplification, semantic text comparison, document summarization, Q&A tasks and more) and is well suited for creative writing. You can try the Aleph Alpha models for free: go to the Jumpstart page on their site and click through the examples on Classification & Labelling, Generation, Information Extraction, Translation & Conversion and Multimodal.
Aleph Alpha
Luminous-supreme-control
$0.0398
Supreme-control is its own model, although it is based on Luminous-supreme and has been optimized for a certain set of tasks. The Luminous models differ in complexity and ability, but this model excels when optimized for question answering and natural language inference. You can try the Aleph Alpha models with predefined examples for free: go to the Jumpstart page on their site and click through the examples on Classification & Labelling, Generation, Information Extraction, Translation & Conversion and Multimodal.
OpenAI
text-davinci-003
$0.02
Text-davinci-003 is recognized as a GPT-3.5 model and is a variant of the GPT-3 model. While both Davinci and text-davinci-003 are powerful models, they differ in a few key ways. Text-davinci-003 is a newer, more capable model explicitly designed for instruction-following tasks, and it was trained on a more recent dataset containing data up to June 2021. It can do any language task with better quality, longer output, and more consistent instruction-following than the Curie, Babbage, or Ada models. Text-davinci-003 also supports a longer context window (maximum prompt plus completion length) than Davinci. For users of OpenAI’s API, GPT-3.5-turbo may be a better choice than text-davinci-003 for tasks that require high accuracy in math or for zero-shot classification and sentiment analysis. Note that GPT-3.5-turbo performs at a similar capability to text-davinci-003 but at 10 percent of the price per token; OpenAI recommends GPT-3.5-turbo for most use cases.
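Unlike the chat models above, text-davinci-003 is served through the older completions endpoint, which takes a single prompt string rather than a list of messages. Here is a minimal sketch with the pre-1.0 `openai` SDK; the parameters are illustrative, and newer SDK versions expose a different client.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Instruction-following via the legacy completions endpoint (single prompt, no chat messages).
response = openai.Completion.create(
    model="text-davinci-003",
    prompt="Classify the sentiment of this review as positive or negative:\n\n'Battery life is terrible.'",
    max_tokens=5,
    temperature=0,
)
print(response["choices"][0]["text"].strip())
```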