Compare Models
-
EleutherAI
GPT-NeoX-20B
FREE
EleutherAI is a leading non-profit research institute focused on large-scale artificial intelligence research, and has trained and released several LLMs along with the codebases used to train them. GPT-NeoX-20B is a 20-billion-parameter autoregressive language model trained on the Pile using the GPT-NeoX library. Its architecture intentionally resembles that of GPT-3 and is almost identical to that of GPT-J-6B. Its training dataset contains a wide variety of English-language texts, reflecting the general-purpose nature of the model. It is a transformer-based language model and is English-only, so it cannot be used for translation or for generating text in other languages. It is freely and openly available to the public under a permissive license.
-
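Because the weights are openly licensed, GPT-NeoX-20B can be loaded with the Hugging Face transformers library. A minimal sketch (the checkpoint name is the official one on the Hugging Face Hub; the download is roughly 40 GB and needs a large GPU, so the heavy load is kept behind a main guard):

```python
# Sketch: generating text with GPT-NeoX-20B via Hugging Face transformers.
MODEL_ID = "EleutherAI/gpt-neox-20b"

def build_generator():
    """Load tokenizer and model; requires substantial GPU memory."""
    from transformers import pipeline
    return pipeline("text-generation", model=MODEL_ID)

if __name__ == "__main__":
    generator = build_generator()
    print(generator("EleutherAI is", max_new_tokens=40)[0]["generated_text"])
```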
Google
LaMDA
OTHER
LaMDA stands for Language Model for Dialogue Applications. It is a conversational Large Language Model (LLM) built by Google as an underlying technology to power dialogue-based applications that generate natural-sounding human language. LaMDA is built by fine-tuning a family of Transformer-based neural language models specialized for dialogue and teaching the models to leverage external knowledge sources. Its potential use cases are diverse, ranging from customer service and chatbots to personal assistants and beyond. LaMDA is not open source; currently there are no APIs or downloads. However, Google is working on making LaMDA more accessible to researchers and developers, and APIs and downloads may be released in the future.
-
Aleph Alpha
Luminous-base
$0.0055
Aleph Alpha offers the Luminous family of large language models, which vary in size, price and parameters. Luminous-base speaks and writes five languages (English, French, German, Italian and Spanish), can perform information extraction and language simplification, and has multimodal image description capability. Aleph Alpha targets "critical enterprises": organizations such as law firms, healthcare providers and banks that rely heavily on trustworthy, accurate information. You can try Aleph Alpha models for free: go to the Jumpstart page on their site and click through the examples on Classification & Labelling, Generation, Information Extraction, Translation & Conversion and Multimodal. Aleph Alpha is based in Europe, allowing customers with sensitive data to process their information in compliance with European regulations for data protection and security on a sovereign, European computing infrastructure.
-
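Luminous models are served over an HTTP API. The sketch below only builds a completion request body; the endpoint URL and field names follow Aleph Alpha's public API documentation as I understand it, so treat them as assumptions and check the current reference before use:

```python
import json

# Hypothetical sketch of a completion request for the Aleph Alpha API.
# Endpoint and field names are assumptions based on the public docs.
API_URL = "https://api.aleph-alpha.com/complete"

def completion_payload(prompt: str, maximum_tokens: int = 64) -> str:
    body = {
        "model": "luminous-base",        # smallest Luminous model
        "prompt": prompt,
        "maximum_tokens": maximum_tokens,
    }
    return json.dumps(body)

payload = completion_payload("Extract the company name: Aleph Alpha is based in Heidelberg.")
```

Sending the payload additionally requires an API token in an Authorization header.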
Aleph Alpha
Luminous-extended
$0.0082
Aleph Alpha Luminous-extended is the second-largest model, faster and cheaper than Luminous-supreme. The model can perform information extraction and language simplification, and has multimodal image description capability. You can try Aleph Alpha models with predefined examples for free: go to the Jumpstart page on their site and click through the examples on Classification & Labelling, Generation, Information Extraction, Translation & Conversion and Multimodal. Aleph Alpha is based in Europe, which allows customers with sensitive data to process their information in compliance with European regulations for data protection and security on a sovereign, European computing infrastructure.
-
Aleph Alpha
Luminous-supreme
$0.0319
Luminous-supreme is the largest and most expensive Aleph Alpha Luminous model. Supreme can do everything the smaller models can (it speaks and writes five languages: English, French, German, Italian and Spanish, and can perform information extraction, language simplification, semantic text comparison, document summarization, Q&A tasks and more) and is also well suited for creative writing. You can try the Aleph Alpha models for free: go to the Jumpstart page on their site and click through the examples on Classification & Labelling, Generation, Information Extraction, Translation & Conversion and Multimodal.
-
Aleph Alpha
Luminous-supreme-control
$0.0398
Luminous-supreme-control is its own model, although it is based on Luminous-supreme and optimized for a certain set of tasks. The models differ in complexity and ability, but this one excels when optimized for question answering and Natural Language Inference. You can try the Aleph Alpha models with predefined examples for free: go to the Jumpstart page on their site and click through the examples on Classification & Labelling, Generation, Information Extraction, Translation & Conversion and Multimodal.
-
Google
PaLM 2 chat-bison-001
$0.0021535
PaLM 2 launched in May 2023 and is Google's next-generation Large Language Model, built on Google's Pathways AI architecture. PaLM 2 was trained on a massive dataset of text and code, and it can handle many different tasks and learn new ones quickly. It is seen as a direct competitor to OpenAI's GPT-4. It excels at advanced reasoning tasks, including code and math, classification and question answering, translation and multilingual proficiency (100 languages), and natural language generation, outperforming Google's previous state-of-the-art LLMs, including its predecessor PaLM. PaLM 2 is the underlying model driving the PaLM API, which can be accessed through Google's Generative AI Studio. PaLM 2 has four submodels of different sizes. Bison is the best value in terms of capability and cost, and chat-bison-001 has been fine-tuned for multi-turn conversation use cases. If you want to see PaLM 2's capabilities, the simplest way is through Google Bard, which PaLM 2 powers. Watch Paige Bailey introducing PaLM 2: view here
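"Multi-turn" means the request carries the whole conversation history, not just the latest message. A minimal sketch of such a message history; the field names ("author", "content") follow the shape of the PaLM chat API as an assumption, and the author labels here are illustrative placeholders:

```python
# Sketch: building a multi-turn message history for a chat-style model
# such as chat-bison-001. Field names are assumptions, not verified API spec.
def add_turn(history, author, content):
    """Append one conversation turn to the running history."""
    history.append({"author": author, "content": content})
    return history

history = []
add_turn(history, "user", "What is PaLM 2?")
add_turn(history, "model", "PaLM 2 is Google's next-generation LLM.")
add_turn(history, "user", "Which submodel is tuned for chat?")

# The full history is sent with every request so the model sees context.
request_body = {"prompt": {"messages": history}}
```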
-
Google
PaLM 2 text-bison-001
$0.004
PaLM 2 launched in May 2023 and is Google's next-generation Large Language Model, built on Google's Pathways AI architecture. PaLM 2 was trained on a massive dataset of text and code, and it can handle many different tasks and learn new ones quickly. It is seen as a direct competitor to OpenAI's GPT-4. It excels at advanced reasoning tasks, including code and math, classification, question answering, translation and multilingual proficiency (100 languages), and natural language generation, outperforming Google's previous state-of-the-art LLMs, including its predecessor PaLM. PaLM 2 is the underlying model driving the PaLM API, which can be accessed through Google's Generative AI Studio. PaLM 2 has four submodels of different sizes. Bison is the best value in terms of capability and cost, and text-bison-001 can be fine-tuned to follow natural language instructions and is suitable for various language tasks such as classification, sentiment analysis, entity extraction, extractive question answering, summarization, rewriting text in a different style, and concept ideation. If you want to see PaLM 2's capabilities, the simplest way is through Google Bard, which PaLM 2 powers.
Watch Paige Bailey introducing PaLM 2: view here
-
Google
PaLM 2 textembedding-gecko-001
$0.0004
PaLM 2 launched in May 2023 and is Google's next-generation Large Language Model, built on Google's Pathways AI architecture. PaLM 2 was trained on a massive dataset of text and code, and it can handle many different tasks and learn new ones quickly. It is seen as a direct competitor to OpenAI's GPT-4. It excels at advanced reasoning tasks, including code and math, classification and question answering, translation and multilingual proficiency (100 languages), and natural language generation, outperforming Google's previous state-of-the-art LLMs, including its predecessor PaLM. PaLM 2 is the underlying model driving the PaLM API, which can be accessed through Google's Generative AI Studio. PaLM 2 has four submodels of different sizes: Unicorn (the largest), Bison, Otter, and Gecko (the smallest); the range of sizes allows PaLM 2 to run efficiently and serve different tasks. Gecko is the smallest and cheapest model, intended for simple tasks, and textembedding-gecko-001 returns model embeddings for text inputs. If you want to see PaLM 2's capabilities, the simplest way is through Google Bard, which PaLM 2 powers. Watch Paige Bailey introducing PaLM 2: view here
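An embedding model maps text to a numeric vector, and downstream code typically compares those vectors with cosine similarity (for example, for semantic search). A self-contained sketch; the three-dimensional vectors below are made-up placeholders, not real API output (real embeddings have hundreds of dimensions):

```python
import math

# Cosine similarity: 1.0 for identical directions, ~0 for unrelated ones.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Placeholder "embeddings" for three texts (illustrative values only).
cat = [0.9, 0.1, 0.2]
kitten = [0.85, 0.15, 0.25]
car = [0.1, 0.9, 0.3]

# Semantically closer texts should score higher.
assert cosine_similarity(cat, kitten) > cosine_similarity(cat, car)
```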
-
Amazon
SageMaker
FREE
Amazon SageMaker enables developers to create, train, and deploy machine-learning (ML) models in the cloud, as well as on embedded systems and edge devices. Amazon SageMaker JumpStart helps you get started with machine learning quickly and easily. Its solutions are fully customizable and support one-click deployment and fine-tuning of more than 150 popular open-source models, such as natural language processing, object detection, and image classification models, which can help with extracting and analyzing data, fraud detection, churn prediction and personalized recommendations. The Hugging Face LLM Inference DLCs on Amazon SageMaker support the following models: BLOOM / BLOOMZ, MT0-XXL, Galactica, SantaCoder, GPT-NeoX 20B (joi, pythia, lotus, rosey, chip, RedPajama, open assistant), FLAN-T5-XXL (T5-11B), Llama (vicuna, alpaca, koala), Starcoder / SantaCoder, and Falcon 7B / Falcon 40B. Hugging Face's LLM DLC is a purpose-built inference container for deploying LLMs easily in a secure and managed environment.
-
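The LLM DLC is configured through environment variables that name the Hub model to serve. A sketch under stated assumptions: HF_MODEL_ID and SM_NUM_GPUS are settings documented for the container, the model and instance type are example choices, and the actual deployment (which needs AWS credentials and the sagemaker SDK) is only outlined behind the main guard:

```python
# Sketch: environment configuration read by the Hugging Face LLM
# Inference DLC when deployed on SageMaker.
def llm_dlc_env(model_id: str, num_gpus: int = 1) -> dict:
    return {
        "HF_MODEL_ID": model_id,       # Hugging Face Hub model to serve
        "SM_NUM_GPUS": str(num_gpus),  # degree of tensor parallelism
    }

env = llm_dlc_env("tiiuae/falcon-7b")

if __name__ == "__main__":
    # Requires the sagemaker SDK, an IAM role, and AWS credentials.
    from sagemaker.huggingface import HuggingFaceModel, get_huggingface_llm_image_uri
    model = HuggingFaceModel(
        env=env,
        role="<your-iam-role>",  # placeholder, not a real role ARN
        image_uri=get_huggingface_llm_image_uri("huggingface"),
    )
    model.deploy(initial_instance_count=1, instance_type="ml.g5.2xlarge")
```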
StableLM
StableLM-Base-Alpha-7B
FREE
Stability AI released a new open-source language model, StableLM. The Alpha version of the model is available in 3-billion- and 7-billion-parameter sizes. StableLM is trained on a new experimental dataset built on The Pile but three times larger, with 1.5 trillion tokens of content. The richness of this dataset gives StableLM surprisingly high performance in conversational and coding tasks, despite its small size. The models are available on GitHub and Hugging Face, and developers can freely inspect, use, and adapt the StableLM base models for commercial or research purposes subject to the terms of the CC BY-SA-4.0 license.
-
Cohere
Summarize
$0.015
Cohere is a Canadian startup that provides high-performance, secure LLMs for the enterprise. Their models work on public, private, or hybrid clouds and are available through an API that can be integrated via Python, Node, or Go software development kits (SDKs). Cohere Summarize generates a succinct version of a provided text. The summary relays the most important messages of the text, and a user can configure the results with a variety of parameters to support unique use cases. It can instantly encapsulate the key points of a document and provides text summarization at scale.
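A sketch of how those configuration parameters might be assembled for a summarize call. The parameter names (length, format, extractiveness) are taken from Cohere's documentation as I recall it, so treat them as assumptions; the network call needs an API key and the cohere package, so it is kept behind the main guard:

```python
# Sketch: assembling configuration for Cohere's Summarize endpoint.
# Parameter names are assumptions; verify against the current Cohere docs.
def summarize_params(text: str, length: str = "medium",
                     fmt: str = "paragraph", extractiveness: str = "low") -> dict:
    return {
        "text": text,                     # the document to condense
        "length": length,                 # short / medium / long
        "format": fmt,                    # paragraph or bullets
        "extractiveness": extractiveness, # how closely to quote the source
    }

params = summarize_params("Cohere provides high-performance LLMs for the enterprise. ...")

if __name__ == "__main__":
    import cohere
    co = cohere.Client("<YOUR_API_KEY>")  # placeholder key
    print(co.summarize(**params).summary)
```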