Compare Models
-
BigScience
BLOOM
FREE
BigScience Large Open-science Open-access Multilingual Language Model (BLOOM) is a transformer-based LLM. More than 1,000 AI researchers created it to provide a free, multilingual large language model for anyone who wants to try one. BLOOM is an autoregressive LLM, trained to continue text from a prompt on vast amounts of text data using industrial-scale computational resources. It can output coherent text in 46 natural languages and 13 programming languages, and it is free for anyone to try. To interact with the hosted API, you'll need to request a token, which is done with a POST request to the server. Tokens are only valid for two weeks, after which a new one must be generated. With around 176B parameters, it is considered an alternative to OpenAI models. Both a downloadable model and a hosted API are available. -
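The hosted API described above follows the usual bearer-token pattern: you attach your token to a POST request carrying the prompt as JSON. A minimal sketch using only the standard library (the Hugging Face Inference API URL is the commonly documented endpoint for BLOOM; the token value is a placeholder):

```python
import json
import urllib.request

# Hosted inference endpoint for BLOOM (assumed Hugging Face Inference API URL).
API_URL = "https://api-inference.huggingface.co/models/bigscience/bloom"

def build_request(token, prompt):
    """Build an authenticated POST request carrying the prompt as JSON."""
    headers = {
        "Authorization": f"Bearer {token}",  # token obtained from the provider
        "Content-Type": "application/json",
    }
    body = json.dumps({"inputs": prompt}).encode("utf-8")
    return urllib.request.Request(API_URL, data=body, headers=headers)

def complete(token, prompt):
    """Send the request and decode the JSON response (requires network access)."""
    with urllib.request.urlopen(build_request(token, prompt)) as resp:
        return json.loads(resp.read())
```

Since tokens expire after two weeks, it is worth keeping the token out of the code (e.g. in an environment variable) so it can be rotated without edits.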
Anthropic
Claude 2 – API version
$0.03268
Anthropic’s Claude 2 has a much larger context window (launching at 100K tokens, with plans to go up to 200K), which makes it possible to feed it entire books or have it generate entire books at once. Claude 2 scored 76.5 percent on the multiple-choice section of the Bar exam and in the 90th percentile on the reading and writing portion of the GRE. Its coding skills have improved over its predecessor, scoring 71.2 percent on a Python coding test compared to Claude’s 56 percent. Claude 2 is also 63% cheaper on inputs and 46% cheaper on outputs than the GPT-4 8K context version (the default version of the OpenAI model). -
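Access to the API version is by HTTPS with an API key and Anthropic's Human/Assistant prompt format. A minimal request-building sketch with the standard library (the endpoint, version header, and field names reflect Anthropic's text-completions API as publicly documented at the time; treat them as assumptions, and the key value is a placeholder):

```python
import json
import urllib.request

# Anthropic's text-completions endpoint (assumed from the public docs).
API_URL = "https://api.anthropic.com/v1/complete"

def build_request(api_key, user_message, max_tokens=256):
    """Build a Claude 2 completion request using the Human/Assistant prompt format."""
    payload = {
        "model": "claude-2",
        # Claude's completion API expects alternating Human/Assistant turns:
        "prompt": f"\n\nHuman: {user_message}\n\nAssistant:",
        "max_tokens_to_sample": max_tokens,
    }
    headers = {
        "x-api-key": api_key,                 # issued by Anthropic
        "anthropic-version": "2023-06-01",    # API version pin (assumption)
        "content-type": "application/json",
    }
    body = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(API_URL, data=body, headers=headers)
```

The large context window means `user_message` can be very long (book-length input), which is the main practical difference from smaller-context models.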
Anthropic
Claude 2 (Web Browser Version)
FREE
Anthropic’s Claude 2 is now available to the public if you’re in the US or UK. For the web browser version, just click “Talk to Claude,” and you’ll be prompted to provide an email address. After you confirm the address, you’ll be ready to go. Claude 2 scored 76.5 percent on the multiple-choice section of the Bar exam and in the 90th percentile on the reading and writing portion of the GRE. Its coding skills have improved over its predecessor, scoring 71.2 percent on a Python coding test compared to Claude’s 56 percent. While the Google-backed Anthropic initially launched Claude in March, the chatbot was only available to businesses by request or as an app in Slack. With Claude 2, Anthropic is building on the chatbot’s existing capabilities with a number of improvements. -
Anthropic
Claude Instant
$0.00551
Claude Instant is a faster and less expensive model than Claude-v1 that can handle casual dialog, text analysis and summarization, and document Q&A. Optimized for low latency, it handles high-throughput use cases at lower cost than the other models in the Claude family. Anthropic is an AI startup founded by former OpenAI employees; it specializes in developing general AI systems and language models, with a company ethos of responsible AI usage. API access can be gained after application. -
Anthropic
Claude v1
$0.03268
A powerful model, Claude-v1 can handle sophisticated dialog, creative content generation, and detailed instructions. Optimized for superior performance on tasks that require complex reasoning, Claude is Anthropic’s best-in-class offering. API access can be gained after application. -
Aleph Alpha
Luminous-base
$0.0055
Aleph Alpha offers the Luminous family of large language models, which vary in size, price, and parameter count. Luminous-base speaks and writes five languages (English, French, German, Italian, and Spanish) and can perform information extraction and language simplification, and has multimodal image description capability. Aleph Alpha is targeting “critical enterprises”: organizations like law firms, healthcare providers, and banks, which rely heavily on trustworthy, accurate information. You can try Aleph Alpha models for free: go to the Jumpstart page on their site and click through the examples on Classification & Labelling, Generation, Information Extraction, Translation & Conversion, and Multimodal. Aleph Alpha is based in Europe, allowing customers with sensitive data to process their information in compliance with European data protection and security regulations on a sovereign, European computing infrastructure. -
Aleph Alpha
Luminous-extended
$0.0082
Luminous-extended is Aleph Alpha’s second-largest model, faster and cheaper than Luminous-supreme. The model can perform information extraction and language simplification, and has multimodal image description capability. You can try Aleph Alpha models with predefined examples for free: go to the Jumpstart page on their site and click through the examples on Classification & Labelling, Generation, Information Extraction, Translation & Conversion, and Multimodal. Aleph Alpha is based in Europe, which allows customers with sensitive data to process their information in compliance with European data protection and security regulations on a sovereign, European computing infrastructure. -
Aleph Alpha
Luminous-supreme
$0.0319
Luminous-supreme is the largest, and most expensive, Aleph Alpha Luminous model. Supreme can do all the tasks of the smaller models (it speaks and writes five languages: English, French, German, Italian, and Spanish, and can perform information extraction, language simplification, semantic text comparison, document summarization, Q&A tasks, and more) and is well suited for creative writing. You can try the Aleph Alpha models for free: go to the Jumpstart page on their site and click through the examples on Classification & Labelling, Generation, Information Extraction, Translation & Conversion, and Multimodal. -
Aleph Alpha
Luminous-supreme-control
$0.0398
Supreme-control is its own model, although it is based on Luminous-supreme and is optimized for a certain set of tasks. The models differ in complexity and ability, but this one excels when optimized for question answering and natural language inference. You can try the Aleph Alpha models with predefined examples for free: go to the Jumpstart page on their site and click through the examples on Classification & Labelling, Generation, Information Extraction, Translation & Conversion, and Multimodal. -
Amazon
SageMaker
FREE
Amazon SageMaker enables developers to create, train, and deploy machine-learning (ML) models in the cloud, as well as on embedded systems and edge devices. Amazon SageMaker JumpStart helps you get started with machine learning quickly and easily. The solutions are fully customizable and support one-click deployment and fine-tuning of more than 150 popular open-source models, such as natural language processing, object detection, and image classification models, which can help with extracting and analyzing data, fraud detection, churn prediction, and personalized recommendations. The Hugging Face LLM Inference DLCs on Amazon SageMaker support the following models: BLOOM / BLOOMZ, MT0-XXL, Galactica, SantaCoder, GPT-NeoX 20B (joi, pythia, lotus, rosey, chip, RedPajama, open assistant), FLAN-T5-XXL (T5-11B), Llama (vicuna, alpaca, koala), StarCoder / SantaCoder, and Falcon 7B / Falcon 40B. Hugging Face’s LLM DLC is a new purpose-built inference container to easily deploy LLMs in a secure and managed environment. -
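Deploying one of the listed models with the Hugging Face LLM Inference DLC mostly comes down to pointing the container at a model ID via environment variables. A minimal configuration sketch (the instance type and token limits are illustrative assumptions; in a real deployment the `env` dict would be passed to `HuggingFaceModel(...).deploy(...)` from the `sagemaker` SDK):

```python
# Sketch of the container environment the Hugging Face LLM Inference DLC
# reads at startup. HF_MODEL_ID and SM_NUM_GPUS are the documented knobs;
# the surrounding dict shape is just a convenient way to group settings.

def llm_endpoint_config(model_id, instance_type="ml.g5.2xlarge", num_gpus=1):
    """Group the deployment settings for one hosted LLM endpoint."""
    return {
        "instance_type": instance_type,        # GPU instance hosting the endpoint
        "env": {
            "HF_MODEL_ID": model_id,           # e.g. "tiiuae/falcon-7b"
            "SM_NUM_GPUS": str(num_gpus),      # degree of tensor parallelism
            "MAX_INPUT_LENGTH": "1024",        # illustrative request limits
            "MAX_TOTAL_TOKENS": "2048",
        },
    }
```

Larger models in the list (e.g. Falcon 40B) would raise `num_gpus` and move to a bigger instance type, while the rest of the configuration stays the same.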
StableLM
StableLM-Base-Alpha -7B
FREE
Stability AI released a new open-source language model, StableLM. The Alpha version of the model is available in 3 billion and 7 billion parameter sizes. StableLM is trained on a new experimental dataset built on The Pile, but three times larger, with 1.5 trillion tokens of content. The richness of this dataset gives StableLM surprisingly high performance in conversational and coding tasks, despite its small size. The models are now available on GitHub and on Hugging Face, and developers can freely inspect, use, and adapt the StableLM base models for commercial or research purposes, subject to the terms of the CC BY-SA-4.0 license.
-
Yandex
YaLM
FREE
YaLM 100B is a GPT-like neural network for generating and processing text that can be used freely by developers and researchers from all over the world. It took 65 days to train the model on a cluster of 800 A100 graphics cards and 1.7 TB of online texts, books, and countless other sources in both English and Russian. Researchers and developers can use this corporate-scale solution to solve the most complex problems associated with natural language processing. Training details and best practices on acceleration and stabilization can be found in articles on Medium (English) and Habr (Russian). The model is published under the Apache 2.0 license, which permits both research and commercial use.