Compare Models
-
Stanford University
Alpaca
FREE
Stanford University released an instruction-following language model called Alpaca, fine-tuned from Meta’s LLaMA 7B model. Alpaca was trained on 52K instruction-following demonstrations generated in the style of self-instruct using text-davinci-003. Alpaca aims to help the academic community engage with these models by providing an open-source model that rivals OpenAI’s GPT-3.5 (text-davinci-003). To this end, Alpaca has been kept small and cheap to reproduce: fine-tuning took 3 hours on 8x A100s, which costs less than $100. All training data and techniques have been released. The Alpaca license explicitly prohibits commercial use; the model can only be used for research and personal projects, and users must follow LLaMA’s license agreement. -
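Because the Alpaca training data and prompt format were released, a record from the dataset can be turned into the instruction-following prompt used for fine-tuning. The sketch below is illustrative rather than part of the official repository; the field names follow the released alpaca_data.json, and the helper function is a hypothetical convenience.

```python
# Minimal sketch: formatting one {instruction, input, output} record from the
# released alpaca_data.json into Alpaca's instruction-following prompt template.

def build_prompt(record: dict) -> str:
    """Build the fine-tuning prompt for a single Alpaca-style record."""
    if record.get("input"):
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{record['instruction']}\n\n"
            f"### Input:\n{record['input']}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{record['instruction']}\n\n"
        "### Response:\n"
    )

example = {"instruction": "Give three tips for staying healthy.", "input": "", "output": "..."}
print(build_prompt(example))
```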
BigScience
BLOOM
FREE
BigScience Large Open-science Open-access Multilingual Language Model (BLOOM) is a transformer-based LLM. It was created by over 1,000 AI researchers to provide a free, multilingual large language model for anyone who wants to try it. BLOOM is an autoregressive Large Language Model (LLM), trained to continue text from a prompt on vast amounts of text data using industrial-scale computational resources. It can output coherent text in 46 languages and 13 programming languages, and it is free for everybody to try out. To interact with the API, you’ll need to request a token; this is done with a POST request to the server. Tokens are only valid for two weeks, after which a new one must be generated. With around 176B parameters, it is considered an alternative to OpenAI models. There is a downloadable model, and a hosted API is available. -
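As a rough illustration of the hosted-API workflow described above for BLOOM, the sketch below sends a POST request with a bearer token. The endpoint URL assumes the Hugging Face Inference API, one place where BLOOM is hosted, and the token value is a placeholder to replace with your own.

```python
# Minimal sketch: calling a hosted BLOOM inference endpoint with an access token.
# The endpoint and token handling are assumptions (Hugging Face Inference API shown).
import requests

API_URL = "https://api-inference.huggingface.co/models/bigscience/bloom"
TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder; request a token from the hosting service

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"inputs": "Translate to French: I like programming."},
)
print(response.json())
```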
Databricks
Dolly 2.0
FREE
Dolly 2.0 by Databricks is the first open-source, instruction-following Large Language Model fine-tuned on a human-generated instruction dataset and licensed for research and commercial use, which means any organization can create, own, and customize powerful LLMs that can talk to people without paying for API access or sharing data with third parties. Dolly 2.0 is a 12B-parameter language model based on the EleutherAI Pythia model family and fine-tuned exclusively on a new, high-quality, human-generated instruction-following dataset crowdsourced among Databricks employees. Dolly-v2-12b is not a state-of-the-art model, but it does exhibit surprisingly high-quality instruction-following behavior not characteristic of the foundation model on which it is based. Dolly v2 is also available in smaller sizes: dolly-v2-7b, a 6.9 billion parameter model based on pythia-6.9b, and dolly-v2-3b, a 2.8 billion parameter model based on pythia-2.8b. Dolly 2.0 can be used for brainstorming, classification, open Q&A, closed Q&A, content generation, information extraction, and summarization. You can access the Dolly 2.0 training code, the dataset, and the model weights on Hugging Face. -
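A minimal sketch of running Dolly 2.0 from Hugging Face with the transformers library is shown below; it assumes transformers and accelerate are installed and that enough GPU memory is available for the 12B weights, with the smaller checkpoints as drop-in substitutes.

```python
# Minimal sketch: running Dolly 2.0 via a transformers pipeline.
# Assumes transformers + accelerate and sufficient GPU memory for the 12B model.
import torch
from transformers import pipeline

generate_text = pipeline(
    model="databricks/dolly-v2-12b",  # swap in dolly-v2-7b or dolly-v2-3b for smaller hardware
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,           # loads the model's custom instruction-following pipeline
    device_map="auto",
)

result = generate_text("Explain the difference between open and closed Q&A in one paragraph.")
print(result[0]["generated_text"])
```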
Technology Innovation Institute
Falcon-40B
OTHER
The Technology Innovation Institute (TII), an Abu Dhabi government-funded research institution, has introduced Falcon, a state-of-the-art autoregressive decoder-only language model series released under the Apache 2.0 license, which means it can be used for commercial and research purposes.
The family includes Falcon-40B and Falcon-7B, trained on 1 trillion tokens, mainly (>80%) from the RefinedWeb dataset. A special variant, Falcon-40B-Instruct, has been made available and may be more suitable for assistant-style tasks. Falcon-40B supports English, German, Spanish and French, with limited capabilities in Italian, Portuguese, Polish, Dutch, Romanian, Czech and Swedish. It can be used for generating creative text, solving complex problems, chatbots, virtual assistants, language translation, content generation, sentiment analysis, and more. To use these models, PyTorch 2.0 is required. TII is now calling for proposals from users worldwide to submit their most creative ideas for Falcon-40B’s deployment (https://falconllm.tii.ae/call-for-proposal.php), or you can pay to access it via Amazon SageMaker JumpStart.
A demo of Falcon-Chat is available on Hugging Face at https://huggingface.co/spaces/HuggingFaceH4/falcon-chat. -
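The sketch below shows one way Falcon-40B-Instruct can be run with the transformers library; it assumes PyTorch 2.0, transformers and accelerate are installed, plus enough GPU memory to hold the 40B weights in bfloat16 (on the order of 90GB).

```python
# Minimal sketch: text generation with Falcon-40B-Instruct via transformers.
# Assumes PyTorch 2.0, transformers and accelerate, and large GPU memory (~90GB in bfloat16).
import torch
import transformers
from transformers import AutoTokenizer

model_id = "tiiuae/falcon-40b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)

sequences = pipeline(
    "Draft a polite reply to a customer asking about a delayed order.",
    max_new_tokens=120,
    do_sample=True,
    top_k=10,
    eos_token_id=tokenizer.eos_token_id,
)
print(sequences[0]["generated_text"])
```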
Technology Innovation Institute
Falcon-7B
FREE
The Technology Innovation Institute (TII), an Abu Dhabi government-funded research institution, has introduced Falcon, a state-of-the-art autoregressive decoder-only language model series released under the Apache 2.0 license, which means it can be used for commercial and research purposes. Falcon-7B only needs ~15GB of memory and is therefore accessible even on consumer hardware. The model supports English, German, Spanish and French, with limited capabilities in Italian, Portuguese, Polish, Dutch, Romanian, Czech and Swedish. It can be used for generating creative text, solving complex problems, chatbots, customer service operations, virtual assistants, language translation, content generation, and sentiment analysis.
This raw pretrained model should be fine-tuned for specific use cases. Falcon-7B-Instruct is also available at https://huggingface.co/tiiuae/falcon-7b-instruct.
If you are looking for a model better suited to taking generic instructions in a chat format, Falcon-7B-Instruct is recommended rather than the base model. -
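For Falcon-7B-Instruct, a minimal sketch along the same lines loads the model in bfloat16 on a single GPU, roughly the ~15GB footprint mentioned above; transformers, accelerate and PyTorch 2.0 are assumed to be installed.

```python
# Minimal sketch: loading Falcon-7B-Instruct on a single GPU in bfloat16.
# Assumes transformers + accelerate and PyTorch 2.0; ~15GB of GPU memory.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiiuae/falcon-7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)

inputs = tokenizer("Write a haiku about sand dunes.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=60, do_sample=True, top_k=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```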
Aleph Alpha
Luminous-base
$0.0055
Aleph Alpha offer the Luminous family of large language models. Luminous models vary in size, price and parameters. Luminous-base speaks and writes 5 languages (English, French, German, Italian and Spanish), can perform information extraction and language simplification, and has multimodal image description capability. Aleph Alpha is targeting “critical enterprises”: organizations like law firms, healthcare providers and banks, which rely heavily on trustworthy, accurate information. You can try the Aleph Alpha models for free: go to the Jumpstart page on their site and click through the examples on Classification & Labelling, Generation, Information Extraction, Translation & Conversion and Multimodal. Aleph Alpha are based in Europe, allowing customers with sensitive data to process their information in compliance with European regulations for data protection and security on a sovereign, European computing infrastructure. -
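Beyond the Jumpstart playground, Luminous models can also be called programmatically. The sketch below assumes the aleph_alpha_client Python package and an API token from the Aleph Alpha dashboard; the exact class and method names may differ between package versions.

```python
# Minimal sketch: a completion request against Luminous-base.
# Assumes the aleph_alpha_client package and a valid API token; class and
# method names follow recent versions of the client and may vary.
from aleph_alpha_client import Client, CompletionRequest, Prompt

client = Client(token="YOUR_API_TOKEN")  # placeholder token

request = CompletionRequest(
    prompt=Prompt.from_text(
        "Simplify this sentence for a general audience: "
        "The model exhibits multilingual generative capabilities."
    ),
    maximum_tokens=64,
)
response = client.complete(request, model="luminous-base")
print(response.completions[0].completion)
```

The same call can target the larger Luminous sizes by changing the model argument, for example to "luminous-extended" or "luminous-supreme".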
Aleph Alpha
Luminous-extended
$0.0082
Aleph Alpha’s Luminous-extended is the second-largest model and is faster and cheaper than Luminous-supreme. The model can perform information extraction and language simplification, and has multimodal image description capability. You can try the Aleph Alpha models with predefined examples for free: go to the Jumpstart page on their site and click through the examples on Classification & Labelling, Generation, Information Extraction, Translation & Conversion and Multimodal. Aleph Alpha are based in Europe, which allows customers with sensitive data to process their information in compliance with European regulations for data protection and security on a sovereign, European computing infrastructure. -
Aleph Alpha
Luminous-supreme
$0.0319
Luminous-supreme is the largest, but also the most expensive, Aleph Alpha Luminous model. Supreme can do all the tasks of the smaller models (it speaks and writes 5 languages: English, French, German, Italian and Spanish, and can undertake information extraction, language simplification, semantic comparison of texts, document summarization, Q&A tasks and more) and is well suited to creative writing. You can try out the Aleph Alpha models for free: go to the Jumpstart page on their site and click through the examples on Classification & Labelling, Generation, Information Extraction, Translation & Conversion and Multimodal. -
Aleph Alpha
Luminous-supreme-control
$0.0398
Supreme-control is its own model, although it is based on Luminous-supreme and is optimized for a certain set of tasks. The Luminous models differ in complexity and ability, and this model excels at the tasks it has been optimized for: question answering and natural language inference. You can try out the Aleph Alpha models with predefined examples for free: go to the Jumpstart page on their site and click through the examples on Classification & Labelling, Generation, Information Extraction, Translation & Conversion and Multimodal. -
Stability AI
StableLM-Base-Alpha-7B
FREE
Stability AI released a new open-source language model, StableLM. The Alpha version of the model is available in 3 billion and 7 billion parameter sizes. StableLM is trained on a new experimental dataset built on The Pile, but three times larger, with 1.5 trillion tokens of content. The richness of this dataset gives StableLM surprisingly high performance in conversational and coding tasks, despite its small size. The models are available on GitHub and on Hugging Face, and developers can freely inspect, use, and adapt the StableLM base models for commercial or research purposes, subject to the terms of the CC BY-SA-4.0 license.
-
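A minimal sketch of loading StableLM-Base-Alpha-7B from Hugging Face with transformers is shown below; it assumes transformers and accelerate are installed and roughly 16GB of GPU memory in half precision, and the checkpoint name follows the Hugging Face listing.

```python
# Minimal sketch: generation with StableLM-Base-Alpha-7B via transformers.
# Assumes transformers + accelerate and a GPU with enough memory for the
# 7B weights in float16 (roughly 16GB).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stabilityai/stablelm-base-alpha-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

inputs = tokenizer("Open-source language models are useful because", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```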
Yandex
YaLM
FREE
YaLM 100B is a GPT-like neural network for generating and processing text. It can be used freely by developers and researchers from all over the world. It took 65 days to train the model on a cluster of 800 A100 graphics cards, using 1.7 TB of online texts, books, and countless other sources in both English and Russian. Researchers and developers can use this corporate-scale solution to solve the most complex problems associated with natural language processing. Training details and best practices on acceleration and stabilization can be found in articles on Medium (English) and Habr (Russian). The model is published under the Apache 2.0 license, which permits both research and commercial use.