Compare Models
Stanford University
Alpaca
FREE
Stanford University released an instruction-following language model called Alpaca, fine-tuned from Meta’s LLaMA 7B model. Alpaca was trained on 52K instruction-following demonstrations generated in the style of self-instruct using text-davinci-003. Alpaca aims to help the academic community engage with instruction-following models by providing an open-source model that rivals OpenAI’s GPT-3.5 (text-davinci-003). To this end, Alpaca was kept small and cheap to reproduce: fine-tuning took 3 hours on 8x A100 GPUs, at a cost of under $100. All training data and techniques have been released. The Alpaca license explicitly prohibits commercial use; the model may only be used for research and personal projects, and users must follow LLaMA’s license agreement.
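Because the training data has been released, you can see exactly how the 52K demonstrations were formatted. The sketch below (plain Python, no extra packages) reproduces the fixed prompt template from the Stanford repo; treat it as a reference sketch rather than official tooling.

```python
# Minimal sketch of the Alpaca prompt template used for the released
# 52K instruction-following demonstrations.
def alpaca_prompt(instruction: str, context: str = "") -> str:
    """Build an Alpaca-style prompt, with or without an input field."""
    if context:
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{context}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

print(alpaca_prompt("Give three tips for staying healthy."))
```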
Google
BERT
FREE
BERT (Bidirectional Encoder Representations from Transformers) was introduced in 2018 by researchers at Google AI. BERT applies natural language processing (NLP), natural language understanding (NLU), and sentiment analysis to process every word in a search query in relation to all the other words in the sentence, giving it a robust understanding of context and semantics. This pre-training process is powerful, and the learned weights can be fine-tuned with just one additional output layer to create models for a variety of NLP tasks such as question answering and sentiment analysis. The smaller BERT models can be downloaded for free from the official BERT GitHub page.
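As a minimal sketch of the “one additional output layer” idea, the snippet below loads pretrained BERT with a fresh two-class classification head via the Hugging Face transformers library (an assumed dependency; the head is randomly initialized and only becomes useful after fine-tuning):

```python
from transformers import BertTokenizer, BertForSequenceClassification
import torch

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# Pretrained encoder + one new output layer for two classes
# (e.g. positive/negative sentiment).
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

inputs = tokenizer("This movie was great!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# Class probabilities from the untrained head; meaningful only after fine-tuning.
print(logits.softmax(dim=-1))
```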
Google, Stanford University
Electra
FREE
ELECTRA (Efficiently Learning an Encoder that Classifies Token Replacements Accurately) is a transformer-based model like BERT, but it uses a different pre-training approach that is more efficient and requires less computational resources. It was created by a team of researchers from Google Research, Brain Team, and Stanford University. ELECTRA models are trained to distinguish “real” input tokens from “fake” input tokens generated by another neural network (for the more technical audience: ELECTRA uses a new pre-training task, called replaced token detection (RTD), that trains a bidirectional model while learning from all input positions). Inspired by generative adversarial networks (GANs), ELECTRA trains the model to distinguish between “real” and “fake” input data. At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the SQuAD 2.0 dataset. All three models (ELECTRA-Small, ELECTRA-Base, and ELECTRA-Large) are available on GitHub.
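You can observe the replaced token detection objective directly with the released discriminator checkpoints. A minimal sketch, assuming the Hugging Face transformers package:

```python
from transformers import ElectraTokenizer, ElectraForPreTraining
import torch

tokenizer = ElectraTokenizer.from_pretrained("google/electra-small-discriminator")
model = ElectraForPreTraining.from_pretrained("google/electra-small-discriminator")

# An implausible token stands in for one a generator network would produce:
# "flew" replaces something like "ate".
sentence = "The chef flew the meal"
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # one real-vs-replaced score per token

preds = (logits > 0).long().squeeze().tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
# 1 marks a token the discriminator believes was replaced.
print(list(zip(tokens, preds)))
```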
Technology Innovation Institute
Falcon-40B
OTHER
The Technology Innovation Institute (TII), an Abu Dhabi government-funded research institution, has introduced Falcon, a state-of-the-art series of autoregressive decoder-only language models released under the Apache 2.0 license, which permits both commercial and research use.
The family includes Falcon-40B and Falcon-7B, trained on 1 trillion tokens drawn mainly (>80%) from the RefinedWeb dataset. A special variant, Falcon-40B-Instruct, is also available and may be better suited to assistant-style tasks. Falcon-40B supports English, German, Spanish, and French, with limited capabilities in Italian, Portuguese, Polish, Dutch, Romanian, Czech, and Swedish. It can be used to generate creative text and solve complex problems, and to power chatbots, virtual assistants, language translation, content generation, sentiment analysis, and more. PyTorch 2.0 is required to use these models. TII is now calling for proposals from users worldwide to submit their most creative ideas for Falcon-40B’s deployment (https://falconllm.tii.ae/call-for-proposal.php), or you can pay to access it via Amazon SageMaker JumpStart.
A demo of Falcon-Chat is available on Hugging Face at https://huggingface.co/spaces/HuggingFaceH4/falcon-chat.
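A minimal loading sketch for Falcon-40B-Instruct, following the usage shown on its Hugging Face model card (assumes the transformers and accelerate packages, PyTorch 2.0, and enough GPU memory for the 40B weights):

```python
from transformers import AutoTokenizer, pipeline
import torch

model_id = "tiiuae/falcon-40b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
generator = pipeline(
    "text-generation",
    model=model_id,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,  # Falcon shipped custom modeling code
    device_map="auto",       # spread the 40B weights across available GPUs
)

result = generator("Write a haiku about the desert.", max_new_tokens=50)
print(result[0]["generated_text"])
```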
Technology Innovation Institute
Falcon-7B
FREE
The Technology Innovation Institute (TII), an Abu Dhabi government-funded research institution, has introduced Falcon, a state-of-the-art series of autoregressive decoder-only language models released under the Apache 2.0 license, which permits both commercial and research use. Falcon-7B needs only ~15GB of GPU memory and is therefore accessible even on consumer hardware. The model supports English, German, Spanish, and French, with limited capabilities in Italian, Portuguese, Polish, Dutch, Romanian, Czech, and Swedish. It can be used to generate creative text and solve complex problems, and to power chatbots, customer service operations, virtual assistants, language translation, content generation, and sentiment analysis.
This raw pretrained model should be fine-tuned for specific use cases (a sketch of one common approach follows below). Falcon-7B-Instruct is also available at https://huggingface.co/tiiuae/falcon-7b-instruct.
If you are looking for a model better suited to taking generic instructions in a chat format, we recommend Falcon-7B-Instruct rather than the base model.
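Since the raw model is meant to be fine-tuned, here is a hedged sketch of one common approach: parameter-efficient LoRA fine-tuning via the peft library. Note that peft/LoRA is our assumption (TII does not prescribe a method), as is targeting Falcon’s fused attention projection module.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model
import torch

model = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,  # Falcon shipped custom modeling code
)

config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["query_key_value"],  # Falcon's fused attention projection
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)

# Only the small LoRA adapters train; the 7B base weights stay frozen.
model.print_trainable_parameters()
```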
Google
FLAN-T5
FREE
If you already know T5, FLAN-T5 is simply better at everything. For the same number of parameters, these models have been fine-tuned on more than 1,000 additional tasks covering more languages, including English, German, and French. FLAN-T5 carries the Apache 2.0 license, a permissive open-source license that allows commercial use. With appropriate prompting, it can perform zero-shot NLP tasks such as text summarization, common-sense reasoning, natural language inference, question answering, sentence and sentiment classification, translation, and pronoun resolution.
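A minimal zero-shot prompting sketch using the transformers library and the flan-t5-base checkpoint (one of the released sizes):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")

# Zero-shot: the task is described entirely in the prompt.
prompt = "Translate English to German: How old are you?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```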
Google
Flan-UL2
FREE
Developed by Google, Flan-UL2 is a more powerful version of the T5 model that has been trained using Flan instruction tuning, and it is downloadable from Hugging Face. Its performance exceeds the prior Flan-T5 versions, and with the ability to reason for itself and generalize better than previous models, Flan-UL2 is a significant improvement. It is a text-to-text model with the potential to be used for summarization, question answering, reasoning, and automated content generation. Flan-UL2 has an Apache 2.0 license, a permissive open-source license that allows commercial use. If Flan-UL2’s 20B parameters are too much, consider the previous iteration, Flan-T5, which comes in five different sizes that might be more suitable for your needs.
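A hedged loading sketch for the 20B checkpoint, using the 8-bit loading option shown on the Hugging Face model card to fit a single large GPU (assumes the transformers, accelerate, and bitsandbytes packages; the chain-of-thought prompt illustrates the model’s reasoning ability):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/flan-ul2")
model = AutoModelForSeq2SeqLM.from_pretrained(
    "google/flan-ul2",
    device_map="auto",   # place layers across available devices
    load_in_8bit=True,   # quantize to fit ~20B parameters on one large GPU
)

prompt = (
    "Answer the following question by reasoning step by step. "
    "The cafeteria had 23 apples. If they used 20 for lunch and "
    "bought 6 more, how many apples do they have?"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```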