Compare Models
-
Google
Bard
FREE: Google's Bard is now powered by PaLM 2, the powerful new LLM launched in May 2023. PaLM 2 is trained on a massive dataset of text and code. Bard can generate text, translate languages, write different kinds of creative content, and answer your questions in an informative way. Bard is also programmed to use the web to find the most recent answers to questions: when you ask Bard a question, it not only draws on its knowledge of the world but also searches the internet for the most recent information on the topic, so it can give you the most accurate and up-to-date answer possible (very cool). The exact billing structure for Bard is still under development (it is free to try at the moment), but you will likely be able to purchase tokens in bulk at a discounted price. According to Google, you may also be able to use tokens you have earned through other means, such as completing surveys or participating in beta testing programs. -
BigScience
BLOOM
FREE: BigScience Large Open-science Open-access Multilingual Language Model (BLOOM) is a transformer-based LLM. Over 1,000 AI researchers created it to provide a free, multilingual large language model for anyone who wants to try one. BLOOM is an autoregressive LLM, trained to continue text from a prompt on vast amounts of text data using industrial-scale computational resources. It can output coherent text in 46 languages and 13 programming languages, and it is free for anyone to try. To interact with the hosted API, you'll need to request an access token; this is done with a POST request to the server, and tokens are only valid for two weeks, after which a new one must be generated. With around 176B parameters, it is considered an alternative to OpenAI models. Both a downloadable model and a hosted API are available; a minimal example of calling the hosted API follows.
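A minimal sketch of querying BLOOM through the Hugging Face Inference API (the model ID bigscience/bloom is real; the token value below is a placeholder you would create in your own account, and payload details may change over time):

```python
import requests

API_TOKEN = "hf_xxx"  # placeholder: generate a token in your Hugging Face account settings
API_URL = "https://api-inference.huggingface.co/models/bigscience/bloom"

def query_bloom(prompt: str) -> str:
    """Ask the hosted BLOOM endpoint to continue a text prompt."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"inputs": prompt, "parameters": {"max_new_tokens": 50}},
        timeout=60,
    )
    response.raise_for_status()
    return response.json()[0]["generated_text"]

print(query_bloom("The capital of France is"))
```
-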
ChatGLM
ChatGLM-6B
FREE: Researchers at Tsinghua University in China have developed the ChatGLM series of models, which offer performance comparable to models such as GPT-3 and BLOOM. ChatGLM-6B is an open bilingual language model (trained on Chinese and English) based on the General Language Model (GLM) framework, with 6.2B parameters. With quantization, users can deploy it locally on consumer-grade graphics cards (only 6GB of GPU memory is required at the INT4 quantization level). The following models are available: ChatGLM-130B (an open-source LLM), ChatGLM-100B (not open source but available through invite-only access), and ChatGLM-6B (a lightweight open-source alternative). ChatGLM LLMs are available under an Apache-2.0 license that allows commercial use. We have included the link to the Hugging Face page where you can try the ChatGLM-6B chatbot for free.
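A minimal sketch of running ChatGLM-6B locally with INT4 quantization via Hugging Face Transformers; the quantize() and chat() helpers come from the custom code shipped with the THUDM/chatglm-6b checkpoint, so treat the exact interface as subject to change:

```python
from transformers import AutoModel, AutoTokenizer

# ChatGLM ships its own modeling code, hence trust_remote_code=True.
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
model = (
    AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
    .quantize(4)  # INT4 quantization: fits in roughly 6GB of GPU memory
    .half()
    .cuda()
    .eval()
)

# The checkpoint exposes a chat() helper that tracks conversation history.
response, history = model.chat(tokenizer, "Hello! What can you do?", history=[])
print(response)
```
-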
OpenAI
ChatGPT (Web Browser Version)
FREE: The ChatGPT Web Browser Version is an accessible, powerful online language model. The chatbot is designed to provide users with a user-friendly interface that facilitates interaction without any specialized programming or machine learning knowledge. Users can leverage ChatGPT for a wide range of applications, including tutoring in academic subjects, generating creative content, drafting and editing text, providing personalized recommendations, translating languages, and even helping with programming. Businesses can use it for automating customer service, generating marketing content, and providing personalized user experiences. ChatGPT is powered by GPT-3.5-turbo by default and is free to try. If you are a paying customer and subscribe to ChatGPT Plus, you can switch the model to GPT-4 before you start a chat. Currently, the ChatGPT models support several languages, including English, Spanish, French, German, Portuguese, Italian, and Dutch. New features for ChatGPT Plus users have just been announced, including a web-browsing feature that provides up-to-date information (prior to the update, ChatGPT was limited in what it could answer, as it was only trained on data up to 2021). ChatGPT Plus users can also access third-party plug-ins for web services like Expedia, Kayak, and Instacart. With these plug-ins, users can prompt ChatGPT to perform tasks on specific websites.
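The GPT-3.5-turbo model behind the free web version can also be called programmatically; below is a minimal sketch using the openai Python package's chat completions interface as it existed in 2023 (the key is a placeholder, and the client library has since been reorganized):

```python
import openai

openai.api_key = "sk-..."  # placeholder: use your own OpenAI API key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # API customers with GPT-4 access can request "gpt-4" instead
    messages=[
        {"role": "system", "content": "You are a helpful tutor."},
        {"role": "user", "content": "Explain photosynthesis in two sentences."},
    ],
    temperature=0.7,
)
print(response["choices"][0]["message"]["content"])
```
-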
Anthropic
Claude 2 (Web Browser Version)
FREE: Anthropic's Claude 2 is now available to the public if you're in the US or UK. For the web browser version, just click "Talk to Claude," and you'll be prompted to provide an email address. After you confirm the address you enter, you'll be ready to go. Claude 2 scored 76.5 percent on the multiple-choice section of the Bar exam and in the 90th percentile on the reading and writing portion of the GRE. Its coding skills have also improved over its predecessor's, scoring 71.2 percent on a Python coding test compared to Claude's 56 percent. While the Google-backed Anthropic initially launched Claude in March, the chatbot was only available to businesses by request or as an app in Slack. With Claude 2, Anthropic is building upon the chatbot's existing capabilities with a number of improvements. -
Databricks
Dolly 2.0
FREE: Dolly 2.0 by Databricks is the first open-source, instruction-following Large Language Model fine-tuned on a human-generated instruction dataset and licensed for research and commercial use. This means any organization can create, own, and customize powerful LLMs that can talk to people without paying for API access or sharing data with third parties. Dolly 2.0 is a 12B-parameter language model based on the EleutherAI Pythia model family and fine-tuned exclusively on a new, high-quality, human-generated instruction-following dataset (crowdsourced among Databricks employees – so cool). Dolly-v2-12b is not a state-of-the-art model, but it does exhibit surprisingly high-quality instruction-following behavior not characteristic of the foundation model on which it is based. Dolly v2 is also available in smaller sizes: dolly-v2-7b, a 6.9-billion-parameter model based on pythia-6.9b, and dolly-v2-3b, a 2.8-billion-parameter model based on pythia-2.8b. Dolly 2.0 can be used for brainstorming, classification, open Q&A, closed Q&A, content generation, information extraction, and summarization. You can access the Dolly 2.0 training code, the dataset, and the model weights on Hugging Face.
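A minimal sketch of loading Dolly 2.0 for generation with the Hugging Face transformers pipeline, following the databricks/dolly-v2-12b model card (a GPU with enough memory plus the accelerate package is assumed; the smaller dolly-v2-3b checkpoint can be swapped in for lighter hardware):

```python
import torch
from transformers import pipeline

# The model card ships its own instruction-following pipeline code,
# hence trust_remote_code=True.
generate_text = pipeline(
    model="databricks/dolly-v2-12b",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)

result = generate_text("Explain the difference between a list and a tuple in Python.")
print(result[0]["generated_text"])
```
-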
Google, Stanford University
Electra
FREE: ELECTRA (Efficiently Learning an Encoder that Classifies Token Replacements Accurately) is a transformer-based model like BERT, but it uses a different pre-training approach that is more efficient and requires fewer computational resources. It was created by a team of researchers from Google Research, Brain Team, and Stanford University. Inspired by generative adversarial networks (GANs), ELECTRA models are trained to distinguish "real" input tokens from "fake" input tokens generated by another neural network (for the more technical audience: ELECTRA uses a new pre-training task, called replaced token detection (RTD), that trains a bidirectional model while learning from all input positions). At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the SQuAD 2.0 dataset. Go to GitHub to access the three models (ELECTRA-Small, ELECTRA-Base, and ELECTRA-Large).
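A minimal sketch of the replaced-token-detection idea using the pretrained discriminator published on Hugging Face as google/electra-small-discriminator; one token is replaced by hand to stand in for a generator's fake, and the discriminator flags which positions look replaced:

```python
import torch
from transformers import ElectraForPreTraining, ElectraTokenizerFast

tokenizer = ElectraTokenizerFast.from_pretrained("google/electra-small-discriminator")
discriminator = ElectraForPreTraining.from_pretrained("google/electra-small-discriminator")

# "cooked" is swapped for the implausible "laughed" to simulate a generator's fake token.
corrupted = "the chef laughed the meal"

inputs = tokenizer(corrupted, return_tensors="pt")
with torch.no_grad():
    logits = discriminator(**inputs).logits  # one score per token; higher means "replaced"

flags = (logits > 0).int().squeeze().tolist()
for token, flag in zip(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]), flags):
    print(f"{token:>10}  {'REPLACED' if flag else 'original'}")
```
-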
Google
FLAN-T5
FREE: If you already know T5, FLAN-T5 is simply better at everything. For the same number of parameters, these models have been fine-tuned on more than 1,000 additional tasks covering more languages (including English, German, and French). It has an Apache-2.0 license, a permissive open-source license that allows commercial use. With appropriate prompting, it can perform zero-shot NLP tasks such as text summarization, common-sense reasoning, natural language inference, question answering, sentence and sentiment classification, translation, and pronoun resolution.
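A minimal zero-shot prompting sketch with the flan-t5-base checkpoint from Hugging Face (any of the FLAN-T5 sizes can be substituted; the base model runs fine on CPU):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")

# Instruction-style prompt: no task-specific fine-tuning is needed.
prompt = "Translate to German: The weather is nice today."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
-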
Google
Flan-UL2
FREE: Developed by Google, Flan-UL2 is a more powerful version of the T5 model that has been trained using Flan, and it is downloadable from Hugging Face. Its performance exceeds that of the prior Flan-T5 versions, and because it reasons and generalizes better than the previous models, Flan-UL2 is a clear improvement. Flan-UL2 is a text-to-text model that can be used for tasks such as summarization, question answering, reasoning, and automated content generation. Flan-UL2 has an Apache-2.0 license, which is a permissive open-source license that allows commercial use. If Flan-UL2's 20B parameters are too much, consider the previous iteration, Flan-T5, which comes in five different sizes and might be more suitable for your needs.
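Because the single 20B checkpoint is large, a common way to experiment with it is 8-bit loading; a minimal sketch, assuming a GPU with enough memory and the accelerate and bitsandbytes packages installed:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/flan-ul2")
# load_in_8bit shrinks the 20B checkpoint enough to fit on a single large GPU.
model = AutoModelForSeq2SeqLM.from_pretrained(
    "google/flan-ul2", device_map="auto", load_in_8bit=True
)

inputs = tokenizer("Answer the question: What is the capital of Italy?", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
-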
Stability AI
StableLM-Base-Alpha-7B
FREE: Stability AI released a new open-source language model, StableLM. The Alpha version of the model is available in 3-billion- and 7-billion-parameter sizes. StableLM is trained on a new experimental dataset built on The Pile, but three times larger, with 1.5 trillion tokens of content. The richness of this dataset gives StableLM surprisingly high performance in conversational and coding tasks despite its small size. The models are now available on GitHub and on Hugging Face, and developers can freely inspect, use, and adapt the StableLM base models for commercial or research purposes, subject to the terms of the CC BY-SA-4.0 license.
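A minimal sketch of generating text with the 7B base checkpoint through Hugging Face transformers (half precision and a CUDA GPU are assumed; the 3B checkpoint can be substituted on smaller hardware):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("stabilityai/stablelm-base-alpha-7b")
model = AutoModelForCausalLM.from_pretrained(
    "stabilityai/stablelm-base-alpha-7b", torch_dtype=torch.float16
).cuda()

# Base models are plain completion models, so prompt them with text to continue.
inputs = tokenizer("Open-source language models are useful because", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```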
-
LMSYS Org
Vicuna-13B
FREE: Vicuna-13B is an open-source chatbot developed by a team of researchers from UC Berkeley, CMU, Stanford, MBZUAI, and UC San Diego. The chatbot was trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT. Both 13B- and 7B-parameter models are available on Hugging Face.
Vicuna-13B achieves more than 90% of the quality of OpenAI's ChatGPT and Google Bard while outperforming other models like LLaMA and Stanford Alpaca in more than 90% of cases. The code, the weights, and an online demo are publicly available for non-commercial use. Here is a link to learn more about how it compares to other models: https://lmsys.org/blog/2023-03-30-vicuna/.
To use this model, you first need to obtain the LLaMA weights, convert them into the Hugging Face format, and then apply the Vicuna delta weights released by the team. The total cost of training Vicuna-13B was around $300.
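Once the merged Vicuna-13B weights exist locally, generation works like any other causal LM in transformers; a minimal sketch (the local path is a hypothetical placeholder for wherever you saved the converted model, and the prompt is a simplified version of Vicuna's conversational template):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical local path produced by converting LLaMA and applying the Vicuna delta.
MODEL_PATH = "/models/vicuna-13b"

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH, torch_dtype=torch.float16, device_map="auto"
)

prompt = "USER: Give me three tips for writing clear documentation.\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```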