Compare Models

  • Stanford University

    Alpaca

    FREE
    Stanford University released an instruction-following language model called Alpaca, fine-tuned from Meta’s LLaMA 7B model. Alpaca was trained on 52K instruction-following demonstrations generated in the style of self-instruct using text-davinci-003. Alpaca aims to help the academic community engage with instruction-following models by providing an open source model that rivals OpenAI’s GPT-3.5 (text-davinci-003). To this end, Alpaca has been kept small and cheap to reproduce: fine-tuning took 3 hours on 8x A100 GPUs, which costs less than $100. All training data and techniques have been released. The Alpaca license explicitly prohibits commercial use; the model may only be used for research and personal projects, and users must follow LLaMA’s license agreement.
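    Because the fine-tuning data follows the self-instruct format, Alpaca-style models are prompted with a fixed template. A minimal sketch of that template follows; the template string comes from the released Alpaca training code, while the model itself is left abstract, since Stanford released the data and recipe rather than the weights.

    ```python
    # The instruction template used in the released Alpaca fine-tuning code.
    # Any LLaMA model fine-tuned on the Alpaca data expects prompts in this shape.
    ALPACA_TEMPLATE = (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        "### Instruction:\n{instruction}\n\n### Response:\n"
    )

    prompt = ALPACA_TEMPLATE.format(
        instruction="Summarize the Alpaca project in one sentence."
    )
    print(prompt)  # feed this string to an Alpaca-style model's generate() call
    ```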
  • ChatGLM

    ChatGLM-6B

    FREE
    Researchers at Tsinghua University in China have developed the ChatGLM series of models, which offer performance comparable to models such as GPT-3 and BLOOM. ChatGLM-6B is an open bilingual language model trained on Chinese and English. It is based on the General Language Model (GLM) framework and has 6.2B parameters. With quantization, users can deploy it locally on consumer-grade graphics cards (only 6GB of GPU memory is required at the INT4 quantization level). The following models are available: GLM-130B (an open source LLM), ChatGLM-130B (not open source but available through invite-only access), and ChatGLM-6B (a lightweight open source alternative). ChatGLM LLMs are available under an Apache-2.0 license that allows commercial use. We have included the link to the Hugging Face page where you can try the ChatGLM-6B chatbot for free.
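    As a sketch of the local deployment path, the THUDM/chatglm-6b repository on Hugging Face ships custom modeling code that exposes a chat() helper, and the quantize(4) call reflects the INT4 option mentioned above. The exact API follows the model card at the time of writing and may change between releases.

    ```python
    from transformers import AutoModel, AutoTokenizer

    # trust_remote_code=True is required: ChatGLM-6B ships its own modeling code.
    tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
    model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)

    # INT4 quantization so the model fits in roughly 6GB of GPU memory.
    model = model.half().quantize(4).cuda().eval()

    response, history = model.chat(tokenizer, "Hello! Please introduce yourself.", history=[])
    print(response)
    ```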
  • Google, Stanford University

ELECTRA

    FREE
    ELECTRA (Efficiently Learning an Encoder that Classifies Token Replacements Accurately) is a transformer-based model like BERT, but it uses a different pre-training approach that is more efficient and requires fewer computational resources. It was created by a team of researchers from Google Research, Brain Team, and Stanford University. Inspired by generative adversarial networks (GANs), ELECTRA is trained to distinguish “real” input tokens from “fake” input tokens generated by another neural network; for the more technical audience, this pre-training task is called replaced token detection (RTD), and it trains a bidirectional model while learning from all input positions. At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, it achieves state-of-the-art results on the SQuAD 2.0 dataset. Go to GitHub to access the three models (ELECTRA-Small, ELECTRA-Base, and ELECTRA-Large).
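    To make the RTD objective concrete, the pre-trained discriminator can be queried directly through the Hugging Face Transformers library: a positive logit at a position means the model believes that token was replaced. A minimal sketch, assuming the google/electra-small-discriminator checkpoint:

    ```python
    import torch
    from transformers import ElectraForPreTraining, ElectraTokenizerFast

    name = "google/electra-small-discriminator"
    model = ElectraForPreTraining.from_pretrained(name)
    tokenizer = ElectraTokenizerFast.from_pretrained(name)

    # "jumped over" has been replaced with the implausible "laughed at".
    sentence = "The quick brown fox laughed at the lazy dog"
    inputs = tokenizer(sentence, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits[0]

    # Positive logits mark tokens the discriminator flags as replaced.
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    for token, score in zip(tokens, logits):
        print(f"{token:>10}  {'replaced?' if score > 0 else 'original'}")
    ```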

  • Google

    FLAN-T5

    FREE
    If you already know T5, FLAN-T5 is simply better at everything. For the same number of parameters, these models have been fine-tuned on more than 1,000 additional tasks covering more languages; the fine-tuning mixture includes English, German, and French, among others. FLAN-T5 has an Apache-2.0 license, which is a permissive open source license that allows for commercial use. With appropriate prompting, it can perform zero-shot NLP tasks such as text summarization, common-sense reasoning, natural language inference, question answering, sentence and sentiment classification, translation, and pronoun resolution.
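    As an example of that zero-shot prompting, the smaller checkpoints run comfortably even without a GPU via the Hugging Face Transformers library. A minimal sketch, assuming the google/flan-t5-base checkpoint:

    ```python
    from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
    model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")

    # Zero-shot natural language inference, phrased as a plain instruction.
    prompt = (
        "Premise: A man is playing a guitar on stage.\n"
        "Hypothesis: A person is performing music.\n"
        "Does the premise entail the hypothesis? Answer yes or no."
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=10)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
    ```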
  • Google

    Flan-UL2

    FREE
    Developed by Google, Flan-UL2 is a more powerful version of the T5-style UL2 model that has been instruction-tuned using the Flan task collection, and it is downloadable from Hugging Face. Its performance exceeds that of the prior Flan-T5 releases, and it follows instructions and generalizes to unseen tasks better than the previous models, making Flan-UL2 a notable improvement. As a text-to-text model, it can be used for tasks such as summarization, question answering, translation, and multi-step reasoning. Flan-UL2 has an Apache-2.0 license, which is a permissive open source license that allows for commercial use.
    If Flan-UL2’s 20B parameters are too much, consider the previous iteration, Flan-T5, which comes in five different sizes and might be more suitable for your needs; a sketch of loading the larger model follows.
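    At 20B parameters, the full-precision weights will not fit on most single GPUs, so one common workaround is 8-bit loading. A sketch, assuming the google/flan-ul2 checkpoint and recent transformers, accelerate, and bitsandbytes installs:

    ```python
    from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("google/flan-ul2")
    # load_in_8bit roughly halves the ~40GB fp16 footprint, and
    # device_map="auto" spreads layers across the available GPUs.
    model = AutoModelForSeq2SeqLM.from_pretrained(
        "google/flan-ul2", load_in_8bit=True, device_map="auto"
    )

    prompt = "Answer step by step: can a dog drive a car?"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
    ```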
  • EleutherAI

    GPT-J

    FREE
    EleutherAI is a leading non-profit research institute focused on large-scale artificial intelligence research. EleutherAI has trained and released several LLMs, along with the codebases used to train them. GPT-J can be used for code generation, building a chatbot, story writing, language translation, and search. GPT-J learns an inner representation of the English language that can be used to extract features useful for downstream tasks. The model is best at what it was pretrained for: generating text from a prompt. EleutherAI has a web page where you can try GPT-J, or you can run it on Google Colab or through the Hugging Face Transformers library.
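    Running GPT-J through the Transformers library is a one-liner with the text-generation pipeline. A minimal sketch, assuming the EleutherAI/gpt-j-6B checkpoint (note the fp32 weights are roughly 24GB, so the first download is large):

    ```python
    from transformers import pipeline

    # Loads the GPT-J 6B checkpoint; pass torch_dtype=torch.float16
    # and a GPU device to roughly halve the memory footprint.
    generator = pipeline("text-generation", model="EleutherAI/gpt-j-6B")

    result = generator(
        "Once upon a time, a language model",
        max_new_tokens=40,
        do_sample=True,
        temperature=0.8,
    )
    print(result[0]["generated_text"])
    ```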
  • EleutherAI

    GPT-NeoX-20B

    FREE
    EleutherAI is a leading non-profit research institute focused on large-scale artificial intelligence research, and it has trained and released several LLMs along with the codebases used to train them. GPT-NeoX-20B is a 20 billion parameter autoregressive language model trained on the Pile using the GPT-NeoX library. Its architecture intentionally resembles that of GPT-3 and is almost identical to that of GPT-J-6B. Its training dataset contains a multitude of English-language texts, reflecting the general-purpose nature of the model. It is a transformer-based, English-only language model, and thus cannot be used for translation or for generating text in other languages. It is freely and openly available to the public through a permissive license.
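    The public checkpoint loads through the same Transformers API as GPT-J; the difference is scale, since even the fp16 weights need on the order of 40GB of GPU memory. A sketch, assuming the EleutherAI/gpt-neox-20b checkpoint:

    ```python
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
    # fp16 halves memory use; device_map="auto" shards the layers
    # across however many GPUs are available.
    model = AutoModelForCausalLM.from_pretrained(
        "EleutherAI/gpt-neox-20b", torch_dtype=torch.float16, device_map="auto"
    )

    inputs = tokenizer("GPT-NeoX-20B is a language model that", return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
    ```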

  • NVIDIA

    NeMo

    FREE
    NVIDIA NeMo, part of the NVIDIA AI platform, is an end-to-end, cloud-native enterprise framework for building, customizing, and deploying generative AI models. NeMo makes generative AI model development easy, cost-effective, and fast for enterprises. NeMo has separate collections for Automatic Speech Recognition (ASR), Natural Language Processing (NLP), and Text-to-Speech (TTS) models, and each collection consists of prebuilt modules that include everything needed to train on your data. The NeMo framework supports both language and image generative AI models. Currently, the workflow for language is in open beta and the workflow for images is in early access; you must be a member of the NVIDIA Developer Program and logged in with your organization’s email address to access them. NeMo is licensed under the Apache License 2.0, which is a permissive open source license that allows for commercial use.
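    As a taste of those prebuilt modules, the ASR collection can transcribe audio in a few lines. A sketch, assuming pip install "nemo_toolkit[asr]" and NeMo's documented from_pretrained API; the model name and the sample.wav path are illustrative, and available model names vary by release.

    ```python
    import nemo.collections.asr as nemo_asr

    # Pull a pretrained English CTC speech recognition model from NVIDIA's catalog.
    asr_model = nemo_asr.models.EncDecCTCModel.from_pretrained(
        model_name="QuartzNet15x5Base-En"
    )

    # Transcribe local audio files (16kHz mono WAV works best for this model).
    transcripts = asr_model.transcribe(["sample.wav"])
    print(transcripts[0])
    ```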