
Compare Models

  • Google, Stanford University

    ELECTRA

    FREE
    ELECTRA (Efficiently Learning an Encoder that Classifies Token Replacements Accurately) is a transformer-based model like BERT, but it uses a more efficient pre-training approach that requires fewer computational resources. It was created by researchers from Google Research’s Brain Team and Stanford University. ELECTRA models are trained to distinguish “real” input tokens from “fake” tokens generated by another, smaller neural network; more technically, ELECTRA uses a pre-training task called replaced token detection (RTD), inspired by generative adversarial networks (GANs), that trains a bidirectional model while learning from all input positions. At small scale, ELECTRA achieves strong results even when trained on a single GPU; at large scale, it achieves state-of-the-art results on the SQuAD 2.0 dataset. Go to GitHub to access the three released models (ELECTRA-Small, ELECTRA-Base, and ELECTRA-Large); a short usage sketch follows.
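    To make the replaced token detection idea concrete, here is a minimal sketch (not part of the original listing) that loads the publicly released ELECTRA-Small discriminator through the Hugging Face Transformers library and flags which tokens it believes were replaced; the example sentence and the 0.5 threshold are illustrative assumptions.

      import torch
      from transformers import ElectraForPreTraining, ElectraTokenizerFast

      # Load the small discriminator checkpoint, i.e. the "real vs. replaced token" classifier.
      name = "google/electra-small-discriminator"
      tokenizer = ElectraTokenizerFast.from_pretrained(name)
      model = ElectraForPreTraining.from_pretrained(name)

      # A sentence with one implausible word, standing in for a generator's substitution.
      inputs = tokenizer("The quick brown fox ate over the lazy dog", return_tensors="pt")
      with torch.no_grad():
          logits = model(**inputs).logits  # one score per token; higher = more likely replaced

      flags = (torch.sigmoid(logits) > 0.5).int().squeeze().tolist()
      tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"].squeeze())
      for token, flagged in zip(tokens, flags):
          print(token, "<- replaced?" if flagged else "")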

  • Google

    FLAN-T5

    FREE
    If you already know T5, FLAN-T5 is simply better at everything: for the same number of parameters, these models have been fine-tuned on more than 1,000 additional tasks covering more languages (including English, German, and French). FLAN-T5 is released under the Apache-2.0 license, a permissive open source license that allows commercial use. With appropriate prompting, it can perform zero-shot NLP tasks such as text summarization, common sense reasoning, natural language inference, question answering, sentence and sentiment classification, translation, and pronoun resolution, as the sketch below illustrates.
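    As a hedged illustration of that zero-shot prompting (not part of the original listing), the sketch below loads the google/flan-t5-base checkpoint with the Hugging Face Transformers pipeline; the prompts are arbitrary examples.

      from transformers import pipeline

      # FLAN-T5 ships in several sizes (small/base/large/xl/xxl); base runs on modest hardware.
      generator = pipeline("text2text-generation", model="google/flan-t5-base")

      # Zero-shot prompting: the task is described entirely in the prompt.
      print(generator("Translate to German: How old are you?")[0]["generated_text"])
      print(generator("Is the following review positive or negative? "
                      "Review: the battery died after two days.")[0]["generated_text"])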
  • Google

    Flan-UL2

    FREE
    Developed by Google, Flan-UL2 is a more powerful version of the T5 model that has been instruction-tuned using Flan, and it is downloadable from Hugging Face. It outperforms the earlier Flan-T5 models, reasoning and generalizing better than its predecessors. Flan-UL2 is a text-to-text model that can be used for tasks such as summarization, question answering, reasoning, and automated content generation. Flan-UL2 has an Apache-2.0 license, which is a permissive open source license that allows for commercial use.
    If Flan-UL2’s 20B parameters are too much, consider the previous iteration of Flan-T5, which comes in five different sizes and might be more suitable for your needs.
  • EleutherAI

    GPT-J

    FREE
    EleutherAI is a leading non-profit research institute focused on large-scale artificial intelligence research. EleutherAI has trained and released several LLMs and the codebases used to train them. GPT-J can be used for code generation, building a chatbot, story writing, language translation, and search. GPT-J learns an inner representation of the English language that can be used to extract features useful for downstream tasks, but the model is best at what it was pretrained for: generating text from a prompt. EleutherAI has a web page where you can test how GPT-J works, or you can run GPT-J on Google Colab or load it with the Hugging Face Transformers library, as in the sketch below.
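    A minimal sketch of the Transformers route mentioned above (assumptions: the EleutherAI/gpt-j-6B checkpoint id and a machine with roughly 24 GB of RAM for full-precision weights; half precision or a smaller model may be needed otherwise):

      from transformers import AutoModelForCausalLM, AutoTokenizer

      tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
      model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")

      # GPT-J is a plain causal language model: tokenize a prompt, then sample a continuation.
      prompt = "Once upon a time, a robot learned to paint"
      inputs = tokenizer(prompt, return_tensors="pt")
      output_ids = model.generate(**inputs, max_new_tokens=60, do_sample=True, temperature=0.8)
      print(tokenizer.decode(output_ids[0], skip_special_tokens=True))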
  • EleutherAI

    GPT-NeoX-20B

    FREE
    EleutherAI has trained and released several LLMs and the codebases used to train them. EleutherAI is a leading non-profit research institute focused on large-scale artificial intelligence research. GPT-NeoX-20B is a 20 billion parameter autoregressive language model trained on the Pile using the GPT-NeoX library. Its architecture intentionally resembles that of GPT-3 and is almost identical to that of GPT-J-6B. Its training dataset contains a multitude of English-language texts, reflecting the general-purpose nature of this model. It is a transformer-based, English-language-only model, and thus cannot be used for translation or generating text in other languages. It is freely and openly available to the public through a permissive license.
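    The 20B weights are too large to download casually, but the architecture described above can be inspected from the model configuration alone; this sketch (an illustration, not from the original listing) fetches only the config file from the EleutherAI/gpt-neox-20b repository on Hugging Face:

      from transformers import AutoConfig

      # Downloads a few kilobytes of JSON, not the multi-gigabyte weights.
      config = AutoConfig.from_pretrained("EleutherAI/gpt-neox-20b")
      print(config.model_type)  # "gpt_neox"
      print(config.num_hidden_layers, config.hidden_size, config.num_attention_heads)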

  • Google

    LaMDA

    OTHER
    LaMDA stands for Language Model for Dialogue Applications. It is a conversational Large Language Model (LLM) built by Google as an underlying technology to power dialogue-based applications that can generate natural-sounding human language. LaMDA is built by fine-tuning a family of Transformer-based neural language models specialized for dialogue and teaching the models to leverage external knowledge sources. The potential use cases for LaMDA are diverse, ranging from customer service and chatbots to personal assistants and beyond. LaMDA is not open source; currently, there are no APIs or downloads. However, Google is working on making LaMDA more accessible to researchers and developers, and APIs or downloads may become available in the future.
  • Google

    PaLM 2 chat-bison-001

    $0.0021535
    PaLM 2, launched in May 2023, is Google’s next-generation Large Language Model, built on Google’s Pathways AI architecture. PaLM 2 was trained on a massive dataset of text and code, and it can handle many different tasks and learn new ones quickly. It is seen as a direct competitor to OpenAI’s GPT-4 model. It excels at advanced reasoning tasks, including code and math, classification and question answering, translation and multilingual proficiency (100 languages), and natural language generation, performing better than Google’s previous state-of-the-art LLMs, including its predecessor PaLM.
    PaLM 2 is the underlying model driving the PaLM API, which can be accessed through Google’s Generative AI Studio. PaLM 2 has four submodels of different sizes. Bison is the best value in terms of capability and cost, and chat-bison-001 has been fine-tuned for multi-turn conversation use cases, as in the sketch below. If you want to see PaLM 2’s capabilities, the simplest way is through Google Bard (PaLM 2 is the technology that powers Google Bard).
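    A minimal sketch of calling chat-bison-001 through the PaLM API with the google-generativeai Python client (the client surface has evolved over time, so treat the calls below as indicative; the API key placeholder and prompts are assumptions):

      import google.generativeai as palm

      palm.configure(api_key="YOUR_API_KEY")  # key issued by Google's Generative AI Studio

      response = palm.chat(
          model="models/chat-bison-001",       # the multi-turn conversation submodel
          context="You are a helpful travel assistant.",
          messages=["What should I pack for a week in Iceland in October?"],
      )
      print(response.last)                     # most recent model reply

      # Multi-turn use: reply() appends a new user message and returns the updated chat.
      response = response.reply("And what if I only have carry-on luggage?")
      print(response.last)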

     

    Watch Paige Bailey introducing PaLM 2: view here

  • Google

    PaLM 2 text-bison-001

    $0.004
    PaLM 2, launched in May 2023, is Google’s next-generation Large Language Model, built on Google’s Pathways AI architecture. PaLM 2 was trained on a massive dataset of text and code, and it can handle many different tasks and learn new ones quickly. It is seen as a direct competitor to OpenAI’s GPT-4 model. It excels at advanced reasoning tasks, including code and math, classification and question answering, translation and multilingual proficiency (100 languages), and natural language generation, performing better than Google’s previous state-of-the-art LLMs, including its predecessor PaLM.

     

    PaLM 2 is the underlying model driving the PaLM API, which can be accessed through Google’s Generative AI Studio. PaLM 2 has four submodels of different sizes. Bison is the best value in terms of capability and cost, and text-bison-001 has been fine-tuned to follow natural language instructions, making it suitable for various language tasks such as classification, sentiment analysis, entity extraction, extractive question answering, summarization, rewriting text in a different style, and concept ideation (see the sketch below).
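    A hedged sketch of a single text-bison-001 call with the google-generativeai client (the key placeholder and prompt are assumptions, and the client surface has since evolved):

      import google.generativeai as palm

      palm.configure(api_key="YOUR_API_KEY")

      completion = palm.generate_text(
          model="models/text-bison-001",
          prompt="Classify the sentiment of this review as positive or negative:\n"
                 "'The checkout process took forever and support never replied.'",
          temperature=0.0,          # deterministic output suits classification-style tasks
          max_output_tokens=16,
      )
      print(completion.result)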

     

    If you want to see PaLM 2 capabilities, the simplest way to use it is through Google Bard (PaLM 2 is the technology that powers Google Bard).

     

    Watch Paige Bailey introducing PaLM 2: view here

  • Google

    PaLM 2 textembedding-gecko-001

    $0.0004
    PaLM 2, launched in May 2023, is Google’s next-generation Large Language Model, built on Google’s Pathways AI architecture. PaLM 2 was trained on a massive dataset of text and code, and it can handle many different tasks and learn new ones quickly. It is seen as a direct competitor to OpenAI’s GPT-4 model. It excels at advanced reasoning tasks, including code and math, classification and question answering, translation and multilingual proficiency (100 languages), and natural language generation, performing better than Google’s previous state-of-the-art LLMs, including its predecessor PaLM.
    PaLM 2 is the underlying model driving the PaLM API, which can be accessed through Google’s Generative AI Studio. PaLM 2 has four submodels of different sizes: Unicorn (the largest), Bison, Otter, and Gecko (the smallest); the range of sizes lets PaLM 2 trade efficiency against capability for different tasks. Gecko is the smallest and cheapest model, intended for simple tasks, and textembedding-gecko-001 returns embedding vectors for text inputs (see the sketch below).
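    A hedged sketch of requesting embeddings with the google-generativeai client; note that the client exposed this model as models/embedding-gecko-001 at the time of writing, which may differ from the textembedding-gecko-001 name used in this listing, and the key placeholder is an assumption:

      import google.generativeai as palm

      palm.configure(api_key="YOUR_API_KEY")

      result = palm.generate_embeddings(
          model="models/embedding-gecko-001",
          text="PaLM 2 is Google's next-generation large language model.",
      )
      vector = result["embedding"]   # a flat list of floats
      print(len(vector))             # the embedding dimensionality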
    If you want to see PaLM 2 capabilities, the simplest way to use it is through Google Bard (PaLM 2 is the technology that powers Google Bard).

     

    Watch Paige Bailey introducing PaLM 2: view here

  • RedPajama

    RedPajama-INCITE-7B-Instruct

    FREE
    The RedPajama project aims to create a set of leading open source models. RedPajama-INCITE-7B-Instruct was developed by Together and leaders from the open source AI community. The model is the top-performing open source entry on the HELM benchmarks, surpassing other cutting-edge open models such as LLaMA-7B, Falcon-7B, and MPT-7B. The instruct-tuned model is designed for versatility and shines at few-shot tasks (see the sketch below).

     

    The Instruct, Chat, and Base models, along with ten interim checkpoints, are now available on Hugging Face, and all the RedPajama LLMs are released under the Apache-2.0 license, which permits commercial use.
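    A minimal few-shot sketch using the togethercomputer/RedPajama-INCITE-7B-Instruct checkpoint from Hugging Face (half precision and device_map="auto" assume a GPU plus the accelerate package; the Q/A prompt format is an illustrative assumption):

      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      model_id = "togethercomputer/RedPajama-INCITE-7B-Instruct"
      tokenizer = AutoTokenizer.from_pretrained(model_id)
      model = AutoModelForCausalLM.from_pretrained(
          model_id, torch_dtype=torch.float16, device_map="auto"
      )

      # Few-shot prompting: show a couple of worked examples, then ask for one more.
      prompt = (
          "Q: What is the capital of France?\nA: Paris\n\n"
          "Q: What is the capital of Japan?\nA: Tokyo\n\n"
          "Q: What is the capital of Canada?\nA:"
      )
      inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
      output_ids = model.generate(**inputs, max_new_tokens=8, do_sample=False)
      # Print only the newly generated tokens, not the echoed prompt.
      print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))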

     

    Play with the RedPajama chat model version here – https://lnkd.in/g3npSEbg