Compare Models

  • BigScience

    BLOOM

    FREE
    BigScience Large Open-science Open-access Multilingual Language Model (BLOOM) is a transformer-based LLM created by more than 1,000 AI researchers to give everyone who wants to experiment with a large language model free access to one. BLOOM is an autoregressive LLM, trained to continue text from a prompt on vast amounts of text data using industrial-scale computational resources, and it can output coherent text in 46 natural languages and 13 programming languages. With around 176B parameters, it is considered an alternative to OpenAI models. The model can be downloaded, and a hosted API is also available. To interact with the API, you request a token with a POST request to the server; tokens are only valid for two weeks, after which a new one must be generated.
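    A minimal sketch of calling the hosted model from Python, assuming the Hugging Face Inference API endpoint for bigscience/bloom; the token value and prompt below are placeholders:

        import requests

        API_URL = "https://api-inference.huggingface.co/models/bigscience/bloom"
        API_TOKEN = "hf_xxx"  # placeholder: replace with your own access token

        def generate(prompt: str) -> str:
            # The hosted API is queried with a POST request carrying the prompt as JSON.
            response = requests.post(
                API_URL,
                headers={"Authorization": f"Bearer {API_TOKEN}"},
                json={"inputs": prompt},
            )
            response.raise_for_status()
            return response.json()[0]["generated_text"]

        print(generate("BLOOM can write coherent text in"))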

  • Deepmind

    Chinchilla AI

    OTHER

    Google’s DeepMind Chinchilla AI is still in the testing phase. Once released, Chinchilla AI will be useful for developing various artificial intelligence tools, such as chatbots, virtual assistants, and predictive models. It functions in a manner analogous to other large language models such as GPT-3 (175B parameters), Jurassic-1 (178B parameters), Gopher (280B parameters), and Megatron-Turing NLG (530B parameters), but because Chinchilla is smaller (70B parameters), inference and fine-tuning cost less, making these models easier to use for smaller companies or universities that may not have the budget or hardware to run larger models.

  • Databricks

    Dolly 2.0

    FREE
    Dolly 2.0 by Databricks is the first open source, instruction-following Large Language Model that is fine-tuned on a human-generated instruction dataset and licensed for research and commercial use, which means any organization can create, own, and customize powerful LLMs that can talk to people without paying for API access or sharing data with third parties.

    Dolly 2.0 is a 12B parameter language model based on the EleutherAI pythia model family and fine-tuned exclusively on a new, high-quality, human-generated instruction-following dataset (crowdsourced among Databricks employees – so cool). Dolly-v2-12b is not a state-of-the-art model, but it does exhibit surprisingly high-quality instruction-following behavior not characteristic of the foundation model on which it is based. Dolly v2 is also available in smaller sizes: dolly-v2-7b, a 6.9 billion parameter model based on pythia-6.9b, and dolly-v2-3b, a 2.8 billion parameter model based on pythia-2.8b.

    Dolly 2.0 can be used for brainstorming, classification, open Q&A, closed Q&A, content generation, information extraction, and summarization. You can access the Dolly 2.0 training code, the dataset, and the model weights on Hugging Face.
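    A minimal sketch of loading the smallest variant with the Hugging Face Transformers library, assuming the databricks/dolly-v2-3b checkpoint on the Hub (the custom instruction-following pipeline requires trust_remote_code):

        import torch
        from transformers import pipeline

        # dolly-v2-3b is the 2.8B variant; swap in dolly-v2-7b or dolly-v2-12b for the larger models.
        generate_text = pipeline(
            model="databricks/dolly-v2-3b",
            torch_dtype=torch.bfloat16,
            trust_remote_code=True,
            device_map="auto",
        )

        print(generate_text("Explain the difference between open Q&A and closed Q&A."))
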
  • EleutherAI

    GPT-J

    FREE
    EleutherAI is a leading non-profit research institute focused on large-scale artificial intelligence research. It has trained and released several LLMs and the codebases used to train them. GPT-J can be used for code generation, building a chatbot, story writing, language translation, and search. GPT-J learns an inner representation of the English language that can be used to extract features useful for downstream tasks, but the model is best at what it was pretrained for, which is generating text from a prompt. EleutherAI has a web page where you can test how GPT-J works, or you can run GPT-J on Google Colab or use the Hugging Face Transformers library.
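    A minimal sketch of the Transformers route, assuming the EleutherAI/gpt-j-6b checkpoint name on the Hugging Face Hub (the full-precision weights are roughly 24 GB):

        from transformers import pipeline

        # Load GPT-J 6B as a plain text-generation pipeline.
        generator = pipeline("text-generation", model="EleutherAI/gpt-j-6b")

        prompt = "Once upon a time, a robot learned to"
        print(generator(prompt, max_new_tokens=50)[0]["generated_text"])
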
  • EleutherAI

    GPT-NeoX-20B

    FREE
    EleutherAI, a leading non-profit research institute focused on large-scale artificial intelligence research, has trained and released several LLMs and the codebases used to train them. GPT-NeoX-20B is a 20 billion parameter autoregressive language model trained on the Pile using the GPT-NeoX library. Its architecture intentionally resembles that of GPT-3 and is almost identical to that of GPT-J-6B. Its training dataset contains a multitude of English-language texts, reflecting the general-purpose nature of the model. It is a transformer-based, English-only language model, so it cannot be used for translation or for generating text in other languages. It is freely and openly available to the public under a permissive license.
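    A minimal sketch of loading the model with Transformers, assuming the EleutherAI/gpt-neox-20b checkpoint on the Hub; in fp16 the weights alone need roughly 40 GB of GPU memory, so device_map="auto" (which requires the accelerate package) is used to spread them across available devices:

        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
        model = AutoModelForCausalLM.from_pretrained(
            "EleutherAI/gpt-neox-20b",
            torch_dtype=torch.float16,
            device_map="auto",  # shards the ~40 GB of fp16 weights across available GPUs
        )

        inputs = tokenizer("GPT-NeoX-20B is an autoregressive language model that", return_tensors="pt").to(model.device)
        outputs = model.generate(**inputs, max_new_tokens=40)
        print(tokenizer.decode(outputs[0], skip_special_tokens=True))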

  • Meta AI

    Llama

    FREE
    Meta has created Llama (Large Language Model Meta AI), its state-of-the-art foundational large language model designed to help researchers advance their work in this subfield of AI. Smaller, more performant models such as Llama enable others in the research community who don’t have access to large amounts of infrastructure to study these models, further democratizing access in this important, fast-changing field.
    Training smaller foundation models like Llama is desirable in the large language model space because it requires far less computing power and resources to test new approaches, validate others’ work, and explore new use cases. Foundation models train on a large set of unlabeled data, which makes them ideal for fine-tuning for a variety of tasks. Meta is making Llama available in several sizes (7B, 13B, 33B, and 65B parameters) and also shares a Llama model card detailing how the model was built, in keeping with Meta’s approach to responsible AI practices.

  • Meta AI

    Llama 2

    FREE
    Meta has released Llama 2 under an open license that allows commercial use by businesses. Llama 2 is available in the Hugging Face Transformers library (you will need to sign Meta’s Llama 2 Community License Agreement – https://ai.meta.com/resources/models-and-libraries/llama-downloads/), via the Microsoft Azure cloud computing service, and through Amazon SageMaker JumpStart.
    Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. Llama 2 is intended for commercial and research use in English. It comes in a range of parameter sizes—7 billion, 13 billion, and 70 billion—as well as pre-trained and fine-tuned variations. According to Meta, the tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety. Llama 2 was pre-trained on 2 trillion tokens of data from publicly available sources. The tuned models are intended for assistant-like chat, whereas pre-trained models can be adapted for a variety of natural language generation tasks.
    Link to the live demo of the Llama 2 70B chatbot – https://huggingface.co/spaces/ysharma/Explore_llamav2_with_TGI
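    A minimal sketch of using Llama 2 through Transformers, assuming the gated meta-llama/Llama-2-7b-chat-hf checkpoint; you must accept the license on the Hub and authenticate (for example with huggingface-cli login) before the weights can be downloaded:

        import torch
        from transformers import pipeline

        chat = pipeline(
            "text-generation",
            model="meta-llama/Llama-2-7b-chat-hf",  # gated: requires an accepted license and Hub login
            torch_dtype=torch.float16,
            device_map="auto",
        )

        print(chat("Explain RLHF in one sentence.", max_new_tokens=80)[0]["generated_text"])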

  • RedPajama

    RedPajama-INCITE-7B-Instruct

    FREE
    The RedPajama project aims to create a set of leading open source models. RedPajama-INCITE-7B-Instruct was developed by Together and leaders from the open source AI community. The model is the top-performing open source entry on the HELM benchmarks, surpassing other cutting-edge open models such as LLaMA-7B, Falcon-7B, and MPT-7B. The instruct-tuned model is designed for versatility and shines at few-shot tasks.

    The Instruct, Chat, and Base models, along with ten interim checkpoints, are now available on Hugging Face, and all of the RedPajama LLMs come with commercial licenses under Apache 2.0.

    Play with the RedPajama chat model here – https://lnkd.in/g3npSEbg
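    A minimal sketch of running the instruct model with Transformers, assuming the togethercomputer/RedPajama-INCITE-7B-Instruct checkpoint on Hugging Face; the few-shot prompt is just an illustration:

        import torch
        from transformers import pipeline

        generator = pipeline(
            "text-generation",
            model="togethercomputer/RedPajama-INCITE-7B-Instruct",
            torch_dtype=torch.float16,
            device_map="auto",
        )

        prompt = "Q: What is the capital of France?\nA: Paris\nQ: What is the capital of Japan?\nA:"
        print(generator(prompt, max_new_tokens=10)[0]["generated_text"])
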
  • Amazon

    SageMaker

    FREE
    Amazon SageMaker enables developers to create, train, and deploy machine-learning (ML) models in the cloud, and also to deploy ML models on embedded systems and edge devices. Amazon SageMaker JumpStart helps you get started with machine learning quickly and easily. Its solutions are fully customizable and support one-click deployment and fine-tuning of more than 150 popular open source models for tasks such as natural language processing, object detection, and image classification, which can help with extracting and analyzing data, fraud detection, churn prediction, and personalized recommendations.

    The Hugging Face LLM Inference DLCs on Amazon SageMaker support the following models: BLOOM / BLOOMZ, MT0-XXL, Galactica, SantaCoder, GPT-NeoX 20B (joi, pythia, lotus, rosey, chip, RedPajama, open assistant), FLAN-T5-XXL (T5-11B), Llama (vicuna, alpaca, koala), StarCoder / SantaCoder, and Falcon 7B / Falcon 40B. Hugging Face’s LLM DLC is a purpose-built inference container for deploying LLMs easily in a secure and managed environment.
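    A minimal sketch of deploying one of these models with the Hugging Face LLM DLC via the sagemaker Python SDK; the execution role, model id (tiiuae/falcon-7b-instruct), and instance type below are illustrative assumptions:

        import sagemaker
        from sagemaker.huggingface import HuggingFaceModel, get_huggingface_llm_image_uri

        role = sagemaker.get_execution_role()  # assumes you run inside SageMaker with an execution role

        # Retrieve the Hugging Face LLM Deep Learning Container image URI.
        image_uri = get_huggingface_llm_image_uri("huggingface")

        model = HuggingFaceModel(
            image_uri=image_uri,
            role=role,
            env={
                "HF_MODEL_ID": "tiiuae/falcon-7b-instruct",  # example model id from the supported list
                "SM_NUM_GPUS": "1",                          # number of GPUs on the chosen instance
            },
        )

        predictor = model.deploy(
            initial_instance_count=1,
            instance_type="ml.g5.2xlarge",
        )

        print(predictor.predict({"inputs": "What is Amazon SageMaker JumpStart?"}))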
