Compare Models

  • BigScience

    BLOOM

    FREE
    BigScience Large Open-science Open-access Multilingual Language Model (BLOOM) is a transformer-based LLM created by over 1,000 AI researchers to provide a free, multilingual large language model for everyone who wants to try one. BLOOM is an autoregressive LLM, trained to continue text from a prompt on vast amounts of text data using industrial-scale computational resources. It can output coherent text in 46 natural languages and 13 programming languages, and anyone can try it out for free. To interact with the API, you’ll need to request a token; this is done with a POST request to the server. Tokens are only valid for two weeks, after which a new one must be generated. With around 176B parameters, it is considered an alternative to OpenAI models. A downloadable model and a hosted API are both available.
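    As a rough illustration of the token-based API flow described above, the following Python sketch sends a prompt to a hosted BLOOM endpoint with a bearer token. The endpoint URL and JSON field names are placeholders (assumptions), not the documented API; consult the hosting provider’s documentation for the real paths and payload format.

        import requests

        # Placeholder endpoint and token: substitute the real values from the hosting
        # provider's documentation (remember that tokens expire after two weeks).
        API_URL = "https://example-bloom-host/api/generate"   # hypothetical URL
        API_TOKEN = "your-api-token"

        def generate(prompt: str, max_new_tokens: int = 64) -> str:
            """Send a prompt to the hosted BLOOM endpoint and return the completion."""
            response = requests.post(
                API_URL,
                headers={"Authorization": f"Bearer {API_TOKEN}"},
                json={"inputs": prompt, "parameters": {"max_new_tokens": max_new_tokens}},
                timeout=60,
            )
            response.raise_for_status()
            return response.json()[0]["generated_text"]  # response shape is an assumption

        print(generate("BLOOM is a multilingual language model that"))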

  • Deepmind

    Chinchilla AI

    OTHER

    DeepMind’s Chinchilla AI is still in the testing phase. Once released, Chinchilla AI will be useful for developing various artificial intelligence tools, such as chatbots, virtual assistants, and predictive models. It functions in a manner analogous to other large language models such as GPT-3 (175B parameters), Jurassic-1 (178B parameters), Gopher (280B parameters), and Megatron-Turing NLG (530B parameters). Because Chinchilla is smaller (70B parameters), inference and fine-tuning cost less, making such models usable for smaller companies or universities that may not have the budget or hardware to run larger models.

  • Technology Innovation Institute

    Falcon-40B

    OTHER
    The Technology Innovation Institute (TII), an Abu Dhabi government-funded research institution, has introduced Falcon, a state-of-the-art autoregressive decoder-only language model series released under the Apache 2.0 license, which means it can be used for both commercial and research purposes.
    The family includes Falcon-40B, trained on 1 trillion tokens drawn mainly (>80%) from the RefinedWeb dataset, and the smaller Falcon-7B. A special variant, Falcon-40B-Instruct, is also available and may be more suitable for assistant-style tasks. Falcon-40B supports English, German, Spanish, and French (with limited capabilities in Italian, Portuguese, Polish, Dutch, Romanian, Czech, and Swedish). It can be used to generate creative text, help solve complex problems, and power chatbots, virtual assistants, language translation, content generation, sentiment analysis, and more.

    To use these models, PyTorch 2.0 is required. TII is now calling for proposals from users worldwide to submit their most creative ideas for Falcon-40B’s deployment (see https://falconllm.tii.ae/call-for-proposal.php), or you can pay to access the model via Amazon SageMaker JumpStart.
    A demo of Falcon-Chat is available on Hugging Face at https://huggingface.co/spaces/HuggingFaceH4/falcon-chat.
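    As a minimal sketch (not TII’s official example), the model can be loaded with the Hugging Face transformers library under PyTorch 2.0. It assumes the model id tiiuae/falcon-40b on the Hub and enough GPU memory to hold the 40B weights; older transformers releases may additionally require trust_remote_code=True.

        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        model_id = "tiiuae/falcon-40b"

        tokenizer = AutoTokenizer.from_pretrained(model_id)
        model = AutoModelForCausalLM.from_pretrained(
            model_id,
            torch_dtype=torch.bfloat16,  # halves memory use; 40B weights still need multiple large GPUs
            device_map="auto",           # shard the model across available devices (requires accelerate)
        )

        prompt = "The three most popular uses of large language models are"
        inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
        outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_k=10)
        print(tokenizer.decode(outputs[0], skip_special_tokens=True))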

  • Technology Innovation Institute

    Falcon-7B

    FREE

    The Technology Innovation Institute (TII), an Abu Dhabi government-funded research institution, has introduced Falcon, a state-of-the-art autoregressive decoder-only language model series released under the Apache 2.0 license, which means it can be used for both commercial and research purposes. Falcon-7B only needs around 15 GB of memory and is therefore accessible even on consumer hardware. The model supports English, German, Spanish, and French (with limited capabilities in Italian, Portuguese, Polish, Dutch, Romanian, Czech, and Swedish). It can be used to generate creative text, help solve complex problems, and power chatbots, customer service operations, virtual assistants, language translation, content generation, and sentiment analysis.

    This raw pretrained model should be fine-tuned for specific use cases. Falcon-7B-Instruct is also available at https://huggingface.co/tiiuae/falcon-7b-instruct.
    If you are looking for a model better suited to taking generic instructions in a chat format, Falcon-7B-Instruct is recommended rather than the base model.
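    A minimal sketch of running Falcon-7B-Instruct through a transformers text-generation pipeline, assuming the model id tiiuae/falcon-7b-instruct on the Hugging Face Hub and a GPU with roughly 15 GB of memory (older transformers releases may need trust_remote_code=True):

        import torch
        from transformers import pipeline

        generator = pipeline(
            "text-generation",
            model="tiiuae/falcon-7b-instruct",
            torch_dtype=torch.bfloat16,  # keeps the 7B weights within roughly 15 GB
            device_map="auto",
        )

        result = generator(
            "Write a short poem about the desert at night.",
            max_new_tokens=100,
            do_sample=True,
            top_k=10,
        )
        print(result[0]["generated_text"])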

  • Meta AI

    Llama

    FREE
    Meta has created Llama (Large Language Model Meta AI), its state-of-the-art foundational large language model designed to help researchers advance their work in this subfield of AI. Smaller, more performant models such as Llama enable others in the research community who don’t have access to large amounts of infrastructure to study these models, further democratizing access in this important, fast-changing field.
    Training smaller foundation models like Llama is desirable in the large language model space because it requires far less computing power and resources to test new approaches, validate others’ work, and explore new use cases. Foundation models train on a large set of unlabeled data, which makes them ideal for fine-tuning for a variety of tasks. Meta is making Llama available in several sizes (7B, 13B, 33B, and 65B parameters), and it also shares a Llama model card that details how the model was built, in keeping with Meta’s approach to responsible AI practices.

  • Meta AI

    Llama 2

    FREE
    Meta has released Llama 2 under an open license that allows commercial use by businesses. Llama 2 is available in the Hugging Face Transformers library (you will need to sign Meta’s Llama 2 Community License Agreement – https://ai.meta.com/resources/models-and-libraries/llama-downloads/), via the Microsoft Azure cloud computing service, and through Amazon SageMaker JumpStart.
    Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. It is intended for commercial and research use in English. It comes in a range of parameter sizes (7 billion, 13 billion, and 70 billion) as well as pre-trained and fine-tuned variations. According to Meta, the tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety. Llama 2 was pre-trained on 2 trillion tokens of data from publicly available sources. The tuned models are intended for assistant-like chat, whereas the pre-trained models can be adapted for a variety of natural language generation tasks.
    A live demo of the Llama 2 70B chatbot is available at https://huggingface.co/spaces/ysharma/Explore_llamav2_with_TGI.
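    As a minimal sketch (assuming you have accepted Meta’s license and authenticated with the Hugging Face Hub), the 7B chat variant can be loaded like any other Transformers causal language model; the gated model id meta-llama/Llama-2-7b-chat-hf is assumed here.

        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        model_id = "meta-llama/Llama-2-7b-chat-hf"  # gated: requires an approved license request

        tokenizer = AutoTokenizer.from_pretrained(model_id)
        model = AutoModelForCausalLM.from_pretrained(
            model_id, torch_dtype=torch.float16, device_map="auto"
        )

        # The chat-tuned Llama 2 models expect prompts wrapped in [INST] ... [/INST] tags.
        prompt = "[INST] Explain what an autoregressive language model is. [/INST]"
        inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
        outputs = model.generate(**inputs, max_new_tokens=120)
        print(tokenizer.decode(outputs[0], skip_special_tokens=True))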

  • Aleph Alpha

    Luminous-base

    $0.0055
    Aleph Alpha offers the Luminous family of large language models. Luminous models vary in size, price, and parameter count. Luminous-base speaks and writes five languages (English, French, German, Italian, and Spanish), and the model can perform information extraction and language simplification and has multimodal image description capability. Aleph Alpha is targeting “critical enterprises”: organizations like law firms, healthcare providers, and banks, which rely heavily on trustworthy, accurate information. You can try Aleph Alpha models for free: go to the Jumpstart page on their site and click through the examples on Classification and Labelling, Generation, Information Extraction, Translation & Conversion, and Multimodal. Aleph Alpha is based in Europe, allowing customers with sensitive data to process their information in compliance with European regulations for data protection and security on a sovereign, European computing infrastructure.
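    For programmatic access, the Luminous models can also be called through Aleph Alpha’s API. The Python sketch below shows a completion request against luminous-base using the aleph-alpha-client package; the exact class and method names are assumptions based on the publicly documented client and may differ between client versions. The same call works for the other Luminous models by swapping the model name (e.g. "luminous-extended", "luminous-supreme").

        # pip install aleph-alpha-client
        from aleph_alpha_client import Client, CompletionRequest, Prompt

        client = Client(token="YOUR_ALEPH_ALPHA_API_TOKEN")  # token from your Aleph Alpha account

        request = CompletionRequest(
            prompt=Prompt.from_text("Summarise in one sentence: Luminous-base is"),
            maximum_tokens=64,
        )
        response = client.complete(request, model="luminous-base")
        print(response.completions[0].completion)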

  • Aleph Alpha

    Luminous-extended

    $0.0082
    Aleph Alpha’s Luminous-extended is the second-largest model and is faster and cheaper than Luminous-supreme. The model can perform information extraction and language simplification and has multimodal image description capability. You can try Aleph Alpha models with predefined examples for free: go to the Jumpstart page on their site and click through the examples on Classification and Labelling, Generation, Information Extraction, Translation and Conversion, and Multimodal. Aleph Alpha is based in Europe, which allows customers with sensitive data to process their information in compliance with European regulations for data protection and security on a sovereign, European computing infrastructure.
  • Aleph Alpha

    Luminous-supreme

    $0.0319
    Luminous-supreme is the largest, and the most expensive, of the Aleph Alpha Luminous models. Supreme can do all the tasks of the smaller models (it speaks and writes five languages, English, French, German, Italian, and Spanish, and can undertake information extraction, language simplification, semantic comparison of texts, document summarization, Q&A tasks, and more) and is well suited for creative writing. You can try out the Aleph Alpha models for free: go to the Jumpstart page on their site and click through the examples on Classification & Labelling, Generation, Information Extraction, Translation & Conversion, and Multimodal.
  • Aleph Alpha

    Luminous-supreme-control

    $0.0398
    Supreme-control is its own model, although it is based on Luminous-supreme and is optimized for a certain set of tasks. The Luminous models differ in complexity and ability, but this one excels at question answering and natural language inference.
    You can try out the Aleph Alpha models with predefined examples for free: go to the Jumpstart page on their site and click through the examples on Classification & Labelling, Generation, Information Extraction, Translation & Conversion, and Multimodal.

  • LMSYS Org

    Vicuna-13B

    FREE

    Vicuna-13B is an open-source chatbot developed by a team of researchers from UC Berkeley, CMU, Stanford, MBZUAI, and UC San Diego. The chatbot was trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT. Both 13B and 7B parameter models are available on Hugging Face.

    Vicuna-13B achieves more than 90% of the quality of OpenAI ChatGPT and Google Bard while outperforming other models like LLaMA and Stanford Alpaca in more than 90% of cases. The code, the weights, and an online demo are publicly available for non-commercial use. Here is a link to learn more about how it compares to other models – https://lmsys.org/blog/2023-03-30-vicuna/.

    To use this model, you need to obtain the LLaMA weights first and convert them into Hugging Face format. The cost of training Vicuna-13B was around $300.
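    Once the LLaMA weights have been converted and the Vicuna weights applied on top of them (following the LMSYS instructions), the result is an ordinary Transformers checkpoint. A minimal loading sketch follows, assuming a hypothetical local path to the converted model:

        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        model_path = "/path/to/vicuna-13b"   # hypothetical local path to the converted weights

        tokenizer = AutoTokenizer.from_pretrained(model_path)
        model = AutoModelForCausalLM.from_pretrained(
            model_path, torch_dtype=torch.float16, device_map="auto"
        )

        # Vicuna was tuned on conversations, so a simple USER/ASSISTANT format works well.
        prompt = "USER: What is Vicuna-13B? ASSISTANT:"
        inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
        outputs = model.generate(**inputs, max_new_tokens=100)
        print(tokenizer.decode(outputs[0], skip_special_tokens=True))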

  • Yandex

    YaLM

    FREE
    YaLM 100B is a GPT-like neural network for generating and processing text. It can be used freely by developers and researchers from all over the world. Training the model took 65 days on a cluster of 800 A100 graphics cards, using 1.7 TB of online texts, books, and countless other sources in both English and Russian. Researchers and developers can use this corporate-scale model to solve the most complex problems associated with natural language processing.
    Training details and best practices on acceleration and stabilization can be found in Medium (English) and Habr (Russian) articles. The model is published under the Apache 2.0 license, which permits both research and commercial use.
