Compare Models

  • BigScience

    BLOOM

    FREE
    BigScience Large Open-science Open-access Multilingual Language Model (BLOOM) is a transformer-based LLM created by over 1,000 AI researchers to give everyone free access to a multilingual large language model. BLOOM is an autoregressive model trained to continue text from a prompt on vast amounts of text data using industrial-scale computational resources, and it can output coherent text in 46 natural languages and 13 programming languages. With around 176B parameters, it is considered an alternative to OpenAI models. Both a downloadable model and a hosted API are available. To interact with the API, you need to request a token, which is done with a POST request to the server. Tokens are only valid for two weeks, after which a new one must be generated.
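
    The token flow described above might look roughly like the following sketch. The endpoint URL, payload, and response fields here are placeholders rather than the real server details, so consult the BLOOM API documentation for the actual values.

      # Hypothetical sketch of requesting a short-lived token and calling the hosted API.
      import requests

      API_BASE = "https://example-bloom-api.org"  # placeholder, not the real server

      # Request a token via POST; tokens expire after two weeks.
      resp = requests.post(f"{API_BASE}/token", json={"email": "you@example.com"})
      resp.raise_for_status()
      token = resp.json()["token"]  # assumed response field

      # Use the token to ask the model to continue a prompt.
      gen = requests.post(
          f"{API_BASE}/generate",
          headers={"Authorization": f"Bearer {token}"},
          json={"prompt": "BLOOM is a multilingual language model that", "max_new_tokens": 64},
      )
      gen.raise_for_status()
      print(gen.json())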

  • BloombergGPT

    BloombergGPT

    OTHER
    BloombergGPT represents the first step in developing and applying LLM and generative AI technology for the financial industry. BloombergGPT has been trained on enormous amounts of financial data and is purpose-built for finance. The mixed dataset training leads to a model that outperforms existing LLMs on financial tasks by significant margins without sacrificing performance on general LLM benchmarks. BloombergGPT can perform a range of NLP tasks such as sentiment analysis, named entity recognition, news classification, and even writing headlines. With BloombergGPT, traders and analysts can produce financial analysis and insights more quickly and efficiently, saving valuable time that can be used for other critical tasks. To use BloombergGPT, you need access to Bloomberg’s terminal software (a platform investors and financial professionals use to access real-time market data, breaking news, financial research, and advanced analytics). Bloomberg also offers a variety of other subscription options, including subscriptions for financial institutions, universities, and governments. The price of a Bloomberg terminal varies depending on the type of subscription and the number of users.
  • ChatGLM

    ChatGLM-6B

    FREE
    Researchers at Tsinghua University in China have developed the ChatGLM series of models, which have performance comparable to other models such as GPT-3 and BLOOM. ChatGLM-6B is an open bilingual language model (trained on Chinese and English). It is based on the General Language Model (GLM) framework and has 6.2B parameters. With quantization, users can deploy it locally on consumer-grade graphics cards (only 6GB of GPU memory is required at the INT4 quantization level), as in the sketch below. The following models are available: ChatGLM-130B (an open source LLM), ChatGLM-100B (not open source but available through invite-only access), and ChatGLM-6B (a lightweight open source alternative). ChatGLM LLMs are available with an Apache-2.0 license that allows commercial use. We have included the link to the Hugging Face page where you can try the ChatGLM-6B chatbot for free.
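
    A minimal local-inference sketch, assuming a CUDA GPU with roughly 6GB of memory and that the THUDM/chatglm-6b checkpoint on Hugging Face still exposes its chat() and quantize() helpers via trust_remote_code; adapt the model ID and arguments to the current repository.

      from transformers import AutoTokenizer, AutoModel

      model_id = "THUDM/chatglm-6b"
      tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
      # quantize(4) applies the INT4 quantization mentioned above; it is a helper
      # shipped with the model's remote code, not a standard Transformers API.
      model = AutoModel.from_pretrained(model_id, trust_remote_code=True).half().quantize(4).cuda()
      model = model.eval()

      # The remote code provides a chat() helper that tracks conversation history.
      response, history = model.chat(tokenizer, "Hello, what can you do?", history=[])
      print(response)
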
  • EleutherAI

    GPT-J

    FREE
    EleutherAI is a leading non-profit research institute focused on large-scale artificial intelligence research. EleutherAI has trained and released several LLMs and the codebases used to train them. GPT-J can be used for code generation, building chatbots, story writing, language translation, and search. GPT-J learns an inner representation of the English language that can be used to extract features useful for downstream tasks. The model is best at what it was pretrained for, which is generating text from a prompt. EleutherAI has a web page where you can test how GPT-J works, or you can run GPT-J on Google Colab or through the Hugging Face Transformers library, as in the sketch below.
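
    A minimal text-generation sketch with the Transformers library, assuming the EleutherAI/gpt-j-6B checkpoint (around 24GB in full precision, so a GPU runtime or half-precision weights are advisable on Colab).

      from transformers import AutoTokenizer, AutoModelForCausalLM

      tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
      model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")

      # Generate a continuation of the prompt, which is what GPT-J was pretrained for.
      inputs = tokenizer("Once upon a time, a robot learned to", return_tensors="pt")
      outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, temperature=0.8)
      print(tokenizer.decode(outputs[0], skip_special_tokens=True))
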
  • EleutherAI

    GPT-NeoX-20B

    FREE
    EleutherAI is a leading non-profit research institute focused on large-scale artificial intelligence research; it has trained and released several LLMs and the codebases used to train them. GPT-NeoX-20B is a 20 billion parameter autoregressive language model trained on the Pile using the GPT-NeoX library. Its architecture intentionally resembles that of GPT-3 and is almost identical to that of GPT-J-6B. Its training dataset contains a multitude of English-language texts, reflecting the general-purpose nature of this model. It is a transformer-based, English-language-only model, and thus cannot be used for translation or for generating text in other languages. It is freely and openly available to the public through a permissive license.
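
    The model can be loaded through the Transformers library much like GPT-J; the sketch below assumes the EleutherAI/gpt-neox-20b checkpoint, a large GPU (the weights are roughly 40GB even in half precision), and the accelerate package for device_map="auto".

      import torch
      from transformers import AutoTokenizer, AutoModelForCausalLM

      model_id = "EleutherAI/gpt-neox-20b"
      tokenizer = AutoTokenizer.from_pretrained(model_id)
      model = AutoModelForCausalLM.from_pretrained(
          model_id, torch_dtype=torch.float16, device_map="auto"
      )

      # English-only continuation of a prompt.
      inputs = tokenizer("The Pile is a dataset that", return_tensors="pt").to(model.device)
      outputs = model.generate(**inputs, max_new_tokens=40)
      print(tokenizer.decode(outputs[0], skip_special_tokens=True))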

  • Meta AI

    Llama

    FREE
    Meta has created Llama (Large Language Model Meta AI), its state-of-the-art foundational large language model designed to help researchers advance their work in this subfield of AI. Smaller, more performant models such as LLaMA enable others in the research community who don’t have access to large amounts of infrastructure to study these models, further democratizing access in this important, fast-changing field.
    Training smaller foundation models like Llama is desirable in the Large Language Model space because it requires far less computing power and resources to test new approaches, validate others’ work, and explore new use cases. Foundation models train on a large set of unlabeled data, which makes them ideal for fine-tuning for a variety of tasks. Meta is making Llama available at several sizes (7B, 13B, 33B, and 65B parameters) and also shares a Llama model card that details how Meta built the model in keeping with its approach to responsible AI practices.

  • Meta AI

    Llama 2

    FREE
    Meta has released Llama 2. It has an open license that allows commercial use by businesses. Llama 2 is available in the Hugging Face Transformers library (you will need to sign Meta’s Llama 2 Community License Agreement – https://ai.meta.com/resources/models-and-libraries/llama-downloads/), via the Microsoft Azure cloud computing service, and through Amazon SageMaker JumpStart.
    Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. Llama 2 is intended for commercial and research use in English. It comes in a range of parameter sizes—7 billion, 13 billion, and 70 billion—as well as pre-trained and fine-tuned variations. According to Meta, the tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety. Llama 2 was pre-trained on 2 trillion tokens of data from publicly available sources. The tuned models are intended for assistant-like chat, whereas pre-trained models can be adapted for a variety of natural language generation tasks.
    Link to the live demo of the Llama 2 70B chatbot – https://huggingface.co/spaces/ysharma/Explore_llamav2_with_TGI
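
    A minimal chat-generation sketch via the Transformers library, assuming you have accepted the Llama 2 Community License on the meta-llama Hugging Face repository and are logged in with an access token (for example via huggingface-cli login); the [INST] wrapper follows the chat models’ expected prompt format.

      import torch
      from transformers import AutoTokenizer, AutoModelForCausalLM

      model_id = "meta-llama/Llama-2-7b-chat-hf"
      tokenizer = AutoTokenizer.from_pretrained(model_id)
      model = AutoModelForCausalLM.from_pretrained(
          model_id, torch_dtype=torch.float16, device_map="auto"
      )

      prompt = "[INST] Explain what an autoregressive language model is. [/INST]"
      inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
      outputs = model.generate(**inputs, max_new_tokens=128)
      print(tokenizer.decode(outputs[0], skip_special_tokens=True))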

  • StableLM

    StableLM-Base-Alpha-7B

    FREE

    Stability AI released a new open-source language model, StableLM. The Alpha version of the model is available in 3 billion and 7 billion parameters. StableLM is trained on a new experimental dataset built on The Pile, but three times larger, with 1.5 trillion tokens of content. The richness of this dataset gives StableLM surprisingly high performance in conversational and coding tasks, despite its small size. The models are now available on GitHub and on Hugging Face, and developers can freely inspect, use, and adapt the StableLM base models for commercial or research purposes, subject to the terms of the CC BY-SA-4.0 license.
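
    A minimal sketch for the 7B Alpha base model, assuming the stabilityai/stablelm-base-alpha-7b checkpoint on Hugging Face; note that the Alpha checkpoints are base models rather than instruction-tuned chat models, so expect raw text continuation.

      import torch
      from transformers import AutoTokenizer, AutoModelForCausalLM

      model_id = "stabilityai/stablelm-base-alpha-7b"
      tokenizer = AutoTokenizer.from_pretrained(model_id)
      model = AutoModelForCausalLM.from_pretrained(
          model_id, torch_dtype=torch.float16, device_map="auto"
      )

      inputs = tokenizer("Stability AI released StableLM because", return_tensors="pt").to(model.device)
      outputs = model.generate(**inputs, max_new_tokens=48, do_sample=True, temperature=0.7)
      print(tokenizer.decode(outputs[0], skip_special_tokens=True))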

  • TruthGPT

    TruthGPT

    OTHER
    TruthGPT is a large language model (LLM) that, according to Elon Musk, will be a “maximum truth-seeking” AI. In terms of how it works, it filters through thousands of datasets and draws educated conclusions to provide answers that are as unbiased as possible. TruthGPT is powered by $TRUTH, a tradable cryptocurrency on the Binance Smart Chain, and $TRUTH holders will soon gain access to additional benefits when using TruthGPT AI. When we learn more, we will update this section.
  • LMSYS Org

    Vicuna-13B

    FREE

    Vicuna-13B is an open-source chatbot developed by a team of researchers from UC Berkeley, CMU, Stanford, MBZUAI, and UC San Diego. The chatbot was trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT. Both 13B and 7B parameter models are available on Hugging Face.

    Vicuna-13B achieves more than 90% of the quality of OpenAI’s ChatGPT and Google Bard while outperforming other models like LLaMA and Stanford Alpaca in more than 90% of cases. The code, weights, and an online demo are publicly available for non-commercial use. Here is a link to learn more about how it compares to other models – https://lmsys.org/blog/2023-03-30-vicuna/.

    To use this model, you first need to obtain the LLaMA weights and convert them into Hugging Face format; a sketch of loading the converted model is shown below. The cost of training Vicuna-13B was around $300.
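
    The sketch below assumes the LLaMA base weights have already been converted to Hugging Face format and the Vicuna-13B deltas applied, leaving a standard causal-LM checkpoint in a local directory; the path and the USER/ASSISTANT prompt style are illustrative assumptions.

      import torch
      from transformers import AutoTokenizer, AutoModelForCausalLM

      model_path = "/path/to/vicuna-13b"  # hypothetical local directory with converted weights
      tokenizer = AutoTokenizer.from_pretrained(model_path)
      model = AutoModelForCausalLM.from_pretrained(
          model_path, torch_dtype=torch.float16, device_map="auto"
      )

      inputs = tokenizer("USER: What is Vicuna-13B? ASSISTANT:", return_tensors="pt").to(model.device)
      outputs = model.generate(**inputs, max_new_tokens=64)
      print(tokenizer.decode(outputs[0], skip_special_tokens=True))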
