Compare Models

  • Stanford University

    Alpaca

    FREE
    Stanford University released Alpaca, an instruction-following language model fine-tuned from Meta’s LLaMA 7B model. Alpaca was trained on 52K instruction-following demonstrations generated in the style of self-instruct using text-davinci-003. Alpaca aims to help the academic community engage with instruction-following models by providing an open-source model that rivals OpenAI’s GPT-3.5 (text-davinci-003). To this end, Alpaca has been kept small and cheap to reproduce: fine-tuning took 3 hours on 8x A100s, at a cost of less than $100. All training data and techniques have been released (a sketch of the Alpaca-style prompt format follows this entry). The Alpaca license explicitly prohibits commercial use: the model may only be used for research and personal projects, and users must also follow LLaMA’s license agreement.
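
    For illustration, here is a minimal Python sketch of the Alpaca-style prompt template used to format the 52K instruction-following demonstrations. The template wording follows the released Stanford Alpaca materials; the sample record below is invented.

    ```python
    # Minimal sketch: formatting one Alpaca-style training record into a prompt.
    # The template follows the released Stanford Alpaca repo; the sample record is invented.

    ALPACA_PROMPT_WITH_INPUT = (
        "Below is an instruction that describes a task, paired with an input that provides "
        "further context. Write a response that appropriately completes the request.\n\n"
        "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n"
    )

    ALPACA_PROMPT_NO_INPUT = (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        "### Instruction:\n{instruction}\n\n### Response:\n"
    )

    def format_example(record: dict) -> str:
        """Turn one {"instruction", "input", "output"} record into a training string."""
        if record.get("input"):
            prompt = ALPACA_PROMPT_WITH_INPUT.format(**record)
        else:
            prompt = ALPACA_PROMPT_NO_INPUT.format(instruction=record["instruction"])
        return prompt + record["output"]

    # Invented sample record in the same shape as the 52K dataset entries.
    example = {
        "instruction": "Summarize the following sentence in five words.",
        "input": "Alpaca is a small instruction-following model fine-tuned from LLaMA 7B.",
        "output": "Small instruction-tuned model from LLaMA.",
    }
    print(format_example(example))
    ```
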
  • BloombergGPT

    BloombergGPT

    OTHER
    BloombergGPT represents the first step in developing and applying LLM and generative AI technology for the financial industry. BloombergGPT has been trained on enormous amounts of financial data and is purpose-built for finance. The mixed-dataset training leads to a model that outperforms existing LLMs on financial tasks by significant margins without sacrificing performance on general LLM benchmarks. BloombergGPT can perform a range of NLP tasks such as sentiment analysis, named entity recognition, news classification, and even writing headlines. With BloombergGPT, traders and analysts can produce financial analysis and insights more quickly and efficiently, saving valuable time for other critical tasks. To use BloombergGPT, you need access to Bloomberg’s Terminal software (a platform investors and financial professionals use to access real-time market data, breaking news, financial research, and advanced analytics). Bloomberg also offers a variety of other subscription options, including subscriptions for financial institutions, universities, and governments. The price of a Bloomberg Terminal varies depending on the type of subscription and the number of users.
  • AI21 Labs

    Jurassic-2 Grande (Base & Instruct)

    $0.01
    J2-Grande offers enhanced text generation capabilities, making it well suited to language tasks with a greater degree of complexity. Its fine-tuning options allow quality to be optimized while maintaining an affordable price and high efficiency (see the site for more details). It is an ideal choice for complex language-processing tasks and generative text applications. All of the J2 models support several non-English languages, including Spanish, French, German, Portuguese, Italian, and Dutch. All Jurassic foundation models are trained on a massive corpus of text, making them a powerful basis for a wide range of natural language processing applications, capable of understanding and composing human-like text. Models are available through an API; you can start with a free trial and then pay based on usage.

  • AI21 Labs

    Jurassic-2 Jumbo (Base & Instruct)

    $0.015
    As the largest and most powerful model in the Jurassic series, J2-Jumbo is an ideal choice for the most complex language-processing tasks and generative text applications. Further, the model can be fine-tuned for optimum performance in any custom application. Jurassic-2 improves upon Jurassic-1 (AI21 Studio’s previous-generation models) in every aspect, making it a highly versatile general-purpose text generator capable of composing human-like text and solving complex tasks such as question answering and text classification. All of the J2 models support several non-English languages, including Spanish, French, German, Portuguese, Italian, and Dutch. All Jurassic foundation models are trained on a massive corpus of text, making them a powerful basis for a wide range of natural language processing applications, capable of understanding and composing human-like text. Models are available through an API; you can start with a free trial and then pay based on usage.

  • AI21 Labs

    Jurassic-2 Large (Base & Instruct)

    $0.003

    Designed for fast responses, the Jurassic-2 Large model can be fine-tuned to optimize performance for relatively simple tasks, making it an ideal choice for language-processing tasks that require maximum affordability and less processing power. All of the J2 models support several non-English languages, including Spanish, French, German, Portuguese, Italian, and Dutch. All Jurassic foundation models are trained on a massive corpus of text, making them a powerful basis for a wide range of natural language processing applications, capable of understanding and composing human-like text. Models are available through an API; you can start with a free trial and then pay based on usage (a minimal sketch of calling the API follows this entry).
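
    Since all three Jurassic-2 models are served through the same AI21 Studio API, a single minimal sketch is given below. It assumes a completion endpoint of the form https://api.ai21.com/studio/v1/<model>/complete, a Bearer API key, and the model id j2-grande-instruct; these details may have changed, so check AI21’s current documentation before relying on them.

    ```python
    # Minimal sketch of calling a Jurassic-2 model through the AI21 Studio REST API.
    # Assumptions: the completion endpoint has the form
    # https://api.ai21.com/studio/v1/<model>/complete and accepts a Bearer API key;
    # consult AI21's current docs, since endpoints and field names may have changed.
    import os
    import requests

    API_KEY = os.environ["AI21_API_KEY"]  # your AI21 Studio key
    MODEL = "j2-grande-instruct"          # assumed id; "j2-jumbo-instruct" and "j2-large" are analogous

    resp = requests.post(
        f"https://api.ai21.com/studio/v1/{MODEL}/complete",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "prompt": "Write a one-sentence summary of what foundation models are.",
            "maxTokens": 100,
            "temperature": 0.7,
        },
        timeout=30,
    )
    resp.raise_for_status()
    data = resp.json()
    # The generated text is expected under completions[0].data.text in this API shape.
    print(data["completions"][0]["data"]["text"])
    ```
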

  • Meta AI

    Llama

    FREE
    Meta has created Llama (Large Language Model Meta AI), its state-of-the-art foundational large language model designed to help researchers advance their work in this subfield of AI. Smaller, more performant models such as LLaMA enable others in the research community who don’t have access to large amounts of infrastructure to study these models, further democratizing access in this important, fast-changing field.
    Training smaller foundation models like Llama is desirable in the large language model space because it requires far less computing power and resources to test new approaches, validate others’ work, and explore new use cases. Foundation models train on a large set of unlabeled data, which makes them ideal for fine-tuning for a variety of tasks. Meta is making Llama available at several sizes (7B, 13B, 33B, and 65B parameters) and also shares a Llama model card that details how the model was built, in keeping with Meta’s approach to responsible AI practices.

  • Meta AI

    Llama 2

    FREE
    Meta has released Llama 2. It has an open license, which allows commercial use for businesses. Llama 2 is available through the Hugging Face Transformers library (you will need to sign Meta’s Llama 2 Community License Agreement – https://ai.meta.com/resources/models-and-libraries/llama-downloads/), via the Microsoft Azure cloud computing service, and through Amazon SageMaker JumpStart; a minimal loading sketch with Transformers follows this entry.
    Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. Llama 2 is intended for commercial and research use in English. It comes in a range of parameter sizes—7 billion, 13 billion, and 70 billion—as well as pre-trained and fine-tuned variations. According to Meta, the tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety. Llama 2 was pre-trained on 2 trillion tokens of data from publicly available sources. The tuned models are intended for assistant-like chat, whereas pre-trained models can be adapted for a variety of natural language generation tasks.
    Link to the live demo of the Llama 2 70B chatbot: https://huggingface.co/spaces/ysharma/Explore_llamav2_with_TGI
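
    As a rough illustration of the Hugging Face route mentioned above, here is a minimal sketch that loads the 7B chat variant with the Transformers library. It assumes you have accepted Meta’s license and been granted access to the gated meta-llama/Llama-2-7b-chat-hf checkpoint, and that a GPU with enough memory is available.

    ```python
    # Minimal sketch: loading a Llama 2 chat model with Hugging Face Transformers.
    # Assumes access to the gated meta-llama/Llama-2-7b-chat-hf checkpoint
    # (requires accepting Meta's Llama 2 Community License on the Hub) and a GPU.
    # device_map="auto" additionally requires the accelerate package.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "meta-llama/Llama-2-7b-chat-hf"

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.float16,  # half precision to reduce memory use
        device_map="auto",          # place layers on available GPU(s)
    )

    prompt = "Explain in two sentences what an auto-regressive language model is."
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
    ```
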

  • LMSYS Org

    Vicuna-13B

    FREE

    Vicuna-13B is an open-source chatbot developed by a team of researchers from UC Berkeley, CMU, Stanford, MBZUAI, and UC San Diego. The chatbot was trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT. Both 13B and 7B parameter models are available on Hugging Face.

    Vicuna-13B achieves more than 90% of the quality of OpenAI’s ChatGPT and Google Bard while outperforming other models like LLaMA and Stanford Alpaca in more than 90% of cases. The code, weights, and an online demo are publicly available for non-commercial use. Here is a link to learn more about how it compares to other models: https://lmsys.org/blog/2023-03-30-vicuna/.

    To use this model, you first need to obtain the LLaMA weights and convert them into Hugging Face format (a minimal usage sketch follows this entry). The cost of training Vicuna-13B was around $300.
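
    As a rough sketch of what usage looks like once you have Hugging Face-format weights, the example below loads a Vicuna checkpoint with Transformers and applies the Vicuna-style conversation format. The model id lmsys/vicuna-13b-v1.3 and the exact system prompt are assumptions; earlier releases were distributed as deltas that must first be applied to converted LLaMA weights, so check the LMSYS/FastChat instructions for your version.

    ```python
    # Minimal sketch: chatting with a Vicuna checkpoint via Hugging Face Transformers.
    # Assumptions: a merged checkpoint such as lmsys/vicuna-13b-v1.3 is available in
    # Hugging Face format (earlier releases shipped as deltas over LLaMA weights),
    # and the "USER: ... ASSISTANT:" conversation template applies to this version.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "lmsys/vicuna-13b-v1.3"  # assumed model id; see the FastChat docs

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.float16, device_map="auto"
    )

    system = ("A chat between a curious user and an artificial intelligence assistant. "
              "The assistant gives helpful, detailed, and polite answers to the user's questions.")
    user_msg = "What is instruction fine-tuning, in one paragraph?"
    prompt = f"{system} USER: {user_msg} ASSISTANT:"

    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
    # Decode only the newly generated tokens, not the prompt.
    print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
    ```
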

  • Yandex

    YaLM

    FREE
    YaLM 100B is a GPT-like neural network for generating and processing text. It can be used freely by developers and researchers from all over the world. It took 65 days to train the model on a cluster of 800 A100 graphics cards, using 1.7 TB of online texts, books, and countless other sources in both English and Russian. Researchers and developers can use this corporate-scale solution to tackle the most complex problems associated with natural language processing.
    Training details and best practices on acceleration and stabilization can be found in articles on Medium (English) and Habr (Russian). The model is published under the Apache 2.0 license, which permits both research and commercial use.
