Compare Models
Stanford University
Alpaca
FREE: Stanford University released an instruction-following language model called Alpaca, fine-tuned from Meta's LLaMA 7B model. Alpaca was trained on 52K instruction-following demonstrations generated in the style of self-instruct using text-davinci-003. Alpaca aims to help the academic community engage with instruction-following models by providing an open source model that rivals OpenAI's GPT-3.5 (text-davinci-003). To this end, Alpaca has been kept small and cheap to reproduce: fine-tuning took 3 hours on 8x A100 GPUs, at a cost of less than $100. All training data and techniques have been released. The Alpaca license explicitly prohibits commercial use; the model can only be used for research and personal projects, and users must also follow LLaMA's license agreement.
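The "less than $100" figure is easy to sanity-check with back-of-envelope arithmetic. In the sketch below, the GPU count and duration come from the entry above, while the implied per-GPU-hour rate is derived here, not quoted from Stanford:

```python
# Back-of-envelope check of the Alpaca fine-tuning cost figures.
gpus = 8      # 8x A100, as reported
hours = 3     # reported fine-tuning time
budget = 100  # reported cost upper bound, USD

gpu_hours = gpus * hours            # total A100-hours consumed
implied_rate = budget / gpu_hours   # max $/A100-hour for the claim to hold

print(gpu_hours)     # 24 A100-hours
print(implied_rate)  # ~4.17 USD per A100-hour
```

At typical cloud A100 rental prices, 24 GPU-hours fits comfortably under that budget, which is consistent with the claim.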
BloombergGPT
BloombergGPT
OTHER: BloombergGPT represents the first step in developing and applying LLM and generative AI technology for the financial industry. BloombergGPT has been trained on enormous amounts of financial data and is purpose-built for finance. Training on this mixed dataset yields a model that outperforms existing LLMs on financial tasks by significant margins without sacrificing performance on general LLM benchmarks. BloombergGPT can perform a range of NLP tasks such as sentiment analysis, named entity recognition, news classification, and even writing headlines. With BloombergGPT, traders and analysts can produce financial analysis and insights more quickly and efficiently, saving valuable time for other critical tasks. To use BloombergGPT, you need access to Bloomberg's Terminal software, a platform investors and financial professionals use to access real-time market data, breaking news, financial research, and advanced analytics. Bloomberg also offers a variety of other subscription options, including subscriptions for financial institutions, universities, and governments. The price of a Bloomberg Terminal varies depending on the type of subscription and the number of users.
ChatGLM
ChatGLM-6B
FREE: Researchers at Tsinghua University in China have developed the ChatGLM series of models, which offer performance comparable to models such as GPT-3 and BLOOM. ChatGLM-6B is an open bilingual language model (trained on Chinese and English) based on the General Language Model (GLM) framework, with 6.2B parameters. With quantization, users can deploy it locally on consumer-grade graphics cards (only 6GB of GPU memory is required at the INT4 quantization level). The following models are available: ChatGLM-130B (an open source LLM), ChatGLM-100B (not open source but available through invite-only access), and ChatGLM-6B (a lightweight open source alternative). ChatGLM LLMs are available under an Apache-2.0 license that allows commercial use. We have included the link to the Hugging Face page where you can try the ChatGLM-6B chatbot for free.
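The 6GB figure can be roughly sanity-checked with a weight-memory estimate. The formula below is a simplification (it ignores activations, the KV cache, and framework overhead), so treat the numbers as lower bounds rather than exact requirements:

```python
# Rough weight-memory estimate for ChatGLM-6B under quantization.
# Simplifying assumption: memory ≈ parameter count × bytes per parameter.
PARAMS = 6.2e9  # 6.2B parameters

def weight_memory_gb(bits_per_param):
    """Approximate weight storage in GB for a given quantization bit-width."""
    return PARAMS * bits_per_param / 8 / 1e9

fp16_gb = weight_memory_gb(16)  # ~12.4 GB: too large for a 6 GB consumer card
int4_gb = weight_memory_gb(4)   # ~3.1 GB: leaves headroom within the stated 6 GB
```

This shows why INT4 quantization is what makes consumer-card deployment feasible: the same weights at FP16 would need roughly four times the memory.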
EleutherAI
GPT-J
FREE: EleutherAI is a leading non-profit research institute focused on large-scale artificial intelligence research. EleutherAI has trained and released several LLMs and the codebases used to train them. GPT-J can be used for code generation, building a chatbot, story writing, language translation, and search. GPT-J learns an inner representation of the English language that can be used to extract features useful for downstream tasks. The model is best at what it was pretrained for: generating text from a prompt. EleutherAI has a web page where you can test how GPT-J works, or you can run GPT-J on Google Colab or through the Hugging Face Transformers library.
EleutherAI
GPT-NeoX-20B
FREE: GPT-NeoX-20B is a 20 billion parameter autoregressive language model trained on the Pile using EleutherAI's GPT-NeoX library. Its architecture intentionally resembles that of GPT-3 and is almost identical to that of GPT-J-6B. Its training dataset contains a multitude of English-language texts, reflecting the general-purpose nature of the model. It is a transformer-based, English-only language model, and thus cannot be used for translation or for generating text in other languages. It is freely and openly available to the public through a permissive license.
NVIDIA
LaunchPad
FREE (via Application Form): NVIDIA LaunchPad provides free access to enterprise NVIDIA hardware and software through an internet browser. NVIDIA customers can experience the power of AI with end-to-end solutions through guided hands-on labs or use NVIDIA-Certified Systems as a sandbox, but you need to fill out an Application Form and wait for approval. Sample labs include training and deploying a support chatbot, deploying an end-to-end AI workload, configuring and deploying a language model on a hardware accelerator, and deploying a fraud detection model.
Microsoft, NVIDIA
MT-NLG
OTHER: MT-NLG (Megatron-Turing Natural Language Generation) uses the architecture of the transformer-based Megatron to generate coherent and contextually relevant text for a range of tasks, including completion prediction, reading comprehension, commonsense reasoning, natural language inference, and word sense disambiguation. MT-NLG is the successor to Microsoft's Turing NLG 17B and NVIDIA's Megatron-LM 8.3B. The MT-NLG model is three times larger than GPT-3 (530B vs 175B parameters). Following the original Megatron work, NVIDIA and Microsoft trained the model on over 4,000 GPUs. NVIDIA has announced an Early Access program for a managed API service to the MT-NLG model for organizations and researchers.
NVIDIA
NeMo
FREE: NVIDIA NeMo, part of the NVIDIA AI platform, is an end-to-end, cloud-native enterprise framework for building, customizing, and deploying generative AI models. NeMo makes generative AI model development easy, cost-effective, and fast for enterprises. NeMo has separate collections of Automatic Speech Recognition (ASR), Natural Language Processing (NLP), and Text-to-Speech (TTS) models. Each collection consists of prebuilt modules that include everything needed to train on your data. The NeMo framework supports both language and image generative AI models. Currently, the workflow for language is in open beta and the workflow for images is in early access; you must be a member of the NVIDIA Developer Program and logged in with your organization's email address to access them. NeMo is licensed under the Apache License 2.0, a permissive open source license that allows commercial use.
StableLM
StableLM-Base-Alpha-7B
FREE: Stability AI released a new open source language model, StableLM. The Alpha version of the model is available in 3 billion and 7 billion parameter sizes. StableLM is trained on a new experimental dataset built on The Pile, but three times larger, with 1.5 trillion tokens of content. The richness of this dataset gives StableLM surprisingly high performance in conversational and coding tasks despite its small size. The models are available on GitHub and on Hugging Face, and developers can freely inspect, use, and adapt the StableLM base models for commercial or research purposes subject to the terms of the CC BY-SA-4.0 license.
TruthGPT
TruthGPT
OTHER: TruthGPT is a large language model (LLM) that, according to Elon Musk, will be a "maximum truth-seeking" AI. In terms of how it works, it filters through thousands of datasets and draws educated conclusions to provide answers that are as unbiased as possible. TruthGPT is powered by $TRUTH, a tradable cryptocurrency on the Binance Smart Chain, and $TRUTH holders will soon have access to additional benefits when using TruthGPT AI. When we learn more, we will update this section.
Yandex
YaLM
FREE: YaLM 100B is a GPT-like neural network for generating and processing text. It can be used freely by developers and researchers from all over the world. It took 65 days to train the model on a cluster of 800 A100 graphics cards, using 1.7 TB of online texts, books, and countless other sources in both English and Russian. Researchers and developers can use this corporate-scale solution to tackle the most complex problems in natural language processing. Training details and best practices on acceleration and stabilization can be found in articles on Medium (English) and Habr (Russian). The model is published under the Apache 2.0 license, which permits both research and commercial use.