Compare Models
-
Stanford University
Alpaca
FREE
Stanford University released an instruction-following language model called Alpaca, fine-tuned from Meta's LLaMA 7B model. Alpaca was trained on 52K instruction-following demonstrations generated in the style of self-instruct using text-davinci-003. Alpaca aims to help the academic community engage with these models by providing an open-source model that rivals OpenAI's GPT-3.5 (text-davinci-003). To this end, Alpaca has been kept small and cheap to reproduce: fine-tuning took 3 hours on 8x A100 GPUs, which costs less than $100. All training data and techniques have been released. The Alpaca license explicitly prohibits commercial use; the model may only be used for research and personal projects, and users must follow LLaMA's license agreement.
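The snippet below is a minimal sketch of the instruction-following prompt style used for Alpaca's 52K demonstrations; the template wording mirrors what was published with the training data, while the function name and example strings are illustrative.

```python
# Sketch: formatting an example in the Alpaca instruction-following style.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task, paired with an input that "
    "provides further context. Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n"
)

def build_prompt(instruction: str, context: str) -> str:
    """Return a single Alpaca-style prompt for the given instruction and context."""
    return ALPACA_TEMPLATE.format(instruction=instruction, input=context)

print(build_prompt("Summarize the following text.", "Alpaca was fine-tuned from LLaMA 7B."))
```
-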
Microsoft
Azure OpenAI Service
OTHER
Microsoft's Azure OpenAI Service lets you take advantage of large-scale generative AI models with deep understanding of language and code to enable new reasoning and comprehension capabilities for building cutting-edge applications. Apply these coding and language models to a variety of use cases, such as writing assistance, code generation, and reasoning over data. Detect and mitigate harmful use with built-in responsible AI and access enterprise-grade Azure security. GPT-4 is available in preview in the Azure OpenAI Service; GPT-4 8K and 32K instances are billed per 1,000 tokens, and pricing can be found under those models on the compare site. Note that Azure OpenAI Service customers can also access GPT-3.5, ChatGPT, and DALL·E.
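Below is a hedged sketch of calling a chat model through Azure OpenAI with the official openai Python package (v1+); the endpoint, key, API version, and deployment name are placeholders you configure in your own Azure resource.

```python
# Sketch: a chat completion against an Azure OpenAI deployment (placeholder values).
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",  # placeholder endpoint
    api_key="YOUR-AZURE-OPENAI-KEY",                          # placeholder key
    api_version="2024-02-01",                                 # use a version your resource supports
)

response = client.chat.completions.create(
    model="my-gpt-4-deployment",  # your *deployment* name, not the model family name
    messages=[{"role": "user", "content": "Summarize the benefits of retrieval-augmented generation."}],
)
print(response.choices[0].message.content)
```
-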
Microsoft
Bing Search APIs
OTHER
Microsoft's Bing AI search engine is powered by GPT-4, and Microsoft claims the new model is faster and more accurate than ever. The Bing Search APIs provide a variety of APIs with trained models for your use: they add intelligent search to your app, drawing on hundreds of billions of webpages, images, videos, and news items to provide relevant results without ads. Results can be automatically customized to your users' locations or markets, increasing relevancy by staying local. Pricing for the Bing Search APIs varies by feature; customers interested in more flexible terms for presenting Bing API results with their own models can check the website for prices per 1,000 transactions.
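As a hedged sketch, a Bing Web Search query is a single REST call against the documented v7 endpoint; the subscription key, query, and result handling below are illustrative.

```python
# Sketch: querying the Bing Web Search API (v7) and printing the top results.
import requests

BING_ENDPOINT = "https://api.bing.microsoft.com/v7.0/search"
headers = {"Ocp-Apim-Subscription-Key": "YOUR-BING-SEARCH-KEY"}  # placeholder key
params = {"q": "large language model pricing", "mkt": "en-US", "count": 5}

resp = requests.get(BING_ENDPOINT, headers=headers, params=params, timeout=10)
resp.raise_for_status()
for page in resp.json().get("webPages", {}).get("value", []):
    print(page["name"], "-", page["url"])
```
-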
ChatGLM
ChatGLM-6B
FREE
Researchers at Tsinghua University in China have developed the ChatGLM series of models, which have comparable performance to other models such as GPT-3 and BLOOM. ChatGLM-6B is an open bilingual language model (trained on Chinese and English) based on the General Language Model (GLM) framework, with 6.2B parameters. With quantization, users can deploy it locally on consumer-grade graphics cards (only 6 GB of GPU memory is required at the INT4 quantization level). The following models are available: ChatGLM-130B (an open-source LLM), ChatGLM-100B (not open source but available through invite-only access), and ChatGLM-6B (a lightweight open-source alternative). ChatGLM LLMs are available under an Apache-2.0 license, which allows commercial use. We have included the link to the Hugging Face page where you can try the ChatGLM-6B chatbot for free.
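The snippet below is a hedged sketch of local deployment with INT4 quantization, following the usage pattern published on the model's Hugging Face page; it assumes a CUDA GPU with roughly 6 GB of free memory and that transformers is allowed to load the repository's custom modeling code.

```python
# Sketch: loading ChatGLM-6B locally with INT4 quantization via Hugging Face transformers.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
model = model.quantize(4).half().cuda().eval()  # INT4 quantization to fit consumer GPUs

response, history = model.chat(tokenizer, "Briefly describe the GLM framework.", history=[])
print(response)
```
-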
DeepMind
Chinchilla AI
OTHER
Google DeepMind's Chinchilla AI is still in the testing phase. Once released, Chinchilla AI will be useful for developing various artificial intelligence tools, such as chatbots, virtual assistants, and predictive models. It functions in a manner analogous to other large language models such as GPT-3 (175B parameters), Jurassic-1 (178B parameters), Gopher (280B parameters), and Megatron-Turing NLG (530B parameters), but because Chinchilla is smaller (70B parameters), inference and fine-tuning cost less, easing the use of such models for smaller companies or universities that may not have the budget or hardware to run larger models.
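To make the cost claim concrete, here is a back-of-envelope sketch (assuming fp16 weights at 2 bytes per parameter, ignoring activations and optimizer state) of how much memory each model needs just to hold its weights for inference.

```python
# Sketch: rough fp16 weight-memory footprint by parameter count (assumption: 2 bytes/param).
BYTES_PER_PARAM_FP16 = 2

models = {
    "Chinchilla": 70e9,
    "GPT-3": 175e9,
    "Jurassic-1": 178e9,
    "Gopher": 280e9,
    "Megatron-Turing NLG": 530e9,
}

for name, params in models.items():
    gib = params * BYTES_PER_PARAM_FP16 / 1024**3
    print(f"{name:>22}: ~{gib:,.0f} GiB of weights")
```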
-
NVIDIA
LaunchPad
FREE
NVIDIA LaunchPad provides free access to enterprise NVIDIA hardware and software through an internet browser. NVIDIA customers can experience the power of AI with end-to-end solutions through guided hands-on labs or use NVIDIA-Certified Systems as a sandbox, but you need to fill out an application form and wait for approval. Sample labs include training and deploying a support chatbot, deploying an end-to-end AI workload, configuring and deploying a language model on a hardware accelerator, and deploying a fraud detection model.
*FREE via Application Form
-
Microsoft, NVIDIA
MT-NLG
OTHER
MT-NLG (Megatron-Turing Natural Language Generation) uses the architecture of the transformer-based Megatron to generate coherent and contextually relevant text for a range of tasks, including completion prediction, reading comprehension, commonsense reasoning, natural language inference, and word sense disambiguation. MT-NLG is the successor to Microsoft's Turing NLG 17B and NVIDIA's Megatron-LM 8.3B, and it is roughly three times larger than GPT-3 (530B vs. 175B parameters). Following the original Megatron work, NVIDIA and Microsoft trained the model on over 4,000 GPUs. NVIDIA has announced an Early Access program for its managed API service to the MT-NLG model for organizations and researchers.
-
NVIDIA
NeMo
FREE
NVIDIA NeMo, part of the NVIDIA AI platform, is an end-to-end, cloud-native enterprise framework for building, customizing, and deploying generative AI models. NeMo makes generative AI model development easy, cost-effective, and fast for enterprises. NeMo has separate collections for Automatic Speech Recognition (ASR), Natural Language Processing (NLP), and Text-to-Speech (TTS) models; each collection consists of prebuilt modules that include everything needed to train on your data. The NeMo framework supports both language and image generative AI models: currently the language workflow is in open beta and the image workflow is in early access, and you must be a member of the NVIDIA Developer Program and logged in with your organization's email address to access them. NeMo is licensed under the Apache License 2.0, a permissive open-source license that allows commercial use.
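As a hedged illustration of the collection-based workflow, the sketch below pulls a pretrained model from the ASR collection and runs offline transcription; it assumes nemo_toolkit[asr] is installed, and the pretrained model name and audio path are placeholders.

```python
# Sketch: offline speech recognition with a pretrained model from NeMo's ASR collection.
import nemo.collections.asr as nemo_asr

# Download a pretrained English CTC model and transcribe a local audio file.
asr_model = nemo_asr.models.EncDecCTCModel.from_pretrained(model_name="QuartzNet15x5Base-En")
transcripts = asr_model.transcribe(["sample_audio.wav"])  # placeholder path to a 16 kHz WAV file
print(transcripts[0])
```
-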
Stability AI
StableLM-Base-Alpha-7B
FREE
Stability AI released a new open-source language model, StableLM. The Alpha version of the model is available in 3 billion and 7 billion parameter sizes. StableLM is trained on a new experimental dataset built on The Pile, but three times larger, with 1.5 trillion tokens of content. The richness of this dataset gives StableLM surprisingly high performance in conversational and coding tasks despite its small size. The models are available on GitHub and Hugging Face, and developers can freely inspect, use, and adapt the StableLM base models for commercial or research purposes, subject to the terms of the CC BY-SA-4.0 license.
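The sketch below loads the 7B Alpha base model from Hugging Face with transformers; the repository id matches Stability AI's published checkpoints, while the dtype, device placement, and generation settings are illustrative assumptions.

```python
# Sketch: text generation with the StableLM-Base-Alpha-7B checkpoint from Hugging Face.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("stabilityai/stablelm-base-alpha-7b")
model = AutoModelForCausalLM.from_pretrained(
    "stabilityai/stablelm-base-alpha-7b",
    torch_dtype=torch.float16,
    device_map="auto",  # requires the `accelerate` package; places weights on available GPUs
)

inputs = tokenizer("Open-source language models are useful because", return_tensors="pt").to(model.device)
tokens = model.generate(**inputs, max_new_tokens=48, do_sample=True, temperature=0.7)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```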
-
Microsoft
VALL-E
OTHER
VALL-E is an LLM for text-to-speech synthesis (TTS) developed by Microsoft (technically, it is a neural codec language model). Its creators state that VALL-E could be used for high-quality text-to-speech applications; for speech editing, where a recording of a person could be edited and changed from a text transcript (making them say something they originally didn't); and for audio content creation when combined with other generative AI models. Studies indicate that VALL-E notably surpasses the leading zero-shot TTS system in speech naturalness and similarity to the speaker, and it has been observed to retain the emotional expression and ambient acoustics of the speaker in the synthesized output. Unfortunately, VALL-E is not available for any form of public consumption at this time: it remains a research project, and there is no customer onboarding queue or waitlist (but you can apply to be part of the first testers group).
-
LMSYS Org
Vicuna-13B
FREE
Vicuna-13B is an open-source chatbot developed by a team of researchers from UC Berkeley, CMU, Stanford, MBZUAI, and UC San Diego. The chatbot was trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT, and 13B and 7B parameter models are available on Hugging Face.
Vicuna-13B achieves more than 90% of the quality of OpenAI's ChatGPT and Google Bard while outperforming other models like LLaMA and Stanford Alpaca in more than 90% of cases. The code, weights, and an online demo are publicly available for non-commercial use. Here is a link to learn more about how it compares to other models: https://lmsys.org/blog/2023-03-30-vicuna/.
To use this model, you need to obtain the LLaMA weights first and convert them into Hugging Face format (see the sketch below); the cost of training Vicuna-13B was around $300.
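The sketch below assumes the LLaMA weights have already been converted to Hugging Face format and that the released Vicuna delta has been applied on top of them (the project's FastChat repository ships an apply_delta script for that step; see its README for the exact command). The local path is a placeholder for the merged checkpoint.

```python
# Sketch: loading a merged Vicuna-13B checkpoint (LLaMA base + Vicuna delta) with transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

merged_path = "./vicuna-13b"  # placeholder: directory produced after applying the delta weights
tokenizer = AutoTokenizer.from_pretrained(merged_path)
model = AutoModelForCausalLM.from_pretrained(merged_path)

prompt = "USER: What makes Vicuna different from Alpaca? ASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```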
-
Yandex
YaLM
FREE
YaLM 100B is a GPT-like neural network for generating and processing text that can be used freely by developers and researchers all over the world. It took 65 days to train the model on a cluster of 800 A100 graphics cards and 1.7 TB of online texts, books, and countless other sources in both English and Russian. Researchers and developers can use this corporate-scale solution to solve the most complex problems associated with natural language processing. Training details and best practices on acceleration and stabilization can be found in the Medium (English) and Habr (Russian) articles. The model is published under the Apache 2.0 license, which permits both research and commercial use.