Compare Models
Stanford University
Alpaca
FREE: Stanford University released an instruction-following language model called Alpaca, fine-tuned from Meta’s LLaMA 7B model. Alpaca was trained on 52K instruction-following demonstrations generated in the style of self-instruct using text-davinci-003. Alpaca aims to help the academic community engage with such models by providing an open source model that rivals OpenAI’s GPT-3.5 (text-davinci-003). To this end, Alpaca has been kept small and cheap to reproduce: fine-tuning took 3 hours on 8x A100s, which costs less than $100. All training data and techniques have been released. The Alpaca license explicitly prohibits commercial use; the model may only be used for research or personal projects, and users must also follow LLaMA’s license agreement.
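As a rough illustration, the released 52K demonstrations follow a simple instruction/input/response template. The sketch below formats a prompt in that style; the exact wording and field names are taken from the published Stanford Alpaca repository and should be treated as an assumption to verify against the release.

```python
# Minimal sketch of the Alpaca-style instruction prompt (template assumed
# from the published Stanford Alpaca repo; verify against the release).

def build_alpaca_prompt(instruction: str, model_input: str = "") -> str:
    """Format an instruction (and optional input) the way the 52K
    self-instruct demonstrations are structured."""
    if model_input:
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{model_input}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

print(build_alpaca_prompt("Summarize the benefits of open source LLMs."))
```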
Microsoft
Azure OpenAI Service
OTHER: Microsoft’s Azure OpenAI Service lets you take advantage of large-scale generative AI models with deep understanding of language and code to enable new reasoning and comprehension capabilities for building cutting-edge applications. These coding and language models can be applied to a variety of use cases, such as writing assistance, code generation, and reasoning over data, while built-in responsible AI helps detect and mitigate harmful use and enterprise-grade Azure security protects access. GPT-4 is available in preview in the Azure OpenAI Service; billing for the GPT-4 8K and 32K instances is per 1K tokens, and the rates can be found under those models on the tokens compare site. Note that Azure OpenAI Service customers can also access GPT-3.5, ChatGPT, and DALL·E.
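As a rough sketch, calling a GPT-4 deployment on an Azure OpenAI resource with the pre-1.0 openai Python SDK might look like the following. The resource name, deployment name, and API version are placeholders, not values from this entry; check your own resource and SDK version.

```python
import os
import openai

# Point the openai SDK at an Azure OpenAI resource (names are placeholders).
openai.api_type = "azure"
openai.api_base = "https://YOUR-RESOURCE-NAME.openai.azure.com/"
openai.api_version = "2023-05-15"          # assumed API version; check your resource
openai.api_key = os.environ["AZURE_OPENAI_KEY"]

response = openai.ChatCompletion.create(
    engine="gpt-4",                        # the name of YOUR deployment, not the model
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain what reasoning over data means."},
    ],
    max_tokens=200,
)
print(response["choices"][0]["message"]["content"])
```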
BigScience
BLOOM
FREE: BigScience Large Open-science Open-access Multilingual Language Model (BLOOM) is a transformer-based LLM. It was created by over 1,000 AI researchers to provide a free, multilingual large language model for everyone who wants to try it. BLOOM is an autoregressive LLM, trained to continue text from a prompt on vast amounts of text data using industrial-scale computational resources, and it can output coherent text in 46 languages and 13 programming languages. To interact with the API, you need to request a token, which is done with a POST request to the server; tokens are only valid for two weeks, after which a new one must be generated. With around 176B parameters, BLOOM is considered an alternative to OpenAI models. There is a downloadable model, and a hosted API is available.
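As an illustration only, a call to a hosted BLOOM endpoint with a bearer token might look like the sketch below. The Hugging Face Inference API endpoint is assumed here, since this entry does not name the specific server, and the token-request step differs per host.

```python
import requests

# Assumed endpoint: the Hugging Face Inference API hosting bigscience/bloom.
# The access token must be requested from the hosting service beforehand.
API_URL = "https://api-inference.huggingface.co/models/bigscience/bloom"
HEADERS = {"Authorization": "Bearer YOUR_ACCESS_TOKEN"}  # placeholder token

def generate(prompt: str) -> str:
    """Send a prompt and return BLOOM's continuation."""
    payload = {"inputs": prompt, "parameters": {"max_new_tokens": 50}}
    resp = requests.post(API_URL, headers=HEADERS, json=payload, timeout=60)
    resp.raise_for_status()
    return resp.json()[0]["generated_text"]

print(generate("The BLOOM model can write in 46 languages, for example"))
```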
Anthropic
Claude 2 (Web Browser Version)
FREE: Anthropic’s Claude 2 is now available to the public if you’re in the US or UK. For the web browser version, just click “Talk to Claude” and you’ll be prompted to provide an email address; after you confirm it, you’ll be ready to go. Claude 2 scored 76.5 percent on the multiple-choice section of the Bar exam and in the 90th percentile on the reading and writing portions of the GRE. Its coding skills have improved over its predecessor’s, scoring 71.2 percent on a Python coding test compared to Claude’s 56 percent. While the Google-backed Anthropic initially launched Claude in March, the chatbot was only available to businesses by request or as an app in Slack. With Claude 2, Anthropic is building on the chatbot’s existing capabilities with a number of improvements.
Microsoft, NVIDIA
MT-NLG
OTHER: MT-NLG (Megatron-Turing Natural Language Generation) uses the architecture of the transformer-based Megatron to generate coherent and contextually relevant text for a range of tasks, including completion prediction, reading comprehension, commonsense reasoning, natural language inference, and word sense disambiguation. MT-NLG is the successor to Microsoft’s Turing NLG 17B and NVIDIA’s Megatron-LM 8.3B, and at 530B parameters it is roughly three times larger than GPT-3 (175B). Following the original Megatron work, NVIDIA and Microsoft trained the model on over 4,000 GPUs. NVIDIA has announced an Early Access program for a managed API service to the MT-NLG model for organizations and researchers.
RedPajama
RedPajama-INCITE-7B-Instruct
FREE: The RedPajama project aims to create a set of leading open source models. RedPajama-INCITE-7B-Instruct was developed by Together and leaders from the open source AI community. It is the top-performing open source entry on the HELM benchmarks, surpassing other cutting-edge open models such as LLaMA-7B, Falcon-7B, and MPT-7B. The instruct-tuned model is designed for versatility and shines at few-shot tasks. The Instruct, Chat, and Base models, along with ten interim checkpoints, are now available on Hugging Face, and all the RedPajama LLMs come with commercial licenses under Apache 2.0. Play with the RedPajama chat model version here: https://lnkd.in/g3npSEbg
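Since the weights are published under Apache 2.0, a minimal local-inference sketch with the Hugging Face transformers library might look like the following. The model ID togethercomputer/RedPajama-INCITE-7B-Instruct is assumed from the Hugging Face hub, and a GPU with enough memory for a 7B model is required.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hugging Face model ID for the instruct-tuned 7B checkpoint.
MODEL_ID = "togethercomputer/RedPajama-INCITE-7B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,   # half precision to fit a single modern GPU
    device_map="auto",
)

prompt = "Q: What is the capital of France?\nA:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```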
Amazon
SageMaker
FREE: Amazon SageMaker enables developers to create, train, and deploy machine learning (ML) models in the cloud, and also to deploy ML models on embedded systems and edge devices. Amazon SageMaker JumpStart helps you quickly and easily get started with machine learning. The solutions are fully customizable and support one-click deployment and fine-tuning of more than 150 popular open source models, such as natural language processing, object detection, and image classification models, that can help with extracting and analyzing data, fraud detection, churn prediction, and personalized recommendations. The Hugging Face LLM Inference DLCs on Amazon SageMaker support the following models: BLOOM / BLOOMZ, MT0-XXL, Galactica, SantaCoder, GPT-NeoX 20B (joi, pythia, lotus, rosey, chip, RedPajama, open assistant), FLAN-T5-XXL (T5-11B), Llama (vicuna, alpaca, koala), Starcoder / SantaCoder, and Falcon 7B / Falcon 40B. Hugging Face’s LLM DLC is a new purpose-built inference container for easily deploying LLMs in a secure and managed environment.
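As a rough sketch of deploying one of these models with the Hugging Face LLM DLC via the SageMaker Python SDK: the IAM role, container version, model ID, and instance type below are placeholders or assumptions, so check the current SDK documentation for the exact values.

```python
from sagemaker.huggingface import HuggingFaceModel, get_huggingface_llm_image_uri

role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder IAM role

# Resolve the Hugging Face LLM DLC image (version string is an assumption).
image_uri = get_huggingface_llm_image_uri("huggingface", version="0.8.2")

model = HuggingFaceModel(
    image_uri=image_uri,
    role=role,
    env={
        "HF_MODEL_ID": "tiiuae/falcon-7b-instruct",  # one of the supported models
        "SM_NUM_GPUS": "1",
    },
)

# Deploy to a real-time endpoint (instance type chosen for a 7B model).
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.2xlarge",
    container_startup_health_check_timeout=300,
)

print(predictor.predict({"inputs": "What is Amazon SageMaker JumpStart?"}))
```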
Microsoft
VALL-E
OTHER: VALL-E is an LLM for text-to-speech (TTS) synthesis developed by Microsoft (technically, a neural codec language model). Its creators state that VALL-E could be used for high-quality text-to-speech applications; for speech editing, where a recording of a person could be edited and changed from a text transcript (making them say something they originally didn’t); and for audio content creation when combined with other generative AI models. Studies indicate that VALL-E notably surpasses the leading zero-shot TTS system in speech authenticity and resemblance to the speaker, and that it is capable of retaining the speaker’s emotional expression and the ambient acoustics within the synthesized output. Unfortunately, VALL-E is not available for any form of public consumption at this time. At the time of writing, VALL-E is a research project, and there is no customer onboarding queue or waitlist (but you can apply to be part of the first testers group).