Compare Models
-
NVIDIA
NeMo
FREE
NVIDIA NeMo, part of the NVIDIA AI platform, is an end-to-end, cloud-native enterprise framework for building, customizing, and deploying generative AI models. NeMo makes generative AI model development easy, cost-effective, and fast for enterprises. NeMo has separate collections for Automatic Speech Recognition (ASR), Natural Language Processing (NLP), and Text-to-Speech (TTS) models. Each collection consists of prebuilt modules that include everything needed to train on your data. The NeMo framework supports both language and image generative AI models. Currently, the workflow for language is in open beta, and the workflow for images is in early access. You must be a member of the NVIDIA Developer Program and logged in with your organization’s email address to access it. NeMo is licensed under the Apache License 2.0, a permissive open source license that allows commercial use.
-
OpenAI
text-davinci-003
$0.02
Text-davinci-003, commonly referred to as GPT-3.5, is a variant of the GPT-3 model. While both Davinci and text-davinci-003 are powerful models, they differ in a few key ways. Text-davinci-003 is a newer, more capable model explicitly designed for instruction-following tasks, and it was trained on a more recent dataset containing data up to June 2021. It can handle any language task with better quality, longer output, and more consistent instruction-following than the Curie, Babbage, or Ada models, and it supports a longer context window (maximum prompt plus completion length) than Davinci. For those using OpenAI’s API, GPT-3.5-turbo may be a better choice than text-davinci-003 for tasks that require high accuracy in math, zero-shot classification, or sentiment analysis. Of note, GPT-3.5-turbo performs at a capability similar to text-davinci-003 but at 10 percent of the price per token, so OpenAI recommends GPT-3.5-turbo for most use cases.
-
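The pricing difference can be made concrete with a small cost estimator. This is a sketch only: the text-davinci-003 price comes from the $0.02 per 1,000 tokens listed above, and the GPT-3.5-turbo price is derived from the "10 percent the price per token" figure, not quoted directly.

```python
# Rough cost comparison between text-davinci-003 and GPT-3.5-turbo.
# Prices are USD per 1,000 tokens; the turbo price is an assumption
# derived from the "10 percent the price per token" figure above.
PRICE_PER_1K = {
    "text-davinci-003": 0.02,
    "gpt-3.5-turbo": 0.002,  # assumed: 10% of text-davinci-003's price
}

def estimate_cost(model: str, tokens: int) -> float:
    """Return the estimated USD cost of processing `tokens` tokens."""
    return PRICE_PER_1K[model] * tokens / 1000

# Example: a workload of 1 million tokens.
davinci_cost = estimate_cost("text-davinci-003", 1_000_000)  # ~ $20
turbo_cost = estimate_cost("gpt-3.5-turbo", 1_000_000)       # ~ $2
```

For a million-token workload, the same capability costs roughly a tenth as much on GPT-3.5-turbo, which is why it is the recommended default.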
OpenAI
text-embedding-ada-002
$0.0001
An embedding model such as Ada converts text into numerical vector representations, enabling computers to understand and process natural language more effectively. This process is crucial for developing machine learning algorithms and artificial intelligence systems that can interact with humans, analyze text, or make predictions based on text. OpenAI’s text embeddings are built for advanced search, clustering, topic modeling, and classification functionality. Access is available through a request to OpenAI’s API.
-
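In search and clustering applications, embedding vectors are typically compared by cosine similarity. The sketch below uses tiny hand-made vectors as stand-ins for the 1,536-dimensional vectors the real model returns; the vectors and their names are illustrative only.

```python
import math

def cosine_similarity(a: list, b: list) -> float:
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional stand-ins for real text-embedding-ada-002 outputs,
# which are 1,536-dimensional. Semantically similar texts should map to
# nearby vectors, so "cat" and "kitten" score higher than "cat" and "invoice".
v_cat = [0.9, 0.1, 0.0]
v_kitten = [0.85, 0.15, 0.05]
v_invoice = [0.0, 0.2, 0.95]

assert cosine_similarity(v_cat, v_kitten) > cosine_similarity(v_cat, v_invoice)
```

This nearest-neighbor comparison is the core operation behind the advanced search and clustering use cases mentioned above.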
Microsoft
VALL-E
OTHER
VALL-E is an LLM for text-to-speech (TTS) synthesis developed by Microsoft (technically, it is a neural codec language model). Its creators state that VALL-E could be used for high-quality text-to-speech applications; for speech editing, where a recording of a person could be changed from a text transcript (making them say something they originally didn’t); and for audio content creation when combined with other generative AI models. Studies indicate that VALL-E notably surpasses the leading zero-shot TTS system in speech authenticity and resemblance to the speaker. Furthermore, it has been observed that VALL-E is capable of retaining the emotional expression and ambient acoustics of the speaker within the synthesized output. Unfortunately, VALL-E is not available for any form of public consumption at this time. At the time of writing, VALL-E is a research project, and there is no customer onboarding queue or waitlist (though you can apply to be part of the first testers group).
-
LMSYS Org
Vicuna-13B
FREE
Vicuna-13B is an open-source chatbot developed by a team of researchers from UC Berkeley, CMU, Stanford, MBZUAI, and UC San Diego. The chatbot was trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT. Both 13B and 7B parameter models are available on Hugging Face.
Vicuna-13B achieves more than 90% of the quality of OpenAI ChatGPT and Google Bard while outperforming other models like LLaMA and Stanford Alpaca in more than 90% of cases. The code, weights, and an online demo are publicly available for non-commercial use. To learn more about how it compares to other models, see https://lmsys.org/blog/2023-03-30-vicuna/.
To use this model, you first need to obtain the LLaMA weights and convert them into Hugging Face format. The cost of training Vicuna-13B was around $300.
-
OpenAI
Whisper
$0.006
Whisper is an automatic speech recognition (ASR) system capable of transcribing speech in multiple languages as well as translating it into English. With Whisper, you can easily transcribe speech into text, allowing you to capture conversations and meetings for future reference. And if you need to communicate with someone who speaks a different language, Whisper can help with that too: it can translate many different languages into English, making it easier than ever to bridge the gap and ensure that everyone is on the same page.
Whisper is a general-purpose speech recognition model. It is trained on a large dataset of diverse audio and is also a multitasking model that can perform multilingual speech recognition, speech translation, and language identification. The speech-to-text API has two endpoints (transcriptions and translations). File uploads are currently limited to 25 MB, and the following input file types are supported: mp3, mp4, mpeg, mpga, m4a, wav, and webm.
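The upload constraints above can be checked client-side before calling either endpoint. A minimal sketch: the extension list and 25 MB cap come from the text, while the helper function itself (and the assumption that 25 MB means 25 × 1024 × 1024 bytes) is hypothetical.

```python
import os

# Input file types the speech-to-text API accepts, per the list above.
SUPPORTED_EXTENSIONS = {"mp3", "mp4", "mpeg", "mpga", "m4a", "wav", "webm"}
# Assumption: the documented 25 MB limit is interpreted as binary megabytes.
MAX_UPLOAD_BYTES = 25 * 1024 * 1024

def validate_audio_file(path: str, size_bytes: int) -> bool:
    """Return True if the file should be acceptable for a Whisper upload."""
    ext = os.path.splitext(path)[1].lstrip(".").lower()
    return ext in SUPPORTED_EXTENSIONS and size_bytes <= MAX_UPLOAD_BYTES

# validate_audio_file("meeting.wav", 10 * 1024 * 1024) -> True
# validate_audio_file("slides.pdf", 1024)              -> False (unsupported type)
```

Validating locally avoids a round trip to the API for files that would be rejected anyway.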