
Compare Models

  • Microsoft, NVIDIA

    MT-NLG

    OTHER
    MT-NLG (Megatron-Turing Natural Language Generation) uses the architecture of the transformer-based Megatron to generate coherent and contextually relevant text for a range of tasks, including completion prediction, reading comprehension, commonsense reasoning, natural language inference, and word sense disambiguation. MT-NLG is the successor to Microsoft's Turing NLG 17B and NVIDIA's Megatron-LM 8.3B. At 530 billion parameters, the model is roughly three times larger than GPT-3's 175 billion. Following the original Megatron work, NVIDIA and Microsoft trained the model on over 4,000 GPUs. NVIDIA has announced an Early Access program for a managed API service to the MT-NLG model for organizations and researchers.
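Training at this scale relies on Megatron-style tensor (model) parallelism: each weight matrix is split across GPUs, every device computes its own slice of the output, and the slices are combined. A minimal pure-Python sketch of the column-parallel idea (the shapes and shard count are illustrative, not MT-NLG's actual configuration):

```python
# Column-parallel linear layer, the core trick in Megatron-style tensor
# parallelism: each "device" holds a vertical slice of the weight matrix,
# computes its slice of the output, and the slices are concatenated
# (an all-gather on real hardware).

def matmul(x, w):
    """Plain matrix multiply: x is (n, k), w is (k, m)."""
    return [[sum(x[i][t] * w[t][j] for t in range(len(w)))
             for j in range(len(w[0]))] for i in range(len(x))]

def split_columns(w, shards):
    """Split weight matrix w into `shards` vertical slices."""
    step = len(w[0]) // shards
    return [[row[s * step:(s + 1) * step] for row in w]
            for s in range(shards)]

def column_parallel_matmul(x, w, shards):
    """Each shard computes x @ w_shard; shard outputs are concatenated."""
    partials = [matmul(x, w_s) for w_s in split_columns(w, shards)]
    return [sum((p[i] for p in partials), []) for i in range(len(x))]

# Sharded and unsharded results agree.
x = [[1, 2]]                        # one input row, (1, 2)
w = [[1, 0, 2, 0], [0, 1, 0, 2]]    # weight matrix, (2, 4)
assert column_parallel_matmul(x, w, shards=2) == matmul(x, w)
```

On real hardware the benefit is that no single GPU ever has to hold the full weight matrix, which is what makes 530B-parameter training feasible.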
  • NVIDIA

    NeMo

    FREE
    NVIDIA NeMo, part of the NVIDIA AI platform, is an end-to-end, cloud-native enterprise framework for building, customizing, and deploying generative AI models. NeMo makes generative AI model development easy, cost-effective, and fast for enterprises. It has separate collections for Automatic Speech Recognition (ASR), Natural Language Processing (NLP), and Text-to-Speech (TTS) models, and each collection consists of prebuilt modules that include everything needed to train on your data. The NeMo framework supports both language and image generative AI models: currently, the language workflow is in open beta and the image workflow is in early access, and you must be a member of the NVIDIA Developer Program and logged in with your organization's email address to access them. NeMo is licensed under the Apache License 2.0, a permissive open-source license that allows commercial use.
  • StableLM

    StableLM-Base-Alpha-7B

    FREE

    Stability AI released a new open-source language model, StableLM. The Alpha version of the model is available in 3-billion- and 7-billion-parameter sizes. StableLM is trained on a new experimental dataset built on The Pile but three times larger, with 1.5 trillion tokens of content. The richness of this dataset gives StableLM surprisingly high performance in conversational and coding tasks despite its small size. The models are now available on GitHub and Hugging Face, and developers can freely inspect, use, and adapt the StableLM base models for commercial or research purposes, subject to the terms of the CC BY-SA-4.0 license.

  • OpenAI

    text-davinci-003

    $0.02
    Text-davinci-003, commonly grouped under GPT-3.5, is a variant of the GPT-3 model. While both Davinci and text-davinci-003 are powerful models, they differ in a few key ways. Text-davinci-003 is a newer, more capable model explicitly designed for instruction-following tasks, and it was trained on a more recent dataset containing data up to June 2021. It can perform any language task with better quality, longer output, and more consistent instruction-following than the Curie, Babbage, or Ada models, and it supports a longer context window (maximum prompt plus completion length) than Davinci.
    For those accessing OpenAI's API, GPT-3.5-turbo may be a better choice than text-davinci-003 for tasks that require high accuracy in math, zero-shot classification, or sentiment analysis. Notably, GPT-3.5-turbo performs at a similar capability to text-davinci-003 but at 10 percent of the price per token, and OpenAI recommends it for most use cases.
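The two models are reached through different routes of the same API: text-davinci-003 uses the legacy completions endpoint, while gpt-3.5-turbo uses the chat-completions endpoint with a message list. A minimal sketch that only builds the request payloads (the prompt is illustrative, and an API key would be needed to actually POST them):

```python
import json

API_BASE = "https://api.openai.com/v1"

def completion_request(prompt, model="text-davinci-003", max_tokens=64):
    """Legacy completions payload, used by text-davinci-003."""
    return (f"{API_BASE}/completions",
            {"model": model, "prompt": prompt, "max_tokens": max_tokens})

def chat_request(prompt, model="gpt-3.5-turbo", max_tokens=64):
    """Chat-completions payload, used by gpt-3.5-turbo."""
    return (f"{API_BASE}/chat/completions",
            {"model": model,
             "messages": [{"role": "user", "content": prompt}],
             "max_tokens": max_tokens})

url, body = chat_request("Classify the sentiment: 'Great service!'")
print(url)
print(json.dumps(body, indent=2))
```

Switching between the two is therefore mostly a matter of changing the route and wrapping the prompt in a `messages` list.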

  • OpenAI

    text-embedding-ada-002

    $0.0001
    An embedding model such as Ada converts text into numerical vector representations, enabling computers to understand and process natural language more effectively. This is crucial for developing machine learning algorithms and AI systems that can interact with humans, analyze text, or make predictions based on text. OpenAI's text embeddings are built for advanced search, clustering, topic modeling, and classification functionality.
    Access is available through a request to OpenAI’s API.
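Downstream tasks like search and clustering typically compare embedding vectors by cosine similarity. A small sketch of that comparison, using toy 3-dimensional vectors as stand-ins for the real 1536-dimensional vectors the Ada model returns:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between vectors a and b (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for real embeddings of a query and two documents.
query = [0.1, 0.9, 0.2]
docs = {"cats": [0.12, 0.88, 0.18], "cars": [0.9, 0.1, 0.3]}

# Semantic search: rank documents by similarity to the query vector.
best = max(docs, key=lambda name: cosine_similarity(query, docs[name]))
print(best)  # → cats
```

In a real pipeline the vectors would come from the embeddings endpoint rather than being hard-coded, but the ranking step is the same.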

  • Microsoft

    VALL-E

    OTHER
    VALL-E is a language model for text-to-speech (TTS) synthesis developed by Microsoft (technically, it is a neural codec language model). Its creators state that VALL-E could be used for high-quality text-to-speech applications; for speech editing, where a recording of a person could be altered from a text transcript (making them say something they originally didn't); and for audio content creation when combined with other generative AI models. Studies indicate that VALL-E notably surpasses the leading zero-shot TTS systems in speech naturalness and speaker similarity, and it has been observed to preserve the speaker's emotional expression and the ambient acoustics of the recording in the synthesized output. Unfortunately, VALL-E is not available for any form of public consumption at this time. At the time of writing, VALL-E is a research project, and there is no customer onboarding queue or waitlist (though you can apply to be part of the first testers group).
  • OpenAI

    Whisper

    $0.006

    Whisper is an automatic speech recognition (ASR) system capable of transcribing in multiple languages as well as translating them into English. With Whisper, you can easily transcribe speech into text, allowing you to capture conversations and meetings for future reference. And if you need to communicate with someone who speaks a different language, Whisper can help with that too: it can translate many different languages into English, making it easier than ever to bridge the gap and ensure that everyone is on the same page.

    Whisper is a general-purpose speech recognition model. It is trained on a large dataset of diverse audio and is also a multitasking model that can perform multilingual speech recognition, speech translation, and language identification. The speech-to-text API has two endpoints (transcriptions and translations). File uploads are currently limited to 25 MB, and the following input file types are supported: mp3, mp4, mpeg, mpga, m4a, wav, and webm.
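Before uploading, it is worth checking a file against the API's documented limits. A small helper based on the 25 MB cap and the supported extensions listed above (the function name and error wording are illustrative):

```python
import os

# Limits documented for the Whisper speech-to-text API.
SUPPORTED_TYPES = {"mp3", "mp4", "mpeg", "mpga", "m4a", "wav", "webm"}
MAX_BYTES = 25 * 1024 * 1024  # 25 MB upload cap

def check_audio_file(path, size_bytes=None):
    """Return a list of problems that would make the upload fail."""
    problems = []
    ext = os.path.splitext(path)[1].lstrip(".").lower()
    if ext not in SUPPORTED_TYPES:
        problems.append(f"unsupported file type: .{ext}")
    if size_bytes is None and os.path.exists(path):
        size_bytes = os.path.getsize(path)
    if size_bytes is not None and size_bytes > MAX_BYTES:
        problems.append(f"file too large: {size_bytes} bytes > {MAX_BYTES}")
    return problems

print(check_audio_file("meeting.wav", size_bytes=3_000_000))  # → []
```

Files over the cap would need to be split or re-encoded into one of the supported formats before being sent to the transcriptions or translations endpoint.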
