
Compare Models

  • Microsoft

    Azure OpenAI Service

    OTHER
    Microsoft’s Azure OpenAI Service allows you to take advantage of large-scale generative AI models, with a deep understanding of language and code, to enable new reasoning and comprehension capabilities for building cutting-edge applications. Apply these coding and language models to a variety of use cases, such as writing assistance, code generation, and reasoning over data. Detect and mitigate harmful use with built-in responsible AI, and access enterprise-grade Azure security. GPT-4 is available in preview in the Azure OpenAI Service; GPT-4 8K and 32K instances are billed per 1,000 tokens, and their pricing can be found under those models on the compare site. Note that Azure OpenAI Service customers can also access GPT-3.5, ChatGPT, and DALL·E.
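    As a rough sketch, a developer might call a GPT-4 deployment through the Azure OpenAI REST API as follows; the resource name, deployment name, and API version below are placeholders, not values from this page:

```python
import os

def chat_completions_url(resource: str, deployment: str,
                         api_version: str = "2023-05-15") -> str:
    """Build the chat-completions endpoint for one Azure OpenAI deployment."""
    return (f"https://{resource}.openai.azure.com/openai/deployments/"
            f"{deployment}/chat/completions?api-version={api_version}")

def ask(resource: str, deployment: str, prompt: str) -> str:
    """Send a single-turn chat request; the key is read from the environment."""
    import requests  # third-party dependency, imported where used

    resp = requests.post(
        chat_completions_url(resource, deployment),
        headers={"api-key": os.environ["AZURE_OPENAI_KEY"]},
        json={"messages": [{"role": "user", "content": prompt}]},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

# Example (requires a provisioned Azure OpenAI resource and GPT-4 deployment):
# print(ask("my-resource", "my-gpt4-deployment", "Summarize this clause ..."))
print(chat_completions_url("my-resource", "my-gpt4-deployment"))
```

    Because billing is per 1,000 tokens, keeping prompts and `messages` history short directly reduces cost.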
  • Microsoft

    Bing Search APIs

    OTHER
    Microsoft’s Bing AI search engine is powered by GPT-4, and Microsoft claims the new model is faster and more accurate than ever. The Bing Search APIs provide a variety of APIs with trained models for your use, adding intelligent search to your app by combining hundreds of billions of webpages, images, videos, and news items to provide relevant results without ads. Results can be automatically customized to your users’ locations or markets, increasing relevancy by staying local. Pricing for the Bing Search APIs varies by feature; customers interested in more flexible terms for presenting Bing API results with their models can check the website for prices per 1,000 transactions.
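    As a hedged sketch of wiring the Web Search endpoint into an app (the subscription key and market values below are placeholders), localization to a user’s market is controlled by the `mkt` query parameter:

```python
import os

BING_ENDPOINT = "https://api.bing.microsoft.com/v7.0/search"

def search_params(query: str, market: str = "en-US", count: int = 10) -> dict:
    """Query parameters; `mkt` localizes results to the user's market."""
    return {"q": query, "mkt": market, "count": count}

def web_search(query: str, market: str = "en-US") -> list:
    """Return web-page results for a query (requires a subscription key)."""
    import requests  # third-party dependency, imported where used

    resp = requests.get(
        BING_ENDPOINT,
        headers={"Ocp-Apim-Subscription-Key": os.environ["BING_SEARCH_KEY"]},
        params=search_params(query, market),
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("webPages", {}).get("value", [])

# Localized German-market search (not run here without a key):
# for page in web_search("große Sprachmodelle", market="de-DE"):
#     print(page["name"], page["url"])
print(search_params("falcon llm", market="de-DE"))
```

    Each request counts as one transaction against the per-1,000-transaction pricing mentioned above.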
  • Deepmind

    Chinchilla AI

    OTHER

    Google DeepMind’s Chinchilla AI is still in the testing phase. Once released, Chinchilla AI will be useful for developing various artificial intelligence tools, such as chatbots, virtual assistants, and predictive models. It functions in a manner analogous to other large language models such as GPT-3 (175B parameters), Jurassic-1 (178B parameters), Gopher (280B parameters), and Megatron-Turing NLG (530B parameters). Because Chinchilla is smaller (70B parameters), inference and fine-tuning cost less, easing the use of such models for smaller companies or universities that may not have the budget or hardware to run larger models.
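    The cost advantage follows from the Chinchilla scaling analysis, which found that compute-optimal training uses roughly 20 tokens per parameter, so a smaller model trained on more data can match much larger models. A rough sketch of that rule of thumb (the 20:1 ratio is an approximation from the published analysis):

```python
def chinchilla_optimal_tokens(n_params: float) -> float:
    """Rule of thumb from the Chinchilla scaling analysis:
    compute-optimal training uses roughly 20 tokens per parameter."""
    return 20 * n_params

# Chinchilla itself: 70B parameters trained on ~1.4T tokens.
for name, params in [("Chinchilla", 70e9), ("GPT-3", 175e9), ("Gopher", 280e9)]:
    print(f"{name}: ~{chinchilla_optimal_tokens(params) / 1e12:.1f}T optimal tokens")
```

    By this estimate, the larger models above were trained on far fewer tokens than would be compute-optimal, which is why the smaller, data-rich Chinchilla keeps pace with them.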

  • Technology Innovation Institute

    Falcon-40B

    OTHER
    The Technology Innovation Institute (TII), an Abu Dhabi government-funded research institution, has introduced Falcon, a state-of-the-art autoregressive decoder-only language model series released under the Apache 2.0 license, which means it can be used for commercial and research purposes.
    The family includes Falcon-40B and Falcon-7B, trained on 1 trillion tokens, mainly (>80%) from the RefinedWeb dataset. A special variant, Falcon-40B-Instruct, has been made available and may be more suitable for assistant-style tasks. Falcon-40B supports English, German, Spanish, and French (with limited capabilities in Italian, Portuguese, Polish, Dutch, Romanian, Czech, and Swedish). It can be used to generate creative text and solve complex problems, and to power chatbots, virtual assistants, language translation, content generation, sentiment analysis, and more.

    To use these models, PyTorch 2.0 is required. TII is now calling for proposals from users worldwide to submit their most creative ideas for Falcon-40B’s deployment (https://falconllm.tii.ae/call-for-proposal.php), or you can pay to access it via Amazon SageMaker JumpStart.
    A demo of Falcon-Chat is available on Hugging Face at https://huggingface.co/spaces/HuggingFaceH4/falcon-chat.

  • Technology Innovation Institute

    Falcon-7B

    FREE

    The Technology Innovation Institute (TII), an Abu Dhabi government-funded research institution, has introduced Falcon, a state-of-the-art autoregressive decoder-only language model series released under the Apache 2.0 license, which means it can be used for commercial and research purposes. Falcon-7B only needs ~15 GB of memory and is therefore accessible even on consumer hardware. The model supports English, German, Spanish, and French (with limited capabilities in Italian, Portuguese, Polish, Dutch, Romanian, Czech, and Swedish). It can be used to generate creative text and solve complex problems, and to power chatbots, customer service operations, virtual assistants, language translation, content generation, and sentiment analysis.

    This raw pretrained model should be fine-tuned for specific use cases; Falcon-7B-Instruct is also available at https://huggingface.co/tiiuae/falcon-7b-instruct.
    If you are looking for a model better suited to generic instructions in a chat format, Falcon-7B-Instruct is recommended over the base model.
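    A minimal sketch of running Falcon-7B-Instruct locally with Hugging Face transformers (this assumes torch, transformers, and accelerate are installed; the weights download on first use, and the sampling settings are illustrative):

```python
def model_memory_gb(n_params: float, bytes_per_param: int = 2) -> float:
    """Approximate memory for model weights (bfloat16 = 2 bytes/parameter)."""
    return n_params * bytes_per_param / 1e9

def generate(prompt: str, max_new_tokens: int = 60) -> str:
    """Run Falcon-7B-Instruct locally (downloads ~15 GB of weights)."""
    import torch  # heavy dependencies imported where used
    from transformers import AutoTokenizer, pipeline

    model_id = "tiiuae/falcon-7b-instruct"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    pipe = pipeline(
        "text-generation",
        model=model_id,
        tokenizer=tokenizer,
        torch_dtype=torch.bfloat16,
        trust_remote_code=True,   # Falcon ships custom modelling code
        device_map="auto",
    )
    out = pipe(prompt, max_new_tokens=max_new_tokens, do_sample=True, top_k=10)
    return out[0]["generated_text"]

# Falcon-7B in bfloat16 needs ~14 GB just for weights, which is where the
# ~15 GB figure above comes from once runtime overhead is included.
print(model_memory_gb(7e9))
# print(generate("Write a short poem about falcons."))
```

    The same pattern applies to Falcon-40B, but at ~80 GB of weights in bfloat16 it needs datacenter-class hardware rather than a consumer GPU.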

  • Microsoft, NVIDIA

    MT-NLG

    OTHER
    MT-NLG (Megatron-Turing Natural Language Generation) uses the architecture of the transformer-based Megatron to generate coherent and contextually relevant text for a range of tasks, including completion prediction, reading comprehension, commonsense reasoning, natural language inference, and word-sense disambiguation. MT-NLG is the successor to Microsoft Turing NLG 17B and NVIDIA Megatron-LM 8.3B. The MT-NLG model is three times larger than GPT-3 (530B vs. 175B parameters). Following the original Megatron work, NVIDIA and Microsoft trained the model on over 4,000 GPUs. NVIDIA has announced an Early Access program for its managed API service to the MT-NLG model for organizations and researchers.
  • RedPajama

    RedPajama-INCITE-7B-Instruct

    FREE
    The RedPajama project aims to create a set of leading open-source models. RedPajama-INCITE-7B-Instruct was developed by Together and leaders from the open-source AI community. The model is the top-performing open-source entry on the HELM benchmarks, surpassing other cutting-edge open models such as LLaMA-7B, Falcon-7B, and MPT-7B. The instruct-tuned model is designed for versatility and shines at few-shot tasks.


    The Instruct, Chat, and Base models, along with ten interim checkpoints, are now available on Hugging Face, and all the RedPajama LLMs come with commercial licenses under Apache 2.0.


    Play with the RedPajama chat model version here – https://lnkd.in/g3npSEbg
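    To illustrate the few-shot usage, here is a sketch that assembles a plain few-shot prompt; the sentiment-labelling task and the `Review:`/`Sentiment:` formatting are illustrative choices, not an official prompt format for the model:

```python
def few_shot_prompt(examples: list, query: str) -> str:
    """Assemble a plain few-shot prompt: labelled examples, then the new input."""
    blocks = [f"Review: {text}\nSentiment: {label}" for text, label in examples]
    blocks.append(f"Review: {query}\nSentiment:")  # model completes the label
    return "\n\n".join(blocks)

prompt = few_shot_prompt(
    [("Great battery life.", "positive"), ("Broke after a week.", "negative")],
    "Exactly what I hoped for.",
)
print(prompt)

# Feed the prompt to the instruct model, e.g. with transformers (not run here):
# from transformers import pipeline
# pipe = pipeline("text-generation",
#                 model="togethercomputer/RedPajama-INCITE-7B-Instruct")
# print(pipe(prompt, max_new_tokens=3)[0]["generated_text"])
```

    The prompt ends mid-pattern at `Sentiment:`, so the model's most likely continuation is the label itself; this is the few-shot setting the HELM results above measure.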
  • StableLM

StableLM-Base-Alpha-7B

    FREE

    Stability AI has released a new open-source language model, StableLM. The Alpha version of the model is available in 3-billion- and 7-billion-parameter sizes. StableLM is trained on a new experimental dataset built on The Pile, but three times larger, with 1.5 trillion tokens of content. The richness of this dataset gives StableLM surprisingly high performance in conversational and coding tasks, despite its small size. The models are now available on GitHub and on Hugging Face, and developers can freely inspect, use, and adapt the StableLM base models for commercial or research purposes, subject to the terms of the CC BY-SA-4.0 license.

  • TruthGPT

    TruthGPT

OTHER
    TruthGPT is a large language model (LLM), and according to Elon Musk, TruthGPT will be a “maximum truth-seeking” AI. In terms of how it works, it filters through thousands of datasets and draws educated conclusions to provide answers that are as unbiased as possible. TruthGPT is powered by $TRUTH, a tradable cryptocurrency on the Binance Smart Chain. $TRUTH holders will soon access additional benefits when using TruthGPT AI. When we learn more, we will update this section.
  • Microsoft

    VALL-E

    OTHER
    VALL-E is an LLM for text-to-speech (TTS) synthesis developed by Microsoft (technically, it is a neural codec language model). Its creators state that VALL-E could be used for high-quality text-to-speech applications; for speech editing, where a recording of a person could be edited and changed from a text transcript (making them say something they originally didn’t); and for audio content creation when combined with other generative AI models. Studies indicate that VALL-E notably surpasses the leading zero-shot TTS system in speech authenticity and resemblance to the speaker. Furthermore, VALL-E is capable of retaining the speaker’s emotional expression and ambient acoustics within the synthesized output. Unfortunately, VALL-E is not available for any form of public consumption at this time. At the time of writing, VALL-E is a research project, and there is no customer onboarding queue or waitlist (but you can apply to be part of the first testers group).
