
Compare Models

  • Technology Innovation Institute

    Falcon-40B

    OTHER
    The Technology Innovation Institute (TII), an Abu Dhabi government-funded research institution, has introduced Falcon, a state-of-the-art autoregressive decoder-only language model series released under the Apache 2.0 license, which permits both commercial and research use.
    The family includes Falcon-40B and Falcon-7B, trained on 1 trillion tokens drawn mainly (>80%) from the RefinedWeb dataset. A special variant, Falcon-40B-Instruct, is also available and may be more suitable for assistant-style tasks. Falcon-40B supports English, German, Spanish, and French, with limited capabilities in Italian, Portuguese, Polish, Dutch, Romanian, Czech, and Swedish. It can be used for creative text generation, complex problem solving, chatbots, virtual assistants, language translation, content generation, sentiment analysis, and more.

    PyTorch 2.0 is required to use these models. TII is calling for proposals from users worldwide to submit their most creative ideas for deploying Falcon-40B (https://falconllm.tii.ae/call-for-proposal.php), or you can pay to access the model via Amazon SageMaker JumpStart.
    A demo of Falcon-Chat is available on Hugging Face at https://huggingface.co/spaces/HuggingFaceH4/falcon-chat.
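    A minimal sketch of loading Falcon-40B with Hugging Face transformers is shown below. It assumes the tiiuae/falcon-40b checkpoint, a PyTorch 2.0 environment, and enough GPU memory for the bfloat16 weights (roughly 80+ GB); adjust the dtype or use quantization on smaller setups.

        # Hedged sketch: text generation with Falcon-40B via transformers.
        import torch
        from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

        model_id = "tiiuae/falcon-40b"
        tokenizer = AutoTokenizer.from_pretrained(model_id)
        model = AutoModelForCausalLM.from_pretrained(
            model_id,
            torch_dtype=torch.bfloat16,   # PyTorch 2.0+ environment assumed
            trust_remote_code=True,       # Falcon ships custom modelling code
            device_map="auto",            # spread layers across available GPUs
        )

        generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
        result = generator(
            "Write a short product description for a solar-powered lamp:",
            max_new_tokens=100, do_sample=True, top_k=10,
        )
        print(result[0]["generated_text"])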

  • Technology Innovation Institute

    Falcon-7B

    FREE

    The Technology Innovation Institute (TII), an Abu Dhabi government-funded research institution, has introduced Falcon, a state-of-the-art autoregressive decoder-only language model series released under the Apache 2.0 license, which permits both commercial and research use. Falcon-7B needs only ~15 GB of memory, making it accessible even on consumer hardware. The model supports English, German, Spanish, and French, with limited capabilities in Italian, Portuguese, Polish, Dutch, Romanian, Czech, and Swedish. It can be used for creative text generation, complex problem solving, chatbots, customer service operations, virtual assistants, language translation, content generation, and sentiment analysis.

    This raw pretrained model should be fine-tuned for specific use cases. Falcon-7B-Instruct is also available at https://huggingface.co/tiiuae/falcon-7b-instruct.
    If you are looking for a model better suited to taking generic instructions in a chat format, Falcon-7B-Instruct is recommended over the base model.
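    The following is a hedged sketch of running Falcon-7B-Instruct (the chat-tuned variant recommended above) through the transformers pipeline API; the ~15 GB figure refers to bfloat16 weights, and 8-bit or 4-bit quantization can reduce it further.

        # Hedged sketch: chat-style generation with Falcon-7B-Instruct.
        import torch
        from transformers import pipeline

        generator = pipeline(
            "text-generation",
            model="tiiuae/falcon-7b-instruct",
            torch_dtype=torch.bfloat16,
            trust_remote_code=True,
            device_map="auto",
        )

        prompt = "Explain the difference between a chatbot and a virtual assistant."
        output = generator(prompt, max_new_tokens=120, do_sample=True, temperature=0.7)
        print(output[0]["generated_text"])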

  • Google

    FLAN-T5

    FREE
    If you already know T5, FLAN-T5 is just better at everything. For the same number of parameters, these models have been fine-tuned on more than 1,000 additional tasks covering more languages, including English, German, and French. It is released under the Apache 2.0 license, a permissive open source license that allows commercial use. With appropriate prompting, it can perform zero-shot NLP tasks such as text summarization, common sense reasoning, natural language inference, question answering, sentence and sentiment classification, translation, and pronoun resolution.
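    As a concrete example of zero-shot prompting, the sketch below loads one of the smaller checkpoints (google/flan-t5-base is assumed here; small, base, large, xl, and xxl sizes exist) and frames a translation task as an instruction.

        # Hedged sketch: zero-shot prompting with FLAN-T5 via transformers.
        from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

        model_id = "google/flan-t5-base"
        tokenizer = AutoTokenizer.from_pretrained(model_id)
        model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

        # The task is expressed entirely in the prompt; no fine-tuning needed.
        prompt = "Translate English to German: The weather is nice today."
        inputs = tokenizer(prompt, return_tensors="pt")
        outputs = model.generate(**inputs, max_new_tokens=40)
        print(tokenizer.decode(outputs[0], skip_special_tokens=True))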
  • Google

    Flan-UL2

    FREE
    Developed by Google, Flan-UL2 is a more powerful version of the T5 model that has been trained using Flan, and it is downloadable from Hugging Face. It outperforms prior versions of Flan-T5. With the ability to reason for itself and generalize better than the previous models, Flan-UL2 is a great improvement. Flan-UL2 is a machine learning model that can generate textual descriptions of images and has the potential to be used for image search, video captioning, automated content generation, and visual question answering. Flan-UL2 has an Apache 2.0 license, which is a permissive open source license that allows for commercial use.
    If Flan-UL2’s 20B parameters are too much, consider the previous iteration of Flan-T5, which comes in five different sizes and might be more suitable for your needs.
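    The sketch below assumes the google/flan-ul2 checkpoint on Hugging Face; at 20B parameters the bfloat16 weights are roughly 40 GB, so device_map="auto" (or 8-bit loading) is assumed for hardware that cannot hold the model on a single GPU.

        # Hedged sketch: running Flan-UL2 downloaded from Hugging Face.
        import torch
        from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

        model_id = "google/flan-ul2"
        tokenizer = AutoTokenizer.from_pretrained(model_id)
        model = AutoModelForSeq2SeqLM.from_pretrained(
            model_id,
            torch_dtype=torch.bfloat16,
            device_map="auto",
        )

        prompt = "Answer step by step: A shop sells pens at 3 for $2. How much do 12 pens cost?"
        inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
        outputs = model.generate(**inputs, max_new_tokens=64)
        print(tokenizer.decode(outputs[0], skip_special_tokens=True))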
  • Google

    LaMDA

    OTHER
    LaMDA stands for Language Model for Dialogue Application. It is a conversational Large Language Model (LLM) built by Google as an underlying technology to power dialogue-based applications that can generate natural-sounding human language. LaMDA is built by fine-tuning a family of Transformer-based neural language models specialized for dialog and teaching the models to leverage external knowledge sources. The potential use cases for LaMDA are diverse, ranging from customer service and chatbots to personal assistants and beyond. LaMDA is not open source; currently, there are no APIs or downloads. However, Google is working on making LaMDA more accessible to researchers and developers. In the future, it is likely that LaMDA will be released as an open source project, and that APIs and downloads will be made available.
  • NVIDIA

    LaunchPad

    FREE
    NVIDIA LaunchPad provides free access to enterprise NVIDIA hardware and software through an internet browser. NVIDIA customers can experience the power of AI with end-to-end solutions through guided hands-on labs or use NVIDIA-Certified Systems as a sandbox, but you need to fill out an Application Form and wait for approval. Sample labs include training and deploying a support chatbot, deploying an end-to-end AI workload, configuring and deploying a language model on the hardware accelerator, and deploying a fraud detection model.

     

    *FREE via Application Form
  • Microsoft, NVIDIA

    MT-NLG

    OTHER
    MT-NLG (Megatron-Turing Natural Language Generation) uses the architecture of the transformer-based Megatron to generate coherent and contextually relevant text for a range of tasks, including completion prediction, reading comprehension, commonsense reasoning, natural language inference, and word sense disambiguation. MT-NLG is the successor to Microsoft Turing NLG 17B and NVIDIA Megatron-LM 8.3B. The MT-NLG model is three times larger than GPT-3 (530B vs. 175B parameters). Following the original Megatron work, NVIDIA and Microsoft trained the model on over 4,000 GPUs. NVIDIA has announced an Early Access program for its managed API service to the MT-NLG model for organizations and researchers.
  • NVIDIA

    NeMo

    FREE
    NVIDIA NeMo, part of the NVIDIA AI platform, is an end-to-end, cloud-native enterprise framework to help build, customize, and deploy generative AI models. NeMo makes generative AI model development easy, cost-effective and fast for enterprises. NeMo has separate collections for Automatic Speech Recognition (ASR), Natural Language Processing (NLP), and Text-to-Speech (TTS) models. Each collection consists of prebuilt modules that include everything needed to train on your data. NeMo framework supports both language and image generative AI models. Currently, the workflow for language is in open beta, and the workflow for images is in early access. You must be a member of the NVIDIA Developer Program and logged in with your organization’s email address to access it. It is licensed under the Apache License 2.0, which is a permissive open source license that allows for commercial use.
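    As an illustration of the collection-based API, the sketch below loads a pretrained model from the ASR collection and transcribes a local audio file; the checkpoint name and file path are illustrative, and a working PyTorch install plus the nemo_toolkit package are assumed.

        # Hedged sketch: speech recognition with a pretrained model from
        # NeMo's ASR collection.
        import nemo.collections.asr as nemo_asr

        # Pull a pretrained English CTC model from NVIDIA's model catalogue.
        asr_model = nemo_asr.models.EncDecCTCModel.from_pretrained(
            model_name="QuartzNet15x5Base-En"
        )

        # Transcribe a local 16 kHz mono WAV file (hypothetical path).
        transcripts = asr_model.transcribe(["sample_audio.wav"])
        print(transcripts[0])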
  • Google

    PaLM 2 chat-bison-001

    $0.0021535
    PaLM 2 has just launched (May 2023) and is Google’s next-generation Large Language Model, built on Google’s Pathways AI architecture. PaLM 2 was trained on a massive dataset of text and code, and it can handle many different tasks and learn new ones quickly. It is seen as a direct competitor to OpenAI’s GPT-4 model. It excels at advanced reasoning tasks, including code and math, classification and question answering, translation and multilingual proficiency (100 languages), and natural language generation, outperforming Google’s previous state-of-the-art LLMs, including its predecessor PaLM.
    PaLM 2 is the underlying model driving the PaLM API, which can be accessed through Google’s Generative AI Studio. PaLM 2 has four submodels with different sizes. Bison is the best value in terms of capability and cost, and chat-bison-001 has been fine-tuned for multi-turn conversation use cases. If you want to see PaLM 2’s capabilities, the simplest way is through Google Bard (PaLM 2 is the technology that powers Google Bard).
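    A hedged sketch of calling chat-bison-001 through the PaLM API with the google-generativeai Python client is shown below; the API key is a placeholder, and access requires enrolment through Google’s Generative AI Studio.

        # Hedged sketch: multi-turn chat with chat-bison-001 via the PaLM API.
        import google.generativeai as palm

        palm.configure(api_key="YOUR_PALM_API_KEY")  # placeholder key

        response = palm.chat(
            model="models/chat-bison-001",
            messages=["Suggest three names for a travel-planning chatbot."],
            temperature=0.7,
        )
        print(response.last)  # latest model reply

        # Continue the conversation; chat-bison-001 is tuned for multi-turn use.
        response = response.reply("Make them more playful.")
        print(response.last)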

     

    Watch Paige Bailey introducing PaLM 2: view here

  • Google

    PaLM 2 text-bison-001

    $0.004
    PaLM 2 has just launched (May 2023) and is Google’s next-generation Large Language Model, built on Google’s Pathways AI architecture. PaLM 2 was trained on a massive dataset of text and code, and it can handle many different tasks and learn new ones quickly. It is seen as a direct competitor to OpenAI’s GPT-4 model. It excels at advanced reasoning tasks, including code and math, classification, question answering, translation and multilingual proficiency (100 languages), and natural language generation, outperforming Google’s previous state-of-the-art LLMs, including its predecessor PaLM.

     

    PaLM 2 is the underlying model driving the PaLM API that can be accessed through Google’s Generative AI Studio. PaLM 2 has four submodels with different sizes. Bison is the best value in terms of capability and cost, and text-bison-001 can be fine-tuned to follow natural language instructions and is suitable for various language tasks such as classification, sentiment analysis, entity extraction, extractive question answering, summarization, re-writing text in a different style, and concept ideation.
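    The sketch below shows single-shot text generation with text-bison-001 through the same google-generativeai client, here framed as a sentiment-classification instruction; the API key is a placeholder.

        # Hedged sketch: instruction-following completion with text-bison-001.
        import google.generativeai as palm

        palm.configure(api_key="YOUR_PALM_API_KEY")  # placeholder key

        completion = palm.generate_text(
            model="models/text-bison-001",
            prompt=(
                "Classify the sentiment of this review as positive, negative, "
                "or neutral: 'The battery life is great but the screen scratches easily.'"
            ),
            temperature=0.2,
            max_output_tokens=64,
        )
        print(completion.result)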

     

    If you want to see PaLM 2 capabilities, the simplest way to use it is through Google Bard (PaLM 2 is the technology that powers Google Bard).

     

    Watch Paige Bailey introducing PaLM 2: view here

  • Google

    PaLM 2 textembedding-gecko-001

    $0.0004
    PaLM 2 has just launched (May 2023) and is Google’s next-generation Large Language Model, built on Google’s Pathways AI architecture. PaLM 2 was trained on a massive dataset of text and code, and it can handle many different tasks and learn new ones quickly. It is seen as a direct competitor to OpenAI’s GPT-4 model. It excels at advanced reasoning tasks, including code and math, classification and question answering, translation and multilingual proficiency (100 languages), and natural language generation, outperforming Google’s previous state-of-the-art LLMs, including its predecessor PaLM.
    PaLM 2 is the underlying model driving the PaLM API that can be accessed through Google’s Generative AI Studio. PaLM 2 has four submodels of different sizes: Unicorn (the largest), Bison, Otter, and Gecko (the smallest); the different sizes allow PaLM 2 to run efficiently across different tasks. Gecko is the smallest and cheapest model, intended for simple tasks, and textembedding-gecko-001 returns model embeddings for text inputs.
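    The sketch below requests an embedding from the Gecko submodel via the google-generativeai client; note that the model ID shown ("models/embedding-gecko-001") is the PaLM API naming and is an assumption here, since Vertex AI exposes the same capability under the textembedding-gecko name.

        # Hedged sketch: text embeddings with the Gecko submodel.
        import google.generativeai as palm

        palm.configure(api_key="YOUR_PALM_API_KEY")  # placeholder key

        result = palm.generate_embeddings(
            model="models/embedding-gecko-001",
            text="Falcon, FLAN-T5 and PaLM 2 are large language models.",
        )
        embedding = result["embedding"]  # list of floats
        print(len(embedding))            # dimensionality of the embedding vector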
    If you want to see PaLM 2 capabilities, the simplest way to use it is through Google Bard (PaLM 2 is the technology that powers Google Bard).

     

    Watch Paige Bailey introducing PaLM 2: view here

  • Stability AI

    StableLM-Base-Alpha-7B

    FREE

    Stability AI released a new open-source language model, StableLM. The Alpha version of the model is available in 3 billion and 7 billion parameters. StableLM is trained on a new experimental dataset built on The Pile, but three times larger, with 1.5 trillion tokens of content. The richness of this dataset gives StableLM surprisingly high performance in conversational and coding tasks, despite its small size. The models are now available on GitHub and on Hugging Face, and developers can freely inspect, use, and adapt the StableLM base models for commercial or research purposes, subject to the terms of the CC BY-SA 4.0 license.
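    A minimal sketch of loading the 7B base model from Hugging Face with transformers follows; the stabilityai/stablelm-base-alpha-7b checkpoint name is assumed, and float16 weights need roughly 16 GB of GPU memory.

        # Hedged sketch: text generation with StableLM-Base-Alpha-7B.
        import torch
        from transformers import AutoTokenizer, AutoModelForCausalLM

        model_id = "stabilityai/stablelm-base-alpha-7b"
        tokenizer = AutoTokenizer.from_pretrained(model_id)
        model = AutoModelForCausalLM.from_pretrained(
            model_id, torch_dtype=torch.float16, device_map="auto"
        )

        inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to(model.device)
        outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
        print(tokenizer.decode(outputs[0], skip_special_tokens=True))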

