
Compare Models

  • Google

    LaMDA

    OTHER
LaMDA stands for Language Model for Dialogue Applications. It is a conversational large language model (LLM) built by Google as an underlying technology to power dialogue-based applications that generate natural-sounding human language. LaMDA is built by fine-tuning a family of Transformer-based neural language models specialized for dialogue and teaching the models to leverage external knowledge sources. The potential use cases for LaMDA are diverse, ranging from customer service and chatbots to personal assistants and beyond. LaMDA is not open source; currently, there are no APIs or downloads. However, Google is working on making LaMDA more accessible to researchers and developers, and APIs and downloads may be made available in the future.
  • Amazon

    SageMaker

    FREE
Amazon SageMaker enables developers to create, train, and deploy machine-learning (ML) models in the cloud. SageMaker also enables developers to deploy ML models on embedded systems and edge devices. Amazon SageMaker JumpStart helps you get started with machine learning quickly and easily. The solutions are fully customizable and support one-click deployment and fine-tuning of more than 150 popular open-source models, including natural language processing, object detection, and image classification models that can help with extracting and analyzing data, fraud detection, churn prediction, and personalized recommendations (a minimal JumpStart deployment sketch appears after this list).


The Hugging Face LLM Inference DLCs (Deep Learning Containers) on Amazon SageMaker support the following models: BLOOM / BLOOMZ, MT0-XXL, Galactica, SantaCoder, GPT-NeoX 20B (joi, pythia, lotus, rosey, chip, RedPajama, Open Assistant), FLAN-T5-XXL (T5-11B), Llama (Vicuna, Alpaca, Koala), StarCoder / SantaCoder, and Falcon 7B / Falcon 40B. Hugging Face's LLM DLC is a new purpose-built inference container for deploying LLMs in a secure and managed environment (a deployment sketch using this container follows the list).
  • StableLM

StableLM-Base-Alpha-7B

    FREE

Stability AI released a new open-source language model, StableLM. The Alpha version of the model is available in 3 billion and 7 billion parameter sizes. StableLM is trained on a new experimental dataset built on The Pile, but three times larger, with 1.5 trillion tokens of content. The richness of this dataset gives StableLM surprisingly high performance in conversational and coding tasks, despite its small size. The models are now available on GitHub and on Hugging Face, and developers can freely inspect, use, and adapt the StableLM base models for commercial or research purposes, subject to the terms of the CC BY-SA 4.0 license (a short loading example with the transformers library appears after this list).
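
For the SageMaker JumpStart workflow described under the Amazon entry, the following is a minimal sketch using the SageMaker Python SDK's JumpStartModel class. The model id, instance defaults, and example prompt are assumptions for illustration, not a prescribed configuration.

```python
# Minimal sketch: deploying a JumpStart model with the SageMaker Python SDK.
# Assumes an AWS account, SageMaker permissions, and the "sagemaker" package installed.
from sagemaker.jumpstart.model import JumpStartModel

# The model id below is an assumed example; any JumpStart model id can be substituted.
model = JumpStartModel(model_id="huggingface-text2text-flan-t5-xxl")

# Deploy to a real-time endpoint using the SDK's default instance settings.
predictor = model.deploy()

# Send a simple request to the endpoint and print the result.
response = predictor.predict({"inputs": "Summarize: SageMaker trains and deploys ML models."})
print(response)

# Delete the endpoint when finished to stop incurring charges.
predictor.delete_endpoint()
```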
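
The Hugging Face LLM Inference DLC mentioned under the Amazon entry can be deployed roughly as sketched below with the SageMaker Python SDK. The container version, HF_MODEL_ID, instance type, and environment settings shown here are assumptions for illustration; check the current Hugging Face and SageMaker documentation for supported values.

```python
# Minimal sketch: deploying an LLM with the Hugging Face LLM Inference DLC on SageMaker.
import sagemaker
from sagemaker.huggingface import HuggingFaceModel, get_huggingface_llm_image_uri

role = sagemaker.get_execution_role()  # assumes this runs inside a SageMaker environment

# Resolve the Hugging Face LLM DLC image URI (the version string is an assumption).
llm_image = get_huggingface_llm_image_uri("huggingface", version="0.8.2")

# Point the container at a hosted model; falcon-7b-instruct is an assumed example.
llm_model = HuggingFaceModel(
    role=role,
    image_uri=llm_image,
    env={
        "HF_MODEL_ID": "tiiuae/falcon-7b-instruct",
        "SM_NUM_GPUS": "1",          # GPUs used for tensor parallelism
        "MAX_INPUT_LENGTH": "1024",  # assumed request limits
        "MAX_TOTAL_TOKENS": "2048",
    },
)

# Deploy to a GPU instance; the instance type and startup timeout are assumptions.
llm = llm_model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.2xlarge",
    container_startup_health_check_timeout=300,
)

print(llm.predict({"inputs": "What is Amazon SageMaker?"}))
```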
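
Since the StableLM base models are described as freely available on Hugging Face, a minimal loading sketch with the transformers library might look like the following. The repository id, dtype, and generation settings are assumptions for illustration.

```python
# Minimal sketch: loading StableLM-Base-Alpha-7B from Hugging Face with transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stabilityai/stablelm-base-alpha-7b"  # assumed Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to reduce memory; assumes a GPU
    device_map="auto",
)

prompt = "Open-source language models are useful because"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```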

