
Compare Models

  • Google

    BARD

    FREE
    Google’s Bard is now powered by PaLM 2, the powerful new LLM launched in May 2023 and trained on a massive dataset of text and code. Bard can generate text, translate languages, write different kinds of creative content, and answer your questions in an informative way. Bard is also programmed to search the web, so when you ask it a question it combines its knowledge of the world with the most recent information it can find online, keeping its answers as accurate and up to date as possible (very cool).
    The exact billing structure for Bard is still under development (it is free to try at the moment), but you will likely be able to purchase tokens in bulk at a discounted price. According to Google, you may also be able to use tokens earned through other means, such as completing surveys or participating in beta-testing programs.

  • DeepMind

    Chinchilla AI

    OTHER

    Google DeepMind’s Chinchilla AI is still in the testing phase. Once released, Chinchilla AI will be useful for developing various artificial intelligence tools, such as chatbots, virtual assistants, and predictive models. It functions in a manner analogous to other large language models such as GPT-3 (175B parameters), Jurassic-1 (178B parameters), Gopher (280B parameters), and Megatron-Turing NLG (530B parameters), but because Chinchilla is smaller (70B parameters), inference and fine-tuning cost less, making this kind of model practical for smaller companies or universities that may not have the budget or hardware to run larger models. Despite its smaller size, Chinchilla matches or outperforms those larger models because DeepMind trained it compute-optimally on far more data (roughly 1.4 trillion tokens).

  • Google

    Cloud Platform

    OTHER
    Google Cloud Platform (GCP) is a cloud computing service that includes innovative AI and machine learning products, solutions, and services. Google AI Studio is a low-code development environment that makes it easy to build and deploy applications. Its features include pre-trained models for getting started quickly, a unified experience for managing the entire ML lifecycle from data preparation to model deployment, and tools for monitoring the performance of ML models in production. Vertex AI can be used to train and deploy models, and GCP also offers a variety of data storage services, including Cloud Storage, which can be used to store large datasets.
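
    As a rough, unofficial sketch of how these pieces fit together, the snippet below stages a local dataset in Cloud Storage and points the Vertex AI Python SDK at a project. The project ID, bucket name, and file paths are placeholders, not real resources.

    # Minimal sketch (not an official GCP tutorial): stage a dataset in Cloud Storage
    # and initialize the Vertex AI SDK. Project ID, bucket, and file names are placeholders.
    from google.cloud import storage   # pip install google-cloud-storage
    import vertexai                    # pip install google-cloud-aiplatform

    PROJECT_ID = "my-gcp-project"      # assumption: replace with your own project
    BUCKET_NAME = "my-training-data"   # assumption: an existing Cloud Storage bucket

    # Upload a local CSV so training jobs can read it from a gs:// URI.
    client = storage.Client(project=PROJECT_ID)
    bucket = client.bucket(BUCKET_NAME)
    blob = bucket.blob("datasets/train.csv")
    blob.upload_from_filename("train.csv")
    print(f"Uploaded to gs://{BUCKET_NAME}/{blob.name}")

    # Point the Vertex AI SDK at the project/region before training or deploying models.
    vertexai.init(project=PROJECT_ID, location="us-central1")
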
  • Google

    code chat (codechat-bison)

    $0.002

    The Codey APIs are based on Google’s PaLM 2 large language model; Google trained them specifically to handle coding-related prompts, and it also trained them to handle queries related to Google Cloud.

    The code chat API can power a chatbot that assists with code-related questions. For example, you can use it to get help debugging code. The code chat API supports the codechat-bison model.

    The Codey APIs support a wide range of programming languages, including C++, C#, Go, GoogleSQL, Java, JavaScript, Kotlin, PHP, Python, Ruby, Rust, Scala, Swift, and TypeScript. You can use them through the API or in Generative AI Studio (see the usage sketch at the end of this entry).

    Some common use cases for code chat include debugging, where it assists with issues related to code that doesn’t compile or contains a bug; documentation, where it aids in understanding unfamiliar code to ensure accurate representation; and learning, as it provides help in comprehending code that you might not be very familiar with.

    Note: We have converted characters to tokens for the prices (based on the approximation of 4 characters per 1 token).
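
    As a rough illustration of the code chat API described in this entry, the sketch below sends a debugging question to codechat-bison through the Vertex AI Python SDK. The project ID and region are placeholders, the @001 version suffix is an assumption, and in older SDK releases the class lives under vertexai.preview.language_models instead of vertexai.language_models.

    # Hedged sketch: asking codechat-bison to help debug a snippet via the Vertex AI SDK.
    import vertexai
    from vertexai.language_models import CodeChatModel

    vertexai.init(project="my-gcp-project", location="us-central1")  # placeholders

    chat_model = CodeChatModel.from_pretrained("codechat-bison@001")
    chat = chat_model.start_chat()

    response = chat.send_message(
        "This Python function should return the factorial of n but recurses forever:\n"
        "def fact(n):\n"
        "    return n * fact(n - 1)\n"
        "What is wrong and how do I fix it?"
    )
    print(response.text)

    # The chat object keeps the conversation history, so follow-ups stay in context.
    follow_up = chat.send_message("Please also add a docstring to the fixed version.")
    print(follow_up.text)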

  • Google

    code completion (code-gecko)

    $0.002

    The Codey APIs are based on Google’s PaLM 2 large language model; Google trained them specifically to handle coding-related prompts, and it also trained them to handle queries related to Google Cloud. The code completion API provides code autocompletion suggestions as you write code, using the context of the code you’re writing to make its suggestions.

    The code completion API supports the code-gecko model. Use the code-gecko model to help improve the speed and accuracy of writing code. The Codey APIs support a wide range of programming languages, including C++, C#, Go, GoogleSQL, Java, JavaScript, Kotlin, PHP, Python, Ruby, Rust, Scala, Swift, and TypeScript. You can use them through the API or in Generative AI Studio (see the usage sketch at the end of this entry). Common use cases for code completion include writing code faster, by accepting suggested code to speed up the coding process, and minimizing bugs, since the suggestions are syntactically correct and reduce the risk of inadvertently introducing errors as you write.

    Note: We have converted characters to tokens for the prices (based on the approximation of 4 characters per 1 token).
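
    The sketch below is a minimal, hedged example of calling code-gecko for a completion through the Vertex AI Python SDK: you pass the code written so far as a prefix and the model suggests what comes next. The project ID, region, and @001 version suffix are assumptions, and older SDK releases expose the class under vertexai.preview.language_models.

    # Hedged sketch: requesting a code completion from code-gecko.
    import vertexai
    from vertexai.language_models import CodeGenerationModel

    vertexai.init(project="my-gcp-project", location="us-central1")  # placeholders

    model = CodeGenerationModel.from_pretrained("code-gecko@001")

    # The "prefix" is the code written so far; the model suggests what comes next.
    prefix = (
        "def reverse_words(sentence: str) -> str:\n"
        '    """Return the sentence with its word order reversed."""\n'
    )
    response = model.predict(prefix=prefix, max_output_tokens=64, temperature=0.2)
    print(response.text)  # e.g. a suggested function body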

  • Google

    code generation (code-bison)

    $0.002

    The Codey APIs are based on Google’s PaLM 2 large language model; Google trained them specifically to handle coding-related prompts, and it also trained them to handle queries related to Google Cloud.

    Code generation (code-bison) generates code from a natural language description of the desired code. For example, it can generate a unit test for a function. The code generation API supports the code-bison model. The Codey APIs support a wide range of programming languages, including C++, C#, Go, GoogleSQL, Java, JavaScript, Kotlin, PHP, Python, Ruby, Rust, Scala, Swift, and TypeScript. You can use them through the API or in Generative AI Studio (see the usage sketch at the end of this entry).

    Some common use cases for code generation include creating unit tests, where you can design a prompt to request a unit test for a specific function; writing a function, which involves passing a problem to the model and receiving a function that solves the problem; and creating a class, where you can use a prompt to describe the purpose of a class and have the code defining that class returned to you.

    Note: We have converted characters to tokens for the prices (based on the approximation of 4 characters per 1 token).
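
    The sketch below shows one way to ask code-bison for a unit test through the Vertex AI Python SDK. It is a hedged example: the project ID, region, and @001 version suffix are placeholders, and older SDK releases expose the class under vertexai.preview.language_models.

    # Hedged sketch: generating a pytest unit test with code-bison.
    import vertexai
    from vertexai.language_models import CodeGenerationModel

    vertexai.init(project="my-gcp-project", location="us-central1")  # placeholders

    model = CodeGenerationModel.from_pretrained("code-bison@001")

    prompt = (
        "Write a pytest unit test for this function:\n"
        "def is_palindrome(s: str) -> bool:\n"
        "    s = ''.join(c.lower() for c in s if c.isalnum())\n"
        "    return s == s[::-1]\n"
    )
    response = model.predict(prefix=prompt, max_output_tokens=512, temperature=0.2)
    print(response.text)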

  • Google

    LaMDA

    OTHER
    LaMDA stands for Language Model for Dialogue Application. It is a conversational Large Language Model (LLM) built by Google as an underlying technology to power dialogue-based applications that can generate natural-sounding human language. LaMDA is built by fine-tuning a family of Transformer-based neural language models specialized for dialogue and teaching the models to leverage external knowledge sources. The potential use cases for LaMDA are diverse, ranging from customer service and chatbots to personal assistants and beyond. LaMDA is not open source; currently, there are no APIs or downloads. However, Google is working on making LaMDA more accessible to researchers and developers, and APIs or downloads may become available in the future.
  • Google

    PaLM 2 chat-bison-001

    $0.0021535
    PaLM 2 has just launched (May 2023) and is Google’s next-generation Large Language Model, built on Google’s Pathways AI architecture. PaLM 2 was trained on a massive dataset of text and code, and it can handle many different tasks and learn new ones quickly. It is seen as a direct competitor to OpenAI’s GPT-4 model. It excels at advanced reasoning tasks, including code and math, classification and question answering, translation and multilingual proficiency (over 100 languages), and natural language generation, performing better than Google’s previous state-of-the-art LLMs, including its predecessor PaLM.
    PaLM 2 is the underlying model driving the PaLM API that can be accessed through Google’s Generative AI Studio. PaLM 2 has four submodels of different sizes. Bison is the best value in terms of capability and cost, and chat-bison-001 has been fine-tuned for multi-turn conversation use cases (see the usage sketch at the end of this entry). If you want to see PaLM 2’s capabilities, the simplest way to use it is through Google Bard (PaLM 2 is the technology that powers Google Bard).

     

    Watch Paige Bailey introducing PaLM 2: view here
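
    For a concrete picture of multi-turn chat, here is a minimal sketch using the Vertex AI Python SDK, where the model is referenced as chat-bison@001; the project ID, region, and prompts are placeholders rather than recommended settings.

    # Hedged sketch: a two-turn conversation with the chat model.
    import vertexai
    from vertexai.language_models import ChatModel

    vertexai.init(project="my-gcp-project", location="us-central1")  # placeholders

    chat_model = ChatModel.from_pretrained("chat-bison@001")
    chat = chat_model.start_chat(
        context="You are a helpful assistant for Python developers."
    )

    response = chat.send_message("How do I read a JSON file in Python?")
    print(response.text)

    # The chat object keeps the conversation history, so follow-ups stay in context.
    response = chat.send_message("And how do I write one back to disk?")
    print(response.text)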

  • Google

    PaLM 2 text-bison-001

    $0.004
    PaLM 2 has just launched (May 2023) and is Google’s next-generation Large Language Model, built on Google’s Pathways AI architecture. PaLM 2 was trained on a massive dataset of text and code, and it can handle many different tasks and learn new ones quickly. It is seen as a direct competitor to OpenAI’s GPT-4 model. It excels at advanced reasoning tasks, including code and math, classification and question answering, translation and multilingual proficiency (over 100 languages), and natural language generation, performing better than Google’s previous state-of-the-art LLMs, including its predecessor PaLM.

     

    PaLM 2 is the underlying model driving the PaLM API that can be accessed through Google’s Generative AI Studio. PaLM 2 has four submodels of different sizes. Bison is the best value in terms of capability and cost, and text-bison-001 is fine-tuned to follow natural language instructions, making it suitable for a variety of language tasks such as classification, sentiment analysis, entity extraction, extractive question answering, summarization, rewriting text in a different style, and concept ideation (see the usage sketch at the end of this entry).

     

    If you want to see PaLM 2’s capabilities, the simplest way to use it is through Google Bard (PaLM 2 is the technology that powers Google Bard).

     

    Watch Paige Bailey introducing PaLM 2: view here
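
    The sketch below shows a single text-generation call with the Vertex AI Python SDK, where the model is referenced as text-bison@001. The project ID, region, and sampling parameters are placeholders, and the prompt simply combines two of the tasks listed above (summarization and sentiment analysis).

    # Hedged sketch: one prompt covering summarization plus sentiment classification.
    import vertexai
    from vertexai.language_models import TextGenerationModel

    vertexai.init(project="my-gcp-project", location="us-central1")  # placeholders

    model = TextGenerationModel.from_pretrained("text-bison@001")

    prompt = (
        "Summarize the following review in one sentence and label its sentiment "
        "as positive, negative, or neutral:\n"
        '"The battery lasts two full days, but the camera struggles in low light."'
    )
    response = model.predict(prompt, temperature=0.2, max_output_tokens=128)
    print(response.text)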

  • Google

    PaLM 2 textembedding-gecko-001

    $0.0004
    PaLM 2 has just launched (May 2023) and is Google’s next-generation Large Language Model, built on Google’s Pathways AI architecture. PaLM 2 was trained on a massive dataset of text and code, and it can handle many different tasks and learn new ones quickly. It is seen as a direct competitor to OpenAI’s GPT-4 model. It excels at advanced reasoning tasks, including code and math, classification and question answering, translation and multilingual proficiency (over 100 languages), and natural language generation, performing better than Google’s previous state-of-the-art LLMs, including its predecessor PaLM.
    PaLM 2 is the underlying model driving the PaLM API that can be accessed through Google’s Generative AI Studio. PaLM 2 has four submodels of different sizes: Unicorn (the largest), Bison, Otter, and Gecko (the smallest); the different sizes allow PaLM 2 to be deployed efficiently for different kinds of tasks. Gecko is the smallest and cheapest model, intended for simple tasks, and textembedding-gecko-001 returns embedding vectors for text inputs (see the usage sketch at the end of this entry).
    If you want to see PaLM 2’s capabilities, the simplest way to use it is through Google Bard (PaLM 2 is the technology that powers Google Bard).

     

    Watch Paige Bailey introducing PaLM 2: view here
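
    As a minimal sketch of requesting embeddings with the Vertex AI Python SDK, where the model is referenced as textembedding-gecko@001, the snippet below embeds two sentences and prints the vector length (768 dimensions for the gecko model); the project ID and region are placeholders.

    # Hedged sketch: turning text into embedding vectors for search or clustering.
    import vertexai
    from vertexai.language_models import TextEmbeddingModel

    vertexai.init(project="my-gcp-project", location="us-central1")  # placeholders

    model = TextEmbeddingModel.from_pretrained("textembedding-gecko@001")

    embeddings = model.get_embeddings([
        "What is the capital of France?",
        "Paris is the capital of France.",
    ])
    for emb in embeddings:
        vector = emb.values               # a list of floats, one embedding per input
        print(len(vector), vector[:3])    # 768 dimensions for the gecko model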
