Compare Models

  • Google

    BARD

    FREE
    Google’s Bard is now powered by PaLM 2, the powerful new LLM launched in May 2023 and trained on a massive dataset of text and code. Bard can generate text, translate languages, write different kinds of creative content, and answer your questions in an informative way. Bard is also programmed to search the web, so when you ask it a question it draws both on its knowledge of the world and on the most recent information it can find online. This allows Bard to provide you with the most accurate and up-to-date information possible (very cool).
    The exact billing structure for Bard is still under development (it is free to try at the moment), but you will likely be able to purchase tokens in bulk at a discounted price. According to Google, you may also be able to use tokens earned through other means, such as completing surveys or participating in beta testing programs.

  • Google

    BERT

    FREE
    BERT (Bidirectional Encoder Representations from Transformers) was introduced in 2018 by researchers at Google AI. BERT uses AI in the form of natural language processing (NLP), natural language understanding (NLU), and sentiment analysis to process every word in a search query in relation to all the other words in a sentence, giving it a robust understanding of context and semantics. This pre-training process is incredibly powerful and the learned weights can be fine-tuned with just one additional output layer to create models for a variety of NLP tasks such as question answering and sentiment analysis. You can download the smaller BERT models for FREE from the official BERT GitHub page.
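
    To illustrate the fine-tuning idea described above, here is a minimal sketch that adds a single classification output layer on top of the pre-trained encoder. It uses the Hugging Face Transformers library and the bert-base-uncased checkpoint rather than the official TensorFlow repository, so treat it as one possible setup, not the canonical one.

      # Minimal sketch: fine-tuning BERT for sentiment classification by adding
      # one classification head on top of the pre-trained encoder.
      # Assumes Hugging Face Transformers and the "bert-base-uncased" checkpoint.
      import torch
      from transformers import BertTokenizer, BertForSequenceClassification

      tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
      model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

      # A single labelled example (1 = positive, 0 = negative).
      inputs = tokenizer("This movie was great!", return_tensors="pt")
      labels = torch.tensor([1])

      # The forward pass returns the classification loss you would backpropagate
      # during fine-tuning of the new head together with the encoder weights.
      outputs = model(**inputs, labels=labels)
      print(outputs.loss.item(), outputs.logits)
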
  • BigScience

    BLOOM

    FREE
    The BigScience Large Open-science Open-access Multilingual Language Model (BLOOM) is a transformer-based, multilingual LLM created by over 1,000 AI researchers to provide a free large language model for everyone who wants to try one. BLOOM is an autoregressive model, trained to continue text from a prompt on vast amounts of text data using industrial-scale computational resources, and it can output coherent text in 46 natural languages and 13 programming languages. To interact with the API, you’ll need to request a token, which is done with a POST request to the server; tokens are only valid for two weeks, after which a new one must be generated. With around 176B parameters, it is considered an alternative to OpenAI models. There is a downloadable model, and a hosted API is available.
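
    As a rough illustration of the token-based API access mentioned above, the sketch below queries the hosted BLOOM model through the Hugging Face Inference API. The endpoint, header format, and payload follow that API and may differ from the BigScience-hosted server; the access token shown is a placeholder.

      # Minimal sketch: querying hosted BLOOM via the Hugging Face Inference API.
      # The bearer token below is a placeholder, not a real credential.
      import requests

      API_URL = "https://api-inference.huggingface.co/models/bigscience/bloom"
      HEADERS = {"Authorization": "Bearer hf_xxxxxxxxxxxxxxxx"}  # placeholder token

      payload = {"inputs": "The benefits of multilingual language models are"}
      response = requests.post(API_URL, headers=HEADERS, json=payload)
      print(response.json())  # generated continuation under "generated_text"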

  • BloombergGPT

    BloombergGPT

    OTHER
    BloombergGPT represents the first step in developing and applying LLM and generative AI technology for the financial industry. BloombergGPT has been trained on enormous amounts of financial data and is purpose-built for finance. The mixed dataset training leads to a model that outperforms existing LLMs on financial tasks by significant margins without sacrificing performance on general LLM benchmarks. BloombergGPT can perform a range of NLP tasks such as sentiment analysis, named entity recognition, news classification, and even writing headlines. With BloombergGPT, traders and analysts can produce financial analysis and insights more quickly and efficiently, saving valuable time that can be used for other critical tasks. To use BloombergGPT, you need access to Bloomberg’s terminal software (a platform investors and financial professionals use to access real-time market data, breaking news, financial research, and advanced analytics). Bloomberg also offers a variety of other subscription options, including subscriptions for financial institutions, universities, and governments. The price of a Bloomberg terminal varies depending on the type of subscription and the number of users.
  • ChatGLM

    ChatGLM-6B

    FREE
    Researchers at Tsinghua University in China have developed the ChatGLM series of models, which offer performance comparable to other models such as GPT-3 and BLOOM. ChatGLM-6B is an open bilingual language model (trained on Chinese and English). It is based on the General Language Model (GLM) framework and has 6.2B parameters. With quantization, users can deploy it locally on consumer-grade graphics cards (only 6GB of GPU memory is required at the INT4 quantization level). The following models are available: ChatGLM-130B (an open source LLM), ChatGLM-100B (not open source but available through invite-only access), and ChatGLM-6B (a lightweight open source alternative). ChatGLM LLMs are available with an Apache-2.0 license that allows commercial use. We have included the link to the Hugging Face page where you can try the ChatGLM-6B chatbot for free.
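
    A minimal sketch of the local, quantized deployment described above, assuming the THUDM/chatglm-6b checkpoint on Hugging Face (with its bundled remote code) and a CUDA-capable GPU with roughly 6GB of memory:

      # Minimal sketch: loading ChatGLM-6B at INT4 quantization on a consumer GPU.
      # Assumes the THUDM/chatglm-6b checkpoint and its trust_remote_code model code.
      from transformers import AutoTokenizer, AutoModel

      tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
      model = (
          AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
          .half()
          .quantize(4)   # INT4 quantization: roughly 6GB of GPU memory
          .cuda()
          .eval()
      )

      response, history = model.chat(tokenizer, "What is the GLM framework?", history=[])
      print(response)
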
  • Deepmind

    Chinchilla AI

    OTHER

    DeepMind’s Chinchilla AI is still in the testing phase. Once released, Chinchilla AI will be useful for developing various artificial intelligence tools, such as chatbots, virtual assistants, and predictive models. It functions in a manner analogous to other large language models such as GPT-3 (175B parameters), Jurassic-1 (178B parameters), Gopher (280B parameters), and Megatron-Turing NLG (530B parameters), but because Chinchilla is smaller (70B parameters), inference and fine-tuning cost less, making these models easier to use for smaller companies or universities that may not have the budget or hardware to run larger models.

  • Google

    Cloud Platform

    OTHER
    Google Cloud Platform (GCP) is a cloud computing service that includes innovative AI and machine learning products, solutions, and services. Google AI Studio is a low-code development environment that makes it easy to build and deploy applications. It offers a variety of features, such as pre-trained models to get started quickly, a unified experience for managing the entire ML lifecycle from data preparation to model deployment, and tools for monitoring the performance of ML models in production. Vertex AI can be used to train and deploy models, and GCP also offers a variety of data storage services, including Cloud Storage, which can be used to store large datasets.
  • Google

    code chat (codechat-bison)

    $0.002

    The Codey APIs are based on Google’s PaLM 2 large language model; the company specifically trained them to handle coding-related prompts, and also to handle queries related to Google Cloud.

    The code chat API can power a chatbot that assists with code-related questions. For example, you can use it for help debugging code. The code chat API supports the codechat-bison model.

    The Codey APIs support a wide range of programming languages, including C++, C#, Go, GoogleSQL, Java, JavaScript, Kotlin, PHP, Python, Ruby, Rust, Scala, Swift, and TypeScript. You can use them through the API and in Generative AI Studio.

    Some common use cases for code chat include debugging, where it assists with issues related to code that doesn’t compile or contains a bug; documentation, where it aids in understanding unfamiliar code to ensure accurate representation; and learning, as it provides help in comprehending code that you might not be very familiar with.

    Note: We have converted characters to tokens for the prices (based on the approximation of 4 characters per 1 token).
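
    For reference, a minimal sketch of calling the code chat API from Python, assuming the Vertex AI Python SDK (google-cloud-aiplatform) and a Google Cloud project with Vertex AI enabled; the project ID and region below are placeholders.

      # Minimal sketch: a code-assistance chat with codechat-bison via the Vertex AI SDK.
      import vertexai
      from vertexai.language_models import CodeChatModel

      vertexai.init(project="your-project-id", location="us-central1")  # placeholder project/region

      chat_model = CodeChatModel.from_pretrained("codechat-bison")
      chat = chat_model.start_chat()

      response = chat.send_message(
          "This Python function raises a TypeError. Can you help me debug it?\n"
          "def add(a, b): return a + str(b)"
      )
      print(response.text)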

  • Google

    code completion (code-gecko)

    $0.002

    The Codey APIs are based on Google’s PaLM 2 large language model; the company specifically trained them to handle coding-related prompts, and also to handle queries related to Google Cloud. The code completion API provides code autocompletion suggestions as you write code, using the context of the code you’re writing to make its suggestions.

    The code completion API supports the code-gecko model. Use the code-gecko model to help improve the speed and accuracy of writing code. The Codey APIs support a wide range of programming languages, including C++, C#, Go, GoogleSQL, Java, JavaScript, Kotlin, PHP, Python, Ruby, Rust, Scala, Swift, and TypeScript. You can use them through the API and in Generative AI Studio. Some common use cases for code completion include writing code faster, by building on the suggested code, and minimizing bugs, by relying on suggestions that are syntactically correct and therefore less likely to introduce errors during code creation.

    Note: We have converted characters to tokens for the prices (based on the approximation of 4 characters per 1 token).
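
    A minimal sketch of a completion call, again assuming the Vertex AI Python SDK and a configured Google Cloud project (the project ID and region are placeholders):

      # Minimal sketch: code completion with code-gecko via the Vertex AI SDK.
      # The model returns a suggested continuation for the code prefix you pass in.
      import vertexai
      from vertexai.language_models import CodeGenerationModel

      vertexai.init(project="your-project-id", location="us-central1")  # placeholder project/region

      model = CodeGenerationModel.from_pretrained("code-gecko")
      response = model.predict(
          prefix="def reverse_string(s):\n    ",  # the code written so far
          max_output_tokens=64,
      )
      print(response.text)  # suggested completion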

  • Google

    code generation (code-bison)

    $0.002

    The Codey APIs are based on Google’s PaLM 2 large language model; the company specifically trained them to handle coding-related prompts, and also to handle queries related to Google Cloud.

    Code generation (code-bison) generates code from a natural language description of the desired code. For example, it can generate a unit test for a function. The code generation API supports the code-bison model. The Codey APIs support a wide range of programming languages, including C++, C#, Go, GoogleSQL, Java, JavaScript, Kotlin, PHP, Python, Ruby, Rust, Scala, Swift, and TypeScript. You can use them through the API and in Generative AI Studio.

    Some common use cases for code generation include creating unit tests, where you can design a prompt to request a unit test for a specific function; writing a function, which involves passing a problem to the model and receiving a function that solves the problem; and creating a class, where you can use a prompt to describe the purpose of a class and have the code defining that class returned to you.

    Note: We have converted characters to tokens for the prices (based on the approximation of 4 characters per 1 token).
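
    A minimal sketch of the unit test use case, once more assuming the Vertex AI Python SDK and a configured project (placeholder project ID and region):

      # Minimal sketch: generating a unit test with code-bison via the Vertex AI SDK.
      import vertexai
      from vertexai.language_models import CodeGenerationModel

      vertexai.init(project="your-project-id", location="us-central1")  # placeholder project/region

      model = CodeGenerationModel.from_pretrained("code-bison")
      response = model.predict(
          prefix=(
              "Write a Python unit test for the following function:\n"
              "def is_palindrome(word):\n"
              "    return word == word[::-1]"
          ),
          temperature=0.2,
          max_output_tokens=256,
      )
      print(response.text)  # generated unit test code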

  • Google, Stanford University

    Electra

    FREE
    ELECTRA (Efficiently Learning an Encoder that Classifies Token Replacements Accurately) is a transformer-based model like BERT, but it uses a different pre-training approach that is more efficient and requires fewer computational resources. It was created by a team of researchers from Google Research, Brain Team, and Stanford University. ELECTRA models are trained to distinguish “real” input tokens from “fake” input tokens generated by another neural network (for the more technical audience, ELECTRA uses a new pre-training task, called replaced token detection (RTD), that trains a bidirectional model while learning from all input positions). Inspired by generative adversarial networks (GANs), ELECTRA trains the model to distinguish between “real” and “fake” input data. At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, it achieves state-of-the-art results on the SQuAD 2.0 dataset. Go to GitHub, where you can access the three models (ELECTRA-Small, ELECTRA-Base, and ELECTRA-Large).
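
    To make the replaced token detection idea concrete, here is a minimal sketch using the pre-trained ELECTRA-Small discriminator published on Hugging Face; it scores each token of a sentence in which one word has been swapped for a plausible fake. Treat it as an illustration of the discriminator side of RTD, not of the full pre-training pipeline.

      # Minimal sketch: using the ELECTRA-Small discriminator to flag replaced tokens.
      # Positive logits indicate tokens the model considers "fake" (replaced).
      import torch
      from transformers import ElectraForPreTraining, ElectraTokenizerFast

      discriminator = ElectraForPreTraining.from_pretrained("google/electra-small-discriminator")
      tokenizer = ElectraTokenizerFast.from_pretrained("google/electra-small-discriminator")

      fake_sentence = "The quick brown fox fake over the lazy dog"  # "jumps" replaced by "fake"
      inputs = tokenizer(fake_sentence, return_tensors="pt")

      with torch.no_grad():
          logits = discriminator(**inputs).logits

      predictions = (logits > 0).int().squeeze().tolist()
      for token, flag in zip(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]), predictions):
          print(token, "replaced" if flag else "original")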

  • Google

    FLAN-T5

    FREE
    If you already know T5, FLAN-T5 is just better at everything. For the same number of parameters, these models have been fine-tuned on more than 1,000 additional tasks covering more languages, including English, German, and French. FLAN-T5 has an Apache-2.0 license, a permissive open source license that allows commercial use. With appropriate prompting, it can perform zero-shot NLP tasks such as text summarization, common sense reasoning, natural language inference, question answering, sentence and sentiment classification, translation, and pronoun resolution.
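
    A minimal sketch of the zero-shot prompting described above, assuming the publicly released google/flan-t5-base checkpoint and the Hugging Face Transformers library:

      # Minimal sketch: zero-shot translation with FLAN-T5 via a natural-language prompt.
      from transformers import T5Tokenizer, T5ForConditionalGeneration

      tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-base")
      model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-base")

      prompt = "Translate English to German: How old are you?"
      inputs = tokenizer(prompt, return_tensors="pt")
      outputs = model.generate(**inputs, max_new_tokens=20)
      print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # e.g. "Wie alt sind Sie?"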