
Compare Models

  • BloombergGPT

    BloombergGPT

    OTHER
    BloombergGPT represents the first step in developing and applying LLM and generative AI technology for the financial industry. BloombergGPT has been trained on enormous amounts of financial data and is purpose-built for finance. The mixed dataset training leads to a model that outperforms existing LLMs on financial tasks by significant margins without sacrificing performance on general LLM benchmarks. BloombergGPT can perform a range of NLP tasks such as sentiment analysis, named entity recognition, news classification, and even writing headlines. With BloombergGPT, traders and analysts can produce financial analysis and insights more quickly and efficiently, saving valuable time for other critical tasks. To use BloombergGPT, you need access to Bloomberg’s Terminal software (a platform investors and financial professionals use to access real-time market data, breaking news, financial research, and advanced analytics). Bloomberg also offers a variety of other subscription options, including subscriptions for financial institutions, universities, and governments. The price of a Bloomberg Terminal varies depending on the type of subscription and the number of users.
  • Deepmind

    Chinchilla AI

    OTHER

    DeepMind’s Chinchilla AI is still in the testing phase. Once released, Chinchilla AI will be useful for developing various artificial intelligence tools, such as chatbots, virtual assistants, and predictive models. It functions in a manner analogous to other large language models such as GPT-3 (175B parameters), Jurassic-1 (178B parameters), Gopher (280B parameters), and Megatron-Turing NLG (530B parameters), but because Chinchilla is smaller (70B parameters), inference and fine-tuning cost less, making such models accessible to smaller companies or universities that may not have the budget or hardware to run larger ones.
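
    As a rough illustration of why a smaller but longer-trained model can match much larger ones, the sketch below applies the compute-optimal heuristic reported with Chinchilla (training compute of roughly 6 × parameters × tokens, with the optimum near 20 tokens per parameter); the token counts used for the named models are approximate, illustrative figures rather than official numbers.

    ```python
    # Rough sketch of the Chinchilla compute-optimal heuristic:
    # training compute C ~ 6 * N * D  (N = parameters, D = training tokens),
    # with the compute-optimal point at roughly 20 tokens per parameter.
    # Token counts below are approximate and for illustration only.

    def training_flops(params: float, tokens: float) -> float:
        """Approximate training compute in FLOPs."""
        return 6.0 * params * tokens

    models = {
        # name: (parameters, training tokens) -- illustrative values
        "GPT-3 175B":     (175e9, 300e9),
        "Gopher 280B":    (280e9, 300e9),
        "Chinchilla 70B": (70e9, 1.4e12),
    }

    for name, (n_params, n_tokens) in models.items():
        flops = training_flops(n_params, n_tokens)
        print(f"{name:>15}: {n_params / 1e9:.0f}B params, "
              f"{n_tokens / 1e9:.0f}B tokens, ~{flops:.2e} FLOPs")

    # Compute-optimal token budget for a 70B model under the ~20 tokens/param rule
    print(f"Compute-optimal tokens for 70B params: ~{20 * 70e9 / 1e12:.1f}T")
    ```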

  • NVIDIA

    LaunchPad

    FREE
    NVIDIA LaunchPad provides free access to enterprise NVIDIA hardware and software through an internet browser. NVIDIA customers can experience the power of AI with end-to-end solutions through guided hands-on labs or use NVIDIA-Certified Systems as a sandbox; access requires filling out an Application Form and waiting for approval. Sample labs include training and deploying a support chatbot, deploying an end-to-end AI workload, configuring and deploying a language model on the hardware accelerator, and deploying a fraud detection model.

     

    *FREE via Application Form
  • Aleph Alpha

    Luminous-base

    $0.0055
    Aleph Alpha offers the Luminous family of large language models; the Luminous models vary in size, price, and number of parameters. Luminous-base speaks and writes five languages (English, French, German, Italian, and Spanish), can perform information extraction and language simplification, and has multimodal image description capabilities. Aleph Alpha is targeting “critical enterprises”: organizations like law firms, healthcare providers, and banks, which rely heavily on trustworthy, accurate information. You can try Aleph Alpha models for free: go to the Jumpstart page on their site and click through the examples on Classification & Labelling, Generation, Information Extraction, Translation & Conversion, and Multimodal. Aleph Alpha is based in Europe, allowing customers with sensitive data to process their information in compliance with European regulations for data protection and security on a sovereign, European computing infrastructure.
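
    Beyond the Jumpstart playground, the models can also be called programmatically. The snippet below is a minimal sketch using Aleph Alpha’s Python client (the aleph-alpha-client package); the API token, prompt, and token limit are placeholders to replace with your own.

    ```python
    # Minimal sketch: call Luminous-base through Aleph Alpha's Python client.
    # Requires the aleph-alpha-client package and an API token from your
    # Aleph Alpha account; the prompt and settings below are illustrative.
    from aleph_alpha_client import Client, CompletionRequest, Prompt

    client = Client(token="YOUR_API_TOKEN")

    request = CompletionRequest(
        prompt=Prompt.from_text(
            "Extract the company name: 'Aleph Alpha is based in Heidelberg.'"
        ),
        maximum_tokens=32,
    )
    response = client.complete(request, model="luminous-base")
    print(response.completions[0].completion)
    ```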

  • Aleph Alpha

    Luminous-extended

    $0.0082
    Aleph Alpha Luminous-extended is the second-largest Luminous model and is faster and cheaper than Luminous-supreme. The model can perform information extraction and language simplification and has multimodal image description capabilities. You can try Aleph Alpha models with predefined examples for free: go to the Jumpstart page on their site and click through the examples on Classification & Labelling, Generation, Information Extraction, Translation & Conversion, and Multimodal. Aleph Alpha is based in Europe, which allows customers with sensitive data to process their information in compliance with European regulations for data protection and security on a sovereign, European computing infrastructure.
  • Aleph Alpha

    Luminous-supreme

    $0.0319
    Luminous-supreme is the largest and most expensive Aleph Alpha Luminous model. Supreme can do all the tasks of the smaller models (it speaks and writes five languages: English, French, German, Italian, and Spanish, and can perform information extraction, language simplification, semantic text comparison, document summarization, Q&A tasks, and more) and is well suited to creative writing. You can try out the Aleph Alpha models for free: go to the Jumpstart page on their site and click through the examples on Classification & Labelling, Generation, Information Extraction, Translation & Conversion, and Multimodal.
  • Aleph Alpha

    Luminous-supreme-control

    $0.0398
    Supreme-control is its own model: it is based on Luminous-supreme but optimized for a specific set of tasks. The Luminous models differ in complexity and ability, and this one excels at question answering and natural language inference.
    You can try the Aleph Alpha models with predefined examples for free: go to the Jumpstart page on their site and click through the examples on Classification & Labelling, Generation, Information Extraction, Translation & Conversion, and Multimodal.

  • Microsoft, NVIDIA

    MT-NLG

    OTHER
    MT-NLG (Megatron-Turing Natural Language Generation) uses the architecture of the transformer-based Megatron to generate coherent and contextually relevant text for a range of tasks, including completion prediction, reading comprehension, commonsense reasoning, natural language inference, and word sense disambiguation. MT-NLG is the successor to Microsoft’s Turing NLG 17B and NVIDIA’s Megatron-LM 8.3B. The MT-NLG model is three times larger than GPT-3 (530B vs. 175B parameters). Following the original Megatron work, NVIDIA and Microsoft trained the model on more than 4,000 GPUs. NVIDIA has announced an Early Access program for a managed API service to the MT-NLG model for organizations and researchers.
  • NVIDIA

    NeMo

    FREE
    NVIDIA NeMo, part of the NVIDIA AI platform, is an end-to-end, cloud-native enterprise framework for building, customizing, and deploying generative AI models. NeMo makes generative AI model development easy, cost-effective, and fast for enterprises. NeMo has separate collections for Automatic Speech Recognition (ASR), Natural Language Processing (NLP), and Text-to-Speech (TTS) models, and each collection consists of prebuilt modules that include everything needed to train on your data. The NeMo framework supports both language and image generative AI models. Currently, the workflow for language is in open beta and the workflow for images is in early access; you must be a member of the NVIDIA Developer Program and logged in with your organization’s email address to access it. NeMo is licensed under the Apache License 2.0, a permissive open source license that allows commercial use.
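
    As a small taste of the collection-based workflow, the sketch below loads a pretrained model from the NeMo ASR collection and transcribes an audio file; the checkpoint name and file path are illustrative, and the nemo_toolkit package must be installed.

    ```python
    # Minimal sketch: load a pretrained model from NeMo's ASR collection and
    # transcribe a local audio file. Requires nemo_toolkit[asr]; the checkpoint
    # name and audio path below are illustrative.
    import nemo.collections.asr as nemo_asr

    # Download a pretrained English CTC model
    asr_model = nemo_asr.models.EncDecCTCModel.from_pretrained(
        model_name="QuartzNet15x5Base-En"
    )

    # Transcribe one or more 16 kHz mono WAV files
    transcripts = asr_model.transcribe(["sample_audio.wav"])
    print(transcripts[0])
    ```
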
  • RedPajama

    RedPajama-INCITE-7B-Instruct

    FREE
    The RedPajama project aims to create a set of leading open source models. RedPajama-INCITE-7B-Instruct was developed by Together and leaders from the open source AI community. The model is the top-performing open source entry on the HELM benchmarks, surpassing other cutting-edge open models such as LLaMA-7B, Falcon-7B, and MPT-7B. The instruction-tuned model is designed for versatility and shines at few-shot tasks.

     

    The Instruct, Chat, and Base models, along with ten interim checkpoints, are now available on HuggingFace, and all the RedPajama LLMs are released under the Apache 2.0 license, which permits commercial use.

     

    Play with the RedPajama chat model here – https://lnkd.in/g3npSEbg
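
    Because the weights are on HuggingFace, a quick way to try the instruct model locally is the transformers library, roughly as sketched below (assumes torch, transformers, and accelerate are installed and a GPU with enough memory for the 7B weights in float16; the prompt and generation settings are illustrative).

    ```python
    # Minimal sketch: run RedPajama-INCITE-7B-Instruct locally with transformers.
    # Assumes torch, transformers, and accelerate are installed and a GPU with
    # enough memory for float16 weights; prompt and settings are illustrative.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "togethercomputer/RedPajama-INCITE-7B-Instruct"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.float16, device_map="auto"
    )

    prompt = "Q: What is a large language model?\nA:"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
    ```
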
  • Amazon

    SageMaker

    FREE
    Amazon SageMaker enables developers to create, train, and deploy machine-learning (ML) models in the cloud. SageMaker also enables developers to deploy ML models on embedded systems and edge devices. Amazon SageMaker JumpStart helps you get started with machine learning quickly and easily. Its solutions are fully customizable and support one-click deployment and fine-tuning of more than 150 popular open source models, including natural language processing, object detection, and image classification models that can help with extracting and analyzing data, fraud detection, churn prediction, and personalized recommendations.

     

    The Hugging Face LLM Inference DLC on Amazon SageMaker supports the following models: BLOOM / BLOOMZ, MT0-XXL, Galactica, SantaCoder, GPT-NeoX 20B (joi, pythia, lotus, rosey, chip, RedPajama, open assistant), FLAN-T5-XXL (T5-11B), Llama (vicuna, alpaca, koala), StarCoder / SantaCoder, and Falcon 7B / Falcon 40B. Hugging Face’s LLM DLC is a new purpose-built inference container that makes it easy to deploy LLMs in a secure and managed environment.
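
    Deploying one of these models with the Hugging Face LLM Inference DLC looks roughly like the sketch below; it assumes the sagemaker Python SDK, an AWS account with a SageMaker execution role, and an illustrative model ID and instance type that you would adjust for your own use case.

    ```python
    # Minimal sketch: deploy an open LLM on SageMaker with the Hugging Face LLM
    # Inference DLC. Assumes the sagemaker Python SDK and an IAM execution role;
    # the model ID and instance type are illustrative and may need adjusting.
    import sagemaker
    from sagemaker.huggingface import HuggingFaceModel, get_huggingface_llm_image_uri

    role = sagemaker.get_execution_role()  # or an explicit IAM role ARN

    # Resolve the Hugging Face LLM DLC (text-generation-inference) image URI
    image_uri = get_huggingface_llm_image_uri("huggingface")

    model = HuggingFaceModel(
        image_uri=image_uri,
        env={
            "HF_MODEL_ID": "tiiuae/falcon-7b-instruct",  # illustrative model choice
            "SM_NUM_GPUS": "1",
        },
        role=role,
    )

    predictor = model.deploy(
        initial_instance_count=1,
        instance_type="ml.g5.2xlarge",
    )

    print(predictor.predict({"inputs": "What is Amazon SageMaker?"}))
    ```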
