Mistral Large, Mistral AI’s flagship LLM, debuts on Azure AI Models-as-a-Service

Microsoft is partnering with Mistral AI to bring its Large Language Models (LLMs) to Azure. Mistral AI's OSS models, Mixtral-8x7B and Mistral-7B, were added to the Azure AI model catalog last December. Today, we are excited to announce the addition of Mistral AI's new flagship model, Mistral Large, to the Mistral AI collection of models in the Azure AI model catalog. Mistral Large is available through Models-as-a-Service (MaaS), which offers API-based access and token-based billing for LLMs, making it easier to build generative AI apps. Developers can provision an API endpoint in a matter of seconds and try out the model in the Azure AI Studio playground, or use it with popular LLM app development tools like Azure AI prompt flow and LangChain. The APIs support two layers of safety: first, the model has built-in support for a “safe prompt” parameter; second, Azure AI content safety filters are enabled to screen for harmful content generated by the model, helping developers build safe and trustworthy applications.

The Mistral Large model

Mistral Large is Mistral AI's most advanced Large Language Model (LLM), available first on Azure and on the Mistral AI platform. Thanks to its state-of-the-art reasoning and knowledge capabilities, it can be used for a full range of language-based tasks. Key attributes:

  • Specialized in RAG: Crucial information is not lost in the middle of long context windows. Supports up to 32K tokens.
  • Strong in coding: Code generation, review and comments with support for all mainstream coding languages.
  • Multi-lingual by design: Best-in-class performance in French, German, Spanish, and Italian – in addition to English. Dozens of other languages are supported.
  • Responsible AI: Efficient guardrails baked into the model, plus an additional safety layer via the safe prompt option.


You can read more about the model and review evaluation results on Mistral AI's blog: https://mistral.ai/news/mistral-large. The Benchmarks hub in Azure offers a standardized set of evaluation metrics for popular models including Mistral's OSS models and Mistral Large.

Using Mistral Large on Azure AI

Let's take care of the prerequisites first:

  1. If you don't have an Azure subscription, get one here: https://azure.microsoft.com/en-us/pricing/purchase-options/pay-as-you-go
  2. Create an Azure AI Studio hub and project. Make sure you pick East US 2 or France Central as the Azure region for the hub.

Next, you need to create a deployment to obtain the inference API and key:

  1. Open the Mistral Large model card in the model catalog: https://aka.ms/aistudio/landing/mistral-large
  2. Click on Deploy and pick the Pay-as-you-go option.
  3. Subscribe to the Marketplace offer and deploy. You can also review the API pricing at this step.
  4. You should land on the deployment page that shows you the API and key in less than a minute. You can try out your prompts in the playground.

The prerequisites and deployment steps are explained in the product documentation: https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deploy-models-mistral.

You can use the API and key with various clients. Review the API schema if you are looking to integrate the REST API with your own client: https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deploy-models-mistral#reference-for-mistral…. Let's review samples for some popular clients.
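As an illustration, here is a minimal sketch of calling the REST API from Python using only the standard library. The endpoint URL and key below are placeholders you would copy from your deployment page, and the request shape follows the chat completions schema described in the API reference linked above.

```python
import json
import urllib.request

# Placeholders: substitute the endpoint URL and key from your deployment page.
ENDPOINT = "https://<your-deployment>.<region>.inference.ai.azure.com"
API_KEY = "<your-api-key>"

def build_request(messages, max_tokens=256):
    """Build the URL, headers, and JSON body for a chat completions call."""
    url = f"{ENDPOINT}/v1/chat/completions"
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    }
    body = {"messages": messages, "max_tokens": max_tokens}
    return url, headers, body

def chat(messages, **kwargs):
    """POST the chat request and return the assistant's reply text."""
    url, headers, body = build_request(messages, **kwargs)
    req = urllib.request.Request(url, data=json.dumps(body).encode(), headers=headers)
    with urllib.request.urlopen(req, timeout=60) as resp:
        payload = json.load(resp)
    return payload["choices"][0]["message"]["content"]
```

The same API and key also plug into clients like LangChain or prompt flow connections; the raw HTTP shape above is what those clients send under the hood.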

Develop with integrated content safety

Mistral AI APIs on Azure come with a two-layered safety approach: the model is instructed through the system prompt, and an additional content filtering system screens prompts and completions for harmful content. Setting the safe_prompt parameter prefixes the system prompt with a guardrail instruction, as documented here. Additionally, the Azure AI content safety system, which consists of an ensemble of classification models, screens for specific types of harmful content. This external system is designed to be effective against adversarial prompt attacks, such as prompts that ask the model to ignore previous instructions. When the content filtering system detects harmful content, you will receive an error if the prompt was classified as harmful, or the response will be partially or completely truncated with an appropriate message if the generated output was classified as harmful. Make sure you account for these scenarios, where the content returned by the APIs is filtered, when building your applications.
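In client code, the safe_prompt flag and the two filtering outcomes described above can be handled along these lines. This is a sketch with a placeholder endpoint and key; it assumes a filtered completion is signaled by a finish_reason of "content_filter" and that a rejected prompt surfaces as an HTTP error, so check the behavior against the API reference for your deployment.

```python
import json
import urllib.error
import urllib.request

# Placeholders: substitute the endpoint URL and key from your deployment page.
ENDPOINT = "https://<your-deployment>.<region>.inference.ai.azure.com"
API_KEY = "<your-api-key>"

def interpret_choice(choice):
    """Map one response choice to a result, flagging truncated (filtered) completions."""
    filtered = choice.get("finish_reason") == "content_filter"
    return {"filtered": filtered, "text": choice["message"]["content"]}

def safe_chat(messages):
    """Call the chat API with safe_prompt enabled and report content-filter outcomes."""
    body = {"messages": messages, "safe_prompt": True}  # prefixes the guardrail instruction
    req = urllib.request.Request(
        f"{ENDPOINT}/v1/chat/completions",
        data=json.dumps(body).encode(),
        headers={"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=60) as resp:
            payload = json.load(resp)
    except urllib.error.HTTPError as err:
        # A prompt classified as harmful is rejected outright with an error response.
        return {"filtered": True, "text": "", "detail": err.read().decode()}
    return interpret_choice(payload["choices"][0])
```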

FAQs

  • What does it cost to use Mistral Large on Azure?
  • Do I need GPU capacity in my Azure subscription to use Mistral Large?
    • No. Unlike the Mistral AI OSS models that deploy to VMs with GPUs using Online Endpoints, the Mistral Large model is offered as an API. Mistral Large is a premium model whose weights are not available, so you cannot deploy it to a VM yourself.
  • This blog talks about the Mistral Large experience in Azure AI Studio. Is Mistral Large available in Azure Studio?
  • Does Mistral Large on Azure support function calling and JSON output?
    • The Mistral Large model can do function calling and generate JSON output, but support for those features will roll out soon on the Azure platform.
  • Mistral Large is listed on the Azure Marketplace. Can I purchase and use Mistral Large directly from Azure Marketplace?
    • Azure Marketplace enables the purchase and billing of Mistral Large, but the purchase experience can only be accessed through the model catalog. Attempting to purchase Mistral Large from the Marketplace will redirect you to Azure AI Studio.
  • Given that Mistral Large is billed through the Azure Marketplace, does it retire my Azure consumption commitment (aka MACC)?
  • Is my inference data shared with Mistral AI?
    • No, Microsoft does not share the content of any inference request or response data with Mistral AI.
  • Are there rate limits for the Mistral Large API on Azure?
    • The Mistral Large API comes with a limit of 200k tokens per minute and 1k requests per minute. Reach out to Azure customer support if this doesn't suffice.
  • Are Mistral Large Azure APIs region specific?
    • Mistral Large API endpoints can be created in AI Studio projects or Azure workspaces in the East US 2 or France Central Azure regions. If you want to use Mistral Large in prompt flow in projects or workspaces in other regions, you can manually add the API and key as a connection in prompt flow. Essentially, once you create the API in East US 2 or France Central, you can use it from any Azure region.
  • Can I fine-tune Mistral Large?
    • Not yet, stay tuned…
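Given the per-minute limits mentioned in the FAQ, bursty clients should be prepared to retry throttled calls. Below is a minimal exponential-backoff sketch; it assumes throttling surfaces as HTTP 429 (the conventional status code for rate limiting), and `call_with_backoff` is a hypothetical helper, not part of any Azure SDK.

```python
import time
import urllib.error

def call_with_backoff(call, max_retries=5, base_delay=1.0, sleep=time.sleep):
    """Invoke `call`; on HTTP 429, wait base_delay * 2**attempt and retry."""
    for attempt in range(max_retries):
        try:
            return call()
        except urllib.error.HTTPError as err:
            if err.code != 429 or attempt == max_retries - 1:
                raise  # not throttling, or retries exhausted
            sleep(base_delay * (2 ** attempt))
```

In production you would typically also honor a Retry-After header when the service provides one, rather than relying on a fixed backoff schedule.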

Supercharge your AI apps with Mistral Large today. Head over to AI Studio model catalog to get started.


This article was originally published by Microsoft's AI - Machine Learning Blog. You can find the original article here.