Streamline your MLOps practice with model packages

We are happy to announce the Public Preview of model packages, a capability in Azure that allows you to collect all the dependencies required to deploy a model to a serving platform and combine them into a single, movable unit. Creating packages before deploying models makes deployment more robust and reliable and your MLOps workflow more efficient. Packages can be moved across workspaces and even outside of Azure Machine Learning.

Why model packages?

After you train a model, you need to deploy it so others can consume its predictions. However, deploying a model requires more than just the model's weights or artifacts.

Typically, a model's dependencies include:

  • A base image in which your model gets executed.
  • Python packages and dependencies that the model depends on to function properly.
  • Extra assets that your model might need to run inference. These assets can include label maps, preprocessing parameters, transformations, etc.
  • Software required for the inference server to serve requests; for example, a Flask server or TensorFlow Serving.
  • Inference routine code with the logic to generate predictions.

All of these elements must be collected before the model can be deployed to the serving infrastructure.
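As a sketch, the dependencies above can be captured in a single package specification. The following YAML is a minimal example loosely based on the preview package schema; the asset and file names (`heart-classifier-pkg`, `packaging-env`, `src`, `score.py`) are placeholders, and the exact field names may differ from the published schema:

```yaml
# Hypothetical package specification (package.yml); names are placeholders.
target_environment: heart-classifier-pkg   # name of the resulting package
base_environment_source:                   # base image the model runs in
  type: environment_asset
  resource_id: azureml:packaging-env:1
inferencing_server:                        # server that handles scoring requests
  type: azureml_online
  code_configuration:
    code: src                              # folder containing the inference routine
    scoring_script: score.py               # logic that generates predictions
```

With the preview CLI, a specification like this would be passed to `az ml model package` against a registered model; see the documentation linked below for the exact schema and commands.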


Benefits of packaging models

Packaging models before deployment has the following advantages:
  • Reproducibility: All dependencies are collected at packaging time rather than deployment time. Once dependencies are resolved, you can deploy the package as many times as needed with the guarantee that they won't change between deployments.
  • Faster conflict resolution: Azure Machine Learning detects any misconfigurations related to dependencies, like a missing Python package, while packaging the model. You don't need to deploy the model to discover such issues.
  • Easier integration with the inference server: The inference server you're using might need specific software configurations (for instance, the TorchServe package), and such software can conflict with your model's dependencies. Model packages in Azure Machine Learning inject the dependencies required by the inference server, helping you detect conflicts before deploying a model.
  • Portability: You can move Azure Machine Learning model packages from one workspace to another, using registries. You can also generate packages that can be deployed outside Azure Machine Learning.
  • MLflow support with private networks: For MLflow models, Azure Machine Learning normally requires an internet connection to dynamically install the Python packages the model needs to run. When you package an MLflow model, these Python packages are resolved during the packaging operation, so the resulting package doesn't require an internet connection to be deployed.

Take MLOps to the next stage

Packages allow customers to move models to production more reliably and with a higher level of confidence in what they're deploying. Combined with Azure Machine Learning registries, they can unlock solid deployment strategies across multiple environments.


Get started today

Start using packages and take advantage of them in your deployments. Read our documentation on Microsoft Learn: Model packages for deployment (preview), or explore some of our examples in the azureml-examples repository for the CLI or the SDK (Python). If you have any questions or feedback, don't hesitate to start a conversation on GitHub.


This article was originally published by Microsoft's AI - Machine Learning Blog. You can find the original article here.