Azure Machine Learning is a cloud service for accelerating and managing the machine learning project lifecycle. Machine learning professionals, data scientists, and engineers can use it in their day-to-day workflows to train and deploy models and to manage MLOps.
You can either build a model in Azure Machine Learning or use a model created with an open-source framework such as PyTorch, TensorFlow, or scikit-learn. MLOps tools then help you track, retrain, and redeploy models.
An Azure Machine Learning pipeline is an independently executable workflow of a complete machine learning task. Pipelines make model construction more efficient, allow the team to execute at scale, and help standardize best practices for building machine learning models.
The most obvious way to orchestrate these pipelines is Azure Data Factory. However, there are other solutions, less evident and less documented but still efficient and interesting, such as Azure Functions.
Azure Functions is a serverless solution that allows you to write less code, maintain less infrastructure, and save on costs. Instead of worrying about deploying and maintaining servers, the cloud infrastructure provides all the up-to-date resources needed to keep your applications running.
In this tutorial we are going to see how we can manage Azure ML pipelines from Azure Functions. Up we go.
Configure a Service Principal
To use a service principal, you must first create it and grant it access to your workspace. Access is controlled with Azure role-based access control (Azure RBAC), so you also need to decide what access the service principal is granted.
When using a service principal, grant it only the minimum access required for the task it is used for. For example, don't grant owner or contributor access if the service principal is only used to read the access token for a web deployment.
The reason for granting the least access is that a service principal uses a password to authenticate, and that password may be stored as part of an automation script. If the password is leaked, having only the minimum access required for a specific task limits malicious use of the service principal.
When setting up a machine learning workflow as an automated process, we recommend using service principal authentication. This approach decouples authentication from any specific user login and allows managed access control.
Note that you must have administrator privileges on the Azure subscription to perform these steps.
The first step is to create the service principal: go to the Azure Portal and select Azure Active Directory, then App registrations.
Then select + New registration and give your service principal a name, for example my-svc-principal.
You can leave the other settings as is.
Then click Register.
From your newly created service principal's page, copy the Application ID and the Tenant ID, as they will be needed later.
Finally, you need to give the service principal permission to access your workspace. Navigate to Resource Groups and open the resource group that contains your Machine Learning workspace.
Then select Access Control (IAM) and Add Role Assignment. For Role, choose the level of access you need to grant, for example Contributor. Start typing the name of your service principal and, once it is found, select it and click Save.
Azure Function
Here's what my project structure looks like:
- PipelineLauncher folder (the Azure Function)
- scripts folder (the training code)
The scripts folder contains all the training code and a helper module (BlobHelper.py) that connects to Azure Blob Storage, gets the data, and saves all the artefacts back to the same storage. The training scripts use only classical libraries like Gensim, scikit-learn, and the Azure Storage SDK, with no Azure ML related libraries. For instance, here are the contents of launcher.py:
from train_helper import apprentissage
from BlobHelper import BlobHelper

# Fetch the training set from Azure Blob Storage
blob_helper = BlobHelper()
train_df = blob_helper.get_trainset()

# Train the model ("apprentissage" is our training routine) and save it back to the same storage
BUILD_MODEL = apprentissage(train_df)
blob_helper.save_trained_model(BUILD_MODEL)
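BlobHelper.py is our own helper and isn't shown in full here; below is a minimal sketch of what such a class could look like, assuming the storage connection string is available in an environment variable and that the container and blob names (training-data, trainset.csv, model.gz) are illustrative:

import io
import os
import gzip
import pickle

import pandas as pd
from azure.storage.blob import BlobServiceClient

class BlobHelper:
    """Thin wrapper around Azure Blob Storage for our training artefacts."""

    def __init__(self, container_name='training-data'):
        # The connection string is read from an environment variable (illustrative name)
        conn_str = os.environ['STORAGE_CONNECTION_STRING']
        service = BlobServiceClient.from_connection_string(conn_str)
        self.container = service.get_container_client(container_name)

    def get_trainset(self):
        # Download the CSV training set and load it into a DataFrame
        data = self.container.get_blob_client('trainset.csv').download_blob().readall()
        return pd.read_csv(io.BytesIO(data))

    def save_trained_model(self, model):
        # Serialise and gzip the trained model, then upload it back to the storage
        payload = gzip.compress(pickle.dumps(model))
        self.container.get_blob_client('model.gz').upload_blob(payload, overwrite=True)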
The __init__.py file in the PipelineLauncher folder is the main entry point of the Azure Function.
In it, we first authenticate to Azure ML with the service principal:
from azureml.core.authentication import ServicePrincipalAuthentication
from azureml.core import Workspace

# Credentials of the service principal created above
tenant_id = 'xxxxxx-xxxx-xxxx-xxxx-xxxxxx'
service_principal_id = 'xxxxxx-xxxx-xxxx-xxxx-xxxxxx'
service_principal_password = 'xxxxxx-xxxx-xxxx-xxxx-xxxxxx'

auth = ServicePrincipalAuthentication(tenant_id, service_principal_id, service_principal_password)

# Attach to the Azure ML workspace using the service principal credentials
subscription_id = 'xxxxxx-xxxx-xxxx-xxxx-xxxxxx'
resource_group = 'CAFPI'
workspace_name = 'cafpiwrkpcglobal'

ws = Workspace(subscription_id=subscription_id,
               resource_group=resource_group,
               workspace_name=workspace_name,
               auth=auth)
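In practice you shouldn't hard-code the secret in the function code. Azure Functions exposes application settings as environment variables, so a safer variant (the setting names below are illustrative) would read the credentials like this:

import os
from azureml.core.authentication import ServicePrincipalAuthentication

# Values stored as Function App application settings (illustrative names)
auth = ServicePrincipalAuthentication(
    tenant_id=os.environ['AML_TENANT_ID'],
    service_principal_id=os.environ['AML_PRINCIPAL_ID'],
    service_principal_password=os.environ['AML_PRINCIPAL_PASSWORD'])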
Then we will configure the cluster on which the training will take place.
from azureml.core.compute import AmlCompute
from azureml.core.runconfig import RunConfiguration, DEFAULT_GPU_IMAGE
from azureml.core.conda_dependencies import CondaDependencies

# Attach to the existing compute cluster and build the run configuration
aml_compute = AmlCompute(ws, 'worker-cluster')

run_amlcompute = RunConfiguration()
run_amlcompute.target = 'worker-cluster'

# Run the step in a Docker container based on the default GPU image,
# with an Azure ML-managed conda environment
run_amlcompute.environment.docker.enabled = True
run_amlcompute.environment.docker.base_image = DEFAULT_GPU_IMAGE
run_amlcompute.environment.python.user_managed_dependencies = False
run_amlcompute.environment.python.conda_dependencies = CondaDependencies.create(pip_packages=[
    'azure-storage-blob',
    'joblib',
    'numpy',
    'pandas',
    'scikit-learn',
    'scipy'
])
Note that here we use the existing cluster (worker-cluster), which we created in Azure ML Studio.
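If you'd rather create the cluster from code than from the Studio, a minimal sketch could look like this (the VM size and node counts are illustrative):

from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException

try:
    # Reuse the cluster if it already exists
    aml_compute = ComputeTarget(workspace=ws, name='worker-cluster')
except ComputeTargetException:
    # Otherwise provision it (VM size and node counts are illustrative)
    config = AmlCompute.provisioning_configuration(vm_size='Standard_NC6',
                                                   min_nodes=0,
                                                   max_nodes=2)
    aml_compute = ComputeTarget.create(ws, 'worker-cluster', config)
    aml_compute.wait_for_completion(show_output=True)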
Now we will create a training step:
from datetime import datetime
from azureml.pipeline.steps import PythonScriptStep

# A timestamp to give each step run a unique, recognisable name
timestamp = datetime.now().strftime('%Y%m%d%H%M%S')

mainStep = PythonScriptStep(
    name="main_launcher_" + timestamp,
    script_name="launcher.py",
    compute_target=aml_compute,
    runconfig=run_amlcompute,
    source_directory='scripts'
)
Important: if we don't specify the source_directory, Azure ML will upload the whole content of the project root to the experiment. In our case the 'scripts' folder contains all the training files, so only those are uploaded.
All we have to do now is validate the pipeline and submit it for execution:
from azureml.core import Experiment
from azureml.pipeline.core import Pipeline

# Assemble, validate and submit the pipeline as an experiment run
pipeline = Pipeline(workspace=ws, steps=[mainStep])
pipeline.validate()
experiment_name = 'worker-pipeline'
pipeline_run = Experiment(ws, experiment_name).submit(pipeline)
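Putting it together, the skeleton of __init__.py is an ordinary HTTP-triggered function: everything shown above lives inside main, and the returned text is the message you will see in the browser. A minimal sketch, assuming the code above is wrapped in a helper (here called submit_training_pipeline, an illustrative name):

import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    # Authenticate, build the run configuration and the training step,
    # then validate and submit the pipeline (the code shown above)
    pipeline_run = submit_training_pipeline()

    # The text returned here is what the caller sees in the browser
    return func.HttpResponse(
        f"Pipeline has been sent to execution: {pipeline_run.id}",
        status_code=200)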
Execute the Azure Function
Go to the Azure portal to get the URL (Function App name > Functions > Function name > Code + Test > Get Function URL).
Copy this URL into your browser and run it.
You will see a message that the pipeline has been sent to execution (the text of the message is whatever your function returns).
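You can, of course, also trigger it from code rather than from the browser; a quick sketch using the requests library (the URL is a placeholder to fill in with your own):

import requests

# Paste the function URL copied from the portal (including its code= access key, if any)
function_url = 'https://<your-function-app>.azurewebsites.net/api/PipelineLauncher?code=...'

response = requests.get(function_url)
print(response.status_code, response.text)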
Go to the Pipelines section of Azure Machine Learning Studio to see the status of the pipeline.
You will see that it is running (status Running).
As you remember, our pipeline runs the training, serialises the trained model and, using the BlobHelper module, saves it to Azure Storage. Consequently, when the pipeline status changes to Completed, we can go to the blob storage and see that the trained model has been uploaded in .gz format.
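If you prefer to check this from code rather than from the portal, a short sketch, assuming the same connection string and the illustrative container name used earlier:

import os
from azure.storage.blob import BlobServiceClient

# List the blobs in the container used by BlobHelper (names are illustrative)
service = BlobServiceClient.from_connection_string(os.environ['STORAGE_CONNECTION_STRING'])
container = service.get_container_client('training-data')

for blob in container.list_blobs():
    print(blob.name, blob.last_modified)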
In today's tutorial we've seen how to orchestrate Azure Machine Learning pipelines with Azure Functions. For simplicity we created an HTTP-triggered function, but you can, of course, create a timer-triggered one that runs the pipeline on a regular schedule. Besides being very fast and easy to maintain, Azure Functions is one of the cheapest possible solutions for this kind of job.
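For reference, a minimal sketch of such a timer-triggered variant; the CRON schedule itself lives in the function's function.json binding, and the one mentioned below is illustrative:

import azure.functions as func

# function.json would bind this to a timerTrigger with, for example,
# "schedule": "0 0 2 * * *"  (every day at 02:00 UTC)
def main(mytimer: func.TimerRequest) -> None:
    # Same logic as the HTTP-triggered version: authenticate,
    # build and submit the Azure ML pipeline (illustrative helper name)
    submit_training_pipeline()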
Hope this was useful.