In MLOps level 1, the machine learning pipeline is automated. This enables continuous training of the machine learning model as well as continuous delivery of the model prediction service.
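
Purely as an illustration of the idea (not a prescribed architecture), a level 1 pipeline can be pictured as a chain of steps that an orchestrator triggers on a schedule or on new-data events. The sketch below uses plain Python with scikit-learn; every path, column name and threshold in it is hypothetical.

```python
# Hypothetical sketch of a level 1 (continuous training) pipeline. Each step is a
# callable that an orchestrator (e.g. Airflow or Kubeflow Pipelines) could trigger
# automatically. All names and paths are illustrative only.
import joblib
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split


def extract_data(path: str) -> pd.DataFrame:
    # In practice this would read from a feature store or data lake.
    return pd.read_csv(path)


def train_model(df: pd.DataFrame, target: str):
    X, y = df.drop(columns=[target]), df[target]
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
    model = RandomForestClassifier().fit(X_train, y_train)
    return model, accuracy_score(y_test, model.predict(X_test))


def deploy_if_better(model, score: float, threshold: float, model_path: str):
    # Continuous delivery of the prediction service: only promote the model
    # if it clears a validation threshold.
    if score >= threshold:
        joblib.dump(model, model_path)


def run_pipeline():
    df = extract_data("data/training.csv")           # hypothetical path
    model, score = train_model(df, target="label")   # hypothetical target column
    deploy_if_better(model, score, threshold=0.8, model_path="model.joblib")
```

The point is not the specific libraries but that every step, from data extraction through to promotion of the trained model, runs without a human in the loop.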


Recommended Pre-Reading

To get the most from this article, it is important that you understand the terminology I will be using. To familiarise yourself with it, I have outlined the various steps of a machine learning process in detail in this article: https://towardsdatascience.com/steps-of-a-machine-learning-process-7acc43973385.

Some of the concepts discussed overlap with DevOps philosophy, while others are specific to MLOps. To read more about how these two practices compare and contrast, please check out my article: https://towardsdatascience.com/mlops-vs-devops-5c9a4d5a60ba

If you are interested in learning more about how this level of MLOps differs from level 0, please check out my…


MLOps level 0 is the basic level of maturity of a machine learning process. For some organisations this may be sufficient, especially if models rarely need to be re-trained. This level of MLOps is characterised by highly manual steps within the workflow. To address the challenges prevalent at MLOps level 0, continuous integration, continuous deployment and continuous training components can be introduced.


In my previous article I went over the steps that make up a machine learning process (https://towardsdatascience.com/steps-of-a-machine-learning-process-7acc43973385). The level of automation of these steps determines how mature the machine learning process is. In this article I will go into more detail on the first level (level 0) of MLOps, the basic level of maturity.

The diagram below illustrates the various steps and pipelines (the workflow) required to serve a model as a prediction service whose predictions applications can consume.


A machine learning process is made up of several steps that are cyclical in nature. The more of these steps an organisation can automate through MLOps, the more mature the machine learning process is.


MLOps is the application of DevOps philosophy to a machine learning system (to read more about these two practices, please check out my article: https://towardsdatascience.com/mlops-vs-devops-5c9a4d5a60ba). Bringing a machine learning model to production involves several steps, and the level of automation of those steps determines the maturity of the machine learning process. Generally, the more automated the process, the higher the velocity of training new models given new data or model implementations.

Steps of a machine learning process

  1. Data extraction (see the sketch after this list): This step involves the integration of data used for the machine learning…
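
To make the data extraction step a little more concrete, here is a minimal hedged sketch that integrates data from a relational source and a flat file with pandas; the database, table and file names are all hypothetical.

```python
# Hypothetical data extraction: pull records from a relational source and a flat
# file, then integrate them into a single dataset for the downstream steps.
import sqlite3

import pandas as pd

conn = sqlite3.connect("warehouse.db")                    # hypothetical database
orders = pd.read_sql_query("SELECT * FROM orders", conn)  # hypothetical table
customers = pd.read_csv("customers.csv")                  # hypothetical export

# Integrate the two sources on a shared key (also hypothetical).
training_data = orders.merge(customers, on="customer_id", how="left")
training_data.to_csv("training_data.csv", index=False)
```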


When ingesting, processing and analysing big data is not feasible with traditional systems, a big data architecture is the way to go.


With the advancement of technology, the volumes of data organisations collect have increased exponentially. A big data architecture is used to ingest, process and analyse data that is too large and/or complex for traditional database management systems to handle.

Workloads of a Big Data Architecture

  • Batch processing of data at rest (a sketch follows this list).
  • Real-time processing of data in transit.
  • Exploration of big data.
  • Machine learning and advanced analytics.
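
As a rough illustration of the first workload (batch processing of data at rest), the sketch below uses PySpark to aggregate raw files sitting in a data lake; the paths and column names are hypothetical.

```python
# Hypothetical batch processing of data at rest with PySpark: read raw files from
# a data lake, aggregate them, and write the curated result back out.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("batch-aggregation").getOrCreate()

events = spark.read.parquet("s3a://my-data-lake/raw/events/")  # hypothetical path

daily_counts = (
    events
    .withColumn("event_date", F.to_date("event_timestamp"))   # hypothetical column
    .groupBy("event_date", "event_type")
    .agg(F.count("*").alias("event_count"))
)

daily_counts.write.mode("overwrite").parquet("s3a://my-data-lake/curated/daily_counts/")
```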

Components of the Big Data Architecture


This article goes through the similarities and differences between DevOps and MLOps, as well as platforms that help enable MLOps.


As the field of machine learning has matured in recent years, the need to integrate continuous integration (CI), continuous delivery (CD) and continuous training (CT) into machine learning systems has increased. The application of DevOps philosophy to a machine learning system has been termed MLOps. The aim of MLOps is to fuse machine learning system development (ML) and machine learning system operation (Ops) together.

What is DevOps?

DevOps is a practice used by individuals and teams when developing software systems. The benefits individuals and teams can obtain through a DevOps culture and practice include:

  1. Rapid development life…


This article, part 1 of a series, covers some Python libraries I have found useful throughout my career in analytics, ranging from data curation to model training and machine learning model deployment.


Throughout my career I have had the opportunity to cover the end-to-end spectrum of analytics: sourcing data, feature engineering, developing data pipelines, training machine learning models and deploying them. As a result of this breadth of exposure, I have worked with many tools, technologies and libraries. This is part 1 of a multi-part series of articles covering the tools I use.

Python

This is my go-to programming language for developing data pipelines, big data processing, machine learning and plenty of other applications (e.g. web applications). Python…



Machine learning model deployment can be categorised into three broad categories:

  1. Real-time inference: typically this involves hosting a machine learning model as an endpoint on a web server. Applications can then send data via HTTPS and quickly receive the model’s predictions back (a minimal sketch follows this list).
  2. Batch inference: on a regular basis (triggered by time or by events such as data landing in a data lake/data store), resources are spun up and a machine learning model is deployed to predict on the new data now available in the data lake/data store.
  3. Model deployment onto the edge: Instead…
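
As an illustration of the first category, a real-time inference endpoint can be sketched with Flask and a pre-trained scikit-learn model. This is a minimal sketch only; the model artifact, route and input format are hypothetical, and a production deployment would sit behind a proper WSGI server and HTTPS.

```python
# Hypothetical real-time inference endpoint: the model is loaded once at start-up
# and served over HTTP so applications can POST features and get predictions back.
import joblib
from flask import Flask, jsonify, request

app = Flask(__name__)
model = joblib.load("model.joblib")  # hypothetical pre-trained model artifact


@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json()  # e.g. {"features": [[5.1, 3.5, 1.4, 0.2]]}
    predictions = model.predict(payload["features"]).tolist()
    return jsonify({"predictions": predictions})


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```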


A scalable design pattern for machine learning batch transforms using AWS and SageMaker to embed advanced analytics into business applications.
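
For context, the building block the pattern relies on is a SageMaker batch transform job. A minimal sketch of triggering one with boto3 is shown below; the job, model and S3 names are all hypothetical.

```python
# Hypothetical sketch: trigger a SageMaker batch transform job with boto3 so that
# a trained model scores a batch of records that has landed in S3.
# All names below are illustrative only.
import boto3

sagemaker_client = boto3.client("sagemaker")

sagemaker_client.create_transform_job(
    TransformJobName="churn-scoring-2020-12-10",         # hypothetical job name
    ModelName="churn-model-v1",                          # hypothetical SageMaker model
    TransformInput={
        "DataSource": {
            "S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": "s3://my-bucket/batch-input/",  # hypothetical input prefix
            }
        },
        "ContentType": "text/csv",
        "SplitType": "Line",
    },
    TransformOutput={"S3OutputPath": "s3://my-bucket/batch-output/"},
    TransformResources={"InstanceType": "ml.m5.large", "InstanceCount": 1},
)
```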

Photo by h heyerlein on Unsplash

A common scenario I encounter goes as follows:

An organisation’s team of data scientists has spent significant time and resources fine-tuning and training a machine learning model to solve a business problem. Finally, they have settled on the one (or an ensemble/stack ensemble).

The business is thrilled to hear the great news. Word spreads that a revolutionary technology is going to radically transform the business by improving profit by x% or efficiency by y%, etc. Soon the news reaches the senior executives and they want it available now, now, now.

The team of data scientists…



During my time as a consultant working in the analytics space, I have had the opportunity to work in both AWS and Azure environments to implement analytics solutions.

Below are my thoughts on the similarities and differences between the two machine learning services provided by the two biggest cloud vendors (as of 10 December 2020).

Similarities

  1. Estimators: Model training and inference are done through the use of estimators (a rough sketch follows this list). Under the hood, they are Docker containers deployed to one or more VMs/EC2 instances to do the training/inference. As a result, the script that actually does the model training/pre/post-processing is quite easily…
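
To illustrate the estimator pattern on the AWS side, here is a rough sketch using the SageMaker Python SDK. It is an assumption-laden example rather than anything from the comparison itself: parameter names follow SDK v2, and the script, IAM role and S3 locations are hypothetical.

```python
# Hypothetical AWS illustration of the estimator pattern: the training script runs
# inside a managed scikit-learn container on instances that SageMaker provisions.
import sagemaker
from sagemaker.sklearn.estimator import SKLearn

session = sagemaker.Session()

estimator = SKLearn(
    entry_point="train.py",                               # hypothetical training script
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # hypothetical IAM role
    instance_type="ml.m5.large",
    instance_count=1,
    framework_version="0.23-1",
    py_version="py3",
    sagemaker_session=session,
)

# Training data location in S3 (hypothetical); SageMaker mounts it into the container.
estimator.fit({"train": "s3://my-bucket/training-data/"})
```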

