MLOps and machine learning (ML) pipelines are quickly becoming the de facto way to build production ML systems around a feature store.
In this workshop, you will develop an ML system that consists of three programs: a feature pipeline, a training pipeline, and an inference pipeline. These three pipelines will run on a (free) serverless compute platform (modal.com) and be connected together via a (free) serverless feature store and model registry (hopsworks.ai). The user interface for our ML system will also be hosted on a free serverless UI platform (streamlit.io).

We will go beyond just developing the system; we will also show how to evolve and manage it with best practices from MLOps. We will work only in Python for the whole ML system, without any YAML or infrastructure-as-code. Our example ML system will be drawn from the free serverless machine learning course and will combine a batch ML system with an operational ML system.
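The three-pipeline decomposition described above can be sketched in plain Python. This is a minimal, illustrative sketch only: the feature transformation, "model", and variable names (`fs`, `fg`) are assumptions for demonstration, and the Hopsworks calls appear only as comments so the structure runs without an account.

```python
def feature_pipeline(raw_rows):
    """Transform raw data into features (here: a toy min-max scaling)."""
    lo, hi = min(raw_rows), max(raw_rows)
    span = (hi - lo) or 1
    features = [(x - lo) / span for x in raw_rows]
    # In the workshop, features are written to the Hopsworks feature store,
    # e.g. (illustrative, not the workshop's actual code):
    # fg = fs.get_or_create_feature_group(name="demo", version=1, primary_key=["id"])
    # fg.insert(features_df)
    return features

def training_pipeline(features, labels):
    """Read features back from the feature store and fit a model.
    Here a trivial mean threshold stands in for real training."""
    threshold = sum(features) / len(features)
    # The trained model would be saved to the Hopsworks model registry.
    return {"threshold": threshold}

def inference_pipeline(model, new_features):
    """Score new feature rows with the registered model (batch or online)."""
    return [int(x >= model["threshold"]) for x in new_features]

# Each function would be scheduled as its own serverless job on Modal,
# so the pipelines run and scale independently.
raw = [3, 7, 1, 9, 5]
feats = feature_pipeline(raw)
model = training_pipeline(feats, labels=None)
preds = inference_pipeline(model, feats)
```

Keeping the three stages as separate programs, coupled only through the feature store and model registry, is what lets each one be scheduled, tested, and versioned independently.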
During the workshop, you will learn best practices for feature engineering, model training, and batch/online inference in Python, and see how (free) serverless services can get you up and running faster. We will avoid all infrastructure (containers, Kubernetes, cloud infrastructure) and focus on the core MLOps principles: testing and versioning of ML assets.
Requirements
Laptop
Prerequisites
Attendees should be able to write code in Python in an IDE or a Jupyter notebook.
Attendees will have to pip install the following Python libraries: 'hopsworks' and 'modal', and register free accounts on the hopsworks.ai and modal.com websites.
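The setup step above amounts to a single install command (run in a virtual environment if you prefer); account registration happens in the browser on the two websites:

```shell
# Install the two client libraries used in the workshop
pip install hopsworks modal
```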
What do you need to know to enjoy this workshop?
Python level
Intermediate: you are comfortable using frameworks and third-party libraries.
About the topic
No previous knowledge of the topic is required; basic concepts will be explained.