Nowadays, startups implement machine learning code in haste, focusing on the research part and using as little engineering as possible, just to ship an MVP. When successful, those MVPs quickly evolve into services where infrastructure code is mixed with ML algorithms, use cases are buried deep in implementation details, and several slightly different re-implementations of concerns like consuming from a message broker, liveness probes or shutdown signal handling accumulate.
Keeping such a service healthy in production costs researchers a lot of time, which could be better spent on the machine learning part. Let's see where to draw the boundary between domain-specific machine learning code and use cases on one side and domain-agnostic boilerplate on the other, using the Actor model to hide the infrastructure concerns from fellow researchers.
Since it's not all roses, we'll also mention where the Actor model requires a bit of wrestling with existing libraries and frameworks (the stdlib's HTTPServer, the Prometheus client, gRPC, Alembic and Gunicorn).
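To give a flavour of the split (an illustrative sketch only, not the code shown in the talk; the Actor and SentimentScorer names are made up for the example): the domain-agnostic base actor owns the mailbox, the message loop and shutdown handling, while the researcher subclasses it with nothing but domain logic.

```python
# Illustrative sketch: a base Actor hides infrastructure concerns
# (mailbox, message loop, graceful shutdown) so a researcher only
# has to implement handle() with domain logic.
import queue
import signal
import threading


class Actor:
    """Domain-agnostic boilerplate: mailbox, message loop, graceful stop."""

    def __init__(self):
        self._mailbox = queue.Queue()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def start(self):
        # Shutdown signal handling lives in the boilerplate, not in the ML code.
        signal.signal(signal.SIGTERM, lambda *_: self.stop())
        self._thread.start()

    def tell(self, message):
        self._mailbox.put(message)

    def stop(self):
        self._mailbox.put(None)  # sentinel: drain the mailbox, then exit
        self._thread.join()

    def _run(self):
        while True:
            message = self._mailbox.get()
            if message is None:
                break
            self.handle(message)

    def handle(self, message):
        raise NotImplementedError


class SentimentScorer(Actor):
    """Domain-specific part: only the ML logic, no infrastructure."""

    def handle(self, message):
        # Placeholder for a real model call.
        print(f"scoring: {message!r}")


if __name__ == "__main__":
    actor = SentimentScorer()
    actor.start()
    actor.tell("the service was easy to keep healthy")
    actor.stop()
```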
What do you need to know to enjoy this talk
Python level
Medium knowledge: You use frameworks and third-party libraries.
About the topic
You use it or do it on a regular basis.