This presentation discusses dockerizing machine learning models. It shows how a trained ML model saved in .pkl format can be served over an HTTP REST API by a WSGI HTTP server running inside a container. The container also runs an HTTP server that connects to a database backend, and a tunnel lets user requests reach the container and interact with the ML model.
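The core idea, serving a pickled model through a WSGI application, can be sketched as follows. This is a minimal illustration, not the presenter's actual code: the `DummyModel` class, the `/predict` route, and the JSON request shape are all assumptions made for the example; in practice the model would be loaded from a real .pkl file and the app run under a production WSGI server (e.g. gunicorn) inside the container.

```python
import json
import pickle
from wsgiref.simple_server import make_server  # stdlib WSGI server, for local testing

class DummyModel:
    """Hypothetical stand-in for a trained estimator with a predict() method."""
    def predict(self, features):
        return [sum(row) for row in features]

# In a real deployment you would load the serialized model instead:
#   with open("model.pkl", "rb") as f:
#       model = pickle.load(f)
# Here we round-trip a dummy through pickle just to mirror that flow.
model = pickle.loads(pickle.dumps(DummyModel()))

def application(environ, start_response):
    """WSGI app: POST JSON {"features": [[...], ...]} to /predict, get predictions back."""
    if environ["REQUEST_METHOD"] == "POST" and environ["PATH_INFO"] == "/predict":
        size = int(environ.get("CONTENT_LENGTH") or 0)
        payload = json.loads(environ["wsgi.input"].read(size))
        preds = model.predict(payload["features"])
        body = json.dumps({"predictions": preds}).encode("utf-8")
        start_response("200 OK", [("Content-Type", "application/json")])
        return [body]
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"not found"]

# To serve locally: make_server("", 8000, application).serve_forever()
# In a container, a production server would typically run it instead, e.g.:
#   gunicorn --bind 0.0.0.0:8000 app:application
```

Because a WSGI app is just a callable, it can be exercised directly with a synthetic request, which is also how the route would be unit-tested before being baked into a Docker image.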