After successfully training an ML model with Apache Spark, the next question becomes important: how do you serve it? One option is to keep using Spark for serving as well, but sometimes that is not desirable or possible, for instance if you would like to expose the model as an HTTP service, run it in a Docker container, or use it on a mobile device. This talk explores various approaches to making a trained model portable outside of Spark.