This document describes pipelines for deploying machine-learning models with Kubernetes (k8s) and Jenkins CI/CD. It covers both static and dynamic models: static models are trained offline and deployed as pre-trained artifacts, while dynamic models support distributed continuous training and the serving of multiple live models. The architecture combines k8s, a REST API, processing engines, and a queue system to extract features, train models, and serve predictions. Models and their metadata are stored in Guillotina, a distributed data platform. TensorFlow Serving pods serve models over gRPC. In the dynamic flow, workers are allocated, models are trained, and results are written to shared storage.
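The dynamic flow described above (a queue feeds jobs to allocated workers, which train models and write results to shared storage) can be sketched in miniature with Python's standard library. This is a conceptual illustration only, not the document's actual implementation: the job fields, the trivial "training" step, and the use of a local directory in place of the distributed storage platform are all assumptions made for the example.

```python
# Hypothetical sketch of the dynamic flow: a queue of training jobs is
# consumed by worker threads; each worker trains a model and writes the
# result to shared storage (a plain directory stands in for the
# distributed platform used in the real architecture).
import json
import queue
import threading
from pathlib import Path


def train(job):
    """Stand-in for real training: fit a trivial mean model."""
    data = job["data"]
    return {"model_id": job["model_id"], "mean": sum(data) / len(data)}


def worker(jobs: queue.Queue, storage_dir: Path):
    while True:
        job = jobs.get()
        if job is None:          # sentinel: no more work for this worker
            jobs.task_done()
            return
        model = train(job)
        # Persist the trained model to the shared storage location.
        out = storage_dir / f"{model['model_id']}.json"
        out.write_text(json.dumps(model))
        jobs.task_done()


def run_dynamic_flow(job_specs, storage_dir: Path, n_workers: int = 2):
    jobs: queue.Queue = queue.Queue()
    threads = [threading.Thread(target=worker, args=(jobs, storage_dir))
               for _ in range(n_workers)]
    for t in threads:
        t.start()
    for spec in job_specs:
        jobs.put(spec)
    for _ in threads:
        jobs.put(None)           # one stop sentinel per worker
    for t in threads:
        t.join()
```

In a real deployment the workers would be k8s pods rather than threads, the queue would be the pipeline's queue system, and the trained artifacts would be picked up from storage by the TensorFlow Serving pods.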