This document summarizes Unity's approach to building a large-scale Kubernetes infrastructure by distributing ownership of shared components across development teams. Key aspects include:
1. Dividing ~90 microservices (in ~200 repositories) among 20 development teams, with each team owning deployment of their services through shared build pipelines, Terraform modules, and Helm charts.
2. Using shared infrastructure components (cloud resources, networking, monitoring tools, and databases) that are developed and maintained through an "internal open source model" with clear ownership.
3. Standardizing on tools like Terraform, Kubernetes, Helm, Jenkins, and GitLab CI to ensure consistent environments and enable independent deployments by each team.
2. Practical examples of building a large-scale Kubernetes infrastructure, handling 50K requests/sec, by distributing development of shared components, increasing ownership and reducing bottlenecks in the development process
About me:
Rasmus Selsmark
DevOps Team Lead, Unity Ads
8. Scaling Engineering Teams
The DevOps Handbook describes three primary types of organizational structures:
● Functional-oriented: centralizing/optimizing expertise
● Matrix-oriented: combination of functional and market oriented
● Market-oriented: optimizing for fast response to customer needs; each team is responsible for feature delivery and deployment (the model at Unity)
9. Unity Ads dev teams
● Unity: 2400 employees worldwide
● Unity Ads: 140 developers in Helsinki and San Francisco offices
● 20 dev teams
● ~200 repositories
● ~90 microservices
● SRE teams in Seattle, Helsinki and Shanghai
10. Component ownership and development

Ownership layers:
● Shared Unity infrastructure: cloud infra, network, Prometheus, Terraform Enterprise
● DevOps team: common CI/build and deployment pipeline
● Shared development: Terraform modules, monitoring and alerts framework, messaging + monitoring libraries
● Dev teams: Terraform infrastructure, microservices, databases

DevOps Handbook deployment pipeline requirements:
● Automated, repeatable and predictable
● Consistent environments, by using the same deployment tools for staging and production
● Enabling easy automated self-service deployments

"Internal Open Source Model":
● Typically one team is the maintainer, though not necessarily the only developer, of a shared module
● Focus on consistency, while allowing others to contribute
● Most development is done by dev teams, not the DevOps team
11. The tools we're using to support the model of teams owning and deploying their services:
● Terraform
● Kubernetes / Helm
● Jenkins / GitLab CI
Scaling Services
12. All configuration in service repo
Keeping build- and deployment-relevant configuration in the service repo makes it visible to the team, allows independent ownership, and simplifies workflows for the team.

client/
    (...)
helm/
    prd.yaml
    stg.yaml
    values.yaml
scripts/
    build.sh
    test.sh
server/
    (...)
Dockerfile
Jenkinsfile
monitoring.yml
sonar-project.properties
13. Common build pipeline
● All services deployed using the same common build pipeline
● Using Jenkins shared libraries / GitLab CI include files:
    ○ https://jenkins.io/doc/book/pipeline/shared-libraries
    ○ https://docs.gitlab.com/ee/ci/yaml/#include
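For GitLab CI, reusing a shared pipeline definition could look roughly like this (the project path, template file name, and variable names are illustrative assumptions, not taken from the slides):

```yaml
# .gitlab-ci.yml in a service repo, including a shared pipeline definition
include:
  - project: 'devops/common-pipeline'        # hypothetical internal project
    file: '/templates/build-deploy.yml'

# Service-specific configuration only; the build/deploy logic lives in the include
variables:
  DEPLOY: "helm"
  DEPLOY_PROD: "true"
```

This mirrors the Jenkins shared-library approach: the service repo carries only configuration, while the pipeline logic is maintained centrally.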
Common build pipeline logic example:

if (config.deploy_prod == "true") {
    stage('Deploy to production') {
        print "Check if we can auto-deploy for ..."
        if (config.skipConfirmForUsers != null && ...) {
            print "Skip deployment confirmation"
            deployToProduction()
        } else {
            timeout(time: 1, unit: 'HOURS') {
                userInput = input(message: 'Deploy?', ...)
                if (userInput) {
                    deployToProduction()
                }
            }
        }
    }
}
Script {
    deploy = "helm"
    deploy_prod = "true"
    staging_env = ["ads-gke-stg"]
    production_env = ["ads-gke-prd"]
}

Jenkinsfile for a service repo, containing only configuration relevant to build/deploy, no code
15. Helm and Kubernetes
● https://helm.sh - "The package manager for Kubernetes"
● Abstracts the complexity of Kubernetes manifests for dev teams, who only specify parameters relevant for deploying their service
● Helm templates stored in a central repository, maintained by the DevOps team
● Helm config stored in the service repo, i.e. with the application code
● Shared "unity-common-chart" chart, hosted on an internal ChartMuseum (https://github.com/helm/chartmuseum) repository
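As a sketch of how a service repo could consume the shared chart (the file layout, version constraint, and repository URL below are assumptions in Helm v2 style, not taken from the slides):

```yaml
# helm/requirements.yaml in a service repo (hypothetical example)
dependencies:
  - name: unity-common-chart
    version: "~1.0"
    repository: "https://charts.internal.unity.example"   # internal ChartMuseum instance
```

Running `helm dependency update` would then fetch the shared chart from the internal ChartMuseum before deployment, so every service renders its manifests from the same templates.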
16. Helm chart example and usage

unity-common-chart/templates/deployment.yaml:

{{- if .Values.enableDeployment -}}
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  namespace: {{ .Values.namespace }}
  name: {{ .Chart.Name }}
  labels:
    chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
    app: {{ .Chart.Name }}
    environment: {{ .Values.environment }}
    productgroup: {{ .Values.productgroup }}
spec:
  progressDeadlineSeconds: {{ .Values.deployment.progressDeadlineSeconds }}
(...)
17. Service helm usage
Using the Helm template ensures consistent naming across services, which would be harder to achieve if teams maintained their own individual Kubernetes manifest files.

environment: stg
deployment:
  replicas: 2
  resources:
    requests:
      cpu: 0.1
      memory: 256Mi
    limits:
      cpu: 0.1
      memory: 512Mi
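To illustrate how such values flow through the shared template, a manifest rendered for a staging deployment might look roughly like this (the service name, namespace, and chart version are placeholders; the exact rendered output depends on the full chart, which the slides elide):

```yaml
# Hypothetical output of rendering unity-common-chart with the stg values above
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  namespace: sample-namespace      # placeholder
  name: sample-service             # placeholder, from the chart name
  labels:
    app: sample-service
    environment: stg
spec:
  replicas: 2
  template:
    spec:
      containers:
        - name: sample-service
          resources:
            requests:
              cpu: 0.1
              memory: 256Mi
            limits:
              cpu: 0.1
              memory: 512Mi
```

Because every service goes through the same template, labels and names follow one convention, which keeps dashboards and alerts uniform across teams.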
18. Learnings
● We have found a model which works well for us, in terms of both the organizational and the technical implementation of our service ownership model
● Don't underestimate the task for dev teams to own their infrastructure. Teams need support from the organization
19. Generative Art – Made with Unity
Thank you!
Rasmus Selsmark
rasmus@unity3d.com
https://careers.unity.com/location/helsinki