2. Safe Harbor Statement
This presentation contains statements which are intended to outline the general direction of certain of Pivotal's offerings. It is
intended for information purposes only and may not be incorporated into any contract. Any information regarding the
pre-release of Pivotal offerings, future updates or other planned modifications is subject to ongoing evaluation by Pivotal and is
subject to change. All software releases are on an “if and when available” basis and are subject to change. This information is
provided without warranty of any kind, express or implied, and is not a commitment to deliver any material, code, or functionality,
and should not be relied upon in making purchasing decisions regarding Pivotal's offerings. Any purchasing decisions should
only be based on features currently available. The development, release, and timing of any features or functionality described
for Pivotal's offerings in this presentation remain at the sole discretion of Pivotal. Pivotal has no obligation to update
forward-looking information in this presentation.
This presentation contains statements relating to Pivotal’s expectations, projections, beliefs, and prospects which are
"forward-looking statements” and by their nature are uncertain. Words such as "believe," "may," "will," "estimate," "continue,"
"anticipate," "intend," "expect," "plans," and similar expressions are intended to identify forward-looking statements. Such
forward-looking statements are not guarantees of future performance, and you are cautioned not to place undue reliance on
these forward-looking statements. Actual results could differ materially from those projected in the forward-looking statements as
a result of many factors. All information set forth in this presentation is current as of the date of this presentation. These
forward-looking statements are based on current expectations and are subject to uncertainties, risks, assumptions, and changes
in condition, significance, value and effect as well as other risks disclosed previously and from time to time by us. Additional
information we disclose could cause actual results to vary from expectations. Pivotal disclaims any obligation to, and does not
currently intend to, update any such forward-looking statements, whether written or oral, that may be made from time to time
except as required by law.
3. apiVersion: content.pivotal.io/v1
kind: Author
metadata:
  name: "Neven Cvetkovic"
  role: "Platform Architect"
  company: "Pivotal, now part of VMware"
  email: "nevenc@pivotal.io"
  twitter: "@nevenc"
  github: "https://github.com/nevenc"
spec:
status:
  mode: Presenting
Introduction
{
  "name": "Dieter Hubau",
  "role": "Platform Architect",
  "company": "Pivotal, now part of VMware",
  "email": "dhubau@pivotal.io",
  "twitter": "@dhubau",
  "github": "https://github.com/turbots"
}
4. Assumptions
● You are a Java developer who is (somewhat) familiar with
○ Spring Boot
○ Docker
○ Kubernetes
● You have tried deploying Spring and Spring Boot apps to Kubernetes
● You want to understand best practices for building, deploying and running
Spring and Spring Boot apps on Kubernetes
5. Agenda
■ Best practices for building
containers for Spring Boot apps
■ Best practices for configuring
Kubernetes resource limits for your
Spring Boot app (CPU and memory)
■ Spring Boot with Cloud Native
Buildpacks
■ Spring Cloud Kubernetes project is
out of scope for this webinar
6. POLL: What are you most familiar with?
● Java
● Java, Spring and Spring Boot
● Java, Spring, Spring Boot, Docker
● Java, Spring, Spring Boot, Docker, Kubernetes
● Java, Docker, Kubernetes
12. Musl libc vs. GNU libc
● “musl is a new general-purpose
implementation of the C library. It is
lightweight, fast, simple, free, and aims to
be correct in the sense of
standards-conformance and safety.”
● Used by Alpine Linux as its C Library
● Smaller than GNU libc (glibc)
● Has functional differences to glibc [1] [2]
● glibc (GNU C library) is the de facto
standard C library
● Used by Ubuntu, Debian, CentOS, RHEL,
SUSE
● Much larger than musl C library
● Large C/C++ code bases might depend on
the behaviour of GNU libc implementation
(bugs in glibc that can’t be fixed due to
backward compatibility - become features)
13. OpenJDK does not officially support musl libc
Source: https://openjdk.java.net/jeps/8229469 [3]
14. Recommendation: Use Ubuntu (glibc) Operating System
● Ubuntu with glibc is a great choice for running Java Spring based applications
● All glibc-based images are okay for running Java; avoid musl-based images
until OpenJDK officially supports musl
18. Docker Official Images* and Verified Publishers
● Docker Official Images are a curated set of Docker repositories, hosted on Docker Hub,
reviewed and published by the Docker team [5]
● All images in the Official Images repository are scanned for vulnerabilities
● Docker Verified Publishers are a curated set of Docker repositories from ISVs that are
Verified Publishers in Docker's Technology Partner Program [6]
19. OpenJDK vs AdoptOpenJDK Java distributions
● OpenJDK means a few different things
○ The OpenJDK project and its repository with the Java source code [7]
○ The OpenJDK Java distribution of binaries maintained by Oracle [8]
● AdoptOpenJDK distributions [9] are built and maintained by the community
● The AdoptOpenJDK distribution provides many benefits, such as:
○ Java versions are supported for a longer time (especially LTS versions)
○ Favourable licensing terms
○ Supported by many big vendors (e.g. Amazon, Azul, GoDaddy, IBM, Microsoft, Pivotal)
● See Matt Raible’s [10] great blog post [11] on the topic of Java SDK choices
● AdoptOpenJDK has also been added as a Docker repository (i.e. Docker Official Images) [12]
20. Recommendation: Use AdoptOpenJDK
● Use the official AdoptOpenJDK image repository at
https://hub.docker.com/_/adoptopenjdk [13]
● Upstream build of OpenJDK with no modifications
● Binaries available for many operating systems, CPU
architectures, and JDK versions
● Offers a hassle-free API for downloading binaries
● Open Quality Assurance systems (AQuA) [14]
21. AdoptOpenJDK Image Variants
● Many official and non-official image variants of AdoptOpenJDK, see details [15]
● Slim builds [16] are stripped-down JDK builds that remove functionality not typically needed while
running in a cloud: applets, fonts, debug symbols, additional charsets, Java sources, etc.
● Alpine based AdoptOpenJDK images use glibc for Java [17], [18]
● https://hub.docker.com/u/adoptopenjdk [19] has a variety of AdoptOpenJDK
images for non-official variants
23. POLL: What version of Java do you use in production?
● Java 7 or earlier
● Java 8 LTS
● Java 11 LTS
● Java 13
● I don’t know
24. Java Version History
● New Java version every 6 months
● Long Term Support (LTS) Java version
● Current LTS is Java SE 11
● Next LTS is Java SE 17 (Sep 2021)
Source: https://en.wikipedia.org/wiki/Java_version_history [20]
25. Recommendation: Use latest Java LTS version
● Use the latest Java 11 LTS if you can
● Use Java 8u191 or a later Java 8 version (more on this soon)
27. Item 1: Use AdoptOpenJDK 11 for Spring apps
● Official AdoptOpenJDK images at https://hub.docker.com/_/adoptopenjdk [21] based
on ubuntu
○ adoptopenjdk:11-jdk-hotspot ~423 MB for builds
○ adoptopenjdk:11-jre-hotspot ~225 MB for running applications
○ Based on ubuntu:18.04 official base image
● Leverage variants from https://hub.docker.com/r/adoptopenjdk/openjdk11 [22] if you
are concerned about image size or want a non-ubuntu based image (e.g. alpine)
28. Software & Support for OpenJDK, Spring, and Tomcat [23]
Pivotal’s Java™ Experts Support 24/7 Simple & Fair Pricing
31. AdoptOpenJDK Official Image Layers
Image layers from ubuntu:18.04 ~64.2 MB
Image layers from adoptopenjdk:11-jre-hotspot ~160.5 MB
Total image size of adoptopenjdk:11-jre-hotspot ~225 MB
32. App.jar in a Single Layer Docker build
FROM adoptopenjdk:11-jre-hotspot
ARG JAR_FILE=build/libs/*.jar
ADD ${JAR_FILE} app.jar
EXPOSE 8080
ENTRYPOINT ["java","-jar","/app.jar"]
33. App.jar in a Single Layer Docker build
● The app.jar size is around ~58 MB
● The total image size is ~283 MB (160.5 MB + 64.2 MB + 58 MB)
34. App.jar in a Single Layer Docker build
● Very basic single-stage Dockerfile, builds an application image by adding app.jar to
the base adoptopenjdk:11-jre-hotspot image layer, exposes port 8080 locally
● It also adds an ARG pointing to the generated application fat jar, so the build can be
parameterized with different jar file names
● Any application change results in a fat jar update, so the entire file layer in
the container image needs to be updated!
● Container image size and layering has an impact on performance:
○ Network latency (image push / pull)
○ Scaling elasticity (container creation)
○ Development agility (faster builds)
● Can we do better?
35. App.jar in a Single Layer Docker build
● Build the application binary
./gradlew build
● Build the container image
docker build -t nevenc/spring-music-k8s:with-dockerfile -f Dockerfile .
● Build the image with a JAR_FILE argument
docker build -t nevenc/spring-music-k8s:with-dockerfile
--build-arg JAR_FILE=build/libs/spring-music-1.0.jar -f Dockerfile .
● Run the image locally
docker run -it -p8080:8080 nevenc/spring-music-k8s:with-dockerfile
● Run the image on Kubernetes
kubectl create deployment spring-music
--image=nevenc/spring-music-k8s:with-dockerfile
kubectl expose deployment spring-music --port=8080 --type=NodePort
● Example code
https://github.com/nevenc/spring-music-k8s [24] [25]
HANDS-ON EXAMPLE
37. Multi-stage Docker Build with Layered Application Files
# Stage 1: Extract layers of the app
FROM adoptopenjdk:11-jdk-hotspot AS build
ARG JAR_FILE=build/libs/*.jar
ADD ${JAR_FILE} app.jar
RUN mkdir /app \
&& cd /app \
&& jar xf /app.jar
# Stage 2: Build layered container image
FROM adoptopenjdk:11-jre-hotspot
COPY --from=build /app/BOOT-INF/lib /app/lib
COPY --from=build /app/META-INF /app/META-INF
COPY --from=build /app/BOOT-INF/classes /app
VOLUME /tmp
WORKDIR /app
EXPOSE 8080
ENTRYPOINT ["java","-cp","/app:/app/lib/*","org.cloudfoundry.samples.music.Application"]
38. Multi-stage Docker Build with Layered Application Files
● Application JAR file is separated out in multiple (three different) layers
● The image size is ~283 MB (160.5 MB + 64.2 MB + 57.8 MB + 247 B + 89.7 kB)
39. Multi-stage Docker Build with Layered Application Files
● Application container image is built in two stages
○ Unpacking the application's 3rd-party libraries (jars) and application code
○ Packaging layers as a new build off of adoptopenjdk:11-jre-hotspot
● 3rd-party jars and application code are in their own separate file layers
● Upon application code change, only “thin” application layer needs to be updated
○ Resulting in faster builds and faster image push/pull from the image registry
40. Multi-stage Docker Build with Layered Application Files
● Build the application binary
./gradlew build
● Build the image with
docker build -f Dockerfile.multistage -t nevenc/spring-music-k8s:with-multistage .
● Run the image locally
docker run -it -p8080:8080 nevenc/spring-music-k8s:with-multistage
● Run the image on Kubernetes
kubectl create deployment spring-music
--image=nevenc/spring-music-k8s:with-multistage
kubectl expose deployment spring-music --port=8080 --type=NodePort
● Example code
https://github.com/nevenc/spring-music-k8s [26] [27]
HANDS-ON EXAMPLE
41. Spring Boot Layertools
● Spring Boot team has been actively adding new features to support cloud-native and
container-friendly tools, see release notes [28] for Spring Boot 2.3.0 M1
● Support for building jar files with contents separated into layers has been added to
both Maven and Gradle plugins
● The layering separates the JAR’s contents based on how frequently they will change
● Building more efficient Docker images with more frequently changing layers on top
● Layertools provides built-in support for listing and extracting layers
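As a sketch of how layering is switched on in the build at the time of Spring Boot 2.3.0.M1 (the `layered()` Gradle DSL shown here is the M1 syntax and may differ in later releases, which enable layering by default):

```groovy
// build.gradle -- enable the layered jar format so that
// "java -Djarmode=layertools -jar app.jar extract" works
// (Spring Boot 2.3.0.M1 syntax)
bootJar {
    layered()
}
```

After rebuilding, the resulting jar carries a layers index that the layertools jar mode reads when listing and extracting layers.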
42. Multi-stage Docker Build with Layertools
# Stage 1: Extract layers of the app
FROM adoptopenjdk:11-jdk-hotspot AS build
WORKDIR application
ARG JAR_FILE=build/libs/*.jar
ADD ${JAR_FILE} app.jar
RUN java -Djarmode=layertools -jar app.jar extract
# Stage 2: Build layered container image
FROM adoptopenjdk:11-jre-hotspot
WORKDIR application
COPY --from=build application/dependencies/ ./
COPY --from=build application/snapshot-dependencies/ ./
COPY --from=build application/resources/ ./
COPY --from=build application/application/ ./
EXPOSE 8080
ENTRYPOINT ["java","org.springframework.boot.loader.JarLauncher"]
43. Multi-stage Docker Build with Layertools
● The application container image is built in two stages, as before
○ Unpacking application using layertools
○ Packaging layers as a new build off of adoptopenjdk:11-jre-hotspot
● Layers are separated out based on how frequently they typically change, e.g.
○ dependencies
○ snapshot-dependencies
○ resources
○ application
● This results in even better optimization of Docker image layering for efficiency
● More details and examples on Phil Webb’s [29] blog post [30]
● Please try these new features and provide your feedback to the Spring Boot team!
44. Multi-stage Docker Build with Layertools
● Application JAR file is separated out in multiple (four different) layers
● The image size is the same ~283 MB (64.2 MB + 160.5 MB + 57.8 MB + 0 B + 45.3 kB + 276 kB)
45. Multi-stage Docker Build with Layertools
● Rebuild the application binary (with appropriate layered bootJar)
./gradlew -b build.gradle.layertools build
● List image layers
java -Djarmode=layertools -jar build/libs/spring-music-k8s-1.0.jar list
● Build the image
docker build -f Dockerfile.layertools -t nevenc/spring-music-k8s:with-layertools .
● Run the image locally
docker run -it -p8080:8080 nevenc/spring-music-k8s:with-layertools
● Run the image on Kubernetes
kubectl create deployment spring-music --image=nevenc/spring-music-k8s:with-layertools
kubectl expose deployment spring-music --port=8080 --type=NodePort
● Example code
https://github.com/nevenc/spring-music-k8s [31] [32]
HANDS-ON EXAMPLE
46. Item 2: Use unpacked Spring Boot multi-layered images
● Use multiple layers for your Spring Boot application code, based on
frequency of updates to the layer (more frequent at the top)
○ A layer for the 3rd-party libraries you include in the app; these will
likely be very similar across many builds on CI
○ You could break this layer into two (releases and snapshots)
○ Additional layers for your resources and application code
● Consider using Spring Boot layertools for easier layering of your app
47. ITEM 3
How to optimize the JVM
configuration to run in a
container?
48. One question becomes two questions!
How to get the JVM to use the
container CPU limits rather than the
host CPU limits?
How to configure the various memory
regions of the JVM to fit within the
container memory limits?
49. Host vs. Container CPU count
● In Java versions prior to 8u191 the JVM was reading the number of
CPUs on the host rather than the container.
● A JVM running in a 2-CPU container on a Kubernetes worker node with
16 CPUs would assume that it had access to 16 CPUs, causing the
following problems
○ ForkJoin threadpool incorrectly configured
○ Libraries using Runtime.getRuntime().availableProcessors() to
configure themselves would work with the wrong number of cores
● Recommendation: Use Java 8u191 or later to make sure the JVM
reads the correct number of cores available to it
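A minimal probe class (illustrative, not from the deck) makes the behaviour above visible:

```java
import java.util.concurrent.ForkJoinPool;

// Prints what the JVM believes its CPU resources are. On Java 8u191+ this
// reflects the container's cgroup CPU limits; on older JVMs it reports the
// host's CPU count, mis-sizing any thread pool derived from it.
public class CpuProbe {
    public static void main(String[] args) {
        int cpus = Runtime.getRuntime().availableProcessors();
        System.out.println("availableProcessors = " + cpus);
        // The common ForkJoin pool (used by parallel streams and default
        // CompletableFuture executors) sizes itself from the value above.
        System.out.println("commonPoolParallelism = "
                + ForkJoinPool.getCommonPoolParallelism());
    }
}
```

Running this via `docker run --cpus=2 …` on a container-aware JVM should report 2 processors; `-XX:ActiveProcessorCount=2` forces the same value regardless of limits.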
50. Kubernetes Resource Requests and Limits
● Kubernetes resource requests are used to set a minimum
amount of CPU and RAM required to run a container
● Kubernetes resource limits are used to set a maximum amount
of CPU and RAM that a container can consume
● What should these values be set to for a Spring application?
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: myapp:v1.0
    resources:
      requests:
        memory: "1024Mi"
        cpu: "1000m"
      limits:
        memory: "2048Mi"
        cpu: "2000m"
51. CPU Requests vs. Limits
● CPU requests are measured in millicores, a core is
○ 1 vCPU on GCP / Azure / AWS
○ 1 hyperthread on your own hardware
● Requests are the minimum guaranteed amount of CPU millicores
allocated to the container
● Limits are the maximum amount of CPU millicores that the
container is allowed to consume
● Kubernetes defines three quality of service classes
○ Guaranteed → requests == limits
○ Burstable → requests < limits
○ Best Effort → requests and limits not set
● For more details refer to Kubernetes documentation [33] [34]
52. Kubernetes Behaviour under CPU pressure
● Kubernetes defines three quality of service classes
○ Guaranteed → requests == limits
○ Burstable → requests < limits
○ Best Effort → requests and limits not set
● Guaranteed keeps the same CPU when the worker node CPU is
highly utilized
● Burstable CPU is taken away when the worker node CPU is
busy, but the container will at least get its requested CPU
● Best Effort containers are the first to be starved of CPU when the
worker node is busy, so that workloads with a Guaranteed or
Burstable quality of service get their share
53. JVM and Kubernetes CPU Requests and Limits
● Runtime.getRuntime().availableProcessors() returns a number of cores
that corresponds to the resource limit, not the resource requests
● On startup the JVM might require more CPU than after it’s warmed up and serving
requests
● Set CPU requests == CPU limits → we might be over-provisioning CPU to get a fast
startup time, but the JVM will know exactly how much CPU it has
● Set CPU requests < CPU limits → the JVM can burst during startup if CPU is available,
but why bound the amount of CPU at startup to the limits?
● Set CPU requests and don't set CPU limits → the JVM can potentially burst up to consume
all available CPU on the worker node, but is guaranteed the minimum requested CPU
54. Java on Kubernetes CPU Recommendations
● Measure the CPU requirements for a warmed-up instance of the JVM, in millicores
● Set CPU requests for the warmed-up JVM
● DO NOT set CPU limits for the JVM; let it burst up to the full worker node CPU
● Consider configuring -XX:ActiveProcessorCount to a number that matches the
CPU requests, if your app will launch thread pools based on available CPU
55. RAM Requests vs. Limits
● RAM requests and limits are measured in bytes
● Requests are the minimum guaranteed amount of RAM
● Limits are the maximum amount of RAM that the container can
use. Containers that use more than their limit will be
terminated.
● Kubernetes defines three quality of service classes
○ Guaranteed → requests == limits
○ Burstable → requests < limits
○ Best Effort → requests and limits not set
56. Kubernetes Behaviour under RAM pressure
● Kubernetes defines three quality of service classes
○ Guaranteed → requests == limits
○ Burstable → requests < limits
○ Best Effort → requests and limits not set
● Guaranteed keeps the same RAM when the worker node is low
on memory
● Burstable containers will be restarted and constrained to their
resource request (memory can't be taken away after it is
allocated without a process restart)
● Best Effort containers are killed when worker node RAM is low, to
free up resources for workloads that have a Guaranteed or
Burstable quality of service
57. JVM & Kubernetes RAM Requests & Limits
● Set the memory requests == memory limits
● JVM must be configured so that it does not consume more RAM than the limit across
all memory used by the JVM
○ Metaspace
○ Code cache
○ Heap
○ … etc
● Two choices to configure JVM memory consumption
○ -XX:MaxRAMPercentage=75.0 (Java 8u191 or later, Java 11)
○ Use the CloudFoundry memory calculator
https://github.com/cloudfoundry/java-buildpack-memory-calculator [35]
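A small illustrative probe (not from the deck) lists the memory regions that must collectively fit under the container limit, and the heap ceiling that `-XX:MaxRAMPercentage` controls:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

// Lists the JVM memory regions that all have to fit under the container's
// memory limit -- the heap is only one of them.
public class MemoryProbe {
    public static void main(String[] args) {
        long maxHeapMiB = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        System.out.println("Max heap (MiB): " + maxHeapMiB);
        // Metaspace, code cache, heap generations, etc.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            System.out.println(pool.getType() + " : " + pool.getName());
        }
    }
}
```

In a container with a 2048 MiB limit, running this with `-XX:MaxRAMPercentage=75.0` should report a max heap of roughly 1536 MiB, leaving the remainder for Metaspace, code cache, thread stacks, and other native memory.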
58. ITEM 3: Consider Java memory settings in containers
● Measure the CPU requirements for a warmed-up instance of the JVM, in millicores
● Set CPU requests for the warmed-up JVM
● DO NOT set CPU limits for the JVM; let it burst up to the full worker node CPU
● Consider configuring -XX:ActiveProcessorCount to match the requests if
your app will launch very large thread pools based on available CPU
● Set the memory requests == memory limits
● Use the CloudFoundry memory calculator or -XX:MaxRAMPercentage
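Put together, the recommendations above might look like this in a container spec. This is an illustrative sketch only; the values are placeholders that must come from measuring your own application:

```yaml
resources:
  requests:
    cpu: "500m"        # measured, warmed-up CPU usage (placeholder value)
    memory: "1024Mi"   # sized with the memory calculator / MaxRAMPercentage
  limits:
    memory: "1024Mi"   # memory requests == memory limits
    # no cpu limit: let the JVM burst during startup
```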
59. ITEM 4
How can we automate the container
building process in a modular,
repeatable, and secure way, at scale?
60. ITEM 4: Consider using container building automation
● Consider different ways of building the container image for your Spring app
○ Your own multi-stage Dockerfile containing separate file layers
○ Cloud Native Buildpacks [36]
○ Other container building tools, e.g. jib [37]
● Cloud Native Buildpacks are supported in Spring Boot 2.3.x
61. Building Images with Cloud Native Buildpacks
● Rebuild the application binary
./gradlew -b build.gradle.cnb build
● Build the image using cloud native buildpacks
./gradlew -b build.gradle.cnb bootBuildImage
● Run the image locally
docker run -it -p8080:8080 nevenc/spring-music-k8s:with-cnb
● Run the image on Kubernetes
kubectl create deployment spring-music --image=nevenc/spring-music-k8s:with-cnb
kubectl expose deployment spring-music --port=8080 --type=NodePort
● Example code
https://github.com/nevenc/spring-music-k8s [38] [39]
HANDS-ON EXAMPLE
62. Building Images with Jib
● Rebuild the application binary
./gradlew -b build.gradle.jib build
● Build the image using jib
./gradlew -b build.gradle.jib jib
● Run the image locally
docker run -it -p8080:8080 nevenc/spring-music-k8s:with-jib
● Run the image on Kubernetes
kubectl create deployment spring-music --image=nevenc/spring-music-k8s:with-jib
kubectl expose deployment spring-music --port=8080 --type=NodePort
● Example code
https://github.com/nevenc/spring-music-k8s [40] [41]
HANDS-ON EXAMPLE
64. Cloud Native Buildpacks (CNB) Bring Developer Productivity to K8s
Pluggable, modular tools that
translate source code into OCI
images.
● Portability via the OCI [42] standard
● Greater modularity
● Faster builds
● Run in local dev environments for faster
troubleshooting
● Developed in partnership with Heroku [43]
● CNCF project [44]
66. Learn more about Cloud Native Buildpacks
● Additional resources (videos and articles)
○ Cloud Native Buildpacks on Heroku Blog [46]
○ “CNB: Industry standard build process for kubernetes and beyond” [47]
- by Emily Casey, Pivotal now part of VMware
○ “Pack to the Future: Cloud-Native Buildpacks on k8s” - [48] [49]
- by Joe Kutner, Heroku and Emily Casey, Pivotal now part of VMware
○ “Introducing kpack - a Kubernetes Cloud Native Build Service” [50]
- by Matthew McNew, Pivotal now part of VMware
● Start exploring Cloud Native Buildpacks and provide your feedback
67. cflinuxfs3 releases: 0.92.0, 0.91.0, 0.90.0 (released 22 hours ago), 0.89.0 (6 days ago), 0.88.0, 0.87.0 (9 days ago)
openjdk CNB releases: v1.0.0-M7 (Apr 10), v1.0.0-M6 (Apr 1), v1.0.0-M5 (Jan 31), v1.0.0-M4 (Jan 16), v1.0.0-M3 (Dec 10), v1.0.0-M2 (Nov 29)
[Diagram: the Build Service consumes the application source (git: https://github.com/myapp, revision: dev), the cflinuxfs3 stack releases, and the openjdk CNB releases, producing successive builds (#1, #2, #3, latest) of the image example.com/myapp/mytag; CI/CD tools deploy the latest build]
69. Cloud Native Buildpacks Demo
● b2b app is a modular app with three app components (all built in Spring Boot)
○ b2b-accounts, b2b-confirmation, b2b-payments
● They rely on two backing services
○ redis, rabbitmq
● The build system includes
○ Concourse for driving pipelines, Kpack build system, Github Repo, Docker registry
● We will look at two use cases
○ CASE 1: Updating a component (e.g. b2b-accounts UI change)
○ CASE 2: Updating a builder with more recent Java runtime, patching all apps images
● Example code
https://github.com/turbots/b2b [51]
HANDS-ON EXAMPLE
70. ➔ Use AdoptOpenJDK 11 for Spring apps.
Consider using an ubuntu-based base image,
because OpenJDK does not officially support musl libc, only glibc.
➔ Use unpacked Spring Boot multi-layered
images. Consider layering your container image
from least to most frequently changing content.
➔ Consider Java memory settings in containers.
Configure your Kubernetes deployment with
appropriate CPU and memory request and
limits.
➔ Consider using container building tools.
Cloud Native Buildpacks provide great tooling
for creating container images at enterprise
scale.
Effective
Spring on
Kubernetes
Best Practices
71. Two additional videos not to miss!
“Spring Cloud on Kubernetes” [52]
by Ryan Baxter [53] and Alexandre Roman [54],
Platform Architects at Pivotal, now part of VMware
“Best Practices to Spring to Kubernetes
Easier and Faster” [55]
by Ray Tsang [56], Developer Advocate, Google