All righty, let’s kick it off! Hi everybody. The title on the slide is Develop, Deliver, Run Oracle ADF Applications with Docker. Oracle ADF is a mature framework, and many organizations practicing ADF have their development process all set up and running. So why would we even think about changing it and incorporating Docker? Containers and the technologies related to them are pretty hot nowadays. People talk and think about them a lot. Customers have started asking whether we can deliver our software in Docker containers. On one hand they want to be able to easily try out our application; on the other hand they have started considering containerization of their environments, on premises and in the cloud. So this session is about how, and about why.
Ok, so who am I. My name is Eugene Fedorenko. I am Ukrainian, as you can hear from my accent; however, recently I relocated to the US and joined Flexagon. Perhaps someone knows me as a blogger and follows my posts on the ADF Practice blog, and probably someone follows me on Twitter. My position at Flexagon is senior architect, and I am leading the development of the company’s flagship product. This is FlexDeploy, which is a fully automated DevOps solution. Actually, that’s why these technologies got into my focus and that’s why I am so interested in them.
Ok, and this is our scope, this is what we are going to talk about. We will start by considering a sample ADF application that we are going to build and deploy across environments. We’ll discuss the idea of containers itself and what Docker containers are all about. Having done that, we will decompose our application into containers and build them both manually and with a CI tool. We will talk about K8s and consider our application as a collection of K8s resources. We are going to deploy the entire solution to OKE in the cloud. Finally, we’ll see how to build a CD pipeline to deploy the application to various K8s environments.
I was thinking about what sample application to choose for this session. And eventually I asked myself: why would I invent something artificial and unreal? I have a real system. This is FlexDeploy. We are developing, delivering and running FlexDeploy with Docker. It’s going to be my sample application, for two reasons: I’ve been there, so I really did that as a part of my work; and this is not a hello world application, this is a real, pretty big ADF application which is used by many customers.
So, what are we gonna do with our sample application? We’re gonna implement this simplified development lifecycle. The source code of the application is stored in Bitbucket. We will fetch the source code from Bitbucket, build the microservice and deploy it across environments: Dev, Test and Prod. Looks very familiar and seems to be easy, but there are some challenges to be aware of.
FlexDeploy is available for WebLogic, Tomcat and GlassFish. In this session I am going to focus on running FlexDeploy on top of Tomcat, because our customers mostly go with this option and because Tomcat is relatively lightweight. So, in order to get it working we need a war file with the FlexDeploy application, Java, the Tomcat application server, and a bunch of ADF libraries installed on Tomcat. There are a number of blog posts about running ADF Essentials applications on Tomcat. They describe how to properly build a war file and how to collect the right ADF libraries for Tomcat. Sometimes it happens that we start using a new ADF feature in our application and it fails because Tomcat does not have the corresponding library. So we need to install it, and we need to do that on every environment where the application is running. And if we look at this in general: there is the FlexDeploy application, and there is an environment with Java, Tomcat and ADF libraries. And our application depends on that environment. The SDLC involves more than one environment. In our scenario there are Dev, Test and Prod. So there is one application and many environments, which actually means there are many variations of our solution. And what works fine on Dev fails on Prod just because of different libraries, drivers or JVM.
The idea is to put everything together — the application and its environment dependencies — into one package and deliver it from Dev to Prod as a unit, as a container. This idea is as old as computer science itself. The only thing is that it has been hard to implement technically. There has always been a problem: huge overhead. Some people have done this by means of virtual machines. It works, but not well enough to become an industrial trend, to become the way to go for everyone. VMs are typically huge. Packaging, delivering and starting a VM has always been a slow process.
Fortunately, some smart guys decided to leverage Linux kernel features such as namespacing and control groups to create an isolation level on top of the host OS rather than on top of the hardware. This idea on one hand, and the level of performance of modern hardware on the other hand, brought to the world the notion of containers as we know it today. These containers are significantly more lightweight than VMs. You can run thousands of containers on a machine and it won't even blink. There have been a few implementations of this concept, but today Docker is de facto the industry standard, and when we think of containers, we actually think of Docker.
Docker is a tool designed to make it easier to create, deploy, and run containers. Once a Docker container is created, it is guaranteed that it will run on any other Linux or Windows machine, regardless of any customized settings that machine might have that could differ from the machine used for writing and testing the code. The only requirement for that machine is to have a Docker container manager installed. This ability to run everywhere is one of the greatest benefits of containers (along with isolation and encapsulation of dependencies in a container). In some way it looks similar to Java. Once a Java program is packaged it can run everywhere, on any machine with a JVM installed. So, it's true that for Java guys this container feature is not something fantastic; they are used to that.
Docker is developed by a company with the same name, and it is distributed both as Open Source under the Apache 2.0 license and as commercial versions as well.
In a way, Docker is a bit like a virtual machine. They have similar resource isolation and allocation benefits, but function differently because containers virtualize the operating system instead of the hardware. Unlike a virtual machine, rather than creating a whole virtual operating system, Docker allows applications to use the same Linux kernel as the system they're running on and only requires applications to be shipped with things not already running on the host computer. This gives a significant performance boost and reduces the size of the application. Multiple containers can run on the same machine and share the OS kernel with other containers, each running as an isolated process in user space. Containers take up less space than VMs (container images are typically tens of MBs in size), and start almost instantly.
Ok, so having clarified that, what we’re gonna do next is pack FlexDeploy into a Docker container. It’s going to be based on Ubuntu, and besides the FlexDeploy war file it will also contain Tomcat 8.5, Java 8 and ADF Essentials libraries 12.1.3. Tomcat is going to be preconfigured, and it will contain a datasource consumed by the FlexDeploy application and pointing to some JDBC url, so that the container is able to connect to a database with the FlexDeploy schema. Optionally, this database can be packed into a separate container, so the entire solution is containerized. We publish a new version of the FlexDeploy Docker image every release and every hot fix, so customers can pull it and install it on their production environment to perform their daily activities.
But besides that, there are some alternative scenarios where having our application in a container would be very helpful. Our potential customers want to easily pull and start the application to investigate the product, to perform a POC, just to get their hands dirty with the system. Our existing customers want to play with new features coming in the next release. Both of them would appreciate having a preconfigured instance of FlexDeploy for various trainings, hands-on labs and tutorials. We as developers need preconfigured and freshly built FlexDeploy instances to perform functional and load testing. The key feature in all these scenarios is the ability to easily pull the application, start it and throw it away when we are done, and that application should be shipped with some data.
For these scenarios it makes sense to put both the application itself and the database with the preconfigured schema into one container. This is just an option working perfectly for those use cases. An example of this approach, explaining how to put an ADF application, Tomcat and an Oracle XE database into one container, is available on my blog. Today we’re gonna focus on the main scenario and we’re gonna keep the database separate.
We’re going to produce a new image of FlexDeploy container on every single build. Therefore it would be nice to make this operation simple and fast. If we look at the content of the image we’ll see that there is an invariable part that never changes including OS, Java, Tomcat server and ADF libs and there is a variable part which is flexdeploy application itself. What we’re going to do is to build a base image containing the invariable part, push it to the repository and use it to build an actual image just by adding an application war file to it.
So, let’s work on the base image first. We’re going to create a folder with the following content. We’re going to download an archive with JDK 8 and unpack it in this folder. Then we’re going to download and unpack the Tomcat application server. From the OTN website we download the ADF Essentials zip files and unzip them into the Tomcat lib subfolder. In the configuration file context.xml we’re going to define a datasource pointing to a URL provided by an environment variable. This datasource will be used by the FlexDeploy application. Having this preparation work done, we can build an image with a Dockerfile which is placed in the same folder.
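A minimal sketch of such a datasource entry in context.xml — the resource name, user and pool settings here are made up for illustration. Note that Tomcat substitutes `${...}` placeholders from Java system properties, so the FLEX_DB_URL environment variable has to be passed through, e.g. via CATALINA_OPTS:

```xml
<!-- conf/context.xml: a sketch; resource name and credentials are illustrative -->
<Context>
  <Resource name="jdbc/flexdbDS" auth="Container"
            type="javax.sql.DataSource"
            driverClassName="oracle.jdbc.OracleDriver"
            url="${FLEX_DB_URL}"
            username="FD_ADMIN" password="CHANGE_ME"
            maxTotal="20" maxIdle="10"/>
</Context>
```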
A Dockerfile is a simple text file containing a set of instructions on how to build the image. It says: take a publicly available image with Ubuntu Linux and copy the jdk and tomcat subfolders from the current folder into it. Having done that, update the JAVA_HOME and PATH environment variables. The entry point of the container — meaning the command that will be executed whenever the container is initialized — is catalina run, which starts the Tomcat application server.
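A sketch of such a Dockerfile for the base image (the Ubuntu tag and the installation paths are assumptions, not the exact ones used for FlexDeploy):

```dockerfile
# Base image: Ubuntu + JDK + Tomcat (with the ADF libs already in tomcat/lib)
FROM ubuntu:16.04
COPY jdk /opt/jdk
COPY tomcat /opt/tomcat
ENV JAVA_HOME=/opt/jdk
ENV PATH=$JAVA_HOME/bin:$PATH
EXPOSE 8080
# Run Tomcat in the foreground whenever a container is started from this image
ENTRYPOINT ["/opt/tomcat/bin/catalina.sh", "run"]
```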
It is a good practice to build images based on Dockerfiles, as it is a straightforward process which can be easily automated. So, we’re gonna use a Dockerfile and build an image by invoking the docker build command. This will create an image in the local Docker repository — local meaning that it is on the machine where we invoke the build command. In order to make the image available for everyone we will push it to a cloud registry.
The next step is to push the images to the cloud. We’re going to use Docker Hub for that. Docker Hub is a cloud-based registry service which provides a centralized resource for container images. There is a public registry which is available for everyone for free, and a private registry for money. We can access Docker Hub with a command line interface via the docker search, pull, login, and push commands. So we can operate with Docker Hub from our laptop. Let’s push our images to the Hub.
So, let's build the image and push it to Docker Hub. The first command actually builds the image and saves it in the local Docker repository; with docker login I am going to log in to Docker Hub, and then push my image there with docker push. Let’s do it.
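The commands look roughly like this (the repository name `eugenef/flexdeploy_base` and the tag are made up for illustration):

```shell
docker build -t eugenef/flexdeploy_base:12.1.3 .   # build from the Dockerfile in the current folder
docker login                                       # authenticate against Docker Hub
docker push eugenef/flexdeploy_base:12.1.3         # publish the image to the registry
```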
So now the task of creating the final FlexDeploy image is getting very easy. We are going to use a simple Dockerfile which takes the flexdeploy_base image and copies a freshly built war file to the Tomcat application server. This is going to be our new image. So, let’s build it.
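The final Dockerfile can be as small as this (the base image name and the webapps path are illustrative):

```dockerfile
FROM eugenef/flexdeploy_base:12.1.3
# Add the freshly built application to Tomcat's auto-deploy folder
COPY flexdeploy.war /opt/tomcat/webapps/
```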
And now let’s create a container out of this image, so let’s run it. This command takes the latest flexdeploy image and creates a container with the name flexdeploy. The command passes into the container an environment variable FLEX_DB_URL with the provided value and tells the container to expose port 8080.
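As a sketch — the image name and the JDBC url here are just examples:

```shell
docker run -d --name flexdeploy \
  -e FLEX_DB_URL=jdbc:oracle:thin:@dbhost:1521/FLEXDB \
  -p 8080:8080 \
  eugenef/flexdeploy:latest
docker logs -f flexdeploy   # watch Tomcat start up
```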
So, according to the log it is up and running, and we can see the result. So now on my laptop I have FlexDeploy running in a container.
So, what can we do now? We can build a Docker image for our application and push it to the cloud, to Docker Hub. This is something. And now let’s think about deployment. The question is where we are going to deploy — I mean, what is going to create and run containers out of our images on these Dev, Test and Prod environments? What is it? Well, it could be a simple Docker container manager installed on some servers, or on my laptop for Dev, Test and Prod, and for demo purposes it would work. But in real life, having plenty of containerized applications and putting every single layer, every application component into a container, we get quite a few of those containers. And the number of containers tends to increase over time. We need something, some kind of tool for manipulating and managing them.
Just recently, people were relying on technologies like Docker Compose and Docker Swarm for that job; however, today there is little interest in them, so it’s not worth speaking about them. Kubernetes has become the de facto standard tool for that job today. It has been officially announced by all major players in the container world that this is the way to go, and all of them are now building their container-native cloud services on top of Kubernetes. So, this is an open source project originally born at Google. It serves as an orchestration engine focusing on how operations staff deploy and manage containers at scale. On the other hand, this is a level of abstraction hiding the complexity of your infrastructure. For you the answer to the question of where your containers are running is clear: they are running in a Kubernetes cluster. What nodes, what VMs, on premises or in the cloud are hidden behind these magic words — you don’t care. This is Kubernetes’s job. Kubernetes provides features such as configuration properties making containers able to communicate with each other and with the rest of the world. It also provides out-of-the-box load balancing, scalability, security and visibility. In other words — a cool thing!
Let’s have a look at the Kubernetes architecture. What is it all about? The core thing in the K8s platform is a cluster. All of us are familiar with the concept of a cluster, right? This is a set of physical or virtual machines, in other words nodes. In the terminology of Kubernetes there is a master node (aka admin server) and a number of worker nodes (aka managed servers). Worker nodes have a Docker engine installed so that they can run Docker containers. The internal service Kubelet runs on both master and worker nodes. It provides the mechanism of communication between the master and worker nodes.
Access to the cluster from the outer world is available only through the master node. Whenever we want to create or configure any resource in the cluster, we talk to the master using one of the available channels. It can be kubectl, which is just the K8s command line interface; it can be the REST API, providing basically the same capabilities as kubectl; or it can be the UI dashboard.
What kinds of resources can we create and configure in K8s? Let’s have a look at the most important ones. First of all, the Pod. A Pod is a logical set of containers. This is the smallest deployable and scalable unit. Even when we deploy a single container, we deploy it in a pod. On the other hand, if for some reason we want some containers that are relatively tightly coupled to be close to each other, we should put them in the same pod. Containers within a pod always run on the same node and they share the same CPU, memory and disk resources. Containers in a pod share a network namespace and they can address each other via localhost. Consider a pod as a logical VM while decomposing an application into containers and thinking about how they should be co-located in terms of deployment. In other words, whatever you would have deployed on the same host in the pre-container world, put in the same pod.
A Replica Set provides a policy on how many instances of a pod should be alive. So whenever we want to scale our applications out horizontally we do that by multiplying pods, meaning that all containers inside the pod are going to be multiplied as well.
Deployment describes a desired state of the application. Actually the deployment controller takes care of the lifespan of application pods and creates a replica set to bring up a desired number of them.
Ok, let’s say in our application there is a middleware layer consisting of a number of containers; we put them in a pod. Fine, so this pod is our middleware layer. Then we tell Kubernetes: please keep three of them alive. So, we scaled it out. Having done that, our front layer, our UI, has no idea how to communicate with the middle layer. On one hand there are three of them; on the other hand Kubernetes does not guarantee that their IP addresses will be stable, since if a pod dies on one node, Kubernetes will start it on another node to make sure there are three of them. So, what we are going to do is create another K8s resource which is called a service. It has a stable IP address and name, so the front end can rely on that, and furthermore it serves as a load balancer between pods. A service can be internal, to be used only inside the cluster, or external, to be accessible from the outer world.
Alrighty, having clarified that, let’s have a look at our application from K8s perspective. So, basically, we’re going to put FlexDeploy container into a pod. This pod is going to be accessible via a K8s service. In order to create these K8s resources in the cluster (the pod and the service) we have to create a yaml file describing them. This yaml file can be considered as a descriptor or a K8s deployment profile if you will.
So our yaml file is going to look like this. It consists of two parts. The first one describes a deployment, saying that there should be one replica of a pod flexdeploy with one container created from the flexdeploy Docker image. The container exposes port 8080. The second part describes a service which forwards all incoming traffic to the flexdeploy pod.
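A minimal sketch of that descriptor — the names, labels, image tag and service type are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flexdeploy
spec:
  replicas: 1                      # one replica of the pod
  selector:
    matchLabels:
      app: flexdeploy
  template:
    metadata:
      labels:
        app: flexdeploy
    spec:
      containers:
      - name: flexdeploy
        image: eugenef/flexdeploy:latest
        ports:
        - containerPort: 8080      # the port exposed by the container
---
apiVersion: v1
kind: Service
metadata:
  name: flexdeploy
spec:
  type: LoadBalancer               # externally accessible service
  selector:
    app: flexdeploy                # forward traffic to the flexdeploy pod(s)
  ports:
  - port: 8080
    targetPort: 8080
```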
And let’s clarify how our pod is connected to the database. An AM in our application refers to a datasource. This datasource is configured inside the container, in Tomcat’s context.xml file. The JDBC url is provided by an environment variable, and the value for that variable refers to a config map in the yaml file.
A config map is a named K8s resource that allows us to decouple configuration artifacts from image content to keep containerized applications portable. This is just a simple set of key-value pairs. And obviously those values in each K8s cluster, in each environment, are different. For example, the jdbc url can point to some standalone external database on Dev and Test, but on Prod it points to a database running inside a container on the same K8s cluster.
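A sketch of such a ConfigMap (the resource name, key and url are made up):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: flexdeploy-config
data:
  db.url: jdbc:oracle:thin:@dbhost:1521/FLEXDB
```

In the pod spec, the container would then pick it up via an env entry with `valueFrom.configMapKeyRef` (name `flexdeploy-config`, key `db.url`) to populate the FLEX_DB_URL environment variable.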
A similar approach is used when it comes to sensitive data like user names and passwords. Only in this case, instead of config maps, we use a special resource which is called a Secret. The data is encoded, and it is only sent to a node if a pod on that node requires it. It is not written to disk; it is stored in a tmpfs. It is deleted once the pod that depends on it is deleted.
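A Secret sketch for DB credentials — the resource name and the credentials themselves are made up, and the values are base64-encoded as K8s expects:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: flexdeploy-db-credentials
type: Opaque
data:
  username: RkRfQURNSU4=   # base64 of "FD_ADMIN"
  password: d2VsY29tZTE=   # base64 of "welcome1"
```

Containers reference these keys the same way as config map keys, via `valueFrom.secretKeyRef`.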
So I have configured three K8s clusters in different clouds. These are Oracle OKE, Amazon EKS and Google GKE. The clusters represent my Dev, Test and Prod environments respectively. So this is where I am going to deploy my application. The question is how. How can I communicate with these clusters from my laptop or from any other machine? As I mentioned before, we can use the REST API or we can go with a more common approach, kubectl.
So kubectl is a command line tool that should be installed on your computer. The installation process is very simple. You just download binaries for your OS and make them executable. What is not simple is configuring kubectl so that it can talk to your cluster. Every single provider has its own requirements on how you should dance with a tambourine to finally get it done. For some of them it’s not that bad (e.g. for Google and Oracle it’s pretty straightforward), but for AWS you have to install an additional tool that is invoked by kubectl for authentication. Anyway, it’s doable, it just requires a little patience. Actually, creating and configuring K8s clusters and configuring kubectl for them is totally different for every provider. But having done that you can relax; the rest is similar. The way you manipulate K8s resources in the clusters is exactly the same. Kubectl can be configured with many clusters, or contexts, so you can specify which one you are going to talk to now.
Ok, so let’s have a look at what I have on my laptop. This is my config file. There are three clusters configured.
And let’s deploy our microservice to those clusters. Since all Kubernetes resources are described nicely in the yaml file, all I need to do is switch to the corresponding cluster and apply the yaml file with kubectl.
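The whole deployment boils down to a pair of commands per environment (the context names here are illustrative; yours come from your kubectl config):

```shell
kubectl config use-context oke-dev     # switch to the Dev cluster
kubectl apply -f flexdeploy.yaml

kubectl config use-context eks-test    # switch to the Test cluster
kubectl apply -f flexdeploy.yaml

kubectl config use-context gke-prod    # switch to the Prod cluster
kubectl apply -f flexdeploy.yaml
```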
Let’s look at what we have created at the dashboard and let’s finally look at the working application.
Ok. Let’s stop for a while and think a little about what we have done. We can build a FlexDeploy Docker image and push it to Docker Hub; we can deploy it to K8s clusters in the clouds. This is outstanding. The only thing is that we are doing that manually, so we need my laptop and me for that. Well, at least my laptop. It’s not serious. It’s time to automate it. There are a number of tools available on the market to do this job, but guess which one I am going to choose.
Obviously, I am going to use FlexDeploy, since a big part of its container related functionality I have implemented myself.
Basically, we are going to implement the following diagram. In order to build and deploy our sample application we have configured a project in FlexDeploy. This project knows how to fetch its source code from Bitbucket, it knows how to be built and how to be deployed. Whenever the code is changed, the FlexDeploy CI server is triggered and starts building the project. As a result of the build process FlexDeploy builds a Docker image and pushes it to a Docker Hub repository. Once the build process is finished, FlexDeploy creates a snapshot of the application containing the yaml file referring to the Docker image in the Docker Hub repository. Having done that, FlexDeploy uses a deployment pipeline to propagate the snapshot across environments, meaning it deploys a Docker container to the K8s clusters.
In FlexDeploy all activities are performed by means of workflows. A workflow is a composition of steps where each step is an invocation of a plugin operation or another workflow. Besides that, there are conditions and loops. There is a huge number of plugins for FlexDeploy, but if you want to go beyond them you can just use the shell plugin and go with shell scripting for a specific workflow step. So if we look at the workflow building our sample ADF application, we’ll see that at first it fetches the source code from BitBucket and builds the war file. The next step, buildAndPushDockerImage, puts everything into an image and pushes it to the repository. It uses the FlexDeploy Docker plugin. So, basically it creates a Docker image with a name provided by a project property and with a tag corresponding to the project build version. The next step, prepareDeploymentProfile, creates a yaml file referring to the new Docker image. This step takes a template of a yaml file for this project and substitutes the reference to the Docker image with the exact image name. And the resulting yaml file is going to be saved as an artifact in the artifact repository.
FlexDeploy uses a deploy workflow to deploy a Docker container to a K8s cluster. The workflow consumes a yaml file produced by the build workflow as an artifact. The deploy workflow contains a single step invoking a FlexDeploy K8s plugin operation which applies the yaml file to a K8s cluster.
FlexDeploy uses three yaml files packed into a snapshot to deploy the application to Kubernetes environments with a pipeline. While a deploy workflow defines the implementation for the delivery of a single build artifact into an environment, the pipeline operates at a higher level, and orchestrates the propagation of many deployments across many stages/environments. The pipeline includes facilities for approvals, manual tasks, scheduling, test automation, service operations and many others. Basically, a pipeline defines a set of stages (environments), which contain a series of gates and steps. Each gate blocks entry into the steps until some condition is satisfied (e.g. an approval, schedule, or test results meeting some quality metric). The steps define the implementation of delivering the snapshot into the stage. After a stage successfully completes execution, it is sent to the next stage and begins executing its defined gates.
Obviously, with this approach we can build a really complex and flexible deployment process.
This approach, along with workflows and the concept of containers, allows us to easily incorporate such activities as test automation into the deployment process. For example, this pipeline step, Functional Regression Test, invokes a workflow performing automated tests. The workflow creates a FlexDeploy container out of a preconfigured image. That image consists of both the FlexDeploy application which is being tested and the database with all the data necessary for the testing scripts. Having the container started, it runs Selenium tests against the container and saves the test results; after that it removes the container so it does not consume our resources anymore. The saved test results will be automatically checked with test qualifiers at the gate to the next environment in the pipeline.
The concept of a pipeline serving as a deployment orchestrator and a K8s platform serving as an environment allow us to implement various deployment strategies like those on the slide. It’s interesting that the first two strategies on the list, Recreate and Rolling update, are supported by K8s out-of-the-box. We just need to specify for our deployment which strategy we prefer and configure the strategy parameters in the yaml file. The remaining deployment strategies on the list require human interaction, and they are implemented in combination with pipeline steps.
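Selecting and tuning the out-of-the-box strategy is a small fragment of the deployment spec; for example, a rolling update that never drops below the desired replica count (the parameter values here are just an example):

```yaml
spec:
  replicas: 3
  strategy:
    type: RollingUpdate      # or Recreate
    rollingUpdate:
      maxSurge: 1            # at most one extra pod during the rollout
      maxUnavailable: 0      # never go below the desired replica count
```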
For example, Blue/Green deployment. The implementation of this strategy is based on the ability of K8s to run multiple pods with our application simultaneously, and on the concept that business users access the application only through a service. So with a service we can control which pod or set of pods is currently in use. In K8s we are able to label resources as we want with some custom identifiers and use those labels, for example, for routing the traffic. So, here we have a pod labeled with version green. The service working as an access point for the users is configured to route the traffic only to the pods with version green. In our pipeline we deploy a new version with label version blue, so the users don’t see it so far. As a human task step in the pipeline we check that the deployment was successful and relabel the new pod with version green; having done that we remove the old pod. So now the traffic is being routed to the new pod.
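One way to sketch this switch-over from the command line — here, instead of relabeling the pod, an equivalent variant repoints the service selector; all names and labels are illustrative:

```shell
kubectl apply -f flexdeploy-blue.yaml   # deploy the new version, pods labeled version=blue
# ... verify the new version (the human task step in the pipeline) ...
kubectl patch service flexdeploy \
  -p '{"spec":{"selector":{"app":"flexdeploy","version":"blue"}}}'   # route users to blue
kubectl delete deployment flexdeploy-green   # remove the old version's pods
```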
It seems unbelievable, but that’s it. I hope this session helped you put your thoughts on containers in order and provided a better understanding of these really cool technologies. If you have any questions, go ahead, I will try to give you answers.
ADF with Docker
Develop, Deliver, Run Oracle ADF applications with Docker
•Deploy to the cloud
•A lightweight, stand-alone, executable package of a piece of software that includes everything needed to run it: code, runtime, system tools, system libraries, settings.
•Containers share the host operating system rather than the hardware
•Way more lightweight than a VM
•Docker is the most popular implementation
•Open Source platform (Google born)
•Orchestration engine for Ops
•A level of abstraction hiding the complexity of a hybrid infrastructure
• Cluster. A set of physical or virtual machines. Masters and workers.
• Node. Worker. Minion. A physical or virtual machine. Docker is installed on it.
• Kubelet. Internal service running on each worker node and managed by a master node. Makes sure that everything works according to configuration files.
• Pod. Logical set of containers. A smallest deployable and scalable unit
• Replica set. Defines how many instances of a pod should be alive
• Deployment. Creates a Replica Set to bring up a desired number of Pods
• Service. Logical set of pods with a stable ip/access rules and name. Has a lightweight internal load balancer.
• Internal, External