OpenShift on Azure
.NET Core on OpenShift
Takayoshi Tanaka @TanakaTakayoshi
Red Hat K.K. (Japan) email@example.com
This slide is available online.
I tested with OCP 3.5 and .NET Core 2.0 Preview 2, so some details may change in the latest OCP 3.6 and .NET Core 2.0.
If you have any questions or comments, feel free to contact me:
Red Hat K.K. (Japan)
◦ Software Maintenance Engineer
◦ Red Hat solutions on Azure
◦ .NET Core on RHEL
◦ Microsoft MVP for VSDT
◦ C# Lang, .NET Core on Linux
◦ Red Hat Developers
◦ Personal Blog “Silver light and Blue sky”
VSDT: Visual Studio & Development Technologies
◦ Learn about OpenShift on Azure Reference Architecture
◦ How to integrate Azure Features with OpenShift
◦ .NET Core 2.0 & integrating OpenShift features with ASP.NET Core
Document is now available
◦ Deploying Red Hat OpenShift Container Platform 3 on Microsoft Azure
OpenShift Ansible - Azure ARM Template
• ARM Template for Azure Resources (VM, LB, NW…)
• Custom Script Extension with ARM
• generate config. files & execute ansible
• Ansible Installer for OpenShift
Available only in the Azure Marketplace VM
• duplicated billing. Custom image (.vhd) is on the roadmap.
No official Red Hat support is available (self-support only)
• You should troubleshoot by yourself.
The OpenShift VM configuration is fixed
• 3 masters with etcd (same hosts), 3 infra nodes, 3+ nodes, 1 bastion
3 masters with etcd
3 infra nodes (router/docker registry)
A support request is required for increasing the CPU core limit.
This limitation is due to the design of the ARM template.
You can install all-in-one OpenShift on 1 host (not supported)
Examples: Integrating Azure Features
Azure Load Balancer
◦ master endpoint
◦ backend is a group of masters
◦ routing endpoint
◦ backend is a group of infra nodes (routers)
Azure VHD for Persistent Volume (PV)
◦ Virtual Hard Disk for Azure VM (VHD)
◦ Dynamic provisioning is available at OCP 3.5+
How does Azure VHD for PV work?
1. node service receives
Volume Mount request
2. Load azure.conf
(API auth etc)
3. (if dynamic provisioning)
Create an empty VHD
4. Mount VHD to Azure VM
5. Create filesystem if needed
6. Mount filesystem to container
Depends on the Kubernetes Azure Volume Plugin
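The flow above can be exercised with a dynamically provisioned PV. A minimal sketch (the storage class name, SKU, and location values are illustrative; check the Persistent Storage Using Azure Disk document for the exact schema in your version):

```yaml
# StorageClass for dynamic Azure Disk provisioning (OCP 3.5+)
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: azure-standard            # illustrative name
provisioner: kubernetes.io/azure-disk
parameters:
  skuName: Standard_LRS           # storage account SKU (assumption)
  location: japaneast             # must match your VMs' region (assumption)
---
# PVC requesting a volume from that class; triggers step 3 above
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: dotnet-data
  annotations:
    volume.beta.kubernetes.io/storage-class: azure-standard
spec:
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 1Gi
```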
How to configure azure.conf
See the document for more detail.
Easy 3 steps with Azure CLI 2.0
$ az account list -o json
//Retrieve tenantID & id
$ az group show --name <ResourceGroupName> -o json
//Retrieve id & location
$ az ad sp create-for-rbac --name <ResourceGroupName> --role contributor \
  --scopes "<Resource Id>" -o json
//Retrieve appId, password
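The retrieved values then go into azure.conf. A sketch of the file (key names follow the OCP 3.5/3.6 "Configuring for Azure" document; exact key casing may differ slightly between versions, so verify against the document):

```yaml
# /etc/azure/azure.conf on masters and nodes (path per the Configuring for Azure doc)
tenantId: <tenantID from `az account list`>
subscriptionId: <id from `az account list`>
aadClientId: <appId from `az ad sp create-for-rbac`>
aadClientSecret: <password from `az ad sp create-for-rbac`>
resourceGroup: <ResourceGroupName>
location: <location from `az group show`>
```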
Azure VHD for PV Notes
Managed Disk is unavailable
◦ the Kubernetes Azure Disk plugin does not support Managed Disk
Make sure the VM name matches the hostname
◦ This is also a requirement of the Kubernetes plugin
Configure DNS yourself
◦ VMs must be able to communicate with each other by VM name.
◦ If not using Azure internal DNS
◦ If using VNET peering or other
More Azure Features
Azure Active Directory Open ID Connect
◦ authentication for master
◦ LDAP integration with AAD+AAD DS or AD is also available.
Azure Blob Storage for OpenShift internal docker registry
◦ object storage is suitable for docker registry storage
Azure File Storage
◦ File storage is also available for PV
◦ Linux kernel CIFS module with SMB 3 is still experimental
Operation Management Suite integration
◦ Log Analysis also available for containers
How to set up OpenID connect?
Create Azure AD App using the Microsoft Azure portal
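With the AAD app created, the master can be pointed at it. A hedged sketch of the identity provider section of master-config.yaml, following the OCP 3.x OpenIDIdentityProvider schema (the provider name and claim mappings are illustrative; <appId>, <key>, and <tenantId> come from the AAD app and portal):

```yaml
oauthConfig:
  identityProviders:
  - name: azure_ad              # illustrative name
    challenge: false
    login: true
    mappingMethod: claim
    provider:
      apiVersion: v1
      kind: OpenIDIdentityProvider
      clientID: <appId>
      clientSecret: <key>
      claims:
        id: [sub]
        preferredUsername: [unique_name]
        name: [name]
        email: [email]
      urls:
        authorize: https://login.microsoftonline.com/<tenantId>/oauth2/authorize
        token: https://login.microsoftonline.com/<tenantId>/oauth2/token
```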
How to set up LDAP auth with AD?
Option A) AAD + AAD DS + (VNET peering or VNET-to-VNET VPN)
* AAD DS only supports Classic VNET and requires private network from ARM VNET.
[Diagram: AAD + AAD DS in the classic VNET, connected to the ARM VNET]
AAD DS configuration example
- name: "aad_ds_provider"
bindDN: "cn=adadmin,ou=AADDC Users,DC=example,DC=onmicrosoft,DC=com"
url: "ldap://XXX.XX.XX.XX/OU=AADDC Users,DC=example,DC=onmicrosoft,DC=com?
ou: AADDC Users
AAD default OU
userPrincipalName will be email
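Putting the pieces above together, the full identity provider entry might look like this. This is a sketch based on the OCP LDAPPasswordIdentityProvider schema; the bind password, server address, and the query portion of the URL are placeholders:

```yaml
oauthConfig:
  identityProviders:
  - name: "aad_ds_provider"
    challenge: true
    login: true
    mappingMethod: claim
    provider:
      apiVersion: v1
      kind: LDAPPasswordIdentityProvider
      attributes:
        id: ["dn"]
        email: ["userPrincipalName"]    # userPrincipalName becomes the email
        name: ["cn"]
        preferredUsername: ["sAMAccountName"]
      bindDN: "cn=adadmin,ou=AADDC Users,DC=example,DC=onmicrosoft,DC=com"
      bindPassword: "<password>"        # placeholder
      insecure: true                    # plain ldap:// as on the previous slide
      url: "ldap://<AAD DS IP>/OU=AADDC Users,DC=example,DC=onmicrosoft,DC=com?sAMAccountName"
```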
How to set up LDAP auth with AD?
Option B) on premise AD + VPN
Connect on premise Network and ARM Network with VPN.
[Diagram: on-premises network connected to the ARM network by VPN]
Storage Technology Comparison
• Azure Blob Storage — only available for the internal docker registry
◦ References: Extended Registry Configuration; Microsoft Azure storage driver; Deploying Your Own Private Docker Registry on Azure
• Azure VHD (filesystem) — depends on the k8s plugin
◦ References: Persistent Storage Using Azure Disk; Configuring for Azure; About disks and VHDs for Azure Linux VMs
• Azure File Storage (NFS) — depends on the k8s plugin
◦ References: Persistent Storage Using Azure File; Configuring for Azure; How to use Azure File Storage with Linux
• NFS — no Azure-specific reference (N/A); maintain it yourself or buy from a 3rd party
Operation Management Suite (OMS)
Log analysis & other features, from on-premises to cloud
The Containers (Preview) solution in Log Analytics now supports OpenShift
Installing OMS Agent
Add the OMS agent directly on the Linux host
Or install the agent as an OpenShift daemonset
Future: Windows Container?
No roadmap: Windows Container
Kubernetes has a roadmap for working with Windows Containers
The capability exists.
.NET Application Model
.NET Core Inside
.NET Core App
IL Assembly (exe, dll)
(.NET Core Runtime)
.NET Core on OpenShift
◦ “Source code in the Git repo” to “Docker image”
◦ can run outside of OpenShift
◦ parameters for simple customization
◦ more customization is available with s2i scripts
◦ Start .NET Core on OpenShift with a few clicks at the portal
◦ All in one: deploymentconfig, service, route etc…
s2i build & deploy flow
SCM(git) internal registry
$ dotnet build
$ dotnet publish
$ dotnet <dll>
.NET Core 2.0 launch starts today!
rh-dotnet supports csproj at .NET Core 2.0
◦ rpm version will be available
◦ s2i for .NET Core 2.0 & ASP.NET Core 2.0
◦ Runtime image & s2i image (s2i image only at 1.x)
More new features coming
◦ Announcing .NET Standard 2.0
◦ Announcing .NET Core 2.0
◦ Introducing ASP.NET Core 2.0
◦ Announcing Entity Framework Core 2.0
Use Case Examples
◦Schedule Jobs with .NET Core
◦Switching Configuration for Dev & Prod Environment
◦Razor Page & C# 7.1
◦Redis for HTTP Session storage with multi pods
All examples are built on .NET Core 2.0 preview.
We’re actively working on it now.
Schedule Job with .NET Core
Run .NET Core Console App as a cron job: Cron Jobs
Web portal does not support cron jobs, so use the CLI.
$ oc create imagestream cronjobexample
$ oc create -f cronjob-buildconfig.yaml
$ oc create -f cronjob.yaml
Schedule Jobs with .NET Core
schedule: '*/1 * * * *'
Command to execute: should be the full path
Command to execute: *scl should be enabled (to be fixed in my example)
Image should be specified with a full URL
OCP 3.6 will support ImageStreamTag.
Replace 172.30.142.2:5000 with your internal registry’s IP and port
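The two fixes above can be sketched in cronjob.yaml like this (the apiVersion and the scl/dotnet paths are assumptions for OCP 3.5 with the rh-dotnet collection; 172.30.142.2:5000 is the example registry address from the slide):

```yaml
apiVersion: batch/v2alpha1        # CronJob API group in OCP 3.5/3.6 (assumption)
kind: CronJob
metadata:
  name: cronjobexample
spec:
  schedule: '*/1 * * * *'
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: cronjobexample
            # full URL of the image in the internal registry
            image: 172.30.142.2:5000/myproject/cronjobexample:latest
            # full path, with the software collection enabled explicitly
            command: ["/usr/bin/scl", "enable", "rh-dotnet20", "--",
                      "dotnet", "/opt/app-root/app/app.dll"]   # paths are illustrative
          restartPolicy: OnFailure
```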
Switching Configuration for Dev & Prod Environment
How to treat different environments with one code
◦ Connect to different database
◦ Use Redis as a cache - only in a production environment
◦ Integrate with a different OpenID account
Use Environment feature in ASP.NET Core
◦ Specified by environment variables.
Configuration can be injected specific to each environment.
Switching with Environment
• Configure method
• ConfigureServices method
Can’t inject IHostingEnvironment into the ConfigureServices method
from Environment Variable
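The switching can be sketched as follows. This is a minimal illustration (method bodies and service choices are placeholder examples) relying on the documented ASP.NET Core convention of naming methods Configure{EnvironmentName}Services and Configure{EnvironmentName}:

```csharp
public class Startup
{
    public IConfigurationRoot Configuration { get; }

    // IHostingEnvironment can be injected here and into Configure,
    // but not into ConfigureServices.
    public Startup(IHostingEnvironment env)
    {
        Configuration = new ConfigurationBuilder()
            .SetBasePath(env.ContentRootPath)
            .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
            .AddEnvironmentVariables()   // e.g. values set on the pod
            .Build();
    }

    // Picked when ASPNETCORE_ENVIRONMENT=Development
    public void ConfigureDevelopmentServices(IServiceCollection services)
    {
        services.AddMvc();
    }

    // Picked when ASPNETCORE_ENVIRONMENT=Production
    public void ConfigureProductionServices(IServiceCollection services)
    {
        services.AddMvc();
        // production-only services (e.g. a Redis cache) would go here
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        if (env.IsDevelopment())
            app.UseDeveloperExceptionPage();
        app.UseMvc();
    }
}
```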
Loading configuration from Secret
Use the OpenShift Secret feature.
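A hedged sketch of that wiring (the secret name, key, and value are placeholders):

```yaml
# Secret holding a production-only setting
apiVersion: v1
kind: Secret
metadata:
  name: prod-settings                # placeholder name
stringData:
  redis-connection-string: "<host>:6379,password=<secret>"   # placeholder value
---
# DeploymentConfig fragment injecting it as an environment variable,
# which ASP.NET Core then reads via AddEnvironmentVariables()
spec:
  template:
    spec:
      containers:
      - name: app
        env:
        - name: REDIS_CONNECTION_STRING
          valueFrom:
            secretKeyRef:
              name: prod-settings
              key: redis-connection-string
```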
Razor Page + C# 7.1
◦ Simpler application than original MVC: “Page-focused scenarios”
◦ WebMatrix-like easy development
◦ Razor Page is enabled with MVC
◦ available at .NET Core 2.0 & ASP.NET Core 2.0
◦ C# 7.1 in Razor Pages does not work in Preview 2 due to a bug (see issue)
◦ It should be fixed at 2.0 RTM.
HTTP session for multi pods
◦ Sticky session: request goes to the same pod in same user session
◦ HTTP session is stored in the memory of each pod
◦ HTTP session is encrypted by pod specific key
When a pod dies, the user session is lost.
How to keep HTTP session
IDistributedCache & IDataProtection
◦ Provides a distributed cache
◦ Available for storing sessions
◦ The ASP.NET Core team provides SQL Server and Redis implementations
◦ Provides key management for encryption
◦ Encrypts the HTTP session
◦ By default, generates a machine (=pod) specific key and stores it in a local file
◦ The ASP.NET Core team provides NFS, Redis and Azure Storage (Preview) implementations
Each pod has a different key: when a request with the same session ID reaches a different pod, that pod can’t decrypt the session data (the default implementation of IDataProtection).
Configuration for Redis
public void ConfigureServices(IServiceCollection services)
{
    // You can retrieve this connection string from the Azure Portal.
    var conn = Configuration["REDIS_CONNECTION_STRING"];
    var redis = ConnectionMultiplexer.Connect(conn);
    // share the DataProtection key ring via Redis so all pods use one key
    services.AddDataProtection().PersistKeysToRedis(redis, "DataProtection-Keys");
    services.AddDistributedRedisCache(option => {
        option.Configuration = conn;
        option.InstanceName = "master";
    });
}
High Level Debugging .NET Core
*MIDE: MIDebugEngine: GitHub repository
*vsdbg can be used only with VS products and may not be redistributed.
Architecture of MIEngine
Remote Debugging .NET Core
vsdbg provided by Microsoft
◦ only trusted communication is required
◦ SSH is generally available
◦ VS remote debugger tools are also available on Windows
◦ Due to the license limitation, only VS products (VS, VS Code, VS for Mac) are
available for debugging.
* Low level debugger is provided by Red Hat
◦ No graphical debugger interface is provided
Remote debugging to
a container on OpenShift
“oc rsh” is available instead of ssh
vsdbg should be manually installed
◦ the install script doesn’t work because the s2i image lacks unzip
◦ download vsdbg locally and rsync it into the pod
see more detail in my wiki
Remote debug from Visual Studio Code
"name": ".NET Core Docker Remote Attach",
"pipeArgs": [ "rsh", "-T", "firstname.lastname@example.org"],
OpenShift on Azure
◦ Reference Architecture is a good place to start.
◦ More Azure features are available: authenticating with OpenID and others
.NET Core 2.0/ASP.NET Core 2.0 on OpenShift
◦ csproj support
◦ cronjob for .NET Core console app
◦ OpenShift Secrets & configuration, ASP.NET Core environments
◦ Remote debugging
Hello, my name is Takayoshi Tanaka.
Today I talk about Deep Dive OpenShift on Azure and .NET Core on OpenShift.
Let me give you my background.
I work for Red Hat K.K. in Japan. My position is Software Maintenance Engineer. My focus is OpenShift, Red Hat solutions on Azure, and .NET Core on RHEL.
Personally, I’m a Microsoft MVP for Visual Studio and Development Technologies. I’m interested in the C# language and .NET Core on Linux.
I write blogs on Red Hat Developers and my personal blog, “Silver light and Blue sky”.
My goals for the audience are:
learning about the OpenShift on Azure Reference Architecture,
knowing how to integrate Azure features with OpenShift,
and .NET Core 2.0, integrating OpenShift features with ASP.NET Core.
The first thing is deploying OpenShift on Azure.
Red Hat released the reference architecture document for deploying OpenShift on Azure.
Azure has services similar to OpenShift.
You can run your applications on PaaS and CaaS. CaaS means Container as a Service. OpenShift is a CaaS, and Azure Container Service and Service Fabric are also CaaS. Recently Microsoft released a new CaaS, “Azure Container Instances”.
OpenShift is based on RHEL virtual machines. We support it both on Azure Stack and the Azure public cloud.
You can install this reference architecture from this repository. This repository is an ARM template.
When you click the deploy button, you’ll see this form.
After you fill in all parameters, the install process begins.
There are three steps in installation.
At first, ARM Template creates Azure resources such as Virtual Machines, Load Balancers, Networks and so on.
Second, Custom Script Extension will generate configuration files and execute ansible playbook.
At last, this ansible installer installs OpenShift.
However, there are known issues.
This ARM template only supports the marketplace VM, which results in duplicated billing. Support for a custom VHD is on the roadmap.
This template is not officially supported, so you must troubleshoot by yourself.
Also, the VM configuration is fixed and you can’t change it: three masters with etcd on the same hosts, three infra nodes, three or more application nodes, and one bastion server.
Here is the architecture diagram. Let’s explain each component.
Here is a bastion server. This is the only VM accessible from the Internet. You will operate everything on this VM.
There are two external Azure Load Balancers. One is for OpenShift API and web portal.
The backend is master servers with Availability Sets.
Etcd is located at the same host.
The other external Azure Load Balancer is the public endpoint for accessing your applications.
The backend is infra nodes with availability set.
Router and docker-registry run on these nodes.
The last component is the set of application nodes with an availability set.
Your application pods run on these nodes.
So you have to launch at least ten VMs.
Also, you may have to make a support request to Microsoft in order to increase the CPU core limit.
This limitation is not due to OpenShift itself, but due to the design of the ARM template.
For example, OpenShift itself can be installed all-in-one on one host (though that is not supported).
This reference architecture uses Azure features to integrate with OpenShift. Availability Sets and Load Balancers were described before.
Another one is Azure VHD for Persistent Volume, PV.
It’s a little complicated how Azure VHD for PV works.
At first, this feature depends on the Kubernetes Azure volume plugin.
This plugin works as follows.
The node service receives a volume mount request. Then it loads the azure.conf file and creates an empty VHD if dynamic provisioning is enabled. Next, it mounts the VHD to the Azure VM where the node is running. If needed, it creates a filesystem. Finally, it mounts the filesystem to the container.
Sometimes, you may have to create azure.conf file manually. It’s not difficult. You can follow three steps with Azure CLI 2.0.
When you use Azure VHD for Persistent Volumes, you should pay attention to a few things.
First, Managed Disks are unavailable: the Kubernetes plugin doesn’t support Managed Disks now.
Also, your Azure VM name must be the same as the hostname.
Then, it’s not mandatory, but I recommend using Azure internal DNS, because virtual machines must be able to communicate by VM name. Azure internal DNS can do this without any configuration. If you don’t use Azure internal DNS, for example when you use VNET peering, you must configure DNS by yourself.
There are more Azure features which are not used in the reference architecture. I’ll explain some of them.
If you use Azure Active Directory, you can use it for authenticating the master API with OpenID Connect. Also, if you prefer LDAP authentication, Azure Active Directory Domain Services is available. Or you can use an on-premises Active Directory.
OpenShift has an internal docker registry. Azure Blob Storage is available for this registry’s storage. Generally speaking, object storage such as Azure Blob Storage is suitable for docker registry storage.
Azure File Storage is also available for Persistent Volumes. However, it depends on the Linux kernel CIFS module, whose SMB 3 support is still experimental.
To set up OpenID Connect, the easiest way is to create an AAD app on the Azure Portal page.
You can see the endpoints here.
Also, you should set up Reply URLs and Keys.
If you prefer LDAP authentication, you have two options.
When you want to use AAD, you can use option A. AAD itself doesn’t have an LDAP mechanism, but you can add one with AAD Domain Services.
However, AAD DS is only available for classic VNETs at this point, so you have to connect the classic VNET and the ARM VNET.
You can connect them by VNet peering or VNET-to-VNET VPN.
Here is an example configuration for LDAP authentication with AAD DS.
The organizational unit should be “AADDC Users”, which is the default value on AAD. Usually you can’t change this organizational unit as long as you use only AAD.
Also, userPrincipalName will be the value of email.
When you prefer to use your on-premises AD, you should connect it by VPN to use LDAP authentication.
There are several Azure services you can use for storage in OpenShift.
Azure Blob Storage is only available as the internal docker registry’s storage. However, this object storage is suitable for a docker registry. This feature depends on the docker registry driver provided by Docker.
The other two features both depend on Kubernetes plugins. One is Azure VHD, which I described before. The other is Azure File Storage; however, it’s experimental.
Also, you can use an external NFS server in or outside of Azure. You can build and maintain your own NFS server or use NFS as a service from a third-party vendor.
To collect and analyze logs, OpenShift provides the EFK stack. Azure also provides OMS. You can see OpenShift logs with the OMS agent. To use the OMS agent, after creating an OMS workspace, you only have to connect the VM from the Azure portal page.
You can also see the container logs in the OMS workspace. You only have to add the Containers solution in the OMS portal.
We’re sometimes asked about the future of integrating with Windows containers. However, the answer is that we have no roadmap for Windows containers. As Kubernetes is working on Windows containers, we can show a capability to run Windows containers on OpenShift. However, please note it’s not on the roadmap at this point.
Welcome back to my talk. The second part is about .NET Core on OpenShift.
ABI: Application Binary Interface
What is the reason for using .NET Core on RHEL?
One is Red Hat Software Collections. .NET Core on RHEL ships in these software collections. Usually they are included in a RHEL subscription, so you can use .NET Core without extra cost.
Also, we track bugs not only in the upstream GitHub repository but also in Red Hat Bugzilla.
Red Hat provides and supports .NET Core SDK for RHEL. RHEL Server subscription includes .NET Core.
Not only does Red Hat provide .NET Core, but OpenShift also supports .NET Core.
s2i means “source code in the git repository to docker image”. s2i can also run outside of OpenShift.
S2i requires the base docker image to run build process. Red Hat provides an official s2i image for .NET Core. It can be customized to add parameters or add scripts.
OpenShift provides template projects for .NET Core and ASP.NET Core, so you can start .NET Core and ASP.NET Core on OpenShift with a few clicks.
This is a figure how s2i works.
At first, a builder pod will clone the source code and build it. After generating the dotnet binary, the builder pod creates an application docker image with this binary and pushes the image to the internal registry.
Once the build is finished, a deployer pod will start. The deployer pod creates an application container from the pushed docker image and runs the dotnet application.
.NET Core 2.0 will be coming soon, maybe this autumn.
Red Hat .NET Core will support csproj-style build tools at .NET Core 2.0. s2i will also support .NET Core 2.0 and ASP.NET Core 2.0.
More new features are coming.
From now on, I’ll explain four use case examples. I’m now working on more examples.
When you want to execute .NET Core application as a scheduled job, you can use cron job in OpenShift.
The web portal hasn’t supported cron jobs yet, so you should create the resources from the CLI.
Here are two points to pay attention.
One is to specify the dotnet path as an absolute path.
The other is to specify the docker image with a full URL. You will be able to specify it with an ImageStreamTag at OCP 3.6.
Sometimes you want to handle different environments with one codebase. For example, connecting to a different database in development and production, using Redis as a cache only in production, or using a different OpenID account for development and production.
In this case, you can use the environment feature in ASP.NET Core, and it can be integrated with OpenShift.
This example shows how you check the current environment in the code.
The Startup class is a key class for ASP.NET Core, and it has three important members: a constructor, the Configure method, and the ConfigureServices method.
IHostingEnvironment can be injected into the constructor and the Configure method, so you can check the EnvironmentName property there.
However, as IHostingEnvironment can’t be injected into the ConfigureServices method, you can define different methods for each environment instead.
The Configure method can also be defined with environment-specific methods.
Also, when you want to pass some values from environment variables, you can use the AddEnvironmentVariables method.
To set environment variables, you can use Visual Studio or Visual Studio Code during developing on your local machine.
In OpenShift, you can set them on the portal page or from the CLI.
When you don’t want to store configuration in the repository, you can use the OpenShift Secret feature.
The application can read secret values from an external file or from environment variables.
This example shows secret values injected into environment variables, and the application reads these values from the environment variables.
Next is the Razor Pages example. This is a new feature in ASP.NET Core 2.0. Also, C# 7.1 will be available with .NET Core 2.0.
However, C# 7.1 is not available in Razor Pages due to a bug. It’ll be fixed at RTM.
Usually you will run your application with multiple pods. When you use HTTP sessions with the default configuration,
sessions are sticky thanks to the default router configuration.
The HTTP session is stored in memory and encrypted with a pod-specific key.
This means that when a pod dies, the user session is lost.
In this case, you may want to store session data externally. But you should pay attention.
There are two key points in storing HTTP session. You must configure both of them.
IDistributedCache provides a distributed cache to store sessions across multiple servers.
IDataProtection provides key management for encryption.
Redis supports both services.
When you only configure IDistributedCache to use Redis, HTTP sessions can’t be shared across pods.
This is because the encryption key is generated and stored in each pod, and these keys are different.
You must also configure DataProtection to use Redis. After doing so, the encryption key is stored in Redis and all pods use this key.
Here is an example to configure Redis.
In this example, I use the Azure Redis service. Of course, you can run Redis on OpenShift, and in that case you will get the Redis IP and port from environment variables.
Here is the inside of a debugger. On Windows, Visual Studio contains the VS Debugger and also works as a debugger frontend. On Linux, Visual Studio Code works similarly. Visual Studio Code contains a debugger called vsdbg. It is built on the open source project MIEngine (MIDebugEngine) and depends on GDB and LLDB. Even more remarkable, the debugger has the same interface on Windows and Linux, so Visual Studio can remote-debug .NET Core running on Linux, and Visual Studio Code on Linux can remote-debug .NET Core running on Windows.
Thank you for listening today. Have a great day.