Recently, I got the opportunity to work on an OpenShift cluster on the Microsoft Azure cloud platform, and it turned out to be a pretty interesting task. I thought I'd put up an article for my fellow readers to share my experience, since it was a quick and easy process. I remember having a similar experience with the CoreOS Tectonic platform a few years back; RedHat later integrated Tectonic into OpenShift to deliver the next-generation OpenShift Container Platform to its customers. I should mention that this article is not about Azure RedHat OpenShift (ARO), the managed service provided by Microsoft.
Prerequisites
You need to have a few prerequisites in place before starting the deployment:
- A RedHat Cloud account (https://cloud.redhat.com/)
- The downloaded installer
- An installer configuration (install-config) file
- An Azure service principal ID and secret, tenant ID, and subscription ID
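On the service principal: it can be created with the Azure CLI (`az ad sp create-for-rbac --role Contributor`), and the installer asks for its details on the first run, caching them in a small JSON file under `~/.azure/osServicePrincipal.json`. As a minimal sketch of that file's shape (all IDs below are placeholders, and I write to /tmp purely to illustrate and validate the format):

```shell
# Placeholder values -- substitute your real service principal details.
# The installer caches these under ~/.azure/osServicePrincipal.json;
# /tmp is used here only to demonstrate the format.
cat > /tmp/osServicePrincipal.json <<'EOF'
{
  "subscriptionId": "00000000-0000-0000-0000-000000000000",
  "clientId": "11111111-1111-1111-1111-111111111111",
  "clientSecret": "your-sp-secret",
  "tenantId": "22222222-2222-2222-2222-222222222222"
}
EOF
# Quick sanity check that the file is well-formed JSON.
python3 -m json.tool /tmp/osServicePrincipal.json > /dev/null && echo "credentials file OK"
```

Pre-creating this file means the installer can pick up the credentials without prompting on subsequent runs.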
If you don’t have a RedHat cloud account, you can create one easily to download all the necessary tools and cluster configurations. It lets you run a RedHat OpenShift platform for 60 days as an evaluation. I was pretty impressed with the offerings they provide for deploying the cluster, which support multiple cloud vendors as well as bare-metal installation for on-prem datacenters.
In my scenario, I wanted to deploy this cluster on Microsoft Azure, so I selected Azure as the installation platform.

The next step is quite important, as it determines how the infrastructure is going to be provisioned. Basically, two options are available.
- Installer-provisioned Infrastructure – the infrastructure will be provisioned by the installer software
- User-provisioned Infrastructure – install on pre-existing infrastructure
I didn’t have pre-existing infrastructure for my cluster, so I used the Installer-provisioned Infrastructure option for this deployment.

In this step, the installer needs to be downloaded, and the image pull secret should be downloaded or copied as well. We will use this image pull secret in our deployment file.

The next step is to generate our installer configuration or directly deploy the cluster on the Azure Cloud. But I needed to customize my deployment: the default deployment provisions Standard_D8s_v3 nodes with 8 vCPUs and 32 GB of RAM each, which would directly impact my cloud bill. I had to customize the installation to my requirements.
Generate an Installer Configuration and SSH Keys
I downloaded the installer and created a folder named “cluster”, dedicated to the files required for the deployment.
Creating SSH keys
We need an SSH key for this deployment. To create the keys, execute the command below:
ssh-keygen -t ed25519 -N '' -f <path to the file>
Then add the SSH key to the agent:
ssh-add <path to the file>
Note: if the ssh-agent is not running, start it with the command below before adding the key:
eval "$(ssh-agent -s)"
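Put together, the key steps above look like this (the key path here is just an example):

```shell
# Generate an ed25519 key with no passphrase at an example path.
KEY=/tmp/openshift_azure_ed25519
rm -f "$KEY" "$KEY.pub"     # ssh-keygen refuses to overwrite an existing key
ssh-keygen -t ed25519 -N '' -f "$KEY" -q

# Start the agent if needed, then add the key.
eval "$(ssh-agent -s)" > /dev/null
ssh-add "$KEY"
ssh-add -l                  # list loaded keys to confirm
```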

Let’s generate the installer configuration.
Generating the Installer Configurations
The command below will generate the required configuration file for the cluster deployment:
./openshift-install create install-config
It will ask you to select an SSH key; make sure to select the correct one.

Then provide the cloud provider; in my case it was Azure.

Provide the necessary details in the next steps, along with the image pull secret we copied from the RedHat Cloud account.

The generated file is a generic configuration file for the cluster implementation. But my requirements were not exactly the same, so I had to customize the file. Below is the customized sample file.
apiVersion: v1
baseDomain: dumbdomain.dumb.com
compute:
- architecture: amd64
  hyperthreading: Enabled
  name: worker
  platform:
    azure:
      type: Standard_D2s_v3
      osDisk:
        diskSizeGB: 1024
  replicas: 2
controlPlane:
  architecture: amd64
  hyperthreading: Enabled
  name: master
  platform:
    azure:
      type: Standard_D4s_v3
      osDisk:
        diskSizeGB: 1024
  replicas: 3
metadata:
  creationTimestamp: null
  name: openshift-cluster
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  azure:
    baseDomainResourceGroupName: rg-base-domain-resource-group
    cloudName: AzurePublicCloud
    outboundType: Loadbalancer
    region: westeurope
publish: External
pullSecret: '{"auths":{"cloud.openshift.com":{"auth":"dumbvalue","email":"youremail@dot.com"}}}'
sshKey: |
  ssh-ed25519 SSHKeyID-GeneratedFORTHISDEPloYment
The significant changes are the node types, OS disk sizes, and the replica counts of the master and worker nodes.
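As a quick back-of-the-envelope check of what this customization buys, we can total up the cluster's capacity (the per-VM figures are Azure's published specs: Standard_D4s_v3 = 4 vCPU/16 GB, Standard_D2s_v3 = 2 vCPU/8 GB):

```shell
# Total capacity of the customized cluster: 3 masters + 2 workers.
masters=3; workers=2
master_vcpu=4; master_ram=16   # Standard_D4s_v3
worker_vcpu=2; worker_ram=8    # Standard_D2s_v3
echo "total: $(( masters*master_vcpu + workers*worker_vcpu )) vCPU, $(( masters*master_ram + workers*worker_ram )) GB RAM"
# -> total: 16 vCPU, 64 GB RAM
```

Compare that with the default Standard_D8s_v3 nodes at 8 vCPU/32 GB each, and the cost difference on the monthly bill is obvious.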
One important thing I’d like to highlight is the minimum node counts: there must be at least 3 controlplane (in other words, “master”) nodes and at least 2 “worker” nodes. I foolishly overlooked this in the documentation and ended up with the error below in my first customized deployment.
ERROR Cluster operator authentication Degraded is True with APIServerDeployment_UnavailablePod::IngressStateEndpoints_MissingSubsets::OAuthServerConfigObservation_Error::OAuthServiceCheckEndpointAccessibleController_SyncError::OAuthServiceEndpointsCheckEndpointAccessibleController_SyncError::RouterCerts_NoRouterCertSecret: OAuthServiceCheckEndpointAccessibleControllerDegraded: Get "https://172.30.210.95:443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
ERROR OAuthServiceEndpointsCheckEndpointAccessibleControllerDegraded: oauth service endpoints are not ready
ERROR RouterCertsDegraded: secret/v4-0-config-system-router-certs -n openshift-authentication: could not be retrieved: secret "v4-0-config-system-router-certs" not found
ERROR OAuthServerConfigObservationDegraded: secret "v4-0-config-system-router-certs" not found
ERROR IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server
ERROR APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is crashlooping in apiserver-55985b6dd7-j4nxn pod)
INFO Cluster operator authentication Available is False with APIServerDeployment_NoPod::APIServices_PreconditionNotReady::OAuthServiceCheckEndpointAccessibleController_EndpointUnavailable::OAuthServiceEndpointsCheckEndpointAccessibleController_EndpointUnavailable: OAuthServiceEndpointsCheckEndpointAccessibleControllerAvailable: Failed to get oauth-openshift enpoints
So, it was a good learning point: make sure to read the documentation first and understand the requirements.
Once the prerequisites were fulfilled, we were good to go with the implementation. I created the cluster with the command below:
./openshift-install create cluster --dir=cluster --log-level=debug
I wanted to see all the output in my terminal, so I set the log-level value to “debug” instead of “info”. If you follow along with the RedHat documentation, you might see the log level set to “info”, which only outputs informational messages.
I personally wanted to use the Azure Cloud Shell for this implementation, as my local internet connection was having some hiccups and I didn’t want to re-run the command because of connectivity issues.
Also, in Safari the Cloud Shell timed out after 20 minutes even while I was actively using it, but I can confirm that Firefox did a great job: I was able to complete everything in Azure Cloud Shell without any issues, and it simply didn’t time out while I was using it.
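If you're worried about a dropped session killing the installer mid-deployment, one defensive pattern is to detach the long-running command with nohup and follow the log file instead. A sketch (`run_in_background` is just a helper name I made up for illustration):

```shell
# Detach a long-running command so a dropped terminal session
# (e.g. a Cloud Shell timeout) does not kill it.
run_in_background() {
  nohup "$@" > install.log 2>&1 &
  echo "started pid $!"
}

# Against the real installer it would be invoked like:
#   run_in_background ./openshift-install create cluster --dir=cluster --log-level=debug
#   tail -f install.log
run_in_background sleep 1   # harmless stand-in to demonstrate the pattern
wait
```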
It took about 40 minutes to complete the entire configuration, and I was able to access my OpenShift cluster with the given temporary kubeadmin username and password. The installer provided a “KUBECONFIG” file for cluster authentication, and I downloaded the oc command-line utility to access my cluster.

The cluster was accessible through my Azure Cloud Shell.

This cluster will be registered under your RedHat cloud account, and you can verify it with the cluster ID, using the command below and the cloud user interface.
oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{"\n"}'

You can assign licenses and transfer ownership via the cloud console.

So, my OpenShift cluster on Microsoft Azure was successfully created. There is more interesting stuff coming up; let’s look at it in future posts.