All configs and commands in this blog have been verified and tested against the OpenShift 3.11 release.

OpenShift is Red Hat's container platform. It uses Kubernetes as its PaaS underlay and adds more features such as CI/CD, an app store, etc.

How to Install

Similar to Kubespray, it uses a toolbox host that has root access to all nodes and runs Ansible playbooks to install and deploy everything. A few prerequisites before installing:

  1. The master node needs at least 16GB RAM.
  2. All nodes need a registered subscription, since yum is used to install packages.
  3. An oreg user/password is required when deploying openshift-enterprise, as the Red Hat Registry requires login.
  4. An external DNS server is needed to resolve all host names; the Ansible playbooks install dnsmasq onto each host, occupying port 53 and forwarding requests from containers to the upstream external DNS server. Simply modifying /etc/hosts won't work here, as it is not mapped into the k8s service containers (see the sanity check after this list).
  5. There is a known incompatibility with Ansible 2.7; 2.6 works for sure.
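
Before going further, it is worth confirming the external DNS actually resolves every node. A minimal sanity check, assuming hypothetical host names master.example.com, node1.example.com and node2.example.com:

# dig comes from bind-utils; an empty answer means the record is missing
for host in master.example.com node1.example.com node2.example.com; do
    dig +short "$host"
done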

To install OpenShift, first download the source repo from GitHub (Openshift-Ansible) or yum install openshift-ansible.

Run nmcli con mod ens192 ipv4.dns-domain on each host to add the default search domain and point to the right upstream external DNS.
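
For example (a sketch: the connection name ens192, domain example.com and DNS server 10.0.0.2 are placeholders for your environment):

# set the search domain and upstream DNS, then re-activate the connection
nmcli con mod ens192 ipv4.dns-domain example.com
nmcli con mod ens192 ipv4.dns 10.0.0.2
nmcli con up ens192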

Then dispatch the ssh pub key from the toolbox to all nodes, so that Ansible can interact with them without a password prompt.

ssh-copy-id -i ~/.ssh/id_rsa.pub k8s-master
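
To cover the whole cluster in one go (a sketch with hypothetical host names):

# generate a key first if the toolbox does not have one yet
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
for host in k8s-master node1 node2; do
    ssh-copy-id -i ~/.ssh/id_rsa.pub "$host"
done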

Register all nodes with subscription-manager, then enable the OpenShift repos while disabling all others:

subscription-manager register --username=<user_name> --password=<password>
subscription-manager list --available --matches '*OpenShift*'
subscription-manager attach --pool=<pool_id>
subscription-manager repos --disable="*"
subscription-manager repos \
    --enable="rhel-7-server-rpms" \
    --enable="rhel-7-server-extras-rpms" \
    --enable="rhel-7-server-ose-3.11-rpms" \
    --enable="rhel-7-server-ansible-2.6-rpms"

Optionally install base packages on all hosts:

yum install wget git net-tools bind-utils yum-utils iptables-services bridge-utils bash-completion kexec-tools sos psacct

Install openshift-ansible on the toolbox; this pulls in the dependent Ansible 2.6.x at the same time:

yum install openshift-ansible

Then we can start creating our own inventory file, which includes all necessary variables that override openshift-ansible defaults. There is no need to install docker manually for now; it is installed by the prerequisites.yml playbook later on.
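
Once the inventory is ready, the install itself is two playbook runs (a sketch, assuming the inventory is saved as /etc/ansible/hosts and openshift-ansible was installed via yum, which places the playbooks under /usr/share/ansible/openshift-ansible):

ansible-playbook -i /etc/ansible/hosts \
    /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml
ansible-playbook -i /etc/ansible/hosts \
    /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml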

Play with Inventory

The following inventory would be good enough for a single master/etcd with multiple compute nodes:

# Create an OSEv3 group that contains the masters, nodes, and etcd groups
[OSEv3:children]
masters
nodes
etcd

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
# SSH user, this user should allow ssh based auth without requiring a password
ansible_ssh_user=root

# If ansible_ssh_user is not root, ansible_become must be set to true
#ansible_become=true

openshift_deployment_type=openshift-enterprise

# uncomment the following to enable htpasswd authentication; defaults to DenyAllPasswordIdentityProvider
#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]

# host group for masters
[masters]
master.example.com

# host group for etcd
[etcd]
master.example.com

# host group for nodes, includes region info
[nodes]
master.example.com openshift_node_group_name='node-config-master-infra'
node1.example.com openshift_node_group_name='node-config-compute'
node2.example.com openshift_node_group_name='node-config-compute'

openshift_node_group_name='node-config-master-infra' labels the master as 'node-role.kubernetes.io/infra=true,node-role.kubernetes.io/master=true', so that OpenShift internal infra services such as the router (ingress) can be deployed onto the master.
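
Once the cluster is up, the labels can be verified from a master node (output will vary per cluster):

oc get nodes --show-labels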

The following inventory deploys multiple masters/etcd with multiple compute and infra nodes:

# Create an OSEv3 group that contains the master, nodes, etcd, and lb groups.
# The lb group lets Ansible configure HAProxy as the load balancing solution.
# Comment lb out if your load balancer is pre-configured.
[OSEv3:children]
masters
nodes
etcd
lb

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_ssh_user=root
openshift_deployment_type=openshift-enterprise

# uncomment the following to enable htpasswd authentication; defaults to DenyAllPasswordIdentityProvider
#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]

# Native high availability cluster method with optional load balancer.
# If no lb group is defined installer assumes that a load balancer has
# been preconfigured. For installation the value of
# openshift_master_cluster_hostname must resolve to the load balancer
# or to one or all of the masters defined in the inventory if no load
# balancer is present.
openshift_master_cluster_method=native
openshift_master_cluster_hostname=openshift-internal.example.com
openshift_master_cluster_public_hostname=openshift-cluster.example.com

# host group for masters
[masters]
master1.example.com
master2.example.com
master3.example.com

# host group for etcd
[etcd]
master1.example.com
master2.example.com
master3.example.com

# Specify load balancer host
[lb]
lb.example.com

# host group for nodes, includes region info
[nodes]
master[1:3].example.com openshift_node_group_name='node-config-master'
node1.example.com openshift_node_group_name='node-config-compute'
node2.example.com openshift_node_group_name='node-config-compute'
infra-node1.example.com openshift_node_group_name='node-config-infra'
infra-node2.example.com openshift_node_group_name='node-config-infra'

Here the LB is actually HAProxy, which Ansible deploys onto host lb.example.com. From an external user's point of view, the only known endpoint is lb.example.com:8443, while the real listeners in the backend are the apiservers on each master node. This also means putting the LB and a master on the same node is not supported: once 8443 is taken by HAProxy, the port cannot be used again by the apiserver.
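
The generated /etc/haproxy/haproxy.cfg on lb.example.com looks roughly like the excerpt below (illustrative, not the exact template output; host names match the inventory above):

frontend atomic-openshift-api
    bind *:8443
    default_backend atomic-openshift-api
    mode tcp

backend atomic-openshift-api
    balance source
    mode tcp
    server master0 master1.example.com:8443 check
    server master1 master2.example.com:8443 check
    server master2 master3.example.com:8443 check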

Administration

By default Openshift-Ansible will create system:admin for oc management, but won't create any user for the OpenShift web console. The auth provider option openshift_master_identity_providers defines how users are managed; the easiest and recommended way is htpasswd, whereby a .htpasswd file is created and referenced by master-config.yml, and new users are added to this file. A user added via htpasswd is not a serviceaccount in k8s; it is only valid when logging in via oc/web. Like the admin/tenant user concept in OpenStack, the newly added user is a tenant user by default. To let it access the entire cluster's resources, system:admin needs to add this new user to the cluster-admin role.
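
To create the user, add an entry to the htpasswd file on the master first (a sketch; /etc/origin/master/htpasswd is the path commonly configured in master-config.yml, adjust if yours differs):

# htpasswd is in httpd-tools; -b takes the password on the command line
htpasswd -b /etc/origin/master/htpasswd admin <password>

Then issue the following on a master node: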

#assume we added a new admin user in .htpasswd file
oc adm policy add-cluster-role-to-user cluster-admin admin
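
To confirm the grant worked, log in as the new user and run a cluster-scoped query (a quick check, using the public hostname from the multi-master inventory above):

oc login -u admin https://openshift-cluster.example.com:8443
oc get nodes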