Magnum is the container cluster orchestration tool for OpenStack; it uses Heat to deploy and monitor clusters. The workflow is roughly: a Python script loads the cluster request –> injects it into Heat templates –> starts building VMs –> runs conditional actions in shell –> builds all the nodes.

Prerequisites

A few setup steps need to be done before using Magnum:

  1. The node image needs to have the property `os_distro` set: Fedora Atomic requires os_distro=fedora-atomic and CoreOS needs os_distro=coreos.
  2. Cinder needs to have a volume_type defined before Magnum starts:
    openstack volume type create standard
    openstack volume type set standard \
      --property volume_backend_name=rbd-1
    

The volume_backend_name value is pre-defined in cinder.conf. Another way to define the volume_type is to hardcode it in cinder.conf:

[DEFAULT]
# Default volume type to use (string value)
default_volume_type = standard

But this way the volume_backend_name property is not set.

  3. magnum.conf needs a [cinder] section to indicate which volume_type in cinder will be used by default for docker storage:

    [cinder]
    # From magnum.conf
    # The default docker volume_type to use for volumes used for docker storage. To
    # use the cinder volumes for docker storage, you need to select a default
    # value. (string value)
    default_docker_volume_type = standard
    
  4. The CoreOS OpenStack image has some default environment parameters that work with Heat to manage the k8s deployment. A brief example of default values is attached here; it can be found at /etc/sysconfig/heat_param on the nodes:

    KUBE_API_PUBLIC_ADDRESS="10.240.105.11"
    KUBE_API_PRIVATE_ADDRESS="10.0.0.7"
    KUBE_API_PORT="6443"
    KUBE_NODE_PUBLIC_IP="10.240.10x.46"
    KUBE_NODE_IP="10.0.0.4"
    KUBE_ALLOW_PRIV="true"
    DOCKER_VOLUME="ca3e952d-2748-401a-9739-8623da8b1e61"
    DOCKER_VOLUME_SIZE="10"
    DOCKER_STORAGE_DRIVER="$DOCKER_STORAGE_DRIVER"
    NETWORK_DRIVER="flannel"
    FLANNEL_NETWORK_CIDR="10.100.0.0/16"
    FLANNEL_NETWORK_SUBNETLEN="24"
    FLANNEL_BACKEND="udp"
    PORTAL_NETWORK_CIDR="10.254.0.0/16"
    ADMISSION_CONTROL_LIST="NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota"
    ETCD_DISCOVERY_URL="https://discovery.etcd.io/176c102518c6850d89d64de7b0100b34"
    USERNAME="$USERNAME"
    PASSWORD="$PASSWORD"
    TENANT_NAME="$TENANT_NAME"
    CLUSTER_SUBNET="d0c24b1e-1831-4783-8225-f68fa75fac66"
    TLS_DISABLED="False"
    VERIFY_CA="True"
    CLUSTER_UUID="58b4e63f-b4b0-463e-a34f-0ea92f934bb0"
    MAGNUM_URL="https://cbopenstack.ca:9511/v1"
    HTTP_PROXY=""
    HTTPS_PROXY=""
    NO_PROXY=""
    WAIT_CURL="curl -i -X POST -H 'X-Auth-Token: e802e6f1806f4284a60996c6d1934278' -H 'Content-Type: application/json' -H 'Accept: application/json' https://cbopenstack.tdlab.ca:8004/v1/19c5e8fc2e7246d7aa663e456eacd264/stacks/k8s-1-vlafso6vcisa-kube_masters-nagckfnxtxsf-1-4hko3h7evkit/be81245a-483d-42e1-b9d3-d763c117f587/resources/master_wait_handle/signal"
    KUBE_VERSION="v1.10.3_coreos.0"
    TRUSTEE_USER_ID="e851131f36aa43648aa327cd2d419d56"
    TRUSTEE_PASSWORD="BVG3fcBr2w9vyVpMuj"
    TRUST_ID=""
    AUTH_URL="https://cbopenstack.ca:5000/v3"
    INSECURE_REGISTRY_URL=""
    SYSTEM_PODS_INITIAL_DELAY="30"
    SYSTEM_PODS_TIMEOUT="5"
    KUBE_CERTS_PATH="/etc/kubernetes/ssl"
    HOST_CERTS_PATH="/usr/share/ca-certificates"
    HYPERKUBE_IMAGE_REPO="quay.io/coreos/hyperkube"
    CONTAINER_RUNTIME="docker"
    ETCD_LB_VIP="10.0.0.12"
    KUBE_DASHBOARD_ENABLED="True"
    KUBE_DASHBOARD_VERSION="v1.8.3"
    DNS_SERVICE_IP="10.254.0.10"
    DNS_CLUSTER_DOMAIN="cluster.local"
    
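The heat_param file above is just shell-style `KEY="value"` assignments, which makes it easy to inspect programmatically. A minimal sketch (a hypothetical helper, assuming that file format) for loading it into a dict:

```python
# Minimal sketch: parse a heat_param style file (KEY="value" lines)
# into a dict for inspection. shlex handles the shell-style quoting.
import shlex

def load_heat_params(text):
    """Parse KEY="value" lines into a dict, skipping blanks and comments."""
    params = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        # shlex.split strips the surrounding shell quotes
        params[key] = shlex.split(value)[0] if value else ""
    return params

sample = '''
KUBE_API_PORT="6443"
KUBE_ALLOW_PRIV="true"
NETWORK_DRIVER="flannel"
'''
print(load_heat_params(sample)["KUBE_API_PORT"])  # 6443
```

Running this against the real file on a node is a quick way to check which values Heat actually injected.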

These parameters can be overridden by modifying the labels section in the Magnum cluster template, e.g.:

labels = "kube_tag=1.11.1,kube_dashboard_enabled=true,prometheus_monitoring=true,influx_grafana_dashboard_enabled=true"
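The labels value is a flat comma-separated string of key=value pairs. A small sketch (a hypothetical helper, not Magnum's own parser) of how such a string maps to individual labels:

```python
# Minimal sketch: split a Magnum labels string into a dict of labels.
def parse_labels(labels):
    result = {}
    for pair in labels.split(","):
        key, _, value = pair.partition("=")
        result[key.strip()] = value.strip()
    return result

labels = ("kube_tag=1.11.1,kube_dashboard_enabled=true,"
          "prometheus_monitoring=true,influx_grafana_dashboard_enabled=true")
print(parse_labels(labels)["kube_tag"])  # 1.11.1
```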

The available label parameters are defined in the label_list section of /magnum/drivers/heat/k8s_template_def.py and /magnum/drivers/heat/k8s_coreos_template_def.py. The defaults in /magnum/drivers/heat/k8s_template_def.py are:

    # /magnum/drivers/heat/k8s_template_def.py
    label_list = ['flannel_network_cidr', 'flannel_backend',
                  'flannel_network_subnetlen',
                  'system_pods_initial_delay',
                  'system_pods_timeout',
                  'admission_control_list',
                  'prometheus_monitoring',
                  'grafana_admin_passwd',
                  'kube_dashboard_enabled',
                  'etcd_volume_size',
                  'cert_manager_api',
                  'ingress_controller',
                  'ingress_controller_role',
                  'kubelet_options',
                  'kubeapi_options',
                  'kubeproxy_options',
                  'kubecontroller_options',
                  'kubescheduler_options',
                  'influx_grafana_dashboard_enabled']

  To add a new parameter such as `kube_version`, the user can modify `k8s_coreos_template_def.py` and add `kube_version` to the `label_list`. In a kolla deployment these files are installed and copied to `/var/lib/kolla/venv/lib/python2.7/site-packages/magnum/`.
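The effect of `label_list` can be sketched as a filter: only labels whose names appear in the list are forwarded to Heat as template parameters, which is why a new label is silently ignored until it is added to the list. A simplified illustration (not the actual Magnum code):

```python
# Simplified illustration of how label_list gates which cluster labels
# become Heat template parameters (not the actual Magnum implementation).
label_list = ['flannel_network_cidr', 'flannel_backend',
              'kube_dashboard_enabled', 'prometheus_monitoring']

def labels_to_heat_params(labels, label_list):
    """Forward only the labels that are declared in label_list."""
    return {k: v for k, v in labels.items() if k in label_list}

labels = {'kube_dashboard_enabled': 'true',
          'kube_version': 'v1.13.1'}        # not in label_list yet
params = labels_to_heat_params(labels, label_list)
print(params)  # kube_version is dropped until it is added to label_list
```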

  The source code that controls all these parameters can be found in the Magnum source at `/magnum/drivers/k8s_coreos_v1/templates/kubecluster.yaml`. As of the Rocky release, the defaults for a few important ones are:
  fixed_network_cidr:
    type: string
    description: network range for fixed ip network   
    default: 10.0.0.0/24

  portal_network_cidr:
    type: string
    description: >
      address range used by kubernetes for service portals
    default: 10.254.0.0/16

  kubernetes_port:
    type: number
    description: >
      The port which are used by kube-apiserver to provide Kubernetes
      service.
    default: 6443

  kube_tag:
    type: string
    description: tag of the k8s containers used to provision the kubernetes cluster #will be combined with ${CONTAINER_INFRA_PREFIX:-docker.io/openstackmagnum/} to deploy kube core services
    default: v1.9.3

  etcd_tag:
    type: string
    description: tag of the etcd system container
    default: v3.2.7

  flannel_tag:
    type: string
    description: tag of the flannel system containers
    default: v0.9.0

  kube_version:
    type: string
    description: version of kubernetes used for kubernetes cluster
    default: v1.10.3_coreos.0 

  kube_dashboard_version:
    type: string
    description: version of kubernetes dashboard used for kubernetes cluster
    default: v1.8.3

  hyperkube_image:
    type: string
    description: >
      Docker registry used for hyperkube image  #control where to download k8s
    default: quay.io/coreos/hyperkube   

How To Use

Users can interact with either the magnumclient or the `openstack coe` commands to control clusters. Another option is to use Terraform, which provides a simple deployment shortcut in an Infrastructure-as-Code way:

resource "openstack_containerinfra_clustertemplate_v1" "kubernetes_template" {
  name                  = "kubernetes_template"
  image                 = "coreos_prod"
  coe                   = "kubernetes"
  flavor                = "m1.small"
  master_flavor         = "m2.medium"
  docker_storage_driver = "overlay2"
  docker_volume_size    = 10
  volume_driver         = "cinder"
  network_driver        = "flannel"
  server_type           = "vm"
  external_network_id   = "${data.openstack_networking_network_v2.External.id}"
  master_lb_enabled     = true
  floating_ip_enabled   = true
  labels = {
    hyperkube_image        = "gcr.io/google-containers/hyperkube"
    kube_version           = "v1.13.1"
    kube_tag               = "1.11.2"
    kube_dashboard_enabled = "false"
  }
}

resource "openstack_containerinfra_cluster_v1" "k8s_1" {
  name                 = "k8s_1"
  cluster_template_id  = "${openstack_containerinfra_clustertemplate_v1.kubernetes_template.id}"
  flavor               = "m1.small"
  master_flavor        = "m2.medium"
  master_count         = 3
  node_count           = 5
  keypair              = "Lab"
}

With the tf file above, you will create a cluster with 3 master nodes and 5 minions, using images from the Google public registry gcr.io/google-containers/hyperkube:v1.13.1. The coreos_prod VM image was pre-uploaded with os_distro=coreos set.

Issues

Magnum itself, as a project under the OpenStack big tent, has some issues and weaknesses:

  1. Using Heat is good for cluster scaling, but Heat has a file size limitation when injecting cloud-init into Nova instances. This causes a lot of trouble when deploying k8s through Heat, as you are likely to need a fairly large amount of file content when injecting manifest files.
  2. It takes too long to encapsulate templates into cloud-init files.
  3. The infrastructure can't be changed once deployed. Since Heat deploys everything, once your network is deployed you can't change it at all.
  4. CoreOS is good by itself, but Magnum seems quite out of date relative to the rapid updates of CoreOS; a lot of DIY work is needed to customize it.
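
The file size problem in issue 1 can be checked ahead of time: Nova's API rejects user_data larger than 65535 bytes, so it is worth measuring a rendered cloud-init payload before handing it to Heat. A rough sketch (the base64 step mirrors how user_data is submitted to the API; the limit value is Nova's documented schema constraint):

```python
# Rough sketch: check whether a cloud-init payload fits under Nova's
# 65535-byte user_data limit. The limit applies to the base64-encoded
# form, which is how user_data is transmitted to the API.
import base64

NOVA_USER_DATA_LIMIT = 65535  # bytes, per the Nova API schema

def fits_in_user_data(cloud_init_text):
    encoded = base64.b64encode(cloud_init_text.encode("utf-8"))
    return len(encoded) <= NOVA_USER_DATA_LIMIT

small = "#cloud-config\nwrite_files: []\n"
huge = "#cloud-config\n" + "x" * 100000   # e.g. large inlined manifests
print(fits_in_user_data(small), fits_in_user_data(huge))  # True False
```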