All config and commands in this blog have been verified and tested against the Queens release.

Considering that Neutron LBaaS has been replaced by Octavia and marked as deprecated since Queens, I think it's time to write a brief blog about Octavia. Load balancing is key to many app services running on Openstack, and it's critical for a K8s environment as it's the only ingress endpoint for an exposed service.

Let's first talk about the issues and weaknesses of the current LBaaS:

  1. Doesn't support UDP.
  2. All traffic needs to first hit the Neutron LBaaS agents, which usually reside on the controller nodes; this causes a significant performance impact and a maintenance headache.
  3. It uses HAProxy and shares controller resources.
  4. No SSL termination support; you can't load certificates.
  5. Due to poor coding in the LBaaS dashboard, you can't delete an LB from Horizon; the user must delete all of its associated resources (listener, member, pool, LB) with the openstack client.

Now let's see what Octavia has brought to us:

  1. A dedicated amphora VM per load balancer, running as a regular Openstack instance, replaces the shared HAProxy. This also means traffic no longer needs to pass through the LBaaS agent nodes, which usually reside on controllers; instead, all traffic goes through the compute nodes that host the amphora VMs.
  2. SSL termination support (requires Barbican).
  3. Better dashboard support: you can delete an LB and all of its associated elements from the dashboard.

As of now, it still doesn't support UDP, which I expect to be resolved in the S release.

Octavia's core elements (shown in the Octavia topology diagram) include:

  • amphorae - Amphorae are the individual virtual machines, containers, or bare metal servers that accomplish the delivery of load balancing services to tenant application environments. In Octavia version 0.8, the reference implementation of the amphorae image is an Ubuntu virtual machine running HAProxy.
  • controller - The Controller is the “brains” of Octavia. It consists of four sub-components, which are individual daemons. They can be run on separate back-end infrastructure if desired:
    • API Controller - As the name implies, this subcomponent runs Octavia’s API. It takes API requests, performs simple sanitizing on them, and ships them off to the controller worker over the Oslo messaging bus.
    • Controller Worker - This subcomponent takes sanitized API commands from the API controller and performs the actions necessary to fulfill the API request.
    • Health Manager - This subcomponent monitors individual amphorae to ensure they are up and running, and otherwise healthy. It also handles failover events if amphorae fail unexpectedly.
    • Housekeeping Manager - This subcomponent cleans up stale (deleted) database records, manages the spares pool, and manages amphora certificate rotation.
  • network - Octavia cannot accomplish what it does without manipulating the network environment. Amphorae are spun up with a network interface on the “load balancer network,” and they may also plug directly into tenant networks to reach back-end pool members, depending on how any given load balancing service is deployed by the tenant.
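
If you deploy Octavia with Kolla, as described later in this post, each controller sub-component runs as its own daemon in a separate container. Assuming Kolla's default container names (an assumption; adjust to your deployment), a quick way to confirm they are all up is:

    # Expect to see octavia_api, octavia_worker, octavia_health_manager and octavia_housekeeping
    docker ps --filter name=octavia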

Important Notes In General:

  1. Octavia uses Openstack itself to create the VMs that serve as its amphorae. As shown in the Octavia topology diagram above, the amphora nodes need to talk to the other Octavia components, which have been pre-deployed separately and are reachable from the Openstack API network; this means the amphorae must be able to reach the API network. In addition, the controller reaches the amphora agent on TCP 9443 (ingress to the amphora), and the amphora sends heartbeats to the Health Manager on port 5555 (egress from the amphora); these are the Kolla default ports, so a proper security group is needed. Hence, putting the amphorae directly on the Openstack API network is the simplest way to fulfill all these network requirements.
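
    For example, a minimal security group for the amphorae could look like the sketch below (the name lb-mgmt-sec-grp is only an illustration; security groups allow all egress by default, so strictly only the ingress rule for the agent port is required):

    openstack security group create lb-mgmt-sec-grp
    # Allow the Octavia controller to reach the amphora agent
    openstack security group rule create --protocol tcp --dst-port 9443 lb-mgmt-sec-grp
    # Optional: ICMP and SSH for troubleshooting the amphorae
    openstack security group rule create --protocol icmp lb-mgmt-sec-grp
    openstack security group rule create --protocol tcp --dst-port 22 lb-mgmt-sec-grp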

  2. The amphora image is not provided out of the box; we need to build it manually. Luckily, Octavia provides a simple script for this. Check the official Octavia GitHub repo and use diskimage-create/diskimage-create.sh to create a default Ubuntu image. If you see an error like assert self.device_path[:5] == "/dev/" failed, it's probably caused by the local hostname not being resolvable through the hosts file, because the device_path value ends up being sudo: in that case.
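
    A minimal build could look like the following sketch (running the script with no options builds the default Ubuntu/HAProxy image; the output file name shown is the script's default and may differ from the one used below):

    git clone https://github.com/openstack/octavia
    cd octavia/diskimage-create
    ./diskimage-create.sh    # produces amphora-x64-haproxy.qcow2 by default

    The image can then be uploaded to Glance and tagged: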

    openstack image create --container-format bare --disk-format qcow2 --private --file amphora-x64-haproxy.raw --tag amphora amphora
    
  3. A capable flavor is required for creating the amphora VMs but has to be created manually; the desired sizing would be:

    octavia_amp_ram: 1024
    octavia_amp_vcpu: 1
    octavia_amp_disk: 2
    

    which translates to the Openstack client command:

    openstack flavor create --disk 2 --private --ram 1024 --vcpus 1 octavia_flavor

  4. A proper SSH keypair is also needed; it will be used when nova boots the amphorae.

    ```bash
    openstack keypair create --public-key /root/.ssh/id_rsa.pub octavia_ssh_key
    ```
    

Important Notes When Deploying With Kolla:

  1. As Octavia requires dedicated VMs to act as its LB instances, Kolla won't create these tenant resources during its deploy process; the user needs to create them all manually.

  2. Default values in ansible/group_vars/all.yml:

    octavia_api_port: "9876"
    octavia_health_manager_port: "5555"
    

    Explanation:

    • octavia_api_port: the port used to talk to the Octavia API.
    • octavia_health_manager_port: the port the Health Manager listens on for amphora heartbeats. This is why the amphora VMs need to be able to reach the Openstack API network.
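
    A quick way to verify both ports on a controller node after deployment (assuming the Kolla defaults above; the API listens on TCP, while the health manager heartbeats arrive over UDP):

    ss -tlnp | grep 9876    # octavia_api
    ss -ulnp | grep 5555    # octavia_health_manager
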
  3. Relevant sections of the default octavia.conf.j2 template:

    [certificates]
    ca_private_key_passphrase = {{ octavia_ca_password }}
    ca_private_key = /etc/octavia/certs/private/cakey.pem
    ca_certificate = /etc/octavia/certs/ca_01.pem
       
    [haproxy_amphora]
    server_ca = /etc/octavia/certs/ca_01.pem
    client_cert = /etc/octavia/certs/client.pem
       
    [api_settings]
    bind_host = {{ api_interface_address }}
    bind_port = {{ octavia_api_port }}
       
    [service_auth]
    username = {{ octavia_keystone_user }}
    password = {{ octavia_keystone_password }}
    user_domain_name = {{ default_user_domain_name }}
    project_name = {{ openstack_auth.project_name }}
    project_domain_name = {{ default_project_domain_name }}
       
    [health_manager]
    bind_port = {{ octavia_health_manager_port }}
    bind_ip = {{ octavia_network_interface_address }}

    [controller_worker]
    amp_boot_network_list = {{ octavia_amp_boot_network_list }}
    amp_image_tag = amphora
    amp_secgroup_list = {{ octavia_amp_secgroup_list }}
    amp_flavor_id = {{ octavia_amp_flavor_id }}
    amp_ssh_key_name = octavia_ssh_key
    loadbalancer_topology = {{ octavia_loadbalancer_topology }}
    

    Explanation:

    • octavia_ca_password: this is generated by kolla-genpwd into passwords.yml.
    • All certs and keys are generated by ./bin/create_certificates.sh cert $(pwd)/etc/certificates/openssl.cnf, for example:
    # git clone https://github.com/openstack/octavia
    # cd octavia
    # grep octavia_ca /etc/kolla/passwords.yml
    octavia_ca_password: mEUyBHLopKk501CX30WRnPuiDmoP3I7eNQIQbC6z
    # sed -i 's/foobar/mEUyBHLopKk501CX30WRnPuiDmoP3I7eNQIQbC6z/g' bin/create_certificates.sh
    # ./bin/create_certificates.sh cert $(pwd)/etc/certificates/openssl.cnf
    
    • octavia_keystone_user and octavia_keystone_password are the credentials of the Octavia service account. project_name can be any project; here we use the admin tenant, which results in the amphora VMs being deployed into the admin tenant.
    • The network in octavia_amp_boot_network_list will be attached to every amphora instance and used to talk to the other Octavia components. This is also the only interface you will see inside an amphora VM.
    • amp_image_tag is the tag you set when creating the amphora image.
    • octavia_loadbalancer_topology can be SINGLE or ACTIVE_STANDBY.

    As a result, if we use the pre-defined config we will end up using resources from the admin tenant; the requirement is that Octavia has visibility of the network, flavor, image, security group and key, which the admin tenant has by default.
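
Putting the pieces together, here is a sketch of how the IDs of the manually created resources could be looked up and wired into Kolla (the globals.yml keys mirror the variables referenced in the template above; the resource names are the ones used earlier in this post, and the IDs are placeholders for your own cloud):

    # Look up the IDs of the manually created resources
    openstack network show <api-or-lb-mgmt-network> -f value -c id
    openstack security group show lb-mgmt-sec-grp -f value -c id
    openstack flavor show octavia_flavor -f value -c id

    # /etc/kolla/globals.yml (illustrative values)
    octavia_amp_boot_network_list: "<network-id>"
    octavia_amp_secgroup_list: "<secgroup-id>"
    octavia_amp_flavor_id: "<flavor-id>"
    octavia_loadbalancer_topology: "SINGLE"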

To use Octavia to create an SSL-enabled LB on Openstack, we need the Barbican service; the following commands will be needed:

  1. To create a self-signed cert and key:

    openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=jenkins.cloud.rbc.ca/O=RBC/C=CA/ST=Ontario/L=Toronto/OU=Cloud"
    

    To check the content of an existing cert:

    openssl x509 -in server.crt -text
    
  2. Use the following commands to create an SSL-enabled LB:

    openssl pkcs12 -export -inkey server.key -in server.crt -certfile ca-chain.crt -passout pass: -out server.p12
    openstack secret store --name='tls_secret1' -t 'application/octet-stream' -e 'base64' --payload="$(base64 < server.p12)"
    openstack acl user add -u <octavia_id> $(openstack secret list | awk '/ tls_secret1 / {print $2}')
    openstack loadbalancer create --name lb1 --vip-subnet-id public-subnet
    # Re-run the following until lb1 shows ACTIVE and ONLINE statuses:
    openstack loadbalancer show lb1
    openstack loadbalancer listener create --protocol-port 443 --protocol TERMINATED_HTTPS --name listener1 --default-tls-container-ref=$(openstack secret list | awk '/ tls_secret1 / {print $2}') lb1
    openstack loadbalancer pool create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP
    openstack loadbalancer member create --subnet-id private-subnet --address 192.0.2.10 --protocol-port 80 pool1
    openstack loadbalancer member create --subnet-id private-subnet --address 192.0.2.11 --protocol-port 80 pool1
    
  3. Use the following commands to create an LB with multiple SSL certs (SNI):

    openssl pkcs12 -export -inkey server.key -in server.crt -certfile ca-chain.crt -passout pass: -out server.p12
    openssl pkcs12 -export -inkey server2.key -in server2.crt -certfile ca-chain2.crt -passout pass: -out server2.p12
    openstack secret store --name='tls_secret1' -t 'application/octet-stream' -e 'base64' --payload="$(base64 < server.p12)"
    openstack secret store --name='tls_secret2' -t 'application/octet-stream' -e 'base64' --payload="$(base64 < server2.p12)"
    openstack acl user add -u <octavia_id> $(openstack secret list | awk '/ tls_secret1 / {print $2}')
    openstack acl user add -u <octavia_id> $(openstack secret list | awk '/ tls_secret2 / {print $2}')
    openstack loadbalancer create --name lb1 --vip-subnet-id public-subnet
    # Re-run the following until lb1 shows ACTIVE and ONLINE statuses:
    openstack loadbalancer show lb1
    openstack loadbalancer listener create --protocol-port 443 --protocol TERMINATED_HTTPS --name listener1 --default-tls-container-ref=$(openstack secret list | awk '/ tls_secret1 / {print $2}') --sni-container-refs $(openstack secret list | awk '/ tls_secret1 / {print $2}') $(openstack secret list | awk '/ tls_secret2 / {print $2}') lb1
    openstack loadbalancer pool create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP
    openstack loadbalancer member create --subnet-id private-subnet --address 192.0.2.10 --protocol-port 80 pool1
    openstack loadbalancer member create --subnet-id private-subnet --address 192.0.2.11 --protocol-port 80 pool1
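
Whichever variant you use, once lb1 and listener1 are ACTIVE, a quick way to sanity-check TLS termination is to hit the VIP directly (a sketch; -k is needed because the certificate in this example is self-signed):

    # Grab the VIP assigned to lb1 and test the TERMINATED_HTTPS listener
    VIP=$(openstack loadbalancer show lb1 -f value -c vip_address)
    curl -kv https://$VIP/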