MaaS Notes

Installation

LXD-based MAAS is so far the best solution. Follow the official guides for installing MAAS in LXD and for MAAS installation. The few steps to install are:
- Create a dedicated LXD environment for MAAS, including a network and a storage pool.
- Run maas init to create the admin user.
- Log in at https://{MAAS}:5240/MAAS.
- Set up user public key injection for bare metal commissioning.
- Commission nodes and set up networks.
- Deploy.

Storage Preparation

The volume can be ZFS/LVM/btrfs: create an LVM pool.
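As a minimal command sketch of those steps, assuming the snap-packaged MAAS and example names such as maas-pool, maas-net and an ubuntu:22.04 image (none of these names come from the original post):

  # dedicated LXD storage pool and network for MAAS (names are examples)
  lxc storage create maas-pool lvm
  lxc network create maas-net
  # launch a container for MAAS on that pool and network (image alias is an example)
  lxc launch ubuntu:22.04 maas -s maas-pool -n maas-net
  # inside that container: install MAAS and create the admin user during init
  lxc exec maas -- snap install maas
  lxc exec maas -- maas init

The exact maas init prompts and flags differ between MAAS releases, so check the version-specific documentation before relying on this sketch.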

Continue reading

OVS traffic capture

OVS traffic flow illustration (kolla example):

Traffic leaving the cloud via the provider network:
VM -> tap + qbr (linuxbridge) + qvb -> qvo + br-int + int-br-ex -> phy-br-ex + br-ex + br_vlan -> external network

Traffic to a VXLAN tenant network:
VM -> tap + qbr (linuxbridge) + qvb -> qvo + br-int + patch-tun -> patch-int + br-tun + vxlan port -> remote host VXLAN interface IP

If DVR is not used, all traffic goes from the compute nodes to the neutron nodes and then leaves through the neutron nodes' ports.
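A hedged sketch of capturing along that path with the standard OVS and tcpdump tooling; the interface names (tap1234abcd, eth1) are placeholders for whatever your deployment uses, not values from the post:

  # list bridges and ports to locate the tap/qvo/patch interfaces
  ovs-vsctl show
  # dump the OpenFlow rules br-int applies to the traffic
  ovs-ofctl dump-flows br-int
  # capture on the instance's tap device (name is an example)
  tcpdump -nne -i tap1234abcd
  # capture the encapsulated VXLAN traffic on the tunnel NIC (UDP 4789)
  tcpdump -nn -i eth1 udp port 4789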

Continue reading

To enable the built-in Horizontal Pod Autoscaling feature, a Metrics Server needs to be installed first so that k8s can pull resource data from it.

helm install stable/metrics-server -n metric --namespace kube-system -f metric.yml

Metrics Server has a chart in the Helm stable repository, but somehow newer versions of it behave weirdly and show errors such as:

unable to fetch pod metrics for pod rook-ceph/csi-rbdplugin-qv94k: no metrics known for pod

When this happens, it means you are facing TLS or network issues, typically between the Metrics Server and the kubelets.
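A minimal sketch of what metric.yml could contain to work around those TLS issues, assuming the stable chart's args value; the flags are real metrics-server options, but using them here is an assumption rather than the post's stated fix:

  # metric.yml (sketch) - values passed to stable/metrics-server
  args:
    # skip verification of the kubelets' self-signed serving certificates (assumption)
    - --kubelet-insecure-tls
    # reach kubelets by IP in case node hostnames do not resolve from the pod
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname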

Continue reading

How to - Ceph - Identify the server drive bay number of a faulty drive

To identify which drive bay a faulty disk is in:

Method 1 - Using iLO and iDRAC
- Log in to the iLO or iDRAC interface.
- Check for error messages in the iLO or iDRAC.
- If iLO (HP), from the main page go to Information → System Information → Storage → Physical View.
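A command-line sketch that can complement the iLO/iDRAC lookup, assuming access to the Ceph CLI and smartmontools on the OSD host; osd.12 and /dev/sdX are placeholders, not values from the post:

  # find which host and device back the suspect OSD
  ceph osd metadata 12 | grep -E 'hostname|devices'
  # read the drive's serial number so it can be matched in iLO/iDRAC
  smartctl -i /dev/sdX | grep -i serial
  # if the ledmon package is installed, blink the drive bay LED
  ledctl locate=/dev/sdX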

Continue reading

How to Replace a Ceph OSD

How to - Ceph - Configure Ceph on a new drive

Source: https://ceph.com/geen-categorie/admin-guide-replacing-a-failed-disk-in-a-ceph-cluster/

Remove the OSD of the faulty drive

If you are replacing a faulty drive with a new one, you will need to remove the OSD of the faulty drive before creating the new OSD.

*Requirement: the faulty SSD must already have been replaced with a healthy SSD.

Log in to the Ceph node with the faulty drive.
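A sketch of the usual removal sequence, assuming osd.12 is the OSD on the faulty drive (the id is an example; the linked guide covers the release-specific details):

  # take the OSD out of data placement so the cluster rebalances away from it
  ceph osd out 12
  # stop the OSD daemon on the node that hosts the drive
  systemctl stop ceph-osd@12
  # remove it from the CRUSH map, delete its auth key, and remove the OSD entry
  ceph osd crush remove osd.12
  ceph auth del osd.12
  ceph osd rm 12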

Continue reading

How to properly remove a node from a cluster

Find your node, then drain it so that k8s reschedules its pods and avoids scheduling onto this node in the future:

kubectl drain <node-name> --ignore-daemonsets --delete-local-data

You will then find the node.kubernetes.io/unschedulable=NoSchedule taint on this node. Delete the node from the cluster:

kubectl delete node <node-name>

Then everything k8s related will be removed, and you will only see this left on the node:

Last login: Fri Dec 6 05:25:27 2019 from 10.
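A quick verification sketch for these steps; <node-name> stays a placeholder exactly as in the commands above:

  # confirm the node is cordoned and carries the unschedulable taint after the drain
  kubectl get node <node-name>
  kubectl describe node <node-name> | grep -i taint
  # confirm the node is gone after the delete
  kubectl get nodes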

Continue reading

LuLU

Love coding and new technologies

Cloud Solution Consultant

Canada