All configs and commands in this post have been verified against the OpenShift 4.5 release. OpenShift 4.5 introduces a new way to deploy Kubernetes using CoreOS with Ignition. This approach ensures all nodes in a cluster share the same image, and end users are discouraged from modifying anything at the OS level. Everything (NIC changes, troubleshooting, SSL certificate injection) should be done through OpenShift itself by defining YAML (a MachineConfig for OS files; nmstate can modify NICs).
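As a sketch of what "defining YAML" means here: an OS-level file change is declared as a MachineConfig and the operator rolls it out to the matching node pool. The file path and contents below are hypothetical placeholders:

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-custom-file
  labels:
    machineconfiguration.openshift.io/role: worker   # target the worker pool
spec:
  config:
    ignition:
      version: 2.2.0                # Ignition spec version used by OCP 4.5
    storage:
      files:
        - path: /etc/example.conf   # hypothetical file to lay down on each node
          mode: 0644
          contents:
            source: data:,hello%20world   # URL-encoded inline file contents
```

The Machine Config Operator then drains and reboots nodes in the pool one at a time to apply the change.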

Continue reading

Kubernetes requires certificates on every master and node so they can validate each other's identity. If a certificate expires, you'll see an error like:

```
Unable to connect to the server: x509: certificate has expired or is not yet valid
```

To fix the cluster, first verify the certificate status:

```shell
$ openssl x509 -noout -text -in /etc/kubernetes/pki/apiserver.crt
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 123123123123123 (0x123123123123)
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN=kubernetes
        Validity
            Not Before: Nov 16 16:58:58 2017 GMT
            Not After : Nov 16 16:58:58 2018 GMT
```
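To practice this check without touching a real cluster, you can generate a throwaway self-signed certificate and inspect its validity window the same way (the path and subject below are made up for illustration):

```shell
# Generate a temporary self-signed cert valid for 1 day (illustrative only)
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/demo.key -out /tmp/demo.crt \
  -days 1 -subj "/CN=kubernetes" 2>/dev/null

# Print only the validity window; compare "notAfter" with the current date
openssl x509 -noout -dates -in /tmp/demo.crt

# -checkend returns 0 only if the cert does NOT expire within N seconds,
# which is handy for scripting expiry alerts
openssl x509 -noout -checkend 3600 -in /tmp/demo.crt && echo "cert still valid"
```

On a real node you would point the same commands at `/etc/kubernetes/pki/apiserver.crt` (and the other certs under that directory) instead of the demo file.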

Continue reading

MaaS Notes

Installation

LXD-based MAAS is so far the best solution. Follow the official guides for the LXC install and the MAAS installation. A few steps to install:

- Create a dedicated LXD environment for MAAS, including network and storage pool.
- Run maas init to create the admin user.
- Log in at https://{MAAS}:5240/MAAS.
- Set up public-key injection for bare-metal commissioning.
- Commission nodes and set up networks.
- Deploy.

Storage Preparation

Volumes can be ZFS/LVM/Btrfs: create an LVM pool.
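Assuming a Snap-based install on Ubuntu, the steps above translate roughly to the commands below. Exact flags vary by MAAS version, and the email address is a placeholder, so treat this as a sketch rather than a recipe:

```shell
# Create the dedicated LXD environment (interactive: define bridge + storage pool)
lxd init

# Install MAAS from the snap store
sudo snap install maas

# Initialize MAAS, then create the admin user (prompts for password and SSH key)
sudo maas init
sudo maas createadmin --username admin --email admin@example.com

# Finally, log in at https://{MAAS}:5240/MAAS and upload your SSH public key
# so commissioned bare-metal nodes get it injected.
```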

Continue reading

To enable the built-in Horizontal Pod Autoscaling feature, a Metrics Server needs to be installed first so Kubernetes has resource data to pull from:

```shell
helm install stable/metrics-server -n metric --namespace kube-system -f metric.yml
```

Metrics Server has a chart in the Helm stable repo, but newer versions of it can behave oddly, showing an error like:

```
unable to fetch pod metrics for pod rook-ceph/csi-rbdplugin-qv94k: no metrics known for pod
```

When this happens, you are facing TLS and networking issues.
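Assuming the underlying cause is the usual one (kubelet serving certs that fail verification, or node hostnames that don't resolve from the pod network), a common workaround is to pass extra flags to Metrics Server through the chart values, e.g. in the metric.yml referenced above:

```yaml
# metric.yml -- values for the stable/metrics-server chart (a sketch)
args:
  # Skip kubelet serving-cert verification (many clusters use self-signed certs)
  - --kubelet-insecure-tls
  # Reach kubelets by IP when node hostnames don't resolve from inside pods
  - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
```

Skipping TLS verification is acceptable for labs; for production, the cleaner fix is to give kubelets certs that Metrics Server can verify.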

Continue reading

How to properly remove a node from a cluster

Find your node, then drain it so Kubernetes reschedules its pods and avoids scheduling new ones onto it:

```shell
kubectl drain <node-name> --ignore-daemonsets --delete-local-data
```

You'll then find the node.kubernetes.io/unschedulable:NoSchedule taint on this node. Next, delete the node from the cluster:

```shell
kubectl delete node <node-name>
```

Everything Kubernetes-related will be removed, and you'll only see this left on the node:

```
t login: Fri Dec 6 05:25:27 2019 from 10.
```

Continue reading

Rook CRD

Rook is a cloud-native storage solution: it creates CRDs, which in turn create their corresponding storage pods and resources.

Install Rook CRD

Install the Operator via its Helm chart. This is the foundation of all the fun:

```shell
helm repo add rook-release https://charts.rook.io/release
helm install --namespace rook-ceph rook-release/rook-ceph -n rook
```

Note: the Rook Operator and the CRD cluster must be in the same namespace, because the CRD will use the Helm-created ServiceAccount to create all resources.
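Once the operator is running, a CephCluster custom resource kicks off the actual storage cluster. A minimal sketch (the image tag and storage settings are illustrative, not recommendations):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph        # must match the operator's namespace (see note above)
spec:
  cephVersion:
    image: ceph/ceph:v14.2.9  # illustrative Ceph image tag
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3                  # odd count for monitor quorum
    allowMultiplePerNode: false
  storage:
    useAllNodes: true         # lab-friendly; pin nodes/devices in production
    useAllDevices: true
```

Applying this CR is what makes the operator spin up the mon, mgr, and OSD pods.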

Continue reading

All configs and commands in this post have been verified against the OpenShift 3.11 release. OpenShift is Red Hat's container platform; it uses Kubernetes as its PaaS underlay and adds more features such as CI/CD, an app store, etc.

How to Install

Similar to Kubespray, it uses a toolbox host that has root access to all nodes and runs Ansible scripts to install and deploy everything. A few prerequisites before installing:
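For context, the Ansible run on that toolbox host is driven by an inventory file. A minimal single-master sketch for 3.11 (hostnames and the deployment type are placeholders):

```ini
; /etc/ansible/hosts (sketch; hostnames are placeholders)
[OSEv3:children]
masters
nodes
etcd

[OSEv3:vars]
ansible_ssh_user=root
openshift_deployment_type=origin

[masters]
master.example.com

[etcd]
master.example.com

[nodes]
master.example.com openshift_node_group_name='node-config-master'
node1.example.com  openshift_node_group_name='node-config-compute'
```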

Continue reading

Kubespray Hints

I’ve been using Kubespray to deploy Kubernetes since Kube version 1.9. It’s by far the most customizable and flexible deployment tool for Kubernetes on the open-source market, so I think it’s worth a post. To begin with, let’s talk about the dark, stone-age time when Kubespray first came out as “Kargo”: it used to be confusing and felt like today’s kubeadm, driven by command-line options. Now it’s well maintained and fully Ansible-based, which means all variables and parameters are configured inside the inventory and group variables, with no more command-line options.
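The typical flow, as a sketch (paths follow the upstream repo layout; `mycluster` is a placeholder name):

```shell
git clone https://github.com/kubernetes-sigs/kubespray.git
cd kubespray

# All variables live in the inventory, not on the command line:
cp -rfp inventory/sample inventory/mycluster

# Edit inventory/mycluster/hosts.yaml and inventory/mycluster/group_vars/*
# to set node IPs, network plugin, versions, etc., then run the playbook:
ansible-playbook -i inventory/mycluster/hosts.yaml --become cluster.yml
```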

Continue reading

Pipeline execution

The currently running pipeline is available within Pipeline Expressions as execution. From there, you can navigate to different parts of the pipeline. For example, to reference the name of the first stage you can use ${ execution.stages[0]['name'] }.

The current stage

Values from the current stage's context are available by their variable names. In the source JSON, they are defined under the context object. if I see context.
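To make this concrete, here is a pared-down sketch of the pipeline JSON (stage name, account, and region are invented for illustration). With it, ${ execution.stages[0]['name'] } evaluates to "Deploy to Staging", and keys under context are readable by name from within that stage:

```json
{
  "stages": [
    {
      "name": "Deploy to Staging",
      "type": "deploy",
      "context": {
        "account": "my-k8s-account",
        "region": "us-east-1"
      }
    }
  ]
}
```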

Continue reading

A few changes for the default Helm chart on GitHub:

- A few new features are only available in newer component images (e.g. authorization), so update your images.
- The new Front50 requires adding credentials into /home/spinnaker/.aws, so make sure you mount it correctly:

```json
"volumeMounts": [
  {
    "name": "spinnaker-spinnaker-spinnaker-config",
    "mountPath": "/opt/spinnaker/config"
  },
  {
    "name": "spinnaker-spinnaker-s3-config",
    "mountPath": "/root/.aws"
  },
  {
    "name": "spinnaker-spinnaker-s3-config",
    "mountPath": "/home/spinnaker/.aws"
  }
],
```

- To enable authentication and authorization, you need to configure Gate and Fiat.

Continue reading


LuLU

Love coding and new technologies

Cloud Solution Consultant

Canada