Spinnaker has many pieces and parts. The table below lists each service, its default port, and its role.

| Service | Port | Description |
|---------|------|-------------|
| Halyard | — | Official Spinnaker install/deployment tool; an alternative is Helm. |
| Deck | 9000 | A static AngularJS-based UI. |
| Clouddriver | 7002 | Integrates with each cloud provider (AWS, GCP, Azure, etc.); responsible for all cloud-provider-specific read and write operations. |
| Echo | 8089 | Provides Spinnaker's notification support, including integrations with Slack, HipChat, SMS (via Twilio), and email. |
| Front50 | 8080 | Stores all application, pipeline, and notification metadata. |
| Gate | 8084 | Exposes APIs for all external consumers of Spinnaker (including Deck); it is the front door to Spinnaker. |
| Igor | 8088 | Facilitates the use of Jenkins in Spinnaker pipelines (a pipeline can be triggered by a Jenkins job or invoke one). |
| Orca | 8083 | Handles pipeline and task orchestration (i.e., starting a Clouddriver operation and waiting until it completes). |
| Rosco | 8087 | A Packer-based bakery. In keeping with immutable infrastructure, Rosco takes a Debian or Red Hat package and turns it into an Amazon Machine Image; it also supports Google Compute Engine and Azure images. |
| Fiat | 7003 | The authorization server for the Spinnaker system; exposes a RESTful interface for querying the access permissions of a particular user. |
| Minio | — | Spinnaker stores its data in S3 by default; Minio is a local S3-compatible simulator. |
| Jenkins | 8080 | Can serve as a trigger source or automate updates from a Git source. |
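
Given the default ports above, a quick way to sanity-check a locally running service is to curl it (Gate shown here; the /health endpoint is an assumption based on Spinnaker services being Spring Boot apps):

    # Sanity-check Gate, the API front door, on its default port
    curl http://localhost:8084/health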

Build and Test from Spinnaker Source Code

GitHub hosts each Spinnaker component in its own repository. To build one of these services from source into your own image (you need Docker and Java installed first):

  1. git clone the component repository to your local machine.
  2. ./gradlew clouddriver-web:installDist -x test
  3. docker build -t <image:tag> -f Dockerfile.slim .
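
Note that docker build -t requires an image tag (it was missing in the original command). Putting the steps together for Clouddriver, a minimal sketch (the repository URL is the official one; the clouddriver:local tag is arbitrary):

    # Clone, build, and package Clouddriver into a local image
    git clone https://github.com/spinnaker/clouddriver.git
    cd clouddriver
    ./gradlew clouddriver-web:installDist -x test
    docker build -t clouddriver:local -f Dockerfile.slim .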

Manage Multiple Kubernetes Clusters from a Single Spinnaker

Modify the Spinnaker chart values as follows:

kubeConfig:
  # Use this when you want to register arbitrary clusters with Spinnaker
  # Upload your ~/.kube/config to a secret
  enabled: true
  secretName: config.kube
  secretKey: config
  # List of contexts from the kubeconfig to make available to Spinnaker
  contexts:
  - prod
  - wt-dev

Here secretName (config.kube) is the secret you pre-created via kubectl create secret generic config.kube --from-file=$HOME/.kube/config --namespace devops, and secretKey (config) is the file name inside the secret that spinnaker-config.yml uses to locate the injected credential file under /tmp/kube/. Because we uploaded ~/.kube/config, the secret config.kube contains a single file named config. The contexts entries list the different Kubernetes clusters to expose. All of these values must match what is in the kubeconfig file.
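
A quick sketch to confirm these values line up before enabling the integration (the devops namespace comes from the command above):

    # List the context names in your kubeconfig; the chart's contexts list must match
    kubectl config get-contexts -o name

    # Confirm the secret exists and contains a single key named "config"
    kubectl describe secret config.kube --namespace devops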

How to Use OpenStack user-data

When using OpenStack or AWS as the infrastructure provider, Spinnaker can supply user-data. The user-data field must be base64 encoded. You can generate it dynamically with the built-in expression language via the ${ #toBase64() } helper. For example, you can pass the build number into user-data with: ${#toBase64("$trigger.buildNumber")}
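
A slightly fuller sketch that builds the string inside the expression; the BUILD_NUMBER= label and the trigger['buildNumber'] map-style access are illustrative, not required syntax:

    ${ #toBase64( 'BUILD_NUMBER=' + trigger['buildNumber'] ) }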

Use Jenkins Output as Spinnaker Input

The property file is a way to transfer information about your build from Jenkins to Spinnaker. The file must be archived by the Jenkins job and should contain key-value pairs. For example, suppose the build environment contains:

COMMITTER_NAME=andrew
BRANCH_NAME=mybranch
CONFIG=config-3059cad.tar.gz

A Jenkins job can save all of its environment variables in an archived build.properties file:

    node {
        stage('run-command') {
            // Dump all environment variables into a properties file
            sh 'env > build.properties'
        }
        // Archive so Spinnaker can import the properties
        archiveArtifacts 'build.properties'
    }

Then we can use this Jenkins job as a trigger and import its archived properties. In any later stage, the expression ${ trigger.properties['BRANCH_NAME'] } accesses the value of the variable named BRANCH_NAME. However, if we want to reuse these values in another Jenkins stage (e.g., pass them as parameters to jobs on other Jenkins nodes), we need to use ${ #stage('Jenkins')['context']['BRANCH_NAME'] }.
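
For instance, a downstream Jenkins stage could map the value into a job parameter like this (the parameter name BRANCH and the stage name 'Jenkins' are assumptions about your pipeline):

    # Job parameter on a later Spinnaker Jenkins stage
    BRANCH: ${ #stage('Jenkins')['context']['BRANCH_NAME'] }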

Kayenta Canary

A few hints for using Kayenta:

  1. Use a filter template in the metric, such as Container_Name: container_name="${scope}". This is used as part of the Prometheus lookup syntax (see the sketch after this list).
  2. When creating a new metric, apply this filter template; otherwise Kayenta only knows which value to look for (e.g., CPU) but not which container it belongs to.
  3. When using Canary Analysis, numbers lower than 1 are not allowed; you cannot use 0.5 to represent 30 minutes.
  4. Location or name can be an expression, e.g. ${ deployedServerGroups[0].region }, meaning the region where the server group was most recently deployed.
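
A sketch of how the filter template shapes the resulting query; the metric name container_cpu_usage_seconds_total and the scope value myapp-canary are assumptions:

    # Filter template named Container_Name:
    container_name="${scope}"

    # With scope = myapp-canary, the Prometheus selector becomes roughly:
    container_cpu_usage_seconds_total{container_name="myapp-canary"}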