
Konstantinos Tsakalozos
on 27 September 2017

Patch CDK #1: Build & Release


This article originally appeared on Konstantin’s blog

It happens all the time: you come across a super cool open source project you would gladly contribute to, but setting up the development environment and learning how to patch and release your fixes puts you off. The Canonical Distribution of Kubernetes (CDK) is no exception. This set of blog posts will shed some light on the darkest secrets of CDK.

Welcome to the CDK patch journey!

Build CDK from source

Prerequisites

You will need to have Juju configured and ready to build charms. We will not be covering that in this blog post; please follow the official documentation to set up your environment and build your own first charm with layers.
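
If you want a quick reference for that setup, here is a minimal sketch, assuming you use the charm snap and keep your charm work under ~/workspace/charms (both are assumptions, not requirements):

# install the charm build tooling (here via the charm snap)
> sudo snap install charm --classic
# tell charm build where to place builds and where to look for local layers and interfaces
> export JUJU_REPOSITORY=$HOME/workspace/charms
> mkdir -p $JUJU_REPOSITORY/layers $JUJU_REPOSITORY/interfaces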

Build the charms

CDK is made of a few charms, namely:

  • easyrsa
  • etcd
  • flannel
  • kubeapi-load-balancer
  • kubernetes-master
  • kubernetes-worker

To build each charm you need to spot the top-level charm layer and do a `charm build` on it. Each charm in the above list has its own GitHub repository, which you will need to clone and build. Let's try this out for easyrsa:

> git clone https://github.com/juju-solutions/layer-easyrsa
Cloning into 'layer-easyrsa'...
remote: Counting objects: 55, done.
remote: Total 55 (delta 0), reused 0 (delta 0), pack-reused 55
Unpacking objects: 100% (55/55), done.
Checking connectivity... done.
> cd ./layer-easyrsa/
> charm build
build: Composing into /home/jackal/workspace/charms
build: Destination charm directory: /home/jackal/workspace/charms/builds/easyrsa
build: Processing layer: layer:basic
build: Processing layer: layer:leadership
build: Processing layer: easyrsa (from .)
build: Processing interface: tls-certificates
proof: OK!

The above builds the easyrsa charm and prints the output directory (/home/jackal/workspace/charms/builds/easyrsa in this case).
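
To sanity-check the result, you can deploy the freshly built charm straight from that output directory; the sketch below assumes you already have a bootstrapped Juju controller:

> juju deploy /home/jackal/workspace/charms/builds/easyrsa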

Building the kubernetes-* charms is slightly different. As you might already know, the Kubernetes charm layers live upstream under cluster/juju/layers. Building the respective charms requires you to clone the kubernetes repository and pass the path of each layer to your invocation of charm build. Let's build the kubernetes-worker layer here:

> git clone https://github.com/kubernetes/kubernetes
Cloning into 'kubernetes'...
remote: Counting objects: 602553, done.
remote: Compressing objects: 100% (57/57), done.
remote: Total 602553 (delta 18), reused 20 (delta 15), pack-reused 602481
Receiving objects: 100% (602553/602553), 456.97 MiB | 2.91 MiB/s, done.
Resolving deltas: 100% (409190/409190), done.
Checking connectivity... done.
> cd ./kubernetes/
> charm build cluster/juju/layers/kubernetes-worker/
build: Composing into /home/jackal/workspace/charms
build: Destination charm directory: /home/jackal/workspace/charms/builds/kubernetes-worker
build: Processing layer: layer:basic
build: Processing layer: layer:debug
build: Processing layer: layer:snap
build: Processing layer: layer:nagios
build: Processing layer: layer:docker (from ../../../workspace/charms/layers/layer-docker)
build: Processing layer: layer:metrics
build: Processing layer: layer:tls-client
build: Processing layer: layer:nvidia-cuda (from ../../../workspace/charms/layers/nvidia-cuda)
build: Processing layer: kubernetes-worker (from cluster/juju/layers/kubernetes-worker)
build: Processing interface: nrpe-external-master
build: Processing interface: dockerhost
build: Processing interface: sdn-plugin
build: Processing interface: tls-certificates
build: Processing interface: http
build: Processing interface: kubernetes-cni
build: Processing interface: kube-dns
build: Processing interface: kube-control
proof: OK!

During charm build, all layers and interfaces referenced recursively, starting from the top charm layer, are fetched and merged to form your charm. The layers needed to build a charm are specified in a layer.yaml file at the root of the charm's directory. For example, looking at cluster/juju/layers/kubernetes-worker/layer.yaml we see that the kubernetes-worker charm uses the following layers and interfaces:

- 'layer:basic'
- 'layer:debug'
- 'layer:snap'
- 'layer:docker'
- 'layer:metrics'
- 'layer:nagios'
- 'layer:tls-client'
- 'layer:nvidia-cuda'
- 'interface:http'
- 'interface:kubernetes-cni'
- 'interface:kube-dns'
- 'interface:kube-control'

Layers are an awesome way to share operational logic among charms. For instance, the maintainers of the nagios layer have a better understanding of the operational needs of Nagios, but that does not mean the authors of the Kubernetes charms cannot use their work.

charm build will recursively look up each layer and interface at http://interfaces.juju.solutions/ to figure out where its source is. Each repository is fetched locally and squashed with all the other layers to form a single package, the charm. Go ahead and do a charm build with "-l debug" to see how and when each layer is fetched. It is important to know that if you already have a local copy of a layer under $JUJU_REPOSITORY/layers, or of an interface under $JUJU_REPOSITORY/interfaces, charm build will use that local fork instead of fetching it from the registered repositories. This enables charm authors to work on cross-layer patches. Note that you might need to rename the directory of your local copy to match the name of the layer or interface exactly.
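
As a sketch of that workflow, here is one way to override the docker layer with a local fork while rebuilding the worker charm (the paths are illustrative, and the directory name may need to match the layer name exactly, as noted above):

> export JUJU_REPOSITORY=$HOME/workspace/charms
> mkdir -p $JUJU_REPOSITORY/layers
> git clone https://github.com/juju-solutions/layer-docker $JUJU_REPOSITORY/layers/docker
# run from the kubernetes repository root, with debug logging to confirm the local copy is used
> charm build -l debug cluster/juju/layers/kubernetes-worker/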

Building Resources

Charms will install Kubernetes, but to do so they need the Kubernetes binaries. We package these binaries as snaps so that they are self-contained and can be deployed on any Linux distribution. Building these binaries is pretty straightforward as long as you know where to find them 🙂

Here is the repository holding the Kubernetes snaps: https://github.com/juju-solutions/release.git. The branch we want is rye/snaps:

> git clone https://github.com/juju-solutions/release.git
Cloning into 'release'...
remote: Counting objects: 1602, done.
remote: Total 1602 (delta 0), reused 0 (delta 0), pack-reused 1602
Receiving objects: 100% (1602/1602), 384.69 KiB | 236.00 KiB/s, done.
Resolving deltas: 100% (908/908), done.
Checking connectivity... done.
> cd release
> git checkout rye/snaps
Branch rye/snaps set up to track remote branch rye/snaps from origin.
Switched to a new branch 'rye/snaps'

Have a look at the README.md inside the snap directory to see how to build the snaps:

> cd snap/
> ./docker-build.sh KUBE_VERSION=v1.7.4

A number of .snap files should be available after the build.
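
If you want to feed these freshly built snaps to a locally built charm, you can attach them as resources at deploy time. A sketch follows; the resource names and file names are hypothetical, so check the charm's metadata.yaml for the real ones:

> juju deploy ~/workspace/charms/builds/kubernetes-worker \
    --resource kubelet=./kubelet_1.7.4_amd64.snap \
    --resource kube-proxy=./kube-proxy_1.7.4_amd64.snap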

In a similar fashion you can build the snap package holding the Kubernetes addons. We refer to this package as cdk-addons and it can be found at: https://github.com/juju-solutions/cdk-addons.git

> git clone https://github.com/juju-solutions/cdk-addons.git
Cloning into 'cdk-addons'...
remote: Counting objects: 408, done.
remote: Total 408 (delta 0), reused 0 (delta 0), pack-reused 408
Receiving objects: 100% (408/408), 51.16 KiB | 0 bytes/s, done.
Resolving deltas: 100% (210/210), done.
Checking connectivity... done.
> cd cdk-addons/
> make

The last resource you will need (which is not packaged as a snap) is the container network interface (CNI). Let's grab the repository and check out a release tag:

> git clone https://github.com/containernetworking/cni.git
Cloning into 'cni'...
remote: Counting objects: 4048, done.
remote: Compressing objects: 100% (5/5), done.
remote: Total 4048 (delta 0), reused 2 (delta 0), pack-reused 4043
Receiving objects: 100% (4048/4048), 1.76 MiB | 613.00 KiB/s, done.
Resolving deltas: 100% (1978/1978), done.
Checking connectivity... done.
> cd cni
> git checkout -f v0.5.1

Build and package the cni resource:

> docker run --rm -e "GOOS=linux" -e "GOARCH=amd64" -v `pwd`:/cni golang /bin/bash -c "cd /cni && ./build"
Building API
Building reference CLI
Building plugins
 flannel
 tuning
 bridge
 ipvlan
 loopback
 macvlan
 ptp
 dhcp
 host-local
 noop
> cd ./bin
> tar -cvzf ../cni.tgz *
bridge
cnitool
dhcp
flannel
host-local
ipvlan
loopback
macvlan
noop
ptp
tuning

You should now have a cni.tgz in the root folder of the cni repository.
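
Like the snaps, cni.tgz is attached to the charms as a resource. A minimal sketch for an already deployed worker, assuming the resource is named "cni" (verify the name against the charm's metadata.yaml):

> juju attach kubernetes-worker cni=./cni.tgz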

Two things to note here:

  • We do have a CI for building, testing and releasing charms and bundles. In case you want to follow each step of the build process, you can find our CI scripts here: https://github.com/juju-solutions/kubernetes-jenkins
  • You do not need to build all resources yourself. You can grab the resources used in CDK from the Juju store. Starting from the canonical-kubernetes bundle you can navigate to any charm shipped. Select one from the very end of the bundle page and then look for the “resources” sidebar on the right. Download any of them, rename it properly and you are ready to use it in your release.

Releasing Your Charms

After patching the charms to match your needs, please consider submitting a pull request to tell us what you have been up to. Contrary to many other projects, you do not need to wait for your PR to be accepted before you can make your work public. You can immediately release your work under your own namespace on the charm store; this is described in detail in the official charm authors documentation. The development team often uses private namespaces to test PoCs and new features. The main namespace CDK is released from is "containers".
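
As a rough sketch of that release flow, with ~myuser standing in for your own charm store account:

# authenticate against the charm store
> charm login
# push the locally built charm under your own namespace; this prints the new revision
> charm push ~/workspace/charms/builds/easyrsa cs:~myuser/easyrsa
# release the revision reported by push, e.g. revision 0
> charm release cs:~myuser/easyrsa-0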

Yet, there is one feature you need to be aware of when attaching snaps to your charms. Snaps have their own release cycle and repositories. If you want to use the officially released snaps instead of attaching your own builds, you can put a dummy zero-sized file with the correct extension (.snap) in the place of each snap resource. The snap layer will see that the resource is empty and will grab the snap from the official repositories instead. Using the official snaps is recommended; however, in network-restricted environments you might need to attach your own snaps when you deploy the charms.
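
A sketch of that trick for a charm pushed to your own namespace; the resource names are hypothetical, so list the charm's resources to get the real ones:

# empty placeholder files make the snap layer fall back to the official snap store
> touch kubelet.snap kube-proxy.snap kubectl.snap
> charm attach cs:~myuser/kubernetes-worker-0 kubelet=./kubelet.snap
> charm attach cs:~myuser/kubernetes-worker-0 kube-proxy=./kube-proxy.snap
> charm attach cs:~myuser/kubernetes-worker-0 kubectl=./kubectl.snap
# release the charm together with the placeholder resource revisions
> charm release cs:~myuser/kubernetes-worker-0 \
    --resource kubelet-0 --resource kube-proxy-0 --resource kubectl-0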

Why is it this way?

Building CDK is of average difficulty as long as you know where to look. It is not perfect by any standards and it will probably continue down this path. The reason is that there are opposing forces shaping the build process. This should come as no surprise: as Kubernetes changes rapidly and constantly expands, the build and release process has to be flexible enough to accommodate any new artefacts. Consider for example the switch from the flannel CNI to Calico; in our case that is one resource and one charm that need to be updated. A monolithic build script might look more "elegant" to outsiders (e.g. make CDK), but it would hide a lot under the carpet. CI should be part of the culture of any team and should be owned by the team, or else you get disconnected from the end product, causing delays and friction.

Our build and release process might look a bit "dirty" with a lot of moving parts, but it really is not that bad! I managed to highlight the build and release steps in a single blog post. Positive feedback also comes from our field engineers. Most of the time CDK deploys out of the box. When our field engineers are called in, it is either because our customers have a special requirement for the software or because they have an "unconventional" environment in which Kubernetes needs to be deployed. Having such a simple and flexible build and release process enables our people to solve a problem on-site and release the fix to the Juju store within a couple of hours.

Next steps

This blog post serves as foundation work for what is coming up. The plan is to go over some easy patches so we further demystify how CDK works.
