Kyverno BaseImage

Motivations

It’s common for some k8s clusters to have their own private image registry and, for various reasons, to avoid pulling images from any other registry. This page is about how to integrate kyverno into a k8s cluster so that image pull requests are redirected to a specified registry.

Use cases

How to use it

We provide an official BaseImage which integrates kyverno into the cluster: kubernetes-kyverno:v1.19.8. Note that it contains no docker images other than those necessary to run a k8s cluster, so if you want to use this cloud image and you also need other docker images (such as nginx) to run a container, you need to cache those docker images into your private registry.

Of course, sealer can help you do this. Take nginx as an example.
First, include nginx in the file imageList.
You can execute cat imageList to make sure you have done this, and the result should look like this:

[root@ubuntu ~]# cat imageList
nginx:latest

Second, create a Kubefile with the following content:

FROM kubernetes-kyverno:v1.19.8
COPY imageList manifests
CMD kubectl run nginx --image=nginx:latest

Third, execute sealer build to build a new cloud image:

[root@ubuntu ~]# sealer build -t my-nginx-kubernetes:v1.19.8 .

With just this simple command, sealer caches the nginx:latest image into the private registry. If you doubt whether sealer has successfully cached the image, execute sealer inspect my-nginx-kubernetes:v1.19.8 and locate the layers attribute of the spec section; you will find many layers. In this case, the last layer has two key:value pairs, type: BASE and value: registry cache, which tells us it holds the images cached to the registry. Note this layer’s id, then execute cd /var/lib/sealer/data/overlay2/{layer-id}/registry/docker/registry/v2/repositories/library and you will find the nginx image in that directory.
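The check can be scripted roughly like this (a sketch; LAYER_ID is a placeholder for the id you noted from the inspect output):

# find the layer whose type is BASE and whose value is "registry cache"
sealer inspect my-nginx-kubernetes:v1.19.8
LAYER_ID=xxx   # placeholder: the id of that layer
ls /var/lib/sealer/data/overlay2/${LAYER_ID}/registry/docker/registry/v2/repositories/library
# an nginx directory should be listed here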

Now you can use this new cloud image to create a k8s cluster. After your cluster starts up, there is already a pod running the nginx:latest image; you can see it by executing kubectl describe pod nginx, and you can also create more pods running the nginx:latest image.
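For instance, once the cluster is up you can check which image the pod actually pulled (with the redirect policy described below in place, it should point at the private registry sea.hub:5000):

# print the image used by the nginx pod's first container
kubectl get pod nginx -o jsonpath='{.spec.containers[0].image}'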

How to build kyverno BaseImage

The following is the sequence of steps for building a kyverno built-in cloud image.

Step 1: choose a base image

Choose a base image which can create a k8s cluster with at least one master node and one worker node. To demonstrate the workflow, I will use kubernetes-rawdocker:v1.19.8. You can get the same image by executing sealer pull kubernetes-rawdocker:v1.19.8.

Step 2: get the kyverno install yaml and cache the image

Download the “install.yaml” of kyverno from https://raw.githubusercontent.com/kyverno/kyverno/release-1.5/definitions/release/install.yaml; you can replace the version with whichever one you want. I use 1.5 in this demonstration.

In order to use the kyverno BaseImage in an offline environment, you need to cache the images used in install.yaml. In this case, there are two docker images that need to be cached: ghcr.io/kyverno/kyverno:v1.5.1 and ghcr.io/kyverno/kyvernopre:v1.5.1. So first rename them to sea.hub:5000/kyverno/kyverno:v1.5.1 and sea.hub:5000/kyverno/kyvernopre:v1.5.1 in install.yaml, where sea.hub:5000 is the private registry domain in your k8s cluster. Then create a file imageList with the following content:

ghcr.io/kyverno/kyverno:v1.5.1
ghcr.io/kyverno/kyvernopre:v1.5.1
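One way to perform the rename described above (a sketch, assuming GNU sed):

# fetch install.yaml and point the kyverno images at the private registry
wget https://raw.githubusercontent.com/kyverno/kyverno/release-1.5/definitions/release/install.yaml
sed -i 's#ghcr.io/kyverno#sea.hub:5000/kyverno#g' install.yaml
grep 'image:' install.yaml   # both images should now start with sea.hub:5000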

Step 3: create a ClusterPolicy

Create a yaml with the following content:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: redirect-registry
spec:
  background: false
  rules:
    - name: prepend-registry-containers
      match:
        resources:
          kinds:
            - Pod
      preconditions:
        all:
          - key: "{{request.operation}}"
            operator: In
            value:
              - CREATE
              - UPDATE
      mutate:
        foreach:
          - list: "request.object.spec.containers"
            patchStrategicMerge:
              spec:
                containers:
                  - name: "{{ element.name }}"
                    image: "sea.hub:5000/{{ images.containers.{{element.name}}.path}}:{{images.containers.{{element.name}}.tag}}"
    - name: prepend-registry-initcontainers
      match:
        resources:
          kinds:
            - Pod
      preconditions:
        all:
          - key: "{{request.operation}}"
            operator: In
            value:
              - CREATE
              - UPDATE
      mutate:
        foreach:
          - list: "request.object.spec.initContainers"
            patchStrategicMerge:
              spec:
                initContainers:
                  - name: "{{ element.name }}"
                    image: "sea.hub:5000/{{ images.initContainers.{{element.name}}.path}}:{{images.initContainers.{{element.name}}.tag}}"

This ClusterPolicy redirects image pull requests to the private registry sea.hub:5000. I name this file redirect-registry.yaml.
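Once a cluster built from the final image is up (or kyverno and this policy are otherwise installed), you can sanity-check the mutation with a server-side dry run, which sends the pod through the admission webhooks without persisting it:

# the returned spec should show the image rewritten to sea.hub:5000/...
kubectl run mutate-test --image=nginx:latest --dry-run=server -o yaml | grep 'image:'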

Step 4: create a shell script to monitor kyverno pod

Because the ClusterPolicy only works after the kyverno pod is running, it is advised to create and run the following shell script, which blocks until the ClusterPolicy reports ready.

#!/bin/bash

echo "[kyverno-start]: Waiting for the kyverno to be ready..."

while true
do
  clusterPolicyStatus=`kubectl get cpol -o go-template --template={{range.items}}{{.status.ready}}{{end}}`;
  if [ "$clusterPolicyStatus" == "true" ]; then
    break;
  fi
  sleep 1
done

echo "kyverno is running"

I named this file wait-kyverno-ready.sh.

Step 5: create the build content

Create a kyvernoBuild directory with five files: the etc/install.yaml and imageList from step 2, etc/redirect-registry.yaml from step 3, scripts/wait-kyverno-ready.sh from step 4, and a Kubefile with the following content:

FROM kubernetes-rawdocker:v1.19.8
COPY imageList manifests
COPY etc .
COPY scripts .
CMD kubectl create -f etc/install.yaml && kubectl create -f etc/redirect-registry.yaml
CMD bash scripts/wait-kyverno-ready.sh

Step 6: build the image

Assuming you are in the kyvernoBuild directory, execute sealer build --mode lite -t kubernetes-kyverno:v1.19.8 .
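After the build finishes, you can confirm the image exists locally, assuming your sealer version provides the images subcommand:

# list local CloudImages; the newly built image should appear
sealer images | grep kubernetes-kyverno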

Develop an out-of-tree plugin

Motivations

Sealer supports common built-in plugins, such as the hostname plugin and the label plugin, which users can define and use
according to their requirements. Sealer also supports loading an out-of-tree plugin written in golang. This page is about
how to extend a new plugin type and how to develop an out-of-tree plugin.

Use cases

How to develop an out of tree plugin

If users don’t want their plugin code to be open sourced, they can develop an out-of-tree plugin and use it.

  1. Implement the golang plugin interface and expose the variables named Plugin and PluginType.
  • the package name must be “main”
  • the exposed variable must be “Plugin”
  • the exposed variable must be “PluginType”

Example: list_nodes.go

package main

import (
	"fmt"

	"github.com/alibaba/sealer/client/k8s"
	"github.com/alibaba/sealer/plugin"
)

type list string

// Run prints the name of every node in the cluster.
func (l *list) Run(context plugin.Context, phase plugin.Phase) error {
	client, err := k8s.Newk8sClient()
	if err != nil {
		return err
	}
	nodeList, err := client.ListNodes()
	if err != nil {
		return fmt.Errorf("cluster nodes not found, %v", err)
	}
	for _, v := range nodeList.Items {
		fmt.Println(v.Name)
	}
	return nil
}

// Plugin and PluginType are looked up by name when sealer loads the .so file.
var PluginType = "LIST_NODE"
var Plugin list
  2. Build the new plugin as a .so file. The plugin file and the sealer source code must be built with the same golang
    runtime to avoid compilation problems; we suggest building the .so file against the specific sealer version you use,
    otherwise sealer will fail to load it. You can replace the build file in the test directory
    under Example to build your own .so file:
go build -buildmode=plugin -o list_nodes.so list_nodes.go
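To double-check the exported symbols before shipping the file, one option (a sketch, assuming a standard Go toolchain) is:

# sealer looks up the Plugin and PluginType symbols by name when loading the .so,
# so both should appear in the symbol table
go tool nm list_nodes.so | grep -E 'Plugin(Type)?$'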
  3. Use the new .so file.

Copy the .so file and the plugin config file into your cloud image. We can also append the plugin yaml to the Clusterfile and
use sealer apply -f Clusterfile to test it.
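For example, testing via the Clusterfile could look like this sketch (the yaml document is the list_nodes.yaml shown below; documents in a Clusterfile are separated by ---):

# append the plugin definition to the Clusterfile and re-apply
printf -- '---\n' >> Clusterfile
cat list_nodes.yaml >> Clusterfile
sealer apply -f Clusterfile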

Kubefile:

FROM kubernetes:v1.19.8
COPY list_nodes.so plugin
COPY list_nodes.yaml plugin
Then build the image:
sealer build -m lite -t kubernetes-post-install:v1.19.8 .

list_nodes.yaml:

apiVersion: sealer.aliyun.com/v1alpha1
kind: Plugin
metadata:
  name: list_nodes.so # out-of-tree plugin name
spec:
  type: LIST_NODE # define your own plugin type
  action: PostInstall # the stage at which this plugin will be applied

Apply it in your cluster: sealer run kubernetes-post-install:v1.19.8 -m x.x.x.x -p xxx

Architecture

Sealer has two top-level modules: the Build Engine and the Apply Engine.

The Build Engine takes a Kubefile and a build context as input and builds a CloudImage that contains all the dependencies.
The Apply Engine uses a Clusterfile to initialize a cluster which contains kubernetes and other applications.

Build Engine

  • Parser : parse Kubefile into image metadata
  • Registry : push or pull the CloudImage
  • Store : save CloudImage to local disks

Builders

  • Lite Builder, sealer will check all the manifests or helm charts, decode the docker images in those files, and cache them into the CloudImage.
  • Cloud Builder, sealer will create a cluster on a public cloud, execute the RUN & CMD commands which are defined in the Kubefile, and then cache all the docker images in the cluster.
  • Container Builder, using a Docker container as a node, sealer runs a kubernetes cluster in containers and then caches all the docker images.

Apply Engine

  • Infra : manage infrastructure, like creating VMs in a public cloud and applying the cluster on top of them, or using docker to emulate nodes.
  • Runtime : cluster installer implementation, like using kubeadm to install the cluster.
  • Config : application config, like a mysql username and password or other settings; you can use Config to overwrite any file you want (see the sketch after this list).
  • Plugin : plugins help us do some extra work, like executing a shell command before install or adding a label to a node after install.
  • Debug : helps us check whether the cluster is healthy and find the reason when something unexpected happens.
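As an illustration of the Config module, here is a hedged sketch of overwriting a file inside the CloudImage from the Clusterfile (the file name and data are hypothetical, and your sealer version’s schema may differ):

# append a Config document to the Clusterfile; sealer writes `data` to `path`
# inside the CloudImage rootfs before the application starts
cat >> Clusterfile <<'EOF'
---
apiVersion: sealer.aliyun.com/v1alpha1
kind: Config
metadata:
  name: mysql-config
spec:
  path: etc/mysql-config.yaml   # hypothetical target file in the image
  data: |
    mysql-user: root
    mysql-passwd: xxx
EOF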

Other modules

  • Filesystem : copy the CloudRootfs files to all nodes
  • Mount : mount all the CloudImage layers together
  • Checker : do some pre-checks and post-checks
  • Command : a command proxy for tasks where the OS lacks the needed command, like ipvs or cert management
  • Guest : manage the user application layer, like executing the CMD commands defined in the Kubefile

ARM CloudImage

Download sealer, for example v0.5.0:

wget https://github.com/alibaba/sealer/releases/download/v0.5.0/sealer-v0.5.0-linux-arm64.tar.gz
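Then unpack the archive and install the binary, for example (assuming the release tarball contains the sealer binary at its root):

# unpack the release archive and put sealer on the PATH
tar -xzf sealer-v0.5.0-linux-arm64.tar.gz
mv sealer /usr/bin/
sealer version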

Run a cluster on ARM platform

Just use the ARM CloudImage kubernetes-arm64:v1.19.7:

sealer run kubernetes-arm64:v1.19.7 --master 192.168.0.3 --passwd xxx