Build & run a custom Kubernetes cluster

This example shows how to build a cluster image that includes the dashboard, then start it with a single command.


FAQ

This section is meant to answer the most frequently asked questions about sealer, and it will be updated regularly.

How to clean the host environment manually when sealer apply fails

In some cases sealer apply fails and the error output alone is not enough to recover from. This section
guides you through cleaning your host manually.

Follow the cleanup steps below if kubeadm init failed.

Umount the rootfs or apply mount if it exists

df -h | grep sealer
overlay 40G 7.3G 31G 20% /var/lib/sealer/data/my-cluster/rootfs

Umount example:

umount /var/lib/sealer/data/my-cluster/rootfs

Delete the rootfs directory if it exists

rm -rf /var/lib/sealer/data/my-cluster

Delete the kubernetes directories if they exist

rm -rf /etc/kubernetes
rm -rf /etc/cni
rm -rf /opt/cni

Delete the docker registry container if it exists

docker ps
docker rm -f -v sealer-registry

Follow the cleanup steps below if your cluster is already up.

kubeadm reset

kubeadm reset -f

Delete kube config and kubelet files if they exist

rm -rf $HOME/.kube/config
rm -rf ~/.kube/ && rm -rf /etc/kubernetes/ && \
rm -rf /etc/systemd/system/kubelet.service.d && rm -rf /etc/systemd/system/kubelet.service && \
rm -rf /usr/bin/kube* && rm -rf /usr/bin/crictl && \
rm -rf /etc/cni && rm -rf /opt/cni && \
rm -rf /var/lib/etcd && rm -rf /var/etcd
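The manual steps above can be combined into one script. The sketch below introduces two assumed variables that are not part of the guide: ROOT (defaults to the real filesystem root; point it at a scratch directory to rehearse safely) and CLUSTER (your cluster name, my-cluster in this guide). Every step is guarded so it is a no-op where the artifact does not exist.

```shell
# Combined sketch of the manual cleanup steps above.
# ROOT and CLUSTER are assumptions for illustration, not sealer options.
ROOT=${ROOT:-}
CLUSTER=${CLUSTER:-my-cluster}
rootfs="$ROOT/var/lib/sealer/data/$CLUSTER/rootfs"

# umount the overlay if it is still mounted (see: df -h | grep sealer)
if mountpoint -q "$rootfs" 2>/dev/null; then umount "$rootfs"; fi

# delete the rootfs and kubernetes directories
rm -rf "$ROOT/var/lib/sealer/data/$CLUSTER"
rm -rf "$ROOT/etc/kubernetes" "$ROOT/etc/cni" "$ROOT/opt/cni"

# remove the registry container only when docker is available
if command -v docker >/dev/null 2>&1; then
    docker rm -f -v sealer-registry || true
fi
```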

Run a cluster

Run on existing servers

Server IP addresses: 192.168.0.1 ~ 192.168.0.13
Server password: sealer123

Run the Kubernetes cluster on the existing servers.

sealer run kubernetes:v1.19.8 \
-m 192.168.0.1,192.168.0.2,192.168.0.3 \
-n 192.168.0.4,192.168.0.5,192.168.0.6 \
-p sealer123 # ssh passwd

Check the Cluster

[root@iZm5e42unzb79kod55hehvZ ~]# kubectl get node
NAME                      STATUS   ROLES    AGE   VERSION
izm5e42unzb79kod55hehvz   Ready    master   18h   v1.19.8
izm5ehdjw3kru84f0kq7r7z   Ready    master   18h   v1.19.8
izm5ehdjw3kru84f0kq7r8z   Ready    master   18h   v1.19.8
izm5ehdjw3kru84f0kq7r9z   Ready    <none>   18h   v1.19.8
izm5ehdjw3kru84f0kq7raz   Ready    <none>   18h   v1.19.8
izm5ehdjw3kru84f0kq7rbz   Ready    <none>   18h   v1.19.8
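The check above can also be scripted. In the sketch below the node list is simulated with printf, and the NotReady line is a hypothetical example added for illustration; on a real cluster you would pipe kubectl get node --no-headers into the same awk filter, and empty output would mean every node is Ready.

```shell
# Simulated `kubectl get node --no-headers` output; the NotReady line is
# hypothetical, not from the cluster above.
printf '%s\n' \
  'izm5e42unzb79kod55hehvz Ready march18h v1.19.8' \
  'izm5ehdjw3kru84f0kq7r9z NotReady <none> 18h v1.19.8' |
awk '$2 != "Ready" {print $1, $2}'
# prints: izm5ehdjw3kru84f0kq7r9z NotReady
```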

Scale up and down

Use the join command to scale up the cluster.

$ sealer join \
--masters 192.168.0.7,192.168.0.8,192.168.0.9,192.168.0.10 \
--nodes 192.168.0.11,192.168.0.12,192.168.0.13
# or
$ sealer join --masters 192.168.0.7-192.168.0.10 --nodes 192.168.0.11-192.168.0.13

Use the delete command to scale down the cluster.

$ sealer delete \
--masters 192.168.0.7,192.168.0.8,192.168.0.9,192.168.0.10 \
--nodes 192.168.0.11,192.168.0.12,192.168.0.13
# or
$ sealer delete --masters 192.168.0.7-192.168.0.10 --nodes 192.168.0.11-192.168.0.13

Clean up the Kubernetes cluster

sealer delete --all

Sealer will also remove infrastructure resources if you use cloud mode.

Build application image

Motivations

An application image contains an application with all its dependencies except the base image, so it can be installed
into an already existing Kubernetes cluster. With application images, a cluster can be updated incrementally.

Use cases

Build an application image

Just add the argument --base=false to build an application image. The size of an application image mostly depends on
the size of the docker images it contains; without the rootfs, it is much slimmer.

For example, to build a prometheus application image:
Kubefile:

FROM registry.cn-qingdao.aliyuncs.com/sealer-apps/openebs-localpv:2.11.0
COPY prometheus manifests
CMD kubectl apply -f prometheus/crd.yaml
CMD kubectl apply -f prometheus/operator.yaml

Build command:

sealer build -f Kubefile -t prometheus:2.30.0 --base=false .

The image prometheus:2.30.0 will not contain the embedded Kubernetes cluster; it only contains the docker images that
the application needs, plus the helm chart or other operator manifests.

Apply this application image

Currently only sealer clusters are supported. Sealer patches docker and the registry to cache docker images inside the
cluster, so if the cluster was not installed by sealer, installing this application image will fail.

sealer run prometheus:2.30.0

Alternatively, use a Clusterfile to overwrite config files such as helm values.

Merge two application images

Using sealer merge, we can combine application images into one; we can also merge application images with cloud
images.

sealer merge mysql:8.0.26 redis:6.2.5 -t mysql-redis:latest

Raw docker BaseImage

Motivations

The existing base images mostly use a customized docker, but many k8s clusters use raw docker as the container runtime, so it is necessary to provide a base image with raw docker. This page is a guide to getting such a base image.

Use cases

How to use it

We provide an official BaseImage which uses official raw docker as its container runtime: kubernetes-rawdocker:v1.19.8. To create a k8s cluster, use it directly as the sealer run command's argument or write it into your Clusterfile. To use it as the base image for building other images with sealer build, FROM kubernetes-rawdocker:v1.19.8 should be the first line of your Kubefile.

How to build raw docker BaseImage

Step 1: choose a base image

Get an image that you will modify later; think of it as your base image. To demonstrate the workflow, I will use kubernetes:v1.19.8. You can get the same image by executing sealer pull kubernetes:v1.19.8.

Step 2: find the layers you will use later

Find the image layer ids by executing sealer inspect kubernetes:v1.19.8. There are four layers in this image, and you will only use two of them. The first one's id is c1aa4aff818df1dd51bd4d28decda5fe695bea8a9ae6f63a8dd5541c7640b3d6; it consists of bin files, config files, registry files, scripts and so on. (I will use {layer-id-1} to refer to it in the following; it is actually a sha256 string.) The other one's id is 991491d3025bd1086754230eee1a04b328b3d737424f1e12f708d651e6d66860; it consists of the network component yaml files. (I will use {layer-id-2} to refer to it in the following; it is also a sha256 string.)

Step 3: get official raw docker

Choose a raw docker binary version from https://download.docker.com/linux/static/stable/x86_64/ if your machine is based on the x86_64 architecture, and download it. (Other architectures can be found at https://download.docker.com/linux/static/stable/.)

Step 4: replace sealer hacked docker

Replace /var/lib/sealer/data/overlay2/{layer-id-1}/cri/docker.tar.gz with the file you downloaded in step 3. Before the replacement, some preparation is needed: make sure that after the replacement the compressed file name and the untarred directory tree are the same as before. In this case, untar the file you downloaded in step 3, enter the docker directory, and tar all files in that directory into an output file named docker.tar.gz.
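The repack is easy to get wrong (the archive root must stay the same), so here is a sketch rehearsed in a scratch directory. The stub dockerd file and scratch paths are assumptions for illustration; in practice the source is the tgz downloaded in step 3 and the destination is /var/lib/sealer/data/overlay2/{layer-id-1}/cri/docker.tar.gz.

```shell
# Rehearse the repack in a scratch directory; a stub stands in for the
# real docker release archive.
work=$(mktemp -d)
cd "$work"
# In practice: tar -xzf docker-<version>.tgz produces this docker/ directory.
mkdir docker
echo stub > docker/dockerd
cd docker
# repack from inside the directory so the archive root matches the original
tar -czf ../docker.tar.gz *
cd ..
tar -tzf docker.tar.gz    # the listing should show dockerd at the archive root
```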

Step 5: replace sealer hacked registry

Pull the official registry image and replace the existing customized registry image at /var/lib/sealer/data/overlay2/{layer-id-1}/images/registry.tar. First make sure raw docker is already installed, then execute docker pull registry:2.7.1 && docker save -o registry.tar registry:2.7.1 && mv registry.tar /var/lib/sealer/data/overlay2/{layer-id-1}/images/registry.tar.

Step 6: modify daemon.json

Edit the file daemon.json at /var/lib/sealer/data/overlay2/{layer-id-1}/etc/ and delete the mirror-registries attribute.

Step 7: build rawdocker alpine image

Switch to the directory /var/lib/sealer/data/overlay2/{layer-id-1}/, edit the Kubefile and make sure its content is:

FROM scratch
COPY . .

Then build the image by executing sealer build --mode lite -t kubernetes-rawdocker:v1.19.8-alpine .

Extension

Step 8: add network components to alpine image

Now the base image still needs network components to make k8s clusters work well; here is a guide for adding calico as the network component.
First of all, create a rawdockerBuild directory as your build environment. Then move the files tigera-operator.yaml and custom-resources.yaml from /var/lib/sealer/data/overlay2/{layer-id-2}/etc/ to rawdockerBuild/etc. After that you still need to modify some content in those two files so that the pods they create will pull docker images from your private registry, which keeps your k8s clusters working in offline situations. In this case, first add a key-value pair in custom-resources.yaml: the key is spec.registry and the value is sea.hub:5000. Second, modify all docker image names in tigera-operator.yaml from <registry>/<repository>/<imageName>:<imageTag> to sea.hub:5000/<repository>/<imageName>:<imageTag>.
Next create an imageList file in the rawdockerBuild directory, with the following content:

calico/cni:v3.19.1
calico/kube-controllers:v3.19.1
calico/node:v3.19.1
calico/pod2daemon-flexvol:v3.19.1
calico/typha:v3.19.1
tigera/operator:v1.17.4

These are all the images needed to create the network components; make sure each tag is consistent with the one declared in the yaml files tigera-operator.yaml and custom-resources.yaml.
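The image-name rewrite in tigera-operator.yaml can be scripted. The sed expression below is a sketch run against a stub file: the quay.io line is a hypothetical example of the <registry>/<repository>/<imageName>:<imageTag> form, and in practice you would run the same sed -i over rawdockerBuild/etc/tigera-operator.yaml.

```shell
work=$(mktemp -d)
# stub with one hypothetical image reference in the original form
cat > "$work/tigera-operator.yaml" <<'EOF'
          image: quay.io/tigera/operator:v1.17.4
EOF
# swap the <registry>/ prefix for the in-cluster registry sea.hub:5000/
sed -i 's#image: [^/]*/#image: sea.hub:5000/#' "$work/tigera-operator.yaml"
cat "$work/tigera-operator.yaml"
# the image line now reads: image: sea.hub:5000/tigera/operator:v1.17.4
```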

Step 9: build rawdocker image

Switch to the rawdockerBuild directory, create a Kubefile and make sure its content is:

FROM kubernetes-rawdocker:v1.19.8-alpine
COPY imageList manifests
COPY etc .
CMD kubectl apply -f etc/tigera-operator.yaml && kubectl apply -f etc/custom-resources.yaml

Then build the image by executing sealer build --mode lite -t kubernetes-rawdocker:v1.19.8 .

GPU CloudImage

Preparation

  1. Install nvidia driver on your host.
  2. Install the latest version of sealer on your host.

How to build it

We provide a GPU base image in our official registry
named registry.cn-qingdao.aliyuncs.com/sealer-io/kubernetes-nvidia:v1.19.8; you can use it directly. Meanwhile, we
provide the build context in the applications directory; it can be adjusted per your needs.

Run the command below to rebuild it.

sealer build -f Kubefile -t registry.cn-qingdao.aliyuncs.com/sealer-io/kubernetes-nvidia:v1.19.8 -m lite .

How to apply it

  1. Modify the Clusterfile according to your infra environment; here is an example Clusterfile.
apiVersion: sealer.cloud/v2
kind: Cluster
metadata:
  name: default-kubernetes-cluster
spec:
  image: registry.cn-qingdao.aliyuncs.com/sealer-io/kubernetes-nvidia:v1.19.8
  ssh:
    passwd: xxx
  hosts:
    - ips: [ 192.168.0.2,192.168.0.3,192.168.0.4 ]
      roles: [ master ]
    - ips: [ 192.168.0.5 ]
      roles: [ node ]
  2. Run the command sealer apply -f Clusterfile to apply the GPU cluster. It will take a few minutes.

How to check the result

  1. Check the pod status by running kubectl get pods -n kube-system nvidia-device-plugin; you should find the pod in
    Running status.
  2. Get the node details by running kubectl describe node; if nvidia.com/gpu shows in the 'Allocated resources'
    section, you have a k8s cluster with GPU.
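The check in step 2 can also be scripted. Below, the kubectl describe node output is simulated with a stub of the Allocated resources section (the values are made up); on a real cluster you would pipe kubectl describe node into the same grep, and any matching line means the GPU resource is registered.

```shell
# Simulated fragment of `kubectl describe node` (values are hypothetical)
printf '%s\n' \
  'Allocated resources:' \
  '  cpu              750m (18%)' \
  '  nvidia.com/gpu   1' |
grep 'nvidia.com/gpu'
# prints the nvidia.com/gpu line; no output would mean no GPU resource
```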