Using Cluster Image Plugins

Plugin Types

Hostname Plugin

The hostname plugin helps you change the hostname of every host.

---
apiVersion: sealer.aliyun.com/v1alpha1
kind: Plugin
metadata:
  name: MyHostname # plugin name; the config will be dumped to the $rootfs/plugins directory
spec:
  type: HOSTNAME # plugin type
  action: PreInit # stage at which the plugin runs
  data: |
    192.168.0.2 master-0
    192.168.0.3 master-1
    192.168.0.4 master-2
    192.168.0.5 node-0
    192.168.0.6 node-1
    192.168.0.7 node-2

Shell Plugin

You can execute arbitrary shell commands on specified nodes at any stage.

apiVersion: sealer.aliyun.com/v1alpha1
kind: Plugin
metadata:
  name: MyShell
spec:
  type: SHELL
  action: PostInstall # stage at which to run [PreInit | PreInstall | PostInstall | PostClean]
  on: node-role.kubernetes.io/master=
  data: |
    kubectl get nodes
action: the stage(s) at which to run the shell commands
  • Originally - before the image is mounted
  • PreInit - before initialization
  • PreJoin - before a node joins the cluster
  • PostJoin - after a node joins the cluster
  • PreInstall - before the cluster is installed
  • PostInstall - after the cluster is installed
  • PostClean - after the cluster is cleaned up
  • stages can be combined, e.g. action: PreInit|PreJoin
on: the machines on which to run the commands
  • empty - run on all nodes by default
  • on: master - run on all master nodes
  • on: node - run on all worker nodes
  • on: 192.168.56.113,192.168.56.114,192.168.56.115,192.168.56.116 - run on the listed IPs
  • on: 192.168.56.113-192.168.56.116 - run on a contiguous IP range
  • on: node-role.kubernetes.io/master= - run on nodes with the given label (action must be PostInstall)
data: the shell commands to execute
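Combining these fields: a hypothetical SHELL plugin (the name and command are illustrative, not from sealer's docs) that runs at both the PreInit and PreJoin stages on a contiguous IP range:

```yaml
apiVersion: sealer.aliyun.com/v1alpha1
kind: Plugin
metadata:
  name: MyRangeShell # hypothetical plugin name
spec:
  type: SHELL
  action: PreInit|PreJoin # two stages combined with |
  on: 192.168.56.113-192.168.56.116 # contiguous IP range
  data: |
    echo "preparing $(hostname)"
```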

Label Plugin

Helps you set node labels after the kubernetes cluster is installed.

apiVersion: sealer.aliyun.com/v1alpha1
kind: Plugin
metadata:
  name: MyLabel
spec:
  type: LABEL
  action: PostInstall
  data: |
    192.168.0.2 ssd=true
    192.168.0.3 ssd=true
    192.168.0.4 ssd=true
    192.168.0.5 ssd=false,hdd=true
    192.168.0.6 ssd=false,hdd=true
    192.168.0.7 ssd=false,hdd=true

Cluster Check Plugin

Server and environment factors (such as poor server disk performance) can cause applications deployed immediately after sealer installs the kubernetes cluster to fail. The cluster check plugin waits until the kubernetes cluster is stable before application services are deployed.

apiVersion: sealer.aliyun.com/v1alpha1
kind: Plugin
metadata:
  name: checkCluster
spec:
  type: CLUSTERCHECK
  action: PreGuest

Taint Plugin

If you add a taint plugin configuration to your Clusterfile and apply it, sealer will add or remove taints for you:

apiVersion: sealer.aliyun.com/v1alpha1
kind: Plugin
metadata:
  name: taint
spec:
  type: Taint
  action: PreGuest
  data: |
    192.168.56.3 key1=value1:NoSchedule
    192.168.56.4 key2=value2:NoSchedule-
    192.168.56.3-192.168.56.7 key3:NoSchedule
    192.168.56.3,192.168.56.4,192.168.56.5,192.168.56.6,192.168.56.7 key4:NoSchedule
    192.168.56.3 key5=:NoSchedule
    192.168.56.3 key6:NoSchedule-
    192.168.56.4 key7:NoSchedule-

The data format is ips taint_argument:

  • ips: multiple IPs are joined with commas; a contiguous range is written as first-ip-last-ip
  • taint_argument: the same syntax kubernetes uses to add or remove taints (key=value:effect; effect must be NoSchedule, PreferNoSchedule, or NoExecute, and a trailing - removes the taint)
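The ips syntax can be illustrated with a small parser. This is a hypothetical helper, not part of sealer; it assumes IPv4 addresses whose range varies only in the last octet, as in the examples above:

```python
def expand_ips(spec: str) -> list:
    """Expand a sealer-style ips field: comma-separated IPs and/or a
    first-ip-last-ip range such as 192.168.56.3-192.168.56.7."""
    ips = []
    for part in spec.split(","):
        part = part.strip()
        if "-" in part:
            first, last = part.split("-")
            prefix, lo = first.rsplit(".", 1)
            hi = last.rsplit(".", 1)[1]
            # assumption: the range expands over the last octet only
            ips.extend(f"{prefix}.{i}" for i in range(int(lo), int(hi) + 1))
        else:
            ips.append(part)
    return ips

print(expand_ips("192.168.56.3-192.168.56.7"))
```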

Etcd Backup Plugin

apiVersion: sealer.aliyun.com/v1alpha1
kind: Plugin
metadata:
  name: MyBackup
spec:
  type: ETCD
  action: PostInstall

Out-of-tree plugin

At present, we only support golang .so files (built with go build -buildmode=plugin) as out-of-tree plugins. For more about golang plugins, see the golang plugin website.

Copy the .so file and the plugin config into your cloud image at build stage using a Kubefile, and sealer will parse and execute it. To develop your own out-of-tree plugin, see sealer plugin.

plugin config:

apiVersion: sealer.aliyun.com/v1alpha1
kind: Plugin
metadata:
  name: label_nodes.so # out-of-tree plugin name
spec:
  type: LABEL_TEST_SO # define your own plugin type
  action: PostInstall # stage at which this plugin is applied
  data: |
    192.168.0.2 ssd=true

Kubefile:

FROM kubernetes:v1.19.8
COPY label_nodes.so plugin
COPY label_nodes.yaml plugin

Build a cluster image that contains the golang plugin (or more plugins):

sealer build -m lite -t kubernetes-post-install:v1.19.8 .

How to Use Plugins

Using plugins via a Clusterfile

For example, set node labels after the kubernetes cluster is installed:

apiVersion: sealer.cloud/v2
kind: Cluster
metadata:
  name: default-kubernetes-cluster
spec:
  image: kubernetes:v1.19.8
  ssh:
    passwd: xxx
  hosts:
    - ips: [ 192.168.0.2,192.168.0.3,192.168.0.4 ]
      roles: [ master ]
    - ips: [ 192.168.0.5 ]
      roles: [ node ]
---
apiVersion: sealer.aliyun.com/v1alpha1
kind: Plugin
metadata:
  name: LABEL
spec:
  type: LABEL
  action: PostInstall
  data: |
    192.168.0.5 ssd=false,hdd=true
sealer apply -f Clusterfile

Using default plugins in a Kubefile

In many cases plugins can be used without a Clusterfile. Under the hood, plugin configurations are stored in the $rootfs/plugins directory before they are used, so custom default plugins can be added when building an image.

Plugin configuration file shell.yaml:

apiVersion: sealer.aliyun.com/v1alpha1
kind: Plugin
metadata:
  name: taint
spec:
  type: SHELL
  action: PostInstall
  on: node-role.kubernetes.io/master=
  data: |
    kubectl get nodes
---
apiVersion: sealer.aliyun.com/v1alpha1
kind: Plugin
metadata:
  name: SHELL
spec:
  type: SHELL
  action: PostInstall
  data: |
    if type yum >/dev/null 2>&1;then
      yum -y install iscsi-initiator-utils
      systemctl enable iscsid
      systemctl start iscsid
    elif type apt-get >/dev/null 2>&1;then
      apt-get update
      apt-get -y install open-iscsi
      systemctl enable iscsid
      systemctl start iscsid
    fi

Kubefile:

FROM kubernetes:v1.19.8
COPY shell.yaml plugin

Build a cluster image containing the iscsi installation plugin (or more plugins):

sealer build -m lite -t kubernetes-iscsi:v1.19.8 .

After a cluster is started from this image, the plugin will also be executed, without defining it in the Clusterfile:
sealer run kubernetes-iscsi:v1.19.8 -m x.x.x.x -p xxx

Quick Start

Create a kubernetes cluster with sealer

# download and install the sealer binary
wget https://github.com/alibaba/sealer/releases/download/v0.7.1/sealer-v0.7.1-linux-amd64.tar.gz && \
tar zxvf sealer-v0.7.1-linux-amd64.tar.gz && mv sealer /usr/bin
# run a six-node kubernetes cluster
sealer run kubernetes:v1.19.8 \
  --masters 192.168.0.2,192.168.0.3,192.168.0.4 \
  --nodes 192.168.0.5,192.168.0.6,192.168.0.7 --passwd xxx
[root@iZm5e42unzb79kod55hehvZ ~]# kubectl get node
NAME                      STATUS   ROLES    AGE   VERSION
izm5e42unzb79kod55hehvz   Ready    master   18h   v1.19.8
izm5ehdjw3kru84f0kq7r7z   Ready    master   18h   v1.19.8
izm5ehdjw3kru84f0kq7r8z   Ready    master   18h   v1.19.8
izm5ehdjw3kru84f0kq7r9z   Ready    <none>   18h   v1.19.8
izm5ehdjw3kru84f0kq7raz   Ready    <none>   18h   v1.19.8
izm5ehdjw3kru84f0kq7rbz   Ready    <none>   18h   v1.19.8

Adding and deleting nodes

sealer join --masters 192.168.0.2,192.168.0.3,192.168.0.4
sealer join --nodes 192.168.0.5,192.168.0.6,192.168.0.7

Clean up the cluster

Creating a cluster generates a default Clusterfile stored at /root/.sealer/[cluster-name]/Clusterfile, containing the cluster's metadata.

Delete the cluster:

sealer delete -f /root/.sealer/my-cluster/Clusterfile
# or
sealer delete --all

Customizing a Cluster Image

The kubernetes:v1.19.8 image above is a standard cluster image. Sometimes we want the cluster image to include custom components of our own, which is what this feature is for.

For example, build a cluster image that includes the dashboard:

Kubefile:

# the base image contains everything needed to install kubernetes; sealer has prebuilt it and it can be used directly
FROM kubernetes:v1.19.8
# download the dashboard yaml file
RUN wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml
# command executed when the cluster starts
CMD kubectl apply -f recommended.yaml

Build the cluster image:

sealer build -t dashboard:latest .

Run the cluster image; the resulting cluster includes the dashboard:

# sealer starts a kubernetes cluster and deploys the dashboard into it
sealer run dashboard:latest --masters 192.168.0.2 --passwd xxx
# check the dashboard pod
kubectl get pod -A|grep dashboard

Push the cluster image to an image registry

sealer tag dashboard:latest registry.cn-qingdao.aliyuncs.com/sealer-io/dashboard:latest
sealer push registry.cn-qingdao.aliyuncs.com/sealer-io/dashboard:latest

Save and load images

sealer save -o dashboard.tar dashboard:latest
# the tar can be copied to the customer environment and loaded there
sealer load -i dashboard.tar

Overview

sealer [ˈsiːlər] packages an entire cluster into an image, the way docker does for a single application, enabling distributed software to be built, delivered, and run.

Use cases:

  • Installing kubernetes clusters
  • Packaging a kubernetes cluster together with databases, middleware, and SaaS applications for one-click delivery
  • Offline delivery, multi-architecture support, and support for domestic (Chinese) platforms
  • Delivering any distributed application orchestrated on kubernetes

Write a Kubefile, very similar to a Dockerfile, to build a cluster image, and use a Clusterfile to run a cluster.

Docker images solved packaging for single applications, but not for distributed applications. Orchestration tools such as helm solve orchestration, not packaging.

There is currently no packaging standard for clusters: building a customized kubernetes cluster is complex, and deploying a whole cluster plus its distributed applications is procedural, so delivery problems keep piling up without clean solutions and whole-cluster delivery consistency is poor.

This is especially true in private cloud delivery, where a distributed application often has a great many configurations, application images, and dependencies, and in some cases must even be delivered offline, making the delivery process extremely challenging.

The cluster image treats the whole cluster as a single server and k8s as the cloud operating system, packaging and delivering the entire cluster as an image and giving enterprise software an out-of-the-box application packaging technology.

It standardizes and packages all of an application's dependencies in a very simple way, runs them into a customer's cluster with one command, is compatible with complex infrastructures, and ensures that a cluster image that builds without problems also runs without problems.

A cluster image marketplace will offer many pre-built, reusable images; users can combine these image services with their own applications like building blocks. The databases and message queues a SaaS application depends on, and even k8s itself, can be found directly in the marketplace.

Cluster image technology ultimately lets an enterprise bring up a complex customized cluster with one command, greatly improving delivery efficiency and lowering the delivery error rate; directly reusing mature, stable components also greatly improves software stability.

Delivery engineers no longer need to worry about complex deployment details, which solves the collaboration problem between software producers and consumers.

Save helm chart package

Sealer supports saving a raw helm chart package into a cloud image in OCI format. With this feature, the helm chart package can be pulled in an offline production environment.

Prerequisites

Prepare two nodes, the build node and the run node, and install sealer and helm on both.

Examples

On the build node.

Start a docker registry to convert the helm chart package to OCI format:

docker run -p 5000:5000  --restart=always --name registry -v /registry/:/var/lib/registry -d registry

Use helm push to save the helm chart package to the registry:

export HELM_EXPERIMENTAL_OCI=1
helm push mysql-8.8.25.tgz oci://localhost:5000/helm-charts

Use sealer build to save the helm chart package from the local registry into a cloud image.

Prepare Kubefile:

[root@iZbp16ikro46xwgqzij67sZ build]# cat Kubefile
FROM kubernetes:v1.19.8
COPY imageList manifests

Prepare imageList file:

[root@iZbp16ikro46xwgqzij67sZ build]# cat imageList
localhost:5000/helm-charts/mysql:8.8.25
localhost:5000/helm-charts/nginx:9.8.0

Then run sealer build -t my-kubernetes:v1.19.8 -f Kubefile . and use sealer save my-kubernetes:v1.19.8 -o my-kubernetes.tar to save the image to the local filesystem.

On the run node.

Load the image my-kubernetes.tar from the build node using sealer load -i my-kubernetes.tar.

Use sealer run to start the cluster:

sealer run -d my-kubernetes:v1.19.8 -p password -m 172.16.0.230

Pull the Helm chart on the run node.

When the cluster is up, pull the helm chart package using helm pull:

export HELM_EXPERIMENTAL_OCI=1
helm pull oci://sea.hub:5000/helm-charts/mysql --version 8.8.25

Save ACR chart

An example pulling the chart chart-registry.cn-shanghai.cr.aliyuncs.com/aliyun-inc.com/elasticsearch:1.0.1-elasticsearch.elasticsearch.

  1. Log in to your ACR registry
sealer login chart-registry.cn-shanghai.cr.aliyuncs.com \
--username cnx-platform@prod.trusteeship.aliyunid.com --passwd xxx
  2. Create Kubefile and imageList
[root@iZ2zeasfsez3jrior15rpbZ chart]# cat imageList
chart-registry.cn-shanghai.cr.aliyuncs.com/aliyun-inc.com/elasticsearch:1.0.1-elasticsearch.elasticsearch
[root@iZ2zeasfsez3jrior15rpbZ chart]# cat Kubefile
FROM kubernetes:v1.19.8
COPY imageList manifests
  3. Build the CloudImage and save the remote ACR chart to the local registry
sealer build -t chart:latest .
  4. Run a cluster
sealer run chart:latest -m x.x.x.x -p xxx
  5. Pull the chart from the local registry using helm
[root@iZ2zeasfsez3jrior15rpbZ certs]# helm pull oci://sea.hub:5000/aliyun-inc.com/elasticsearch --version 1.0.1-elasticsearch.elasticsearch
Warning: chart media type application/tar+gzip is deprecated
Pulled: sea.hub:5000/aliyun-inc.com/elasticsearch:1.0.1-elasticsearch.elasticsearch
Digest: sha256:c247fd56b985cfa4ad58c8697dc867a69ee1861a1a625b96a7b9d78ed5d9df95
[root@iZ2zeasfsez3jrior15rpbZ certs]# ls
elasticsearch-1.0.1-elasticsearch.elasticsearch.tgz

If you get the error Error: failed to do request: Head "https://sea.hub:5000/v2/aliyun-inc.com/elasticsearch/manifests/1.0.1-elasticsearch.elasticsearch": x509: certificate signed by unknown authority, trust the registry cert on your host:

cp /var/lib/sealer/data/my-cluster/certs/sea.hub.crt /etc/pki/ca-trust/source/anchors/ && update-ca-trust extract

Define your own CloudRootfs

All the files a kubernetes cluster needs to run.

Contains:

  • Bin files, like docker, containerd, crictl, kubeadm, kubectl…
  • Config files, like the kubelet systemd config, the docker systemd config, docker daemon.json…
  • The registry docker image.
  • Some metadata, like the Kubernetes version.
  • Registry files, containing all the docker images, such as the kubernetes core component images…
  • Scripts, shell scripts used to install docker and kubelet… sealer will call init.sh and clean.sh.
  • Other static files
.
├── bin
│   ├── conntrack
│   ├── containerd-rootless-setuptool.sh
│   ├── containerd-rootless.sh
│   ├── crictl
│   ├── kubeadm
│   ├── kubectl
│   ├── kubelet
│   ├── nerdctl
│   └── seautil
├── cri
│   ├── containerd
│   ├── containerd-shim
│   ├── containerd-shim-runc-v2
│   ├── ctr
│   ├── docker
│   ├── dockerd
│   ├── docker-init
│   ├── docker-proxy
│   ├── rootlesskit
│   ├── rootlesskit-docker-proxy
│   ├── runc
│   └── vpnkit
├── etc
│   ├── 10-kubeadm.conf
│   ├── Clusterfile # image default Clusterfile
│   ├── daemon.json
│   ├── docker.service
│   ├── kubeadm-config.yaml
│   └── kubelet.service
├── images
│   └── registry.tar # registry docker image; sealer loads it and runs a local registry in the cluster
├── Kubefile
├── Metadata
├── README.md
├── registry # this dir is mounted into the local registry
│   └── docker
│       └── registry
├── scripts
│   ├── clean.sh
│   ├── docker.sh
│   ├── init-kube.sh
│   ├── init-registry.sh
│   ├── init.sh
│   └── kubelet-pre-start.sh
└── statics # yaml files; sealer renders values in these files
    └── audit-policy.yml

How can I get CloudRootfs

  1. Pull a BaseImage sealer pull kubernetes:v1.19.8-alpine
  2. View the image layer information sealer inspect kubernetes:v1.19.8-alpine
  3. Get into the BaseImage Layer ls /var/lib/sealer/data/overlay2/{layer-id}

You will find the CloudRootfs layer.

Build your own BaseImage

You can edit any file in CloudRootfs. For example, to define your own docker daemon.json, just edit it and build a new CloudImage.

FROM scratch
COPY . .
sealer build -t user-defined-kubernetes:v1.19.8 .

Then you can use this image as a BaseImage.

Overwrite CloudRootfs files

Sometimes you don't want to deal with the whole CloudRootfs context but only need to customize some config.

You can use kubernetes:v1.19.8 as the BaseImage, and use your own config file to overwrite the default file in CloudRootfs.

For example, daemon.json is your docker engine config; use it to overwrite the default config:

FROM kubernetes:v1.19.8
COPY daemon.json etc/
sealer build -t user-defined-kubernetes:v1.19.8 .


Clusterfile definition

Install to existing servers; the provider is BAREMETAL:

apiVersion: sealer.aliyun.com/v1alpha1
kind: Cluster
metadata:
  name: my-cluster
spec:
  image: registry.cn-qingdao.aliyuncs.com/sealer-io/kubernetes:v1.19.8
  provider: BAREMETAL
  ssh: # host ssh config
    # ssh login password; not needed if you use a key
    passwd:
    # absolute path of the ssh private key file, for example /root/.ssh/id_rsa
    pk: xxx
    # ssh private key passphrase
    pkPasswd: xxx
    # ssh login user
    user: root
  network:
    podCIDR: 100.64.0.0/10
    svcCIDR: 10.96.0.0/22
  certSANS:
    - aliyun-inc.com
    - 10.0.0.2
  masters:
    ipList:
      - 172.20.125.1
      - 172.20.126.2
      - 172.20.126.3
  nodes:
    ipList:
      - 172.20.126.7
      - 172.20.126.8
      - 172.20.126.9

To have ali cloud servers provisioned automatically for the installation, set the provider to ALI_CLOUD; to install into containers, set the provider to CONTAINER:

apiVersion: sealer.aliyun.com/v1alpha1
kind: Cluster
metadata:
  name: my-cluster
spec:
  image: registry.cn-qingdao.aliyuncs.com/sealer-io/kubernetes:v1.19.8 # name of CloudImage
  provider: ALI_CLOUD # or CONTAINER
  ssh: # custom host ssh config
    passwd: xxx
    pk: xxx
    pkPasswd: xxx
    user: root
  network:
    podCIDR: 100.64.0.0/10
    svcCIDR: 10.96.0.0/22
  certSANS:
    - aliyun-inc.com
    - 10.0.0.2
  masters: # you can specify the number of servers, system disk, data disks, cpu and memory size
    cpu: 4
    memory: 8
    count: 3
    systemDisk: 100
    dataDisks:
      - 100
  nodes:
    cpu: 5
    memory: 8
    count: 3
    systemDisk: 100
    dataDisks:
      - 100
status: {}

Kubefile instruction

A Kubefile is a text document that contains all the commands a user could call on the command line to assemble an image. We can use a Kubefile to define a cluster image that can be shared and deployed offline. A Kubefile is just like a Dockerfile: it contains the build instructions that define a specific cluster.

FROM instruction

The FROM instruction defines the base image you want to reference, and the first instruction in a Kubefile must be FROM. Registry authentication information is required if the base image is private. Official base images are available from the Sealer community.

Command format: FROM {your base image name}

USAGE:

For example, use the base image kubernetes:v1.19.8 provided by the Sealer community to build a new cloud image.

FROM registry.cn-qingdao.aliyuncs.com/sealer-io/kubernetes:v1.19.8

COPY instruction

The COPY instruction copies files or directories from the build context into the rootfs. Every cloud image is based on the rootfs, and the default destination path is the rootfs. If the specified destination directory does not exist, sealer creates it automatically.

Command format: COPY {src dest}

USAGE:

For example, copy mysql.yaml to rootfs/mysql.yaml:

COPY mysql.yaml .

For example, copy the directory apollo to rootfs/charts/apollo:

COPY apollo charts

RUN instruction

The RUN instruction executes any command in a new layer on top of the current image and commits the result. The committed image is used for the next step in the Kubefile.

Command format: RUN {command args …}

USAGE:

For example, use the RUN instruction to download the kubernetes dashboard yaml:

RUN wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml

CMD instruction

The format of the CMD instruction is similar to RUN, and it likewise executes commands in a new layer. However, CMD commands are executed when the cluster starts; they are generally used to start applications or configure the cluster. Unlike Dockerfile CMD, if you list more than one CMD in a Kubefile, all of them take effect.

Command format: CMD {command args …}

USAGE:

For example, use the CMD instruction to apply the kubernetes dashboard yaml:

CMD kubectl apply -f recommended.yaml
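Putting the four instructions together, a complete Kubefile assembled from the examples in this section would look like:

```dockerfile
# base image from the Sealer community
FROM registry.cn-qingdao.aliyuncs.com/sealer-io/kubernetes:v1.19.8
# copy a local manifest into the rootfs
COPY mysql.yaml .
# executed at build time and committed as a new layer
RUN wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml
# executed when the cluster starts
CMD kubectl apply -f recommended.yaml
```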