Command line reference
Check out the auto-generated command line reference.
If you’re using sealer, let us know.
This section is meant to answer the most frequently asked questions about sealer. It will be updated regularly.
In some cases, `sealer apply` fails and the error message alone is not enough to diagnose the problem. This section will guide you through cleaning up your hosts manually.
You may follow the clean-up steps below when `kubeadm init` failed.
Check for sealer-related mounts:

```shell
df -h | grep sealer
```
umount example:

```shell
umount /var/lib/sealer/data/my-cluster/rootfs
```

Then remove the cluster data and Kubernetes config:

```shell
rm -rf /var/lib/sealer/data/my-cluster
rm -rf /etc/kubernetes
```
Check for leftover containers:

```shell
docker ps
```
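If `df -h | grep sealer` reports several mounts, the unmount step can be automated; a minimal sketch (the helper name `sealer_mounts` is ours, not a sealer command):

```shell
# sealer_mounts: read `df` output on stdin and print only the mountpoints
# of sealer-related filesystems (the mountpoint is df's last column).
sealer_mounts() {
  grep sealer | awk '{print $NF}'
}

# usage (as root): unmount every sealer mountpoint in one pass
#   df -h | sealer_mounts | xargs -r -n1 umount
```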
You may follow the clean-up steps below if your cluster is up.
```shell
kubeadm reset -f
rm -rf $HOME/.kube/config
```
| Server IP address | 192.168.0.1 ~ 192.168.0.13 |
| --- | --- |
| Server password | sealer123 |
Run the Kubernetes cluster on the local servers:

```shell
sealer run kubernetes:v1.19.8 \
```
Check the cluster:

```shell
[root@iZm5e42unzb79kod55hehvZ ~]# kubectl get node
```
Use the join command to scale up the local servers:

```shell
sealer join \
```
Use the delete command to scale down the local servers:

```shell
sealer delete \
```

Or delete the whole cluster:

```shell
sealer delete --all
```
Sealer will also remove the infrastructure resources if you use cloud mode.
An application image contains an application with all its dependencies except the base image, so it can be installed onto an existing Kubernetes cluster. With application images, a cluster can be updated incrementally.

Just add the argument `--base=false` to build an application image. The size of an application image depends mostly on the size of the docker images it contains; without the rootfs, it becomes much slimmer.
For example, to build a Prometheus application image:

Kubefile:

```
FROM registry.cn-qingdao.aliyuncs.com/sealer-apps/openebs-localpv:2.11.0
```
Build command:

```shell
sealer build -f Kubefile -t prometheus:2.30.0 --base=false .
```
The image prometheus:2.30.0 will not contain the embedded Kubernetes cluster; it only contains the docker images the application needs, plus helm charts or other operator manifests.

Only sealer clusters are supported currently. Sealer hacked docker and the registry to cache docker images inside the cluster, so if the cluster was not installed by sealer, installing this application image will fail.
```shell
sealer run prometheus:2.30.0
```
Or use a Clusterfile to overwrite config files such as helm values.
Using `sealer merge`, we can combine application images into one; we can also merge application images with cloud images:

```shell
sealer merge mysql:8.0.26 redis:6.2.5 -t mysql-redis:latest
```
👉 Make sure to read the Code of Conduct.
👉 Follow the guide.
The existing base images mostly use a customized docker, but many k8s clusters use raw docker as the container runtime. So it is necessary to provide a base image with raw docker; this page is a guide on how to build one.
We provide an official base image which uses official raw docker as the container runtime: `kubernetes-rawdocker:v1.19.8`. If you want to create a k8s cluster, you can use it directly as the `sealer run` command’s argument or write it into your Clusterfile. If you want to use it as the base image to build other images with `sealer build`, `FROM kubernetes-rawdocker:v1.19.8` should be the first line in your Kubefile.
Get an image which you will modify later; think of it as your base image. To demonstrate the workflow, I will use `kubernetes:v1.19.8`. You can get the same image by executing `sealer pull kubernetes:v1.19.8`.
Find the image layer ids by executing `sealer inspect kubernetes:v1.19.8`. There are four layers in this image, and you will only use two of them. The first one’s id is `c1aa4aff818df1dd51bd4d28decda5fe695bea8a9ae6f63a8dd5541c7640b3d6`; it consists of bin files, config files, registry files, scripts and so on. (I will use {layer-id-1} to refer to it in the following; it is actually a sha256 string.) The other one’s id is `991491d3025bd1086754230eee1a04b328b3d737424f1e12f708d651e6d66860`; it consists of network component yaml files. (I will use {layer-id-2} to refer to it in the following; it is also a sha256 string.)
Choose a raw docker binary version from https://download.docker.com/linux/static/stable/x86_64/ if your machine is based on the x86_64 architecture, and download it. (Other architectures can be found at https://download.docker.com/linux/static/stable/.)
Replace `/var/lib/sealer/data/overlay2/{layer-id-1}/cri/docker.tar.gz` with the file you downloaded in step 3. Before the replacement you need to do some preparation: make sure that after the replacement the compressed file name and the untarred directory tree are the same as before. In this case, untar the file you downloaded in step 3, enter the `docker` directory, and tar all files in that directory into an output file named `docker.tar.gz`.
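The repackaging step above can be sketched as a small helper, assuming the downloaded tarball unpacks into a `docker/` directory, as the static builds from download.docker.com do (the function name `repack_docker` is ours, and the version number in the usage note is only an example):

```shell
# repack_docker TARBALL — untar a static docker release and re-tar its
# contents so the archive matches the layout sealer expects of docker.tar.gz
# (files at the archive root); writes docker.tar.gz into the current directory.
repack_docker() {
  tmp=$(mktemp -d)
  tar -xzf "$1" -C "$tmp"                    # unpacks into "$tmp/docker/"
  tar -czf docker.tar.gz -C "$tmp/docker" .  # re-tar the contents, not the directory
  rm -rf "$tmp"
}

# usage:
#   repack_docker docker-20.10.8.tgz
#   cp docker.tar.gz /var/lib/sealer/data/overlay2/{layer-id-1}/cri/docker.tar.gz
```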
Pull the official “registry” image and replace the existing customized “registry” image at `/var/lib/sealer/data/overlay2/{layer-id-1}/images/registry.tar`. First make sure raw docker is already installed, then execute `docker pull registry:2.7.1 && docker save -o registry.tar registry:2.7.1 && mv registry.tar /var/lib/sealer/data/overlay2/{layer-id-1}/images/registry.tar`.
Edit the file `daemon.json` at `/var/lib/sealer/data/overlay2/{layer-id-1}/etc/` and delete the `mirror-registries` attribute.
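The attribute can also be stripped programmatically instead of by hand; a minimal sketch, assuming `python3` is available on the host (the helper name `strip_mirror_registries` is ours):

```shell
# strip_mirror_registries FILE — remove the "mirror-registries" attribute
# from a docker daemon.json in place, leaving all other settings untouched.
strip_mirror_registries() {
  python3 - "$1" <<'EOF'
import json, sys

path = sys.argv[1]
with open(path) as f:
    cfg = json.load(f)
cfg.pop("mirror-registries", None)  # drop the key if present
with open(path, "w") as f:
    json.dump(cfg, f, indent=2)
EOF
}

# usage: strip_mirror_registries /var/lib/sealer/data/overlay2/{layer-id-1}/etc/daemon.json
```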
Switch to the directory `/var/lib/sealer/data/overlay2/{layer-id-1}/`, edit the `Kubefile` and make sure its content is:

```
FROM scratch
```

Then build the image by executing `sealer build --mode lite -t kubernetes-rawdocker:v1.19.8-alpine .`.
The base image still needs network components to make k8s clusters work well; here we provide a guide for adding calico as the network component.
First of all, create a `rawdockerBuild` directory as your build environment. Then move the files “tigera-operator.yaml” and “custom-resources.yaml” from `/var/lib/sealer/data/overlay2/{layer-id-2}/etc/` to `rawdockerBuild/etc`. After that, you still need to modify some contents in those two files to make sure the pods they create pull docker images from your private registry, which keeps your k8s clusters working in offline situations. In this case, first add a key-value pair in “custom-resources.yaml”: the key is `spec.registry` and the value is `sea.hub:5000`. Second, modify all docker image names in “tigera-operator.yaml” from `<registry>/<repository>/<imageName>:<imageTag>` to `sea.hub:5000/<repository>/<imageName>:<imageTag>`.
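For reference, a sketch of what the edited “custom-resources.yaml” looks like after the change; the surrounding fields come from the stock calico custom-resources.yaml, and only the `spec.registry` line is the addition this step describes:

```yaml
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  registry: sea.hub:5000   # calico pods now pull their images from the private registry
  # ...the rest of the stock spec (calicoNetwork and so on) stays unchanged
```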
Next, create an `imageList` file in the `rawdockerBuild` directory listing all the images needed to create the network components. Make sure each image tag is consistent with what is declared in the yaml files “tigera-operator.yaml” and “custom-resources.yaml”.
Switch to the `rawdockerBuild` directory, create a `Kubefile` and make sure its content is:

```
FROM kubernetes-rawdocker:v1.19.8-alpine
```

Then build the image by executing `sealer build --mode lite -t kubernetes-rawdocker:v1.19.8 .`.
We provide a GPU base image in our official registry named `registry.cn-qingdao.aliyuncs.com/sealer-io/kubernetes-nvidia:v1.19.8`. You can use it directly. Meanwhile, we provide the build context in the applications directory; it can be adjusted per your request.

Run the command below to rebuild it:

```shell
sealer build -f Kubefile -t registry.cn-qingdao.aliyuncs.com/sealer-io/kubernetes-nvidia:v1.19.8 -m lite .
```
```yaml
apiVersion: sealer.cloud/v2
```
Run `sealer apply -f Clusterfile` to apply the GPU cluster; it will take a few minutes. Run `kubectl get pods -n kube-system nvidia-device-plugin` and you should find the pod in the Running state. Run `kubectl describe node`; if `nvidia.com/gpu` shows in the ‘Allocated resources’ section, you can confirm that the GPU resources are registered with the cluster.