Case Study
1. Node Planning
Node planning is shown in Table 1.
Table 1  Node planning
IP | Hostname | Role |
---|---|---|
192.168.100.10 | k8s-master-node1 | master node, registry node |
192.168.100.20 | k8s-worker-node1 | worker node |
2. Preparing the Base Environment
Download the installation package chinaskills_cloud_paas_v2.1.iso to the /root directory on the master node, then extract its contents to the /opt directory:
[root@localhost ~]# ll
total 2310736
-rw-------. 1 root root 1580 Aug 28 00:20 anaconda-ks.cfg
-rw-r--r--. 1 root root 2366189568 Oct 16 22:18 chinaskills_cloud_paas_v2.1.iso
(1) Mount the image
[root@localhost ~]# mount chinaskills_cloud_paas_v2.1.iso /mnt/
mount: /dev/loop0 is write-protected, mounting read-only
[root@localhost ~]# cp -rf /mnt/* /opt/
[root@localhost ~]# umount /mnt/
3. Deploying the Kubernetes Cluster
(1) An overview of Kubernetes 1.25
Kubernetes 1.25 was officially released on August 23, 2022, as the second release of the year. It brings more than 40 enhancements: 13 features graduate to stable, 10 improve existing functionality, 15 are brand new, and 2 are deprecations. In enhancement count, 1.25 is roughly on par with the previous two releases; Kubernetes continues to move forward steadily at its own pace.
Kubernetes v1.25 also ships with a brand-new release theme (Combiner) and logo, as shown in Figure 1:
Figure 1
The main changes in this release are as follows:
① Storage
Kubernetes storage plugins come in two kinds: in-tree and out-of-tree. Simply put, an in-tree plugin is maintained inside the main Kubernetes repository, while an out-of-tree plugin is maintained independently. Kubernetes used to carry many in-tree plugins, which made the code hard to maintain and bloated the main repository, so three years ago the storage SIG began migrating in-tree storage plugins out of the Kubernetes core, turning them into out-of-tree plugins. In v1.25, many in-tree storage plugins are removed, including GlusterFS, Flocker, Quobyte, and StorageOS.
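With the in-tree plugins gone, the same storage is consumed through CSI drivers instead. A minimal sketch of what that looks like on the cluster side; the provisioner name `example.csi.vendor.com` is a placeholder, not a real driver:

```shell
# With in-tree plugins removed, storage is wired up through CSI drivers;
# a StorageClass only needs to name the external provisioner.
# "example.csi.vendor.com" is a placeholder driver name.
cat > /tmp/csi-sc-demo.yaml << 'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-demo
provisioner: example.csi.vendor.com
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
EOF
# On a live cluster: kubectl apply -f /tmp/csi-sc-demo.yaml
```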
② Networking
When initializing a cluster, choosing the right CIDR is hard: too large wastes address space, while too small means the cluster may run out of IPs as workloads grow, and by then it becomes a thorn in your side.
Because the CIDR is set at cluster startup and cannot be changed afterwards, in the worst case the only way out is to tear the cluster down and start over.
Kubernetes 1.25 introduces a new capability to configure CIDRs dynamically through ClusterCIDRConfig, which nicely solves the problems above.
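A rough sketch of the new mechanism. In v1.25 this is an alpha API, served as kind `ClusterCIDR` under `networking.k8s.io/v1alpha1`, and it requires the `MultiCIDRRangeAllocator` feature gate on the control plane; the extra CIDR below is illustrative:

```shell
# Alpha in v1.25: an additional pod CIDR the range allocator can hand out
# to matching nodes, without recreating the cluster.
cat > /tmp/clustercidr-demo.yaml << 'EOF'
apiVersion: networking.k8s.io/v1alpha1
kind: ClusterCIDR
metadata:
  name: extra-pod-cidr
spec:
  perNodeHostBits: 8              # each node gets a /24 from this range
  ipv4: 10.245.0.0/16             # illustrative extra CIDR
  nodeSelector:
    nodeSelectorTerms:
      - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values: ["k8s-worker-node1"]
EOF
# kubectl apply -f /tmp/clustercidr-demo.yaml   # needs the feature gate enabled
```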
③ Security
PodSecurityPolicy was marked deprecated in Kubernetes 1.21, and in 1.25 it is removed for good.
There is no need to worry, though: Kubernetes treats every breaking change with great caution, and the removal of PodSecurityPolicy is no exception. It was dropped because it was widely criticized as obscure and hard to use, and after deliberation the community chose to remove it.
A better replacement is provided: Pod Security Admission, which graduates to stable in 1.25, so users can migrate to it with confidence.
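Migration is mostly a matter of labeling namespaces. A minimal sketch, where the namespace name `demo` is hypothetical:

```shell
# Pod Security Admission is driven by namespace labels; this enforces the
# "baseline" profile and warns about "restricted" violations.
cat > /tmp/ns-demo.yaml << 'EOF'
apiVersion: v1
kind: Namespace
metadata:
  name: demo
  labels:
    pod-security.kubernetes.io/enforce: baseline
    pod-security.kubernetes.io/warn: restricted
EOF
# kubectl apply -f /tmp/ns-demo.yaml
```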
(2) Installing the Kubernetes cluster
The Kubernetes version installed here is v1.25.2.
Run the following command on the master node to deploy the Kubernetes cluster:
[root@localhost ~]# kubeeasy install depend \
--host 192.168.100.10,192.168,100.20 \
--user root \
--password 000000 \
--offline-file /opt/dependencies/packages.tar.gz
The command fails with the following output:
[2024-10-16 22:20:19] INFO: [start] bash kubeeasy install depend --host 192.168.100.10,192.168,100.20 --user root --password ****** --offline-file /opt/dependencies/packages.tar.gz
[2024-10-16 22:20:19] INFO: [offline] unzip offline dependencies package on local.
[2024-10-16 22:20:20] INFO: [offline] unzip offline dependencies package succeeded.
[2024-10-16 22:20:20] INFO: [install] install dependencies packages on local.
[2024-10-16 22:20:21] INFO: [install] install dependencies packages succeeded.
[2024-10-16 22:20:21] INFO: [offline] 192.168.100.10: load offline dependencies file
[2024-10-16 22:20:21] ERROR: [offline] load offline dependencies file to 192.168.100.10 failed.
ERROR Summary:
[2024-10-16 22:20:21] ERROR: [offline] load offline dependencies file to 192.168.100.10 failed.
See detailed log >> /var/log/kubeinstall.log
(3) Inspecting the log
[root@localhost ~]# vi /var/log/kubeinstall.log
...
Error: Package: 3:docker-ce-24.0.5-1.el7.x86_64 (/docker-ce-24.0.5-1.el7.x86_64)
           Requires: libseccomp >= 2.3
Error: Package: slirp4netns-0.4.3-4.el7_8.x86_64 (/slirp4netns-0.4.3-4.el7_8.x86_64)
           Requires: libseccomp.so.2()(64bit)
Error: Package: containerd.io-1.6.21-3.1.el7.x86_64 (/containerd.io-1.6.21-3.1.el7.x86_64)
           Requires: libseccomp
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest
[2024-10-16 22:20:21] INFO: [install] install dependencies packages succeeded.
[2024-10-16 22:20:21] EXEC: [command] sshpass -p "******" ssh -o ConnectTimeout=600 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -q root@192.168.100.10 -p 22 bash -c 'sed -i -e '"'"'s/#UseDNS yes/UseDNS no/g'"'"' -e '"'"'s/GSSAPIAuthentication yes/GSSAPIAuthentication no/g'"'"' /etc/ssh/sshd_config
sed -i '"'"'s/# StrictHostKeyChecking ask/ StrictHostKeyChecking no/g'"'"' /etc/ssh/ssh_config
systemctl restart sshd'
kubeeasy: line 255: sshpass: command not found
[2024-10-16 22:20:21] INFO: [offline] 192.168.100.10: load offline dependencies file
[2024-10-16 22:20:21] EXEC: [command] sshpass -p "******" scp -o ConnectTimeout=600 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -q -P 22 -r /tmp/packages root@192.168.100.10:/tmp
kubeeasy: line 287: sshpass: command not found
[2024-10-16 22:20:21] ERROR: [offline] load offline dependencies file to 192.168.100.10 failed.
4. Resolving the Errors
The log points to the chain of failures: the offline dependency transaction cannot complete because libseccomp is unavailable, so the tools it would have installed (including sshpass) are missing, which in turn makes the file copy to the nodes fail. Configuring a local yum repository and installing libseccomp resolves this.
(1) Configure the yum repository
[root@localhost ~]# mount /dev/cdrom /mnt/
mount: /dev/sr0 is write-protected, mounting read-only
[root@localhost ~]# mkdir /opt/centos
[root@localhost ~]# cp -rf /mnt/* /opt/centos/
[root@localhost ~]# ll /opt/centos/
total 328
-rw-r--r--. 1 root root 14 May 27 21:41 CentOS_BuildTag
drwxr-xr-x. 3 root root 35 May 27 21:41 EFI
-rw-r--r--. 1 root root 227 May 27 21:41 EULA
-rw-r--r--. 1 root root 18009 May 27 21:41 GPL
drwxr-xr-x. 3 root root 57 May 27 21:41 images
drwxr-xr-x. 2 root root 198 May 27 21:41 isolinux
drwxr-xr-x. 2 root root 43 May 27 21:41 LiveOS
drwxr-xr-x. 2 root root 225280 May 27 21:41 Packages
drwxr-xr-x. 2 root root 4096 May 27 21:41 repodata
-rw-r--r--. 1 root root 1690 May 27 21:41 RPM-GPG-KEY-CentOS-7
-rw-r--r--. 1 root root 1690 May 27 21:41 RPM-GPG-KEY-CentOS-Testing-7
-r--r--r--. 1 root root 2883 May 27 21:41 TRANS.TBL
(2) Fixing a failed mount
If the mount succeeded, skip this step.
[root@localhost ~]# mount /dev/cdrom /mnt/
mount: /dev/sr0 is already mounted or /mnt busy/dev/sr0 is already mounted on /mnt
In the VMware window, find the CD-ROM icon in the lower-right corner.
Right-click the CD-ROM icon and choose Connect.
Try the mount again:
[root@localhost ~]# mount /dev/cdrom /mnt/
mount: /dev/sr0 is write-protected, mounting read-only
[root@localhost ~]# ll /mnt/
total 696
-rw-r--r--. 3 root root 14 Oct 30 2020 CentOS_BuildTag
drwxr-xr-x. 3 root root 2048 Oct 27 2020 EFI
-rw-rw-r--. 21 root root 227 Aug 30 2017 EULA
-rw-rw-r--. 21 root root 18009 Dec 10 2015 GPL
drwxr-xr-x. 3 root root 2048 Oct 27 2020 images
drwxr-xr-x. 2 root root 2048 Nov 3 2020 isolinux
drwxr-xr-x. 2 root root 2048 Oct 27 2020 LiveOS
drwxr-xr-x. 2 root root 673792 Nov 4 2020 Packages
drwxr-xr-x. 2 root root 4096 Nov 4 2020 repodata
-rw-rw-r--. 21 root root 1690 Dec 10 2015 RPM-GPG-KEY-CentOS-7
-rw-rw-r--. 21 root root 1690 Dec 10 2015 RPM-GPG-KEY-CentOS-Testing-7
-r--r--r--. 1 root root 2883 Nov 4 2020 TRANS.TBL
[root@localhost ~]# mv /etc/yum.repos.d/CentOS-* /tmp/
[root@localhost ~]# cp -rvf /mnt/* /opt/centos/
[root@localhost ~]# umount /mnt/
(3) Create the local yum repository file
[root@localhost ~]# cat > /etc/yum.repos.d/centos.repo << EOF
[centos]
name=centos
baseurl=file:///opt/centos
gpgcheck=0
EOF
Running kubeeasy clears the yum repository files, so back this one up first:
[root@localhost ~]# cp /etc/yum.repos.d/centos.repo .
(4) Install the libseccomp package
[root@localhost ~]# yum install -y libseccomp
About the project
Libseccomp is an open-source library that exposes a powerful Linux security-hardening facility: seccomp (Secure Computing). Its goal is to let applications restrict the system calls available to themselves or to specific processes, reducing the attack surface and improving system security.
Seccomp is a Linux kernel feature that lets a program define a set of rules filtering which system calls may be executed and which are blocked, a mechanism that is very effective against malicious code and zero-day attacks. Libseccomp is the user-space interface to seccomp, providing a high-level API and tooling so that developers can use this feature without kernel-level programming.
Libseccomp offers the following key features:
- Filter syntax: rules are defined on top of BPF (Berkeley Packet Filter), an efficient, compiled instruction set used to describe system-call filtering policies.
- API support: a C library interface that integrates easily into software, with Go and Python bindings also available.
- Dynamic policy adjustment: filter rules can be modified at runtime to adapt to the security needs of different scenarios.
- Compatibility: broad support across Linux kernel versions, including fallback behavior for older kernels.
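After the install, a quick sanity check that the shared library the failing packages required is now resolvable (output varies by system):

```shell
# The earlier yum errors were "Requires: libseccomp" from docker-ce,
# slirp4netns, and containerd.io; confirm the library is now visible
# to the dynamic linker.
ldconfig -p | grep libseccomp || echo "libseccomp not found"
```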
5. Deploying the vsftpd Service
(1) Install the service
[root@localhost ~]# yum install -y vsftpd
(2) Set /opt as the anonymous user's root directory
[root@localhost ~]# cat >> /etc/vsftpd/vsftpd.conf << EOF
anon_root=/opt/
EOF
Enable vsftpd to start on boot, then restart the service:
[root@localhost ~]# systemctl enable vsftpd --now
[root@localhost ~]# systemctl restart vsftpd
(3) Configure the yum repository on the worker node
[root@localhost ~]# mv /etc/yum.repos.d/CentOS-* /tmp/
[root@localhost ~]# cat > /etc/yum.repos.d/centos.repo << EOF
[centos]
name=centos
baseurl=ftp://192.168.100.10/centos
gpgcheck=0
EOF
Disable the firewall and SELinux on the master node:
[root@localhost ~]# systemctl disable firewalld --now && setenforce 0
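Note that `setenforce 0` only lasts until the next reboot. To make SELinux permissive mode persistent as well (a common extra step, not part of the original run):

```shell
# Optionally persist SELinux permissive mode across reboots.
if [ -f /etc/selinux/config ]; then
  sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
fi
```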
(4) Install the libseccomp package
[root@localhost ~]# yum install -y libseccomp
6. Redeploying the Kubernetes Cluster
(1) Install the dependency packages
[root@localhost ~]# kubeeasy install depend \
--host 192.168.100.10,192.168.100.20 \
--user root \
--password 000000 \
--offline-file /opt/dependencies/packages.tar.gz
The output is as follows:
[2024-10-16 23:07:36] INFO: [start] bash kubeeasy install depend --host 192.168.100.10,192.168.100.20 --user root --password ****** --offline-file /opt/dependencies/packages.tar.gz
[2024-10-16 23:07:36] INFO: [offline] unzip offline dependencies package on local.
[2024-10-16 23:07:37] INFO: [offline] unzip offline dependencies package succeeded.
[2024-10-16 23:07:37] INFO: [install] install dependencies packages on local.
[2024-10-16 23:07:37] INFO: [install] install dependencies packages succeeded.
[2024-10-16 23:07:37] INFO: [offline] 192.168.100.10: load offline dependencies file
[2024-10-16 23:07:39] INFO: [offline] load offline dependencies file to 192.168.100.10 succeeded.
[2024-10-16 23:07:39] INFO: [install] 192.168.100.10: install dependencies packages
[2024-10-16 23:07:39] INFO: [install] 192.168.100.10: install dependencies packages succeeded.
[2024-10-16 23:07:44] INFO: [offline] 192.168.100.20: load offline dependencies file
[2024-10-16 23:07:47] INFO: [offline] load offline dependencies file to 192.168.100.20 succeeded.
[2024-10-16 23:07:47] INFO: [install] 192.168.100.20: install dependencies packages
[2024-10-16 23:08:23] INFO: [install] 192.168.100.20: install dependencies packages succeeded.
See detailed log >> /var/log/kubeinstall.log
(2) Install the Kubernetes cluster
[root@localhost ~]# kubeeasy install kubernetes \
--master 192.168.100.10 \
--worker 192.168.100.20 \
--user root \
--password 000000 \
--version 1.25.2 \
--offline-file /opt/kubeeasy.tar.gz
The output is as follows:
[2024-10-16 23:11:33] INFO: [start] bash kubeeasy install kubernetes --master 192.168.100.10 --worker 192.168.100.20 --user root --password ****** --version 1.25.2 --offline-file /opt/kubeeasy.tar.gz
[2024-10-16 23:11:33] INFO: [check] sshpass command exists.
[2024-10-16 23:11:33] INFO: [check] rsync command exists.
[2024-10-16 23:11:33] INFO: [check] ssh 192.168.100.10 connection succeeded.
[2024-10-16 23:11:33] INFO: [check] ssh 192.168.100.20 connection succeeded.
[2024-10-16 23:11:33] INFO: [offline] unzip offline package on local.
[2024-10-16 23:12:04] INFO: [offline] unzip offline package succeeded.
[2024-10-16 23:12:04] INFO: [offline] master 192.168.100.10: load offline file
[2024-10-16 23:12:04] INFO: [offline] load offline file to 192.168.100.10 succeeded.
[2024-10-16 23:12:04] INFO: [offline] master 192.168.100.10: install packages
[2024-10-16 23:12:04] INFO: [offline] master 192.168.100.10: install packages succeeded.
[2024-10-16 23:12:04] INFO: [offline] master 192.168.100.10: disable the firewall
[2024-10-16 23:12:05] INFO: [offline] 192.168.100.10: disable the firewall succeeded.
[2024-10-16 23:12:05] INFO: [offline] worker 192.168.100.20: load offline file
[2024-10-16 23:14:50] INFO: [offline] load offline file to 192.168.100.20 succeeded.
[2024-10-16 23:14:50] INFO: [offline] worker 192.168.100.20: install packages
[2024-10-16 23:14:50] INFO: [offline] worker 192.168.100.20: install packages succeeded.
[2024-10-16 23:14:50] INFO: [offline] worker 192.168.100.20: disable the firewall
[2024-10-16 23:14:51] INFO: [offline] 192.168.100.20: disable the firewall succeeded.
[2024-10-16 23:14:51] INFO: [get] Get 192.168.100.10 InternalIP.
[2024-10-16 23:14:51] INFO: [result] get MGMT_NODE_IP value succeeded.
[2024-10-16 23:14:51] INFO: [result] MGMT_NODE_IP is 192.168.100.10
[2024-10-16 23:14:51] INFO: [init] master: 192.168.100.10
[2024-10-16 23:14:54] INFO: [init] init master 192.168.100.10 succeeded.
[2024-10-16 23:14:54] INFO: [init] master: 192.168.100.10 set hostname and hosts
[2024-10-16 23:14:54] INFO: [init] 192.168.100.10 set hostname and hosts succeeded.
[2024-10-16 23:14:54] INFO: [init] worker: 192.168.100.20
[2024-10-16 23:14:55] INFO: [init] init worker 192.168.100.20 succeeded.
[2024-10-16 23:14:55] INFO: [init] master: 192.168.100.20 set hostname and hosts
[2024-10-16 23:14:56] INFO: [init] 192.168.100.20 set hostname and hosts succeeded.
[2024-10-16 23:14:56] INFO: [install] install containerd on 192.168.100.10.
[2024-10-16 23:16:11] INFO: [install] install containerd on 192.168.100.10 succeeded.
[2024-10-16 23:16:11] INFO: [install] install kube on 192.168.100.10
[2024-10-16 23:16:11] INFO: [install] install kube on 192.168.100.10 succeeded.
[2024-10-16 23:16:11] INFO: [install] install containerd on 192.168.100.20.
[2024-10-16 23:17:21] INFO: [install] install containerd on 192.168.100.20 succeeded.
[2024-10-16 23:17:21] INFO: [install] install kube on 192.168.100.20
[2024-10-16 23:17:22] INFO: [install] install kube on 192.168.100.20 succeeded.
[2024-10-16 23:17:22] INFO: [kubeadm init] kubeadm init on 192.168.100.10
[2024-10-16 23:17:22] INFO: [kubeadm init] 192.168.100.10: set kubeadm-config.yaml
[2024-10-16 23:17:22] INFO: [kubeadm init] 192.168.100.10: set kubeadm-config.yaml succeeded.
[2024-10-16 23:17:22] INFO: [kubeadm init] 192.168.100.10: kubeadm init start.
[2024-10-16 23:17:30] INFO: [kubeadm init] 192.168.100.10: kubeadm init succeeded.
[2024-10-16 23:17:33] INFO: [kubeadm init] 192.168.100.10: set kube config.
[2024-10-16 23:17:33] INFO: [kubeadm init] 192.168.100.10: set kube config succeeded.
[2024-10-16 23:17:33] INFO: [kubeadm init] 192.168.100.10: delete master taint
[2024-10-16 23:17:33] INFO: [kubeadm init] 192.168.100.10: delete master taint succeeded.
[2024-10-16 23:17:34] INFO: [kubeadm init] Auto-Approve kubelet cert csr succeeded.
[2024-10-16 23:17:34] INFO: [network] add flannel
[2024-10-16 23:17:34] INFO: [flannel] change flannel pod subnet succeeded.
[2024-10-16 23:17:34] INFO: [apply] apply kube-flannel.yml file
[2024-10-16 23:17:34] INFO: [apply] apply kube-flannel.yml file succeeded.
[2024-10-16 23:17:37] INFO: [waiting] waiting flannel
[2024-10-16 23:17:47] INFO: [waiting] flannel pods ready succeeded.
[2024-10-16 23:17:47] INFO: [result] get INTI_TOKEN value succeeded.
[2024-10-16 23:17:47] INFO: [kubeadm join] worker 192.168.100.20 join cluster.
[2024-10-16 23:18:01] INFO: [kubeadm join] worker 192.168.100.20 join cluster succeeded.
[2024-10-16 23:18:01] INFO: [kubeadm join] set 192.168.100.20 worker node role.
[2024-10-16 23:18:01] INFO: [kubeadm join] set 192.168.100.20 worker node role succeeded.
[2024-10-16 23:18:01] INFO: [ui] add dashboard
[2024-10-16 23:18:01] INFO: [apply] apply recommended.yaml file
[2024-10-16 23:18:02] INFO: [apply] apply recommended.yaml file succeeded.
[2024-10-16 23:18:05] INFO: [waiting] waiting kubernetes-dashboard
[2024-10-16 23:18:05] INFO: [waiting] kubernetes-dashboard pods ready succeeded.
[2024-10-16 23:18:05] INFO: [apply] apply dashboard-adminuser.yaml file
[2024-10-16 23:18:05] INFO: [apply] apply dashboard-adminuser.yaml file succeeded.
[2024-10-16 23:18:05] INFO: [apply] apply components.yaml file
[2024-10-16 23:18:06] INFO: [apply] apply components.yaml file succeeded.
[2024-10-16 23:18:09] INFO: [waiting] waiting metrics-server
[2024-10-16 23:18:26] INFO: [waiting] metrics-server pods ready succeeded.
[2024-10-16 23:18:26] INFO: [helm] install the helm
[2024-10-16 23:18:26] INFO: [virtctl] install the virtctl
[2024-10-16 23:18:26] INFO: [docker-compose] install the docker compose
[2024-10-16 23:18:26] INFO: [istioctl] install the istioctl
[2024-10-16 23:18:26] INFO: [nerdctl] install the nerdctl
[2024-10-16 23:18:26] INFO: [buildkitd] install the buildkitd
[2024-10-16 23:18:27] INFO: [storage] add nfs storage class
[2024-10-16 23:18:27] INFO: [apply] apply nfs-storage.yaml file
[2024-10-16 23:18:28] INFO: [apply] apply nfs-storage.yaml file succeeded.
[2024-10-16 23:18:31] INFO: [waiting] waiting nfs-client-provisioner
[2024-10-16 23:18:31] INFO: [waiting] nfs-client-provisioner pods ready succeeded.
[2024-10-16 23:18:31] INFO: [virt] add kubevirt
[2024-10-16 23:18:31] INFO: [apply] apply kubevirt-operator.yaml file
[2024-10-16 23:18:32] INFO: [apply] apply kubevirt-operator.yaml file succeeded.
[2024-10-16 23:18:35] INFO: [waiting] waiting kubevirt
[2024-10-16 23:18:42] INFO: [waiting] kubevirt pods ready succeeded.
[2024-10-16 23:18:42] INFO: [apply] apply kubevirt-cr.yaml file
[2024-10-16 23:18:43] INFO: [apply] apply kubevirt-cr.yaml file succeeded.
[2024-10-16 23:19:16] INFO: [waiting] waiting kubevirt
[2024-10-16 23:19:25] INFO: [waiting] kubevirt pods ready succeeded.
[2024-10-16 23:19:28] INFO: [waiting] waiting kubevirt
[2024-10-16 23:19:51] INFO: [waiting] kubevirt pods ready succeeded.
[2024-10-16 23:19:54] INFO: [waiting] waiting kubevirt
[2024-10-16 23:19:54] INFO: [waiting] kubevirt pods ready succeeded.
[2024-10-16 23:19:54] INFO: [istio] add istio
[2024-10-16 23:20:06] INFO: [waiting] waiting istiod
[2024-10-16 23:20:07] INFO: [waiting] istiod pods ready succeeded.
[2024-10-16 23:20:10] INFO: [waiting] waiting istio-egressgateway
[2024-10-16 23:20:10] INFO: [waiting] istio-egressgateway pods ready succeeded.
[2024-10-16 23:20:13] INFO: [waiting] waiting istio-ingressgateway
[2024-10-16 23:20:13] INFO: [waiting] istio-ingressgateway pods ready succeeded.
[2024-10-16 23:20:13] INFO: [apply] apply grafana.yaml file
[2024-10-16 23:20:14] INFO: [apply] apply grafana.yaml file succeeded.
[2024-10-16 23:20:14] INFO: [apply] apply jaeger.yaml file
[2024-10-16 23:20:14] INFO: [apply] apply jaeger.yaml file succeeded.
[2024-10-16 23:20:14] INFO: [apply] apply kiali.yaml file
[2024-10-16 23:20:15] INFO: [apply] apply kiali.yaml file succeeded.
[2024-10-16 23:20:15] INFO: [apply] apply prometheus.yaml file
[2024-10-16 23:20:15] INFO: [apply] apply prometheus.yaml file succeeded.
[2024-10-16 23:20:18] INFO: [waiting] waiting grafana
[2024-10-16 23:20:19] INFO: [waiting] grafana pods ready succeeded.
[2024-10-16 23:20:22] INFO: [waiting] waiting jaeger
[2024-10-16 23:20:22] INFO: [waiting] jaeger pods ready succeeded.
[2024-10-16 23:20:25] INFO: [waiting] waiting kiali
[2024-10-16 23:20:45] INFO: [waiting] kiali pods ready succeeded.
[2024-10-16 23:20:48] INFO: [waiting] waiting prometheus
[2024-10-16 23:20:49] INFO: [waiting] prometheus pods ready succeeded.
[2024-10-16 23:20:49] INFO: [harbor] add harbor
[2024-10-16 23:21:13] INFO: [waiting] waiting harbor-nginx
[2024-10-16 23:21:14] INFO: [waiting] harbor-nginx pods ready succeeded.
[2024-10-16 23:21:17] INFO: [waiting] waiting harbor-core
[2024-10-16 23:21:24] INFO: [waiting] harbor-core pods ready succeeded.
[2024-10-16 23:21:27] INFO: [waiting] waiting harbor-jobservice
[2024-10-16 23:22:14] INFO: [waiting] harbor-jobservice pods ready succeeded.
[2024-10-16 23:22:17] INFO: [waiting] waiting harbor-notary-server
[2024-10-16 23:22:17] INFO: [waiting] harbor-notary-server pods ready succeeded.
[2024-10-16 23:22:20] INFO: [waiting] waiting harbor-notary-signer
[2024-10-16 23:22:21] INFO: [waiting] harbor-notary-signer pods ready succeeded.
[2024-10-16 23:22:26] INFO: [cluster] kubernetes cluster status
+ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
harbor harbor-chartmuseum-5b48966b5f-jxmp9 1/1 Running 0 96s
harbor harbor-core-5d67b874d-xxv9p 1/1 Running 0 96s
harbor harbor-database-0 1/1 Running 0 96s
harbor harbor-jobservice-66cd5c87d7-spfp7 1/1 Running 3 (66s ago) 96s
harbor harbor-nginx-7599458b66-kt87n 1/1 Running 0 96s
harbor harbor-notary-server-69947888c8-52pk5 1/1 Running 0 96s
harbor harbor-notary-signer-bbfc65bdc-hqmhw 1/1 Running 0 96s
harbor harbor-portal-67d8547c5f-td6mw 1/1 Running 0 96s
harbor harbor-redis-0 1/1 Running 0 96s
harbor harbor-registry-96b56c67-mhvzm 2/2 Running 0 96s
harbor harbor-trivy-0 1/1 Running 0 96s
istio-system grafana-56bdf8bf85-s9q7n 1/1 Running 0 2m12s
istio-system istio-egressgateway-fffc799cf-mbx5b 1/1 Running 0 2m28s
istio-system istio-ingressgateway-7d68764b55-xq295 1/1 Running 0 2m28s
istio-system istiod-5456fd558d-69lhh 1/1 Running 0 2m30s
istio-system jaeger-c4fdf6674-dvlmz 1/1 Running 0 2m12s
istio-system kiali-8f955f859-hddrs 1/1 Running 0 2m11s
istio-system prometheus-85949fddb-zpw8w 2/2 Running 0 2m11s
kube-system coredns-565d847f94-77ns7 1/1 Running 0 4m42s
kube-system coredns-565d847f94-wk4cl 1/1 Running 0 4m42s
kube-system dashboard-metrics-scraper-64bcc67c9c-f67j8 1/1 Running 0 4m24s
kube-system dashboard-portainer-695648f848-7trvb 1/1 Running 0 95s
kube-system etcd-k8s-master-node1 1/1 Running 0 4m59s
kube-system kube-apiserver-k8s-master-node1 1/1 Running 0 4m58s
kube-system kube-controller-manager-k8s-master-node1 1/1 Running 0 4m58s
kube-system kube-flannel-ds-bdwgd 1/1 Running 0 4m43s
kube-system kube-flannel-ds-mrjcv 1/1 Running 0 4m25s
kube-system kube-proxy-48fqs 1/1 Running 0 4m43s
kube-system kube-proxy-jdkh5 1/1 Running 0 4m25s
kube-system kube-scheduler-k8s-master-node1 1/1 Running 0 4m58s
kube-system kubernetes-dashboard-74b66d7f9c-4z7b4 1/1 Running 0 4m24s
kube-system metrics-server-84c4f4fb8d-p7r9z 1/1 Running 0 4m20s
kube-system nfs-client-provisioner-5947b7c5b9-qr6js 1/1 Running 0 3m58s
kubevirt virt-api-5dd9ccbc96-c9rg7 1/1 Running 0 3m21s
kubevirt virt-api-5dd9ccbc96-nmbjj 1/1 Running 0 3m21s
kubevirt virt-controller-7659874849-r87c8 1/1 Running 0 2m56s
kubevirt virt-controller-7659874849-rnlbv 1/1 Running 0 2m56s
kubevirt virt-handler-gtdzf 1/1 Running 0 2m56s
kubevirt virt-handler-vgpd2 1/1 Running 0 2m56s
kubevirt virt-operator-5db8d9f8f9-98mzz 1/1 Running 0 3m54s
kubevirt virt-operator-5db8d9f8f9-zqd97 1/1 Running 0 3m54s
See detailed log >> /var/log/kubeinstall.log
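With every pod Running, a couple of quick health checks are worth doing on the master (assuming `kubectl` is configured, as it is after `kubeeasy install`). The `awk` filter lists only pods stuck in a non-Running, non-Completed state; the demo runs it on sample input:

```shell
# On the master:
#   kubectl get nodes -o wide
#   kubectl get pods -A --no-headers | awk '$4 != "Running" && $4 != "Completed"'
# The filter prints only pods stuck in a bad state; demo on sample input:
printf 'ns a 1/1 Running 0 1m\nns b 0/1 Pending 0 1m\n' \
  | awk '$4 != "Running" && $4 != "Completed"'
# -> ns b 0/1 Pending 0 1m
```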
7. Other Troubleshooting
(1) Check whether the disk has enough free space
[root@k8s-master-node1 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 2.0G 0 2.0G 0% /dev
tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs 2.0G 17M 2.0G 1% /run
tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
/dev/mapper/centos-root 94G 25G 69G 27% /
/dev/sda1 1014M 138M 877M 14% /boot
/dev/mapper/centos-home 2.0G 33M 2.0G 2% /home
tmpfs 394M 0 394M 0% /run/user/0
/dev/loop1 4.4G 4.4G 0 100% /mnt
overlay 94G 25G 69G 27% /var/lib/docker/overlay2/b039687a23f8fae6c5187496752b40f9680b1176a6b450012da37205da799228/merged
overlay 94G 25G 69G 27% /var/lib/docker/overlay2/3aa2dd9ef3f5215fef69348e2915ae3fdede0172d10c87233ba5d495c7f3481c/merged
overlay 94G 25G 69G 27% /var/lib/docker/overlay2/b0d856449d7fd4be80f1dbab0d8eb7b6a12435ebc645d2bd67442ece1c957bfa/merged
overlay 94G 25G 69G 27% /var/lib/docker/overlay2/83779008afa2470bee75fcf40986a2b88677c070a142434d2afd24ccc6595c8c/merged
shm 64M 0 64M 0% /var/lib/docker/containers/a25f32a6997c8e8837e3fd53f8f49317ba00d3792fd2400eaeb37651ffc6adf1/mounts/shm
shm 64M 0 64M 0% /var/lib/docker/containers/1bc9a2dcf244a4bab3f6b7b452220d57f84f751fde6d2343041e13752d43cb5c/mounts/shm
shm 64M 0 64M 0% /var/lib/docker/containers/ad389f9f68ba4d07e1e230f0263e62c59b93de201ddf66969037cb5e3305d9b0/mounts/shm
shm 64M 0 64M 0% /var/lib/docker/containers/3d907398b2cf132c4c712543c9b86fae338a984e52ddc1141bb60a9309c965b3/mounts/shm
overlay 94G 25G 69G 27% /var/lib/docker/overlay2/e19cf1eb10fcd855424dfd274e4da3302743ae22f5b30571203c5235e6c84f29/merged
overlay 94G 25G 69G 27% /var/lib/docker/overlay2/e4ca942935911960946ffb8f3b32f064c8e3696117f61c9ab4e079dfd7711d53/merged
overlay 94G 25G 69G 27% /var/lib/docker/overlay2/fe04aff4c51e8579db0c52d8313ea09aa19cefd9e4183d9fbfc7fe008fd85f0e/merged
overlay 94G 25G 69G 27% /var/lib/docker/overlay2/0ba8948cc724f519001633dbaee7bb4faa5b96e83bc47545c190d2040d7dc975/merged
tmpfs 3.8G 12K 3.8G 1% /var/lib/kubelet/pods/ef02c2ea-55c0-470f-9d5f-fded20a23ca6/volumes/kubernetes.io~projected/kube-api-access-jb4jm
overlay 94G 25G 69G 27% /var/lib/docker/overlay2/971ecbd26e0fe7add7b25144508b1812284143fb62b0e8e8d059a2d25b1205b6/merged
shm 64M 0 64M 0% /var/lib/docker/containers/0e032d5f3aa5b4cb696eeb0cf160ea7180c223eec7adf485f6eb69cf2f60de0e/mounts/shm
overlay 94G 25G 69G 27% /var/lib/docker/overlay2/2874cd9c985d553a8617e52c4e37ce07185e2522df5366354da35eb36827dc8f/merged
tmpfs 50M 12K 50M 1% /var/lib/kubelet/pods/92783df2-056d-403a-aa81-15c585d725bf/volumes/kubernetes.io~projected/kube-api-access-sbfd5
overlay 94G 25G 69G 27% /var/lib/docker/overlay2/ba0551bd461d5077cfdaa4ecf2269558b1ecb116c21dc213d28aad352c36066c/merged
shm 64M 0 64M 0% /var/lib/docker/containers/ec37c7d0bb4a287200b840af79317ddf66e29efdee0e501fe69462faabe1e68d/mounts/shm
overlay 94G 25G 69G 27% /var/lib/docker/overlay2/2f2caf724cfce283b0770371e2d367b52adb6ebc0e81ca81a9741dbe9dece228/merged
tmpfs 170M 12K 170M 1% /var/lib/kubelet/pods/a81ce6dc-f02a-4053-bf5d-92353bf0b260/volumes/kubernetes.io~projected/kube-api-access-q56s8
tmpfs 170M 12K 170M 1% /var/lib/kubelet/pods/7666fe20-f005-408d-898c-6515f9c9e82b/volumes/kubernetes.io~projected/kube-api-access-dfl4j
tmpfs 3.8G 12K 3.8G 1% /var/lib/kubelet/pods/8b80eafa-2b69-4727-a4b4-60b1601af7b2/volumes/kubernetes.io~projected/kube-api-access-fsvlh
overlay 94G 25G 69G 27% /var/lib/docker/overlay2/16ef5f9dc887f97e540a1a71fbe8b11c06911b5d679a61d8bfe75e583510545c/merged
shm 64M 0 64M 0% /var/lib/docker/containers/47a4cdd51151d957850c7064def32e664d0d9f6432955db7a7e9eb5c5e53eda5/mounts/shm
overlay 94G 25G 69G 27% /var/lib/docker/overlay2/05d335b1ad2fa4bfc722351d75ccc90c459c996d9e991fd48a4b0b5135cb51ba/merged
shm 64M 0 64M 0% /var/lib/docker/containers/a2447dc63f5b6a7a6910b5d7cca2fdcd4ea29141659b130c8d02b5a83dc26197/mounts/shm
overlay 94G 25G 69G 27% /var/lib/docker/overlay2/ef6d7fea914b853924850dc9aa1ca536ab6f16d702d0031e7edd621fc5ffa460/merged
shm 64M 0 64M 0% /var/lib/docker/containers/90753d4fe75abf6a3e9296947175f9aaeccbc64ff5b6cc19d68a03e75f546767/mounts/shm
overlay 94G 25G 69G 27% /var/lib/docker/overlay2/61be0dd3df2fff2c28b7ca31510123e8bd9665068d8c6ad9a106bce71467566c/merged
overlay 94G 25G 69G 27% /var/lib/docker/overlay2/c1bc3abea4903c4b70db30436b2163fb9d8360b38fa7f7fe68c7b133f29a7c52/merged
overlay 94G 25G 69G 27% /var/lib/docker/overlay2/9b833c94fa3d68af7c543a776805fd2640c1572a45ba343b58c8214442968613/merged
tmpfs 3.8G 8.0K 3.8G 1% /var/lib/kubelet/pods/b965e58c-5af4-4642-967d-c2478bd13933/volumes/kubernetes.io~secret/kubevirt-operator-certs
tmpfs 3.8G 12K 3.8G 1% /var/lib/kubelet/pods/b965e58c-5af4-4642-967d-c2478bd13933/volumes/kubernetes.io~projected/kube-api-access-kxxt5
overlay 94G 25G 69G 27% /var/lib/docker/overlay2/a3d7abde028b02ce103634320b26d6375c5fb2e3dd66a0d416276c7166941410/merged
shm 64M 0 64M 0% /var/lib/docker/containers/0178ab4dcec3e23eda1f48bff7764cd5df4ed8266eb88f745ad4a58343f016d9/mounts/shm
overlay 94G 25G 69G 27% /var/lib/docker/overlay2/2335318d42637beb79eb21832921ff085ae753aa1029ef17b570f2e962e8cfdd/merged
tmpfs 3.8G 8.0K 3.8G 1% /var/lib/kubelet/pods/fa2af6c2-9912-464b-8d43-5fcd6473fc13/volumes/kubernetes.io~secret/kubevirt-virt-handler-certs
tmpfs 3.8G 12K 3.8G 1% /var/lib/kubelet/pods/fa2af6c2-9912-464b-8d43-5fcd6473fc13/volumes/kubernetes.io~projected/kube-api-access-rspl9
tmpfs 3.8G 8.0K 3.8G 1% /var/lib/kubelet/pods/fa2af6c2-9912-464b-8d43-5fcd6473fc13/volumes/kubernetes.io~secret/kubevirt-virt-api-certs
overlay 94G 25G 69G 27% /var/lib/docker/overlay2/ef243c1f62650ac77de2525f199b52f3e63e82badb2465a2b47bddadbda95923/merged
shm 64M 0 64M 0% /var/lib/docker/containers/7acb351ec8eb07876ffe470d2a5d83ea260ae70e7aee4558db0ca755887f3e50/mounts/shm
overlay 94G 25G 69G 27% /var/lib/docker/overlay2/874c2b916a2b5f86a57a1c4fc68947119e0854e997996bf5345e39284b7cbaa1/merged
tmpfs 3.8G 8.0K 3.8G 1% /var/lib/kubelet/pods/c76d1164-ad80-48fb-8f53-34e873d5e446/volumes/kubernetes.io~secret/kubevirt-virt-handler-certs
tmpfs 3.8G 8.0K 3.8G 1% /var/lib/kubelet/pods/c76d1164-ad80-48fb-8f53-34e873d5e446/volumes/kubernetes.io~secret/kubevirt-virt-handler-server-certs
tmpfs 3.8G 8.0K 3.8G 1% /var/lib/kubelet/pods/35153ae2-e931-45f8-a374-2d4da17fd354/volumes/kubernetes.io~secret/kubevirt-controller-certs
tmpfs 3.8G 12K 3.8G 1% /var/lib/kubelet/pods/35153ae2-e931-45f8-a374-2d4da17fd354/volumes/kubernetes.io~projected/kube-api-access-28wt5
tmpfs 3.8G 12K 3.8G 1% /var/lib/kubelet/pods/c76d1164-ad80-48fb-8f53-34e873d5e446/volumes/kubernetes.io~projected/kube-api-access-pckgc
overlay 94G 25G 69G 27% /var/lib/docker/overlay2/0f5b9d0761baaf31c5ce519f507295c2cb8c14e5791403f5a65de5c69d2e5ae1/merged
shm 64M 0 64M 0% /var/lib/docker/containers/f0a64a8405ebfffb980d40a4e0d9c829000d7a4bbe1f1712705a6a84b357e5e1/mounts/shm
overlay 94G 25G 69G 27% /var/lib/docker/overlay2/f116e463a9f43d90a24f6c719fcbef9bc18494c1680065cde6b83aa89a73533d/merged
shm 64M 0 64M 0% /var/lib/docker/containers/5383598d8af1938aef830ba4c68ce26116f0b1e2ec2b40abf53c0268fa0f9fcb/mounts/shm
overlay 94G 25G 69G 27% /var/lib/docker/overlay2/f6f05621a5cb6f361ccd023773ee27a680da6d3a7f44627adfa3ba13c9969ba2/merged
overlay 94G 25G 69G 27% /var/lib/docker/overlay2/ed10477d475aa46d65ba380fcef6f6d98d82d7686263e6bd5e95ebdf9f48e3da/merged
tmpfs 50M 12K 50M 1% /var/lib/kubelet/pods/a327cf65-4ace-4281-8b75-e3badd0b912a/volumes/kubernetes.io~projected/kube-api-access-dkrbx
overlay 94G 25G 69G 27% /var/lib/docker/overlay2/9b33e3c7aed5bdce53b286b1e342078ca9bd1602bfcf561e9d2ffa4c8cd655cd/merged
shm 64M 0 64M 0% /var/lib/docker/containers/091e2157f089eb0747e63f4e4b4282f9d40fc45c90d3a73ec4e187aa38f419cd/mounts/shm
overlay 94G 25G 69G 27% /var/lib/docker/overlay2/5a471abc372438b63f23ef5ea8fd82a1f01db73afb3589c4b01a76e04b1857ad/merged
overlay 94G 25G 69G 27% /var/lib/docker/overlay2/af21221686a4ae59ff55c16945cf649004eeecf9e1333a56a1a99a91e91ebd65/merged
overlay 94G 25G 69G 27% /var/lib/docker/overlay2/2f60c35ae8573c9cbad08fb26d1ec676fb156d869af2c7b533e84c7503c30c34/merged
overlay 94G 25G 69G 27% /var/lib/docker/overlay2/59b1196c741f5aa612fdbfad9a4c04b937f36648f24ac3d4ee6acab567152d67/merged
overlay 94G 25G 69G 27% /var/lib/docker/overlay2/1604976bc8461cfb835ab152ccf0219a108cd2032d2e5c3a777b189a3953acca/merged
overlay 94G 25G 69G 27% /var/lib/docker/overlay2/96343f1ed6280bfbc9e465165762ba8394a5abf223d20ba4c74c6cf99727eb44/merged
overlay 94G 25G 69G 27% /var/lib/docker/overlay2/92e58a653625ee2e93c6ac90d69e996af6d306e16637825203cd4ebe7511a7d4/merged
overlay 94G 25G 69G 27% /var/lib/docker/overlay2/a6f9001a599dc66c3d9de23e6708bd6b3ff685a69b44b1c666d8ff9f0b28050b/merged
overlay 94G 25G 69G 27% /var/lib/docker/overlay2/ad9e487aaf165f4950d430b0fd95db9d45d515dbbfae24d7338b0922b8cf56bb/merged
overlay 94G 25G 69G 27% /var/lib/docker/overlay2/f689a5ad2eb8cba138142620f27a0c5968ca36886f37698945018e15e0b96965/merged
tmpfs 3.8G 12K 3.8G 1% /var/lib/kubelet/pods/7b683571-c0de-45bd-9ec4-c2eac9755c69/volumes/kubernetes.io~projected/kube-api-access-gztdj
overlay 94G 25G 69G 27% /var/lib/docker/overlay2/dd459ae9c7ce30526e1a0d68e2ab5cce5b6ad1e353d91a7a7ab5f9dc6c40b2d8/merged
shm 64M 0 64M 0% /var/lib/docker/containers/d94be22f84c576ee1daa39a03f60efddd60e46543a2b4eb5d536621855c8e19b/mounts/shm
overlay 94G 25G 69G 27% /var/lib/docker/overlay2/0b431cf493569d862eee88343dc82bf8fc8541aaa43e8f182b12205e45026e66/merged