K8s Deployment (kubeadm method)
Deployment environment and versions
OS version: CentOS Linux release 7.9.2009
K8s version: v1.28.15
Docker version: 26.1.4
containerd version: 1.6.33
Calico version: v3.25.0
Preparation
Host IP | Hostname | Role | Spec |
---|---|---|---|
192.168.1.131 | vm131 | master | 2 cores, 4 GB RAM, 50 GB disk |
192.168.1.132 | vm132 | node1 | 2 cores, 4 GB RAM, 50 GB disk |
192.168.1.133 | vm133 | node2 | 2 cores, 4 GB RAM, 50 GB disk |
1. Set the hostname (run the matching command on each host)
hostnamectl set-hostname vm131   # on 192.168.1.131
hostnamectl set-hostname vm132   # on 192.168.1.132
hostnamectl set-hostname vm133   # on 192.168.1.133
Add the entries below to /etc/hosts on all three nodes:
vim /etc/hosts
192.168.1.131 vm131
192.168.1.132 vm132
192.168.1.133 vm133
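Equivalently, append the entries non-interactively on each node:
cat >> /etc/hosts <<EOF
192.168.1.131 vm131
192.168.1.132 vm132
192.168.1.133 vm133
EOF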
2. Disable SELinux, firewalld, and swap; set kernel parameters
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
systemctl stop firewalld
systemctl disable firewalld
iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
vm.overcommit_memory = 1
EOF
sysctl --system
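Note that the net.bridge.bridge-nf-call-* keys only exist once the br_netfilter kernel module is loaded; a minimal sketch to load it (plus overlay, which containerd uses) now and at every boot:
cat <<EOF > /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
modprobe overlay
modprobe br_netfilter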
swapoff -a && free -h
sed -i 's@/dev/mapper/centos-swap@#/dev/mapper/centos-swap@g' /etc/fstab
3. Time synchronization
yum -y install ntp
systemctl enable ntpd
systemctl start ntpd
timedatectl set-timezone Asia/Shanghai
ntpdate -u time.nist.gov
date
4. Switch yum to the Aliyun CentOS 7 mirror
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
yum clean all
yum makecache
I. Install Docker
yum -y install gcc gcc-c++
yum remove -y docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-engine
yum install -y yum-utils
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum makecache fast
yum install docker-ce docker-ce-cli containerd.io -y
systemctl start docker
systemctl enable docker
II. Install containerd
containerd.io is already pulled in as a docker-ce dependency above, so this is a no-op in that case:
yum install containerd.io -y
Enable the CRI plugin: the stock containerd.io config disables it via the top-level disabled_plugins list, so remove "cri" from that list (or empty it):
vim /etc/containerd/config.toml
disabled_plugins = []
systemctl restart containerd
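Optionally, generate a complete default config and switch containerd's cgroup driver to systemd, matching the driver configured for kubelet later (a sketch; back up any existing config.toml first):
containerd config default > /etc/containerd/config.toml
# SystemdCgroup lives under the runc runtime options in the generated config
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
systemctl restart containerd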
III. Install kubeadm on the Master node
1. Install kubelet, kubeadm, and kubectl
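The install below assumes a Kubernetes yum repo is already configured; CentOS ships none. A minimal sketch using the Aliyun mirror (the repo URL is an assumption, substitute your preferred mirror; the exclude= line is what makes --disableexcludes=kubernetes necessary):
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
exclude=kubelet kubeadm kubectl
EOF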
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable --now kubelet
2. Pull the required images
List the images kubeadm needs, pull them from the Aliyun mirror, and re-tag them to the registry.k8s.io names kubeadm expects:
kubeadm config images list
I0306 11:27:12.886721 21939 version.go:256] remote version is much newer: v1.32.2; falling back to: stable-1.28
registry.k8s.io/kube-apiserver:v1.28.15
registry.k8s.io/kube-controller-manager:v1.28.15
registry.k8s.io/kube-scheduler:v1.28.15
registry.k8s.io/kube-proxy:v1.28.15
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/coredns/coredns:v1.10.1
ctr -n k8s.io images pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.28.15
ctr -n k8s.io images pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.28.15
ctr -n k8s.io images pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.28.15
ctr -n k8s.io images pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.28.15
ctr -n k8s.io images pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9
ctr -n k8s.io images pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.9-0
ctr -n k8s.io images pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.10.1
ctr -n k8s.io images pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6
ctr -n k8s.io images tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.28.15 registry.k8s.io/kube-apiserver:v1.28.15
ctr -n k8s.io images tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.28.15 registry.k8s.io/kube-controller-manager:v1.28.15
ctr -n k8s.io images tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.28.15 registry.k8s.io/kube-scheduler:v1.28.15
ctr -n k8s.io images tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.28.15 registry.k8s.io/kube-proxy:v1.28.15
ctr -n k8s.io images tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9 registry.k8s.io/pause:3.9
ctr -n k8s.io images tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6 registry.k8s.io/pause:3.6
ctr -n k8s.io images tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.9-0 registry.k8s.io/etcd:3.5.9-0
ctr -n k8s.io images tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.10.1 registry.k8s.io/coredns/coredns:v1.10.1
for item in `ctr -n k8s.io images list | awk '$1 ~ /registry.cn-hangzhou.aliyuncs.com/ {print $1}'`;do ctr -n k8s.io images rm ${item};done
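Sanity-check that all images now carry the registry.k8s.io names kubeadm expects:
ctr -n k8s.io images list | grep registry.k8s.io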
3. Set the kubelet cgroup driver
vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS=--cgroup-driver=systemd
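Or write the file non-interactively:
echo 'KUBELET_EXTRA_ARGS=--cgroup-driver=systemd' > /etc/sysconfig/kubelet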
4. Initialize with kubeadm
kubeadm init
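A bare kubeadm init works here because the control-plane images were pre-tagged above, but pinning the version and the pod CIDR that Calico uses later keeps the setup explicit (a sketch; the advertise address is this lab's master IP):
kubeadm init \
  --kubernetes-version=v1.28.15 \
  --apiserver-advertise-address=192.168.1.131 \
  --pod-network-cidr=10.244.0.0/16
On success, set up kubectl access as printed in the init output: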
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
IV. Install kubeadm on the Nodes
1. Install kubeadm and kubelet (same Kubernetes yum repo as in section III)
yum -y install kubeadm kubelet
systemctl enable kubelet.service
2. Pull the required images (same procedure as on the master)
kubeadm config images list
I0306 11:27:12.886721 21939 version.go:256] remote version is much newer: v1.32.2; falling back to: stable-1.28
registry.k8s.io/kube-apiserver:v1.28.15
registry.k8s.io/kube-controller-manager:v1.28.15
registry.k8s.io/kube-scheduler:v1.28.15
registry.k8s.io/kube-proxy:v1.28.15
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/coredns/coredns:v1.10.1
ctr -n k8s.io images pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.28.15
ctr -n k8s.io images pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.28.15
ctr -n k8s.io images pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.28.15
ctr -n k8s.io images pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.28.15
ctr -n k8s.io images pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9
ctr -n k8s.io images pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.9-0
ctr -n k8s.io images pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.10.1
ctr -n k8s.io images pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6
ctr -n k8s.io images tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.28.15 registry.k8s.io/kube-apiserver:v1.28.15
ctr -n k8s.io images tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.28.15 registry.k8s.io/kube-controller-manager:v1.28.15
ctr -n k8s.io images tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.28.15 registry.k8s.io/kube-scheduler:v1.28.15
ctr -n k8s.io images tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.28.15 registry.k8s.io/kube-proxy:v1.28.15
ctr -n k8s.io images tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9 registry.k8s.io/pause:3.9
ctr -n k8s.io images tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6 registry.k8s.io/pause:3.6
ctr -n k8s.io images tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.9-0 registry.k8s.io/etcd:3.5.9-0
ctr -n k8s.io images tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.10.1 registry.k8s.io/coredns/coredns:v1.10.1
for item in `ctr -n k8s.io images list | awk '$1 ~ /registry.cn-hangzhou.aliyuncs.com/ {print $1}'`;do ctr -n k8s.io images rm ${item};done
3. Set the kubelet cgroup driver
vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS=--cgroup-driver=systemd
4. Join the Master (use the kubeadm join command printed by kubeadm init)
kubeadm join 192.168.1.131:6443 --token 51lpgy.tue7o5rnwi3h62ql \
    --discovery-token-ca-cert-hash sha256:200ad4e8f13649b3cd14db97cdaf842d97f4dbbca0cbec123c06b2a3c687ede1
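If the join command was lost or the token has expired (the default TTL is 24 hours), regenerate it on the master:
kubeadm token create --print-join-command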
V. Install the network plugin
1. Fetch and edit calico.yaml on the Master node
yum install wget -y
mkdir -pv /data/yaml && cd /data/yaml
wget https://docs.tigera.io/archive/v3.25/manifests/calico.yaml
In the calico-node DaemonSet env section, set the pod CIDR and the interface auto-detection method (point interface= at the actual NIC name on your hosts, ens192 here):
            - name: CALICO_IPV4POOL_CIDR
              value: "10.244.0.0/16"
            - name: IP_AUTODETECTION_METHOD
              value: "interface=ens192"
2. Pull the required images (on both Master and Nodes; pull with Docker, export, and import into containerd's k8s.io namespace, since kubelet uses containerd as its runtime)
mkdir -pv /data/tar && cd /data/tar
docker pull calico/cni:v3.25.0
docker pull calico/node:v3.25.0
docker pull calico/kube-controllers:v3.25.0
docker save -o cni.tar calico/cni:v3.25.0
docker save -o node.tar calico/node:v3.25.0
docker save -o kube-controllers.tar calico/kube-controllers:v3.25.0
ctr -n k8s.io images import cni.tar
ctr -n k8s.io images import node.tar
ctr -n k8s.io images import kube-controllers.tar
ctr -n k8s.io images list|grep calico
3. Install the Calico plugin
cd /data/yaml
kubectl apply -f calico.yaml
[root@vm131 ~]# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-658d97c59c-fc46q 1/1 Running 0 46m
kube-system calico-node-4cc9q 1/1 Running 0 46m
kube-system calico-node-l6ch2 1/1 Running 0 46m
kube-system calico-node-r27jj 1/1 Running 0 46m
kube-system coredns-5dd5756b68-84cww 0/1 Running 0 4h
kube-system coredns-5dd5756b68-nd9v8 0/1 Running 0 4h
kube-system etcd-vm131 1/1 Running 0 4h1m
kube-system kube-apiserver-vm131 1/1 Running 0 4h1m
kube-system kube-controller-manager-vm131 1/1 Running 1 (3h9m ago) 4h1m
kube-system kube-proxy-26qwm 1/1 Running 0 3h44m
kube-system kube-proxy-bj5xg 1/1 Running 0 4h
kube-system kube-proxy-wgfnz 1/1 Running 0 3h43m
kube-system kube-scheduler-vm131 1/1 Running 0 4h1m
The coredns pods stay 0/1 because they started before the network plugin was ready; restart them:
kubectl rollout restart deployment coredns -n kube-system
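Confirm the restarted pods come back Ready (coredns carries the standard k8s-app=kube-dns label):
kubectl -n kube-system get pods -l k8s-app=kube-dns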
VI. Verification
[root@vm131 ~]# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-658d97c59c-t678p 1/1 Running 0 5m26s
kube-system calico-node-5zj2n 1/1 Running 0 5m26s
kube-system calico-node-hjdcm 1/1 Running 0 5m26s
kube-system calico-node-hrwh4 1/1 Running 0 5m26s
kube-system coredns-967d5bb69-sb9xx 1/1 Running 0 31m
kube-system coredns-967d5bb69-sngxg 1/1 Running 0 31m
kube-system etcd-vm131 1/1 Running 0 4h34m
kube-system kube-apiserver-vm131 1/1 Running 0 4h34m
kube-system kube-controller-manager-vm131 1/1 Running 1 (3h43m ago) 4h34m
kube-system kube-proxy-26qwm 1/1 Running 0 4h17m
kube-system kube-proxy-bj5xg 1/1 Running 0 4h34m
kube-system kube-proxy-wgfnz 1/1 Running 0 4h16m
kube-system kube-scheduler-vm131 1/1 Running 0 4h34m
[root@vm131 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
vm131 Ready control-plane 4h34m v1.28.2
vm132 Ready <none> 4h16m v1.28.2
vm133 Ready <none> 4h17m v1.28.2
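As a final smoke test, run a throwaway pod and resolve the cluster DNS name (a sketch; the pod name and busybox image are arbitrary choices):
kubectl run dns-test --image=busybox:1.36 --restart=Never --rm -it -- nslookup kubernetes.default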
VII. Troubleshooting notes
crictl images errors out:
WARN[0000] image connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead.
E0305 16:42:46.922252 22812 remote_image.go:119] "ListImages with filter from image service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/dockershim.sock: connect: no such file or directory\"" filter="&ImageFilter{Image:&ImageSpec{Image:,Annotations:map[string]string{},},}"
FATA[0000] listing images: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: no such file or directory"
Solution:
Point crictl at containerd explicitly, either via environment variables:
export CONTAINER_RUNTIME_ENDPOINT=unix:///run/containerd/containerd.sock
export IMAGE_SERVICE_ENDPOINT=unix:///run/containerd/containerd.sock
or, persistently, via the crictl config file so crictl connects to containerd by default:
echo 'runtime-endpoint: unix:///run/containerd/containerd.sock' > /etc/crictl.yaml
Also make sure the CRI plugin is enabled in /etc/containerd/config.toml (remove "cri" from the top-level disabled_plugins list, as in section II):
vim /etc/containerd/config.toml
disabled_plugins = []
Disable the tracing plugin: if you do not intend to use tracing, you can ignore the related warning or disable the plugin in /etc/containerd/config.toml:
[plugins."io.containerd.tracing.processor.v1.otlp"]
  enabled = false
Then restart containerd:
sudo systemctl restart containerd
Configure a tracing endpoint: if you do want tracing, give the plugin a proper endpoint in the containerd config, based on the tracing service you use (e.g. an OpenTelemetry Collector):
[plugins."io.containerd.tracing.processor.v1.otlp"]
  endpoint = "your-tracing-endpoint:port"
Once configured, restart containerd:
sudo systemctl restart containerd
Summary:
If you are not using tracing, simply ignore the message or disable the plugin.
If you need tracing, make sure a correct tracing endpoint is configured.