1. Environment Preparation
- Node planning (minimum requirements)
  - 1 master node (4 cores / 8 GB RAM)
  - 2 worker nodes (2 cores / 4 GB RAM each)
  - 1 Ansible control machine (the master node can double as the control machine)
- System configuration (run on all nodes)

# Set the hostname on each node
sudo hostnamectl set-hostname master    # on the master node
sudo hostnamectl set-hostname worker1   # on worker node 1
sudo hostnamectl set-hostname worker2   # on worker node 2

# Add the cluster hosts to /etc/hosts (all nodes)
cat <<EOF | sudo tee -a /etc/hosts
192.168.1.10 master
192.168.1.11 worker1
192.168.1.12 worker2
EOF

# Disable SELinux and firewalld
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
sudo systemctl stop firewalld && sudo systemctl disable firewalld
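kubeadm's preflight checks will also complain if swap is enabled, so it is worth turning it off on all nodes as well; a minimal sketch:

# Disable swap now and keep it disabled after reboots
sudo swapoff -a
sudo sed -i '/\sswap\s/s/^/#/' /etc/fstab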
- SSH passwordless login

# Generate a key pair on the Ansible control machine and distribute it to all nodes
ssh-keygen -t rsa
ssh-copy-id root@master
ssh-copy-id root@worker1
ssh-copy-id root@worker2
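A quick way to confirm the key distribution worked is to run a remote command from the control machine; it should complete without prompting for a password:

ssh root@worker1 hostname   # should print "worker1" with no password prompt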
2. Ansible Configuration
- Install Ansible

# Install Ansible on the control machine
sudo dnf install epel-release -y
sudo dnf install ansible sshpass -y
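Confirm the installation before moving on:

ansible --version   # should print the installed Ansible version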
- Configure the inventory file

Create a file named hosts with the following content:

[master]
master ansible_host=192.168.1.10

[workers]
worker1 ansible_host=192.168.1.11
worker2 ansible_host=192.168.1.12

[k8s_cluster:children]
master
workers
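Before writing the playbook, verify that Ansible can reach every node through this inventory (this assumes the SSH keys from step 1 are in place and that you connect as root):

# Ad-hoc connectivity test against all hosts in the k8s_cluster group
ansible -i hosts k8s_cluster -m ping -u root

Each host should answer with "pong".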
3. Writing the Ansible Playbook

Create k8s-cluster.yml with the following content:
- name: Deploy Kubernetes Cluster
  hosts: k8s_cluster
  become: yes
  tasks:
    # NOTE: assumes the Docker CE repository (for containerd.io) and the Kubernetes
    # repository (for kubeadm/kubelet/kubectl) are already configured on all nodes.
    - name: Install containerd
      yum:
        name: containerd.io
        state: present

    - name: Configure containerd
      copy:
        src: containerd-config.toml
        dest: /etc/containerd/config.toml
      notify: restart containerd

    - name: Enable kernel modules
      shell: |
        modprobe overlay
        modprobe br_netfilter
        echo "overlay" >> /etc/modules-load.d/k8s.conf
        echo "br_netfilter" >> /etc/modules-load.d/k8s.conf

    - name: Configure sysctl
      sysctl:
        name: "{{ item.key }}"
        value: "{{ item.value }}"
        state: present
        reload: yes
      with_items:
        - { key: net.bridge.bridge-nf-call-ip6tables, value: 1 }
        - { key: net.bridge.bridge-nf-call-iptables, value: 1 }
        - { key: net.ipv4.ip_forward, value: 1 }

    - name: Install kubeadm/kubelet/kubectl
      yum:
        name: "{{ item }}"
        state: present
      with_items:
        - kubeadm-1.24.2
        - kubelet-1.24.2
        - kubectl-1.24.2

    - name: Enable kubelet
      systemd:
        name: kubelet
        enabled: yes
        state: started

  handlers:
    # Referenced by the "Configure containerd" task above
    - name: restart containerd
      systemd:
        name: containerd
        state: restarted
        enabled: yes

- name: Initialize Kubernetes Master
  hosts: master
  become: yes
  tasks:
    - name: Initialize cluster
      shell: kubeadm init --pod-network-cidr=10.244.0.0/16
      args:
        creates: /etc/kubernetes/admin.conf   # skip if the cluster is already initialized

    - name: Generate worker join command
      shell: kubeadm token create --print-join-command
      register: join_cmd

    - name: Save join command on the master
      copy:
        content: "{{ join_cmd.stdout }}"
        dest: /root/join-command.sh

    - name: Fetch join command to the control machine
      fetch:
        src: /root/join-command.sh
        dest: /tmp/join-command.sh
        flat: yes

- name: Join Workers
  hosts: workers
  become: yes
  tasks:
    - name: Copy join command to the workers
      copy:
        src: /tmp/join-command.sh
        dest: /tmp/join-command.sh
        mode: "0700"

    - name: Join cluster
      shell: sh /tmp/join-command.sh
      args:
        creates: /etc/kubernetes/kubelet.conf   # skip nodes that have already joined
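The "Configure containerd" task expects a containerd-config.toml file next to the playbook. One way to produce it (my assumption; the original article does not show this file) is to dump containerd's default configuration and switch the runc cgroup driver to systemd, which matches kubelet's default on systemd-based distributions such as AlmaLinux:

# Run on a machine that already has containerd installed
containerd config default > containerd-config.toml
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' containerd-config.toml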
4. Running the Deployment
# Run the playbook
ansible-playbook -i hosts k8s-cluster.yml
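A syntax check before the real run catches YAML indentation mistakes without touching the nodes:

ansible-playbook -i hosts k8s-cluster.yml --syntax-check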
5. Verifying the Cluster
# Run on the master node
kubectl get nodes                    # all nodes should be listed; they turn Ready once a network plugin is installed
kubectl apply -f <CNI manifest URL>  # install a network plugin, e.g. Flannel's kube-flannel.yml (matches the 10.244.0.0/16 pod CIDR)
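kubectl on the master needs admin credentials before these commands will work; kubeadm prints the following steps at the end of a successful kubeadm init:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config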
Notes
- Offline deployment: if the environment has no Internet access, download all dependencies in advance (e.g. the containerd and kubeadm packages) and set up a local repository.
- Architecture support: AlmaLinux targets x86_64 by default; for ARM64 you need to adjust the image sources and package selection accordingly.
- Certificates: consider generating custom certificates with cfssl to avoid the expiry of the default kubeadm certificates (see the expiry check after this list).
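Even without custom certificates, the expiry of the kubeadm-managed certificates can be checked (and renewed) on the master:

kubeadm certs check-expiration   # list expiry dates of all control-plane certificates
# kubeadm certs renew all        # renew them in place if needed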
With the steps above you can quickly automate a Kubernetes cluster deployment on AlmaLinux. For more complex setups, such as a highly available multi-master control plane, put a load balancer in front of the API servers and adapt the playbook accordingly.