文章目录
- @[TOC]
- 技术选型
- 后端技术
- 前端技术
- 移动端技术
- 开发环境
- 架构图
- 业务架构图
- 项目部署实操
- 主机规划
- 中间件版本
- 服务规划
- 系统准备
- 开始部署
- [[#MYSQL]]
- 建立主从关系
- 再次配置成为双主双从
- 为 mysql 集群配置 vip
- [[#mongodb]]
- 在主节点上无认证登录
- [[#redis]]
- 在主节点上查看集群状态
- 在从节点上
- 查看哨兵状态
- 使用 vip 实现 redis 高可用
- [[#rabbitMQ]]
- 尝试查看集群状态:
- 使用 LVS + keepalived 给 rabbitmq 做高可用
- [[#Elasticsearch]]
- 集群模式的部署
- 使用 nginx + keepalived 实现 elk 高可用
- [[#minio]]
- 使用 nginx + keepalived 实现 minio 的高可用
- [[#nginx 前端]]
- mall-admin-web后台部署
- **mall-app-web 部署**
- [[#nginx 后端]]
- mall-admin的部署:
- Mall-portal的部署
- mall-search的部署:
- 用 nginx和keepalived 为后端实现负载均衡
tags:
- mall
综合实战: mall
[!important] -组织架构
mall
├── mall-common – 工具类及通用代码
├── mall-mbg – MyBatisGenerator生成的数据库操作代码
├── mall-security – SpringSecurity封装公用模块
├── mall-admin – 后台商城管理系统接口
├── mall-search – 基于Elasticsearch的商品搜索系统
├── mall-portal – 前台商城系统接口
└── mall-demo – 框架搭建时的测试代码
技术选型
后端技术
技术 | 说明 | 官网 |
---|---|---|
SpringBoot | Web应用开发框架 | https://spring.io/projects/spring-boot |
SpringSecurity | 认证和授权框架 | https://spring.io/projects/spring-security |
MyBatis | ORM框架 | http://www.mybatis.org/mybatis-3/zh/index.html |
MyBatisGenerator | 数据层代码生成器 | http://www.mybatis.org/generator/index.html |
Elasticsearch | 搜索引擎 | https://github.com/elastic/elasticsearch |
RabbitMQ | 消息队列 | https://www.rabbitmq.com/ |
Redis | 内存数据存储 | https://redis.io/ |
MongoDB | NoSql数据库 | https://www.mongodb.com |
LogStash | 日志收集工具 | https://github.com/elastic/logstash |
Kibana | 日志可视化查看工具 | https://github.com/elastic/kibana |
Nginx | 静态资源服务器 | https://www.nginx.com/ |
Docker | 应用容器引擎 | https://www.docker.com |
Jenkins | 自动化部署工具 | https://github.com/jenkinsci/jenkins |
Druid | 数据库连接池 | https://github.com/alibaba/druid |
OSS | 对象存储 | https://github.com/aliyun/aliyun-oss-java-sdk |
MinIO | 对象存储 | https://github.com/minio/minio |
JWT | JWT登录支持 | https://github.com/jwtk/jjwt |
Lombok | Java语言增强库 | https://github.com/rzwitserloot/lombok |
Hutool | Java工具类库 | https://github.com/looly/hutool |
PageHelper | MyBatis物理分页插件 | http://git.oschina.net/free/Mybatis_PageHelper |
Swagger-UI | API文档生成工具 | https://github.com/swagger-api/swagger-ui |
Hibernate-Validator | 验证框架 | http://hibernate.org/validator |
前端技术
技术 | 说明 | 官网 |
---|---|---|
Vue | 前端框架 | https://vuejs.org/ |
Vue-router | 路由框架 | https://router.vuejs.org/ |
Vuex | 全局状态管理框架 | https://vuex.vuejs.org/ |
Element | 前端UI框架 | https://element.eleme.io |
Axios | 前端HTTP框架 | https://github.com/axios/axios |
v-charts | 基于Echarts的图表框架 | https://v-charts.js.org/ |
Js-cookie | cookie管理工具 | https://github.com/js-cookie/js-cookie |
nprogress | 进度条控件 | https://github.com/rstacruz/nprogress |
移动端技术
技术 | 说明 | 官网 |
---|---|---|
Vue | 核心前端框架 | https://vuejs.org |
Vuex | 全局状态管理框架 | https://vuex.vuejs.org |
uni-app | 移动端前端框架 | https://uniapp.dcloud.io |
mix-mall | 电商项目模板 | https://ext.dcloud.net.cn/plugin?id=200 |
luch-request | HTTP请求框架 | https://github.com/lei-mu/luch-request |
开发环境
工具 | 版本号 | 下载 |
---|---|---|
JDK | 1.8 | https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html |
MySQL | 5.7 | https://www.mysql.com/ |
Redis | 7.0 | https://redis.io/download |
MongoDB | 5.0 | https://www.mongodb.com/download-center |
RabbitMQ | 3.10.5 | http://www.rabbitmq.com/download.html |
Nginx | 1.22 | http://nginx.org/en/download.html |
Elasticsearch | 7.17.3 | https://www.elastic.co/downloads/elasticsearch |
Logstash | 7.17.3 | https://www.elastic.co/cn/downloads/logstash |
Kibana | 7.17.3 | https://www.elastic.co/cn/downloads/kibana |
架构图
业务架构图
项目部署实操
#项目部署实操
[!Note]- 在 github 上拉取项目
git clone https://github.com/macrozheng/mall.git
[!NOTE]- 在 gitee 上拉取项目
git clone https://gitee.com/macrozheng/mall.git
主机规划
ID | ROLE | SERVER | IP | MANUAL LINK | configuration |
---|---|---|---|---|---|
1 | mysql-master | mysql | 192.168.30.126 | [[#mysql]] | 2C2G 硬盘20G |
2 | mysql-slave | mysql | 192.168.30.58 | [[#mysql]] | 2C2G 硬盘20G |
3 | mrhost_master | mongodb+redis | 192.168.30.43 | [[#mongodb]],[[#redis]] | 2C2G 硬盘20G |
4 | mrhost_slave | mongodb+redis | 192.168.30.5 | [[#mongodb]],[[#redis]] | 2C2G 硬盘20G |
5 | mrhost_sentry | mongodb+redis | 192.168.30.160 | [[#mongodb]],[[#redis]] | 2C2G 硬盘20G |
6 | rabbit1 | rabbitMQ | 192.168.30.212 | [[#rabbitMQ]] | 2C2G 硬盘20G |
7 | rabbit2 | rabbitMQ | 192.168.30.127 | [[#rabbitMQ]] | 2C2G 硬盘20G |
8 | elasticsearch1 | Elasticsearch | 192.168.30.8 | [[#Elasticsearch]] | 2C4G 硬盘20G |
9 | elasticsearch2 | Elasticsearch | 192.168.30.242 | [[#Elasticsearch]] | 2C4G 硬盘20G |
10 | elasticsearch3 | Elasticsearch | 192.168.30.46 | [[#Elasticsearch]] | 2C4G 硬盘20G |
11 | minio1 | minio | 192.168.30.12 | [[#minio]] | 2C2G 硬盘20G*2 |
12 | minio2 | minio | 192.168.30.68 | [[#minio]] | 2C2G 硬盘20G*2 |
13 | nginx-frontend | nginx 前端+反向代理 | 192.168.30.91 | [[#nginx 前端]] | 4C4G 硬盘20G |
14 | nginx-backend | nginx 后端+反向代理 | 192.168.30.71 | [[#nginx 后端]] | 4C4G 硬盘20G |
中间件版本
SERVER | VERSION |
---|---|
mysql | 5.5.68 |
mongodb | 5.0.23 |
redis | 6.2.13 |
rabbitMQ | 3.10.0 |
Elasticsearch | 8.4.0 |
minio | Release 2024-10-02T17-50-41Z |
nginx | 1.20.1 |
服务规划
[!NOTE]- MySQL
db: mall
# remote account used by the applications
user: admin
pwd: admin
# local account, cannot log in remotely
loginuser: root
loginpass: root
port: 3306
[!NOTE]- mongodb
replSet: mall_db
port: 27017
user: root
password: 123456
[!NOTE]- redis
pwd: 123456
port: 6379
[!NOTE]- minio
user: minioadmin
pwd: minioadmin
port: 9000
[!NOTE]- rabbitmq
virtual-host: /mall
username: mall
password: mall
port: 5672
系统准备
[!warning] -操作系统 : CentOS7
开始部署
[!important]- 前期准备工作
Stop and disable the firewall
systemctl stop firewalld.service
systemctl disable firewalld.service
Disable SELinux
setenforce 0
Configure the Aliyun yum repositories
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
curl -o /etc/yum.repos.d/epel.repo https://mirrors.aliyun.com/repo/epel-7.repo
Install the base packages
yum update
yum install -y bash-com* vim wget unzip
[[#MYSQL]]
下载 git 拉取项目 ,项目地址在 #项目部署实操
yum install -y git
yum install -y mariadb-server mariadb
编辑MySQL配置文件 /etc/my.cnf,添加以下内容:
[!Warning] 注意添加到【mysqld】所属组 !!!
[mysqld]
server-id=1
log-bin=/data/mysql/log/mysql-bin-master #开启log-bin日志
为Mysql创建主log-bin目录
mkdir -p /data/mysql/log/
chown mysql.mysql -R /data/mysql/log
启动
sudo systemctl start mariadb
mysqladmin -u root password "root"
mysql -uroot -proot
[!Warning] 上述操作需要在主从上分别执行
创建mall仓库和授权,【在主库上】
create database mall;
grant all privileges on mall.* to admin@'%' identified by 'admin';
flush privileges;
建立给从数据库授权
grant replication slave on *.* to slave@'192.168.30.%' identified by 'root';
flush privileges;
[!Warning] 导入数据库并发送给从数据库,从库在导入时应先按上述将数据库先行建立
use mall
source /root/mall/document/sql/mall.sql
[!Warning] 记住这串数字 661406
show master status;
+-------------------------+----------+--------------+------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+-------------------------+----------+--------------+------------------+
| mysql-bin-master.000003 | 661406 | | |
+-------------------------+----------+--------------+------------------+
1 row in set (0.00 sec)
[!NOTE] 从部署
vim /etc/my.cnf
[mysqld]
server-id = 2
relay-log=/data/mysql/log/relay-log-bin #中继日志文件的路径名称
relay-log-index=/data/mysql/log/slave-relay-bin.index #中继日志索引文件的路径名称
mkdir -p /data/mysql/log/
chown mysql.mysql -R /data/mysql/log
启动
sudo systemctl start mariadb
mysqladmin -u root password "root"
mysql -uroot -proot
create database mall;
use mall;
source /root/mall/document/sql/mall.sql
[!NOTE]- mall 数据库信息
+-----------------------------------------+
| Tables_in_mall |
+-----------------------------------------+
| cms_help |
| cms_help_category |
| cms_member_report |
| cms_prefrence_area |
| cms_prefrence_area_product_relation |
| cms_subject |
| cms_subject_category |
| cms_subject_comment |
| cms_subject_product_relation |
| cms_topic |
| cms_topic_category |
| cms_topic_comment |
| oms_cart_item |
| oms_company_address |
| oms_order |
| oms_order_item |
| oms_order_operate_history |
| oms_order_return_apply |
| oms_order_return_reason |
| oms_order_setting |
| pms_album |
| pms_album_pic |
| pms_brand |
| pms_comment |
| pms_comment_replay |
| pms_feight_template |
| pms_member_price |
| pms_product |
| pms_product_attribute |
| pms_product_attribute_category |
| pms_product_attribute_value |
| pms_product_category |
| pms_product_category_attribute_relation |
| pms_product_full_reduction |
| pms_product_ladder |
| pms_product_operate_log |
| pms_product_vertify_record |
| pms_sku_stock |
| sms_coupon |
| sms_coupon_history |
| sms_coupon_product_category_relation |
| sms_coupon_product_relation |
| sms_flash_promotion |
| sms_flash_promotion_log |
| sms_flash_promotion_product_relation |
| sms_flash_promotion_session |
| sms_home_advertise |
| sms_home_brand |
| sms_home_new_product |
| sms_home_recommend_product |
| sms_home_recommend_subject |
| ums_admin |
| ums_admin_login_log |
| ums_admin_permission_relation |
| ums_admin_role_relation |
| ums_growth_change_history |
| ums_integration_change_history |
| ums_integration_consume_setting |
| ums_member |
| ums_member_level |
| ums_member_login_log |
| ums_member_member_tag_relation |
| ums_member_product_category_relation |
| ums_member_receive_address |
| ums_member_rule_setting |
| ums_member_statistics_info |
| ums_member_tag |
| ums_member_task |
| ums_menu |
| ums_permission |
| ums_resource |
| ums_resource_category |
| ums_role |
| ums_role_menu_relation |
| ums_role_permission_relation |
| ums_role_resource_relation |
+-----------------------------------------+
76 rows in set (0.00 sec)
建立主从关系
在从上
[!Warning] master_log_pos 661406
mysql -uroot -proot
stop slave;
change master to master_host='192.168.30.126',master_user='slave',master_password='root',master_log_file='mysql-bin-master.000003',master_log_pos=661406;
start slave;
show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.30.126
Master_User: slave
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin-master.000003
Read_Master_Log_Pos: 661406
Relay_Log_File: relay-log-bin.000002
Relay_Log_Pos: 536
Relay_Master_Log_File: mysql-bin-master.000003
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 661406
Relay_Log_Space: 828
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 1
1 row in set (0.00 sec)
#### Slave_IO_Running: Yes and Slave_SQL_Running: Yes together mean the master-slave setup is working ####
最后,设置让其开机自启动
systemctl enable mariadb
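Before moving on, it is worth proving that replication actually flows. A quick check using the local root login set above: write a throw-away table on the master and read it back on the slave.
# on the master (192.168.30.126)
mysql -uroot -proot -e "create table mall.repl_test(id int); insert into mall.repl_test values (1);"
# on the slave (192.168.30.58), the row should show up within a second or two
mysql -uroot -proot -e "select * from mall.repl_test;"
# clean up on the master (the drop also replicates)
mysql -uroot -proot -e "drop table mall.repl_test;"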
再次配置成为双主双从
master-slave配置
vim /etc/my.cnf
server-id = 2
log-bin=/data/mysql/log/mysql-bin-master
创建mysql 的log-bin目录
mkdir -p /data/mysql/log
chown mysql.mysql -R /data
重启
systemctl restart mariadb
mysql -uroot -proot -e "grant replication slave on *.* to slave@'%' identified by 'root'"
mysql -uroot -proot -e "flush privileges"
查看master 状态
mysql -uroot -proot -e "show master status"
mysql-master配置
vim /etc/my.cnf
relay-log=/data/mysql/log/relay-log-bin
relay-log-index=/data/mysql/log/slave-relay-bin.index
重启
systemctl restart mariadb
mysql数据库从配置
stop slave;
change master to master_host='192.168.30.58',master_user='slave',master_password='root',master_log_file='mysql-bin-master.000001',master_log_pos=600;
start slave;
show slave status\G
为 mysql 集群配置 vip
安装 keepalived
yum install keepalived -y
keepalived 配置
mv /etc/keepalived/keepalived.conf{,.bak}
vim /etc/keepalived/keepalived.conf
global_defs {
    router_id MYSQL_MS
}
vrrp_script chk_mysql {
    script "/etc/keepalived/check_mysql.sh"
    interval 5
    weight -5
    fall 2
    rise 2
}
vrrp_instance VI_1 {
    state MASTER
    interface ens32
    virtual_router_id 51
    priority 120
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.30.60
    }
    track_script {
        chk_mysql
    }
}
主从的 keepalived 配置文件的差异
state MASTER  →  state BACKUP
priority: use a lower value on the backup than the master's 120
健康检查脚本[在主上]
vim /etc/keepalived/check_mysql.sh
#!/bin/bash
MYSQL_HOST="127.0.0.1"
MYSQL_USER="root"
MYSQL_PASSWORD="root"
VIP="192.168.30.60"      # the virtual IP
INTERFACE="ens32"        # the network interface

# Check whether MySQL is running
mysql_status=$(mysqladmin ping -h${MYSQL_HOST} -u${MYSQL_USER} -p${MYSQL_PASSWORD} 2>&1)
if [ $? -ne 0 ] && ip addr show ${INTERFACE} | grep -q "${VIP}"; then
    # MySQL is down but the VIP is still bound here: release it
    ip addr del ${VIP} dev ${INTERFACE}
    exit 1
fi
ip addr add ${VIP} dev ${INTERFACE} 2>/dev/null
exit 0
[!NOTE] The script checks whether this host still holds the VIP and whether the service is healthy: if the service is down while the VIP is still bound, the VIP is released. The health-check scripts for the other services below follow the same idea.
chmod +x /etc/keepalived/check_mysql.sh
启动脚本
cd /etc/keepalived
./check_mysql.sh
启动
sudo systemctl enable keepalived
sudo systemctl start keepalived
在主节点上
ip a
vip 已经生成
将主宕机
systemctl stop mariadb.service
主节点挂机之后,vip 实现漂移
当主恢复后
systemctl restart mariadb.service
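To confirm the failover from a client's point of view, connect through the VIP rather than a node IP. A minimal check, assuming the admin/admin application account created earlier:
# @@hostname / @@server_id reveal which node actually answered
mysql -h 192.168.30.60 -P 3306 -uadmin -padmin -e "select @@hostname, @@server_id;"
# stop mariadb on the master, wait a few seconds for the VIP to move, then run the query again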
[[#mongodb]]
[!warning] MongoDB 采用 二进制 的安装方式
wget https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-rhel70-5.0.23.tgz
tar -zxvf mongodb-linux-x86_64-rhel70-5.0.23.tgz -C /usr/local/
mv /usr/local/mongodb-linux-x86_64-rhel70-5.0.23/ /usr/local/mongodb
ln -s /usr/local/mongodb/bin/* /usr/local/bin/
mkdir -p /data/mongodb/{data,log}
新增配置文件
vim /usr/local/mongodb/mongodb.conf
[!Warning]- In standalone mode, comment out the security section below, or create the key file (shown further down) before the first start
storage:
  dbPath: /data/mongodb/data
systemLog:
  destination: file
  path: /data/mongodb/log/mongodb.log
  logAppend: true
net:
  bindIp: 0.0.0.0
  port: 27017
processManagement:
  fork: true
security:
  authorization: enabled
  keyFile: /data/mongodb/mongo.key
  clusterAuthMode: keyFile
replication:
  replSetName: mall_db
新增 service 脚本
vim /usr/lib/systemd/system/mongodb.service
[Unit]
Description=mongodb-server
After=network.target

[Service]
Type=forking
ExecStart=/usr/local/mongodb/bin/mongod --config /usr/local/mongodb/mongodb.conf
PrivateTmp=true

[Install]
WantedBy=multi-user.target
启动
systemctl start mongodb
systemctl enable mongodb
[!warning] 以上是单机实例
进入文件夹
cd /data/mongodb
在主上生成密钥
openssl rand -base64 756 > /data/mongodb/mongo.key
chmod 400 /data/mongodb/mongo.key
useradd -M -s /sbin/nologin mongodb
chown mongodb.mongodb -R /data/mongodb/
在传输之前,先在各个节点都把用户建立好
for i in 5 160 ; do scp mongo.key root@192.168.30.$i:/data/mongodb/ ; done
for i in 5 160 ; do scp /usr/local/mongodb/mongodb.conf root@192.168.30.$i:/usr/local/mongodb ; done
都重启一下
systemctl restart mongodb.service
在主节点上无认证登录
mongo
use admin
cfg = {
  _id: "mall_db",
  members: [
    { _id: 0, host: "192.168.30.43:27017", priority: 2 },
    { _id: 1, host: "192.168.30.5:27017", priority: 1 },
    { _id: 2, host: "192.168.30.160:27017", arbiterOnly: true }
  ]
}
rs.initiate(cfg)
rs.status()
{"set" : "mall_db","date" : ISODate("2024-10-24T03:32:56.004Z"),"myState" : 1,"term" : NumberLong(1),"syncSourceHost" : "","syncSourceId" : -1,"heartbeatIntervalMillis" : NumberLong(2000),"majorityVoteCount" : 2,"writeMajorityCount" : 2,"votingMembersCount" : 3,"writableVotingMembersCount" : 2,"optimes" : {"lastCommittedOpTime" : {"ts" : Timestamp(1729740768, 1),"t" : NumberLong(1)},"lastCommittedWallTime" : ISODate("2024-10-24T03:32:48.824Z"),"readConcernMajorityOpTime" : {"ts" : Timestamp(1729740768, 1),"t" : NumberLong(1)},"appliedOpTime" : {"ts" : Timestamp(1729740768, 1),"t" : NumberLong(1)},"durableOpTime" : {"ts" : Timestamp(1729740768, 1),"t" : NumberLong(1)},"lastAppliedWallTime" : ISODate("2024-10-24T03:32:48.824Z"),"lastDurableWallTime" : ISODate("2024-10-24T03:32:48.824Z")},"lastStableRecoveryTimestamp" : Timestamp(1729740718, 1),"electionCandidateMetrics" : {"lastElectionReason" : "electionTimeout","lastElectionDate" : ISODate("2024-10-24T03:28:18.796Z"),"electionTerm" : NumberLong(1),"lastCommittedOpTimeAtElection" : {"ts" : Timestamp(1729740488, 1),"t" : NumberLong(-1)},"lastSeenOpTimeAtElection" : {"ts" : Timestamp(1729740488, 1),"t" : NumberLong(-1)},"numVotesNeeded" : 2,"priorityAtElection" : 2,"electionTimeoutMillis" : NumberLong(10000),"numCatchUpOps" : NumberLong(0),"newTermStartDate" : ISODate("2024-10-24T03:28:18.805Z"),"wMajorityWriteAvailabilityDate" : ISODate("2024-10-24T03:28:19.766Z")},"members" : [{"_id" : 0,"name" : "192.168.30.43:27017","health" : 1,"state" : 1,"stateStr" : "PRIMARY","uptime" : 393,"optime" : {"ts" : Timestamp(1729740768, 1),"t" : NumberLong(1)},"optimeDate" : ISODate("2024-10-24T03:32:48Z"),"lastAppliedWallTime" : ISODate("2024-10-24T03:32:48.824Z"),"lastDurableWallTime" : ISODate("2024-10-24T03:32:48.824Z"),"syncSourceHost" : "","syncSourceId" : -1,"infoMessage" : "","electionTime" : Timestamp(1729740498, 1),"electionDate" : ISODate("2024-10-24T03:28:18Z"),"configVersion" : 1,"configTerm" : 1,"self" : true,"lastHeartbeatMessage" : ""},{"_id" : 1,"name" : "192.168.30.5:27017","health" : 1,"state" : 2,"stateStr" : "SECONDARY","uptime" : 287,"optime" : {"ts" : Timestamp(1729740768, 1),"t" : NumberLong(1)},"optimeDurable" : {"ts" : Timestamp(1729740768, 1),"t" : NumberLong(1)},"optimeDate" : ISODate("2024-10-24T03:32:48Z"),"optimeDurableDate" : ISODate("2024-10-24T03:32:48Z"),"lastAppliedWallTime" : ISODate("2024-10-24T03:32:48.824Z"),"lastDurableWallTime" : ISODate("2024-10-24T03:32:48.824Z"),"lastHeartbeat" : ISODate("2024-10-24T03:32:54.844Z"),"lastHeartbeatRecv" : ISODate("2024-10-24T03:32:55.858Z"),"pingMs" : NumberLong(0),"lastHeartbeatMessage" : "","syncSourceHost" : "192.168.58.43:27017","syncSourceId" : 0,"infoMessage" : "","configVersion" : 1,"configTerm" : 1},{"_id" : 2,"name" : "192.168.30.160:27017","health" : 1,"state" : 7,"stateStr" : "ARBITER","uptime" : 287,"lastHeartbeat" : ISODate("2024-10-24T03:32:54.845Z"),"lastHeartbeatRecv" : ISODate("2024-10-24T03:32:54.844Z"),"pingMs" : NumberLong(0),"lastHeartbeatMessage" : "","syncSourceHost" : "","syncSourceId" : -1,"infoMessage" : "","configVersion" : 1,"configTerm" : 1}],"ok" : 1,"$clusterTime" : {"clusterTime" : Timestamp(1729740768, 1),"signature" : {"hash" : BinData(0,"Ld0ua+4WBbtcmFjvarvT7Ks/S9g="),"keyId" : NumberLong("7429178869476753413")}},"operationTime" : Timestamp(1729740768, 1)
}
关键信息
"stateStr" : "PRIMARY",
"stateStr" : "SECONDARY",
"stateStr" : "ARBITER",
[[#redis]]
[!warning] redis 采用 yum 的方式安装
yum -y install epel-release https://repo.ius.io/ius-release-el7.rpm
yum -y install redis6
编辑配置文件 vim /etc/redis/redis.conf
[!Warning] bind 使用自己的 ip
daemonize yes
bind 127.0.0.1 -::1 192.168.30.43
requirepass 123456    # enable password authentication
启动
systemctl start redis
systemctl enable redis
[!warning] 以上是单机实例
[!warning] Run the same steps above on all three hosts
从节点配置
masterauth 123456
replica-read-only yes
replicaof 192.168.30.43 6379
重启
systemctl restart redis
在主节点上查看集群状态
redis-cli -a 123456 -c info replication
role:master
connected_slaves:2
slave0:ip=192.168.30.5,port=6379,state=online,offset=280,lag=0
slave1:ip=192.168.30.160,port=6379,state=online,offset=280,lag=0
在从节点上
redis-cli -a 123456 -p 6379 -c info replication
role:slave
master_host:192.168.30.43
master_port:6379
master_link_status:up
[!warning] 哨兵搭建
vim /etc/redis/sentinel.conf
port 26379
protected-mode no
daemonize yes
# use the master's IP here; identical on every sentinel
sentinel monitor mymaster 192.168.30.43 6379 2
sentinel auth-pass mymaster 123456
sentinel down-after-milliseconds mymaster 5000
sentinel parallel-syncs mymaster 1
sentinel failover-timeout mymaster 3000
redis-server /etc/redis/sentinel.conf --sentinel
查看哨兵状态
redis-cli -p 26379 -c info sentinel
# Sentinel
sentinel_masters:1
sentinel_tilt:0
sentinel_running_scripts:0
sentinel_scripts_queue_length:0
sentinel_simulate_failure_flags:0
master0:name=mymaster,status=ok,address=192.168.30.43:6379,slaves=2,sentinels=3
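A quick way to see which node the sentinels currently treat as master, and to watch a failover happen, assuming the sentinel port 26379 configured above:
redis-cli -p 26379 sentinel get-master-addr-by-name mymaster   # current master ip/port
# stop redis on the master (systemctl stop redis), wait 5-10s and run it again;
# the address should switch to one of the replicas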
使用 vip 实现 redis 高可用
[!Warning] redis 哨兵不存储任何数据,所以不需要为其配置 vip
yum install -y keepalived
mv /etc/keepalived/keepalived.conf{,.bak}
vim /etc/keepalived/keepalived.conf
global_defs {
    router_id REDIS
}
vrrp_script chk_redis {
    script "/etc/keepalived/check_redis.sh"
    interval 5
    weight -5
    fall 2
    rise 2
}
vrrp_instance VI_1 {
    state MASTER
    interface ens32
    virtual_router_id 51
    priority 120
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.30.150
    }
    track_script {
        chk_redis
    }
}
编写健康检查脚本
vim /etc/keepalived/check_redis.sh
#!/bin/bash
# Redis host and port
REDIS_HOST="127.0.0.1"
REDIS_PORT="6379"
# VIP and network interface
VIP="192.168.30.150"
INTERFACE="ens32"

# Check whether Redis is alive
redis-cli -h $REDIS_HOST -p $REDIS_PORT ping > /dev/null 2>&1
if [ $? -eq 0 ]; then
    # Redis is running: if the VIP is not on this interface, start keepalived so it can claim it
    if ! ip addr show $INTERFACE | grep -q "$VIP"; then
        systemctl start keepalived
    fi
else
    # Redis is not running: if the VIP is still here, stop keepalived so the VIP can fail over
    if ip addr show $INTERFACE | grep -q "$VIP"; then
        systemctl stop keepalived
    fi
    exit 1
fi
exit 0
chmod +x /etc/keepalived/check_redis.sh
启动脚本
cd /etc/keepalived
./check_redis.sh
启动
sudo systemctl enable keepalived
sudo systemctl start keepalived
[[#rabbitMQ]]
配置yum源:
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
yum clean all
yum update -y
校准时间:(可选)
sudo yum install chrony -y
sudo systemctl start chronyd
sudo systemctl enable chronyd
sudo chronyc makestep
sudo date
添加 Erlang Solutions 仓库:
sudo tee /etc/yum.repos.d/erlang-solutions.repo <<EOF
[erlang-solutions]
name=erlang-solutions
baseurl=https://packages.erlang-solutions.com/rpm/centos/\$releasever/\$basearch
enabled=1
gpgcheck=1
gpgkey=https://packages.erlang-solutions.com/rpm/erlang_solutions.asc
EOF
安装 Erlang:
sudo yum install erlang -y
导入 RabbitMQ 的 GPG 密钥:
sudo tee ./rabbitmq-release-signing-key.asc <<EOF
-----BEGIN PGP PUBLIC KEY BLOCK-----

mQINBFc6394BEACzae+l1pU31AMhJrRx4BqYv8ZCVUBOeiS3xIcgme1Oq2HSq/Vt
x49VPU9xY9ni4GjOU9c9/J9/esuigbctCN7CdR8bqN/srwqmuIPNIS/MvGhNimjO
/EUKcZtmJ5fnFk08bzjkyS/ScEzf3jdJadrercoPpbAKWnzCUblX8AdFDyDJhl65
TlSKS9+Sz0tfSdUIa0LpyJHZmLQ4chCy6KbDUAvchM2xUTIEJwx+sL4n/J6yYkZl
L90mVi4QEYl1Cajioeg9zxduoUmXq0SR5gQe6VIaXYrIk2gOEMNQL4P/4CKEn9No
1yvUP1+dSYTyvbmF+1pr16xPyNpw3ydmxDX9VxZAEnzPabB8Uortirtt0Dpopufy
TJR99dPcKV+BWJtQF6xD30kj8LaDfhyVeB6Bo+L0hhhvnZYWkps8ZJ1swcoBjir7
RDq8hJVqu8YHrzsiFL5Ut/pRkNhrK83GVOxnTndmj/MNboExD3IR/yjCiWNxC9Zu
Iaedv2ux+0KrQVTDU7I97x2GDwyiUMnKL7IKWSOTDR4osv5RlJzAovuv2+lZ8sle
ZvCEWOGeEYYM1VLDgXhPQdMwyizJ113oobxbqF+InlWq/T9mWmJDLb4wAiha3KKE
XJi8wXkJMdRQ0ftM1zKD8qBMukyVndZ6yNQrx3uHAP/Yl2XKPUbtkq/KVQARAQAB
tDBSYWJiaXRNUSBSZWxlYXNlIFNpZ25pbmcgS2V5IDxpbmZvQHJhYmJpdG1xLmNv
bT6JAjcEEwEKACEFAlc6394CGwMFCwkIBwMFFQoJCAsFFgIDAQACHgECF4AACgkQ
a3OjbmAm38qiJQ/+PkS0I+Be1jQINT2F4f8Mwq4Zxcqm4whbg6DH6zkvvqSqXFNB
wg7HVsC3qQ9Uh6OPw3dziBHmsOE50DpeqCGjHGacJ/Az/00PHKUn8eJQ/dIB1rla
PcSOBUP2CrMLLh9PbP1ZDm2/6gpInyYIRQox8k7j5PnHSVprYAA6tp/11i351WOQ
WkuN54482svVRfUEttt0NPLXtXJQl4V1eBt8+J11ZSh0mq2QSTxg211YBY0ugeVx
Q0PBIWvrNmcsnBttj5MJ/4L9nFmuemiSS3M9ONjwDBxaiaWCwxFwKXGensNOWeZy
bBfbhQxTpOKSNgyk+MymrG5EyI7fVlbmmHEhuYmV4pJadXmW1a9wvRHap/aLR1Aw
akFI29CABbnYD3ZXg+DmNqqE6um5Uem2zYr/9hfSL5KuuwawoyW8HV4gKBe+MgW1
n1lECvECt9Bn2VepjIUCv4gfHBDel5v1CXxZpTnHLt8Hsno1qTf6dGvvBYEPyTA+
cAlUeCmfjhBVNQEapUzgW0D7E8JaWHAbJPtwwp/iIO/xqEps3VGOouG+G4GPiABh
CP7hYUceecgVAF5g75gcI2mZeXAfbHVdfffZZXSYA7RjOAA1bLOopjq6UvYyIBhe
D72feGzkEPtjTpHtqttDFO9ypBEwnJjTpw2uTcBIbc6E7AThaZeEF/JC84aIRgQQ
EQoABgUCV0RROwAKCRD3uM6mBW6OVjBwAJ9j4tcWbw03rBy5j4LjP9a4EToJcwCf
TEfCiAWldVzFkDM9jBfu0V+rIwC5Ag0EVzrf3gEQAN4Nor5B6nG+Rrb0yzI7Q1sO
VM+OD6CdCN4Ic9E3u+pgsfbtRQKRuSNk8LyPVOpI5rpsJhqGKEDOUWEtb7uyfZxV
J57QhbhIiJTJsFp50mofC58Kb8+vQ4x6QKdW9dwNSH3+BzwHi6QN+b+ZFifC4J6H
q/1Ebu1b6q7aWjY7dPh2K+XgKTIq6qio9HFqUTGdj2QM0eLiQ6FDDKH0cMvVqPGD
dwJXAYoG5Br6WeYFyoBiygfaKXMVu72dL9YhyeUfGJtrZkRv6zqrkwnjWL7Xu1Rd
5gdYXV1QBz3SyBdZYS3MCbvkMLEkBCXrMG4zvReasrkanMANRQyM/XPMS5joO5dD
cvL5FDQeOy7+YlznkM5pAar2SLrJDerjVLBvXdCBX4MjsW05t3OPg6ryMId1rHbY
XtPslrCm9abox53dUtd16Gp/FSxs2TT3Wbos0/zel/zOIyj4kcVR3QjplMchlWOA
YLYO5VwM1f49/xvFOEMiyb98ameS0fFf1pNAstLodEDxgXIdzoelxbybYrRLymgD
tp3gkf53mhSN1q5Qu+/CQbSChqbcAsT8qUSdeGkvzR4qKEzDh+dEo4lheNwi7xPZ
/kj2RjaKs6jjxUWw9oyqxdGt9IwbRo+0TV+gLKUv/uj/lVKO5O3alNN37lobLQbF
5fFTrp9oXz2eerqAJFI7ABEBAAGJAh8EGAEKAAkFAlc6394CGwwACgkQa3OjbmAm
38pltg//W37vxUm6OMmXaKuLtE/G4GsM7QHD/OIvXZw+HIzyVClsM8v0+DGolOGU
Qif9HBRZfrgEWHTVeTDkynq3y7hbA2ekXEGvdKMVTt1JqRWgWPP57dAu8aVaJuR6
b4HLS0dfavXxnG1K2zunq3eARoOpynUJRzdG95JjXaLyYd1FGU6WBfyaVEnaZump
o6evG8VcH8fj/h88vhc3qlU+FdP0B8pb6QQpkqZGJeeiKP/yVFI/wQEqITIs1/ST
stzNGzIeUnNITjUCm/O2Hy+VmrYeFqFNY0SSdRriENnbcxOZN4raQfhBToe5wdgo
vUXCJaaVTd5WMGJX6Gn3GevMaLjO8YlRfcqnD7rAFUGwTKdGRjgc2NbD0L3fB2Mo
Y6SIAhEFbVWp/IExGhF+RTX0GldX/NgYMGvf6onlCRbY6By24I+OJhluD6lFaogG
vyar4hPA2PMw2LUjR5sZGHPGd65LtXviRn6E1nAJ8CM9g9s6LD5nA9A7m+FEI0rL
LVJf9GjgRbyD6QF53AZanwGUoKUPaF+Jp6HhVXNWEyc2xV1GQL+9U2/BX6zyzAZP
fVeMPOtWIF9ZPqp7nQw9hhzfYWxJRh4UZ90/ErwzKYzZLYZJcPNMSbScPVB/th/n
FfI07vQHGzzlrJi+064X5V6BdvKB25qBq67GbYw88+XcrM6R+Uk=
=tsX2
-----END PGP PUBLIC KEY BLOCK-----
EOF
sudo rpm --import rabbitmq-release-signing-key.asc
添加 RabbitMQ 仓库:
sudo tee /etc/yum.repos.d/rabbitmq.repo <<EOF
[rabbitmq-erlang]
name=rabbitmq-erlang
baseurl=https://packagecloud.io/rabbitmq/erlang/el/\$releasever/\$basearch
gpgcheck=1
gpgkey=https://packagecloud.io/rabbitmq/rabbitmq-server/gpgkey
sslcacert=/etc/pki/tls/certs/ca-bundle.crt
metadata_expire=300
[rabbitmq-server]
name=rabbitmq-server
baseurl=https://packagecloud.io/rabbitmq/rabbitmq-server/el/\$releasever/\$basearch
gpgcheck=1
gpgkey=https://packagecloud.io/rabbitmq/rabbitmq-server/gpgkey
sslcacert=/etc/pki/tls/certs/ca-bundle.crt
metadata_expire=300
EOF
安装 启动 RabbitMQ 服务器
sudo yum install rabbitmq-server -y
sudo systemctl enable rabbitmq-server --now
尝试查看集群状态:
rabbitmqctl cluster_status
[!Warning] At this point it is correct to see only a single node, since nothing has joined the cluster yet. If the command errors, first make sure ~/.erlang.cookie matches /var/lib/rabbitmq/.erlang.cookie; if ~/.erlang.cookie does not exist, restarting the service usually regenerates it (verified to work):
sudo systemctl restart rabbitmq-server
配置节点域名
cat >> /etc/hosts << EOF
192.168.30.212 rabbit1
192.168.30.127 rabbit2
EOF
确保所有节点上.erlang.cookie文件内容一致,建议各机子检查一下:
scp /var/lib/rabbitmq/.erlang.cookie 192.168.30.127:/var/lib/rabbitmq/
将rabbit2节点加入集群当中,在rabbit2上操作:
[!Warning] rabbit1 是主机名,这个地方不能使用 ip 地址代替
rabbitmqctl stop_app
rabbitmqctl reset
rabbitmqctl join_cluster --disc rabbit@rabbit1   # --disc means a disc node (the default)
rabbitmqctl start_app
需要说明的是,在执行join_cluster时,出现如下报错,是正常的:
Clustering node rabbit@node2 with rabbit@node1
00:01:14.574 [warn] Feature flags: the previous instance of this node must have failed to write the `feature_flags` file at `/var/lib/rabbitmq/mnesia/rabbit@node2-feature_flags`:
00:01:14.596 [warn] Feature flags: - list of previously enabled feature flags now marked as such: [:maintenance_mode_status]
00:01:14.619 [error] Failed to create a tracked connection table for node :rabbit@node2: {:node_not_running, :rabbit@node2}
00:01:14.620 [error] Failed to create a per-vhost tracked connection table for node :rabbit@node2: {:node_not_running, :rabbit@node2}
00:01:14.620 [error] Failed to create a per-user tracked connection table for node :rabbit@node2: {:node_not_running, :rabbit@node2}
再次查看集群状态:
rabbitmqctl cluster_status
Cluster status of node rabbit@rabbit2 ...
Basics
Cluster name: rabbit@rabbit2
Disk Nodes
rabbit@rabbit1
rabbit@rabbit2
Running Nodes
rabbit@rabbit1
rabbit@rabbit2
创建vhost,用户,并授权:
rabbitmqctl add_vhost /mall
sudo rabbitmqctl add_user mall mall
sudo rabbitmqctl set_user_tags mall publisher
sudo rabbitmqctl set_permissions -p /mall mall ".*" ".*" ".*"
查看情况:
[root@rabbit1 ~]# sudo rabbitmqctl list_permissions -p /mall
Listing permissions for vhost "/mall" ...
user configure write read
mall .* .* .*
使用 LVS + keepalived 给 rabbitmq 做高可用
[!Warning] 因为 rabbitmq 走的是 tcp 协议,所以使用四层负载均衡
yum install -y keepalived
mv /etc/keepalived/keepalived.conf{,.bak}
vim /etc/keepalived/keepalived.conf
global_defs {
    router_id RABBITMQ
}
vrrp_script chk_rabbitmq {
    script "/etc/keepalived/check_rabbitmq.sh"
    interval 5
    weight -5
    fall 2
    rise 2
}
vrrp_instance VI_1 {
    state MASTER
    interface ens32
    virtual_router_id 51
    priority 120
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.30.50
    }
    track_script {
        chk_rabbitmq
    }
}
编写健康检查脚本
vim /etc/keepalived/check_rabbitmq.sh
#!/bin/bash
# Check RabbitMQ status
rabbitmqctl status > /dev/null 2>&1
if [ $? -eq 0 ]; then
    # RabbitMQ is running: start keepalived
    systemctl start keepalived
else
    # RabbitMQ is not running: stop keepalived so the VIP can fail over
    systemctl stop keepalived
fi
chmod +x /etc/keepalived/check_rabbitmq.sh
启动脚本
cd /etc/keepalived
./check_rabbitmq.sh
启动
sudo systemctl enable keepalived
sudo systemctl start keepalived
安装配置 lvs
sudo yum install -y ipvsadm
ipvsadm -A -t 192.168.30.50:5672 -s rr
ipvsadm -a -t 192.168.30.50:5672 -r 192.168.30.212:5672 -g -w 1
ipvsadm -a -t 192.168.30.50:5672 -r 192.168.30.127:5672 -g -w 1
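To confirm LVS is really spreading connections, inspect the virtual-server table and its counters on the node that currently holds the VIP; a quick check against the 192.168.30.50:5672 service defined above:
ipvsadm -Ln            # virtual service and its real servers
ipvsadm -Ln --stats    # per-real-server connection and packet counters
ipvsadm -Lnc           # live connection table while clients talk to 192.168.30.50:5672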
[[#Elasticsearch]]
[!Warning] elasticsearch 采用 二进制 安装
[!NOTE] If downloading from the network mirror is slow, download the tarball locally first and then upload it to the VM
部署 lrzsz
[!NOTE] 将 elasticsearch-8.4.0-linux-x86_64.tar.gz 导入
yum install -y lrzsz
[!NOTE] 单机模式搭建
tar -zxvf elasticsearch-8.4.0-linux-x86_64.tar.gz -C /usr/local/
创建用户
useradd -m -s /bin/bash elk
授权
chown -R elk:elk /usr/local/elasticsearch-8.4.0/
修改linux系统打开文件最大数,并让其立即生效
vim /etc/sysctl.conf
vm.max_map_count=655360
sysctl -p
vim /etc/security/limits.conf
#修改最大打开文件数
* soft nofile 65536
* hard nofile 65536
# 修改最大进程数
* soft nproc 65536
* hard nproc 65536
安装 jdk
vim /etc/profile
export JAVA_HOME=/usr/local/elasticsearch-8.4.0/jdk
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export JAVA_HOME PATH CLASSPATH
source /etc/profile
查看java 版本
java -version
切换用户
su - elk
cd /usr/local/elasticsearch-8.4.0/
vim config/elasticsearch.yml
cluster.name: my-application
node.name: node-1
bootstrap.memory_lock: false
network.host: 0.0.0.0
http.port: 9200
cluster.initial_master_nodes: ["node-1"]
xpack.security.enabled: false
xpack.security.http.ssl:
  enabled: false
在后台启动
./bin/elasticsearch -d
[!Warning] 添加开机自启动脚本,请使用 root 用户操作
vim /etc/init.d/elasticsearch
#!/bin/sh
# chkconfig: 2345 80 05
# description: elasticsearch
# author: taft

export JAVA_HOME=/usr/local/elasticsearch-8.4.0/jdk
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export JAVA_HOME PATH CLASSPATH

case "$1" in
start)
    su elk<<!
cd /usr/local/elasticsearch-8.4.0/
./bin/elasticsearch -d
!
    echo "elasticsearch startup"
    ;;
stop)
    es_pid=`jps | grep Elasticsearch | awk '{print $1}'`
    kill -9 $es_pid
    echo "elasticsearch stopped"
    ;;
restart)
    es_pid=`jps | grep Elasticsearch | awk '{print $1}'`
    kill -9 $es_pid
    echo "elasticsearch stopped"
    su elk<<!
cd /usr/local/elasticsearch-8.4.0/
./bin/elasticsearch -d
!
    echo "elasticsearch startup"
    ;;
*)
    echo "start|stop|restart"
    ;;
esac
exit $?
chmod +x /etc/init.d/elasticsearch
下载 中文分词器IKAnalyzer
wget https://github.com/infinilabs/analysis-ik/releases/download/v8.4.0/elasticsearch-analysis-ik-8.4.0.zip
下载 unzip
yum install -y unzip
unzip elasticsearch-analysis-ik-8.4.0.zip -d /usr/local/elasticsearch-8.4.0/plugins/analysis-ik/
chown -R elk:elk /usr/local/elasticsearch-8.4.0/plugins/
/etc/init.d/elasticsearch restart
Open http://192.168.30.8:9200/ in a browser
{"name": "node-1", "cluster_name": "my-application","cluster_uuid": "PIuVAUpUT7yJQU7Dwgjrzw","version": {"number": "8.4.0","build_flavor": "default","build_type": "tar","build_hash": "f56126089ca4db89b631901ad7cce0a8e10e2fe5","build_date": "2022-08-19T19:23:42.954591481Z","build_snapshot": false,"lucene_version": "9.3.0","minimum_wire_compatibility_version": "7.17.0","minimum_index_compatibility_version": "7.0.0"},"tagline": "You Know, for Search"
}
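To confirm the IK plugin is actually loaded, run a quick analysis request against the node; a simple check using the ik_max_word analyzer that ships with the plugin:
curl -s -X POST "http://localhost:9200/_analyze" -H 'Content-Type: application/json' -d '{"analyzer": "ik_max_word", "text": "中华人民共和国"}'
# the response should contain multi-character tokens rather than single characters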
集群模式的部署
vim /usr/local/elasticsearch-8.4.0/config/elasticsearch.yml
所有的节点就在原来的基础上新加了
discovery.seed_hosts: ["192.168.30.8","192.168.30.242","192.168.30.46"]
master 节点配置文件
cluster.name: my-application
node.name: node-1
bootstrap.memory_lock: false #防止启动检测报错
network.host: 0.0.0.0 #修改监听IP
http.port: 9200
discovery.seed_hosts: ["192.168.30.8","192.168.30.242","192.168.30.46"]
cluster.initial_master_nodes: ["node-1"]
xpack.security.enabled: false #关闭安全认证,否则使用http访问不到es,第一次启动没有这个参数,第二次启动再关闭
xpack.security.http.ssl:
  enabled: false
The other nodes reuse the same file and only change the node name:
node.name: node-2   # use node-3 on the third node
再启动
/etc/init.d/elasticsearch restart
查看
http://192.168.30.8:9200/
{"name": "node-1", "cluster_name": "my-application","cluster_uuid": "PIuVAUpUT7yJQU7Dwgjrzw","version": {"number": "8.4.0","build_flavor": "default","build_type": "tar","build_hash": "f56126089ca4db89b631901ad7cce0a8e10e2fe5","build_date": "2022-08-19T19:23:42.954591481Z","build_snapshot": false,"lucene_version": "9.3.0","minimum_wire_compatibility_version": "7.17.0","minimum_index_compatibility_version": "7.0.0"},"tagline": "You Know, for Search"
}
查看集群健康状态
curl -s -X GET "http://localhost:9200/_cat/health?v"
epoch timestamp cluster status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1730011780 06:49:40 my-application green 3 3 2 1 0 0 0 0 - 100.0%
查看主节点状态
curl -s -X GET "http://localhost:9200/_cat/master?v"
id host ip node
Il-S5yt9SRe31ipmRA4gnw 192.168.30.8 192.168.30.8 node-1
查看集群详细信息
curl -s -X GET "http://localhost:9200/_cluster/state?pretty"
使用 nginx + keepalived 实现 elk 高可用
安装 keepalived
yum install keepalived -y
keepalived 配置
mv /etc/keepalived/keepalived.conf{,.bak}
vim /etc/keepalived/keepalived.conf
global_defs {
    router_id ELK
}
vrrp_script chk_elk {
    script "/etc/keepalived/check_elk.sh"
    interval 5
    weight -5
    fall 2
    rise 2
}
vrrp_instance VI_1 {
    state MASTER
    interface ens32
    virtual_router_id 51
    priority 120
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.30.200
    }
    track_script {
        chk_elk
    }
}
主从的 keepalived 配置文件的差异
state MASTER  →  state BACKUP
priority: use a lower value on the backup than the master's 120
健康检查脚本[在主上]
vim /etc/keepalived/check_elk.sh
#!/bin/bash
# Check whether the local Elasticsearch node reports a green cluster
curl -s http://127.0.0.1:9200/_cluster/health | grep -q '"status":"green"'
# Start or stop keepalived depending on the health check result
if [ $? -eq 0 ]; then
    systemctl start keepalived
else
    systemctl stop keepalived
fi
chmod +x /etc/keepalived/check_elk.sh
启动脚本
cd /etc/keepalived
./check_elk.sh
启动
sudo systemctl enable keepalived
sudo systemctl start keepalived
配置 nginx
yum install -y nginx
vim /etc/nginx/nginx.conf
upstream elk {
    server 192.168.30.8:9200;
    server 192.168.30.242:9200;
    server 192.168.30.46:9200;
}
server {
    listen 80;
    server_name 192.168.30.200;
    location / {
        proxy_pass http://elk;
    }
}
sudo systemctl enable nginx
sudo systemctl start nginx
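Once nginx and keepalived are running on both proxy nodes, the cluster should answer through the VIP; a quick check against the 192.168.30.200 address configured above:
curl -s http://192.168.30.200/_cat/nodes?v     # should list all three ES nodes
curl -s http://192.168.30.200/_cat/health?v    # cluster status should be green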
[[#minio]]
如果磁盘是后面新加的就要先执行如下操作
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 20G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 19G 0 part
  ├─centos-root 253:0 0 17G 0 lvm /
  └─centos-swap 253:1 0 2G 0 lvm [SWAP]
sdb 8:16 0 20G 0 disk
sr0 11:0 1 4.4G 0 rom
新增的盘,[新增磁盘后需要重启]
sdb 8:16 0 20G 0 disk
初始化硬盘
mkfs.ext4 /dev/sdb
创建挂载磁盘的目录
mkdir -p /data/minio/{data1,data2}
持久化挂载
echo "/dev/sdb /data/minio/ ext4 defaults 0 0" >> /etc/fstab
立即生效
mount -a
查看
df -Th
Filesystem Type Size Used Avail Use% Mounted on
devtmpfs devtmpfs 898M 0 898M 0% /dev
tmpfs tmpfs 910M 0 910M 0% /dev/shm
tmpfs tmpfs 910M 9.4M 901M 2% /run
tmpfs tmpfs 910M 0 910M 0% /sys/fs/cgroup
/dev/mapper/centos-root xfs 17G 2.0G 16G 12% /
/dev/sda1 xfs 1014M 195M 820M 20% /boot
tmpfs tmpfs 182M 0 182M 0% /run/user/0
/dev/sdb ext4 20G 45M 19G 1% /data/minio
[!Warning] minio 采用 二进制 安装
wget https://dl.minio.org.cn/server/minio/release/linux-amd64/minio
chmod +x minio
mv minio /usr/local/bin/
minio server /data/minio --console-address :9001
vim /usr/lib/systemd/system/minio.service
[Unit]
Description=MinIO
Documentation=https://min.io/docs/minio/linux/index.html
Wants=network-online.target
After=network-online.target
AssertFileIsExecutable=/usr/local/bin/minio

[Service]
WorkingDirectory=/usr/local
User=root
Group=root
ProtectProc=invisible
EnvironmentFile=-/etc/default/minio
ExecStartPre=/bin/bash -c "if [ -z \"${MINIO_VOLUMES}\" ]; then echo \"Variable MINIO_VOLUMES not set in /etc/default/minio\"; exit 1; fi"
ExecStart=/usr/local/bin/minio server $MINIO_OPTS $MINIO_VOLUMES

# MinIO RELEASE.2023-05-04T21-44-30Z adds support for Type=notify (https://www.freedesktop.org/software/systemd/man/systemd.service.html#Type=)
# This may improve systemctl setups where other services use `After=minio.server`
# Uncomment the line to enable the functionality
# Type=notify

# Let systemd restart this service always
Restart=always

# Specifies the maximum file descriptor number that can be opened by this process
LimitNOFILE=65536

# Specifies the maximum number of threads this process can create
TasksMax=infinity

# Disable timeout logic and wait until process is stopped
TimeoutStopSec=infinity
SendSIGKILL=no

[Install]
WantedBy=multi-user.target
创建服务配置文件
[!warning] 编辑配置文件 /etc/default/minio(严格要求配置文件来写,否则读不到磁盘)
MINIO_ROOT_USER=minioadmin
MINIO_ROOT_PASSWORD=minioadmin
MINIO_OPTS="--console-address :9001"
MINIO_VOLUMES="http://192.168.30.12/data/minio/data1 http://192.168.30.12/data/minio/data2 http://192.168.30.68/data/minio/data1 http://192.168.30.68/data/minio/data2"
启动
systemctl start minio && systemctl enable minio
测试
[!Important] 输入192.168.30.12:9000可以访问到minio 网站。集群搭建完毕
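mall-admin later expects a bucket named mall (see its application-prod.yml further down), which nothing above creates. A possible way to do it with the MinIO client mc; the client download path mirrors the server URL used above and is an assumption, as is the anonymous-download policy step:
wget https://dl.minio.org.cn/client/mc/release/linux-amd64/mc    # assumed mirror path for the mc client
chmod +x mc && mv mc /usr/local/bin/
mc alias set mall http://192.168.30.12:9000 minioadmin minioadmin   # credentials from the service planning table
mc mb mall/mall                        # create the bucket used by mall-admin
mc anonymous set download mall/mall    # newer mc releases; older ones use `mc policy set download mall/mall`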
使用 nginx + keepalived 实现 minio 的高可用
安装 keepalived
yum install keepalived -y
keepalived 配置
mv /etc/keepalived/keepalived.conf{,.bak}
vim /etc/keepalived/keepalived.conf
global_defs {
    router_id MINIO
}
vrrp_script chk_minio {
    script "/etc/keepalived/check_minio.sh"
    interval 5
    weight -5
    fall 2
    rise 2
}
vrrp_instance VI_1 {
    state MASTER
    interface ens32
    virtual_router_id 51
    priority 120
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.30.100
    }
    track_script {
        chk_minio
    }
}
编写健康检查脚本
vim /etc/keepalived/check_minio.sh
#!/bin/bash
# MinIO host and port
MINIO_HOST="127.0.0.1"
MINIO_PORT="9000"
# VIP and network interface
VIP="192.168.30.100"
INTERFACE="ens32"

# Check whether the MinIO port is reachable
if nc -z $MINIO_HOST $MINIO_PORT; then
    # MinIO is running: if the VIP is not on this interface, start keepalived so it can claim it
    if ! ip addr show $INTERFACE | grep -q "$VIP"; then
        systemctl start keepalived
    fi
else
    # MinIO is not running: if the VIP is still here, stop keepalived so the VIP can fail over
    if ip addr show $INTERFACE | grep -q "$VIP"; then
        systemctl stop keepalived
    fi
    exit 1
fi
exit 0
chmod +x /etc/keepalived/check_minio.sh
The check script uses nc, so install the nmap-ncat package before running it
yum install -y nmap-ncat
cd /etc/keepalived
./check_minio.sh
启动 keepalived
sudo systemctl enable keepalived
sudo systemctl start keepalived
配置 nginx
upstream minio {
    server 192.168.30.12:9000;
    server 192.168.30.68:9000;
}
server {
    listen 80;
    server_name 192.168.30.100;
    location / {
        proxy_pass http://minio;
    }
}
启动 nginx
systemctl enable nginx --now
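A quick sanity check through the VIP using MinIO's standard liveness endpoint:
curl -I http://192.168.30.100/minio/health/live   # HTTP 200 means a backend answered through the proxy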
[[#nginx 前端]]
mall-admin-web后台部署
拉取源代码
[root@nginx-frontend ~]# git clone https://github.com/macrozheng/mall-admin-web.git
获取node.js,解压缩并修改属组
wget https://nodejs.org/dist/v12.14.0/node-v12.14.0-linux-x64.tar.gz
tar -xzvf node-v12.14.0-linux-x64.tar.gz -C /usr/local/
chown -R root:root /usr/local/node-v12.14.0-linux-x64/
配置PATH环境变量
echo 'export PATH=/usr/local/node-v12.14.0-linux-x64/bin:$PATH' >>/etc/profile
source /etc/profile
node -v
npm配置淘宝源:
npm config set registry https://registry.npmmirror.com
npm config list
构建mall-admin-web:
修改配置文件:
[root@nginx-frontend mall-admin-web]# vim config/prod.env.js
'use strict'
module.exports = {
  NODE_ENV: '"production"',
  BASE_API: '"http://192.168.30.71:8080"'
}
构建:
[root@nginx-frontend mall-admin-web]# npm cache clean --force
[root@nginx-frontend mall-admin-web]# yum groupinstall -y Development\ Tools
[root@nginx-frontend mall-admin-web]# npm install
[root@nginx-frontend mall-admin-web]# npm run build
mall-app-web 部署
安装nginx:
yum install -y epel-release
yum install -y nginx
systemctl enable nginx.service --now
部署静态页面:
rm -rf /usr/share/nginx/html/*
cp -r /root/mall-admin-web/dist/* /usr/share/nginx/html/
访问测试:
http://192.168.30.91/
default account: admin
default password: macro123
[!warning] You may hit a login failure at this point
[!warning] This is almost always caused by config/prod.env.js: change https to http in BASE_API, rebuild with npm run build, then redeploy the dist directory with the commands below
rm -rf /usr/share/nginx/html/*
cp -r /root/mall-admin-web/dist/* /usr/share/nginx/html/
mall-app-web部署(前端的前台)
Windows上安装git:
git下载链接
安装Hbuilder X:
HBulider X 下载链接
点开后输入:
https://gitee.com/macrozheng/mall-app-web.git
[!Note] 打开 Hbuilderx 》 文件 》导入 》 从 git 导入
打开项目后修改以下文件:
修改 API_BASE_URL 的 ip 地址为后端服务器的 ip 地址
启动:
[[#nginx 后端]]
拉取代码:
yum install -y git
git clone https://gitee.com/macrozheng/mall
安装jdk和Maven:
[!INFO] JDK 下载地址
jdk
卸载原本的 jdk
yum remove java-* -y
安装
tar -zxvf jdk-8u171-linux-x64.tar.gz -C /usr/local/
vim /etc/profile
JAVA_HOME=/usr/local/jdk1.8.0_171
PATH=$JAVA_HOME/bin:$PATH
CLASSPATH=$JAVA_HOME/jre/lib/ext:$JAVA_HOME/lib/tools.jar
export PATH JAVA_HOME CLASSPATH
source /etc/profile
安装 maven
Maven获取地址:
Upload the tarball, then extract it and link the binaries:
tar xzvf apache-maven-3.8.8-bin.tar.gz -C /usr/local/
ln -s /usr/local/apache-maven-3.8.8/bin/* /usr/local/bin/
查看版本:
-bash-4.2# mvn -v
Apache Maven 3.8.8 (4c87b05d9aedce574290d1acc98575ed5eb6cd39)
Maven home: /usr/local/apache-maven-3.8.8
Java version: 1.8.0_412, vendor: Red Hat, Inc., runtime: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.412.b08-1.el7_9.x86_64/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "3.10.0-1160.119.1.el7.x86_64", arch: "amd64", family: "unix"
若显示文件不存在,执行:
echo 'PATH=$PATH:/usr/local/bin' >> ~/.bashrc
source ~/.bashrc
mvn -v
修改Maven仓库源:
vim /usr/local/apache-maven-3.8.8/conf/settings.xml
#在158行下添加阿里仓库,加速下载
<mirror>
  <id>aliyunmaven</id>
  <mirrorOf>*</mirrorOf>
  <name>阿里云公共仓库</name>
  <url>https://maven.aliyun.com/repository/public</url>
</mirror>
注释mall/pom.xml中的 218-224行
<!--如果想在项目打包时构建镜像添加-->
<!--
<execution>
  <id>build-image</id>
  <phase>package</phase>
  <goals>
    <goal>build</goal>
  </goals>
</execution>
-->
编译mall/下的依赖:
[root@nginx-backend mall]$ mvn clean install -pl mall-common,mall-mbg,mall-security -am
mall-admin的部署:
修改mall-admin配置文件:
[root@nginx-backend mall]$ vim mall-admin/src/main/resources/application-prod.yml
spring:
  datasource:
    url: jdbc:mysql://192.168.30.126:3306/mall?useUnicode=true&characterEncoding=utf-8&serverTimezone=Asia/Shanghai&useSSL=false
    username: admin
    password: admin
    druid:
      initial-size: 5       # initial connection pool size
      min-idle: 10          # minimum idle connections
      max-active: 20        # maximum connections
      web-stat-filter:
        exclusions: "*.js,*.gif,*.jpg,*.png,*.css,*.ico,/druid/*"   # do not collect stats for these requests
      stat-view-servlet:    # login credentials for the Druid monitoring page
        login-username: druid
        login-password: druid
  redis:
    host: redis             # Redis server address
    database: 0             # Redis database index (default 0)
    port: 6379              # Redis port
    password:               # Redis password (empty by default)
    timeout: 300ms          # connection timeout (ms)
minio:
  endpoint: http://192.168.30.12:9000   # MinIO endpoint
  bucketName: mall                      # bucket name
  accessKey: minioadmin                 # access key
  secretKey: minioadmin                 # secret key
logging:
  file:
    path: /var/logs
  level:
    root: info
    com.macro.mall: info
logstash:
  host: logstash
构建mall-admin:
[root@nginx-backend mall]$ cd mall-admin/
[root@nginx-backend mall-admin]$ mvn clean package
构建成功会在该目录生成target文件夹,其中包含一个打好的jar包
启动mall-admin:
mkdir -p /mall/mall-admin
cp /root/mall/mall-admin/target/mall-admin-1.0-SNAPSHOT.jar /mall/mall-admin/
nohup java -jar -Dspring.profiles.active=prod /mall/mall-admin/mall-admin-1.0-SNAPSHOT.jar &> /tmp/mall-admin.log &
查看运行结果:
tail -f /tmp/mall-admin.log
![[Pasted image 20241024201830.png]]
访问:http://192.168.30.71:8080/swagger-ui/
![[Pasted image 20241027173655.png]]
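nohup keeps the process up only until the next crash or reboot. A possible alternative is a small systemd unit; this is a sketch of an assumed mall-admin.service, not part of the original project (the java path matches the JDK installed earlier):
# /usr/lib/systemd/system/mall-admin.service (hypothetical)
[Unit]
Description=mall-admin
After=network.target

[Service]
ExecStart=/usr/local/jdk1.8.0_171/bin/java -jar -Dspring.profiles.active=prod /mall/mall-admin/mall-admin-1.0-SNAPSHOT.jar
Restart=always

[Install]
WantedBy=multi-user.target
Then systemctl daemon-reload && systemctl enable mall-admin --now replaces the nohup line; the same pattern applies to mall-portal and mall-search below.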
Mall-portal的部署
修改mall-portal配置文件:
[root@nginx-backend mall]$ vim mall-portal/src/main/resources/application-prod.yml
server:
  port: 8085
spring:
  datasource:
    url: jdbc:mysql://192.168.30.126:3306/mall?useUnicode=true&characterEncoding=utf-8&serverTimezone=Asia/Shanghai&useSSL=false
    username: admin
    password: admin
    druid:
      initial-size: 5       # initial connection pool size
      min-idle: 10          # minimum idle connections
      max-active: 20        # maximum connections
      web-stat-filter:
        exclusions: "*.js,*.gif,*.jpg,*.png,*.css,*.ico,/druid/*"   # do not collect stats for these requests
      stat-view-servlet:    # login credentials for the Druid monitoring page
        login-username: druid
        login-password: druid
  data:
    mongodb:
      host: 192.168.30.43
      port: 27017
      database: mall-port
  redis:
    host: 192.168.30.43     # Redis server address
    database: 0             # Redis database index (default 0)
    port: 6379              # Redis port
    password:               # Redis password (empty by default)
    timeout: 300ms          # connection timeout (ms)
  rabbitmq:
    host: 192.168.30.212
    port: 5672
    virtual-host: /mall
    username: mall
    password: mall
mongo:
  insert:
    sqlEnable: true         # whether to seed MongoDB from the database data
logging:
  file:
    path: /var/logs
  level:
    root: info
    com.macro.mall: info
logstash:
  host: logstash
alipay:
  gatewayUrl: https://openapi-sandbox.dl.alipaydev.com/gateway.do
  appId: your appId
  alipayPublicKey: your alipayPublicKey
  appPrivateKey: your appPrivateKey
  returnUrl: http://192.168.3.101:8060/#/pages/money/paySuccess
  notifyUrl:
构建mall-portal
[root@nginx-backend mall]$ cd mall-portal/
[root@nginx-backend mall-portal]$ mvn clean package
启动mall-portal:
[root@nginx-backend mall-portal]$ mkdir /mall/mall-portal
[root@nginx-backend mall-portal]$ cp target/mall-portal-1.0-SNAPSHOT.jar /mall/mall-portal/
[root@nginx-backend mall-portal]$ nohup java -jar -Dspring.profiles.active=prod /mall/mall-portal/mall-portal-1.0-SNAPSHOT.jar &> /tmp/mall-portal.log &
通过 8085 访问
http://192.168.30.71:8085/swagger-ui/
mall-search的部署:
修改配置文件:
[root@nginx-backend mall-search]# cat src/main/resources/application-prod.yml
spring:
  datasource:
    url: jdbc:mysql://192.168.30.126:3306/mall?useUnicode=true&characterEncoding=utf-8&serverTimezone=Asia/Shanghai&useSSL=false
    username: admin
    password: admin
    druid:
      initial-size: 5       # initial connection pool size
      min-idle: 10          # minimum idle connections
      max-active: 20        # maximum connections
      web-stat-filter:
        exclusions: "*.js,*.gif,*.jpg,*.png,*.css,*.ico,/druid/*"   # do not collect stats for these requests
      stat-view-servlet:    # login credentials for the Druid monitoring page
        login-username: druid
        login-password: druid
  data:
    elasticsearch:
      repositories:
        enabled: true
  elasticsearch:
    uris: 192.168.30.8:9200
logging:
  file:
    path: /var/logs
  level:
    root: info
    com.macro.mall: info
logstash:
  host: logstash
构建mall-search:
[root@nginx-backend mall-search]# mvn clean package
启动mall-search :
[root@nginx-backend mall-search]$ mkdir /mall/mall-search
[root@nginx-backend mall-search]$ cp target/mall-search-1.0-SNAPSHOT.jar /mall/mall-search/
[root@nginx-backend mall-search]$ nohup java -jar -Dspring.profiles.active=prod /mall/mall-search/mall-search-1.0-SNAPSHOT.jar &> /tmp/mall-search.log &
http://192.168.30.71:8081/swagger-ui/
用 nginx和keepalived 为后端实现负载均衡
yum -y install nginx keepalived
vim /etc/nginx/nginx.conf
listen 808;
listen [::]:808;
vim /etc/nginx/conf.d/mall.conf
# add the upstream blocks above the server blocks
upstream mall-admin-web {
    server 192.168.30.71 weight=1 max_fails=1 fail_timeout=10s;
    server 192.168.30.91 weight=2 max_fails=1 fail_timeout=10s;
}
upstream mall-admin {
    server 192.168.30.71:8080 weight=1 max_fails=1 fail_timeout=10s;
    server 192.168.30.91:8080 weight=2 max_fails=1 fail_timeout=10s;
}
upstream mall-portal {
    server 192.168.30.71:8085 weight=1 max_fails=1 fail_timeout=10s;
    server 192.168.30.91:8085 weight=2 max_fails=1 fail_timeout=10s;
}
upstream mall-search {
    server 192.168.30.71:8081 weight=1 max_fails=1 fail_timeout=10s;
    server 192.168.30.91:8081 weight=2 max_fails=1 fail_timeout=10s;
}
server {
    listen 80;
    server_name localhost;
    location / {
        proxy_pass http://mall-admin-web;
    }
}
server {
    listen 8080;
    server_name localhost;
    location / {
        proxy_pass http://mall-admin;
    }
}
server {
    listen 8085;
    server_name localhost;
    location / {
        proxy_pass http://mall-portal;
    }
}
server {
    listen 8081;
    server_name localhost;
    location / {
        proxy_pass http://mall-search;
    }
}
检查 nginx 是否报错
nginx -t
systemctl restart nginx
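The section title mentions keepalived, but only the nginx side is shown above. A minimal sketch of the keepalived part, following the same pattern as the earlier sections; the VIP 192.168.30.70, the router_id, and the simple pidof check are assumptions, pick values that fit your network:
# /etc/keepalived/keepalived.conf (sketch; the BACKUP node uses state BACKUP and a lower priority)
global_defs {
    router_id MALL_LB
}
vrrp_script chk_nginx {
    script "pidof nginx"          # simple liveness check for nginx
    interval 5
    weight -5
    fall 2
    rise 2
}
vrrp_instance VI_1 {
    state MASTER
    interface ens32
    virtual_router_id 52          # different from the IDs used by the other clusters
    priority 120
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.30.70             # assumed VIP for the backend load balancer
    }
    track_script {
        chk_nginx
    }
}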