
1 Cluster Planning

1.1 Planning Overview

  On the host, create disk images and attach them as virtual block devices, associated in losetup -f availability order with loop0, loop1, loop2 and loop3; alternatively, attach four cloud virtual disks vdb, vdc, vdd and vde directly.
  Configure UDEV rules that use the .img disk-image file name (path) or the disk serial number as the unique key, creating four persistent symlinks DCR, VOTE, DMDATA and DMLOG under /dev. This protects against block-device names being reassigned after a host reboot or similar events, which would change what loop* / vd* point to and leave plain links dangling.
  The DMDSC cluster consists of two nodes, realized as the containers DSC_Node1 and DSC_Node2. At container initialization, only the persistent symlinks are mounted into the container's /dev, rather than mounting the host's entire /dev directory, which would run counter to how devices are managed and scoped on the host versus inside a Docker container. It also avoids the case where /dev holds many disks, possibly belonging to several DMDSC systems, so that DMDSC scans DCR_DISK_LOAD_PATH at startup, finds multiple DMDSC systems under that path, and fails to start.
  Containers follow a lean "use and discard" design. Without the --privileged flag, most devices under the host's /dev are not mounted into the container's /dev at initialization. With --privileged, the host does share loop* / vd* and most other devices with the container's /dev, but not the persistent symlinks DCR, VOTE, DMDATA and DMLOG defined by the UDEV rules, so these links must be mounted manually.
  After the four persistent symlinks are mounted from the host into the two containers, they are symlinked a second time inside each container into /dev_DSC2. This keeps dmasmsvr from scanning the other block devices under /dev at startup and failing because the container has only read-only access to host devices such as vda.

  Network planning for the cluster is detailed in Table 1-1; DSC_Node1 is the control node:

Table 1-1 Cluster network planning

Node container   Instance name   IP            PORT_NUM
DSC_Node1        CSS0            192.168.2.2   9836
                 ASM0                          5836
                 DSC01                         6636
DSC_Node2        CSS1            192.168.2.3   9837
                 ASM1                          5837
                 DSC02                         6637

  Generally, of the four shared-storage disks, the two smaller ones (1 GB) are used to create the DCR and VOTE disks, while the two larger ones (2 TB) are used to create the ASM disk groups (the data disk group DMDATA and the online redo log disk group DMLOG).

  Directory planning is shown in Table 1-2:

Table 1-2 Directory planning

Directory                        Purpose
/opt/dmdbms/dmdsc                Directory created as user dmdba for building the DSC environment
/opt/dmdbms/dmdsc/bin            DM executables and tools
/opt/dmdbms/dmdsc/data/DSC01     Configuration files for node DSC_Node1
/opt/dmdbms/dmdsc/data/DSC02     Configuration files for node DSC_Node2

1.2 Implementing the Plan

  Create the network dmdsc-test. To avoid clashing with the subnet used earlier by the data guard cluster, this time the subnet is 192.168.2.0/24:

docker network create --driver=bridge --subnet=192.168.2.0/24 dmdsc-test
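
  To confirm that the subnet was created as intended, the network definition can be inspected (an optional check, not part of the original steps):

docker network inspect dmdsc-test | grep -i subnet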

  Initialize the disk images on the host, then attach each image to a loop* virtual block device:

[root@VM-8-6-centos ~]$ dd if=/dev/zero of=/root/DCR.img bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 0.935396 s, 1.1 GB/s
[root@VM-8-6-centos ~]$ dd if=/dev/zero of=/root/VOTE.img bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 0.913526 s, 1.2 GB/s
[root@VM-8-6-centos ~]$ dd if=/dev/zero of=/root/DMDATA.img bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 0.951041 s, 1.1 GB/s
[root@VM-8-6-centos ~]$ dd if=/dev/zero of=/root/DMLOG.img bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 0.924813 s, 1.2 GB/s

[root@VM-8-6-centos dev]$ losetup /dev/loop0 /root/DCR.img
[root@VM-8-6-centos dev]$ losetup /dev/loop1 /root/VOTE.img
[root@VM-8-6-centos dev]$ losetup /dev/loop2 /root/DMDATA.img
[root@VM-8-6-centos dev]$ losetup /dev/loop3 /root/DMLOG.img
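
  The loop associations can be double-checked with losetup (optional; the exact output depends on the machine):

losetup -a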

  The attachment above is only an example: the virtual block devices are far too small (only 1 GB each), which can cause problems later when the DMLOG disk is initialized. Whenever possible, attach large, real disks directly.
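
  If image files must be used anyway, one workaround (our assumption, not part of the original procedure) is to create sparse image files of a realistic size before attaching them:

# Hypothetical sparse 2T image: blocks are only allocated as they are written,
# so this does not require 2T of free disk space up front.
truncate -s 2T /root/DMDATA.img
losetup /dev/loop2 /root/DMDATA.img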
  If virtual disks are attached directly, query their globally unique disk serial numbers:

[root@VM-8-6-centos rules.d]$ udevadm info --name=/dev/vdb | grep -i serial
E: ID_SERIAL=disk-rglh5yxt
[root@VM-8-6-centos rules.d]$ udevadm info --name=/dev/vdc | grep -i serial
E: ID_SERIAL=disk-77ajhl0n
[root@VM-8-6-centos rules.d]$ udevadm info --name=/dev/vdd | grep -i serial
E: ID_SERIAL=disk-mg0ylaef
[root@VM-8-6-centos rules.d]$ udevadm info --name=/dev/vde | grep -i serial
E: ID_SERIAL=disk-7v5mxxet

  Configure the UDEV rules by creating the file 66-dmdevices.rules under /etc/udev/rules.d. The 66 prefix sets the load order: within the directory, rules with smaller numbers load first, and for equal numbers the first letter following the numeric prefix decides the order:

[root@VM-8-6-centos rules.d]$ cat 66-dmdevices.rules
KERNEL=="vd*", SUBSYSTEM=="block", ENV{ID_SERIAL}=="disk-rglh5yxt", SYMLINK+="DCR", OWNER="dmdba", GROUP="dinstall", MODE="0660"
KERNEL=="vd*", SUBSYSTEM=="block", ENV{ID_SERIAL}=="disk-77ajhl0n", SYMLINK+="VOTE", OWNER="dmdba", GROUP="dinstall", MODE="0660"
KERNEL=="vd*", SUBSYSTEM=="block", ENV{ID_SERIAL}=="disk-mg0ylaef", SYMLINK+="DMDATA", OWNER="dmdba", GROUP="dinstall", MODE="0660"
KERNEL=="vd*", SUBSYSTEM=="block", ENV{ID_SERIAL}=="disk-7v5mxxet", SYMLINK+="DMLOG", OWNER="dmdba", GROUP="dinstall", MODE="0660"

  Once configured, restart the systemd-udev-trigger service to apply the rules. UDEV then automatically maintains the symbolic links created by its rules under /dev, including creating, updating and deleting them:

[root@VM-8-6-centos rules.d]$ systemctl restart systemd-udev-trigger
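
  Alternatively, the same effect can be achieved without restarting the service by asking udevd to reload and re-trigger its rules (a standard udevadm sequence, not from the original):

udevadm control --reload-rules
udevadm trigger --type=devices --subsystem-match=block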

  At this point, ls /dev shows that the persistent symlinks have been created.

  Create the containers DSC_Node1 and DSC_Node2, assign each an IP address in the dmdsc-test subnet, and bind-mount the host's four persistent symlinks under the same names into each container's /dev directory:

docker run -d --restart=always --name=DSC_Node1 --network dmdsc-test --ip 192.168.2.2  --privileged=true -e LD_LIBRARY_PATH=/opt/dmdbms/bin -e PAGE_SIZE=32 -e EXTENT_SIZE=32 -e LOG_SIZE=2048 -e UNICODE_FLAG=1 -e INSTANCE_NAME=DSC_Node1 --mount type=bind,source=/dev/DCR,target=/dev/DCR --mount type=bind,source=/dev/VOTE,target=/dev/VOTE --mount type=bind,source=/dev/DMDATA,target=/dev/DMDATA --mount type=bind,source=/dev/DMLOG,target=/dev/DMLOG dm8_single:dm8_20240715_rev232765_x86_rh6_64

docker run -d --restart=always --name=DSC_Node2 --network dmdsc-test --ip 192.168.2.3  --privileged=true -e LD_LIBRARY_PATH=/opt/dmdbms/bin -e PAGE_SIZE=32 -e EXTENT_SIZE=32 -e LOG_SIZE=2048 -e UNICODE_FLAG=1 -e INSTANCE_NAME=DSC_Node2 --mount type=bind,source=/dev/DCR,target=/dev/DCR --mount type=bind,source=/dev/VOTE,target=/dev/VOTE --mount type=bind,source=/dev/DMDATA,target=/dev/DMDATA --mount type=bind,source=/dev/DMLOG,target=/dev/DMLOG dm8_single:dm8_20240715_rev232765_x86_rh6_64
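
  To confirm that both containers are running and received the intended addresses (optional check):

docker ps --filter name=DSC_Node
docker inspect -f '{{(index .NetworkSettings.Networks "dmdsc-test").IPAddress}}' DSC_Node1 DSC_Node2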

  Because the DM8 Docker edition starts its database service automatically once created, stop the service on both nodes first:

root@a0d20641b3cc:/opt/dmdbms/bin$ ./DmService stop
Stopping DmService:                                        [ OK ]

root@6fbff487ae8f:/opt/dmdbms/bin$ ./DmService stop
Stopping DmService:                                        [ OK ]

  Create the planned directories in nodes DSC_Node1 and DSC_Node2:

# DSC_Node1
root@a0d20641b3cc:/$ mkdir -p /opt/dmdbms/dmdsc/bin
root@a0d20641b3cc:/$ mkdir -p /opt/dmdbms/dmdsc/data/DSC01
root@a0d20641b3cc:/$ mkdir -p /dev_DSC2

# DSC_Node2
root@6fbff487ae8f:/$ mkdir -p /opt/dmdbms/dmdsc/bin
root@6fbff487ae8f:/$ mkdir -p /opt/dmdbms/dmdsc/data/DSC02
root@6fbff487ae8f:/$ mkdir -p /dev_DSC2

  Symlink the host links mounted under /dev once more, under the same names, into /dev_DSC2:

root@a0d20641b3cc:/$ ln -s /dev/DCR /dev_DSC2/DCR
root@a0d20641b3cc:/$ ln -s /dev/VOTE /dev_DSC2/VOTE
root@a0d20641b3cc:/$ ln -s /dev/DMDATA /dev_DSC2/DMDATA
root@a0d20641b3cc:/$ ln -s /dev/DMLOG /dev_DSC2/DMLOG

root@6fbff487ae8f:/$ ln -s /dev/DCR /dev_DSC2/DCR
root@6fbff487ae8f:/$ ln -s /dev/VOTE /dev_DSC2/VOTE
root@6fbff487ae8f:/$ ln -s /dev/DMDATA /dev_DSC2/DMDATA
root@6fbff487ae8f:/$ ln -s /dev/DMLOG /dev_DSC2/DMLOG
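
  Inside each container, the chain should now resolve end to end: /dev_DSC2/* points at /dev/*, which is the bind-mounted host link (optional check):

ls -l /dev_DSC2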

2 Configuring dmdcr_cfg.ini

  dmdcr_cfg.ini goes under /opt/dmdbms/dmdsc/data/DSC01 on DSC_Node1 and /opt/dmdbms/dmdsc/data/DSC02 on DSC_Node2. Its content is:

DCR_N_GRP               = 3
DCR_VTD_PATH            = /dev_DSC2/VOTE
DCR_OGUID               = 237589

[GRP]
DCR_GRP_TYPE            = CSS
DCR_GRP_NAME            = GRP_CSS
DCR_GRP_N_EP            = 2
DCR_GRP_DSKCHK_CNT      = 60

[GRP_CSS]
DCR_EP_NAME             = CSS0
DCR_EP_HOST             = 192.168.2.2
DCR_EP_PORT             = 9836

[GRP_CSS]
DCR_EP_NAME             = CSS1
DCR_EP_HOST             = 192.168.2.3
DCR_EP_PORT             = 9837

[GRP]
DCR_GRP_TYPE            = ASM
DCR_GRP_NAME            = GRP_ASM
DCR_GRP_N_EP            = 2
DCR_GRP_DSKCHK_CNT      = 60

[GRP_ASM]
DCR_EP_NAME             = ASM0
DCR_EP_SHM_KEY          = 64735
DCR_EP_SHM_SIZE         = 512
DCR_EP_HOST             = 192.168.2.2
DCR_EP_PORT             = 5836
DCR_EP_ASM_LOAD_PATH    = /dev_DSC2

[GRP_ASM]
DCR_EP_NAME             = ASM1
DCR_EP_SHM_KEY          = 54736
DCR_EP_SHM_SIZE         = 512
DCR_EP_HOST             = 192.168.2.3
DCR_EP_PORT             = 5837
DCR_EP_ASM_LOAD_PATH    = /dev_DSC2

[GRP]
DCR_GRP_TYPE            = DB
DCR_GRP_NAME            = GRP_DSC
DCR_GRP_N_EP            = 2
DCR_GRP_DSKCHK_CNT      = 60

[GRP_DSC]
DCR_EP_NAME             = DSC01
DCR_EP_SEQNO            = 0
DCR_EP_PORT             = 6636

[GRP_DSC]
DCR_EP_NAME             = DSC02
DCR_EP_SEQNO            = 1
DCR_EP_PORT             = 6637

3 Initializing the Disks with DMASMCMD

  On DSC_Node1, initialize all disks with the DMASMCMD tool:

root@a0d20641b3cc:/opt/dmdbms/bin$ ./dmasmcmd
dmasmcmd V8
ASM>create dcrdisk '/dev_DSC2/DCR' 'DCR'
[TRACE]The ASM initialize dcrdisk /dev_DSC2/DCR to name DMASMDCR
Used time: 13.898(ms).
ASM>create votedisk '/dev_DSC2/VOTE' 'VOTE'
[TRACE]The ASM initialize votedisk /dev_DSC2/VOTE to name DMASMVOTE
Used time: 20.957(ms).
ASM>create asmdisk '/dev_DSC2/DMDATA' 'DMDATA'
[TRACE]The ASM initialize asmdisk /dev_DSC2/DMDATA to name DMASMDMDATA
Used time: 22.817(ms).
ASM>create asmdisk '/dev_DSC2/DMLOG' 'DMLOG'
[TRACE]The ASM initialize asmdisk /dev_DSC2/DMLOG to name DMASMDMLOG
Used time: 12.181(ms).
ASM>init dcrdisk '/dev_DSC2/DCR' from '/opt/dmdbms/dmdsc/data/DSC01/dmdcr_cfg.ini' identified by 'SYSDBA'
[TRACE]DG 126 alloc extent for inode (0, 0, 1)
[TRACE]DG 126 alloc 4 extents for 0xfe000002 (0, 0, 2)->(0, 0, 5)
Used time: 261.193(ms).
ASM>init votedisk '/dev_DSC2/VOTE' from '/opt/dmdbms/dmdsc/data/DSC01/dmdcr_cfg.ini'
[TRACE]DG 125 alloc extent for inode (0, 0, 1)
[TRACE]DG 125 alloc 4 extents for 0xfd000002 (0, 0, 2)->(0, 0, 5)
Used time: 175.005(ms).

  In fact, this initialization can be done from anywhere that has the DMASMCMD tool. In a real shared-storage environment, however, DMASMCMD exists only under /opt/dmdbms/bin of a database instance, so it has to be run from some instance connected to the shared storage.
  In a "container on host" setup like this one, if the host itself has the DMASMCMD tool, the steps above can also be performed from the host, with dmdcr_cfg.ini placed in any suitable host directory.

4 Configuring dmasvrmal.ini

  Configure dmasvrmal.ini with identical content under /opt/dmdbms/dmdsc/data/DSC01 on DSC_Node1 and /opt/dmdbms/dmdsc/data/DSC02 on DSC_Node2:

[MAL_INST1]
MAL_INST_NAME           = ASM0
MAL_HOST                = 192.168.2.2
MAL_PORT                = 4836

[MAL_INST2]
MAL_INST_NAME           = ASM1
MAL_HOST                = 192.168.2.3
MAL_PORT                = 4837

5 Configuring dmdcr.ini

  Configure dmdcr.ini under /opt/dmdbms/dmdsc/data/DSC01 on DSC_Node1 and /opt/dmdbms/dmdsc/data/DSC02 on DSC_Node2; their respective contents are:

# DSC_Node1
root@a0d20641b3cc:/opt/dmdbms/dmdsc/data/DSC01$ cat dmdcr.ini
DMDCR_PATH					= /dev_DSC2/DCR
DMDCR_MAL_PATH				= /opt/dmdbms/dmdsc/data/DSC01/dmasvrmal.ini
DMDCR_SEQNO					= 0
DMDCR_ASM_RESTART_INTERVAL	= 0
DMDCR_ASM_STARTUP_CMD		= /opt/dmdbms/bin/dmasmsvr dcr_ini=/opt/dmdbms/dmdsc/data/DSC01/dmdcr.ini
DMDCR_DB_RESTART_INTERVAL	= 0
DMDCR_DB_STARTUP_CMD		= /opt/dmdbms/bin/dmserver path=/opt/dmdbms/dmdsc/data/DSC01/DSC01_conf/dm.ini dcr_ini=/opt/dmdbms/dmdsc/data/DSC01/dmdcr.ini
DMDCR_LINK_CHECK_IP         = 192.168.2.1

# DSC_Node2
root@6fbff487ae8f:/opt/dmdbms/dmdsc/data/DSC02$ cat dmdcr.ini
DMDCR_PATH					= /dev_DSC2/DCR
DMDCR_MAL_PATH				= /opt/dmdbms/dmdsc/data/DSC02/dmasvrmal.ini
DMDCR_SEQNO					= 1
DMDCR_ASM_RESTART_INTERVAL	= 0
DMDCR_ASM_STARTUP_CMD		= /opt/dmdbms/bin/dmasmsvr dcr_ini=/opt/dmdbms/dmdsc/data/DSC02/dmdcr.ini
DMDCR_DB_RESTART_INTERVAL	= 0
DMDCR_DB_STARTUP_CMD		= /opt/dmdbms/bin/dmserver path=/opt/dmdbms/dmdsc/data/DSC02/DSC02_conf/dm.ini dcr_ini=/opt/dmdbms/dmdsc/data/DSC02/dmdcr.ini
DMDCR_LINK_CHECK_IP			= 192.168.2.1

6 Granting ping Permission to DMSERVER and DMASMSVR

  Because DMDCR_LINK_CHECK_IP is set in dmdcr.ini, the DMSERVER and DMASMSVR binaries on both DSC_Node1 and DSC_Node2 must be granted permission to ping.
  The container image does not ship with the ping tool by default, so run the following in both containers to install it:

apt-get update
apt-get install iputils-ping

  After installation, you may first verify that the target server answers ping; that step is skipped here.
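
  Such a check would look roughly as follows; 192.168.2.1 is the DMDCR_LINK_CHECK_IP configured above:

ping -c 3 192.168.2.1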
  Then, in both the DSC_Node1 and DSC_Node2 containers, grant the binaries capabilities with the following commands, a finer-grained form of privilege control than full "root":

sudo setcap cap_net_raw,cap_net_admin=eip /opt/dmdbms/bin/dmserver
sudo setcap cap_net_raw,cap_net_admin=eip /opt/dmdbms/bin/dmasmsvr

  Explanation of the capability settings:

  • The setcap tool assigns Linux capabilities to binary executables; capabilities are a finer-grained privilege control than root, letting a program acquire only the privileges it needs instead of full superuser rights;
  • cap_net_raw allows a program to work with raw sockets, which is essential for low-level network tools such as ping and traceroute;
  • cap_net_admin allows a program to perform network-administration operations, such as modifying network interfaces or configuring routing tables;
  • eip abbreviates Effective, Inherited, Permitted: Permitted means the program is allowed to hold these capabilities; Effective means they take effect when the program runs; Inherited means child processes inherit them from the parent;
  • In the setcap command, eip qualifies every capability listed; in this example it applies to both cap_net_raw and cap_net_admin.
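
  To confirm that the capabilities were applied, they can be read back with getcap, which ships alongside setcap in libcap:

getcap /opt/dmdbms/bin/dmserver /opt/dmdbms/bin/dmasmsvr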

7 Starting DMCSS and DMASMSVR

  Start DMCSS and DMASMSVR in the DSC_Node1 and DSC_Node2 containers.
  Note ⚠️ that the order is fixed: DMCSS must be started before DMASMSVR.

  1. Start DMCSS on DSC_Node1:
root@a0d20641b3cc:/opt/dmdbms/bin$ ./dmcss dcr_ini=/opt/dmdbms/dmdsc/data/DSC01/dmdcr.ini
  2. Start DMCSS on DSC_Node2:
root@6fbff487ae8f:/opt/dmdbms/bin$ ./dmcss dcr_ini=/opt/dmdbms/dmdsc/data/DSC02/dmdcr.ini
  3. Start DMASMSVR on DSC_Node1:
root@a0d20641b3cc:/opt/dmdbms/bin$ ./dmasmsvr dcr_ini=/opt/dmdbms/dmdsc/data/DSC01/dmdcr.ini
  4. Start DMASMSVR on DSC_Node2:
root@6fbff487ae8f:/opt/dmdbms/bin$ ./dmasmsvr dcr_ini=/opt/dmdbms/dmdsc/data/DSC02/dmdcr.ini
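
  Each command above runs in the foreground and ties up a terminal. One possible variant (our assumption, not part of the original procedure) is to background the pair on each node while preserving the DMCSS-before-DMASMSVR order; a sketch for DSC_Node1:

# DSC_Node2 is analogous, with the DSC02 paths.
cd /opt/dmdbms/bin
nohup ./dmcss dcr_ini=/opt/dmdbms/dmdsc/data/DSC01/dmdcr.ini > /tmp/dmcss0.log 2>&1 &
sleep 5    # give DMCSS a moment to come up before starting ASM
nohup ./dmasmsvr dcr_ini=/opt/dmdbms/dmdsc/data/DSC01/dmdcr.ini > /tmp/dmasmsvr0.log 2>&1 &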

8 Logging into DMASMTOOL to Create the ASM Disk Groups

  The ASM disk groups can be created from the DMASMTOOL utility on any database instance. Here we log in on DSC_Node1 and create one DMDATA disk group and one DMLOG disk group:

root@a0d20641b3cc:/opt/dmdbms/bin$ ./dmasmtool dcr_ini=/opt/dmdbms/dmdsc/data/DSC01/dmdcr.ini
DMASMTOOL V8
ASM>CREATE DISKGROUP DMDATA asmdisk '/dev_DSC2/DMDATA'
Used time: 119.051(ms).
ASM>CREATE DISKGROUP DMLOG asmdisk '/dev_DSC2/DMLOG'
Used time: 111.002(ms).

9 Configuring dminit.ini

  dminit.ini can be created on any database instance. Here it is configured on DSC_Node1 and saved under /opt/dmdbms/dmdsc/data/DSC01; its content is:

DB_NAME			= dsc2
SYSTEM_PATH		= +DMDATA/data
SYSTEM			= +DMDATA/data/dsc2/system.dbf
SYSTEM_SIZE		= 128
ROLL			= +DMDATA/data/dsc2/roll.dbf
ROLL_SIZE		= 128
MAIN			= +DMDATA/data/dsc2/main.dbf
MAIN_SIZE		= 128
CTL_PATH		= +DMDATA/data/dsc2/dm.ctl
LOG_SIZE		= 2048
DCR_PATH		= /dev_DSC2/DCR
DCR_SEQNO		= 0
AUTO_OVERWRITE		= 2
PAGE_SIZE		= 16
EXTENT_SIZE		= 16

[DSC01]
CONFIG_PATH		= /opt/dmdbms/dmdsc/data/DSC01/DSC01_conf
PORT_NUM		= 6636
MAL_HOST		= 192.168.2.2
MAL_PORT		= 6536
LOG_PATH		= +DMLOG/log/DSC01_log1.log
LOG_PATH		= +DMLOG/log/DSC01_log2.log

[DSC02]
CONFIG_PATH		= /opt/dmdbms/dmdsc/data/DSC02/DSC02_conf
PORT_NUM		= 6637
MAL_HOST		= 192.168.2.3
MAL_PORT		= 6537
LOG_PATH		= +DMLOG/log/DSC02_log1.log
LOG_PATH		= +DMLOG/log/DSC02_log2.log

10 Initializing the Database Environment

  Initialize the database environment with the dminit.ini created in the previous section. On either node (here, DSC_Node1), run:

root@a0d20641b3cc:/opt/dmdbms/bin$ ./dminit control=/opt/dmdbms/dmdsc/data/DSC01/dminit.ini
initdb V8
db version: 0x7000c
file dm.key not found, use default license!
License will expire on 2025-06-21
Normal of FAST
Normal of DEFAULT
Normal of RECYCLE
Normal of KEEP
Normal of ROLL
log file path: +DMLOG/log/DSC01_log1.log
log file path: +DMLOG/log/DSC01_log2.log
log file path: +DMLOG/log/DSC02_log1.log
log file path: +DMLOG/log/DSC02_log2.log
write to dir [+DMDATA/data/dsc2].
create dm database success. 2024-09-25 16:40:14

  Driven by dminit.ini, the dminit tool initialized the database environment, generating two sets of configuration files on DSC_Node1 under /opt/dmdbms/dmdsc/data/DSC01/DSC01_conf and /opt/dmdbms/dmdsc/data/DSC02/DSC02_conf, one for each of DSC_Node1 and DSC_Node2. On DSC_Node1, the files generated in the two paths are:

root@a0d20641b3cc:/opt/dmdbms/dmdsc/data/DSC01/DSC01_conf$ ls
dm.ini  dminit20240925164011.log  dmmal.ini  sqllog.ini

root@a0d20641b3cc:/opt/dmdbms/dmdsc/data/DSC02/DSC02_conf$ ls
dm.ini  dmmal.ini  sqllog.ini

  With docker cp, going through the host filesystem, copy the DSC_Node2-related files that dminit generated on DSC_Node1 into the corresponding directory on DSC_Node2:

[root@VM-8-6-centos ~]$ docker cp DSC_Node1:/opt/dmdbms/dmdsc/data/DSC02 /root/DSC02
Successfully copied 85kB to /root/DSC02

[root@VM-8-6-centos ~]$ docker cp /root/DSC02/DSC02_conf DSC_Node2:/opt/dmdbms/dmdsc/data/DSC02
Successfully copied 84.5kB to DSC_Node2:/opt/dmdbms/dmdsc/data/DSC02

  Check on DSC_Node2:

root@6fbff487ae8f:/opt/dmdbms/dmdsc/data/DSC02$ ls
DSC02_conf  dmasvrmal.ini  dmdcr.ini  dmdcr_cfg.ini

root@6fbff487ae8f:/opt/dmdbms/dmdsc/data/DSC02$ cd DSC02_conf
root@6fbff487ae8f:/opt/dmdbms/dmdsc/data/DSC02/DSC02_conf$ ls
dm.ini  dmmal.ini  sqllog.ini

11 Starting the Database Servers

  Start the server on each of the two nodes:

root@a0d20641b3cc:/opt/dmdbms/bin$ ./dmserver dcr_ini=/opt/dmdbms/dmdsc/data/DSC01/dmdcr.ini /opt/dmdbms/dmdsc/data/DSC01/DSC01_conf/dm.ini

root@6fbff487ae8f:/opt/dmdbms/bin$ ./dmserver dcr_ini=/opt/dmdbms/dmdsc/data/DSC02/dmdcr.ini /opt/dmdbms/dmdsc/data/DSC02/DSC02_conf/dm.ini

12 Configuring and Starting DMCSSM

  DMCSSM can be started on any machine: as long as that machine has network connectivity to the actual DMDSC machines, it can monitor the DMDSC cluster.
  Here we set up DMCSSM on node DSC_Node1. Create dmcssm.ini under /opt/dmdbms/dmdsc/data with the following content:

root@a0d20641b3cc:/$ cat /opt/dmdbms/dmdsc/data/dmcssm.ini
CSSM_OGUID              = 237589
CSSM_CSS_IP             = 192.168.2.2:9836
CSSM_CSS_IP             = 192.168.2.3:9837
CSSM_LOG_PATH           = /opt/dmdbms/dmdsc/data/cssm_log
CSSM_LOG_FILE_SIZE      = 32
CSSM_LOG_SPACE_LIMIT    = 0

  Keep the value of CSSM_OGUID in dmcssm.ini identical to the DCR_OGUID configured in dmdcr_cfg.ini.
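
  A quick consistency check on DSC_Node1 (optional; both lines should print 237589):

grep -h OGUID /opt/dmdbms/dmdsc/data/dmcssm.ini /opt/dmdbms/dmdsc/data/DSC01/dmdcr_cfg.ini
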
  Create the DMCSSM log directory to match the corresponding value in dmcssm.ini:

root@a0d20641b3cc:/opt/dmdbms/dmdsc/data$ mkdir cssm_log

  Start DMCSSM:

root@a0d20641b3cc:/opt/dmdbms/bin$ ./dmcssm ini_path=/opt/dmdbms/dmdsc/data/dmcssm.ini
[monitor]         2024-09-25 17:35:51: CSS MONITOR V8
[monitor]         2024-09-25 17:35:51: CSS MONITOR SYSTEM IS READY.
[monitor]         2024-09-25 17:35:51: Wait CSS Control Node choosed...
[monitor]         2024-09-25 17:35:52: Wait CSS Control Node choosed succeed.

  Use the show command to view the cluster status:

show
monitor current time:2024-09-25 17:37:06, n_group:3
=================== group[name = GRP_CSS, seq = 0, type = CSS, Control Node = 0] ========================================
[CSS0] auto check = TRUE, global info:
[ASM0] auto restart = FALSE
[DSC01] auto restart = FALSE
[CSS1] auto check = TRUE, global info:
[ASM1] auto restart = FALSE
[DSC02] auto restart = FALSE
ep: css_time               inst_name   seqno   port   mode           inst_status   vtd_status   is_ok   active   guid         ts
    2024-09-25 17:37:06    CSS0        0       9836   Control Node   OPEN          WORKING      OK      TRUE     1186081327   1186085368
    2024-09-25 17:37:06    CSS1        1       9837   Normal Node    OPEN          WORKING      OK      TRUE     1186113029   1186116994
=================== group[name = GRP_ASM, seq = 1, type = ASM, Control Node = 0] ========================================
n_ok_ep = 2
ok_ep_arr(index, seqno):
(0, 0)
(1, 1)
sta = OPEN, sub_sta = STARTUP
break ep = NULL
recover ep = NULL
crash process over flag is TRUE
ep: css_time               inst_name   seqno   port   mode           inst_status   vtd_status   is_ok   active   guid         ts
    2024-09-25 17:37:06    ASM0        0       5836   Control Node   OPEN          WORKING      OK      TRUE     1186159387   1186163241
    2024-09-25 17:37:06    ASM1        1       5837   Normal Node    OPEN          WORKING      OK      TRUE     1186173736   1186177556
=================== group[name = GRP_DSC, seq = 2, type = DB, Control Node = 0] ========================================
n_ok_ep = 2
ok_ep_arr(index, seqno):
(0, 0)
(1, 1)
sta = OPEN, sub_sta = STARTUP
break ep = NULL
recover ep = NULL
crash process over flag is TRUE
ep: css_time               inst_name   seqno   port   mode           inst_status   vtd_status   is_ok   active   guid       ts
    2024-09-25 17:37:06    DSC01       0       6636   Control Node   OPEN          WORKING      OK      TRUE     94136539   94137304
    2024-09-25 17:37:06    DSC02       1       6637   Normal Node    OPEN          WORKING      OK      TRUE     94142912   94143613
==================================================================================================================

  At this point, the DMASM-based DMDSC cluster setup is complete.

  Community site: https://eco.dameng.com
