
Docker Common Commands and Practical Examples


docker

  • 1. Installation
  • 2. Common Commands
  • 3. Storage
  • 4. Networking
  • 5. Redis Master-Slave Replication Example
  • 6. WordPress Example
  • 7. Dockerfile
  • 8. One-Command Installation of Many Middleware Services (Compose)

1. Installation

Using CentOS as an example:

# Remove old Docker versions
sudo yum remove docker \
    docker-client \
    docker-client-latest \
    docker-common \
    docker-latest \
    docker-latest-logrotate \
    docker-logrotate \
    docker-engine

# Configure the Docker yum repository (Aliyun mirror)
sudo yum install -y yum-utils
sudo yum-config-manager \
    --add-repo \
    http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# Install the latest Docker
sudo yum install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

# Start Docker now and enable it at boot (enable + start in one command)
systemctl enable docker --now

# Configure a registry mirror
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://82m9ar63.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
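After the installation finishes, a quick sanity check confirms that the daemon is running and can pull images (hello-world is just a convenient public test image; any small image works):

# Check the installed version and that the daemon responds
docker --version
docker info
# docker info also lists the configured registry mirrors near the end of its output

# Optional: pull and run a tiny test image
docker run --rm hello-world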

2. Common Commands

# Docker commands
# List running containers
docker ps
# List all containers (including stopped ones)
docker ps -a
# Search for an image
docker search nginx
# Pull an image
docker pull nginx
# Pull a specific image version
docker pull nginx:1.26.0
# List all local images
docker images
# Remove an image by ID
docker rmi e784f4560448
# Run a new container (nginx as an example)
docker run nginx
# Stop a container
docker stop nginx
# Start a container (container ID 592 as an example; find IDs with docker ps)
docker start 592
# Restart a container
docker restart 592
# Show a container's resource usage
docker stats 592
# Show a container's logs
docker logs 592
# Remove a container
docker rm 592
# Force-remove a running container
docker rm -f 592
# Run a container in the background (-d)
docker run -d --name mynginx nginx
# Run in the background and publish a port (-p host:container)
docker run -d --name mynginx -p 80:80 nginx
# Open a shell inside a container (exec)
docker exec -it mynginx /bin/bash
# Commit the container's changes as a new image
docker commit -m "update index.html" mynginx mynginx:v1.0
# Save an image to a file
docker save -o mynginx.tar mynginx:v1.0
# Remove multiple images
docker rmi bde7d154a67f 94543a6c1aef e784f4560448
# Load an image from a file
docker load -i mynginx.tar
# Log in to Docker Hub
docker login
# Re-tag an image
docker tag mynginx:v1.0 leifengyang/mynginx:v1.0
# Push an image
docker push leifengyang/mynginx:v1.0
# List volumes
docker volume ls
# Inspect a volume
docker volume inspect [volume name]
# Create a network; containers on a user-defined network can reach each other by name
docker network create mynet
# List networks
docker network ls

3. Storage

There are two ways to persist data, and both use the -v flag, so take care to tell them apart. If the host side starts with a path, it is a bind mount: the external directory takes precedence and overrides the container directory. If the host side starts with a plain name instead of a path, it is a volume mapping: the container's contents take precedence, and Docker creates the volume under /var/lib/docker/volumes/ on the host and maps it to the specified location inside the container.

• Bind mount: -v /app/nghtml:/usr/share/nginx/html
• Volume mapping: -v ngconf:/etc/nginx

# Example command
docker run -d -p 80:80 \
-v /app/nghtml:/usr/share/nginx/html \
-v ngconf:/etc/nginx \
--name app03 \
nginx

Configuration files are a good fit for volume mapping, where the container's content takes precedence.
Data directories are a good fit for bind mounts, where the host's content takes precedence.
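To see where a named volume such as ngconf (from the example above) actually lives on the host, inspect it; the data typically ends up under /var/lib/docker/volumes/:

# Show the volume's mount point on the host
docker volume inspect ngconf

# The container's /etc/nginx contents appear under the reported Mountpoint,
# typically /var/lib/docker/volumes/ngconf/_data
ls /var/lib/docker/volumes/ngconf/_data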

4. Networking

Docker only ships with the default bridge network (docker0), and containers on the default network cannot reach each other by name. In practice you create a user-defined network and attach the related containers to it, so they can address each other by container name. Note that container-to-container traffic uses the container's internal port, not the port published on the host.

# Create a network; containers on a user-defined network can reach each other by name
docker network create mynet
# List networks
docker network ls
# Attach the container to the mynet network
docker run -d -p 80:80 \
-v /app/nghtml:/usr/share/nginx/html \
-v ngconf:/etc/nginx \
--name mynginx \
--network mynet \
nginx
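As a quick check that name-based access works on mynet (a minimal sketch, assuming the busybox image can be pulled; any image with wget or curl would do), fetch the nginx welcome page from a throwaway container using the container name and the internal port 80:

# Start a throwaway container on the same network and request nginx by name
docker run --rm --network mynet busybox wget -qO- http://mynginx:80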

5. Redis Master-Slave Replication Example

# Create a user-defined network
docker network create mynet

# Master node
docker run -d -p 6379:6379 \
-v /app/rd1:/bitnami/redis/data \
-e REDIS_REPLICATION_MODE=master \
-e REDIS_PASSWORD=123456 \
--network mynet --name redis01 \
bitnami/redis

# Replica node
docker run -d -p 6380:6379 \
-v /app/rd2:/bitnami/redis/data \
-e REDIS_REPLICATION_MODE=slave \
-e REDIS_MASTER_HOST=redis01 \
-e REDIS_MASTER_PORT_NUMBER=6379 \
-e REDIS_MASTER_PASSWORD=123456 \
-e REDIS_PASSWORD=123456 \
--network mynet --name redis02 \
bitnami/redis
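One way to verify that replication is established is to query each node's replication info. This sketch assumes redis-cli is available on the PATH inside the bitnami/redis containers, which it normally is:

# On the master: role should be "master" with one connected slave
docker exec -it redis01 redis-cli -a 123456 info replication

# On the replica: role should be "slave" and master_link_status should be "up"
docker exec -it redis02 redis-cli -a 123456 info replication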

6. WordPress Example

# MySQL
docker run -d -p 3306:3306 \
-e MYSQL_ROOT_PASSWORD=123456 \
-e MYSQL_DATABASE=wordpress \
-v /app/mysql-data:/var/lib/mysql \
-v /app/myconf:/etc/mysql/conf.d \
--restart always --name mysql01 \
--network mynet \
mysql:8.0

# WordPress (WORDPRESS_DB_NAME must match the database created above)
docker run -d -p 8080:80 \
-e WORDPRESS_DB_HOST=mysql01 \
-e WORDPRESS_DB_USER=root \
-e WORDPRESS_DB_PASSWORD=123456 \
-e WORDPRESS_DB_NAME=wordpress \
-v wordpress:/var/www/html \
--restart always --name wordpress \
--network mynet \
wordpress
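If both containers come up cleanly, the WordPress logs should show it reaching the database and the web server starting; the setup wizard is then reachable on the published port (8080 in the example above, on whatever host you are running):

# Watch WordPress start and connect to the database
docker logs -f wordpress

# The install wizard should answer on the published port
curl -I http://localhost:8080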

7. Dockerfile

Use a Dockerfile to package your own jar into an image; app.jar is used as the example.

vim Dockerfile
# Enter the following content
FROM openjdk:17
LABEL author=guailiwugui
COPY app.jar /app.jar
EXPOSE 8080
ENTRYPOINT ["java","-jar","/app.jar"]
# Press Esc, then :wq to save and quit

# Build the image (the trailing "." sets the build context to the current directory)
docker build -f Dockerfile -t myapp:v1.0 .
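With the image built, running it is an ordinary docker run. A minimal sketch, assuming the application really listens on 8080 as the EXPOSE line suggests:

# Run the packaged application and publish its port
docker run -d --name myapp -p 8080:8080 myapp:v1.0
# Tail the application logs
docker logs -f myapp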

Common Dockerfile instructions include FROM, RUN, COPY, ADD, ENV, ARG, EXPOSE, VOLUME, WORKDIR, USER, CMD, and ENTRYPOINT.

8. One-Command Installation of Many Middleware Services (Compose)

Environment preparation (prerequisite commands):

# Disable memory paging and swapping (performance)
sudo swapoff -a

# Edit the sysctl config file
sudo vi /etc/sysctl.conf
# Add a line to define the desired value
# or change the value if the key exists,
# and then save your changes.
vm.max_map_count=262144

# Reload the kernel parameters using sysctl
sudo sysctl -p

# Verify that the change was applied by checking the value
cat /proc/sys/vm/max_map_count

Notes:
● In the file below, change kafka's 119.45.147.122 to your own server's IP.
● Every container mounts /etc/localtime, so container time stays in sync with the Linux host.
Prepare a compose.yaml file with the following content:

name: devsoft
services:
  redis:
    image: bitnami/redis:latest
    restart: always
    container_name: redis
    environment:
      - REDIS_PASSWORD=123456
    ports:
      - '6379:6379'
    volumes:
      - redis-data:/bitnami/redis/data
      - redis-conf:/opt/bitnami/redis/mounted-etc
      - /etc/localtime:/etc/localtime:ro
  mysql:
    image: mysql:8.0.31
    restart: always
    container_name: mysql
    environment:
      - MYSQL_ROOT_PASSWORD=123456
    ports:
      - '3306:3306'
      - '33060:33060'
    volumes:
      - mysql-conf:/etc/mysql/conf.d
      - mysql-data:/var/lib/mysql
      - /etc/localtime:/etc/localtime:ro
  rabbit:
    image: rabbitmq:3-management
    restart: always
    container_name: rabbitmq
    ports:
      - "5672:5672"
      - "15672:15672"
    environment:
      - RABBITMQ_DEFAULT_USER=rabbit
      - RABBITMQ_DEFAULT_PASS=rabbit
      - RABBITMQ_DEFAULT_VHOST=dev
    volumes:
      - rabbit-data:/var/lib/rabbitmq
      - rabbit-app:/etc/rabbitmq
      - /etc/localtime:/etc/localtime:ro
  opensearch-node1:
    image: opensearchproject/opensearch:2.13.0
    container_name: opensearch-node1
    environment:
      - cluster.name=opensearch-cluster # Name the cluster
      - node.name=opensearch-node1 # Name the node that will run in this container
      - discovery.seed_hosts=opensearch-node1,opensearch-node2 # Nodes to look for when discovering the cluster
      - cluster.initial_cluster_manager_nodes=opensearch-node1,opensearch-node2 # Nodes eligible to serve as cluster manager
      - bootstrap.memory_lock=true # Disable JVM heap memory swapping
      - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m" # Set min and max JVM heap sizes to at least 50% of system RAM
      - "DISABLE_INSTALL_DEMO_CONFIG=true" # Prevents execution of bundled demo script which installs demo certificates and security configurations to OpenSearch
      - "DISABLE_SECURITY_PLUGIN=true" # Disables Security plugin
    ulimits:
      memlock:
        soft: -1 # Set memlock to unlimited (no soft or hard limit)
        hard: -1
      nofile:
        soft: 65536 # Maximum number of open files for the opensearch user - set to at least 65536
        hard: 65536
    volumes:
      - opensearch-data1:/usr/share/opensearch/data # Creates volume called opensearch-data1 and mounts it to the container
      - /etc/localtime:/etc/localtime:ro
    ports:
      - 9200:9200 # REST API
      - 9600:9600 # Performance Analyzer
  opensearch-node2:
    image: opensearchproject/opensearch:2.13.0
    container_name: opensearch-node2
    environment:
      - cluster.name=opensearch-cluster # Name the cluster
      - node.name=opensearch-node2 # Name the node that will run in this container
      - discovery.seed_hosts=opensearch-node1,opensearch-node2 # Nodes to look for when discovering the cluster
      - cluster.initial_cluster_manager_nodes=opensearch-node1,opensearch-node2 # Nodes eligible to serve as cluster manager
      - bootstrap.memory_lock=true # Disable JVM heap memory swapping
      - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m" # Set min and max JVM heap sizes to at least 50% of system RAM
      - "DISABLE_INSTALL_DEMO_CONFIG=true" # Prevents execution of bundled demo script which installs demo certificates and security configurations to OpenSearch
      - "DISABLE_SECURITY_PLUGIN=true" # Disables Security plugin
    ulimits:
      memlock:
        soft: -1 # Set memlock to unlimited (no soft or hard limit)
        hard: -1
      nofile:
        soft: 65536 # Maximum number of open files for the opensearch user - set to at least 65536
        hard: 65536
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - opensearch-data2:/usr/share/opensearch/data # Creates volume called opensearch-data2 and mounts it to the container
  opensearch-dashboards:
    image: opensearchproject/opensearch-dashboards:2.13.0
    container_name: opensearch-dashboards
    ports:
      - 5601:5601 # Map host port 5601 to container port 5601
    expose:
      - "5601" # Expose port 5601 for web access to OpenSearch Dashboards
    environment:
      - 'OPENSEARCH_HOSTS=["http://opensearch-node1:9200","http://opensearch-node2:9200"]'
      - "DISABLE_SECURITY_DASHBOARDS_PLUGIN=true" # disables security dashboards plugin in OpenSearch Dashboards
    volumes:
      - /etc/localtime:/etc/localtime:ro
  zookeeper:
    image: bitnami/zookeeper:3.9
    container_name: zookeeper
    restart: always
    ports:
      - "2181:2181"
    volumes:
      - "zookeeper_data:/bitnami"
      - /etc/localtime:/etc/localtime:ro
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
  kafka:
    image: 'bitnami/kafka:3.4'
    container_name: kafka
    restart: always
    hostname: kafka
    ports:
      - '9092:9092'
      - '9094:9094'
    environment:
      - KAFKA_CFG_NODE_ID=0
      - KAFKA_CFG_PROCESS_ROLES=controller,broker
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093,EXTERNAL://0.0.0.0:9094
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092,EXTERNAL://119.45.147.122:9094
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,EXTERNAL:PLAINTEXT,PLAINTEXT:PLAINTEXT
      - KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=0@kafka:9093
      - KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
      - ALLOW_PLAINTEXT_LISTENER=yes
      - "KAFKA_HEAP_OPTS=-Xmx512m -Xms512m"
    volumes:
      - kafka-conf:/bitnami/kafka/config
      - kafka-data:/bitnami/kafka/data
      - /etc/localtime:/etc/localtime:ro
  kafka-ui:
    container_name: kafka-ui
    image: provectuslabs/kafka-ui:latest
    restart: always
    ports:
      - 8080:8080
    environment:
      DYNAMIC_CONFIG_ENABLED: true
      KAFKA_CLUSTERS_0_NAME: kafka-dev
      KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: kafka:9092
    volumes:
      - kafkaui-app:/etc/kafkaui
      - /etc/localtime:/etc/localtime:ro
  nacos:
    image: nacos/nacos-server:v2.3.1
    container_name: nacos
    ports:
      - 8848:8848
      - 9848:9848
    environment:
      - PREFER_HOST_MODE=hostname
      - MODE=standalone
      - JVM_XMX=512m
      - JVM_XMS=512m
      - SPRING_DATASOURCE_PLATFORM=mysql
      - MYSQL_SERVICE_HOST=nacos-mysql
      - MYSQL_SERVICE_DB_NAME=nacos_devtest
      - MYSQL_SERVICE_PORT=3306
      - MYSQL_SERVICE_USER=nacos
      - MYSQL_SERVICE_PASSWORD=nacos
      - MYSQL_SERVICE_DB_PARAM=characterEncoding=utf8&connectTimeout=1000&socketTimeout=3000&autoReconnect=true&useUnicode=true&useSSL=false&serverTimezone=Asia/Shanghai&allowPublicKeyRetrieval=true
      - NACOS_AUTH_IDENTITY_KEY=2222
      - NACOS_AUTH_IDENTITY_VALUE=2xxx
      - NACOS_AUTH_TOKEN=SecretKey012345678901234567890123456789012345678901234567890123456789
      - NACOS_AUTH_ENABLE=true
    volumes:
      - /app/nacos/standalone-logs/:/home/nacos/logs
      - /etc/localtime:/etc/localtime:ro
    depends_on:
      nacos-mysql:
        condition: service_healthy
  nacos-mysql:
    container_name: nacos-mysql
    build:
      context: .
      dockerfile_inline: |
        FROM mysql:8.0.31
        ADD https://raw.githubusercontent.com/alibaba/nacos/2.3.2/distribution/conf/mysql-schema.sql /docker-entrypoint-initdb.d/nacos-mysql.sql
        RUN chown -R mysql:mysql /docker-entrypoint-initdb.d/nacos-mysql.sql
        EXPOSE 3306
        CMD ["mysqld", "--character-set-server=utf8mb4", "--collation-server=utf8mb4_unicode_ci"]
    image: nacos/mysql:8.0.30
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=nacos_devtest
      - MYSQL_USER=nacos
      - MYSQL_PASSWORD=nacos
      - LANG=C.UTF-8
    volumes:
      - nacos-mysqldata:/var/lib/mysql
      - /etc/localtime:/etc/localtime:ro
    ports:
      - "13306:3306"
    healthcheck:
      test: [ "CMD", "mysqladmin", "ping", "-h", "localhost" ]
      interval: 5s
      timeout: 10s
      retries: 10
  prometheus:
    image: prom/prometheus:v2.52.0
    container_name: prometheus
    restart: always
    ports:
      - 9090:9090
    volumes:
      - prometheus-data:/prometheus
      - prometheus-conf:/etc/prometheus
      - /etc/localtime:/etc/localtime:ro
  grafana:
    image: grafana/grafana:10.4.2
    container_name: grafana
    restart: always
    ports:
      - 3000:3000
    volumes:
      - grafana-data:/var/lib/grafana
      - /etc/localtime:/etc/localtime:ro
volumes:
  redis-data:
  redis-conf:
  mysql-conf:
  mysql-data:
  rabbit-data:
  rabbit-app:
  opensearch-data1:
  opensearch-data2:
  nacos-mysqldata:
  zookeeper_data:
  kafka-conf:
  kafka-data:
  kafkaui-app:
  prometheus-data:
  prometheus-conf:
  grafana-data:

Startup commands:

# Run this in the directory that contains compose.yaml
docker compose up -d
# Wait for all the containers to start
# Tip: after a server reboot some containers may fail to start; just run docker compose up -d again.
# Everything will come back up and no data is lost.
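A few standard Docker Compose commands that are handy once the stack is up:

# Show the status of every service in the stack
docker compose ps
# Follow the logs of a single service, e.g. nacos
docker compose logs -f nacos
# Stop and remove the containers (named volumes and their data are kept)
docker compose down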

Port reference for each component (as published in the compose file above):

• Redis: 6379
• MySQL: 3306, 33060
• RabbitMQ: 5672 (AMQP), 15672 (management UI)
• OpenSearch: 9200 (REST API), 9600 (Performance Analyzer)
• OpenSearch Dashboards: 5601
• ZooKeeper: 2181
• Kafka: 9092 (internal), 9094 (external)
• Kafka UI: 8080
• Nacos: 8848, 9848
• Nacos MySQL: 13306 (host) to 3306 (container)
• Prometheus: 9090
• Grafana: 3000
