4 Docker Network Management

4.1 Docker's Default Network Communication

4.1.1 Docker's Default Network Settings

By default, Docker uses bridge network mode. When the Docker service starts, it creates a virtual bridge interface named docker0 on the host, with the fixed IP address 172.17.0.1, bridging to the host's physical NIC. Each time a container starts, a veth virtual interface is also created on the host and paired with the interface inside that container; the veth interface is attached to the docker0 bridge. The container's interface is automatically assigned an address from the 172.17.0.0/16 range by Docker's built-in address management (IPAM)


Workflow for creating a container network

  • When a new container starts, the host creates a veth@if<ID> virtual interface paired with the container's internal eth0@if<ID> interface
  • The container's eth0@if<ID> automatically obtains a random address from the 172.17.0.0/16 range, starting at 172.17.0.2 by default; the second container gets 172.17.0.3, and so on
  • The container's eth0@if<ID> name points at the interface index of the host-side veth interface, and likewise the host's veth@if<ID> points at the container's interface index; in the output below the host-side veth index is odd and is one greater than the index of the corresponding container interface (e.g. veth index 9 pairs with container eth0 index 8)
  • Container addresses are not fixed: a container may receive a different address after a restart
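The index-pairing rule above can be checked mechanically: the number after `@if` in an interface name is the index of its peer. A minimal sketch, run here against hard-coded sample lines (assumptions matching the format shown below, not live `ip a` output):

```shell
# veth_pairs: read "ip a" style heading lines on stdin and print,
# for each interface, its own index and its peer's index.
veth_pairs() {
    while IFS= read -r line; do
        idx=${line%%:*}                      # index before the first colon
        rest=${line#*: }                     # drop the "9: " prefix
        name=${rest%%@*}                     # interface name before "@if"
        peer=${rest#*@if}; peer=${peer%%:*}  # index after "@if"
        echo "$name host_index=$idx peer_index=$peer"
    done
}

# Sample lines in the same format as the transcript below
printf '%s\n' \
  '9: vethb5effc1@if8: <BROADCAST,MULTICAST,UP,LOWER_UP>' \
  '11: vethdc6bf6f@if10: <BROADCAST,MULTICAST,UP,LOWER_UP>' | veth_pairs
```

On a live host the same pairing can be confirmed by running `ip a` on the host and inside the container: the host's vethX@ifN points at the container interface whose index is N, and vice versa.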

Example: Docker's default network configuration

[root@ubuntu1804 ~ ]# ip a
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:5b:15:8b brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.100/24 brd 10.0.0.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe5b:158b/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:fa:6a:d6:38 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:faff:fe6a:d638/64 scope link
       valid_lft forever preferred_lft forever

#Check the host's bridge status at this point
[root@ubuntu1804 ~ ]# apt install -y bridge-utils
[root@ubuntu1804 ~ ]# brctl show
bridge name	bridge id		STP enabled	interfaces
docker0		8000.0242fa6ad638	no

Network state after running two containers

#Run the nginx01 container
[root@ubuntu1804 ~ ]# docker run -d -P --name nginx01 nginx:1.21.1-alpine

#Run the nginx02 container
[root@ubuntu1804 ~ ]# docker run -d -P --name nginx02 nginx:1.21.1-alpine

#Check interface changes on the host
[root@ubuntu1804 ~ ]# ip a
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:5b:15:8b brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.100/24 brd 10.0.0.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe5b:158b/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:fa:6a:d6:38 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:faff:fe6a:d638/64 scope link
       valid_lft forever preferred_lft forever
9: vethb5effc1@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether ee:f1:77:18:5c:66 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::ecf1:77ff:fe18:5c66/64 scope link
       valid_lft forever preferred_lft forever
11: vethdc6bf6f@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether 62:67:9e:51:1c:f0 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::6067:9eff:fe51:1cf0/64 scope link
       valid_lft forever preferred_lft forever

#Bridge status on the host
[root@ubuntu1804 ~ ]# brctl show
bridge name	bridge id		STP enabled	interfaces
docker0		8000.0242fa6ad638	no		vethb5effc1		#NIC for the nginx01 container
										vethdc6bf6f		#NIC for the nginx02 container

#Check the nginx01 container's NIC
[root@ubuntu1804 ~ ]# docker exec -it nginx01 sh
/ # ip a
8: eth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever

#Check the nginx02 container's NIC
[root@ubuntu1804 ~ ]# docker exec -it nginx02 sh
/ # ip a
10: eth0@if11: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever

4.1.2 Communication Between Containers

4.1.2.1 Containers on the Same Host Can Communicate with Each Other

By default, the Docker service allows containers on the same host to communicate with one another; this behavior is controlled by a dockerd startup parameter

dockerd --icc		#dockerd default; inter-container communication enabled
dockerd --icc=false	#disable inter-container communication; containers can still reach the host
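Under the hood, --icc=false is enforced with an iptables rule on the FORWARD chain that drops traffic going from docker0 back into docker0. As a sketch from memory (verify with `iptables -vnL FORWARD` on your own host), the rule is equivalent to:

```shell
# Traffic between two containers on the same bridge enters and leaves
# via docker0, so this one rule blocks container-to-container traffic.
# Host<->container traffic goes through INPUT/OUTPUT instead of FORWARD,
# so it still works.
iptables -A FORWARD -i docker0 -o docker0 -j DROP
```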

Example: access between containers on the same host

#Default docker service configuration
[root@ubuntu1804 ~ ]# cat /lib/systemd/system/docker.service
...........
[Service]
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

#nginx01 container accessing nginx02
[root@ubuntu1804 ~ ]# docker exec -it nginx01 sh
/ # ping -c2 172.17.0.3
PING 172.17.0.3 (172.17.0.3): 56 data bytes
64 bytes from 172.17.0.3: seq=0 ttl=64 time=0.108 ms
64 bytes from 172.17.0.3: seq=1 ttl=64 time=0.097 ms

#nginx02 container accessing nginx01
[root@ubuntu1804 ~ ]# docker exec -it nginx02 sh
/ # ping -c2 172.17.0.2
PING 172.17.0.2 (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.208 ms
64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.100 ms

4.1.2.2 Disabling Communication Between Containers on the Same Host

[root@ubuntu1804 ~ ]# vim /lib/systemd/system/docker.service
[Service]
#append --icc=false to the end of the line below
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --icc=false

#Restart the docker service
[root@ubuntu1804 ~ ]# systemctl daemon-reload
[root@ubuntu1804 ~ ]# systemctl restart docker

#Start the containers
docker start nginx01
docker start nginx02

#Test access between the containers; communication now fails
[root@ubuntu1804 ~ ]# docker exec -it nginx01 sh
/ # hostname -i
172.17.0.2
/ # ping -c2 172.17.0.3
PING 172.17.0.3 (172.17.0.3): 56 data bytes


[root@ubuntu1804 ~ ]# docker exec -it nginx02 sh
/ # hostname -i
172.17.0.3
/ # ping -c2 172.17.0.2
PING 172.17.0.2 (172.17.0.2): 56 data bytes

4.2 Container Name Linking

4.2.1 Introduction to Container Linking

When a new container is created, docker automatically assigns it a name, a container ID, and an IP address, none of which are fixed. How, then, do we distinguish containers and reliably reach a target container? The solution is to give a container a fixed name and let containers communicate with their targets through that fixed name

There are two kinds of fixed names:

  • the container name
  • an alias for the container name

Note: both methods require at least two containers

4.2.2 Linking by Container Name

When creating a container with docker run, the --link option lets the new container reference another container by name

docker run --name <container name> 				 #first create a container with the given name
docker run --link <target container ID or name>  #then create a container that references the one above

#Note:
The container referenced with --link must already exist
In other words: when creating two containers nginx1 and nginx2, if nginx1 is created first, nginx2 does not yet exist at that point, so nginx1 cannot use docker run --link to reference nginx2

Hands-on example: inter-container communication using container names

[root@ubuntu1804 ~ ]# docker ps -a
CONTAINER ID     IMAGE       COMMAND     CREATED        STATUS      PORTS               NAMES

#With no container named nginx1 yet, creating nginx2 with --link nginx1 fails as follows
[root@ubuntu1804 ~ ]# docker run -d --name nginx2 --link nginx1 nginx:1.21.1-alpine
docker: Error response from daemon: could not get container for nginx1: No such container: nginx1.

#Create the nginx1 container
[root@ubuntu1804 ~ ]# docker run -d --name nginx1 nginx:1.21.1-alpine
a07bcc52e4f062c571c9cafe28a74ca1b75a59d25fdeba1c496e5b9cc29fbd50
[root@ubuntu1804 ~ ]# docker exec -it nginx1 sh
/ # cat /etc/hosts
192.168.200.2	a07bcc52e4f0

#Then create the nginx2 container
[root@ubuntu1804 ~ ]# docker run -d --name nginx2 --link nginx1 nginx:1.21.1-alpine
[root@ubuntu1804 ~ ]# docker exec -it nginx2 sh
/ # cat /etc/hosts
172.17.0.2	nginx1 a07bcc52e4f0			#docker automatically writes nginx1's name and IP into nginx2's hosts file
172.17.0.3	9d60c0ce4aaa

/ # ping -c2 nginx1
PING nginx1 (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.232 ms
64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.095 ms

4.2.3 Linking by Container Alias

A custom container name may change later, and if it does, programs inside containers must change along with it. For example, a program that calls a service by a fixed container name can no longer reach it under the old name once the name changes, and updating every caller each time is tedious. Aliases solve this: the container name can change freely, as long as the alias stays the same

Command format:

docker run --name <container name> 		#first create a container with the given name
docker run -d --name <container name> --link <target container name>:"alias1 alias2 alias3..." #then create a new container, assigning aliases to the container created above

Example:

#Create the nginx1 container
docker run -d --name nginx1 nginx:1.21.1-alpine

#Create nginx2, referencing nginx1 by container name
docker run -d --name nginx2 --link nginx1 nginx:1.21.1-alpine

#Create nginx3 using container aliases
[root@ubuntu1804 ~ ]# docker run -d --name nginx3 --link nginx1:nginx1-alias --link nginx2:nginx2-alias nginx:1.21.1-alpine

[root@ubuntu1804 ~ ]# docker exec -it nginx3 sh
/ # cat /etc/hosts
172.17.0.2	nginx1-alias 6f536ca63ac9 nginx1
172.17.0.3	nginx2-alias cba57980c90a nginx2
172.17.0.4	bc361e9231fd
/ # ping -c2 nginx1-alias
PING nginx1-alias (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.193 ms
64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.088 ms

/ # ping -c2 nginx1
PING nginx1 (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.078 ms
64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.091 ms

/ # ping -c2 nginx2-alias
PING nginx2-alias (172.17.0.3): 56 data bytes
64 bytes from 172.17.0.3: seq=0 ttl=64 time=0.146 ms
64 bytes from 172.17.0.3: seq=1 ttl=64 time=0.181 ms

/ # ping -c2 nginx2
PING nginx2 (172.17.0.3): 56 data bytes
64 bytes from 172.17.0.3: seq=0 ttl=64 time=0.105 ms
64 bytes from 172.17.0.3: seq=1 ttl=64 time=0.090 ms

4.2.4 Linking wordpress and MySQL Containers

[root@ubuntu1804 ~ ]# mkdir /data/lamp
[root@ubuntu1804 ~ ]# vim /data/lamp/env_mysql.list
MYSQL_ROOT_PASSWORD=123456
MYSQL_DATABASE=wordpress
MYSQL_USER=wordpress
MYSQL_PASSWORD=123456

[root@ubuntu1804 ~ ]# vim /data/lamp/env_wordpress.list
WORDPRESS_DB_HOST=mysql:3306
WORDPRESS_DB_NAME=wordpress
WORDPRESS_DB_USER=wordpress
WORDPRESS_DB_PASSWORD=123456
WORDPRESS_TABLE_PREFIX=wp_

#Run the database container
docker run -d --name mysql -v /data/mysql/data/:/var/lib/mysql --env-file /data/lamp/env_mysql.list -p 3306:3306 mysql:5.7.30

#Run the WordPress container
docker pull wordpress:5.8.0-php8.0-apache
docker run -d --name wordpress --link mysql:mysql-server --env-file /data/lamp/env_wordpress.list -p 80:80 wordpress:5.8.0-php8.0-apache

4.3 Docker Network Modes

4.3.1 Docker Network Modes

Newly created containers use bridge mode by default; to choose a mode at creation time, use one of the following docker run options

docker run --network <net_mode>
docker run --net=<net_mode>

Network modes supported by docker out of the box

  • bridge
  • host
  • none
  • container

List docker's built-in network modes

[root@ubuntu1804 ~ ]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
d28b2aaa5e8b        bridge              bridge              local
e6528b92fb0b        host                host                local
62327ec0d6e2        none                null                local

4.3.2 bridge

4.3.2.1 Bridge Mode


This is docker's default mode: if no mode is specified, bridge is used, and it is also the most commonly used mode. Each container created in this mode gets its own network configuration (IP address, etc.) and is attached to a virtual bridge through which it communicates with the outside world

Note: bridge mode requires ip_forward to be enabled on the host; the docker service enables ip_forward by default when it starts

Characteristics of bridge mode

  • Network isolation: containers on different hosts cannot communicate directly; each host uses its own independent network
  • No manual configuration: containers automatically get an address from 172.17.0.0/16 by default; this range can be changed
  • Outbound access: containers reach external networks through the host's physical NIC via SNAT
  • No direct inbound access: external hosts cannot reach containers directly, but access can be granted by configuring DNAT
  • Lower performance: NAT address translation adds overhead
  • Tedious port management: each container must publish a unique host port manually, or port conflicts occur
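As a side note on capacity, the default 172.17.0.0/16 pool is far larger than a single host ever needs, so address exhaustion is not a practical concern. A quick sketch of the arithmetic:

```shell
# /16 leaves 16 host bits; exclude the network address, the broadcast
# address, and 172.17.0.1 which docker0 itself occupies as the gateway.
prefix=16
total=$((1 << (32 - prefix)))   # 65536 addresses in the block
usable=$((total - 3))
echo "$usable addresses available for containers"
```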

View bridge mode details

docker network inspect bridge

4.3.2.2 Changing Docker's Default Bridge Interface

When the docker service starts, it creates the docker0 bridge interface on the host by default to connect containers and the host; docker also supports a custom bridge interface

dockerd [-b, --bridge] [桥接网卡]	

#1. The specified bridge interface must already exist, or the docker service will not start
#2. The bridge interface address defaults to 172.18.0.1/16; the bridge address can also be set manually

Note: containers created before changing the default bridge will have their networking changed accordingly after the switch

Example:

#Check the default interfaces on the docker host
[root@ubuntu1804 ~ ]# ip a
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:5b:15:8b brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.100/24 brd 10.0.0.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe5b:158b/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:fa:6a:d6:38 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:faff:fe6a:d638/64 scope link
       valid_lft forever preferred_lft forever

#Edit the docker service unit file
[root@ubuntu1804 ~ ]# vim /lib/systemd/system/docker.service
[Service]
Type=notify
#append -b br0 to the end of the line below
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -b br0

#Since the br0 bridge does not exist yet, the docker service fails to start
[root@ubuntu1804 ~ ]# systemctl daemon-reload
[root@ubuntu1804 ~ ]# systemctl restart docker
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xe" for details.

#Create the br0 bridge interface
[root@ubuntu1804 ~ ]# brctl addbr br0
[root@ubuntu1804 ~ ]# ifconfig br0 up
[root@ubuntu1804 ~ ]# ip a a 192.168.0.1/24 dev br0
[root@ubuntu1804 ~ ]# ip a
33: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 46:fd:92:56:06:29 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.1/24 scope global br0
       valid_lft forever preferred_lft forever
    inet6 fe80::44fd:92ff:fe56:629/64 scope link
       valid_lft forever preferred_lft forever

#Restart the docker service
[root@ubuntu1804 ~ ]# systemctl daemon-reload
[root@ubuntu1804 ~ ]# systemctl restart docker

#Run a container
[root@ubuntu1804 ~ ]# docker run -d --name nginx nginx:1.21.1-alpine
[root@ubuntu1804 ~ ]# docker exec -it nginx sh
/ # ip a
34: eth0@if35: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:c0:a8:00:02 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.2/24 brd 192.168.0.255 scope global eth0
       valid_lft forever preferred_lft forever

4.3.2.3 Setting the docker0 Bridge Address

The docker0 bridge created by the docker service defaults to 172.17.0.0/16; a different address range can be specified when starting the service

Note: once the docker0 address has been changed, it does not revert to the default 172.17.0.1/16

Method 1: pass a parameter when starting dockerd

dockerd --bip <network>

#Note:
dockerd --bip only changes the docker0 address; the --bip and -b options cannot be used at the same time, or the docker service will fail to start

#Modify the dockerd startup parameters
[root@ubuntu1804 ~ ]# vim /lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --bip 192.168.200.1/24

[root@ubuntu1804 ~ ]# systemctl daemon-reload
[root@ubuntu1804 ~ ]# systemctl restart docker

#Check the docker0 interface address
[root@ubuntu1804 ~ ]# ip a
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:fa:6a:d6:38 brd ff:ff:ff:ff:ff:ff
    inet 192.168.200.1/24 brd 192.168.200.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:faff:fe6a:d638/64 scope link
       valid_lft forever preferred_lft forever
     
#Start a container and check its address
[root@ubuntu1804 ~ ]# docker start nginx
nginx
[root@ubuntu1804 ~ ]# docker inspect nginx -f "{{.NetworkSettings.IPAddress}}"
192.168.200.2

Method 2: modify the docker configuration file

[root@ubuntu1804 ~ ]# vim /etc/docker/daemon.json
{
	"registry-mirrors": ["https://n0g7070z.mirror.aliyuncs.com"],
	"dns" : [  "114.114.114.114", "119.29.29.29"],
	"dns-search": [ "test.com", "test.org"],
	#IP address of the docker0 interface; the /24 is the netmask of the container IPs
	"bip": "192.168.100.100/24",
	#Range container IPs are allocated from; the /26 is not the containers' netmask, it only bounds the range: the first 26 bits are the network ID and the last 6 bits the host ID, i.e. 192.168.100.129 ~ 192.168.100.190
	"fixed-cidr": "192.168.100.128/26",
	#The gateway must be in the same network as bip
	"default-gateway": "192.168.100.200",
	#Maximum transmission unit for the interface
	"mtu": 1500
}
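The 192.168.100.129 ~ 192.168.100.190 range quoted in the fixed-cidr comment can be double-checked with plain shell arithmetic (a standalone sketch, independent of Docker):

```shell
# fixed-cidr 192.168.100.128/26: 6 host bits => 64 addresses in the block;
# the first (network) and last (broadcast) addresses are not usable hosts.
base=128; prefix=26
size=$((1 << (32 - prefix)))   # 64 addresses
first=$((base + 1))            # .129, first usable host
last=$((base + size - 2))      # .190, last usable host
echo "usable: 192.168.100.$first - 192.168.100.$last ($((size - 2)) hosts)"
```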

#Reset the dockerd startup parameters to avoid conflicting settings
[root@ubuntu1804 ~ ]# vim /lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

[root@ubuntu1804 ~ ]# systemctl restart docker
[root@ubuntu1804 ~ ]# ip a
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:6f:3c:3d:8c brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.100/24 brd 192.168.100.255 scope global docker0
       valid_lft forever preferred_lft forever

[root@ubuntu1804 ~ ]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.0.1        0.0.0.0         UG    0      0        0 ens33
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 ens33
192.168.100.0   0.0.0.0         255.255.255.0   U     0      0        0 docker0

[root@ubuntu1804 ~ ]# iptables -t nat -vnL
Chain POSTROUTING (policy ACCEPT 1 packets, 76 bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 MASQUERADE  all  --  *      !docker0  192.168.100.0/24     0.0.0.0/0

4.3.3 host


If a container is started in host mode, it does not get its own Network namespace; it shares one with the host. The container does not get a virtual NIC of its own but uses the host's IP and ports directly. Because no NAT translation takes place, there is no port mapping between container and host, which improves network performance at the cost of weaker network isolation. Containers no longer have isolated, independent networks, and ports already in use on the Docker host cannot be reused; a container is reached simply via the host IP plus the container's port. Resources other than the network, such as the filesystem and processes, remain isolated from the host

Because this mode uses the host network directly with no translation, it has the best network performance, but containers cannot share the same port; it suits workloads whose ports are fixed and well known

Characteristics of host mode:

  • Selected with --network host
  • Shares the host's network
  • Shares the host's ports
  • No network performance overhead
  • Relatively simple network troubleshooting
  • No network isolation between containers
  • Per-container network usage cannot be measured separately
  • Difficult port management: port conflicts are easy to cause
  • Port mapping is not supported
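Because of the port-conflict behavior listed above, it is worth checking whether a port is already bound before starting a host-mode container. A sketch that scans `netstat -lnt` style output, fed here from a hard-coded sample rather than a live system:

```shell
# port_in_use PORT: succeed if PORT appears as a listening local port
# in "netstat -lnt" style output read from stdin.
port_in_use() {
    awk -v p=":$1" '$4 ~ p"$" && $6 == "LISTEN" {found=1} END {exit !found}'
}

# Sample listener table (hypothetical; on a real host pipe in `netstat -lnt`)
sample='tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN'

if echo "$sample" | port_in_use 80;   then echo "port 80 busy";   else echo "port 80 free";   fi
if echo "$sample" | port_in_use 8080; then echo "port 8080 busy"; else echo "port 8080 free"; fi
```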

#Check the host's network settings
[root@ubuntu1804 ~ ]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:6f:3c:3d:8c  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.100  netmask 255.255.255.0  broadcast 10.0.0.255
        inet6 fe80::20c:29ff:fe5b:158b  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:5b:15:8b  txqueuelen 1000  (Ethernet)
        RX packets 2946  bytes 228799 (228.7 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2244  bytes 341415 (341.4 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 111  bytes 9181 (9.1 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 111  bytes 9181 (9.1 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

#Check listening ports on the host
[root@ubuntu1804 ~ ]# netstat -lnt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 127.0.0.53:53           0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN
tcp6       0      0 :::22                   :::*                    LISTEN
[root@ubuntu1804 ~ ]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.0.1        0.0.0.0         UG    0      0        0 ens33
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 ens33
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0

#Start a container in host network mode
[root@ubuntu1804 ~ ]# docker run -d --network host --name host_nginx01 nginx:1.21.1-alpine

#Check the network inside the container
[root@ubuntu1804 ~ ]# docker exec -it host_nginx01 sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP qlen 1000
    link/ether 00:0c:29:5b:15:8b brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.100/24 brd 10.0.0.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe5b:158b/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
    link/ether 02:42:6f:3c:3d:8c brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
/ # netstat -lnt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN
tcp        0      0 127.0.0.53:53           0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN
tcp        0      0 :::80                   :::*                    LISTEN
tcp        0      0 :::22                   :::*                    LISTEN
/ # route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.0.1        0.0.0.0         UG    0      0        0 ens33
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 ens33
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0

#Port 80 on the host is now held by host_nginx01; other containers cannot use port 80
[root@ubuntu1804 ~ ]# docker run -d --network host --name host_nginx02 nginx:1.21.1-alpine
[root@ubuntu1804 ~ ]# docker ps -a
CONTAINER ID    IMAGE        		 COMMAND                 CREATED         STATUS     PORTS    NAMES
afe37e6eee77    nginx:1.21.1-alpine  "/docker-entrypoint.…"  11 seconds ago  Exited (1) 7   host_nginx02
5d8d6ed60097    nginx:1.21.1-alpine  "/docker-entrypoint.…"  4 minutes ago   Up 4 minutes   host_nginx01

Example: port mapping does not work in host mode

[root@ubuntu1804 ~ ]# docker run -d -p 8000:80 --network host --name host_nginx03 nginx:1.21.1-alpine
WARNING: Published ports are discarded when using host network mode		#warning: host mode cannot map ports
447615190e5d4ca84df54ea4844317257271ef5458baf62c5817a38fe2abf364
[root@ubuntu1804 ~ ]# docker ps -a
CONTAINER ID    IMAGE        		 COMMAND                 CREATED         STATUS     PORTS    NAMES
447615190e5d    nginx:1.21.1-alpine  "/docker-entrypoint.…"  11 seconds ago  Exited (1) 7   host_nginx03
afe37e6eee77    nginx:1.21.1-alpine  "/docker-entrypoint.…"  11 seconds ago  Exited (1) 7   host_nginx02
5d8d6ed60097    nginx:1.21.1-alpine  "/docker-entrypoint.…"  4 minutes ago   Up 4 minutes   host_nginx01

4.3.4 none

In none mode, Docker performs no network configuration for the container at all: no NIC, no IP, no routes, so by default it cannot communicate with the outside world. Interfaces and addresses have to be added manually, which is why this mode is rarely used

It gives Docker developers a base on which to build arbitrary custom networking; this allows more flexible and complex networks and reflects the openness of Docker's design

Characteristics of none mode

  • Selected with --network none
  • No networking by default; no external communication

[root@ubuntu1804 ~ ]# docker run -d -P --network none --name none_nginx01 nginx:1.21.1-alpine
95fde2c600a552a2401949af800340d0075c15456b85454a18d4d3bc9f390d44
[root@ubuntu1804 ~ ]# docker ps
CONTAINER ID   IMAGE                 COMMAND                CREATED        STATUS     PORTS     NAMES
95fde2c600a5   nginx:1.21.1-alpine   "/docker-entrypoint.…"   8 seconds ago   Up 7 seconds             none_nginx01

[root@ubuntu1804 ~ ]# docker exec -it none_nginx01 sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
/ # route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface

4.3.5 container


A container created in this mode shares the network of another, already existing container rather than the host's network. The new container does not create its own NIC or configure its own IP; it shares the referenced container's IP and port range. Apart from the network, the filesystem, process list, and so on remain isolated between the two, and their processes can communicate over the shared lo interface

Characteristics of container mode

  • Selected with --network container:<name or ID>
  • Containers share a network namespace
  • Suited to frequent inter-container network communication
  • The container's networking is determined by the referenced container's network mode; rarely used
  • Hard dependency on the referenced container: if it is deleted, this container can no longer start

#Create the server container whose network will be shared
[root@ubuntu1804 ~ ]# docker run -d -P --name server nginx:1.21.1-alpine
231043e8e7381876c52cf097e68a86d38ff396d6b0b612301cc400f65bb091e9
[root@ubuntu1804 ~ ]# docker exec -it server sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
4: eth0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # netstat -lnt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN
tcp        0      0 :::80                   :::*                    LISTEN
/ # route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.17.0.1      0.0.0.0         UG    0      0        0 eth0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 eth0

#Create a client container sharing the server container's network
[root@ubuntu1804 ~ ]# docker run -it --name client --network container:server alpine:3.11 sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
6: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # netstat -lnt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN
tcp        0      0 :::80                   :::*                    LISTEN

Example: the first container uses host network mode, and a second container shares its network

[root@ubuntu1804 ~ ]# docker run -it --name c1 --network host alpine:3.11 sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP qlen 1000
    link/ether 00:0c:29:5b:15:8b brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.100/24 brd 10.0.0.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe5b:158b/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:6f:3c:3d:8c brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:6fff:fe3c:3d8c/64 scope link
       valid_lft forever preferred_lft forever
7: veth4892cad@if6: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue master docker0 state UP
    link/ether 46:70:c4:6f:e1:64 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::4470:c4ff:fe6f:e164/64 scope link
       valid_lft forever preferred_lft forever
/ # #detach with ctrl+p, ctrl+q

[root@ubuntu1804 ~ ]# docker run -it --name c2 --network container:c1 alpine:3.11 sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP qlen 1000
    link/ether 00:0c:29:5b:15:8b brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.100/24 brd 10.0.0.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe5b:158b/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:6f:3c:3d:8c brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:6fff:fe3c:3d8c/64 scope link
       valid_lft forever preferred_lft forever
7: veth4892cad@if6: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue master docker0 state UP
    link/ether 46:70:c4:6f:e1:64 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::4470:c4ff:fe6f:e164/64 scope link
       valid_lft forever preferred_lft forever

4.3.6 Custom Networks

Beyond docker's four network modes, custom networks can also be created with their own subnet, gateway, and so on, giving different application clusters independently managed networks that do not interfere with one another

A so-called custom network is really just a new named network, with its own IP range and gateway, built on top of docker's existing network modes; custom networks only support the bridge driver, not host or none

Note: within the same custom network, containers can conveniently reach each other directly by container name

Create a custom network

docker network create -d <mode> --subnet <CIDR> --gateway <gateway> <custom network name>

-d	#driver the custom network is built on; host and none are not supported
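When choosing --subnet and --gateway values, the gateway must fall inside the subnet or docker network create will refuse it. A self-contained sketch of that check in shell arithmetic (an illustration, not Docker's actual validation code):

```shell
# ip2int: dotted-quad IPv4 -> 32-bit integer
ip2int() {
    IFS=. read -r a b c d <<EOF
$1
EOF
    echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

# in_subnet GATEWAY CIDR: succeed if GATEWAY lies inside CIDR
in_subnet() {
    gw=$(ip2int "$1")
    net=$(ip2int "${2%/*}")
    bits=${2#*/}
    mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
    [ $(( gw & mask )) -eq $(( net & mask )) ]
}

in_subnet 172.27.0.1 172.27.0.0/16 && echo "172.27.0.1 is inside 172.27.0.0/16"
in_subnet 10.0.0.1 172.27.0.0/16   || echo "10.0.0.1 is outside 172.27.0.0/16"
```

The same rule explains the daemon.json note earlier that default-gateway must share a network with bip.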

Run a container on the custom network

docker run --network <custom network name or ID> <image name>

Delete a custom network

docker network rm <custom network name or ID>

Example, step 1: create the custom network

#step1: create a bridge-type custom network named test-net
[root@ubuntu1804 ~ ]# docker network create -d bridge --subnet 172.27.0.10/16 --gateway 172.27.0.1 test-net
721cc87c058c7cf56190871d890dd52b6910ba6729bc2c798ff590cbbdab81c3
[root@ubuntu1804 ~ ]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
d28b2aaa5e8b        bridge              bridge              local
e6528b92fb0b        host                host                local
62327ec0d6e2        none                null                local
721cc87c058c        test-net            bridge              local

[root@ubuntu1804 ~ ]# docker network inspect test-net
[
    {
        "Name": "test-net",
        "Id": "721cc87c058c7cf56190871d890dd52b6910ba6729bc2c798ff590cbbdab81c3",
        .......................
        "Config": [
                {
                    "Subnet": "172.27.0.10/16",
                    "Gateway": "172.27.0.1"
                }
            ]
        },
		..........................
]

#step2: the host now has an additional bridge interface
[root@ubuntu1804 ~ ]# brctl show
bridge name			bridge id		STP enabled	interfaces
br-721cc87c058c		8000.0242b1c55c34	no
docker0				8000.02426f3c3d8c	no

[root@ubuntu1804 ~ ]# ifconfig
br-721cc87c058c: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.27.0.1  netmask 255.255.0.0  broadcast 172.27.255.255
        ether 02:42:b1:c5:5c:34  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        inet6 fe80::42:6fff:fe3c:3d8c  prefixlen 64  scopeid 0x20<link>
        ether 02:42:6f:3c:3d:8c  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 7  bytes 626 (626.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.100  netmask 255.255.255.0  broadcast 10.0.0.255
        inet6 fe80::20c:29ff:fe5b:158b  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:5b:15:8b  txqueuelen 1000  (Ethernet)
        RX packets 12350  bytes 882158 (882.1 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 5918  bytes 691677 (691.6 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@ubuntu1804 ~ ]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.0.1        0.0.0.0         UG    0      0        0 ens33
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 ens33
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
172.27.0.0      0.0.0.0         255.255.0.0     U     0      0        0 br-721cc87c058c

Step 2: run containers on the custom network

#A container created on the custom network gets its address from that network's configured IP range
[root@ubuntu1804 ~ ]# docker run -it --rm --network test-net --name test01 alpine:3.11
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
9: eth0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:ac:1b:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.27.0.2/16 brd 172.27.255.255 scope global eth0
       valid_lft forever preferred_lft forever

#Open a new terminal window and inspect the network
[root@ubuntu1804 ~ ]# docker network inspect test-net
............
#Network information for containers on this network
        "Containers": {
        	#container ID
            "4cc5eabb9ce817c1eb07aa9518f351447d18146d3d6c4745d3eb33520a21b023": {
            	#container name
                "Name": "test01",
                "EndpointID": "740f7516c747722096077c901a2c73068512cd66e7764639d8f3453d2ba934f1",
                #container MAC address
                "MacAddress": "02:42:ac:1b:00:02",
                #container IP address
                "IPv4Address": "172.27.0.2/16",
                "IPv6Address": ""
            }
        },

#Open another window and run the test02 container; inside a custom network, container names resolve directly to the target host's IP
[root@ubuntu1804 ~ ]#  docker run -it --rm --network test-net --name test02 alpine:3.11
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
13: eth0@if14: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:ac:1b:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.27.0.3/16 brd 172.27.255.255 scope global eth0
       valid_lft forever preferred_lft forever

/ # cat /etc/hosts
127.0.0.1	localhost
::1	localhost ip6-localhost ip6-loopback
fe00::0	ip6-localnet
ff00::0	ip6-mcastprefix
ff02::1	ip6-allnodes
ff02::2	ip6-allrouters
172.27.0.3	536346278e3d

/ # ping test01
PING test01 (172.27.0.2): 56 data bytes
64 bytes from 172.27.0.2: seq=0 ttl=64 time=0.919 ms
64 bytes from 172.27.0.2: seq=1 ttl=64 time=0.095 ms

4.4 Communication Between Containers on Different Networks on the Same Host

Start two containers, one on a custom network and one on the default bridge network. By default they cannot reach each other because of Docker's iptables rules.

#Create container test01 on the default bridge network
[root@ubuntu1804 ~ ]# docker run -it --name test01 alpine:3.11 sh
/ # ip a
19: eth0@if20: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever

#Create a custom network, then run container test02 on it
[root@ubuntu1804 ~ ]# docker network create -d bridge --subnet 172.27.0.10/16 --gateway 172.27.0.1 test-net
[root@ubuntu1804 ~ ]# ifconfig
br-721cc87c058c: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.27.0.1  netmask 255.255.0.0  broadcast 172.27.255.255
        inet6 fe80::42:b1ff:fec5:5c34  prefixlen 64  scopeid 0x20<link>
        ether 02:42:b1:c5:5c:34  txqueuelen 0  (Ethernet)
        RX packets 273  bytes 22316 (22.3 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 21  bytes 1406 (1.4 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        inet6 fe80::42:6fff:fe3c:3d8c  prefixlen 64  scopeid 0x20<link>
        ether 02:42:6f:3c:3d:8c  txqueuelen 0  (Ethernet)
        RX packets 396  bytes 32648 (32.6 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 20  bytes 1268 (1.2 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@ubuntu1804 ~ ]# docker run -it --rm --network test-net --name test02 alpine:3.11 sh
/ # ip a
21: eth0@if18: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:ac:1b:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.27.0.2/16 brd 172.27.255.255 scope global eth0
       valid_lft forever preferred_lft forever

#test01 cannot reach test02
[root@ubuntu1804 ~ ]# docker exec -it test01 sh
/ # ping 172.27.0.2
PING 172.27.0.2 (172.27.0.2): 56 data bytes		#no replies: 172.27.0.2 is unreachable

#test02 cannot reach test01
[root@ubuntu1804 ~ ]# docker exec -it test02 sh
/ # ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2): 56 data bytes		#no replies: 172.17.0.2 is unreachable

#Docker's iptables rules isolate different Docker networks from each other
[root@ubuntu1804 ~ ]# iptables -S
#All forwarded traffic is first handed to the DOCKER-ISOLATION-STAGE-1 chain
-A FORWARD -j DOCKER-ISOLATION-STAGE-1

#Step 1: traffic that enters on br-721cc87c058c but leaves via a different interface is passed to DOCKER-ISOLATION-STAGE-2; e.g. traffic from test02 toward test01 must leave via docker0, so it goes to stage 2
-A DOCKER-ISOLATION-STAGE-1 -i br-721cc87c058c ! -o br-721cc87c058c -j DOCKER-ISOLATION-STAGE-2

#Step 2: likewise, traffic that enters on docker0 but leaves via a different interface is passed to DOCKER-ISOLATION-STAGE-2; e.g. traffic from test01 toward test02 must leave via br-721cc87c058c
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2

#Step 3: everything else returns to the FORWARD chain untouched
-A DOCKER-ISOLATION-STAGE-1 -j RETURN

#Step 4: DOCKER-ISOLATION-STAGE-2 drops any packet headed into another Docker bridge, so cross-network traffic is dropped in both directions
-A DOCKER-ISOLATION-STAGE-2 -o br-721cc87c058c -j DROP
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
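The two-stage decision above can be condensed into a tiny function. The following is only an illustrative model of the isolation logic (not part of Docker itself): given the input and output interface of a forwarded packet, it returns the verdict the DOCKER-ISOLATION chains would reach.

```shell
#!/bin/bash
# Illustrative model of the DOCKER-ISOLATION-STAGE-1/2 decision
# (assumption: only docker0 and br-* interfaces count as Docker bridges).
isolation_verdict() {
    local in_if=$1 out_if=$2
    case $in_if in
        docker0|br-*)                            # STAGE-1: packet enters a Docker bridge...
            if [[ $in_if != "$out_if" ]]; then   # ...and leaves via a different interface
                case $out_if in
                    docker0|br-*) echo DROP; return ;;  # STAGE-2: headed into another bridge -> DROP
                esac
            fi ;;
    esac
    echo ACCEPT    # RETURN to the FORWARD chain; the packet is not isolated
}

isolation_verdict br-721cc87c058c docker0    # test02 -> test01: DROP
isolation_verdict docker0 docker0            # same-bridge traffic: ACCEPT
isolation_verdict docker0 ens33              # container -> outside world: ACCEPT
```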

image-20240216154843175

4.4.1 Modifying iptables to Allow Cross-Network Container Communication on the Same Host

#Insert an ACCEPT rule at the top of the DOCKER-ISOLATION-STAGE-2 chain
[root@ubuntu1804 ~ ]# iptables -I DOCKER-ISOLATION-STAGE-2 -j ACCEPT
[root@ubuntu1804 ~ ]# iptables -vnL
Chain DOCKER-ISOLATION-STAGE-2 (2 references)
 pkts bytes target     prot opt in     out     source               destination
    4   336 ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0
  385 32340 DROP       all  --  *      br-721cc87c058c  0.0.0.0/0            0.0.0.0/0
 1171 98364 DROP       all  --  *      docker0  0.0.0.0/0            0.0.0.0/0
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0

#test01 and test02 can now communicate
[root@ubuntu1804 ~ ]# docker exec  -it test01 sh
/ # ping 172.27.0.2
PING 172.27.0.2 (172.27.0.2): 56 data bytes
64 bytes from 172.27.0.2: seq=0 ttl=63 time=0.162 ms
64 bytes from 172.27.0.2: seq=1 ttl=63 time=0.132 ms

[root@ubuntu1804 ~ ]# docker exec -it test02 sh
/ # ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=63 time=0.151 ms
64 bytes from 172.17.0.2: seq=1 ttl=63 time=0.099 ms
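Note that a rule inserted this way lives only in the running kernel: restarting the docker service rebuilds the isolation chains and the manual ACCEPT rule disappears. A quick check (a sketch; requires a Docker host):

```shell
# Restarting docker regenerates its iptables chains, discarding manual edits
systemctl restart docker
iptables -vnL DOCKER-ISOLATION-STAGE-2   # the inserted ACCEPT rule is gone
```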

4.4.2 Using docker network connect for Cross-Network Container Communication on the Same Host

The docker network connect command attaches an additional NIC, on the specified network, to an existing container, allowing containers on different networks on the same host to communicate.

#Connect CONTAINER to NETWORK so it can communicate with the other containers on that network
docker network connect [OPTIONS] NETWORK CONTAINER		#adds a NIC on the given network to the container

#Disconnect CONTAINER from NETWORK so it can no longer communicate with the containers on that network
docker network disconnect [OPTIONS] NETWORK CONTAINER		#removes the container's NIC on that network

Example:

#Restore the original firewall rules (restarting docker regenerates them)
systemctl restart docker

#After connecting, the bridge-mode container test01 gains an extra NIC with an IP from the test-net network
[root@ubuntu1804 ~ ]# docker network connect test-net test01

#Check bridge membership on the host
[root@ubuntu1804 ~ ]# brctl  show
bridge name			bridge id		STP enabled	interfaces
br-721cc87c058c		8000.0242b1c55c34	no		veth3fafd8f
docker0				8000.02426f3c3d8c	no		veth7f56b0e
												vethc6fb935
[root@ubuntu1804 ~ ]# docker exec -it test01 sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
19: eth0@if20: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
23: eth1@if28: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:ac:1b:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.27.0.3/16 brd 172.27.255.255 scope global eth1
       valid_lft forever preferred_lft forever
/ # ping -c2 172.27.0.2
PING 172.27.0.2 (172.27.0.2): 56 data bytes
64 bytes from 172.27.0.2: seq=0 ttl=64 time=0.391 ms
64 bytes from 172.27.0.2: seq=1 ttl=64 time=0.095 ms

#test02 itself is unchanged and still cannot reach test01's address on the default bridge network
[root@ubuntu1804 ~ ]# docker exec -it test02 sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
25: eth0@if26: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:ac:1b:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.27.0.2/16 brd 172.27.255.255 scope global eth0
       valid_lft forever preferred_lft forever

#Also connect test02 (on the custom network) to the default bridge network, so it can reach test01 there
[root@ubuntu1804 ~ ]# docker network connect bridge test02

#Check bridge membership on the host
[root@ubuntu1804 ~ ]# brctl  show
bridge name			bridge id		STP enabled	interfaces
br-721cc87c058c		8000.0242b1c55c34	no		veth3fafd8f
												vethc6fde1a
docker0				8000.02426f3c3d8c	no		veth7f56b0e
												vethc6fb935

[root@ubuntu1804 ~ ]# docker exec -it test02 sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
25: eth0@if26: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:ac:1b:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.27.0.2/16 brd 172.27.255.255 scope global eth0
       valid_lft forever preferred_lft forever
29: eth1@if30: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.3/16 brd 172.17.255.255 scope global eth1
       valid_lft forever preferred_lft forever
/ # ping -c2 172.17.0.2
PING 172.17.0.2 (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.207 ms
64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.101 ms

#Inspect the test-net network
[root@ubuntu1804 ~ ]# docker network inspect test-net
[
    {
        "Name": "test-net",
        "Id": "721cc87c058c7cf56190871d890dd52b6910ba6729bc2c798ff590cbbdab81c3",
        "Created": "2021-09-02T13:09:57.210954719Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.27.0.10/16",
                    "Gateway": "172.27.0.1"
                }
            ]
        },
        .................
        "ConfigOnly": false,
        "Containers": {
            "5185a197606d2c60b2bad7a298e82d0d7437da3aba8de0560fdff220905230f7": {
                "Name": "test01",
                "EndpointID": "caa7ab66c0286adb374685da386330c7fd3ce441640fab22c7477f8497b2d7ea",
                "MacAddress": "02:42:ac:1b:00:03",
                "IPv4Address": "172.27.0.3/16",
                "IPv6Address": ""
            },
            "7612c77180db6072f4ddfdf18aa4b4f20d7bbe04b3374abe284e6347c240de87": {
                "Name": "test02",
                "EndpointID": "c7e53945f5ff685fdb85f1d69cb350f78f325794797a53a469ca1818bc1def3e",
                "MacAddress": "02:42:ac:1b:00:02",
                "IPv4Address": "172.27.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]

Disconnecting containers from a network

#Disconnect test01 from test-net so it can no longer talk to the other containers on that network
docker network disconnect test-net test01

#Disconnect test02 from the default bridge network
docker network disconnect bridge test02

4.5 Container Communication Across Hosts

4.5.1 Method 1: Bridging the Physical NIC for Cross-Host Container Communication

By default Docker creates the docker0 bridge; each container's eth0 is paired with a veth interface on the host that is attached to docker0, which is how the host reaches its containers. If we also attach the host's physical NIC to docker0, the physical network and the container network are bridged together, so containers on different hosts can communicate directly.

image-20240216154859041

Notes:

  • The docker0 bridge acts as a virtual switch; this method merely opens a layer-2 path, so the container networks on all hosts must use the same subnet
  • Container IP addresses on different hosts must not conflict
  • Attaching the host's eth0 to docker0 removes the usable IP address from eth0, breaking remote access over that NIC
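To keep the host reachable after enslaving eth0, a common approach is to move the host's address onto the bridge. A minimal sketch, assuming eth0 carried 10.0.0.100/24 and the default gateway is 10.0.0.1 (values taken from this environment); run it from a local console, not over SSH:

```shell
# Attach the physical NIC to docker0, then move the host IP onto the bridge
# so the host itself stays reachable (the enslaved eth0 can no longer hold it).
brctl addif docker0 eth0
ip addr del 10.0.0.100/24 dev eth0
ip addr add 10.0.0.100/24 dev docker0
ip route add default via 10.0.0.1 dev docker0    # restore the default route
```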

Environment

| Host  | OS         | Bridged NIC       | docker0 IP    |
|-------|------------|-------------------|---------------|
| host1 | ubuntu1804 | eth0 (10.0.0.100) | 172.17.0.1/16 |
| host2 | ubuntu1804 | eth0 (10.0.0.200) | 172.17.0.1/16 |

#Check the NICs on host1
[root@host1 ~ ]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:c3:fa:8c:f7  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.100  netmask 255.255.255.0  broadcast 10.0.0.255
        inet6 fe80::20c:29ff:fe5b:158b  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:5b:15:8b  txqueuelen 1000  (Ethernet)
        RX packets 50  bytes 3024 (3.0 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 10  bytes 796 (796.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.101  netmask 255.255.255.0  broadcast 10.0.0.255
        inet6 fe80::20c:29ff:fe5b:1595  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:5b:15:95  txqueuelen 1000  (Ethernet)
        RX packets 181  bytes 17858 (17.8 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 129  bytes 18568 (18.5 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
#Check the NICs on host2
[root@host2 ~ ]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:87:ed:c9:56  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.200  netmask 255.255.255.0  broadcast 10.0.0.255
        inet6 fe80::20c:29ff:febf:dece  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:bf:de:ce  txqueuelen 1000  (Ethernet)
        RX packets 64  bytes 3870 (3.8 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 13  bytes 1016 (1.0 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.102  netmask 255.255.255.0  broadcast 10.0.0.255
        inet6 fe80::20c:29ff:febf:ded8  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:bf:de:d8  txqueuelen 1000  (Ethernet)
        RX packets 208  bytes 21912 (21.9 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 141  bytes 21030 (21.0 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

#On both hosts: install the bridge utilities and attach the host's eth0 to docker0
apt -y install bridge-utils
brctl addif docker0 eth0

#To undo the change later
brctl delif docker0 eth0

[root@host1 ~ ]# brctl show
bridge name	bridge id		STP enabled	interfaces
docker0		8000.0242c3fa8cf7	no		eth0

[root@host2 ~ ]# brctl show
bridge name	bridge id		STP enabled	interfaces
docker0		8000.024287edc956	no		eth0

#Start one container on each host, making sure their IPs differ, then test connectivity between them
#Container on host1
[root@host1 ~ ]# docker run -it alpine:3.11
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
6: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever

#On host2, the first container (test1) gets 172.17.0.2, which would conflict with host1's container, so run it only to consume that address
[root@host2 ~]#docker run -it --name test1 alpine:3.11
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
13: eth0@if14: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
/ #

[root@host2 ~]#docker run -it alpine:3.11
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
15: eth0@if16: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP 
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # 

#Stop test1 to release the conflicting address 172.17.0.2 on host2
[root@host2 ~]#docker stop test1 

#From the remaining container on host2, access the container on host1
/ # ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.102 ms
64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.516 ms

4.5.2 Method 2: Cross-Host Container Communication via NAT and Static Routes

Use static routes together with the firewall's NAT forwarding so that containers on different subnets on different hosts can communicate.

Environment

| Host  | Host NIC   | docker0 subnet |
|-------|------------|----------------|
| host1 | 10.0.0.100 | 192.168.1.1/24 |
| host2 | 10.0.0.200 | 192.168.2.1/24 |

image-20240216154914019

4.5.2.1 Change the Docker Network on host1

[root@host1 ~ ]# vim /etc/docker/daemon.json
{
        "registry-mirrors": ["https://n0g7070z.mirror.aliyuncs.com"],
        "bip": "192.168.1.1/24"
}
[root@host1 ~ ]# systemctl restart docker
[root@host1 ~ ]# ip a
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:5b:15:8b brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.100/24 brd 10.0.0.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe5b:158b/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:9e:5d:ae:df brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.1/24 brd 192.168.1.255 scope global docker0
       valid_lft forever preferred_lft forever

4.5.2.2 Change the Docker Network on host2

[root@host2 ~ ]# vim /etc/docker/daemon.json
{
        "registry-mirrors": ["https://n0g7070z.mirror.aliyuncs.com"],
        "bip": "192.168.2.1/24"
}
[root@host2 ~ ]# systemctl restart docker
[root@host2 ~ ]# ip a
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:bf:de:ce brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.200/24 brd 10.0.0.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:febf:dece/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:33:34:57:69 brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.1/24 brd 192.168.2.255 scope global docker0
       valid_lft forever preferred_lft forever

4.5.2.3 Add Static Routes and iptables Rules

Why add static routes?

Traffic from a container on host1 is SNATed out through the host's eth0; a static route then directs packets destined for the remote container subnet to the other host's eth0, which acts as the gateway to that subnet.

[root@host1 ~ ]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.0.1        0.0.0.0         UG    0      0        0 ens33
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 ens33
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 docker0

By default Docker only sets up forwarding rules for docker0; there is no rule allowing packets received on the host's physical NIC to be forwarded on to docker0.

image-20240216154929472

Add the rule:

iptables -A FORWARD -s 10.0.0.0/24 -j ACCEPT

4.5.2.4 Enable host1's Containers to Access host2's Containers

Step 1: On host1, add a static route to 192.168.2.0/24

[root@host1 ~ ]# route add -net 192.168.2.0/24 gw 10.0.0.200
[root@host1 ~ ]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.0.1        0.0.0.0         UG    0      0        0 ens33
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 ens33
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 docker0
192.168.2.0     10.0.0.200      255.255.255.0   UG    0      0        0 ens33
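The lookup that sends 192.168.2.2 via 10.0.0.200 is an ordinary route match: the kernel picks the route whose subnet contains the destination. A small self-contained sketch of that membership check (bash, purely illustrative; not part of the setup):

```shell
#!/bin/bash
# Convert a dotted-quad IPv4 address to a 32-bit integer
ip2int() {
    local IFS=.
    set -- $1
    echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

# in_subnet ADDR NETWORK/PREFIX -> prints yes/no
in_subnet() {
    local addr net plen mask
    addr=$(ip2int "$1")
    net=$(ip2int "${2%/*}")
    plen=${2#*/}
    mask=$(( (0xffffffff << (32 - plen)) & 0xffffffff ))
    [ $(( addr & mask )) -eq $(( net & mask )) ] && echo yes || echo no
}

in_subnet 192.168.2.2 192.168.2.0/24   # yes -> matches the route via 10.0.0.200
in_subnet 192.168.2.2 192.168.1.0/24   # no  -> the local docker0 route does not apply
```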

Step 2: On host2, add a firewall ACCEPT rule

[root@host2 ~ ]# iptables -A FORWARD -s 10.0.0.0/24 -j ACCEPT

Step 3: Containers on host1 can now access containers on host2, but not yet the other way around

[root@host1 ~ ]# docker run -td --name host1 alpine:3.11 sh
[root@host1 ~ ]# docker exec -it host1 sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
5: eth0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:c0:a8:01:02 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.2/24 brd 192.168.1.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # ping 192.168.2.2
PING 192.168.2.2 (192.168.2.2): 56 data bytes
64 bytes from 192.168.2.2: seq=0 ttl=62 time=1.436 ms
64 bytes from 192.168.2.2: seq=1 ttl=62 time=0.409 ms

#Containers on host2 cannot reach containers on host1 yet
[root@host2 ~ ]# docker run -td --name host2 alpine:3.11 sh
[root@host2 ~ ]# docker exec -it host2 sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
4: eth0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:c0:a8:02:02 brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.2/24 brd 192.168.2.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # ping 192.168.1.2
PING 192.168.1.2 (192.168.1.2): 56 data bytes

4.5.2.5 Enable host2's Containers to Access host1's Containers

Step 1: On host2, add a static route to 192.168.1.0/24

[root@host2 ~ ]# route add -net 192.168.1.0/24 gw 10.0.0.100
[root@host2 ~ ]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.0.1        0.0.0.0         UG    0      0        0 ens33
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 ens33
192.168.1.0     10.0.0.100      255.255.255.0   UG    0      0        0 ens33
192.168.2.0     0.0.0.0         255.255.255.0   U     0      0        0 docker0

Step 2: On host1, add a firewall ACCEPT rule

[root@host1 ~ ]# iptables -A FORWARD -s 10.0.0.0/24 -j ACCEPT

4.5.2.6 Verify Cross-Host, Cross-Network Container Communication

A container on host1 accesses a container on host2

[root@host1 ~ ]# docker exec -it host1 sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
5: eth0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:c0:a8:01:02 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.2/24 brd 192.168.1.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # ping 192.168.2.2
PING 192.168.2.2 (192.168.2.2): 56 data bytes
64 bytes from 192.168.2.2: seq=0 ttl=62 time=1.436 ms
64 bytes from 192.168.2.2: seq=1 ttl=62 time=0.409 ms

A container on host2 accesses a container on host1

[root@host2 ~ ]# docker exec -it host2 sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
10: eth0@if11: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:c0:a8:02:02 brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.2/24 brd 192.168.2.255 scope global eth0
       valid_lft forever preferred_lft forever

/ # ping 192.168.1.2
PING 192.168.1.2 (192.168.1.2): 56 data bytes
64 bytes from 192.168.1.2: seq=0 ttl=62 time=0.798 ms
64 bytes from 192.168.1.2: seq=1 ttl=62 time=0.428 ms

4.5.3 Method 3: Cross-Host macvlan Network

In Docker, macvlan is one of the built-in network drivers and supports cross-host networking. It is enabled as a driver (specified with the -d option), and Docker's macvlan driver supports only bridge mode.

Reference: https://cloud.tencent.com/developer/article/1432601

A macvlan network takes exclusive use of a physical NIC; in other words, only one macvlan network can be created per physical NIC.

Note: the host NICs that macvlan binds to must be physically connected on the same layer-2 segment.

image-20240216154945685

Step 1: On host1, add an eth1 NIC and enable promiscuous mode (promiscuous mode: the NIC also accepts frames whose destination MAC is not its own)

#The host NIC used by macvlan does not need an IP address
[root@host1 ~ ]# ifconfig eth1 192.168.0.1/24 up
[root@host1 ~ ]# ip link set eth1 promisc on

image-20240216154955403

Step 2: On host2, add an eth1 NIC and enable promiscuous mode

#The host NIC used by macvlan does not need an IP address
[root@host2 ~ ]# ip link set eth1 promisc on

Step 3: Create the macvlan network on each host

docker network create --driver macvlan --subnet=192.168.0.0/24 --gateway=192.168.0.254 -o parent=eth1 macvlan

-d 			#the Docker network driver to use
--subnet 	#the subnet of the macvlan network; must not conflict with other networks on the host
--gateway 	#the gateway address
-o parent 	#the physical NIC that backs the macvlan network

Step 4: Run containers on the macvlan network; because each host allocates addresses independently, assign container IPs explicitly to avoid cross-host conflicts

#On host1
[root@host1 ~ ]# docker run -td --network macvlan --ip=192.168.0.2 --name vlan1 alpine:3.11

#On host2
[root@host2 ~ ]# docker run -td --network macvlan  --ip=192.168.0.3 --name vlan2 alpine:3.11

Step 5: Test connectivity

#vlan1 pings vlan2
[root@host1 ~ ]# docker exec -it vlan1 sh
/ # ping 192.168.0.3
PING 192.168.0.3 (192.168.0.3): 56 data bytes
64 bytes from 192.168.0.3: seq=0 ttl=64 time=0.770 ms
64 bytes from 192.168.0.3: seq=1 ttl=64 time=0.294 ms
#vlan1 cannot ping its own host's NIC address
/ # ping 192.168.0.1
PING 192.168.0.1 (192.168.0.1): 56 data bytes

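That failure is inherent to macvlan: the parent interface and its macvlan children cannot exchange traffic directly. If host-to-container communication is required, a common workaround is to give the host its own macvlan sub-interface on the same parent. A sketch; the interface name macvlan0 and the address 192.168.0.100/24 are assumptions, not part of the original setup:

```shell
# On the host: create a macvlan sub-interface on the same parent NIC (eth1),
# so host <-> container traffic travels via the macvlan bridge rather than
# the parent interface itself.
ip link add macvlan0 link eth1 type macvlan mode bridge
ip addr add 192.168.0.100/24 dev macvlan0    # assumed unused address in the macvlan subnet
ip link set macvlan0 up
ping -c2 192.168.0.2    # the vlan1 container should now answer
```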
#vlan2 pings vlan1
[root@host2 ~ ]# docker exec -it vlan2 sh
/ # ping 192.168.0.2
PING 192.168.0.2 (192.168.0.2): 56 data bytes
64 bytes from 192.168.0.2: seq=0 ttl=64 time=0.433 ms
64 bytes from 192.168.0.2: seq=1 ttl=64 time=0.547 ms