Simulating Docker's bridge network mode with Linux network namespaces — ping fails in the virtual network

I am using Linux network namespaces to simulate Docker's bridge mode, building the topology shown below (the commands use br0 in place of the bridge):
[figure: net0 and net1, each attached to br0 through a veth pair]
The commands I entered, in order:

ip netns add net0
ip netns add net1
ip link add br0 type bridge
ip link set dev br0 up
ip link add type veth
ip link set dev veth0 netns net0
ip netns exec net0 ip link set dev veth0 name eth0
ip netns exec net0 ip addr add 10.0.1.1/24 dev eth0
ip netns exec net0 ip link set dev eth0 up
ip link set dev veth1 master br0
ip link set dev veth1 up
ip link add type veth
ip link set dev veth0 netns net1
ip netns exec net1 ip link set dev veth0 name eth0
ip netns exec net1 ip addr add 10.0.1.2/24 dev eth0
ip netns exec net1 ip link set dev eth0 up
ip link set dev veth2 master br0
ip link set dev veth2 up
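
(As an aside: the pairs could also be created with explicit peer names, which avoids relying on the kernel's automatic vethN numbering. This is just an equivalent sketch, with illustrative interface names, not the commands I actually ran:)

ip link add veth1 type veth peer name ns0-eth0   # name both ends explicitly
ip link set dev ns0-eth0 netns net0
ip netns exec net0 ip link set dev ns0-eth0 name eth0
ip link add veth2 type veth peer name ns1-eth0
ip link set dev ns1-eth0 netns net1
ip netns exec net1 ip link set dev ns1-eth0 name eth0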

(Note that the second ip link add type veth reuses the now-free name veth0, since the first veth0 was moved into net0 and renamed, and creates veth2 as its peer.) At this point the environment looks like this:

root@VM-102-49-ubuntu:/home/ubuntu# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:36:e0:7a brd ff:ff:ff:ff:ff:ff
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
    link/ether 02:42:5a:64:40:f4 brd ff:ff:ff:ff:ff:ff
6: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 32:e5:fd:61:13:44 brd ff:ff:ff:ff:ff:ff
8: veth1@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP mode DEFAULT group default qlen 1000
    link/ether 8e:64:2e:b3:10:0b brd ff:ff:ff:ff:ff:ff link-netnsid 0
10: veth2@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP mode DEFAULT group default qlen 1000
    link/ether 32:e5:fd:61:13:44 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    
root@VM-102-49-ubuntu:/home/ubuntu# ip netns exec net0 ip addr
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
7: eth0@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 9a:ee:e3:4e:a5:74 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.0.1.1/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::98ee:e3ff:fe4e:a574/64 scope link
       valid_lft forever preferred_lft forever
       
root@VM-102-49-ubuntu:/home/ubuntu# ip netns exec net1 ip addr
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
9: eth0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether fa:d5:1f:6b:ac:e2 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.0.1.2/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::f8d5:1fff:fe6b:ace2/64 scope link
       valid_lft forever preferred_lft forever
       
root@VM-102-49-ubuntu:/home/ubuntu# bridge link
8: veth1 state UP @(null): <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master br0 state forwarding priority 32 cost 2
10: veth2 state UP @(null): <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master br0 state forwarding priority 32 cost 2

Everything looks fine, but when I try to ping net1 from net0, the ping fails:

root@VM-102-49-ubuntu:/home/ubuntu# ip netns exec net0 ping -c 3 10.0.1.2
PING 10.0.1.2 (10.0.1.2) 56(84) bytes of data.

--- 10.0.1.2 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2007ms

I then tried capturing packets with tcpdump:

root@VM-102-49-ubuntu:/home/ubuntu# ip netns exec net1 tcpdump -n -i eth0
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
00:03:23.061964 ARP, Request who-has 10.0.1.2 tell 10.0.1.1, length 28
00:03:23.061979 ARP, Reply 10.0.1.2 is-at fa:d5:1f:6b:ac:e2, length 28

2 packets captured
2 packets received by filter
0 packets dropped by kernel

root@VM-102-49-ubuntu:/home/ubuntu# ip netns exec net0 tcpdump -n -i eth0
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
00:04:00.540777 IP 10.0.1.1 > 10.0.1.2: ICMP echo request, id 18216, seq 1, length 64
00:04:01.548514 IP 10.0.1.1 > 10.0.1.2: ICMP echo request, id 18216, seq 2, length 64
00:04:02.556267 IP 10.0.1.1 > 10.0.1.2: ICMP echo request, id 18216, seq 3, length 64
00:04:05.541908 ARP, Request who-has 10.0.1.2 tell 10.0.1.1, length 28
00:04:05.541961 ARP, Reply 10.0.1.2 is-at fa:d5:1f:6b:ac:e2, length 28

5 packets captured
5 packets received by filter
0 packets dropped by kernel

root@VM-102-49-ubuntu:/home/ubuntu# sudo tcpdump -n -i br0
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on br0, link-type EN10MB (Ethernet), capture size 262144 bytes
00:06:28.266228 IP 10.0.1.1 > 10.0.1.2: ICMP echo request, id 18399, seq 1, length 64
00:06:29.265953 IP 10.0.1.1 > 10.0.1.2: ICMP echo request, id 18399, seq 2, length 64
00:06:30.265941 IP 10.0.1.1 > 10.0.1.2: ICMP echo request, id 18399, seq 3, length 64
00:06:33.269893 ARP, Request who-has 10.0.1.2 tell 10.0.1.1, length 28
00:06:33.269944 ARP, Reply 10.0.1.2 is-at fa:d5:1f:6b:ac:e2, length 28

5 packets captured
5 packets received by filter
0 packets dropped by kernel

The ARP exchange succeeds, and the echo requests are visible on br0, yet they never arrive at net1's eth0 and no replies come back. I can't see what's wrong, so I'm asking for help: why does the ping fail?

2 Answers

I don't have an Ubuntu machine at hand, but I tried this on Debian and it should behave the same.

The cause is that the system has enabled iptables processing for bridges (bridge-nf-call-iptables), so every packet traversing br0 is subject to the rules in iptables. For security reasons, Docker sets the default policy of the FORWARD chain in the filter table to DROP, so any packet that doesn't match Docker's rules is not forwarded — which is exactly why your ping fails.
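You can confirm this diagnosis on the host with a couple of commands (the outputs shown in the comments are what I would expect given the explanation above, not captured from your machine):

sysctl net.bridge.bridge-nf-call-iptables   # expect: net.bridge.bridge-nf-call-iptables = 1
iptables -S FORWARD | head -n 1             # expect: -P FORWARD DROP  (set by Docker)
iptables -vnL FORWARD                       # the chain's drop counters should grow while the ping runs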

There are two fixes; pick either one:

  1. Disable the bridge's iptables hook, so that packets forwarded by the bridge are no longer subject to iptables: echo 0 > /proc/sys/net/bridge/bridge-nf-call-iptables

  2. Add an iptables rule that lets packets traversing br0 be forwarded: iptables -A FORWARD -i br0 -j ACCEPT

I'm not sure whether the first method would interfere with Docker, so I recommend the second.
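Either change is lost on reboot. Making it persistent might look like this (a sketch; the file names are illustrative and the tooling varies by distro):

# Option 1, persistent: set the sysctl in a config file and reload it
echo 'net.bridge.bridge-nf-call-iptables = 0' > /etc/sysctl.d/99-bridge-nf.conf
sysctl -p /etc/sysctl.d/99-bridge-nf.conf

# Option 2, persistent: save the rule with iptables-persistent (Debian/Ubuntu)
iptables -A FORWARD -i br0 -j ACCEPT
iptables-save > /etc/iptables/rules.v4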

Did you perhaps forget to enable IPv4 forwarding with echo 1 > /proc/sys/net/ipv4/ip_forward?
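A quick way to check that setting and, if needed, enable it:

sysctl net.ipv4.ip_forward        # 0 means IPv4 forwarding is off
sysctl -w net.ipv4.ip_forward=1   # enable it (equivalent to the echo above)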
