How Docker's Default Bridge Network Works
Published: July 10, 2023

Normally, starting a container, say a simple nginx service, looks like this:

docker run -d --rm nginx:XXX

OK, the container is up, but it cannot be reached from outside the host. For that we need to map a port, like this:

docker run -d --rm -p 80:80 nginx:XXX

Now external machines can reach the service inside the container through the mapped host port. But how exactly does a packet travel through the NIC, netfilter, routing, isolated network namespaces, the Linux virtual bridge, and veth pair devices to reach the right process inside the container?

2.1 Network Namespace

Linux can isolate system resources, including network resources, through namespaces; the relevant one here is the network namespace. Let's start with a small experiment. If you are already familiar with this, feel free to skip ahead.

Create two network namespaces, net1 and net2:

# create net1
ip netns add net1
# create net2
ip netns add net2
# list the namespaces
ip netns list


# run a command inside a namespace: ip netns exec [network namespace name] [command]
ip netns exec net1 ip addr

The output is:

[root@localhost ~]# ip netns exec net1 ip addr
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
[root@localhost ~]#

OK: none of the host's network configuration is visible. The namespace starts with an essentially empty network stack. That is the effect of a network namespace.

2.2 Communication Between Network Namespaces

Docker containers can reach each other over the network, which means namespaces can be connected. So how is that done?

First, meet the veth pair. It is a Linux virtual network device; "virtual" means it is emulated by the kernel and does not physically exist. veth devices always come in pairs (hence "veth pair"), and data sent into one end is received at the other. The sharp-eyed among you will have noticed: two ends, with data flowing between them? That is just a network cable. Exactly. Taking it one step further: net1 and net2 above are effectively two separate hosts (at least in terms of network resources), so if we connect them with this virtual cable (the veth pair), they should be able to talk. Bingo, the theory checks out; let's verify it, continuing from the previous experiment.
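
The "cable" semantics can be captured in a tiny model. This is a hypothetical Python sketch, just to illustrate the behavior; real veth devices are kernel netdevices, not Python objects:

```python
from collections import deque

class VethEnd:
    """One end of a simulated veth pair: frames sent here appear at the peer."""
    def __init__(self):
        self.peer = None   # the other end of the "cable"
        self.rx = deque()  # frames received from the peer
        self.up = True     # link state, like `ip link set ... up/down`

    def send(self, frame):
        # A frame transmitted on one end is received by the other end,
        # but only while both ends of the cable are up.
        if self.up and self.peer.up:
            self.peer.rx.append(frame)

def veth_pair():
    """Create two connected ends, like `ip link add veth1 type veth peer name veth2`."""
    a, b = VethEnd(), VethEnd()
    a.peer, b.peer = b, a
    return a, b

veth1, veth2 = veth_pair()
veth1.send("ping 10.0.0.2")
print(veth2.rx.popleft())  # the frame comes out of the other end

veth2.up = False           # like `ip link set dev veth2 down`
veth1.send("ping again")
print(len(veth2.rx))       # 0: with one end down, nothing gets through
```

This mirrors exactly what the experiments below demonstrate with real devices, including the "unplugged cable" case.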

First create a veth pair, veth1 and veth2:

ip link add veth1 type veth peer name veth2

Then use this pair to connect the two namespaces:

ip link set veth1 netns net1
ip link set veth2 netns net2

Enter net1, configure and bring up the virtual interface:

[root@localhost ~]# ip netns exec net1 bash
[root@localhost ~]# ip addr
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
7: veth1@if6: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 0e:ad:28:91:48:51 brd ff:ff:ff:ff:ff:ff link-netnsid 1
[root@localhost ~]# ip addr add 10.0.0.1/24 dev veth1
[root@localhost ~]# ip link set dev veth1 up
[root@localhost ~]#

Enter net2, configure and bring up its interface:

[root@localhost ~]# ip netns exec net2 bash
[root@localhost ~]# ip addr
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
6: veth2@if7: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 4a:4a:a2:0a:cc:05 brd ff:ff:ff:ff:ff:ff link-netnsid 0
[root@localhost ~]# ip addr add 10.0.0.2/24 dev veth2
[root@localhost ~]# ip link set dev veth2 up
[root@localhost ~]#

OK, in theory this is now equivalent to two directly connected hosts (as if you plugged a cable straight from one PC into another). Let's ping:

# enter net1 and ping net2's address
[root@localhost ~]# ip netns exec net1 bash
[root@localhost ~]# ping 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms
64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=0.069 ms
64 bytes from 10.0.0.2: icmp_seq=3 ttl=64 time=0.055 ms
64 bytes from 10.0.0.2: icmp_seq=4 ttl=64 time=0.065 ms
^C
--- 10.0.0.2 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3001ms
rtt min/avg/max/mdev = 0.055/0.065/0.073/0.010 ms
[root@localhost ~]#

It works. Now let's bring veth2 in net2 down (the equivalent of unplugging the cable on the far PC):

[root@localhost ~]# ip netns exec net2 bash
[root@localhost ~]# ip link set dev veth2 down
[root@localhost ~]# exit
exit
[root@localhost ~]# ip netns exec net1 bash
[root@localhost ~]# ping 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
^C
--- 10.0.0.2 ping statistics ---
5 packets transmitted, 0 received, 100% packet loss, time 3999ms

[root@localhost ~]#

As expected, completely unreachable (of course; we unplugged the cable).

2.3 A Better Option Than Direct Cabling (veth pairs): the bridge

The attentive reader will have spotted a problem: with many containers on one machine that all need to reach each other, do we really connect every pair of namespaces with its own veth pair? For n containers that is n(n-1)/2 "cables", which is absurd.

Look at how the real world solves this: a company subnet has plenty of hosts, and cabling them all to each other would run out of ports immediately. So where do the cables plug in? A switch. The switch delivers each frame to the right port, like a parcel depot; every PC just plugs into it and is done. Here we introduce another Linux virtual network device, the bridge, which you can think of as an L2 switch. However many containers we add, each one connects to the bridge with a veth pair and they can all reach each other. In Docker, that bridge is docker0.
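
What an L2 switch actually does is simple: learn which port each source MAC lives behind, forward known destinations out of that one port, and flood unknown ones. A toy sketch (hypothetical names, just to make the mechanism concrete):

```python
class Bridge:
    """Toy model of an L2 learning switch, which is what a Linux bridge is in essence."""
    def __init__(self, ports):
        self.ports = list(ports)  # e.g. the veth*out host-side devices
        self.fdb = {}             # forwarding database: MAC address -> port

    def receive(self, in_port, src_mac, dst_mac):
        self.fdb[src_mac] = in_port  # learn where src_mac lives
        if dst_mac in self.fdb:      # known unicast: forward out one port
            return [self.fdb[dst_mac]]
        # unknown destination: flood to every port except the ingress one
        return [p for p in self.ports if p != in_port]

br0 = Bridge(["veth3out", "veth4out"])
# First frame from net3 to net4: destination unknown, so it is flooded...
print(br0.receive("veth3out", "mac3", "mac4"))  # ['veth4out']
# ...and the reply is then switched directly, no flooding needed:
print(br0.receive("veth4out", "mac4", "mac3"))  # ['veth3out']
```

This is why adding an nth container costs one veth pair, not n-1 of them: every port hangs off the same forwarding database.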

[root@localhost ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:9b:21:ef brd ff:ff:ff:ff:ff:ff
    inet 192.168.144.128/24 brd 192.168.144.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::2ac9:5d64:5e4b:6619/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:91:d3:be:6c brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:91ff:fed3:be6c/64 scope link
       valid_lft forever preferred_lft forever
5: veth299c707@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether 8a:eb:25:8f:78:f0 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::88eb:25ff:fe8f:78f0/64 scope link
       valid_lft forever preferred_lft forever

Docker creates the docker0 bridge at startup. When a container starts, it creates a veth pair: one end is placed inside the container and named eth0, the other stays on the host, named veth*****, and is attached to the bridge.

We can reproduce this process by hand: create a bridge br0, create two network namespaces net3 and net4, and connect them to the bridge with veth pairs.

# create namespaces net3 and net4
ip netns add net3
ip netns add net4

# create two veth pairs: the veth*in end goes into a namespace, the veth*out end attaches to the bridge
ip link add veth3in type veth peer name veth3out
ip link add veth4in type veth peer name veth4out

# connect net3 and assign an IP
ip link set veth3in netns net3
ip netns exec net3 ip addr add 10.0.0.3/24 dev veth3in
ip netns exec net3 ip link set dev veth3in up

# connect net4 and assign an IP
ip link set veth4in netns net4
ip netns exec net4 ip addr add 10.0.0.4/24 dev veth4in
ip netns exec net4 ip link set dev veth4in up

# optional: verify that ping fails while nothing is attached to a bridge yet

# create the br0 bridge
ip link add name br0 type bridge

# attach the veth*out ends to the bridge
ip link set dev veth3out master br0
ip link set dev veth3out up
ip link set dev veth4out master br0
ip link set dev veth4out up

# the bridge itself is not up yet: check that ping still fails

# bring the bridge up
ip link set br0 up
# with the bridge up, net3 can finally ping net4

What if the host itself wants to reach the namespaces on this bridge? Add another veth pair, put an IP on one end and plug the other into the bridge? That would work, but it is unnecessary. The br0 device you see in ip addr is not really "the bridge itself"; it is just one of the bridge's ports, and we can assign an IP to that port directly to join this L2 network, like so:

[root@localhost ~]# ip addr add 10.0.0.254/24 dev br0
[root@localhost ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.144.2   0.0.0.0         UG    100    0        0 ens33
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 br0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
192.168.144.0   0.0.0.0         255.255.255.0   U     100    0        0 ens33
[root@localhost ~]# ping 10.0.0.4
PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.068 ms
64 bytes from 10.0.0.4: icmp_seq=2 ttl=64 time=0.051 ms
64 bytes from 10.0.0.4: icmp_seq=3 ttl=64 time=0.048 ms
^C
--- 10.0.0.4 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2024ms
rtt min/avg/max/mdev = 0.048/0.055/0.068/0.012 ms
[root@localhost ~]#

This is why, on a host with Docker installed, you see the docker0 bridge carrying the IP 172.17.0.1.

2.4 Recap

At this point we have covered everything that happens when you run a container without a port mapping. To summarize: Docker uses network namespaces, a bridge, and veth pairs to build a simple L2 network. Each namespace gives a container its own isolated network stack and IP; connecting each one to the docker0 bridge through a veth pair gives containers connectivity with each other and with the host. It really is that simple.

Docker's port mapping is implemented with netfilter, the kernel's packet-processing framework. netfilter can act on packets at several points (before routing, after routing, before delivery to user space, after leaving user space) and can, among other things, intercept them or rewrite source and destination addresses. In user space it surfaces as the iptables tool and the firewall. People usually say Docker's port mapping is "done with iptables"; strictly speaking iptables is just the user-space tool that installs rules and netfilter does the actual work, but in practice the distinction rarely matters. iptables itself is too large a topic to cover here; if you are rusty, I strongly recommend this series: [https://www.zsythink.net/archives/tag/iptables/].
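
The hook points mentioned above have a fixed traversal order that depends on the routing decision. A simplified sketch (this is a toy model of the traversal, not real kernel code):

```python
def hook_path(dst_is_local, locally_generated=False):
    """Which netfilter hooks a packet traverses (simplified sketch)."""
    if locally_generated:
        # traffic created by a local process
        return ["OUTPUT", "POSTROUTING"]
    if dst_is_local:
        # inbound traffic delivered to a local process
        return ["PREROUTING", "INPUT"]
    # inbound traffic routed onwards to another interface
    return ["PREROUTING", "FORWARD", "POSTROUTING"]

# A packet DNAT'ed in PREROUTING to a container IP is no longer "local",
# so after the routing decision it takes the FORWARD path towards docker0:
print(hook_path(dst_is_local=False))  # ['PREROUTING', 'FORWARD', 'POSTROUTING']
```

Keeping this order in mind makes the nat table output below much easier to read: DNAT happens in PREROUTING, before routing; MASQUERADE happens in POSTROUTING, after it.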

The rest of this article assumes reasonable familiarity with iptables rules.

3.1 How Packets Get Into and Out of a Container

First, start a container that exposes a service on port 8000, mapped to host port 8000:

# any other service works too, as long as a port is mapped
docker run -d -p 8000:8000 centos:7 python -m SimpleHTTPServer

Once the port is confirmed open and reachable from outside, let's look at what Docker has configured in iptables:

# inspect the nat table (it handles forwarding and address translation)
[root@localhost ~]# iptables -t nat -nvL
Chain PREROUTING (policy ACCEPT 32 packets, 4465 bytes)
 pkts bytes target     prot opt in     out     source               destination
    2   112 DOCKER     all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT 25 packets, 3913 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain OUTPUT (policy ACCEPT 184 packets, 13926 bytes)
 pkts bytes target     prot opt in     out     source               destination
   10   740 DOCKER     all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT 185 packets, 14010 bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 MASQUERADE  all  --  *      docker0  0.0.0.0/0            0.0.0.0/0            ADDRTYPE match src-type LOCAL
    0     0 MASQUERADE  all  --  *      !docker0  172.17.0.0/16        0.0.0.0/0
    0     0 MASQUERADE  tcp  --  *      *       172.17.0.2           172.17.0.2           tcp dpt:8000

Chain DOCKER (2 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:8000 to:172.17.0.2:8000
[root@localhost ~]#

Notice the custom DOCKER chain referenced from PREROUTING (before routing). It contains one rule, which reads: "for any TCP packet whose destination port is 8000, rewrite the destination address to 172.17.0.2 and the destination port to 8000". So a packet src_ip:src_port->dst_ip:8000 becomes src_ip:src_port->172.17.0.2:8000 before the routing decision. Now look at the routing table:

[root@localhost ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.144.2   0.0.0.0         UG    100    0        0 ens33
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 br0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
192.168.144.0   0.0.0.0         255.255.255.0   U     100    0        0 ens33

The rewritten packet matches the 172.17.0.0 entry, so after the routing decision it is forwarded out of Iface docker0, the bridge's port on the host, and from there travels through the veth pair into the container.
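
The routing decision itself is longest-prefix match: the most specific matching route wins. A sketch using the table above (the route list mirrors the `route -n` output; the lookup logic is a simplified model):

```python
import ipaddress

# the host routing table from `route -n`, as (network, interface) pairs
routes = [
    (ipaddress.ip_network("0.0.0.0/0"), "ens33"),  # default route
    (ipaddress.ip_network("10.0.0.0/24"), "br0"),
    (ipaddress.ip_network("172.17.0.0/16"), "docker0"),
    (ipaddress.ip_network("192.168.144.0/24"), "ens33"),
]

def lookup(dst):
    """Longest-prefix match: among all routes containing dst, pick the most specific."""
    addr = ipaddress.ip_address(dst)
    matches = [(net, dev) for net, dev in routes if addr in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

# After DNAT the destination is 172.17.0.2, so the kernel picks docker0:
print(lookup("172.17.0.2"))  # docker0
print(lookup("8.8.8.8"))     # ens33 (only the default route matches)
```

Both 0.0.0.0/0 and 172.17.0.0/16 contain 172.17.0.2, but /16 is longer than /0, so the packet goes to docker0 rather than out the physical NIC.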

3.2 Verifying with Packet Captures

Talk is cheap; seeing is believing. Unless your project ships tomorrow, anyone with enough spare time to read this article has enough time to capture a few packets themselves.

Capture on the physical NIC and on docker0 simultaneously:

[root@localhost ~]# tcpdump -i ens33 -n -vvv tcp port 8000
tcpdump: listening on ens33, link-type EN10MB (Ethernet), capture size 262144 bytes
00:31:38.768744 IP (tos 0x0, ttl 64, id 28655, offset 0, flags [DF], proto TCP (6), length 60)
    192.168.144.129.47106 > 192.168.144.128.irdmi: Flags [S], cksum 0x3e4b (correct), seq 2908588551, win 29200, options [mss 1460,sackOK,TS val 12707304 ecr 0,nop,wscale 7], length 0
00:31:38.768941 IP (tos 0x0, ttl 63, id 0, offset 0, flags [DF], proto TCP (6), length 60)
    192.168.144.128.irdmi > 192.168.144.129.47106: Flags [S.], cksum 0xa281 (incorrect -> 0x90e0), seq 2883195023, ack 2908588552, win 28960, options [mss 1460,sackOK,TS val 12710174 ecr 12707304,nop,wscale 7], length 0
00:31:38.772124 IP (tos 0x0, ttl 64, id 28656, offset 0, flags [DF], proto TCP (6), length 52)
    192.168.144.129.47106 > 192.168.144.128.irdmi: Flags [.], cksum 0x2fe5 (correct), seq 1, ack 1, win 229, options [nop,nop,TS val 12707307 ecr 12710174], length 0
00:31:38.772185 IP (tos 0x0, ttl 64, id 28657, offset 0, flags [DF], proto TCP (6), length 136)
    192.168.144.129.47106 > 192.168.144.128.irdmi: Flags [P.], cksum 0x0f8c (correct), seq 1:85, ack 1, win 229, options [nop,nop,TS val 12707307 ecr 12710174], length 84
00:31:38.772281 IP (tos 0x0, ttl 63, id 57777, offset 0, flags [DF], proto TCP (6), length 52)
    192.168.144.128.irdmi > 192.168.144.129.47106: Flags [.], cksum 0xa279 (incorrect -> 0x2f90), seq 1, ack 85, win 227, options [nop,nop,TS val 12710177 ecr 12707307], length 0
00:31:38.779728 IP (tos 0x0, ttl 63, id 57778, offset 0, flags [DF], proto TCP (6), length 69)
    192.168.144.128.irdmi > 192.168.144.129.47106: Flags [P.], cksum 0xa28a (incorrect -> 0x6fab), seq 1:18, ack 85, win 227, options [nop,nop,TS val 12710184 ecr 12707307], length 17
00:31:38.780273 IP (tos 0x0, ttl 63, id 57779, offset 0, flags [DF], proto TCP (6), length 990)
    192.168.144.128.irdmi > 192.168.144.129.47106: Flags [FP.], cksum 0xa623 (incorrect -> 0x8580), seq 18:956, ack 85, win 227, options [nop,nop,TS val 12710185 ecr 12707307], length 938
00:31:38.780359 IP (tos 0x0, ttl 64, id 28658, offset 0, flags [DF], proto TCP (6), length 52)
    192.168.144.129.47106 > 192.168.144.128.irdmi: Flags [.], cksum 0x2f6d (correct), seq 85, ack 18, win 229, options [nop,nop,TS val 12707316 ecr 12710184], length 0
00:31:38.780566 IP (tos 0x0, ttl 64, id 28659, offset 0, flags [DF], proto TCP (6), length 52)
    192.168.144.129.47106 > 192.168.144.128.irdmi: Flags [.], cksum 0x2bb3 (correct), seq 85, ack 957, win 243, options [nop,nop,TS val 12707316 ecr 12710185], length 0
00:31:38.780806 IP (tos 0x0, ttl 64, id 28660, offset 0, flags [DF], proto TCP (6), length 52)
    192.168.144.129.47106 > 192.168.144.128.irdmi: Flags [F.], cksum 0x2bb2 (correct), seq 85, ack 957, win 243, options [nop,nop,TS val 12707316 ecr 12710185], length 0
00:31:38.780864 IP (tos 0x0, ttl 63, id 55149, offset 0, flags [DF], proto TCP (6), length 52)
    192.168.144.128.irdmi > 192.168.144.129.47106: Flags [.], cksum 0x2bc1 (correct), seq 957, ack 86, win 227, options [nop,nop,TS val 12710186 ecr 12707316], length 0
^C


[root@localhost ~]# tcpdump -i docker0 -n -vvv tcp port 8000
tcpdump: listening on docker0, link-type EN10MB (Ethernet), capture size 262144 bytes
00:31:38.768858 IP (tos 0x0, ttl 63, id 28655, offset 0, flags [DF], proto TCP (6), length 60)
    192.168.144.129.47106 > 172.17.0.2.irdmi: Flags [S], cksum 0xe360 (correct), seq 2908588551, win 29200, options [mss 1460,sackOK,TS val 12707304 ecr 0,nop,wscale 7], length 0
00:31:38.768929 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 60)
    172.17.0.2.irdmi > 192.168.144.129.47106: Flags [S.], cksum 0xfd6b (incorrect -> 0x35f6), seq 2883195023, ack 2908588552, win 28960, options [mss 1460,sackOK,TS val 12710174 ecr 12707304,nop,wscale 7], length 0
00:31:38.772152 IP (tos 0x0, ttl 63, id 28656, offset 0, flags [DF], proto TCP (6), length 52)
    192.168.144.129.47106 > 172.17.0.2.irdmi: Flags [.], cksum 0xd4fa (correct), seq 1, ack 1, win 229, options [nop,nop,TS val 12707307 ecr 12710174], length 0
00:31:38.772190 IP (tos 0x0, ttl 63, id 28657, offset 0, flags [DF], proto TCP (6), length 136)
    192.168.144.129.47106 > 172.17.0.2.irdmi: Flags [P.], cksum 0xb4a1 (correct), seq 1:85, ack 1, win 229, options [nop,nop,TS val 12707307 ecr 12710174], length 84
00:31:38.772270 IP (tos 0x0, ttl 64, id 57777, offset 0, flags [DF], proto TCP (6), length 52)
    172.17.0.2.irdmi > 192.168.144.129.47106: Flags [.], cksum 0xfd63 (incorrect -> 0xd4a5), seq 1, ack 85, win 227, options [nop,nop,TS val 12710177 ecr 12707307], length 0
00:31:38.779674 IP (tos 0x0, ttl 64, id 57778, offset 0, flags [DF], proto TCP (6), length 69)
    172.17.0.2.irdmi > 192.168.144.129.47106: Flags [P.], cksum 0xfd74 (incorrect -> 0x14c1), seq 1:18, ack 85, win 227, options [nop,nop,TS val 12710184 ecr 12707307], length 17
00:31:38.780084 IP (tos 0x0, ttl 64, id 57779, offset 0, flags [DF], proto TCP (6), length 990)
    172.17.0.2.irdmi > 192.168.144.129.47106: Flags [FP.], cksum 0x010e (incorrect -> 0x2a96), seq 18:956, ack 85, win 227, options [nop,nop,TS val 12710185 ecr 12707307], length 938
00:31:38.780389 IP (tos 0x0, ttl 63, id 28658, offset 0, flags [DF], proto TCP (6), length 52)
    192.168.144.129.47106 > 172.17.0.2.irdmi: Flags [.], cksum 0xd482 (correct), seq 85, ack 18, win 229, options [nop,nop,TS val 12707316 ecr 12710184], length 0
00:31:38.780578 IP (tos 0x0, ttl 63, id 28659, offset 0, flags [DF], proto TCP (6), length 52)
    192.168.144.129.47106 > 172.17.0.2.irdmi: Flags [.], cksum 0xd0c8 (correct), seq 85, ack 957, win 243, options [nop,nop,TS val 12707316 ecr 12710185], length 0
00:31:38.780818 IP (tos 0x0, ttl 63, id 28660, offset 0, flags [DF], proto TCP (6), length 52)
    192.168.144.129.47106 > 172.17.0.2.irdmi: Flags [F.], cksum 0xd0c7 (correct), seq 85, ack 957, win 243, options [nop,nop,TS val 12707316 ecr 12710185], length 0
00:31:38.780847 IP (tos 0x0, ttl 64, id 55149, offset 0, flags [DF], proto TCP (6), length 52)
    172.17.0.2.irdmi > 192.168.144.129.47106: Flags [.], cksum 0xd0d6 (correct), seq 957, ack 86, win 227, options [nop,nop,TS val 12710186 ecr 12707316], length 0
^C
11 packets captured
11 packets received by filter
0 packets dropped by kernel

You can clearly see the destination address 192.168.144.128:8000 being rewritten to 172.17.0.2:8000 before routing, and on the return path the source address of the reply being automatically rewritten back to 192.168.144.128:8000.
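
The rewrite seen in the captures, DNAT on the way in and the reverse translation on the reply, can be sketched as follows. This is a toy model of what netfilter's connection tracking does; function names are hypothetical:

```python
def dnat(pkt, match_dport, new_dst, new_dport, conntrack):
    """Rewrite the destination of a matching packet, remembering the original for replies."""
    src, sport, dst, dport = pkt
    if dport == match_dport:
        # record the translation so the reply can be reversed (conntrack's job)
        conntrack[(new_dst, new_dport, src, sport)] = (dst, dport)
        return (src, sport, new_dst, new_dport)
    return pkt

def un_dnat(reply, conntrack):
    """On the way back, restore the original destination as the reply's source."""
    src, sport, dst, dport = reply
    orig = conntrack.get((src, sport, dst, dport))
    if orig:
        return (orig[0], orig[1], dst, dport)
    return reply

ct = {}
# the inbound SYN seen on ens33: 192.168.144.129:47106 -> 192.168.144.128:8000
inbound = ("192.168.144.129", 47106, "192.168.144.128", 8000)
print(dnat(inbound, 8000, "172.17.0.2", 8000, ct))
# -> ('192.168.144.129', 47106, '172.17.0.2', 8000), as seen on docker0

# the container's reply: 172.17.0.2:8000 -> 192.168.144.129:47106
reply = ("172.17.0.2", 8000, "192.168.144.129", 47106)
print(un_dnat(reply, ct))
# -> ('192.168.144.128', 8000, '192.168.144.129', 47106), as seen on ens33
```

These two prints reproduce exactly the address pairs observed in the two tcpdump sessions above.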

3.3 Adding a Port Mapping by Hand

Since Docker itself implements the mapping through iptables, we can of course achieve the same effect by typing the rule ourselves. Let's map host port 9000 to the same container:

iptables -t nat -A DOCKER -p tcp --dport 9000 -j DNAT --to-destination 172.17.0.2:8000


# it works, smooth as silk
[root@localhost ~]# curl 192.168.144.128:9000
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 3.2 Final//EN"><html>
<title>Directory listing for /</title>
<body>
<h2>Directory listing for /</h2>
<hr>
<ul>
<li><a href=".dockerenv">.dockerenv</a>
<li><a href="anaconda-post.log">anaconda-post.log</a>
<li><a href="bin/">bin@</a>
<li><a href="dev/">dev/</a>
<li><a href="etc/">etc/</a>
<li><a href="home/">home/</a>
<li><a href="lib/">lib@</a>
<li><a href="lib64/">lib64@</a>
<li><a href="media/">media/</a>
<li><a href="mnt/">mnt/</a>
<li><a href="opt/">opt/</a>
<li><a href="proc/">proc/</a>
<li><a href="root/">root/</a>
<li><a href="run/">run/</a>
<li><a href="sbin/">sbin@</a>
<li><a href="srv/">srv/</a>
<li><a href="sys/">sys/</a>
<li><a href="tmp/">tmp/</a>
<li><a href="usr/">usr/</a>
<li><a href="var/">var/</a>
</ul>
<hr>
</body>
</html>

3.4 Recap

Docker's forwarding setup is fairly simple: read the relevant rules patiently and you can trace exactly where the traffic goes. How SNAT is applied to requests originating inside containers, and why the local docker-proxy process exists at all, I'll leave for you to work out on your own.