I. Preface
II. Implementation Principle
III. Device Configuration
1. Server configuration
1.1. Configure NAT rules and allow the relevant traffic
1.1.1. Install iptables
1.1.2. Configure NAT rules
1.2. Configure routes
2. Configure the network devices
2.1. Basic network configuration
2.2. Configure static ARP
3. Testing
3.1. Ping test
3.2. Traffic test
I. Preface
I recently wanted to run traffic tests against a network device, but only had a single server at hand, so I decided to do it with one machine: one NIC acts as the iperf server and the other as the client. The traffic leaves the client NIC, passes through the device under test, and returns to the server through the server-side NIC. The setup is as follows:
II. Implementation Principle
For one server to act as both the iperf server and client, each iperf instance must be bound to a NIC with the "-B" option, e.g. `iperf3 -B 192.168.10.1 -s`.
The server's two NICs are configured with IPs in different subnets, and source and destination NAT is applied when one NIC talks to the other; each side always targets the other side's post-SNAT address. For example, eth1's IP is 192.168.10.1 and its post-SNAT IP is 172.16.10.1; eth2's IP is 192.168.20.1 and its post-SNAT IP is 172.16.20.1. When eth2 accesses eth1, the packet starts out with SIP 192.168.20.1 and DIP 172.16.10.1 (eth1's post-SNAT address). After SNAT, the SIP becomes 172.16.20.1 while the DIP remains 172.16.10.1. When the packet reaches eth1, DNAT rewrites the DIP 172.16.10.1 back to eth1's real IP 192.168.10.1. This one-way eth2-to-eth1 flow is shown in the figure below; the return traffic from eth1 works on the same principle.
III. Device Configuration
The lab topology and IP addressing are shown in the figure above; the configuration is as follows:
1. Server configuration
The server runs CentOS 7.2 with the following NICs:
eth1  IP: 192.168.10.1/24  MAC: 00:50:00:00:01:01  GATEWAY: 192.168.10.254
eth2  IP: 192.168.20.1/24  MAC: 00:50:00:00:01:02  GATEWAY: 192.168.20.254
1.1. Configure NAT rules and allow the relevant traffic
CentOS 7's default firewall is firewalld. Either firewalld or iptables can provide the NAT and traffic control; use one or the other. This lab uses iptables for the NAT function.
Because a minimal CentOS 7 install ships only firewalld and does not include iptables, you need to install iptables and disable firewalld if you want iptables to handle the NAT and packet filtering.
Note: NAT also works without installing the iptables service, but the NAT configuration is lost after a reboot.
1.1.1. Install iptables
yum install -y iptables iptables-services
systemctl stop firewalld
systemctl disable firewalld
systemctl start iptables.service
systemctl enable iptables.service
1.1.2. Configure NAT rules
Apply SNAT to packets sent toward the other NIC:
iptables -t nat -A POSTROUTING -s 192.168.10.1 -d 172.16.20.1 -j SNAT --to-source 172.16.10.1
iptables -t nat -A POSTROUTING -s 192.168.20.1 -d 172.16.10.1 -j SNAT --to-source 172.16.20.1
When a packet from the other NIC arrives, apply DNAT to restore the DIP to the NIC's real IP:
iptables -t nat -A PREROUTING -d 172.16.10.1 -j DNAT --to-destination 192.168.10.1
iptables -t nat -A PREROUTING -d 172.16.20.1 -j DNAT --to-destination 192.168.20.1
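Taken together, the SNAT and DNAT rules rewrite the address tuple in two steps. The following snippet is purely illustrative (no iptables involved); it just echoes the SIP/DIP pair of the eth2-to-eth1 direction at each stage:

```shell
# Walk through the header rewrites for the eth2 -> eth1 direction.
sip=192.168.20.1; dip=172.16.10.1   # client targets eth1's post-SNAT address
echo "1. leaves eth2 (pre-SNAT):  $sip -> $dip"
sip=172.16.20.1                     # POSTROUTING SNAT on the server
echo "2. on the wire:             $sip -> $dip"
dip=192.168.10.1                    # PREROUTING DNAT back to eth1's real IP
echo "3. delivered to eth1:       $sip -> $dip"
```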
Allow the iperf3 traffic through. Since the test runs in an isolated environment with no Internet access, the simplest approach is to accept everything:
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT
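If the server were not in an isolated lab, a tighter alternative to wide-open policies would be to accept only the test traffic. This is just a sketch, assuming iperf3's default port 5201 and the ICMP used by the ping test:

```shell
# Accept only the iperf3 control/data traffic and ICMP, instead of everything.
iptables -A INPUT -p tcp --dport 5201 -j ACCEPT
iptables -A INPUT -p udp --dport 5201 -j ACCEPT
iptables -A INPUT -p icmp -j ACCEPT
```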
Save the configuration and make it take effect:
service iptables save
systemctl restart iptables
1.2. Configure routes
vim /etc/sysconfig/network-scripts/route-eth1
172.16.10.254/32 dev eth1
172.16.20.1/32 via 172.16.10.254 dev eth1
vim /etc/sysconfig/network-scripts/route-eth2
172.16.20.254/32 dev eth2
172.16.10.1/32 via 172.16.20.254 dev eth2
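Once these routes are active, the kernel's path selection can be checked with ip route get (a verification sketch; the expected next hops follow the addressing above):

```shell
# Each post-SNAT peer address should resolve via the opposite NIC's gateway.
ip route get 172.16.10.1   # expected: via 172.16.20.254 dev eth2
ip route get 172.16.20.1   # expected: via 172.16.10.254 dev eth1
```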
Restart the network service:
systemctl restart network
2. Configure the network devices
This lab uses two VMX960s as the server's gateways, with static routes between them.
2.1. Basic network configuration
VMX960_A:
set interfaces ge-0/0/0 unit 0 family inet address 192.168.10.254/24
set interfaces ge-0/0/0 unit 0 family inet address 172.16.10.254/24
set interfaces ge-0/0/1 unit 0 family inet address 192.168.30.1/24
set routing-options static route 192.168.20.0/24 next-hop 192.168.30.2
set routing-options static route 172.16.20.0/24 next-hop 192.168.30.2
VMX960_B:
set interfaces ge-0/0/0 unit 0 family inet address 192.168.20.254/24
set interfaces ge-0/0/0 unit 0 family inet address 172.16.20.254/24
set interfaces ge-0/0/1 unit 0 family inet address 192.168.30.2/24
set routing-options static route 172.16.10.0/24 next-hop 192.168.30.1
set routing-options static route 192.168.10.0/24 next-hop 192.168.30.1
2.2. Configure static ARP
Because the server performs SNAT, 172.16.10.1 and 172.16.20.1 exist only inside the server. They can be thought of as virtual IPs of eth1 and eth2, but they are not bound to the physical NICs, so the NICs will not answer the gateways' ARP requests for them. Moreover, as figure 1 shows, in the network outside the server the packets carry the post-NAT 172.16.x addresses as source and destination, so static ARP entries must be configured on the gateways.
VMX960_A:
set interfaces ge-0/0/0 unit 0 family inet address 172.16.10.254/24 arp 172.16.10.1 mac 00:50:00:00:01:01
VMX960_B:
set interfaces ge-0/0/0 unit 0 family inet address 172.16.20.254/24 arp 172.16.20.1 mac 00:50:00:00:01:02
3. Testing
3.1. Ping test
[root@server ~]# ping -I eth2 172.16.10.1
PING 172.16.10.1 (172.16.10.1) from 192.168.20.1 eth2: 56(84) bytes of data.
64 bytes from 172.16.10.1: icmp_seq=1 ttl=62 time=2.69 ms
64 bytes from 172.16.10.1: icmp_seq=2 ttl=62 time=2.11 ms
64 bytes from 172.16.10.1: icmp_seq=3 ttl=62 time=1.53 ms
[root@server ~]# ping -I eth1 172.16.20.1
PING 172.16.20.1 (172.16.20.1) from 192.168.10.1 eth1: 56(84) bytes of data.
64 bytes from 172.16.20.1: icmp_seq=1 ttl=62 time=3.26 ms
64 bytes from 172.16.20.1: icmp_seq=2 ttl=62 time=10.8 ms
64 bytes from 172.16.20.1: icmp_seq=3 ttl=62 time=5.41 ms
64 bytes from 172.16.20.1: icmp_seq=4 ttl=62 time=9.94 ms
3.2. Traffic test
Server:
[root@server ~]# iperf3 -B 192.168.10.1 -s
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 172.16.20.1, port 46934
[  5] local 192.168.10.1 port 5201 connected to 172.16.20.1 port 34578
[ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  5]   0.00-1.00   sec   116 KBytes   949 Kbits/sec  1.178 ms  0/82 (0%)
[  5]   1.00-2.00   sec   129 KBytes  1.05 Mbits/sec  0.673 ms  0/91 (0%)
[  5]   2.00-3.00   sec   127 KBytes  1.04 Mbits/sec  1.013 ms  0/90 (0%)
[  5]   3.00-4.00   sec   129 KBytes  1.05 Mbits/sec  1.370 ms  0/91 (0%)
[  5]   4.00-5.00   sec   127 KBytes  1.04 Mbits/sec  0.818 ms  0/90 (0%)
[  5]   5.00-6.00   sec   116 KBytes   948 Kbits/sec  1.061 ms  0/82 (0%)
[  5]   6.00-7.00   sec   140 KBytes  1.15 Mbits/sec  1.055 ms  0/99 (0%)
[  5]   7.00-8.00   sec   129 KBytes  1.05 Mbits/sec  0.593 ms  0/91 (0%)
[  5]   8.00-9.00   sec   127 KBytes  1.04 Mbits/sec  0.657 ms  0/90 (0%)
[  5]   9.00-10.00  sec   129 KBytes  1.05 Mbits/sec  1.178 ms  0/91 (0%)
[  5]  10.00-11.00  sec   127 KBytes  1.04 Mbits/sec  0.673 ms  0/90 (0%)
[  5]  11.00-12.00  sec   129 KBytes  1.05 Mbits/sec  0.593 ms  0/91 (0%)
[  5]  12.00-13.00  sec   127 KBytes  1.04 Mbits/sec  0.739 ms  0/90 (0%)
[  5]  13.00-14.00  sec   129 KBytes  1.05 Mbits/sec  1.023 ms  0/91 (0%)
[  5]  14.00-15.00  sec   127 KBytes  1.04 Mbits/sec  1.055 ms  0/90 (0%)
[  5]  15.00-16.00  sec   129 KBytes  1.05 Mbits/sec  1.430 ms  0/91 (0%)
[  5]  16.00-17.00  sec   127 KBytes  1.04 Mbits/sec  1.072 ms  0/90 (0%)
[  5]  17.00-18.00  sec   129 KBytes  1.05 Mbits/sec  1.207 ms  0/91 (0%)
[  5]  18.00-19.00  sec   127 KBytes  1.04 Mbits/sec  1.257 ms  0/90 (0%)
[  5]  19.00-20.00  sec   129 KBytes  1.05 Mbits/sec  1.252 ms  0/91 (0%)
[  5]  20.00-20.04  sec  0.00 Bytes   0.00 bits/sec   1.252 ms  0/0 (0%)
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  5]   0.00-20.04  sec  0.00 Bytes   0.00 bits/sec   1.252 ms  0/1802 (0%)
Client:
[root@server ~]# iperf3 -B 192.168.20.1 -c 172.16.10.1 -i 5 -u -t 20
Connecting to host 172.16.10.1, port 5201
[  4] local 192.168.20.1 port 34578 connected to 172.16.10.1 port 5201
[ ID] Interval           Transfer     Bandwidth       Total Datagrams
[  4]   0.00-5.00   sec   628 KBytes  1.03 Mbits/sec  444
[  4]   5.00-10.00  sec   641 KBytes  1.05 Mbits/sec  453
[  4]  10.00-15.00  sec   639 KBytes  1.05 Mbits/sec  452
[  4]  15.00-20.00  sec   641 KBytes  1.05 Mbits/sec  453
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  4]   0.00-20.00  sec  2.49 MBytes  1.04 Mbits/sec  1.252 ms  0/1802 (0%)
[  4] Sent 1802 datagrams
iperf Done.
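The roughly 1 Mbit/s above is simply iperf3's default UDP rate. Higher rates or a TCP test can be requested with standard iperf3 options (a sketch, not run in this lab):

```shell
# UDP at a requested 100 Mbit/s:
iperf3 -B 192.168.20.1 -c 172.16.10.1 -u -b 100M -t 20
# TCP with 4 parallel streams:
iperf3 -B 192.168.20.1 -c 172.16.10.1 -P 4 -t 20
```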
The test succeeded: a single server can use two NICs to act as the iperf3 server and client at the same time.



