
Wireshark Skills Series 4 of 7

16 Dec

In some of our earlier HTTP requests there was something interesting going on that didn’t make the cut to be explained. The magic of the network stack was hard at work, furiously chopping up data at one end of the pipe and reassembling it at the other. Packets can be all different sizes, but if they exceed a limit they will get sliced and diced.

Increasing the size of the packet reduces the amount of overhead needed to transfer your data. Both ends of the pipe and everything in between have to support the packet size that is being sent. If a switch or router doesn’t support it, the connection may fail. If it is supported but not properly configured, you could run into a silent problem where an intermediate device is fragmenting the packets. It is best to enable jumbo packets at the intermediate switches and communications devices first, then at the endpoints. Also, some applications like Microsoft SQL Server need to be configured to send larger packets.
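If you want to check or change the MTU from the Windows command line, netsh can do it. This is just a sketch; the adapter name "Ethernet" is an assumption (substitute your own interface name), and the larger size only works if the driver and switch path actually accept it:

netsh interface ipv4 show subinterfaces
netsh interface ipv4 set subinterface "Ethernet" mtu=9000 store=persistent

The first command lists the current MTU per interface; the second raises it persistently. On the SQL Server side the relevant knob is the 'network packet size' option in sp_configure.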

To illustrate this process we are going to use the ping tool. Ping uses ICMP, which sits on top of IP and Ethernet. Ethernet is the layer where the MTU setting resides. Before we change the MTU, let’s take a look at a large ping getting sliced up like the HTTP traffic we saw earlier. This command will continuously ping with 10,000 bytes of data. For this test I am running Wireshark on a Windows 2012 R2 server. (VHD 180 day trial: http://technet.microsoft.com/en-US/evalcenter/dn205286.aspx)

ping 10.10.10.108 -l 10000 -t
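As a side note, ping’s -f switch sets the Don’t Fragment bit, which makes it handy for checking the largest payload that fits in a single frame. A rough sketch at the default 1,500-byte MTU (1472 = 1500 minus the 20-byte IP header and the 8-byte ICMP header):

ping 10.10.10.108 -f -l 1472

Anything larger with -f set should come back with a “Packet needs to be fragmented but DF set” error instead of getting sliced up.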

[Image: Segmented_default]

If you squint really hard you can see 7 packets coming into the server and 7 packets going out of the server. The Ethernet and IP headers add 34 bytes to every one of those packets, so the overhead can add up quickly. If you look at the length column, you will see no packet is over 1514 bytes in size.
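The fragment count checks out if you do the math (assuming no IP options):

10,000 bytes of ping data + 8-byte ICMP header = 10,008 bytes of IP payload
1,500-byte MTU - 20-byte IP header = 1,480 bytes of payload per fragment
10,008 / 1,480 = 6.8, so 7 fragments in each direction

If you want to isolate just the fragments in Wireshark, a display filter like ip.flags.mf == 1 || ip.frag_offset > 0 should do it.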

Now we can try to make this more efficient. I’ll have to configure the vSwitch and the virtual machine NIC to enable jumbo packets.

[Image: configure_jumbo_1]

[Image: configure_jumbo_2]

[Image: configure_jumbo_3]
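The screenshots above show the GUI route. On the guest you can usually check and flip the same NIC setting from PowerShell with the NetAdapter cmdlets; this is only a sketch, since the exact display name and allowed values depend on the NIC driver:

Get-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "Jumbo Packet"
Set-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "Jumbo Packet" -DisplayValue "9014 Bytes"

Remember the vSwitch has to be raised as well or the oversized frames won’t make it through.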

Now if we re-try that ping we should notice fewer packets.

[Image: jumbo_packet_confirmed]

Success, right? Well, yes; in this case we made the process 340 bytes more efficient. However, since I enabled jumbo packets I broke other things, most notably my blogging software :]
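That 340 comes straight out of the header math; a rough sketch assuming a 9,000-byte MTU, so the 10,008-byte payload now fits in 2 fragments instead of 7:

before: 7 fragments each way = 14 packets per round trip
after: 2 fragments each way = 4 packets per round trip
10 fewer packets x 34 bytes of Ethernet + IP headers = 340 bytes saved per ping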
