The Promise and Peril of Jumbo Frames

Jumbo frames are fan-friggin-tastic for clustering. I’m speaking of the now ancient kind where you have shared SCSI drives and crossover cables. I’ll assume everyone else has moved out of the 1990s.

All the other drawbacks noted in the comments hold true for home networking. If you don’t have an internal network specifically for data transfer, you’re going to find edge cases all over the place.

Assuming that many of these fast home networks are using FiOS, I’ll point out that FiOS requires a 1492 MTU.

@Bob From what I have seen, IPv6 is potentially a bigger problem than IPv4, because where an IPv4 router may see that a packet is too large and fragment it, IPv6 leaves fragmentation to the end devices.

What an apropos article, I’m setting up an iSCSI SAN this week (with HBAs and jumbo frames, of course).

In addition to these issues, large packets can also hurt latency for gaming and voice-over-IP applications.
That’s why you should use a dedicated network. The problem isn’t the large packets themselves, it’s how they interact with the smaller, more numerous real-time packets when traversing the network equipment.

While jumbo frames are very useful when transferring large files, they are basically incompatible with networks that require a lot of low latency packets.

The upcoming IEEE 802.1Qav standard for Audio/Video Bridging specifically disallows jumbo frames, since a single jumbo frame would starve the high-bandwidth, low-latency streams and also induce unacceptable jitter in the packet timing.

–jeffk++

I’d heard of this while browsing the Drobo app store where there’s a Jumbo Frames app, but hadn’t investigated what it was exactly. Thanks for the writeup!

I’d be interested to see what would happen to your internet connection throughput if you enable jumbo frames. My understanding is that packet fragmentation is seriously bad news for performance, if you’re trying to send through anything that doesn’t support the large packet size.

Another nay here; I gave up on jumbos.

  1. One switch I use lies through its teeth, and handles jumbos badly.

  2. You WILL see a CRC failure every now and then with 9k frames. May you be a lucky b*stard and have it happen only in the middle of a pr0n you don’t care about.

  3. If any of your gigabit NICs sit in a PCI slot, rather than PCIe or motherboard PCIe, fugghedaboutit. You won’t get the boost, and CPU usage will go up.

  4. Some of the latest drivers (are you listening, Marvell?) aren’t worth spitting at.

  5. If none of the above dissuades you from chasing a 20-30% speed boost that comes bundled with pain, go ahead, brave man.

-chicken

@Allen That’s the benefit of how IPv6 handles it, though. No fragmentation, no messing with frame sizes. The endpoints discover the maximum MTU for their transmission path and use it without having to configure anything. :wink:
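
To make that concrete, here is a minimal sketch (my own illustration, not something from the thread) of asking the Linux kernel what path MTU it has discovered for a connection. It assumes Linux and a host reachable over IPv6, and uses the numeric option values from the kernel headers because not every Python build exposes them as socket constants:

```python
import socket

# Option values from <linux/in6.h>; they may not exist as socket.* constants.
IPV6_MTU_DISCOVER = 23   # control path MTU discovery on this socket
IPV6_PMTUDISC_DO = 2     # always use "don't fragment" semantics
IPV6_MTU = 24            # read back the kernel's cached path MTU

def path_mtu(host: str, port: int = 80) -> int:
    """Connect over IPv6 and report the kernel's current path MTU estimate."""
    with socket.socket(socket.AF_INET6, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.IPPROTO_IPV6, IPV6_MTU_DISCOVER, IPV6_PMTUDISC_DO)
        s.connect((host, port))
        return s.getsockopt(socket.IPPROTO_IPV6, IPV6_MTU)

print(path_mtu("example.com"))   # e.g. 1500 on plain Ethernet, 1492 behind PPPoE
```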

Jeff, was this 20% speed boost worth the time you invested in researching the issue and reconfiguring these settings?

I think I will get these jumbo frames (or whatever) a couple of years from now without spending any of my time on it, just because I will be installing a new OS or something like that.

Something interesting I learned (and I still believe it to be true, but I wouldn’t be surprised if modern network tech can negate it):

Minimum frame length sets the maximum network segment length.

This is to prevent two distant stations from transmitting on the same network segment, each writing a complete frame without ever hearing a collision, and then having the two frames collide in the middle of the segment.

(min frame length / network speed, e.g. 100 Mbps) × (signal propagation speed) / 2 ≈ maximum segment length
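
As a quick sanity check of that relation, here is a back-of-the-envelope calculation (mine, not the commenter’s) using the classic 64-byte minimum Ethernet frame and an assumed propagation speed of roughly 2×10^8 m/s in copper:

```python
# Worked numbers for the CSMA/CD constraint (half-duplex, shared-medium
# Ethernet only): the minimum frame must still be on the wire when a collision
# at the far end propagates back, i.e. min_frame_time >= 2 * one_way_delay.
MIN_FRAME_BITS = 64 * 8      # 512-bit minimum Ethernet frame
PROPAGATION = 2e8            # metres per second, assumed (~0.66 c in copper)

for rate in (10e6, 100e6, 1e9):
    slot_time = MIN_FRAME_BITS / rate            # seconds the frame occupies the wire
    max_length = slot_time * PROPAGATION / 2     # metres, ignoring repeater delays
    print(f"{rate / 1e6:>5.0f} Mbit/s -> ~{max_length:,.0f} m")
# ~5,120 m at 10 Mbit/s, ~512 m at 100 Mbit/s, ~51 m at 1 Gbit/s -- which is why
# half-duplex gigabit uses carrier extension, and switched full duplex drops the
# constraint entirely.
```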

20% is so much of a gain that it can’t be explained by the packet overhead. The difference in packet overhead between jumbos and regular frames isn’t even 1%. I bet that the difference is in the TCP window back-off.

Looking at the two graphs, I see that the max throughput is about the same for both. The regular frames, however, back off farther than the jumbo frames. That is what is killing performance, IMHO.

If you have a way to monitor TCP window size over time, I bet you’ll see the difference in the graphs. If the connection is relatively free of noise, you could also increase the retransmit timeout to improve performance. You could run Wireshark and snoop a few TCP packets to get a feel for the TCP window size over time.
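
If you’d rather script it than eyeball Wireshark, here is a rough sketch using scapy (my suggestion, not the commenter’s); it needs root/admin privileges to sniff, and port 445 (SMB) and the packet count are arbitrary choices:

```python
# Prints the advertised TCP window of SMB traffic. Note that the congestion
# window, which actually does the backing off after a loss, lives on the sender
# and isn't visible on the wire, but the advertised window and retransmissions
# in the capture still tell you a lot about what the transfer is doing.
from scapy.all import sniff, TCP

def show_window(pkt):
    if TCP in pkt:
        print(pkt[TCP].window)

sniff(filter="tcp port 445", prn=show_window, count=50, store=False)
```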

It is absolutely possible to make your networking worse by enabling jumbo frames, so proceed with caution.

Yes! This is so true. Our previous network admin royally screwed up the network by enabling jumbo frames on some switches but not all of them.

@JMJ AFAIK you’re right for classic/old Ethernet networks: hubs and/or coaxial cable (10 Mbit). If you use switches (and not hubs), the collision domain is reduced to two points: you and the switch. The switch uses internal buffers, and every port is its own collision domain.

Yes, jumbo frames do rock. Disk speed over the wire. Hard to get all your drivers and cards to support them, though:
http://www.hanselman.com/blog/WiringTheHouseForAHomeNetworkPart5GigabitThroughputAndVista.aspx

Jumbo frames made a HUGE difference for throughput on my home network. I went from 30-ish MB/sec to roughly 50 MB/sec. The only downside is that I game a lot, and this can cause more latency. I enabled them on my other Vista 64 box and my home server, and they transfer much better now (I’m not using jumbo frames on my gaming rig). Great read!

Doug

The most interesting thing about this is the very low usage of the network capacity.

60 MB/s × 8 bits/byte = 480 Mbit/s

Not counting the SMB (Windows networking) headers, the Ethernet/IP/TCP headers only account for an additional ~3% (or less for jumbo frames).

The problem is Windows’ SMB protocol. You can typically get DOUBLE the speed (i.e., a 100% improvement) using FTP, which is much better than the 26% improvement Jeff achieved using jumbo frames.
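
For what it’s worth, here is the framing-overhead arithmetic written out as a small Python calculation (my illustration; the exact percentage depends on whether you count the preamble and inter-frame gap):

```python
# Rough framing-overhead math for standard vs. jumbo frames. Per-frame cost on
# the wire: preamble + SFD (8) + Ethernet header (14) + FCS (4) +
# inter-frame gap (12) = 38 bytes, plus 20 bytes IP and 20 bytes TCP headers.
def efficiency(mtu: int) -> float:
    payload = mtu - 40        # MTU minus IP + TCP headers
    wire = mtu + 38           # MTU plus Ethernet-level overhead
    return payload / wire

for mtu in (1500, 9000):
    print(f"MTU {mtu}: {efficiency(mtu):.1%} of the wire carries payload")
# MTU 1500: ~94.9%   MTU 9000: ~99.1% -- only a few percent of the wire either
# way, so the big wins people report have to come from above the framing layer.
```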

Jumbo frames are great. I work on VMware ESX networking, and I will point out what may not be obvious to everyone. In a virtualized environment (hosted or hypervisor), jumbo frames make an even bigger difference, since you are doing more work per packet to begin with. That’s why we’ve had jumbo frame support since ESX 3.5 shipped.

My experience is that any recent machine can easily push full 1 Gbit line rate (natively, and for that matter in ESX VMs). Setting jumbo frames will save you CPU, though, which will allow you to run more VMs or otherwise use that power. And while jumbo frames are nice (they get you from 1.5K packets to 9K), TCP Segmentation Offload (TSO) is much better, since you push down entire 64K (or sometimes up to 256K) buffers and an engine on the NIC itself automatically handles dicing them into 1.5K packets. Most good NICs support this: Intel, Broadcom, etc. On the receive side, the counterpart is LRO (or RSS), but that is more complicated and less common. Plus, with TSO you don’t have to worry about MTU.

The other thing I would mention is: for the love of god, don’t run networking benchmarks by doing a file copy. With 1 Gbit networks, you are limited by your disk speed! Run a proper tool such as iperf (brain-dead simple) or netperf, which just blasts data. Even if your hard drive could reach 1 Gbit speeds, you would be wasting cycles on disk I/O, so your networking performance would look worse. You always want to look at these things in isolation.
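
In that spirit, here is a toy memory-to-memory blaster in Python, purely to illustrate "blast data, skip the disk" (use iperf or netperf for real numbers; the port, buffer size, and duration below are arbitrary choices of mine):

```python
import socket
import time

PORT = 5001                      # arbitrary choice
CHUNK = b"\x00" * 65536          # 64 KB buffer, never touches a disk

def receive():
    """Accept one connection and report how fast data arrived."""
    with socket.socket() as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("", PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            total, start = 0, time.time()
            while data := conn.recv(len(CHUNK)):
                total += len(data)
            elapsed = time.time() - start
            print(f"{total * 8 / elapsed / 1e6:.0f} Mbit/s")

def send(host, seconds=10):
    """Blast zeros at the receiver for a fixed number of seconds."""
    with socket.create_connection((host, PORT)) as s:
        deadline = time.time() + seconds
        while time.time() < deadline:
            s.sendall(CHUNK)
```

Run receive() on one box and send("the-other-box") on the other; since nothing touches a disk, whatever number falls out reflects the network and the stacks rather than the drives.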

Sorry… as I read more misinformed posts, I feel the need to lay down more truth.

The reason all these people are seeing performance improvements from jumbo frames on Windows is that the Windows networking stack sucks. Windows is really stupid and often will not let a single TCP stream reach the full capacity of the NIC. I.e., you run 1 TCP stream and measure 400 Mbit/s, but if you ran 3 in parallel you would hit 940 Mbit/s (roughly line rate). This is even more annoying with 10G, since you need something like 18 streams to reach peak performance. Linux doesn’t have these problems and will give you its best possible performance on a single stream. I can only imagine Windows’ behavior is the result of some misguided attempt at ensuring fairness between connections by making sure that even if there is only one connection, it never uses the full capacity.

I recently went gigabit. What a letdown. True, copying 5 GB DVD images became much faster, but otherwise I don’t notice it.

To add insult to injury, I had to turn OFF jumbo frames on my gaming PC because the online lag/latency was horrible.