The biggest problem I have with this now is that over the past few years at home I’ve gone from a primarily wired network with one or two wireless devices to a primarily wireless network with a single wired device.
I’ve only got one desktop that’s actively in use and on a wired ethernet connection anymore. Then there’s a wireless printer, a PS3, laptops, an iPhone, and so forth, all still running at 54 Mbps. And it seems things are transitioning more in that direction than towards gigabit ethernet, at least at home.
I’d imagine the bigger packets could be slower on high-activity LANs due to collisions? It would be interesting to see some testing on that kind of thing. In a realm of gigabit LANs, though, this may have little impact.
Another advantage, besides the lower overhead that yields a higher data rate, is lower CPU usage (fewer interrupts) at the same data rate. This is a big topic in industrial vision systems, where you have GigE cameras with given data rates.
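The interrupt-rate point above is easy to see with some back-of-the-envelope arithmetic. This sketch assumes one interrupt per received frame (i.e. no interrupt coalescing) and a round 1 Gbit/s of payload, just to show the proportionality:

```python
# Rough interrupt-rate math for a GigE camera stream: assuming one
# interrupt per received frame (no interrupt coalescing), a larger MTU
# means proportionally fewer interrupts at the same data rate.

def frames_per_second(data_rate_bps: float, payload_bytes: int) -> float:
    """Frames needed per second to carry data_rate_bps of payload."""
    return data_rate_bps / (payload_bytes * 8)

GIGE = 1_000_000_000  # 1 Gbit/s of payload, for round numbers

std = frames_per_second(GIGE, 1500)    # ~83,333 frames (interrupts) per second
jumbo = frames_per_second(GIGE, 9000)  # ~13,889 frames per second

print(f"1500-byte MTU: {std:,.0f} frames/s")
print(f"9000-byte MTU: {jumbo:,.0f} frames/s ({std / jumbo:.0f}x fewer interrupts)")
```

With 9000-byte frames the kernel handles one sixth as many frames per second, which is exactly why jumbo frames matter for fixed-data-rate devices like GigE cameras.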
I have been building networks for broadcasters for over a decade - people who always wanted bigger/faster/more type networks.
jumbo frames are great in theory, but the pain level can be very high.
A core network switch can be brought to its knees when 9 Kbyte frames have to be fragmented to go out a lower-MTU interface.
Many devices don’t implement PMTU discovery correctly, or just ignore the responses - video codecs seem particularly prone to this.
And wasn’t there a discussion a few posts ago about not trying to optimise things too much? If you need 20% more network performance but are only operating at maybe 40% load, then you need a faster machine or a better NIC.
And there have been something like five definitions of jumbo just in the Cisco product line. Also, telecoms manufacturers’ idea of jumbo often means frames of 4 Kbytes, not 9 Kbytes…
And just to set the record straight - the reason for the 1514-byte frame limit in GigE and 10G ethernet is backward compatibility.
Just about every network still has some 10/100 (or 10-only) equipment, and the 1514 limit has been built into other standards such as 802.11 wireless LAN.
the old saying is that God would have struggled to make the world in 7 days if he started with an installed base…
I think that your UDP packet schema is wrong.
An Ethernet frame is max 1518 bytes (14 header + 1500 max payload + 4 trailer).
With UDP, that payload is divided between the IP and UDP headers (20 + 8) and the UDP payload.
So the max UDP payload is: 1518 - 14 - 4 - 20 - 8 = 1472 bytes.
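The header budget above, written out (header sizes here are the standard IPv4-without-options values):

```python
# Header budget for a maximum-size (non-jumbo) ethernet frame carrying UDP.

ETH_FRAME_MAX = 1518   # on-the-wire frame: header + payload + CRC
ETH_HEADER    = 14     # dst MAC + src MAC + EtherType
ETH_CRC       = 4      # frame check sequence (trailer)
IP_HEADER     = 20     # IPv4 header without options
UDP_HEADER    = 8      # UDP header

max_udp_payload = ETH_FRAME_MAX - ETH_HEADER - ETH_CRC - IP_HEADER - UDP_HEADER
print(max_udp_payload)  # 1472
```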
I think you’ve made a slight factual error. The payload of an ethernet frame can total 1500 octets, with 18 more octets for the header and CRC data. That 1500 is the Maximum Transmission Unit, or MTU, at the ethernet layer. At the IP layer, which must be fully contained within the ethernet payload, the IP and UDP/TCP headers compete with the body of the packet for those 1500 octets.
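That competition between headers and body is where the efficiency gain of jumbo frames comes from. A quick sketch, assuming IPv4 and TCP both without options (20 bytes each), and ignoring the ethernet preamble and inter-frame gap:

```python
# Payload efficiency at the ethernet layer for TCP, comparing a standard
# 1500-byte MTU against a common 9000-byte jumbo MTU.

ETH_OVERHEAD = 14 + 4   # ethernet header + CRC (preamble/inter-frame gap ignored)
IP_HEADER = 20          # IPv4 without options
TCP_HEADER = 20         # TCP without options

def tcp_efficiency(mtu: int) -> float:
    payload = mtu - IP_HEADER - TCP_HEADER   # max TCP segment data
    return payload / (mtu + ETH_OVERHEAD)

print(f"Max TCP data at 1500 MTU: {1500 - IP_HEADER - TCP_HEADER}")  # 1460
print(f"1500 MTU efficiency: {tcp_efficiency(1500):.1%}")            # ~96.2%
print(f"9000 MTU efficiency: {tcp_efficiency(9000):.1%}")            # ~99.4%
```

So jumbo frames buy roughly three percentage points of wire efficiency for TCP; the bigger wins are the per-frame CPU and interrupt costs discussed elsewhere in the thread.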
Wireless networks have a lot of transmission errors, so large frames would not work well. In any case, jumbo frames are only useful if the CPU cannot keep up with the data rate, and since wireless networks are quite slow, that is not an issue.
I have not enabled jumbo frames on my network and probably never will, because I have some PCs with 100 Mbps NICs (that do not need faster ones). If I enabled jumbo frames, I would have to put a router between the gigabit and 100 Mbps parts of my network so that it could fragment the packets. I will just use gigabit NICs with TCP segmentation offloading support.
I did some of this myself recently. I ended up getting Intel NICs for all the boxes, and everything works much better. Also, with TCP there is some ability to negotiate frame sizes, so some mix-and-match can work.
However, I tossed all my NICs that have the 1k/2k…7k sizes - it just didn’t work too well, and in the case of my Myth box, not at all!
Plus, with Linux you may need to do some tuning to get things running smoothly.
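For reference, the basic Linux-side tuning looks something like the following. This is a sketch, not a recipe: `eth0` and the `192.168.1.10` target are placeholder names, and every device on the subnet (including the switch) has to support the chosen MTU.

```shell
# Set a 9000-byte MTU on the interface (iproute2 syntax; needs root).
# "eth0" is a placeholder interface name.
ip link set dev eth0 mtu 9000

# Verify it took:
ip link show dev eth0 | grep mtu

# Test end-to-end without fragmentation:
# 8972 = 9000 - 20 (IP header) - 8 (ICMP header)
ping -M do -s 8972 192.168.1.10
```

If the ping fails with the don’t-fragment flag set, something in the path is still at a smaller MTU.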
There are too many pitfalls. I’ve gone down this road before, and it is painful, and yet, I do the same as you so you’ve got me thinking that maybe I should try it again.
You’ve listed many of the problems, but the comments have listed more (non-jumbo devices, the internet, etc.). I’m willing to bet you a beer that after a while of dealing with the nuisances, you’re just going to say F it and put the settings back to default. (Those settings are supposed to be there for people like us, but they’re really a bit more like a Dickensian toy shop.)
Actually, maybe the ideal solution lies in one of the more recent trends in motherboard design/layout: multiple onboard ethernet NICs. If the number of machines you truly move large files between is small (as mine is), run a small, exclusive (maybe even all-the-same-NIC), parallel network.
The problem is that even a simple solution like that takes time and money to set up, it complicates your network a good bit, and I’d argue the cost is far in excess of the slightly longer transfer times you incur by using defaults and thus guaranteeing everything just works. It’s a problem that is probably unnecessarily optimized.
Most home networking gear, including routers, has safely transitioned to gigabit ethernet.
Not for wireless connections it hasn’t.
You supply the caveat that “technically, for jumbo frames to work, all the networking devices on the subnet have to agree on the data payload size,” but make no mention of jumbo frames being unsupported by wireless connections.
Most home connections at present have at least some wireless components - these wireless connections are almost invariably on the same subnet as any wired connections.
Your suggestion is likely to cause a lot of slightly-informed people grief as they attempt to eke out a little more performance from their networking gear, without knowing WTF they are doing.
I’m glad Nicolas mentioned IPv6, as that is much more relevant here than jumbo frames. If I’m not mistaken, IPv6 requires devices to use Path MTU Discovery, so jumbo frames would largely be a non-issue.
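For what it’s worth, you can ask the kernel to do PMTU discovery explicitly on a socket. This sketch is Linux-specific; the numeric fallback values are the ones from `<linux/in.h>`, used in case the Python `socket` module on a given build doesn’t export the named constants:

```python
# Sketch: request Path MTU Discovery on a UDP socket (Linux-specific).
import socket

# Fall back to the <linux/in.h> values if the constants aren't exported.
IP_MTU_DISCOVER = getattr(socket, "IP_MTU_DISCOVER", 10)
IP_PMTUDISC_DO = getattr(socket, "IP_PMTUDISC_DO", 2)

def enable_pmtud(sock: socket.socket) -> None:
    """Set the don't-fragment behaviour so the kernel performs PMTU discovery."""
    sock.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)
```

With that set, oversized datagrams fail with a "message too long" error instead of being silently fragmented, which makes MTU mismatches visible to the application.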
Most home networking gear is gigabit? Really? I’d like to see where you’re drawing the statistics for that. Even if the average home user is keeping on the leading edge of home network routers and ethernet cards (instead of the economy class offerings of older technology), how many of them are going to know to go get the latest drivers from the chipset manufacturer? Or to go in and change the default settings of anything, let alone the obscure depths of network card settings?
My gut reaction based on my personal experience is that gigabit is still a few years off for the typical home user. But I’m willing to be proven wrong if you’ve got some studies…
File transfer is typically done using TCP, not UDP. TCP has more overhead than UDP.
I’m curious why we see a sawtooth pattern in the non-jumbo-frame graph. Is that TCP Vegas doing its thing?
I’m glad you’ve gone ahead and tried this out. Jumbo frames wouldn’t exist if they didn’t have a purpose, but with all the different kinds of traffic I think 1500 MTU is a good choice.
One issue with jumbo frames that you touched on, but didn’t adequately explain, is that most consumer switches use the store-and-forward method of switching. This means your switch must receive the whole frame before it can send it along; it can’t do anything else with that port in the meantime, because frames can’t be multiplexed. This can cause unacceptable latency (with 2 computers it’s not a big deal, but between several machines all trying to send data, you can end up with some seriously delayed packets).
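The per-hop cost of store-and-forward is just the serialization delay of the whole frame, so it scales linearly with frame size. A quick sketch (frame sizes include the 18 bytes of ethernet header and CRC):

```python
# Store-and-forward serialization delay: the switch must receive the whole
# frame before forwarding it, so per-hop latency grows with frame size.

def serialization_delay_us(frame_bytes: int, link_bps: float) -> float:
    """Microseconds to clock one frame onto (or off) a link."""
    return frame_bytes * 8 / link_bps * 1e6

for frame in (1518, 9018):        # standard vs jumbo, incl. 18 B of overhead
    for rate in (100e6, 1e9):     # 100 Mbit/s and gigabit links
        d = serialization_delay_us(frame, rate)
        print(f"{frame} B frame @ {rate / 1e6:.0f} Mbit/s: {d:.1f} us/hop")
```

A 9018-byte frame costs roughly 72 µs per store-and-forward hop at gigabit speeds, versus about 12 µs for a standard frame, and over 700 µs if it ever has to cross a 100 Mbit/s link.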
I just would have liked to see more reasons not to do this than that it’s not a supported standard and doesn’t work with a lot of hardware. There are other reasons this has not become the default.
Denton Gentry has it right. We’re always trying to wring the maximum out of our data-video network (mixed IRIX-Windows), and some years ago GigE + jumbo frames were like magic bullets on the network segments that could handle it, easily giving a 50-60% boost.
With so much legacy 100base gear, and switches that didn’t understand jumbos, it took a long time before everything was integrated (you don’t just replace a 200+ port switch every day). In key places we have dual 1000base NICs, one talking to the jumbo-capable segments, and one to the legacy segments. But with IRQ-coalescing NICs, and offloaded CSum etc, the difference is much less than 50% now.
And of course, our high-performance segments have migrated to Infiniband, and 10GigE is around the corner. It never stops.