At work today, we had a problem with a particular workstation. Although it was connected to a gigabit ethernet hub, network file transfers were "too slow". How do you quantify "too slow"?
I finally unpacked my server, which has a gigabit 64-bit PCI card (it’s a dual Athlon MP system). I’m using the d-link gigabit router described above, and the integrated nForce 4 gigabit on my desktop. The tray icons for both connections show 1.0 Gbps.
I get anywhere from **33-35 megabytes/sec** when running a TTCP test from my desktop to the server. That’s in the ballpark…
D:\Storage\ttcp>pcattcp -t -f M homeserver
PCAUSA Test TCP Utility V2.01.01.08
TCP Transmit Test
Transmit : TCP - 192.168.0.10:5001
Buffer Size : 8192; Alignment: 16384/0
TCP_NODELAY : DISABLED (0)
Connect : Connected to 192.168.0.10:5001
Send Mode : Send Pattern; Number of Buffers: 2048
Statistics : TCP - 192.168.0.10:5001
16777216 bytes in 0.48 real seconds = 33.06 MB/sec +++
numCalls: 2048; msec/call: 0.24; calls/sec: 4231.40
The old 64-bit PCI gig-E card went kaput. The new card is a single-chip design, and it does substantially better in the same server on the same exact network; nothing else has changed:
pcattcp -t -f M homeserver
34.12 MB/sec
pcattcp -t -l 16384 -f M homeserver
64.00 MB/sec
pcattcp -t -l 32768 -f M homeserver
93.16 MB/sec
pcattcp -t -l 65536 -f M homeserver
105.00 MB/sec
pcattcp -t -l 131072 -f M homeserver
103.73 MB/sec
Er… wow. About the same with small 8 KB buffers, but WAYYY better with larger ones! Which is a good thing, because all this donkey porn isn’t going to copy itself from the server. That’s all I’m saying…
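The buffer-size effect is easy to reproduce without pcattcp: a loopback TCP sender that varies only the size of the chunk handed to each send call. This is a rough Python sketch of the same idea, not pcattcp's actual internals, and loopback won't show the same absolute numbers as a real NIC, but the trend (per-call overhead amortized over bigger writes) is the same:

```python
import socket
import threading
import time

TOTAL = 16 * 1024 * 1024  # 16 MB, same as pcattcp's default (2048 x 8 KB)

def drain(server):
    """Accept one connection and read until the sender closes it."""
    conn, _ = server.accept()
    while conn.recv(65536):
        pass
    conn.close()

def send_test(bufsize):
    """Push TOTAL bytes over a loopback TCP socket in bufsize chunks;
    return throughput in MB/sec."""
    server = socket.socket()
    server.bind(("127.0.0.1", 0))  # any free port
    server.listen(1)
    t = threading.Thread(target=drain, args=(server,))
    t.start()

    client = socket.create_connection(server.getsockname())
    payload = b"x" * bufsize
    start = time.perf_counter()
    sent = 0
    while sent < TOTAL:
        client.sendall(payload)
        sent += bufsize
    client.close()
    t.join()
    server.close()
    return sent / (time.perf_counter() - start) / (1024 * 1024)

if __name__ == "__main__":
    for size in (8192, 65536, 131072):
        print(f"{size:>7}-byte buffers: {send_test(size):.1f} MB/sec")
```

Each run makes TOTAL/bufsize system calls, so the 8 KB case pays sixteen times the per-call overhead of the 128 KB case for the same data.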
Remember that if the program is copying the data to a hard drive, that’s where your bottleneck will be. The transfer can’t go faster than the disk can write, so your HDD write speed will be the determining factor. (I think.)
Well, your hard drive may seem to be the bottleneck, since it’s the slowest component of the computer, but many modern desktop hard drives can read/write 50 MB/s on average (that’s big M, big B, for megabytes per second). Personally, my 500 gig SATA2 Samsung 7200 rpm can read/write from 60 to 80 MB/s, bursting up to 100 MB/s or more. Even my older IDE 7200 rpm drive can transfer at least 30 MB/s. Speaking of bottlenecks, USB hard drives are bottlenecked to 25-30 MB/s as a result of USB Hi-Speed’s low sustained bandwidth (theoretical burst speeds up to 480 Mb/s, but it sustains around 300 Mb/s). This is one reason why I’d prefer a file server with gigabit connections over a JBOD configuration of USB hard drives.
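The bits-versus-bytes arithmetic in this comment trips people up constantly: link speeds are quoted in megabits per second, disk speeds in megabytes per second. A trivial sketch of how the quoted rates divide out (decimal units, ignoring protocol overhead):

```python
def mbit_to_mbyte(mbit):
    """Convert a link rate in megabits/sec to megabytes/sec (8 bits per byte)."""
    return mbit / 8

print(mbit_to_mbyte(480))   # USB 2.0 signaling rate -> 60.0 MB/s ceiling
print(mbit_to_mbyte(300))   # realistic sustained USB 2.0 -> 37.5 MB/s
print(mbit_to_mbyte(1000))  # gigabit ethernet -> 125.0 MB/s ceiling
```

So the 25-30 MB/s the commenter observes on USB drives is close to the ~37 MB/s sustained ceiling once real-world overhead is subtracted.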
Just tested with OS X 10.5.1 between a PowerMac G5 and a PowerBook G4, both connected via gigabit (standard MTU of 1500) to a NETGEAR GS105. I’m getting 70 MB/sec, which is much faster than the rule of thumb posted above (this is between two machines, not in memory on a single machine).
Great article, very useful. I’m actually looking at an industrial application that requires about 50-80 MB/s data rates, and I’m investigating whether gigabit ethernet could be a viable (cheap) solution.
However, I have one query: when you refer to changing the packet size, I presume you don’t actually mean the packet size, but rather the amount of data handed to each send call? If that’s the case, the speed difference is probably due to the per-call overhead being amortized over more data.
I thought the packet size (MTU) defaulted to 1500 bytes, or 9 KB with jumbo frames.
I just tried benchmarking my home network. All my PCs are connected to a gigabit ethernet router. Gigabit ethernet really is a lot faster.
Using pcattcp -t -l 131072 -f M MyPC, I can get about 108-109 MB/s, which is about 864-872 Mbps. For file transfers from PC1 to PC2, I can get about 38 MB/s.
Yep. I have achieved over 100MB/s sustained writing from RAM to RAM over gigabit ethernet, and this includes any SAMBA overhead.
I have never used pcattcp, but I do use iperf, and it confirms that I am getting the full speeds one should expect from gigabit ethernet, even on cheap shitty Realtek chips.
My guess is that something else is failing in this test. Maybe old wiring that isn’t good for gigabit, or a shitty switch or something.
I ran the test again; I had to use two console windows:
In the first window pcattcp.exe -r
In the second window pcattcp -t -f M localhost
Results:
TCP Transmit Test
Transmit : TCPv4 0.0.0.0 -> 127.0.0.1:5001
Buffer Size : 8192; Alignment: 16384/0
TCP_NODELAY : DISABLED (0)
Connect : Connected to 127.0.0.1:5001
Send Mode : Send Pattern; Number of Buffers: 2048
Statistics : TCPv4 0.0.0.0 -> 127.0.0.1:5001
16777216 bytes in 0.033 real seconds = 483.56 MB/sec +++
numCalls: 2048; msec/call: 0.017; calls/sec: 61895.551
So, ten years later, we’ve gone from 90 MB/sec to 484 MB/sec in loopback TCP testing. Real-world throughput from machine to machine these days in 2015 is about 120 MB/sec max.
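That ~120 MB/sec real-world ceiling falls straight out of per-frame overhead at a standard 1500-byte MTU. A back-of-envelope sketch (assuming IPv4 and TCP headers with no options, plus standard ethernet framing; decimal megabytes):

```python
# Gigabit ethernet moves 10^9 bits/sec on the wire = 125,000,000 bytes/sec.
LINE_RATE = 125_000_000

MTU = 1500
TCP_PAYLOAD = MTU - 20 - 20          # minus IPv4 header and TCP header
WIRE_FRAME = MTU + 14 + 4 + 8 + 12   # eth header, FCS, preamble, inter-frame gap

goodput = LINE_RATE * TCP_PAYLOAD / WIRE_FRAME
print(f"{goodput / 1e6:.1f} MB/sec")  # prints "118.7 MB/sec"
```

Only 1460 of every 1538 bytes on the wire are TCP payload, so ~118-119 MB/sec is the hard ceiling for a single 1500-MTU TCP stream; "about 120 MB/sec max" is gigabit running essentially flat out.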