
Gigabit Ethernet vs USB 2.0 Transfer Speed Performance

I tried to find a comparison of Gigabit Ethernet vs USB 2.0 transfer speed performance on the internet, but couldn’t find anything useful: everyone compared theoretical speeds rather than actual speeds and simply concluded that Gigabit is faster. I didn’t believe that, because actual and theoretical transfer speeds are two different things, so I ran the test myself.

The setup: an SBS 2008 server with a powered external Seagate 3.5″ 1.5TB USB 2.0 drive plugged directly into it, and a Thecus NAS connected over Gigabit Ethernet. The NAS is connected to the server via Cat6e cable through a quality Netgear Gigabit Ethernet switch (with separate buffers on each port, etc.), and both the NAS and the server have dual Gigabit NICs configured in a load-balanced arrangement. I copied 5GB files from both the USB drive and the NAS onto the server. The test was done on the weekend, so there was no other load on the network.

The result: 2:45 for the USB 2.0 external hard drive and 3:45 for the Gigabit Ethernet NAS. So even though in theory Gigabit Ethernet should be faster, in practice USB 2.0 was actually faster. If you’re trying to decide whether to do backups to a NAS connected via Gigabit LAN or to an external USB 2.0 drive, and performance is your main consideration, the USB 2.0 drive should actually be faster. It’s also worth noting that 2.5″ drives that draw their power from the USB port seem to run a fair bit slower than a powered USB drive.
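For anyone who wants to reproduce the comparison, here is a minimal timing sketch. The paths are hypothetical placeholders (the original test used 5GB files), and `measure_copy_MBps` is just an illustrative helper, not part of any backup tool:

```python
import os
import shutil
import time

def measure_copy_MBps(src: str, dst: str) -> float:
    """Copy src to dst and return the observed throughput in MB/s."""
    size_mb = os.path.getsize(src) / (1024 * 1024)
    start = time.perf_counter()
    shutil.copyfile(src, dst)
    elapsed = time.perf_counter() - start
    return size_mb / elapsed

# Hypothetical paths, one per transport under test:
# usb_rate = measure_copy_MBps(r"E:\test\5gb.bin", r"C:\restore\5gb.bin")      # USB 2.0 drive
# nas_rate = measure_copy_MBps(r"\\nas\share\5gb.bin", r"C:\restore\5gb.bin")  # Gigabit NAS
```

Running each copy a few times and taking the median helps, since caches and other load can skew a single run.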

Categories: SBS2008
  1. March 28th, 2012 at 23:35 | #1

    I definitely recommend against bumping up the MTU on your network. I’ve tried doing this and it creates all kinds of hassles, even though all the equipment (apart from some PCs) actually supports jumbo frames.

  2. lol
    April 28th, 2012 at 20:16 | #2

    Not to be mean, admin of SHELDMANDU, but I understand your scenario as a college/university network admin who has to pay money to the old resellers (Dell, Windows, Cisco, etc.), and why you are seeing slow data rates. If you use any *nix box, you can pipe a netcat connection straight to the HDD over any Cat5+ cable; with a journaling file system like ZFS or Btrfs you could enjoy 300-329Mb/s I/O speed on just a Pentium M processor with a simple dedicated motherboard for backups (i.e. an RJ45 port and a CPU wired to a SATA drive; I do not care whether it is SSD or not). But since you must use Windows, you *&%$#. On most of my hardware nodes I have a dedicated cron job that stages all the servers’ data in RAM overnight; when I need to run the backup command on the cutie board, it opens the port and I pipe all the data to the server.

    In your scenario, though, you would need a dedicated program to lift the data into RAM and pass it to the baby board via call queues, using a regular encrypted TCP connection. I hate Windows so much that I do not want to give you the batch script you could use to do this SOOO easily.
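    The netcat-style pipe-to-disk receive described above can be sketched in Python. The port number and output path are hypothetical; `nc -l 9000 > backup.img` is the shell one-liner equivalent:

```python
import socket

def receive_to_disk(port: int, out_path: str, chunk: int = 1 << 20) -> int:
    """Accept one TCP connection and stream its payload straight to disk,
    roughly what `nc -l PORT > out_path` does. Returns bytes written."""
    written = 0
    with socket.create_server(("", port)) as srv:
        conn, _addr = srv.accept()
        with conn, open(out_path, "wb") as f:
            while data := conn.recv(chunk):
                f.write(data)
                written += len(data)
    return written

# The sender side would be something like: cat disk.img | nc server 9000
```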

    Is it possible you can change this post and add:

    “Sorry, this was a test done using servers that are not properly managed for their TCP/IP queues, and I used a product I did not build myself. It seems the NAS was poorly designed around its RJ-45 port and how it talks to the Intel IOP processor to pipe data to the RAIDed HDDs, in a non-optimized fashion. I apologize for misleading anyone; I did not know this product was not optimized. Please read the rest of the comments below to learn how I found this out. And again, I sincerely apologize to anyone I misled.”

    You can reword this any way you please. But what we do not want is readers trustful of your posts being misled into thinking that the USB 2.0 protocol is faster for data transfers than Cat5/e, especially for I/O transfers on an RJ-45 port built for 1Gb/s. USB 3.0, for me, is a toy. A poorly done toy.

    I had to reword this comment several times to make sure there was nothing offensive in it. It is hard, though, when your job is to evaluate hardware and software for a web hosting company’s decision on what the next servers should be for the next 4k customers running on them. And yes, I need to keep my privacy so I do not lose my job, so please remove my IP address from this post in the database. (I hope this is not MySQL. *Facepalm*)

  3. JM
    May 3rd, 2012 at 08:57 | #3


    Actually, the write speed of RAID5 is slower.

  4. September 3rd, 2012 at 10:39 | #4

    I was googling the issue of NAS vs. USB external drive speeds as I’m developing a backup plan for a business client of mine. Your testing has confirmed my suspicions, and since there will be 5 computers all creating a huge backup at the same time, that would slow things even further… so USB it is! Thanks for a great real-world test!!

  5. September 3rd, 2012 at 15:58 | #5

    I would actually get a USB 3.0 card these days and a 7200RPM HDD in a USB 3.0 caddy. That is BY FAR the fastest way to go, with the speed of the HDD being the main limiting factor.

  6. stuart
    September 4th, 2012 at 08:13 | #6

    Hello, I use a pair of fairly low-powered Fedora systems for all my file share and backup storage. Both run RAID5 using the Linux MD driver. All of this is exposed as SMB shares so my Windows systems can see the disks. At peak performance (moving large files) I can manage over 70MB/s across the Gb LAN, roughly 56% of its theoretical 125MB/s throughput, and well past the theoretical limit of USB 2.0. I haven’t played with USB 3.0 enough to draw any conclusions, but at 5Gb/s it should exceed Gb LAN without difficulty, since it has five times the bandwidth.

    I should say that I also have a Thecus N4200eco that I turned off because the performance was so woeful.

    My 2 HP Microservers are only slightly larger than the Thecus NAS and way more flexible. The only thing missing is the pretty GUI to configure it.

  7. Ray Myers
    September 17th, 2012 at 22:29 | #7

    First thing: if you are testing transfer rates across the network to a USB-attached hard drive, the maximum write speed to disk will be set by the USB transfer rate. For USB 2.0 that is 480Mbps, or 60MB/s. So no matter the device, you are going to be capped by the USB-to-hard-drive transfer rate.

    I know I am late to chime in on this, but it is important to note that you did not clarify whether you were comparing PCI to USB, PCI-X to USB, or PCI-e xN to USB…

    Given the bus speed differences (1 byte = 8 bits):
    USB 2.0 = 60MB/s
    PCI = 133MB/s to 533MB/s (32-bit to 64-bit)
    PCI-X = 1066MB/s to 2133MB/s (133MHz to 266MHz)
    PCI-e = 250MB/s to 16GB/s (v1.x single lane to v3.0 16-lane)
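    The bits-to-bytes conversions in the list above can be sanity-checked in a couple of lines (`mbps_to_MBps` is just an illustrative helper):

```python
def mbps_to_MBps(megabits_per_second: float) -> float:
    """Convert a link rate in Mb/s to MB/s (1 byte = 8 bits)."""
    return megabits_per_second / 8

usb2 = mbps_to_MBps(480)   # 60.0 MB/s: USB 2.0 signalling rate
gige = mbps_to_MBps(1000)  # 125.0 MB/s: Gigabit Ethernet line rate
```

    This also confirms the point below: USB 2.0 tops out at roughly half of Gigabit Ethernet's byte rate.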

    The impact on performance can be significant…

    On a Gbit network, i.e. 125MB/s (again, 1 byte = 8 bits), USB 2.0 is already capping throughput to roughly half. Standard PCI can run into bottlenecks since the bus is shared, so depending on the number (and type) of PCI cards you will see significant differences.

    PCI-e will yield the best performance by far and come closest to the Gigabit capability of the cards…

  8. September 21st, 2012 at 16:27 | #8

    Actually, I was transferring data to a NAS, not a USB hard drive: I compared a NAS attached via Gigabit LAN against a HDD attached directly to the server via USB 2.0, and found that transferring to the NAS over Gigabit LAN was slower. In any case, the fastest way to go these days would be a HDD attached via USB 3.0; at that point it’s the speed of the HDD that limits you.

  9. September 21st, 2012 at 16:30 | #9

    We also use Microservers in addition to the Thecus, and they perform far better because the CPU (either an N36L or N40L) is much more powerful than what’s in the NAS. CPU makes a massive difference. I would also suggest FreeNAS if you’re using a Microserver, as that way you get the pretty GUI too. USB 3.0 is definitely faster, BUT you are then limited by the HDD speed.

  10. Eric
    November 6th, 2012 at 13:20 | #10

    Symantec Backup Exec reports full backups to our USB 3.0 WD external 7200RPM disk (connected via a USB 2.0 port) run at a rate of ~1,500MB/min (25MB/s).

    Symantec Backup Exec reports full backups to our QNAP TS-459 Pro-II (4x 1TB 7200RPM disks in RAID-5) run at a rate of ~3,300MB/min (55MB/s).

    I hope to increase the speed of the backups to the NAS by implementing a “backup” VLAN using jumbo frames, and to increase the speed of backups to the WD external USB disk by adding a SuperSpeed USB 3.0 controller card to our server. I might also try load balancing the dual 1GbE NICs on both the NAS and the server, although I don’t think it would produce much, if any, gain over a jumbo-frame VLAN, since I didn’t see the network monitor go above ~75% during the backup.
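    One way to put the figures above in context is to compare each observed rate against its link's byte-rate ceiling (60MB/s for USB 2.0, 125MB/s for Gigabit Ethernet); by that measure both transports are running well under half of their cap (`utilization` is an illustrative helper):

```python
def utilization(rate_MBps: float, link_MBps: float) -> float:
    """Fraction of a link's byte-rate ceiling actually achieved."""
    return rate_MBps / link_MBps

usb_util = utilization(25, 60)   # ~0.42 of USB 2.0's 60 MB/s cap
nas_util = utilization(55, 125)  # 0.44 of Gigabit's 125 MB/s line rate
```

    That both sit in the same band suggests the disks and the backup software, not the links, are the bottleneck here.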
