Gigabit Ethernet vs USB 2.0 Transfer Speed Performance

I’ve tried to find a comparison of Gigabit Ethernet vs USB 2.0 transfer speed performance on the internet, but couldn’t find anything useful, as everyone compared theoretical speeds rather than actual speeds and simply concluded that Gigabit is faster. I didn’t believe that, as actual and theoretical transfer speeds are two different things, so I did the testing myself on an SBS 2008 server. I compared a powered external Seagate USB 2.0 3.5″ 1.5TB drive plugged into the server against a Gigabit Ethernet Thecus NAS, copying 5GB files from both the USB drive and the NAS onto the server. It’s worth noting that the NAS is connected to the server with Cat6e cable via a quality Netgear Gigabit Ethernet switch (with separate buffers on each port, etc.), and both the NAS and the server have dual Gigabit cards configured in a load-balanced arrangement.

The result I got was 2:45 for the USB 2.0 external hard drive and 3:45 for the Gigabit Ethernet NAS. The test was done on the weekend, so there was no other load on the network. So even though in theory Gigabit Ethernet should be faster, in practice USB 2.0 is actually faster. If you’re trying to work out whether to do backups to a NAS connected via Gigabit LAN or to an external USB 2.0 drive, and performance is your main consideration, the USB 2.0 drive should actually be faster. It’s also worth noting that 2.5″ USB 2.0 drives that draw their power from the USB port seem to run a fair bit slower than powered USB drives.
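For context, here’s what those timings work out to as effective throughput (a quick back-of-the-envelope sketch in Python, using the 5GB file size and the times from the test above):

```python
# Effective throughput from the copy timings above (5GB file).
file_mb = 5 * 1024            # 5GB expressed in MB

usb_seconds = 2 * 60 + 45     # 2:45 for the powered USB 2.0 drive
nas_seconds = 3 * 60 + 45     # 3:45 for the Gigabit Ethernet NAS

print(f"USB 2.0 drive: {file_mb / usb_seconds:.1f} MB/s")   # ~31 MB/s
print(f"Gigabit NAS:   {file_mb / nas_seconds:.1f} MB/s")   # ~23 MB/s

# Both figures sit well below the theoretical bus speeds
# (USB 2.0: ~60 MB/s; Gigabit Ethernet: 125 MB/s), which is
# exactly why comparing theoretical numbers is misleading.
```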

  1. JohnP
    July 19th, 2010 at 13:31 | #1

    Real world testing is good. Thanks. Real data is always better than “feelings” or “beliefs” without data.

    Do you think there were CPU or bus limitations causing the GigE slowdown for the client?
    Was the target drive SATA/eSATA connected to the client or also USB connected?
    Did you try the test going in the client —> server direction?

    I’m about to mirror a production VM server to another drive (for initial DR seeding) and need to have the server in single-user mode for the final sync, but for the initial pre-seeding either USB or remote GigE is available. It’s about 500GB of data, but since I’ll be impacting a production server, I’d like to get it done during a maintenance window with as little impact as possible. It will definitely be run at a lower priority regardless.

  2. July 20th, 2010 at 01:42 | #2

    I tried copying back and forth both from the USB drive and from the NAS. Perhaps the issue is with the NAS rather than with the Gigabit LAN on the server, but at least in my case the Gigabit network is definitely slower than USB. I actually expected this, as the theoretical throughput of a Gigabit LAN is very different from its actual throughput.

  3. July 20th, 2010 at 12:37 | #3

    @admin
    Ah … a NAS. Often the little CPUs in the cheaper NAS devices (for SOHO or home use) can’t keep up with the network. Obviously, it completely depends on the NAS involved. You didn’t say what kind of NAS (EMC, NetApp, ReadyNAS+, or something else). The enterprise-class NAS devices will keep up with the network until their cache is full. Since you’re talking about a Netgear switch, I’ll assume you have a cheaper NAS that may not have the CPU to keep up.

    If your network IO gets over 650Mbps, you’re doing well. Some of the cheaper GigE NICs only support around 300Mbps. I’ve never had issues with the Intel PRO/1000 MT NICs ($29), but I have had issues with some of the no-name brands. If you stay with server-class NICs, like those provided by Dell, HP, IBM, and Intel, then the performance should be fairly good. Realistically, if you get 750-850Mbps, you’re doing fantastic.

    I’ve learned that there is a delay to my DR transfer date, so I’ll have some time to run some tests with USB, SATA, and GigE and then post them on my blog. For this client the network is very SOHO, but all the systems are GigE-connected, and they use a lot of virtualization. Another set of data points for the discussion.

  4. July 22nd, 2010 at 03:21 | #4

    The NAS we have is a Thecus N4100 PRO. Sure, it’s not an HP or a NETGEAR, but it’s still pretty reasonable, and at the moment it runs with 2 drives rather than 4, so it should have sufficient CPU. We also have load balancing set up on the LAN cards, which are Gigabit. One thing we did find is that when we changed from a Repotec Gigabit switch (not great, but good for the price) to a NETGEAR (excellent) the speed did improve, as the NETGEAR switch has buffering on each port. Either way, my experience with this, and even with moving data to other computers (i.e. another beasty server), has been much the same.

  5. Navin
    September 9th, 2010 at 13:59 | #5

    Wow, thanks to a Google search I landed on the right page. I was exploring the same kind of thing you tested when deciding whether to buy a NAS drive. I currently have an ASUS router with a USB 2.0 port and an externally powered 300GB IOGear hard drive plugged into it. After reading up on NAS drives, I thought the throughput would be faster, but I believe the real limit is the wireless speed. I can’t remember the model number of my router, but it’s about 6 months old, so a fairly recent one, if not the latest.

  6. September 9th, 2010 at 14:48 | #6

    Basically, whichever way you look at it, the network speed is going to really slow things down for you, and USB, or better still FireWire or eSATA, is the way to go. Also, with a NAS you’ll most likely find that the processor isn’t the best, and that’s the reason why things are even slower than they should be.

  7. Joel D Hencken
    September 11th, 2010 at 12:09 | #7

    Thanks for your post. I’ve been extremely frustrated with the transfer speeds on my home network (Gigabit), but you get much closer results than I do. I use a sync program for My Documents between my main PC and my NAS. Reading the full folder takes about 30 seconds on the PC, about 45 seconds on a USB HD, and over 6 MINUTES on the NAS (or on another PC on the network). :-(

    If anybody reads this and has ways to troubleshoot the transfer speeds on a wired network, I’d be grateful.

  8. September 12th, 2010 at 02:29 | #8

    That doesn’t sound right, and there’s definitely some issue that has nothing to do with the Gigabit network itself. I’ve had issues in the past where Jumbo Frame support had to be turned off on the server because the switch didn’t support it, but I don’t think that’s the issue here; there’s something else in play. To troubleshoot a problem like this you need to eliminate the possible causes one by one.
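    One way to start eliminating causes is to take the disks and the file-sharing protocol out of the picture entirely and measure raw TCP throughput between two machines. Tools like iperf do exactly this; below is a minimal Python sketch of the same idea (the port number and 1GB transfer size are arbitrary choices). Run it as `server` on one machine, then as `client <server-ip>` on the other:

    ```python
    import socket
    import sys
    import time

    PORT = 5001          # arbitrary test port
    CHUNK = 64 * 1024    # 64KB per send/receive
    TOTAL = 1 << 30      # move 1GB in total

    if sys.argv[1] == "server":
        # Receiving side: accept one connection and time the transfer.
        srv = socket.create_server(("", PORT))
        conn, _ = srv.accept()
        received = 0
        start = time.time()
        while received < TOTAL:
            data = conn.recv(CHUNK)
            if not data:
                break
            received += len(data)
        secs = time.time() - start
        print(f"Raw TCP throughput: {received / secs / 1e6:.1f} MB/s")
    else:
        # Sending side: blast zero-filled buffers at the server.
        sock = socket.create_connection((sys.argv[2], PORT))
        payload = b"\0" * CHUNK
        sent = 0
        while sent < TOTAL:
            sock.sendall(payload)
            sent += CHUNK
        sock.close()
    ```

    If this raw test gets close to wire speed (roughly 110-120MB/s on Gigabit) while file copies stay slow, the bottleneck is the disks, the NAS CPU, or the file-sharing protocol rather than the network itself.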

  9. Jim
    November 15th, 2010 at 22:57 | #9

    @Joel D Hencken
    Make sure that none of your Ethernet network devices have a duplex mismatch. Cisco switches are notorious for causing this problem when set to auto-detect. Forcing everything to full duplex at Gigabit will ensure that you don’t have this problem.
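    If you’d rather check this from software than walk the switch config, something like the following sketch will list each interface’s negotiated speed and duplex (it relies on the third-party psutil library, and the interface names will of course vary by machine):

    ```python
    import psutil  # third-party: pip install psutil

    # Flag any interface that is up but didn't negotiate
    # full duplex at 1000 Mb/s.
    for name, stats in psutil.net_if_stats().items():
        if not stats.isup:
            continue
        full = stats.duplex == psutil.NIC_DUPLEX_FULL
        ok = full and stats.speed >= 1000
        print(f"{name}: {stats.speed} Mb/s, "
              f"{'full' if full else 'half/unknown'} duplex"
              f"{'' if ok else '  <-- check this one'}")
    ```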

  10. November 16th, 2010 at 00:36 | #10

    Good point. That wasn’t the case with my setup; it’s just that stated throughput is never the same as actual throughput on the network.

  11. kcg
    January 25th, 2011 at 23:03 | #11

    Nice joke indeed! So when you compare the speed of your Seagate drive with the speed of your NAS drive(s), you claim you have compared USB 2.0 with GigE, and the conclusion is: USB 2.0 is faster! Please, next time publish something like that on April 1st.

  12. January 26th, 2011 at 00:19 | #12

    I know what you mean… I would have expected the NAS with Gigabit Ethernet to be faster as well, but it wasn’t. In theory, writing to a NAS over a Gigabit LAN connection should be faster than writing to a USB 2.0 7200RPM drive, but in practice it isn’t. The reason is that the actual throughput of a Gigabit LAN is somewhat less than its theoretical throughput. If you did networking at uni and have worked with networks for a while like I have, you’d understand why that may be the case.

    I should note that I tested on a real-life network where everything is wired through a Netgear 48-port switch, so there is other traffic on the network as well, which slows things down. Other factors, like the length of the cable, whether the cable is Cat6e or Cat5e, what power cables are nearby and so on, would most certainly make a difference, as all of these can cause packets to be re-transmitted and slow things down.

    I should also point out that the NAS itself could have a significant impact on the speed. My testing was done with a Thecus, and I can’t say it’s the best NAS around. I might get somewhat better performance with something like a Netgear ReadyNAS, but with my specific hardware the results I got were the results I got. I ran the tests over and over and tinkered around to try to account for all the factors, so the results are as legitimate as they’re going to get.
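    To put rough numbers on that overhead, here’s a back-of-the-envelope calculation for standard (non-jumbo) Ethernet framing; the figures are protocol constants rather than measurements from my network:

    ```python
    # Per-frame overhead on standard Gigabit Ethernet (1500-byte MTU).
    mtu = 1500                            # IP packet size
    tcp_payload = mtu - 20 - 20           # minus IP and TCP headers = 1460 bytes
    wire_bytes = mtu + 14 + 4 + 8 + 12    # + Ethernet header, FCS, preamble, gap

    efficiency = tcp_payload / wire_bytes
    print(f"TCP payload efficiency: {efficiency:.1%}")        # ~94.9%
    print(f"Best case: {1000 / 8 * efficiency:.0f} MB/s")     # ~119 MB/s

    # That ~119 MB/s ceiling is before SMB overhead, retransmissions,
    # switch buffering, disk speed, and the NAS CPU take their cut,
    # which is why real-world copies land far below it.
    ```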

  13. February 6th, 2011 at 14:52 | #13

    There are more issues involved than just how quick the GigE network is. Not only do you need to optimise the network side of things (making sure all parts operate with the same size jumbo frames, etc.), you also need to ensure that the drives themselves are capable of being written to at the speeds you require.
    In order to do a real-world test, to show which is quickest, you need to make sure that the drives in use are the same. In my experience, a correctly optimised network isn’t the limiting factor; the drives (and their respective interfaces) are.

  14. February 7th, 2011 at 00:39 | #14

    The drives used are 7200 RPM drives connected via SATA (it’s a Thecus NAS), so there isn’t much that can be changed in terms of their configuration. I have, however, heard that some NASes can slow things down considerably because they don’t have enough processing power and thus deliver sub-optimal performance. As I don’t have another NAS to test with, I can’t really tell. In terms of the network, all sides do operate with the same frame sizes, as we had an issue related to frame size previously (when using the Repotec switch, before we made the change to Netgear) and are well aware of how this can impact performance. You are most likely right that it’s not the network that’s slowing things down, as we do indeed have the dual Gigabit ports configured to run in parallel and I’ve tried other configurations too, but I still find USB 2.0 has been faster (though not by much).

  15. Fred
    February 17th, 2011 at 12:14 | #15

    I’ve got 3 of the Thecus NAS units, in varying flavours (two N4100s and an N5100), all RAID 5, all with 500GB-1TB Seagate Barracuda 7200 SATA drives, and all with Gigabit Ethernet. Not one of them gives adequate performance even as a backup target for our server (a Mac Xserve/Xserve RAID). Typical network transfer speeds are 30-60MB/s (depending on transport protocol, with SMB and AFP being significantly slower than FTP), and the server will support multiple clients at these speeds concurrently, but my Theci are lucky if they hit 5MB/s, which (to make matters worse) gets divided by the number of clients, whatever the protocol, so it’s ALL down to the crap embedded PC in the Thecus. An old Buffalo TeraStation I had previously was even worse!

    Bottom line, you get what you pay for, so for home/SOHO use USB 2.0 will be quicker than Gigabit for external storage (IMO anyway! :-) ). That said, I use a NAS (an Apple Time Capsule) at home for automated wireless backup of my laptop, which only runs 802.11g (the router is N), and it works absolutely fine for my needs and doesn’t interfere with web surfing or music streaming, so it definitely comes down to what you need.

  16. ploogman
    February 17th, 2011 at 20:58 | #16

    Sorry to report that most average NAS devices over Ethernet (either 100Base-T or 1000Base-T/Gigabit) tend to max out at around 10Base-T speeds in terms of their actual read/write performance. This puts NAS several times behind USB drives in terms of speed.

    Yes, almost all USB drives are faster than almost all NAS devices if you are clocking the time to read or write files with a file system. To get better speeds over a network for storage, you have to escalate to fibre and special SAN setups (storage arrays), or it will feel like you are using a computer from 15-20 years ago in terms of storage speed. NAS is great because everyone can share it, but it is usually dog slow and really hasn’t increased much in speed in several years. That’s probably the result of slow micro-CPUs inside NAS devices combined with the overhead of normal Ethernet networking. The race to the bottom, price-wise, for NAS has not improved performance one iota.

    Yes, USB is much faster. For backups, video editing, etc., use USB unless you really need a NAS for some reason (like sharing files with users in an office). Many NAS devices come with multiple drives, which is also good for redundancy. However, good luck recovering data from some of the devices on the market. They may say they use the XFS file system, or ext2 or ext3, or even FAT32, but reading those drives after a failure is often easier said than done. Again, the race to the bottom on price has meant some crappy devices.

  17. February 18th, 2011 at 00:37 | #17

    Yes, that sounds about right… I suspected the problem was in part due to the Thecus hardware, as otherwise I would normally expect Gigabit to perform better.

  18. March 28th, 2011 at 23:36 | #18

    I know you guys are all talking about “cheap”, off-the-shelf NAS devices, but I’m wondering how much throughput you got with the USB 2.0 drive.

    I just built an OpenSolaris box that I intend to use as a NAS (out of old hardware lying around):
    * Core 2 Duo @ 2.0GHz
    * 2GB RAM
    * 3 x 2TB drives (7200 RPM, 64MB cache, 6.0Gb/s SATA) configured in a ZFS raidz1 (the equivalent of RAID 5)
    * Single GigE NIC (obviously without any network load balancing enabled)

    I’m wondering how much of this plays into the specification of the drives themselves also!
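    One way to answer that is to benchmark a local sequential write on the box first, so the drive/RAID speed is measured without the network in the way. A rough sketch (the file path and sizes are placeholder choices; random data is used so ZFS compression can’t fake the result):

    ```python
    import os
    import time

    PATH = "/tank/testfile"              # hypothetical path on the ZFS pool
    CHUNK = os.urandom(4 * 1024 * 1024)  # 4MB of random (incompressible) data
    TOTAL = 2 * 1024**3                  # write a 2GB test file

    start = time.time()
    with open(PATH, "wb") as f:
        written = 0
        while written < TOTAL:
            f.write(CHUNK)
            written += len(CHUNK)
        f.flush()
        os.fsync(f.fileno())             # make sure the data actually hits the disks
    secs = time.time() - start
    os.remove(PATH)

    print(f"Sequential write: {TOTAL / secs / 1e6:.0f} MB/s")
    ```

    Comparing that figure against what you see over the wire tells you whether the drives or the network are holding you back.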

  19. March 29th, 2011 at 00:10 | #19

    The spec of the drives certainly matters, as does the RAID configuration… Any striping will obviously increase the performance of the array as a whole, as you’re writing data across multiple drives rather than one drive (or two in a RAID 1 configuration). RAID 10, for example, works faster than RAID 1, and RAID 5 benefits from striping too, though its parity calculation adds write overhead. The processing power of the unit also seems to have quite an effect on performance. I suspect that with a setup such as the one described, the bottleneck will be the network and not the hardware or the drives.
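    As a rough illustration of how the RAID level changes sequential throughput (assuming, purely hypothetically, that a single 7200 RPM drive manages about 100MB/s on its own):

    ```python
    drive_mbs = 100   # assumed sequential speed of one drive, MB/s
    n = 4             # drives in the array

    print(f"RAID 0 (stripe): ~{n * drive_mbs} MB/s read and write")
    print(f"RAID 1 (mirror): ~{drive_mbs} MB/s write, since both copies are written")
    print(f"RAID 5 (parity): ~{(n - 1) * drive_mbs} MB/s read; writes run lower "
          f"because parity must be computed, which taxes a weak NAS CPU")
    ```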

  20. April 17th, 2011 at 04:01 | #20

    It’s not a claim, it’s what the testing has shown, but as discussed I believe the reason is the relatively crappy NAS (a Thecus).
