HDD bottleneck on EMC SAN


  • HDD bottleneck on EMC SAN

    I have been benchmarking different configurations on my EMC AX4-5 SAN, and I am running into a bottleneck on some of the tests. I have been using the default tests (file server, web server, workstation, database), and I noticed that when expanding the disk pool to 8 x 450GB 15K SAS drives, I get the same database test numbers as with 4 disks. The average is right around 44-46 MB/sec transfer rate over a 300 second test. The other tests show about the improvement I would expect. I just tried running a database test against the 8 disk pool and another 4 disk pool at the same time, and the combined rate fell right within my 44-46 MB/sec ceiling. Could this be a bottleneck in the program itself, or should I start looking elsewhere?

    The program is running on a VMware virtual server with quad gigabit NICs, 2 x 4-core CPUs, and 32GB RAM. Looking in Performance Monitor, nothing obvious seems to be coming anywhere near its limit.

    Any help or advice is appreciated.
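
    For reference, one way to get hard numbers out of Performance Monitor during a run is to log the relevant counters with typeperf and compare the peaks afterwards. Below is a minimal sketch, assuming a Windows guest; the counter instances, sample count, and output file are placeholders that would need to be narrowed to the volume and NICs actually carrying the test traffic.

        import subprocess

        # Performance Monitor counters worth watching while the benchmark runs.
        # The _Total and * instances are placeholders - narrow them to the iSCSI
        # volume and NICs that actually carry the test traffic.
        counters = [
            r"\PhysicalDisk(_Total)\Disk Bytes/sec",
            r"\PhysicalDisk(_Total)\Avg. Disk sec/Transfer",
            r"\PhysicalDisk(_Total)\Current Disk Queue Length",
            r"\Network Interface(*)\Bytes Total/sec",
        ]

        # Sample once per second for the length of the 300 second test and
        # write the results to CSV so the peaks can be compared afterwards.
        subprocess.run(
            ["typeperf", *counters, "-si", "1", "-sc", "300",
             "-o", "disk_net.csv", "-f", "CSV", "-y"],
            check=True,
        )

    Comparing Disk Bytes/sec against the per-NIC Bytes Total/sec from the same run makes it easier to tell whether the ceiling sits in the storage path or inside the guest.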

  • #2
    I don't see why adding more storage capacity would improve performance.

    The database test involves a fair amount of disk seeking. So maybe the limit is just the hard drives themselves?



    • #3
      Originally posted by passmark:
      I don't see why adding more storage capacity would improve performance.

      The database test involves a fair amount of disk seeking. So maybe the limit is just the hard drives themselves?
      I am adding the disks to RAID groups. My 8 disk RAID group is in a RAID 10 configuration; the bigger the RAID group, the more read/write capacity it has. I thought about it being a limit of the disk pools themselves, which is why I ran the test against the two different disk pools at the same time (one 4 disk RAID 5 pool and one 8 disk RAID 10 pool). I can't imagine it could be a limitation of the 3 shelf SAN. It's an iSCSI SAN, so I'm thinking that if there isn't a bottleneck in the way the program runs the test, there must be a bottleneck in the network or VMware setup. I want to figure this out before I finish moving everything to a 20 disk RAID 10 pool.
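
      As a quick sanity check on the network theory, the measured rate can be compared against what the iSCSI paths can physically carry. The sketch below is only back-of-the-envelope arithmetic, assuming gigabit links and a rough 10% loss to Ethernet/IP/TCP/iSCSI framing; the real overhead and the number of active paths depend on jumbo frames and how MPIO is configured.

          # Rough ceiling for iSCSI traffic over gigabit Ethernet.
          # Assumption: ~10% of the wire rate goes to Ethernet/IP/TCP/iSCSI
          # framing; actual figures vary with jumbo frames and MPIO policy.
          GIGABIT_MB_PER_SEC = 1_000_000_000 / 8 / 1_000_000  # 125 MB/s raw
          PROTOCOL_EFFICIENCY = 0.90                          # assumed

          for active_paths in (1, 2, 4):
              ceiling = active_paths * GIGABIT_MB_PER_SEC * PROTOCOL_EFFICIENCY
              print(f"{active_paths} active path(s): ~{ceiling:.0f} MB/s ceiling")
          # 1 active path(s): ~112 MB/s ceiling
          # 2 active path(s): ~225 MB/s ceiling
          # 4 active path(s): ~450 MB/s ceiling

      Since 44-46 MB/sec is well under even a single gigabit path, the raw link speed alone probably isn't the whole story, but the numbers at least bound what the network side could deliver.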



      • #4
        We aren't aware of any limitation in the software.
        The same test can reach speeds of 300 MB/sec with locally connected solid state drives.
        But it is hard to get high throughput in the database test because the block size is so small; the per-request overhead starts to matter more than the raw disk throughput.
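
        To illustrate why small blocks cap the rate, the cost of each I/O can be split into a fixed overhead (seek, rotation, controller and iSCSI round trip) and the actual data transfer. The sketch below is a simplified model with assumed latency figures rather than measured ones, just to show how throughput falls away as the request size shrinks.

            # Simplified model: every request pays a fixed overhead plus transfer time.
            # Both constants below are assumptions for illustration only.
            FIXED_OVERHEAD_MS = 6.0      # assumed seek + rotation + stack/iSCSI round trip
            SEQUENTIAL_MB_PER_SEC = 150  # assumed media rate of one 15K SAS drive

            def per_disk_throughput(block_kb: float) -> float:
                """Return the MB/s one disk can sustain at the given request size."""
                transfer_ms = (block_kb / 1024) / SEQUENTIAL_MB_PER_SEC * 1000
                request_ms = FIXED_OVERHEAD_MS + transfer_ms
                return (block_kb / 1024) / (request_ms / 1000)

            for block_kb in (4, 16, 64, 1024):
                print(f"{block_kb:5d} KB requests: ~{per_disk_throughput(block_kb):6.1f} MB/s per disk")
            #     4 KB requests: ~   0.6 MB/s per disk
            #    16 KB requests: ~   2.6 MB/s per disk
            #    64 KB requests: ~   9.7 MB/s per disk
            #  1024 KB requests: ~  78.9 MB/s per disk

        In this model, adding spindles still scales the small-block numbers, so if the fixed cost sits mostly in the iSCSI or VMware stack rather than on the disks themselves, that would be the part that stops scaling with the RAID group.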
