Hello to all,
I've done a lot of testing over the last two days and found out the following:
On another infrastructure I have two physical HP DL380 G7 servers running Oracle RAC on Red Hat Enterprise Linux.
The servers are connected via iSCSI (network bonding with 2 NICs on the Red Hat side) to the same Cisco switches (3750) and NetApp storage (2240-2).
I found an I/O testing script on this page: http://benjamin-schweizer.de/measuring-disk-io-performance.html and ran it on one of the Oracle servers.
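For reference, this is roughly what such a test does (a minimal sketch in Python, not the exact script from that link): 4 KiB random reads opened with O_DIRECT, so the Linux page cache can't inflate the numbers. The device path /dev/sdb is just a placeholder for the iSCSI LUN, and it needs root:

```python
import mmap
import os
import random
import time

DEV = "/dev/sdb"   # placeholder; point this at your iSCSI LUN
BLOCK = 4096       # 4 KiB random reads
DURATION = 10      # seconds to run

# O_DIRECT bypasses the Linux page cache, so every read must hit the device.
fd = os.open(DEV, os.O_RDONLY | os.O_DIRECT)
size = os.lseek(fd, 0, os.SEEK_END)

# O_DIRECT requires an aligned buffer; mmap hands back page-aligned memory.
buf = mmap.mmap(-1, BLOCK)

ops = 0
deadline = time.time() + DURATION
while time.time() < deadline:
    offset = random.randrange(size // BLOCK) * BLOCK
    os.preadv(fd, [buf], offset)   # positioned read into the aligned buffer
    ops += 1
os.close(fd)

print("%.0f IOPS, %.1f MB/s" % (ops / DURATION, ops * BLOCK / DURATION / 1e6))
```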
The results were:
In real time, sysstat on the NetApp shows the following:
So this tells me that the iSCSI performance between a physical server and my NetApp is OK!
Now comes the part I don't understand: I ran the same test on a Linux system in the virtual infrastructure:
That looks OK, but in real time this is what is going on on my NetApp:
So this is weird: my Linux VM in VMware tells me I have good IOPS and throughput on the virtual system, but down on the NetApp there is actually nothing going on.
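One thing I can check from inside the guest is whether my reads even leave the VM: read the same test file once buffered and once with O_DIRECT. If the buffered number is huge and the direct number collapses, the "good" results are coming from a cache somewhere in the stack, not from the NetApp. A minimal sketch (/mnt/testfile is just a placeholder path):

```python
import mmap
import os
import time

PATH = "/mnt/testfile"   # placeholder; any large test file on the VM's disk
BLOCK = 1024 * 1024      # 1 MiB sequential reads

def throughput(flags):
    """Read the whole file and return MB/s."""
    fd = os.open(PATH, os.O_RDONLY | flags)
    buf = mmap.mmap(-1, BLOCK)      # page-aligned, as O_DIRECT requires
    total, start = 0, time.time()
    while True:
        n = os.readv(fd, [buf])     # read into the aligned buffer
        if n == 0:
            break
        total += n
    os.close(fd)
    return total / (time.time() - start) / 1e6

print("buffered: %.1f MB/s" % throughput(0))            # may be served from RAM
print("direct:   %.1f MB/s" % throughput(os.O_DIRECT))  # forced to hit storage
```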
How can it be that the performance is SO BAD, when the same test on a real hardware cluster with the same iSCSI connection through the Ciscos and the same NetApp storage works fine???
Something in VMware must be killing my performance to the storage system. And no, the switches are OK; we compared the configuration of the Ciscos in both environments.
To sum it up:
Physical servers -> 2 NICs bonded and connected to the Ciscos -> the Cisco sends the data through a 4-port trunk to the NetApp -> on the NetApp, all 4 NICs are joined into one virtual interface (bond check sketched after this list)
Virtual infrastructure -> 1 ESX server -> 4 NICs go to the Cisco with iSCSI multipathing (see the config screenshot above) -> the connection between the NetApp and the Ciscos is 10 GbE, over the NetApp's mezzanine card and the Cisco's 10 Gb modules
Physical servers -> IOPS and throughput OK
Virtual infrastructure -> IOPS and throughput miserable and not acceptable
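For completeness, on the physical hosts I can confirm that the bond really has both NICs up (a minimal sketch, assuming the RHEL default interface name bond0):

```python
# Print the bond mode and the state/speed of each slave NIC.
# Assumes the bond interface is named bond0; adjust if yours differs.
with open("/proc/net/bonding/bond0") as f:
    for line in f:
        line = line.strip()
        if line.startswith(("Bonding Mode", "Currently Active Slave",
                            "Slave Interface", "MII Status", "Speed")):
            print(line)
```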
PLEASE HELP!!
Many thanks in advance for your help. This is driving me crazy!
Greetings, Marc