ReadyNAS VMware iSCSI Performance

 

 

 

I purchased a ReadyNAS 4200 to be used as my iSCSI storage for two ESXi 5 hosts. Out of the box I always felt there was more performance to be had. To check the performance of my setup I used IoMeter (http://www.iometer.org/) with two different test suites. The first was the ReadyNAS test (http://www.readynas.com/download/iometer/iometer.icf). My results were:

 

Read  = 74 MB/s

Write = 65 MB/s

 

The second test was the unofficial VMware Open Performance Test (http://www.mez.co.uk/OpenPerformanceTest.icf). This test puts more load on the system than the ReadyNAS test. My results were:

 

Read 100%         = 112 MB/s

RealLife               = 86 MB/s

MaxThroughput = 117 MB/s

Random              = 85 MB/s

 

The theoretical maximum for a 1 Gb network connection is 125 MB/s (1,000 Mb/s divided by 8 bits per byte), so the Open Performance Test showed I was close to that limit. The problem was that I had two 1 Gb NICs in both the ReadyNAS and the ESXi hosts, so my theoretical maximum was 250 MB/s. I wanted to go faster.

 

In this article I will describe how I quadrupled my iSCSI performance with 4 NICs and proper VMware MPIO configuration. You could use the same steps with 2 NICs and double your performance.

 

I suggest you run IoMeter with both test suites and record your results as a baseline before you make any changes to your network. It is best to suspend or power down all of your VMs except the one running IoMeter. The IoMeter test runs against the VM's C: drive, but since the VM's datastore is on the iSCSI target we are really testing the performance of your network and the iSCSI storage device.

 

Here is a tutorial on running IoMeter tests. I recommend you test drive C:  http://technodrone.blogspot.com/2010/06/benchmarking-your-disk-io.html

 

Now back up all your data!

 

I spent several weeks researching VMware iSCSI performance. The research concluded that NIC teaming, LACP (IEEE 802.3ad Link Aggregation) and the IP hash algorithm would not work with my switch, an Adtran 1534. What will work is MPIO (Multipath I/O) round robin on the ESXi 5 hosts. With MPIO you simply create multiple paths between the ESXi host and the ReadyNAS on different VLANs.

 

I purchased a 2-port 1 Gb NIC from ReadyNAS so I would have 4 NICs for my iSCSI SAN, and a 4-port 1 Gb NIC for my ESXi hosts. This allows me to have 4 paths with a theoretical maximum throughput of 500 MB/s.

 

To configure the 4 paths I set up 4 VLANs: 20, 110, 120 and 130. One NIC on the ReadyNAS and one NIC on the ESXi host will be in each VLAN. Using separate networks is the recommended configuration for MPIO.

 

I would suggest you document your network setup, as all of these VLANs and IP addresses get confusing.

 

ESXi

 

nic                    IP                     Physical Switch Port             VLAN

vmnic2   192.168.20.87                                   15                    20

vmnic3   192.168.110.87                                 16                    110

vmnic4   192.168.120.87                                 7                      120

vmnic5   192.168.130.87                                 8                      130

 

ReadyNAS

 

nic                    IP                     Physical Switch Ports              VLAN

1          192.168.20.130                                   10                    20       

2          192.168.110.130                                 11                    110

3          192.168.120.130                                 12                    120

4          192.168.130.130                                 13                    130

 

 

ReadyNAS NIC setup

 

If you already have an iSCSI LUN on your ReadyNAS, back up all the data. I have found that changing the IP addresses of a ReadyNAS with an iSCSI LUN will cause it to become inaccessible from VMware. When you change the IPs back, the LUN becomes available again.

 

 

 

Set up each network interface.

 

 

 

Make sure teaming is disabled on all 4 interfaces.

 

 

 

Uncheck VLAN if you are using the default VLAN on your ports, and enable jumbo frames on all 4 interfaces. If you are using VLAN tagging you will need to check VLAN and enter the VLAN id.

 

 

Finish configuring the 3 remaining NICs.

 

ReadyNAS iSCSI setup

 

Open your ReadyNAS Frontview.

Select Volume | Volume Settings | Volume C | iSCSI

Enable iSCSI support and click Apply.

 

Select Create iSCSI Target

 

 

 

 

Enter the target name and capacity.  I believe 2000 GB is the max for the ReadyNAS.  Select Create.

 

Click OK through the confirmation windows. We will set access control once the iSCSI initiator is configured on the ESXi host.

 

 

 

 

Switch Setup

 

Now you need to configure your switch ports for the proper VLANs and enable Jumbo Frames.  I have an Adtran 1534.

 

Log in to the switch and create a new VLAN.

Data | VLANs | New VLAN

 

 

Configure the other 3 VLANs the same way.

 

 

Configure all 8 ports for their default VLANs, using your documentation of physical switch port and VLAN.

Ports | select port

Set the default VLAN

 

Repeat for the remaining 7 ports.

 

Configure Jumbo Frames

System | Physical Interfaces

Set the MTU to the maximum of 9216.

 

 

ESXi 5 setup

 

Verify your NICs in vSphere.

Select the host | Configuration | Network Adapters

 

 

Create a vSwitch for the NICs.

We will be associating each vmnic with a VMkernel port. For a naming convention it is best to take the port number of the vmnic and use it for the VMkernel port name.

 

nic                    VMkernel port

vmnic2            iSCSI2

vmnic3            iSCSI3

vmnic4            iSCSI4

vmnic5            iSCSI5

 

Select your ESXi host | Configuration | Networking | Add Networking

 

 

 

 

 

 

Select all 4 of the NICs.
 

Name the network label with the same number as the vmnic. So this is iSCSI2, which will be assigned vmnic2.

 

 


 

Assign the IP address for the NIC.

 

 

 

Now go into the Properties of the vSwitch

 

 

 

 

 

 

Add a new VMkernel port group. We will be adding 3 more for a total of 4, one for each NIC.

 

 

 

 




Name the network label with the same number as the vmnic. So this is iSCSI3, which will be assigned vmnic3.

 

 

 


 

Assign the IP address for the NIC.

 

 

 

 

Continue on and add the 2 remaining VMkernel ports.


Your vSwitch should end up with 4 iSCSI VMkernel ports.
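
If you prefer the command line, the same vSwitch, port groups and VMkernel ports can be created with esxcli from an SSH session. This is only a sketch of what I mean, assuming the vSwitch is called vSwitch1 and the vmk numbers follow the same naming convention as the port groups; adjust the names to match your host, and repeat the last four commands for iSCSI3/vmnic3, iSCSI4/vmnic4 and iSCSI5/vmnic5.

# vSwitch1 and the vmk2 interface name are assumed names - change them to match your host
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic2
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI2
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=iSCSI2
esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=192.168.20.87 --netmask=255.255.255.0 --type=static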

 

Now we will set Jumbo Frames on the vSwitch. Choose the vSwitch and click Edit.
 

Change the MTU to 9000 and click OK.

 

 

Now we will set Jumbo Frames and assign one NIC to each of the VMkernel ports. Select the first VMkernel port and click Edit.

 

On the General tab set the MTU to 9000, then click the NIC Teaming tab.


On the NIC Teaming tab check Override switch failover order. We must have only 1 vmnic active. Since this is iSCSI5 we want vmnic5 active and all the rest set as Unused Adapters. Select vmnic2 and click Move Down 5 times. Do the same for vmnic3 and vmnic4.

 

 

The end result should look like this: one active adapter, vmnic5. Choose OK.

 

 

Edit the remaining 3 VMkernel ports, setting the MTU to 9000 and leaving only each port's matching vmnic active, with all the other NICs set to Unused.
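
The MTU and failover order changes can also be scripted with esxcli if you have several hosts to configure. Again this is just a sketch, assuming the vSwitch1 and vmk2-vmk5 names from above; after running it, double check in the vSphere Client that the non-matching vmnics show up under Unused Adapters and not Standby.

# vSwitch1 and vmk2-vmk5 are assumed names from my naming convention
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
# Repeat the next line for vmk3, vmk4 and vmk5
esxcli network ip interface set --interface-name=vmk2 --mtu=9000
# Pin a single active uplink per port group (repeat for iSCSI3/vmnic3, iSCSI4/vmnic4, iSCSI5/vmnic5)
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI2 --active-uplinks=vmnic2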

 

 


Network Testing

 

Now that the ESXi networking and ReadyNAS networking are configured, we should do some ping tests to verify all the MPIO paths.

 

SSH into the ESXi host and attempt to ping all 4 IPs of the ReadyNAS using vmkping.

 

# vmkping 192.168.20.130

# vmkping 192.168.110.130

# vmkping 192.168.120.130

# vmkping 192.168.130.130

 

If any of the pings fail you will need to troubleshoot the cabling and VLAN configuration. The best way to verify your cables are where you want them is to go into your switch and disable a single port. Then in vSphere Networking or ReadyNAS Frontview you should see that NIC disconnected. If you disable port 16 and vmnic2 gets disconnected, you know you have connected the wrong cable, because port 16 should be connected to vmnic3. This is where good network documentation comes into play.
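
You can also watch the link state from the ESXi side while you disable switch ports. A quick check:

# The Link column shows Up/Down for each vmnic, so you can confirm which cable you just took down
esxcli network nic list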

 

Once you get all your paths pinging, we will want to test Jumbo Frames.

# vmkping -s 8972 -d 192.168.20.130

# vmkping -s 8972 -d 192.168.110.130

# vmkping -s 8972 -d 192.168.120.130

# vmkping -s 8972 -d 192.168.130.130

 

If any fail, check that the MTU is 9000 on the vSwitch and VMkernel ports, that your physical switch has an MTU of at least 9000, and lastly that Jumbo Frames are enabled on all 4 of the ReadyNAS NICs. The ReadyNAS requires a reboot after Jumbo Frames are enabled.
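
You can confirm the ESXi side MTUs from the same SSH session before you start blaming the switch or the ReadyNAS:

# The MTU column should read 9000 for the iSCSI vSwitch and for each of its VMkernel ports
esxcli network vswitch standard list
esxcli network ip interface list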

 

 

 

 

ESXi 5 iSCSI initiator

 

By default ESXi 5 does not have the iSCSI Software Adapter loaded.

Select the ESXi host | Configuration | Storage Adapters | Add

 

 

Now select the iSCSI storage adapter and select Properties.

Copy the initiator name (IQN) to your clipboard so we can paste it into the access control list on the ReadyNAS LUN.

 


Open your ReadyNAS Frontview. Select Volumes | Volume Settings | Volume C | iSCSI.

Select your target and the configure "gear" for the LUN, then paste the initiator name you copied into the LUN's access control list.

 

 

 


Now back to the ESXi host.

On the iSCSI initiator, select Network Configuration and add all 4 of the VMkernel port groups: iSCSI2, iSCSI3, iSCSI4 and iSCSI5.

 

 


Select Dynamic Discovery and add your target. Only add one IP for your ReadyNAS, not all 4.
 

Select the Static Discovery tab and all 4 paths should be shown.
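
The whole initiator setup can also be done with esxcli. This is only a sketch: vmhba33 is a placeholder for whatever name your host gives the software iSCSI adapter (check with esxcli iscsi adapter list), and the vmk numbers follow the naming convention above.

# vmhba33 is a placeholder adapter name - confirm yours with: esxcli iscsi adapter list
esxcli iscsi software set --enabled=true
# Bind all 4 iSCSI VMkernel ports to the software initiator
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk3
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk4
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk5
# One send target address is enough - the other paths are discovered automatically
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.20.130
esxcli storage core adapter rescan --adapter=vmhba33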

 

 

 

Go to Configuration | Storage

And you should see your LUN.

 

 

 

 

Right-click on the LUN and choose Properties.

 

Select Manage Paths.

 

 

 

By default ESXi is set to Most Recently Used (VMware), so even though we have 4 NICs only 1 will be used. We need to change the path selection to Round Robin.

 

 

 

 

Select Round Robin (VMware) from the drop down and press Change. Now you will see all 4 adapters set to Active (I/O). Press Close.
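
If you would rather do this from the command line, the path selection policy can be set per device with esxcli. The naa ID below is a placeholder; list your devices first and substitute the real one.

# List devices and their current path selection policy
esxcli storage nmp device list
# Switch one device to Round Robin (the naa ID is a placeholder)
esxcli storage nmp device set --device=naa.600xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx --psp=VMW_PSP_RR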


The Holy Grail

 

At this point we have MPIO set up and all 4 paths will be used. However, I found my performance was identical to having a single NIC: with one NIC I was getting about 100 MB/s and with 4 NICs I was still getting about 100 MB/s. Looking at the VMware network performance graph I saw something odd. With a single NIC the graph showed that NIC maxed out, but with 4 NICs each one was only running at about a quarter of that rate, so round robin was spreading the load but not fully utilizing the NICs. I searched long and hard for a solution until I found this thread: http://communities.vmware.com/message/1281730

 

By default round robin sends 1,000 I/Os down a path before switching to the next one. As we can see on the ReadyNAS, this is not optimal. The recommendation is to change round robin to switch paths after 1 I/O. When I made this change I went from 100 MB/s to 400 MB/s with all 4 NICs fully saturated.

 

SSH to the ESXi 5 host and run this:

 

# Switch Round Robin to 1 I/O per path on each naa.600* device (grep -v ":" skips partition entries)
for i in `ls /vmfs/devices/disks/ | grep naa.600 | grep -v ":"` ; do
  esxcli storage nmp psp roundrobin deviceconfig set -d $i --iops 1 --type iops
done
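
To confirm the change took, you can read the Round Robin settings back for one of the devices; the I/O operation limit should now show 1. The naa ID below is a placeholder, so substitute one of the naa.600 names from /vmfs/devices/disks/.

# Read back the Round Robin settings for one device (placeholder naa ID)
esxcli storage nmp psp roundrobin deviceconfig get --device=naa.600xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx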

 

 

Now load up a test VM and rerun the ReadyNAS test and the Open Performance Test. Here are my results.

 

ReadyNAS test

 

                              1 NIC                4 NICs

Read  = 74 MB/s              161 MB/s

Write = 65 MB/s               97 MB/s

 

Unofficial VMware Open Performance Test

 

                                    1 NIC              4 NICs

Read 100%         = 112 MB/s       404 MB/s

RealLife               =   86 MB/s       185 MB/s

MaxThroughput = 117 MB/s       393 MB/s

Random              =   85 MB/s       235 MB/s

 

The ReadyNAS test does not stress the network as hard as the Open Performance Test. I found that using more workers would make a difference, but I did not want to publish results with that change.

 

So I now have my ReadyNAS VMware environment running at full speed. I hope this document saves you some time and headaches getting maximum performance out of your setup.

 




 

 
