Yesterday was a very busy day. I didn’t have time to think, much less put together a post about it. I made it to every session I was looking for, including the always hard to get into Navisphere Manager hands-on workshop.
The session I probably got the most out of was the one on what’s new in FLARE version 26, which was released a few months ago.
FLARE 26 now supports Active/Active presentation of LUNs. What this means is that in the event of a fibre cut on either the front end or the back end, the host machine (server) will no longer need to trespass the LUN to the other SP. The host can simply send the I/O request to the other SP, and the non-preferred SP will then forward the request to the preferred SP automatically for completion. Once the preferred SP’s connectivity comes back online, requests will be sent to the preferred SP again. The newest version of PowerPath is required for this to work, or the native multipathing driver (such as the Windows Server 2008 driver) must support ALUA.
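To make the path behavior concrete, here is a minimal sketch of that ALUA-style routing with a simplified two-SP model. All the class and method names here are hypothetical illustrations of the concept, not an actual PowerPath or Navisphere API.

```python
# Minimal sketch of ALUA-style path selection on a two-SP array.
# Hypothetical names for illustration only; not a real multipathing API.

class StorageProcessor:
    def __init__(self, name):
        self.name = name
        self.front_end_up = True  # host-facing (front-end) connectivity

class Lun:
    def __init__(self, preferred, peer):
        self.preferred = preferred  # SP that owns the LUN
        self.peer = peer            # the other SP

    def submit_io(self, request):
        # With ALUA the host never has to trespass the LUN: if the
        # preferred path is down it sends the request to the peer SP,
        # which forwards it internally to the owning SP.
        if self.preferred.front_end_up:
            return f"{request} served directly by {self.preferred.name}"
        return (f"{request} sent to {self.peer.name}, "
                f"forwarded internally to {self.preferred.name}")

spa, spb = StorageProcessor("SPA"), StorageProcessor("SPB")
lun = Lun(preferred=spa, peer=spb)

print(lun.submit_io("write #1"))  # normal path through SPA
spa.front_end_up = False          # simulate a fibre cut on SPA's front end
print(lun.submit_io("write #2"))  # routed via SPB, no trespass needed
```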
Handling a broken connection between the host and the storage comes from ALUA itself. Handling the request when the connection between the SP and the DAE is broken is an EMC-only extension to ALUA.
FLARE 26 also includes RAID 6 support. When comparing RAID 6 with RAID 5 on the same system, read performance will typically be better because the data is spread across all the drives in the RAID 6 array. Unlike a lot of other systems, the EMC CLARiiON array spreads the parity sectors of both RAID 5 and RAID 6 across all the drives in the RAID Group. Because there is an extra drive in the array, a 4+2 RAID 6 RAID Group will give better read performance than a 4+1 RAID 5 RAID Group.

When doing a full stripe write, the write speed of a RAID 5 and a RAID 6 array will be basically the same. When doing smaller writes, a RAID 5 array will have faster write times than a RAID 6 array because RAID 6 has the extra parity to account for (see the worked example below).

Rebuild times after a single failed drive will be about the same for RAID 5 and RAID 6 arrays. If a RAID 6 array has to recover from a dual drive failure, it will take longer than the single drive case, as the data must be recalculated from the two parity blocks rather than from a single one. However, the odds of a dual disk failure are slim.
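As a rough worked example of why small writes cost more on RAID 6, here is the classic read-modify-write I/O accounting. These are the standard textbook penalty numbers, not CLARiiON-specific measurements, and they ignore write-cache coalescing.

```python
# Back-of-the-envelope I/O counts for a single small (sub-stripe) write.
# Standard read-modify-write accounting; assumes no write-cache coalescing.

def small_write_ios(parity_drives):
    # Read old data + old parity block(s), then write new data + new parity.
    reads = 1 + parity_drives
    writes = 1 + parity_drives
    return reads + writes

print("RAID 5 (4+1): %d disk I/Os per small write" % small_write_ios(1))  # 4
print("RAID 6 (4+2): %d disk I/Os per small write" % small_write_ios(2))  # 6

# Reads, by contrast, can fan out across every spindle in the group,
# so the 4+2 group has 6 drives serving reads versus 5 in the 4+1 group.
print("Read spindles: RAID 5 4+1 -> 5 drives, RAID 6 4+2 -> 6 drives")
```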
Just like RAID 5 on the CLARiiON, RAID 6 supports the proactive hot spare. When the system sees that a drive is going to fail, it automatically copies the data from the failing disk to a hot spare and marks the disk as bad. Because the data does not have to be rebuilt from parity, this is a very quick operation.
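To get a feel for why the proactive copy is so much cheaper than a rebuild, here is a rough estimate. The drive size and throughput figures are assumptions picked purely for illustration, not CLARiiON specifications.

```python
# Rough comparison of a proactive copy versus a parity rebuild.
# All size and throughput numbers below are illustrative assumptions.

drive_gb = 300       # assumed drive size
copy_mb_s = 60.0     # assumed sequential disk-to-disk copy throughput
rebuild_mb_s = 20.0  # assumed effective rebuild rate: parity math plus
                     # competing host I/O slows a rebuild well below
                     # raw sequential speed

copy_hours = drive_gb * 1024 / copy_mb_s / 3600
rebuild_hours = drive_gb * 1024 / rebuild_mb_s / 3600

print(f"Proactive copy: ~{copy_hours:.1f} hours")    # straight copy
print(f"Parity rebuild: ~{rebuild_hours:.1f} hours") # recompute every block
```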
FLARE 26 now supports a Security Administrator role. Members of this role have no access to the storage settings themselves; they can only create accounts within the array.
A very important change is that the SPs can now be set up to sync their system time to a networked NTP time server. This forces the time on both SPs to be the same. Until now the clocks could drift apart, which could make tracking down event information very hard, as the log entries would have different times in each SP’s log file.
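Here is a small illustration of the problem NTP sync fixes: merging two SP logs by timestamp gives a misleading event order when one SP’s clock is skewed. The log entries and skew value are made up for the example.

```python
# Why SP clock skew makes cross-SP log correlation painful.
# The events are fabricated; SPB's clock runs 90 seconds slow here.

from datetime import datetime, timedelta

skew = timedelta(seconds=-90)  # SPB's clock is 90s behind real time

# (real time of event, SP, message)
events = [
    (datetime(2007, 5, 22, 10, 0, 0), "SPA", "LUN 12 trespass initiated"),
    (datetime(2007, 5, 22, 10, 0, 5), "SPB", "LUN 12 trespass completed"),
]

# Each SP stamps its log with its own clock, so SPB's entry lands 90s early.
logged = [(t + (skew if sp == "SPB" else timedelta()), sp, msg)
          for t, sp, msg in events]

# Merging the logs by recorded timestamp reverses the real order: the
# trespass appears to complete 85 seconds before it was initiated.
for t, sp, msg in sorted(logged):
    print(t, sp, msg)
```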
FLARE 26 now supports replication over the built-in iSCSI ports on the new CX3 line of systems. This is a great change, as before you had to use the iSCSI ports on an FC-IP switch to do this replication. This includes SAN Copy, MirrorView, etc.
MirrorView/S should only be used for connections within ~100 miles, as beyond that you start to get too much latency between the arrays.
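A quick back-of-the-envelope calculation shows where that ~100 mile rule of thumb comes from. The propagation speed is the usual rough figure of about two-thirds the speed of light in fiber, and the distances are just examples.

```python
# Rough propagation delay math behind the ~100 mile MirrorView/S guideline.
# Light in fiber covers roughly 200 km per millisecond (~2/3 of c).

FIBER_KM_PER_MS = 200.0

def sync_mirror_rtt_ms(miles):
    km = miles * 1.609
    # A synchronous write is acknowledged only after the remote array
    # confirms it, so every write pays at least one full round trip.
    return 2 * km / FIBER_KM_PER_MS

for miles in (10, 100, 500):
    print(f"{miles:>4} miles: ~{sync_mirror_rtt_ms(miles):.2f} ms per write")
# At 100 miles that is ~1.6 ms of pure wire time added to every write,
# before any switch, array, or protocol overhead; at 500 miles it is ~8 ms.
```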
Starting later this year (Q3 or so) there will be an extension to MirrorView/S called MirrorView/SE (Cluster Enabler) for Microsoft Cluster Service. This will give you the ability to use a CLARiiON to set up a geographically dispersed cluster. In other words, you can have servers in two different cities set up in a single Windows cluster.