
Thursday, September 22, 2011

SAN Switch Migration - How to Plan and What to Consider

       Zoning migration within a SAN fabric can import complete zone set information and aliases without any effect on the existing fabrics, which simplifies SAN migration between vendors such as Cisco, Brocade and McData.
         Importing the zone set (hardware and software zoning) saves a lot of time during the migration, but some preparation is needed first: prepare the zone set script and check the interop mode with future fabric expansion in mind (this matters if a switch from another vendor will later be added to the fabric).
      Before you begin, save the current production fabric configuration from all SAN switches.
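A hedged Brocade FOS sketch for capturing that (the FTP host, user and path below are placeholders for your environment):

switch>cfgshow
switch>configupload -all -p ftp 192.168.1.50,backupuser,/backups/fabricA_sw1.txt,password

cfgshow prints the current zoning configuration for reference, and configupload writes the full switch configuration to the FTP server so it can be restored if the migration has to be rolled back.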
           Example of importing a zone set: export the zoning information from the old switches, prepare the script, and import it into the new SAN. One important reminder: configure the interoperability mode ("Interopmode") before you import the zone set, because changing the interopmode setting resets the zoning configuration. Make sure all switches that are going to be merged / ISLed are in the same or a compatible interop mode, as sketched below.
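On Brocade FOS the interop mode is checked and changed roughly like this (a sketch; the valid mode values depend on the FOS version, and the switch must be disabled first):

switch>interopmode
switch>switchdisable
switch>interopmode 0
switch>switchenable

Running interopmode with no argument displays the current setting. Because changing it wipes the zoning configuration, set it before importing the zone set script.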
Before ISLing / merging the switches, make sure every switch in the fabric has a unique Domain ID (DID), and determine which switch will be the principal switch in the fabric. This ensures proper fabric management for future expansion; see the sketch below.
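A rough Brocade FOS sketch for checking both (the Domain ID itself is changed through the interactive configure menu, which requires the switch to be disabled):

switch>fabricshow
switch>switchdisable
switch>configure
switch>switchenable
switch>fabricprincipal

In the configure dialog, set a unique Domain under Fabric parameters before re-enabling the switch. fabricshow lists every switch in the fabric with its Domain ID and flags the principal switch with a ">", while fabricprincipal shows (and can influence) the principal selection at the next fabric rebuild.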
Migration between Brocade switches is very simple and easy. Below is the script I prepared before importing into the new Brocade switch.



Create Zone
switch>zonecreate "USUNIXSAN_HBA0_CX1234_SPA0","10:00:00:00:C9:2D:10:12;50:06:01:60:39:01:2D:xx"

switch>zonecreate "USUNIXSAN_HBA0_CX1234_SPB0","10:00:00:00:C9:23:11:13;50:06:01:68:39:01:2D:xx"

switch>zonecreate "SANDUEL_HBA0_CX1234_SPA0","10:00:00:00:C9:2A:10:17;50:06:01:60:39:01:2D:xx"



switch>zonecreate "SANDUEL_HBA0_CX1234_SPB0","10:00:00:00:C9:23:12:3D;50:06:01:68:39:01:2D:xx"

switch>zonecreate "WINAPPS2008_HBA0_CX1234_SPA0","10:00:00:00:C9:2D:10:12;50:06:01:60:39:01:2D:xx"

switch>zonecreate "WINAPPS2008_HBA0_CX1234_SPB0","10:00:00:00:C9:23:11:13;50:06:01:68:39:01:2D:xx"





Config Create

switch>cfgcreate "SANDUEL_FabricA", "USUNIXSAN_HBA0_CX1234_SPA0"

switch>cfgadd "SANDUEL_FabricA", "USUNIXSAN_HBA0_CX1234_SPB0"

switch>cfgadd "SANDUEL_FabricA", "SANDUEL_HBA0_CX1234_SPA0"

switch>cfgadd "SANDUEL_FabricA", "SANDUEL_HBA0_CX1234_SPB0"

switch>cfgadd "SANDUEL_FabricA", "WINAPPS2008_HBA0_CX1234_SPA0"

switch>cfgadd "SANDUEL_FabricA", "WINAPPS2008_HBA0_CX1234_SPB0"





Enable the configuration:

switch>cfgenable "SANDUEL_FabricA"
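After enabling, it is worth saving and verifying the zoning as well (standard Brocade FOS commands):

switch>cfgsave
switch>cfgactvshow

cfgsave persists the defined configuration to flash, and cfgactvshow displays the effective (enabled) configuration for a final check before hosts are brought back online.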

Sunday, August 14, 2011

SAN : Clariion CX4 Series

CX4


Wednesday, August 10, 2011

NAS on SAN ( Network Data Management Protocol )




       NDMP (Network Data Management Protocol) is an open protocol used to control data backup and recovery communications between primary and secondary storage in a heterogeneous network environment. 

       NDMP specifies a common architecture for the backup of network file servers and enables the creation of a common agent that a centralized program can use to back up data on file servers running on different platforms. By separating the data path from the control path, NDMP minimizes demands on network resources and enables localized backups and disaster recovery. With NDMP, heterogeneous network file servers can communicate directly to a network-attached tape device for backup or recovery operations. Without NDMP, administrators must remotely mount the network-attached storage (NAS) volumes on their server and back up or restore the files to directly attached tape backup and tape library devices.


   NDMP addresses a problem caused by the particular nature of network-attached storage devices. These devices are not connected to networks through a central server, so they must have their own operating systems. Because NAS devices are dedicated file servers, they aren't intended to host applications such as backup software agents and clients. Consequently, administrators have to mount every NAS volume by either the Network File System (NFS) or Common Internet File System (CIFS) from a network server that does host a backup software agent. However, this cumbersome method causes an increase in network traffic and a resulting degradation of performance. NDMP uses a common data format that is written to and read from the drivers for the various devices.
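As a concrete, hedged example, on a NetApp filer running Data ONTAP 7-mode the NDMP service is enabled from the filer console like this (other NAS platforms have their own equivalents):

filer> ndmpd on
filer> ndmpd status
filer> ndmpd version

The backup application then speaks NDMP (TCP port 10000 by default) directly to the filer, which streams the data to the tape device without the backup traffic ever crossing a server-side NFS or CIFS mount.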


    Network Data Management Protocol was originally developed by NetApp Inc., but the list of data backup software and hardware vendors that support the protocol has grown significantly. Currently, the Storage Networking Industry Association (SNIA) oversees the development of the protocol.

Thursday, August 4, 2011

Configuring the EMC CLARiiON controller with Access Logix installed


The SAN Volume Controller does not have access to the storage controller logical units (LUs) if Access Logix is installed on the EMC CLARiiON controller. You must use the EMC CLARiiON configuration tools to associate the SAN Volume Controller and LU.
The following prerequisites must be met before you can configure an EMC CLARiiON controller with Access Logix installed:
  • The EMC CLARiiON controller is not connected to the SAN Volume Controller
  • You have a RAID controller with LUs and you have identified which LUs you want to present to the SAN Volume Controller

You must complete the following tasks to configure an EMC CLARiiON controller with Access Logix installed:
  • Register the SAN Volume Controller ports with the EMC CLARiiON
  • Configure storage groups
The association between the SAN Volume Controller and the LU is formed when you create a storage group that contains both the LU and the SAN Volume Controller.
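A rough naviseccli sketch of the storage group steps (the SP address, group name, host name and LUN numbers are placeholders; initiator registration itself is normally done from the Connectivity Status dialog in Navisphere):

naviseccli -h <SP_IP> storagegroup -create -gname SVC_Cluster1
naviseccli -h <SP_IP> storagegroup -connecthost -host SVC_Node1 -gname SVC_Cluster1
naviseccli -h <SP_IP> storagegroup -addhlu -gname SVC_Cluster1 -hlu 0 -alu 20

Here -alu is the array-side LUN number and -hlu is the LUN number the SAN Volume Controller will see.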

EMC Clariion : Access Logix


   Access Logix is an optional feature of the firmware code that provides the functionality that is known as LUN Mapping or LUN Virtualization.
    You can use the software tab in the storage systems properties page of the EMC Navisphere GUI to determine if Access Logix is installed.
  After Access Logix is installed it can be disabled but not removed. The following are the two modes of operation for Access Logix:
 
  • Access Logix not installed: In this mode of operation, all LUNs are accessible from all target ports by any host. Therefore, the SAN fabric must be zoned to ensure that only the SAN Volume Controller can access the target ports.
  • Access Logix enabled: In this mode of operation, a storage group can be formed from a set of LUNs. Only the hosts that are assigned to the storage group are allowed to access these LUNs.
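From the CLI, a hedged way to confirm which mode you are in is to list the installed software packages (the exact package name for Access Logix varies by FLARE release):

naviseccli -h <SP_IP> ndu -list

If an Access Logix package shows up in the output, storage groups are in effect and a host only sees the LUNs in the storage group it belongs to.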

Monday, August 1, 2011

RAID Technology in DMX / Symmetrix Continued



RAID [Redundant Array of Independent (Inexpensive) Disks]

After reading a couple of blogs last week about RAID technology from StorageSearch and StorageIO, I decided to elaborate more on the technology behind RAID and its functionality across storage platforms.

After I had almost finished writing this post, I ran into a Wikipedia article explaining RAID technology at much greater length, covering additional RAID levels such as RAID 2, RAID 4, RAID 10, RAID 50, etc.

For example purposes, let's say we need 5 TB of space; each disk in this example is 1 TB.


RAID 0

Technology: Striping Data with No Data Protection.

Performance: Highest

Overhead: None

Minimum Number of Drives: 2 (since data is striped)

Data Loss: Upon one drive failure

Example: 5TB of usable space can be achieved through 5 x 1TB of disk.

Advantages: High Performance

Disadvantages: Guaranteed Data loss

Hot Spare: Upon a drive failure, a hot spare can be invoked, but there will be no data to copy over. Hot Spare is not a good option for this RAID type.

Supported: Clariion, Symmetrix, Symmetrix DMX (Meta BCV’s or DRV’s)

In RAID 0, the data is written / striped across all of the disks. This is great for performance, but if one disk fails the data is lost, because there is no protection of that data.


RAID 1

Technology: Mirroring and Duplexing

Performance: Highest

Overhead: 50%

Minimum Number of Drives: 2

Data Loss: One drive failure causes no data loss; if both drives in a mirrored pair fail, the data is lost.

Example: 5TB of usable space can be achieved through 10 x 1TB of disk.

Advantages: Highest Performance, One of the safest.

Disadvantages: High overhead and additional load on the storage subsystem. Upon a drive failure the surviving copy runs unprotected.

Hot Spare: A Hot Spare can be invoked and data can be copied over from the surviving paired drive using Disk copy.

Supported: Clariion, Symmetrix, Symmetrix DMX

The same data is written to two disks at the same time. Upon a single drive failure no data is lost and there is no degradation of performance or data integrity. It is one of the safest forms of RAID, but with high overhead. In the old days, all Symmetrix models supported RAID 1 and RAID S. Highly recommended for high-end, business-critical applications.

The controller must be able to perform two concurrent separate Reads per mirrored pair or two duplicate Writes per mirrored pair. One Write or two Reads are possible per mirrored pair. Upon a drive failure only the failed disk needs to be replaced.


RAID 1+0

Technology: Mirroring and Striping Data

Performance: High

Overhead: 50%

Minimum Number of Drives: 4

Data Loss: Losing one drive (an M1) is no issue, and even multiple drive failures across the stripe are survivable as long as each mirror keeps one surviving member. Losing both the M1 and M2 of the same mirrored pair means certain data loss.

Example: 5TB of usable space can be achieved through 10 x 1TB of disk.

Advantages: Similar fault tolerance to RAID 5; high I/O throughput is achievable because of the striping.

Disadvantages: Upon a drive failure, the affected mirror runs unprotected (effectively RAID 0).

Hot Spare: Hot Spare is a good option with this RAID type, since with a failure the data can be copied over from the surviving paired device.

Supported: Clariion, Symmetrix, Symmetrix DMX

RAID 1+0 is implemented as a mirrored array whose segments are RAID 0 arrays.



RAID 3

Technology: Striping Data with dedicated Parity Drive.

Performance: High

Overhead: 33% with parity in this example; more drives in a RAID 3 configuration will bring the overhead down.

Minimum Number of Drives: 3

Data Loss: Upon 1 drive failure, Parity will be used to rebuild data. Two drive failures in the same Raid group will cause data loss.

Example: 5TB of usable space would be achieved through 9 x 1TB disks.

Advantages: Very high read and write data transfer rates. Disk failure has an insignificant impact on throughput. Low ratio of ECC (parity) disks to data disks, which translates to high efficiency.

Disadvantages: The transaction rate is limited to that of a single spindle.

Hot Spare: A hot spare can be configured and invoked upon a drive failure, rebuilt from the parity and surviving data drives; once the failed drive is replaced, the hot spare is used to rebuild the replacement.

Supported: Clariion


RAID 5

Technology: Striping Data with Distributed Parity, Block Interleaved Distributed Parity

Performance: Medium

Overhead: 20% in our example, with additional drives in the Raid group you can substantially bring down the overhead.

Minimum Number of Drives: 3

Data Loss: With one drive failure, no data loss, with multiple drive failures in the Raid group data loss will occur.

Example: For 5TB of usable space, we might need 6 x 1 TB drives

Advantages: Highest read data transaction rate with a medium write data transaction rate. A low ratio of ECC (parity) disks to data disks translates to high efficiency, along with a good aggregate transfer rate.

Disadvantages: Disk failure has a medium impact on throughput. It also has the most complex controller design, is often difficult to rebuild after a disk failure (compared to RAID level 1), and the individual block data transfer rate is the same as a single disk. Ask the PSEs about RAID 5 issues and data loss.

Hot Spare: Similar to RAID 3: a hot spare can be configured and invoked upon a drive failure, rebuilt from the parity and surviving data drives, and then used to rebuild the replaced drive.

Supported: Clariion, Symmetrix DMX code 71

RAID Level 5 also relies on parity information to provide redundancy and fault tolerance using independent data disks with distributed parity blocks. Each entire data block is written onto a data disk; parity for blocks in the same rank is generated on Writes, recorded in a distributed location and checked on Reads.
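The mechanics behind that parity are plain XOR. A toy shell sketch follows; this is not how the array firmware actually does it, but the arithmetic per stripe is the same:

d1=23 ; d2=108 ; d3=77            # three data blocks (toy values)
p=$(( d1 ^ d2 ^ d3 ))             # parity block written to the distributed parity location
rebuilt_d2=$(( d1 ^ d3 ^ p ))     # rebuild d2 after its drive fails
echo "parity=$p rebuilt_d2=$rebuilt_d2"

rebuilt_d2 comes back as 108, which is why a single drive failure costs reads of all the surviving members of the stripe but loses no data.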

This is probably the most popular RAID technology in use today.



RAID 6

Technology: Striping Data with Double Parity, Independent Data Disk with Double Parity

Performance: Medium

Overhead: 28% in our example, with additional drives you can bring down the overhead.

Minimum Number of Drives: 4

Data Loss: No data loss with one or even two drive failures in the same RAID group. Very reliable.

Example: For 5 TB of usable space, we might need 7 x 1TB drives

Advantages: RAID 6 is essentially an extension of RAID level 5 which allows for additional fault tolerance by using a second independent distributed parity scheme (two-dimensional parity). Data is striped on a block level across a set of drives, just like in RAID 5, and a second set of parity is calculated and written across all the drives; RAID 6 provides for an extremely high data fault tolerance and can sustain multiple simultaneous drive failures which typically makes it a perfect solution for mission critical applications.

Disadvantages: Very poor Write performance in addition to requiring N+2 drives to implement because of two-dimensional parity scheme.

Hot Spare: A hot spare can be invoked upon a drive failure, rebuilt from the parity and data drives, and then used to rebuild the replaced drive once it is swapped in.

Supported: Clariion Flare 26, 28, Symmetrix DMX Code 72, 73

Clariion FLARE code 26 supports RAID 6. It is also implemented with the 72 code on the Symmetrix DMX. The simplest explanation of RAID 6 is double the parity: a RAID 6 RAID group can sustain two drive failures in the group while maintaining access to the data.


RAID S (3+1)

Technology: RAID Symmetrix

Performance: High

Overhead: 25%

Minimum Number of Drives: 4

Data Loss: Upon two drive failures in the same Raid Group

Example: For 5 TB of usable space, 8 x 1 TB drives

Advantages: High Performance on Symmetrix Environment

Disadvantages: Proprietary to EMC. RAID S can be implemented on the Symmetrix 8000, 5000 and 3000 series. Known to have backend issues with director replacements, SCSI chip replacements and backend DA replacements, causing data-unavailable (DU) situations or offline procedures.

Hot Spare: Hot Spare can be invoked against a failed drive, data can be built from the parity or the data drives and upon a successful drive replacement, the hot spare can be used to rebuild the replaced drive.

Supported: Symmetrix 8000, 5000, 3000. With the DMX platform it is just called RAID (3+1)

EMC Symmetrix / DMX disk arrays use an alternate, proprietary method for parity RAID called RAID-S: three data drives along with one parity device. RAID-S is proprietary to EMC but appears similar to RAID 5, with some performance enhancements as well as the benefits that come from the array's high-speed disk cache.

The data protection feature is based on a Parity RAID (3+1) volume configuration (three data volumes to one parity volume).

RAID (7+1)

Technology: RAID Symmetrix

Performance: High

Overhead: 12.5%

Minimum Number of Drives: 8

Data Loss: Upon two drive failures in the same Raid Group

Example: For 5 TB of usable space, 8 x 1 TB drives (you will actually get 7 TB usable)

Advantages: High Performance on Symmetrix Environment

Disadvantages: Proprietary to EMC. Available only on the Symmetrix DMX series. Known to have a lot of backend issues with director and backend DA replacements, since the spindle locations have to be verified; a cause of concern for data unavailability (DU).

Hot Spare: Hot Spare can be invoked against a failed drive, data can be built from the parity or the data drives and upon a successful drive replacement, the hot spare can be used to rebuild the replaced drive.

Supported: On the DMX platform it is just called RAID (7+1). Not supported on the older Symmetrix models.

EMC DMX disk arrays use an alternate, proprietary method for parity RAID that is simply called RAID (7+1): seven data drives along with one parity device. It is proprietary to EMC but appears similar to RAID-S or RAID 5, with some performance enhancements as well as the benefits that come from the array's high-speed disk cache.

The data protection feature is based on a Parity RAID (7+1) volume configuration (seven data volumes to one parity volume).

Saturday, July 30, 2011

EMC CLARIION FLARE : Fibre Logic Array Runtime Environment

The Clariion environment is governed by FLARE code and the Symmetrix / DMX by Enginuity code. FLARE code was developed internally at Data General, which was later acquired by EMC.


FLARE: Fibre Logic Array Runtime Environment


The Clariion name comes from Data General, which designed one of the first 16-bit minicomputers, the NOVA. The NOVA was followed by the NOVA II, and AViiON is a rearrangement of those letters. CLARiiON is a simple derivative of that naming convention, and the AViiON heritage still shows in the AX100, AX150 and AX4 models.

EMC Engineering is the crown jewel of EMC, inventing new technology and pushing the envelope in defining future products, technologies and markets. That is exactly what happened with the acquisition of Data General: EMC took the Clariion products and rebranded them with tons of features and user interfaces to make them a flagship product. If you had asked anyone at EMC three to five years ago about their flagship product, the answer would have been Symmetrix; ask them now and it is the Clariion, which has dominated the SMB and mid-tier enterprise market, making it the cash cow at EMC.

Unlike the Enginuity code, the FLARE code is customer self-upgradable. The code sits on the first 5 drives of the Clariion SPE or DAE (depending on the model), the drives marked with the numbered (0 to 4) "do not remove" stickers.

With a code upgrade, the FLARE Operating Environment gets loaded onto the service processor, and this can be performed while the machine is running. The funny part is that a Clariion service processor is merely a PC running 32-bit Microsoft Windows XP (which may have changed with the CX4 to a 64-bit Windows XP version). In short, when you reboot your Clariion service processors, Windows XP starts, loads the FLARE Operating Environment from the first 5 drives and brings the system online.

Do not configure any user/host LUN space on these first 5 drives. Your best bet is to get 5 x 73GB 15K drives and use them only for FLARE code operation. The space the FLARE code occupies is 6GB per disk for release 19 and lower, and 33GB per disk for releases 24 and above. Along with the FLARE Operating Environment, the first 5 drives also hold the PSM LUN (Persistent Storage Manager), the FLARE database LUN and the vault data (the save area for write cache in case of a catastrophic SP failure). Do not move these drives around on the Clariion, and do not insert a different drive type when replacing any of the first 5 drives.

From the Data General days, the FLARE Operating Environment has been pretty open, in the sense that the customer can perform all sorts of changes without restrictions, unlike the Symmetrix and DMX, where a lot of it is done through hardware BIN file changes. Hardware and software upgrades can all be performed by the customers themselves, making it a neat product to work on.

As new FLARE code releases hit the market, customers can download the upgrades from EMC's customer website (Powerlink) and install them themselves (I believe that if you purchased your Clariion from Dell, you have to obtain the FLARE code through Dell).

The service processors run the FLARE Operating Environment from the first 5 drives. During a non-disruptive upgrade (NDU), the FLARE code is loaded on one SP at a time and a reboot is performed. In short, if your failover and redundancy are set up correctly you will not have any outages. It is still highly recommended to perform these changes during quiet times, or possibly take your SQL and Oracle databases down before the upgrade. A good practice is also to collect EMC Grabs from the hosts connected to the Clariion and verify that they are compatible with the new FLARE code.

If you are new to the Clariion environment, it is highly recommended that you perform the pre-installation steps and read the release notes before performing an upgrade, or get professional assistance. It is very normal for customers to go through multiple code upgrades during the 3 to 5 year life cycle of these machines.

The service processors also send service alerts by email or SMS for proactive replacement of failing components, for example a failing drive, a failing SP, backend issues, data sector invalidates, etc. The replacement of these parts should be carried out by an EMC-trained and qualified engineer.

It is common knowledge that you can enter engineering mode on the FLARE code using Ctrl + Shift + F12 and the engineering password. Engineering mode allows you to perform certain functions not available in normal Admin or User mode.

Initially, with the FC series of Clariion, there was no web interface into the service processors; this was added with the CX series of machines. Release 26 added new features that let customers perform a lot of maintenance work themselves, including gathering SP Collects.
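For example, an SP Collect can be gathered from the CLI with naviseccli (a sketch; the SP address is a placeholder and the exact options vary by release):

naviseccli -h <SP_IP> spcollect
naviseccli -h <SP_IP> managefiles -list
naviseccli -h <SP_IP> managefiles -retrieve

spcollect kicks off the collection on that SP, managefiles -list shows the generated zip files, and managefiles -retrieve pulls a file back to your workstation for upload to EMC.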

FLARE Code version information is as follows.

For the sake of this blog we will limit our explanation only to CX, CX3 and CX4 platforms.

Generation 1: CX200, CX400, CX600

Generation 2: CX300, CX500, CX700 including the iSCSI flavors

Generation 3: CX3-10, CX3-20, CX3-40, CX3-80

Generation 4: CX4-120, CX4-240, CX4-480, CX4-960
(last three digits are the number of drives it can support)

The FLARE code revision string is broken down as follows.

1.14.600.5.022 (32 Bit)

2.16.700.5.031 (32 Bit)

2.24.700.5.031 (32 Bit)

3.26.020.5.011 (32 Bit)

4.28.480.5.010 (64 Bit)


The first digit (1, 2, 3 or 4) indicates the generation of machine this code level can be installed on. For the 1st and 2nd generation machines (CX600 and CX700), you should be able to use standard 2nd generation code levels. CX3 code levels have a 3 in front, and so forth.

These numbers will always increase as new Generations of Clariion machines are added.

The next two digits are the release number; this is the number that really brings the additional features to the Clariion FLARE Operating Environment. When someone tells you their Clariion CX3 is running FLARE 26, this is the number they mean.

These numbers will always increase, 28 being the latest FLARE Code Version.

The next 3 digits are the model number of the Clariion, like the CX600, CX700, CX3-20 and CX4-480.

These numbers can be all over the map, depending on what the model number of your Clariion is.

The meaning of the 5 is unknown; it carries over from previous FLARE releases, going back to the pre-CX (FC) days. I believe it was some sort of internal code at Data General indicating a FLARE release.

The last 3 digits are the Patch level of the FLARE Environment. This would be the last known compilation of the code for that FLARE version.
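To make the breakdown concrete, here is a small bash sketch that splits one of the revision strings above into the fields just described (the field names are mine, not EMC's):

rev="3.26.020.5.011"
IFS=. read -r generation release model legacy patch <<< "$rev"
echo "generation=$generation release=$release model=$model patch=$patch"

For this string it prints generation=3, release=26, model=020 and patch=011, matching a CX3-20 running FLARE release 26.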

Again, the CX FLARE Operating Environment is strong and powerful, with lots of features a customer can use, and it blows away a lot of other products in the same market space.


Hope this information was useful in your search for information on the Clariion FLARE Operating Environment.


