Sunday, August 28, 2011

Brocade switch Zoning and Fabric Operations



        When configuring zoning or other fabric-wide settings in a fabric that has products operating with different versions of FOS, it is recommended that the configuration be performed via an interface (such as WebTools) to a product with the most recent version of FOS. 


       Some older versions of FOS do not fully support newer hardware models, and problems may arise when configuring settings through these older products. 

       Zoning configuration in particular should never be performed through a switch operating with FOS v3.x in a fabric that also has products operating with newer releases of FOS firmware.

Brocade Switch Technical Support


Contact your switch supplier for hardware, firmware, and software support, including product repairs and part ordering. To expedite your call, have the following information immediately available:




1. General Information

· Technical Support contract number, if applicable

· Switch model


· Switch operating system version

· Error numbers and messages received


· supportSave command output

· Detailed description of the problem, including the switch or fabric behavior immediately
following the problem, and specific questions

· Description of any troubleshooting steps already performed and the results

· Serial console and Telnet session logs

· Syslog message logs


2. Switch Serial Number

The switch serial number is provided on the serial number label.
 
The serial number label is located as follows:

· Brocade 200E—On the nonport side of the chassis

· Brocade 4100, 4900, and 7500/7500E—On the switch ID pull-out tab located inside the
chassis on the port side on the left

· Brocade 300, 5000, 5100, and 5300—On the switch ID pull-out tab located on the bottom of
the port side of the switch

· Brocade 7600—On the bottom of the chassis

· Brocade 48000—Inside the chassis next to the power supply bays

· Brocade DCX—Bottom right of the port side



3. World Wide Name (WWN)

Use the wwn command to display the switch WWN.

If you cannot use the wwn command because the switch is inoperable, you can get the
WWN from the same place as the serial number, except for the Brocade DCX. For the
Brocade DCX, access the numbers on the WWN cards by removing the Brocade logo
plate at the top of the non-port side. The WWN is printed on the LED side of both cards.
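
A minimal example of pulling the WWN from a serial console or Telnet session (the prompt and the WWN value shown are placeholders, not real addresses):

switch:admin> wwn
10:00:00:05:1e:aa:bb:cc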


Monday, August 22, 2011

EMC Symmetrix Management Console (SMC – For Symmetrix V-Max Systems)

        The Symmetrix Management Console is a very important step towards allowing customers to take control of their Symmetrix V-Max Systems. With the new Symmetrix V-Max comes a new version of Symmetrix Management Console, allowing customers to manage their EMC Symmetrix V-Max Systems through a web browser GUI with many new features and wizards for usability.


       The Symmetrix Management Console was originally developed as a GUI for viewing a customer's Symmetrix DMX environment; over the years it has evolved into a functional and operational tool that not only gathers data from the array but also performs changes. EMC Solutions Enabler (SYMCLI) is a CLI-based interface to the DMX and V-Max systems, and SMC complements the CLI by allowing customers to perform more or less the same functions through a GUI. The look and feel of SMC also resembles ECC (EMC Control Center), and customers sometimes refer to it as ECC-lite.




EMC Symmetrix Management Console in action monitoring EMC Symmetrix V-Max Systems

Some of the important features and benefits of the SMC for V-Max are listed below:

1)    Allows customers to manage multiple EMC Symmetrix V-Max Systems

2)    Increases customer management efficiency by using Symmetrix Management Console to automate or perform functions with a few clicks

3)    The Symmetrix Management Console 7.0 only works with Symmetrix V-Max systems

4)    The Symmetrix Management Console is installed on the Service Processor of the V-Max System and can also be installed on a host in the SAN environment.

5)    Customers can now do trending, performance reporting, planning and consolidation using SMC

6)    SMC will help customers reduce their TCO with V-Max Systems

7)    It takes minutes to install. A Windows environment running Windows Server 2003 along with IIS would be the best choice.

8)    The interface customers work with is a GUI. It has the look and feel of ECC, and the Console also integrates with ECC.

9)    New Symmetrix V-Max systems are configured and managed through the Symmetrix Management Console.

10) SMC also manages user, host permissions and access controls

11) Alert Management

12) From a free product, SMC has become a licensed product that customers have to pay for

13) It allows customers to perform configuration changes such as creating, mapping and masking devices, changing device attributes, flag settings, etc.

14) Perform replication functions using SMC like Clone, Snap, Open Replicator, etc

15) SMC enables Virtual Provisioning with the Symmetrix V-Max arrays

16) Enables Virtual LUN technology for automated policies and tiering.

17) Auto Provisioning Group technology is offered through wizards in SMC

18) Dynamic Cache Partitioning: Allocates and deallocates cache based on policies and utilization.

19) Symmetrix Priority Controls

20) From the SMC, customers can now launch SPA (Symmetrix Performance Analyzer), which is along the lines of Workload Analyzer, a standard component of the ECC Suite. This allows customers to view and monitor their storage and application performance. SPA can be obtained as an add-on product from EMC based on licensing.




Virtual LUN Technology at work using a wizard

21) The SMC gives the customer capabilities for Discovery, Configuration, Monitoring, Administration and Replication Management.

22) SMC can be obtained from EMC Powerlink or through your account manager from EMC if you have an active contract in place with EMC for hardware/software maintenance or if your systems are under warranty.

A highly recommended management tool for SAN admins, and yes, it's not free anymore for V-Max systems.

 

EMC Symmetrix: Calculations for Heads, Tracks, Cylinders, GB

Here is the quick and dirty math for converting EMC Symmetrix heads, tracks and cylinder sizes to actual usable GB of space.
Based on the different generations of Symmetrix systems, here is how the conversions work.
Before we jump into each model type, let's look at the basics, with the following calculations.
There are s number of splits (hypers) per physical device.
There are n number of cylinders per split (hyper).
There are 15 tracks per cylinder (heads).
There are either 64 or 128 blocks of 512 bytes per track.

All the calculations discussed here are for Open Systems (FBA) device types. Different device emulations like 3380K, 3390-1, 3390-2, 3390-3, 3390-4, 3390-27 and 3390-54 have different bytes/track, bytes/cylinder and cylinders/volume.

Symmetrix 8000/DMX/DMX-2 Series

Enginuity Code: 5567, 5568, 5669, 5670, 5671
Includes EMC Symmetrix 8130, 8230, 8430, 8530, 8730, 8830, DMX1000, DMX2000, DMX3000 and various different configurations within those models.
GB = Cylinders * 15 * 64 * 512 / 1024 / 1024 / 1024
e.g., a 6140-cylinder device equates to 2.81 GB of usable data:
6140 * 15 * 64 * 512 / 1024 / 1024 / 1024 = 2.81 GB

Cylinders = GB / 15 / 64 / 512 * 1024 * 1024 * 1024
Where
15 = tracks per cylinder
64 = blocks per track
512 = bytes per block
1024 = conversions of bytes to kb to mb to gb.
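
For quick conversions at the command line, here is a throwaway shell helper based on the formulas above (a sketch; it assumes bash and bc are available, and bc truncates rather than rounds):

cyl_to_gb() {   # usage: cyl_to_gb <cylinders> <blocks_per_track: 64 or 128>
    echo "scale=2; $1 * 15 * $2 * 512 / 1024 / 1024 / 1024" | bc
}

cyl_to_gb 6140 64      # Symm 8000/DMX/DMX-2 example above -> 2.81
cyl_to_gb 65520 128    # DMX-3/DMX-4 example below -> 59.98 (~59.99)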

Symmetrix DMX-3/DMX-4 Series

Enginuity Code: 5771, 5772, 5773
Includes EMC Symmetrix DMX-3, DMX-4 and various different configurations within those models.
GB = Cylinders * 15 * 128 * 512 / 1024 / 1024 / 1024
E.g., a 65520-cylinder device equates to roughly 59.99 GB of usable data:
65520 * 15 * 128 * 512 / 1024 / 1024 / 1024 = 59.99 GB

Cylinders = GB / 15 / 128 / 512 * 1024 * 1024 * 1024
15 = tracks per cylinder
128 = blocks per track
512 = bytes per block
1024 = conversions of bytes to kb to mb to gb

Symmetrix V-Max

Enginuity Code: 5874
Includes EMC Symmetrix V-Max and various different configurations within this model.
GB = Cylinders * 15 * 128 * 512 / 1024 / 1024 / 1024
Eg: 262668 Cylinder device equates to 240.47 GB of usable data
262668 * 15 * 128 * 512 / 1024 / 1024 / 1024 = 240.47 GB

Cylinders = GB / 15 / 128 / 512 * 1024 * 1024 * 1024
15 = tracks per cylinder
128 = blocks per track
512 = bytes per block
8 = bytes per block reserved for T10-DIF (drives are formatted with 520-byte blocks; 520 - 512 = 8)
1024 = conversions of bytes to kb to mb to gb
Drive format on a V-Max is 520 bytes per block, out of which 8 bytes are used for T10-DIF (see the post on DMX-4 and V-Max differences).

Friday, August 19, 2011

EMC Symmetrix : Volume Logix

      The procedure for getting Fibre Channel based hypervolume extensions (HVEs) visible on systems, particularly SUN systems, is as follows:


1. Appropriately zone so the Host Bus Adapter (HBA) can see the EMC Fibre Adapter (FA).

2. Reboot the system so it can see the vcm database disk on the FA OR

    1. SUN:

            1. drvconfig -i sd; disks; devlinks (SunOS <= 5.7)

            2. devfsadm -i sd (SunOS >= 5.7 (w/patches))
     2. HP:
             
            1. ioscan -f # Note the new hw address
           

            2. insf -e -H ${hw}

3. Execute vcmfind to ensure the system sees the Volume Logix database.

4. ID the mapped information

         1. Map HVEs to the FA if not already done.
     
         2. symdev list -SA ${fa} to see what’s mapped.

         3. symdev show ${dev} to ID the LUN that ${dev} is mapped as. The display should look something like:

    Front Director Paths (4):
    {
    -----------------------------------------------------------------
       POWERPATH           DIRECTOR       PORT
    ------------------    ------------   --------------------------
    PdevName       Type   Num   Type     Num  Sts  VBUS  TID  LUN
    -----------------------------------------------------------------
    Not Visible    N/A    03A   FA       0    RW   000   00   70
    Not Visible    N/A    14A   FA       0    NR   000   00   70
    Not Visible    N/A    03B   FA       0    NR   000   00   70
    Not Visible    N/A    14B   FA       0    NR   000   00   70
    }

    The number you're looking for is under the LUN column. Remember, it's hex, so the LUN that will show up in the ctd name is 0x70 = 112, i.e. c#t#d112

       5. On SUN systems, modify the /kernel/drv/sd.conf file so the system will see the new disks. You’ll need to do a reconfig reboot after modifying this file. If the system doesn’t see it on a reconfig reboot, this file is probably the culprit!
      
       6. fpath adddev -w ${hba_wwn} -f ${fa} -r "${list_of_EMC_devs}"
You can specify multiple EMC device ranges; just separate them by spaces, not commas
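
A hypothetical invocation, purely to show the shape of the command (the WWN, FA and device range values are made-up placeholders; verify the device-range syntax against your Volume Logix documentation):

       fpath adddev -w 10000000c9201234 -f 3a -r "0040-004F 0060-0063"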
      
       7. Reboot the system so it can see the new disks on the FA OR

1. SUN:

      1. drvconfig -i sd; disks; devlinks (SunOS <= 5.7)

      2. devfsadm -i sd (SunOS >= 5.7 (w/patches))

2. HP:

       1. ioscan -f # Note the new hw address
       

       2. insf -e -H ${hw} 

Thursday, August 18, 2011

EMC Symmetrix LUN Allocation


Minimum Requirements:

 

         Knowledge of basic Symmetrix architecture

             Operating systems knowledge

             A test Symmetrix and hosts to try things on


      Symmetrix Allocation Steps 

       Step 1: Create symmetrix devices from the free space.

                    

            To create a Symmetrix device, we first need to know what type of device to create, for example RAID-5, RAID-1, etc. I'm going to write the commands for both. To start with, we need to create a simple text file and add the below line to the file.

      filename: raid1.txt

      create dev count=xx, size=17480, emulation=FBA, config=2-way-mir, disk_group=x;
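
      The RAID-5 equivalent of the same line would look something like this (a sketch; data_member_count=3 gives a 3+1 RAID-5 group, and support depends on your Enginuity/Solutions Enabler level):

      create dev count=xx, size=17480, emulation=FBA, config=RAID-5, data_member_count=3, disk_group=x;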


      Command Explanation:

      dev count=xx
      (replace xx with the number of devices we need to create)

      size=17480
      (the device size; by default symconfigure interprets this value as a cylinder count)

      emulation=FBA
      (FBA = Fixed Block Architecture, used for Open Systems hosts such as Solaris, HP-UX and Windows)

      config=2-way-mir
      (configure the devices as RAID-1, one of the oldest configurations available in all Symmetrix models)

      disk_group=x
      (disk groups are created to differentiate tiers, performance and capacity; based on the requirements, we can select the desired disk group number for the new Symmetrix devices)


                Once you add the above line to the text file, save it and check the syntax. To make any configuration changes in the Symmetrix we need to run the below commands. Ensure that the raid1.txt file is in your current working directory.
        

      symconfigure -sid xxxx -f raid1.txt preview -V 

      symconfigure -sid xxxx -f raid1.txt prepare -V 

      symconfigure -sid xxxx -f raid1.txt commit -V



      Command Explanation:


       symconfigure
      (This command is used to manage major configuration changes, display the capacity of the Symmetrix, and manage dynamic (hot) spares and device reservations)

      -sid
      (Symmetrix ID, always prefix with hyphen (-) )

      -f
      (filename; the command file to use, in this example raid1.txt)

      preview
      (The preview argument verifies the syntax and correctness of each individual change defined, and then terminates the session without change execution.)

      prepare
      (The prepare argument performs the preview checks and also verifies the appropriateness of the resulting configuration definition against the current state of the Symmetrix array)

      commit
      (The commit argument completes all stages and executes the changes in the specified Symmetrix array.)
      -V (Yes, you're right, it's verbose mode)

                That's it! After running the above commands, the new devices are created. That is how you create Symmetrix devices from SYMCLI. Let us assume that the device IDs are 001 through 00A (device IDs are hexadecimal).
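
      To double-check the new devices before moving on, a listing like the one below can be used (a sketch; the -range option and the 001:00A range are assumed values for this example):

      symdev list -sid xxxx | more
      symdev list -sid xxxx -range 001:00A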


      Step 2: Search for free LUN ID on the FA (Fibre Adapters)

       

            After creating the devices, we need to map them to the Fibre Adapters (in legacy Symmetrix, these would be SCSI Adapters (SA)). If we need to do it from ECC (EMC Control Center, now called Ionix), we would: a. right-click on the device, b. go to 'Configure', c. select and click 'SDR Device Mapping' and follow the wizard. Here I'll be writing the commands to do the same.


      symcfg -sid xxxx list -available -address -fa xy -p n |more

      Command Explanation:

      symcfg
      (Discovers or displays Symmetrix configuration information)

      list
      (Lists brief or detailed information about your Symmetrix configuration.)

      -available -address
      (Requests the next available Vbus, TID, or LUN address be appended to the output list. Used with the -address option.)

      -fa
      (Confines the action to a Fibre Adapter (FA) director number)

      xy
      (x is the director number eg. 8 and y is the processor number eg. a or b)

      -p n
      (p is the port and n is the number eg. 0 or 1)


            Up to DMX-4 we follow Rule 17: a device is mapped to a pair of directors whose numbers add up to 17 (for example 8 and 9), so repeat the command for the other FA, in this example 9.
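
      For instance, with the FAs used in the rest of this example (the Symmetrix ID is a placeholder):

      symcfg -sid 1234 list -available -address -fa 8a -p 0 | more
      symcfg -sid 1234 list -available -address -fa 9a -p 0 | more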


      Step 3: Mapping a device to the FA

       

              Now, let's assume that LUN IDs 52 onwards are free to use on both FAs 8a and 9a, and that the file map.txt contains the below commands. After saving the file, we run the symconfigure commands as shown above in Step 1. The below commands will map device 0001 to FAs 8a:0 and 9a:0, with LUN ID 52 on both FAs.



      map dev 0001  to dir 8a:0 target=0,lun=52;

       

      map dev 0001  to dir 9a:0 target=0,lun=52;
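
      Applying the mapping file follows the same pattern as Step 1 (the Symmetrix ID is a placeholder):

      symconfigure -sid xxxx -f map.txt preview -V

      symconfigure -sid xxxx -f map.txt commit -V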

       

       

      Command Explanation:

      map
      (map a device to fa)

      dev
      (symmetrix device ID)

      dir 8a
      (FA:port no#)

      target
      (The SCSI target ID (hex value))

      lun
      (Specifies the LUN addresses to be used for each device that is to be added for the host HBA.)


      Step 4: LUN Masking a device to the FA

       

       

              The next-to-last step is to do the LUN masking, which performs control and monitoring operations on the device masking environment. Running the below commands provides RW access to device 0001 for a server with two HBA ports.

      symmask -sid xxxx -wwn 10000000c130880a -dir 8a -p 0 add dev 0001

       

       

      symmask -sid xxxx -wwn 10000000c131084a -dir 9a -p 0 add dev 0001

       

       

      Command Explanation:

      symmask
      (Sets up or modifies Symmetrix device masking functionality.)

      -wwn
       (World Wide Name of the Host Bus Adapter (HBA) zoned with the FA)

      add
      (Adds devices to the device masking record in the database with the matching WWN)




      Step 5: Update and refresh the Symmetrix database, the VCMDB (Volume Control Manager Database); this is performed from the HP-UX server.

       

       

                The below command looks pretty simple but is a very important command to append all changes to the VCMDB. It ensures the database is updated, which protects the changes made to the Symmetrix.

      symmask -sid xxxx refresh

       

       

      refresh
      (updates and refreshes the VCMDB)


                To confirm whether the Symmetrix allocation was done properly, we can run the symcfg command as shown above in Step 2. The output should show LUN ID 52 occupied by Symmetrix device 0001.
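
      The masking entries can also be verified directly from the database (a sketch; symmaskdb ships with Solutions Enabler, and the director/port values are the ones used in this example):

      symmaskdb -sid xxxx list database -dir 8a -p 0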


       

      Steps to perform from HP-UX server 

       

              Once we finish the tasks on the EMC end, we need to scan for the new LUN from the operating system. First we'll deal with the HP-UX server. The following commands need to be executed in the same order on the server to start using the new device.


      #ioscan -fnC disk

       

      The ioscan command displays a list of system disks with identifying information, location, and size.

      #insf -e

       

                The insf command installs special files in the devices directory, normally /dev. If required, insf creates any subdirectories that are defined for the resulting special file. After running this command the new devices will be added to the /dev directory as special device file. 

      #powermt config


      Configures logical devices as PowerPath devices. It searches for the EMC devices and adds them as PowerPath devices.

      #powermt display dev=all


            Displays configured PowerPath devices. If the previous command ran successfully, we should see the new device in the output list.

      #powermt save


      Saves a custom PowerPath configuration. Once we see the new device in the previous command's output, we can save the PowerPath configuration database.


      After the successful completion of the above steps we can use the devices. Further, using a volume manager we can add them to an existing volume group or a new volume group.



       

       

      To allocate LUNs on a V-Max array, the following steps have to be performed (courtesy: Sanjeev Tanwar).

       

       

      1. Create a storage group (containing Symmetrix devices)

      2. Create a port group (one or more director/port combinations)

      3. Create an initiator group (one or more host WWNs)

      4. Create a masking view containing the storage group, port group, and initiator group.
      When a masking view is created, the devices are automatically masked and mapped.

       

      Creating Storage Group

       

      #symaccess create -sid xxx -name SG1 -type storage devs 01c,03c

       

      Creating Port Group

       

      #symaccess create -sid xxx -name PG1 -type port -dirport 6d:0,7e:1

       

      Creating Initiator Group

       

      #symaccess create -sid xxx -name IG1 -type initiator -file txt1

       

      txt1 contains:

      wwn:21000000008b090

      wwn:21000000008c090


      Creating a masking view

       

       

      #symaccess create view -name test_view -sg SG1 -pg PG1 -ig IG1
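
      Once the view exists, it can be verified (a sketch; 'list view' and 'show view' are assumed to be available in your Solutions Enabler version):

      #symaccess -sid xxx list view

      #symaccess -sid xxx show view test_view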

       

       

      Monday, August 15, 2011

      NetApp : How to restore data from an aggregate snapshot


                Today one of our users found himself in wet pants when he noticed his robocopy job had overwritten a folder rather than appending new data to it. Panicking, he ran to me looking for any tape or snapshot backup of his original data, which unfortunately wasn't there, as he had previously confirmed that this data didn't need any kind of protection.

      Now at this point I had only one place left where I could recover the data: aggregate-level snapshots. So I looked at the aggregate snapshots and saw they went back to the time when he still had the data in place. Knowing that data deleted from a volume is still locked in the aggregate's snapshots, I was feeling good that I had done a good job by keeping some space reserved for aggregate-level snapshots, which no one ever advocated.


      Now the next step was to recover the data. The problem was that if I reverted the aggregate using "snap restore -A", all the volumes in that aggregate would be reverted, which would be an even bigger problem. So I had to go a different way: use the aggregate copy function to copy the aggregate's snapshot to an empty aggregate and then restore the data from there.


      Here's the cookbook for this.


      Pre-checks:


      • The volume you lost data from is a flexible volume
      • Identify an aggregate which is empty so it can be used for destination (could be on another controller also)
      • Make sure the destination aggregate is either equal or larger than source aggregate
      • /etc/hosts.equiv has entry for the filer you want to copy data to and /etc/hosts has its IP address added, in case of copying on same controller loopback address (127.0.0.1) should be added in /etc/hosts file and local filername should be in hosts.equiv file
      • Name of aggregate's snapshot which you want to copy


      Example:

      Let's say the volume we lost data from is 'vol1', the aggregate which contains this volume is 'aggr_source', the aggregate snapshot which still has the lost data is 'hourly.1', and the empty aggregate we will be copying data to is 'aggr_destination'.


      Execution:


      • Restrict the destination aggregate using 'aggr restrict aggr_destination'
      • Start the aggregate data copy using 'aggr copy start -s hourly.1 aggr_source aggr_destination'
      • Once the copy is completed, online the aggregate using 'aggr online aggr_destination'
      • If you have done the copy on the same controller, the system will rename the volume 'vol1' of 'aggr_destination' to 'vol1(1)'
      • Now export the volume or LUN and you have all your lost data available.
      So here's the answer to another popular question: why do I need to reserve space for aggregate-level snapshots? Do you have the answer now?
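
      For reference, the execution above as a single console session might look like the following (a sketch; the 'filer>' prompt and the use of 'aggr copy status' to monitor progress are assumptions):

      filer> aggr restrict aggr_destination
      filer> aggr copy start -s hourly.1 aggr_source aggr_destination
      filer> aggr copy status
      filer> aggr online aggr_destination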

      EMC Symmetrix : DRV Devices (Dynamic Reallocation Volume)

              A Dynamic Reallocation Volume (DRV) is a non-user-addressable Symmetrix device used by Symmetrix Optimizer to temporarily hold user data while a reorganization of devices is being executed. Typically it is used in logical volume swapping operations.


              The diagram below illustrates the stages and status of volumes during a complete volume swap (Volume 1 with Volume 4).
           


      Volume Stages of DRV SWAP
       



      EMC Symmetrix : Device Masking (VCM) Devices

      Symmetrix device masking or VCM devices are Symmetrix devices that have been masked for visibility to certain hosts. The device masking database (VCMDB), which holds device masking records, typically resides on a small disk device (such as a 48-cylinder, 24 MB device). For more information, see the EMC Solutions Enabler Symmetrix Device Masking CLI Product Guide.

      Sunday, August 14, 2011

      EMC Symmetrix : Dynamic RDF Devices

      Since Enginuity version 5568, devices can be configured as dynamic RDF-capable devices. Dynamic RDF enables you to create, delete and swap SRDF pairs while the Symmetrix array is in operation.

      You can also establish SRDF device pairs from non-SRDF devices, and then synchronize and manage them in the same way as preconfigured SRDF pairs.

      The dynamic RDF configuration state of the Symmetrix array must be enabled in SymmWin or via the Configuration Manager, and the devices must be designated as dynamic RDF-capable devices.


      Thank you.
      Don't forget to leave your valuable comment.

      SAN : Clariion CX4 Series

      CX4
                                                   


      Wednesday, August 10, 2011

      NAS on SAN (Network Data Management Protocol)




             NDMP (Network Data Management Protocol) is an open protocol used to control data backup and recovery communications between primary and secondary storage in a heterogeneous network environment. 

             NDMP specifies a common architecture for the backup of network file servers and enables the creation of a common agent that a centralized program can use to back up data on file servers running on different platforms. By separating the data path from the control path, NDMP minimizes demands on network resources and enables localized backups and disaster recovery. With NDMP, heterogeneous network file servers can communicate directly to a network-attached tape device for backup or recovery operations. Without NDMP, administrators must remotely mount the network-attached storage (NAS) volumes on their server and back up or restore the files to directly attached tape backup and tape library devices.


         NDMP addresses a problem caused by the particular nature of network-attached storage devices. These devices are not connected to networks through a central server, so they must have their own operating systems. Because NAS devices are dedicated file servers, they aren't intended to host applications such as backup software agents and clients. Consequently, administrators have to mount every NAS volume by either the Network File System (NFS) or Common Internet File System (CIFS) from a network server that does host a backup software agent. However, this cumbersome method causes an increase in network traffic and a resulting degradation of performance. NDMP uses a common data format that is written to and read from the drivers for the various devices.


          Network Data Management Protocol was originally developed by NetApp Inc., but the list of data backup software and hardware vendors that support the protocol has grown significantly. Currently, the Storage Networking Industry Association (SNIA) oversees the development of the protocol.

      Common Internet File System (CIFS)

          Common Internet File System (CIFS) is a protocol that lets programs make requests for files and services on remote computers on the Internet. CIFS uses the client/server programming model. A client program makes a request of a server program (usually in another computer) for access to a file or to pass a message to a program that runs in the server computer. The server takes the requested action and returns a response.

          CIFS is a public or open variation of the Server Message Block (SMB) protocol developed and used by Microsoft. Like the SMB protocol, CIFS runs at a higher level than, and uses, the Internet's TCP/IP protocol. CIFS is viewed as a complement to existing Internet application protocols such as the File Transfer Protocol (FTP) and the Hypertext Transfer Protocol (HTTP).


      CIFS lets you:
      • Get access to files that are local to the server and read and write to them
      • Share files with other clients using special locks
      • Restore connections automatically in case of network failure
      • Use Unicode file names

      Thursday, August 4, 2011

      Configuring the EMC CLARiiON controller with Access Logix installed


      The SAN Volume Controller does not have access to the storage controller logical units (LUs) if Access Logix is installed on the EMC CLARiiON controller. You must use the EMC CLARiiON configuration tools to associate the SAN Volume Controller and LU.
      The following prerequisites must be met before you can configure an EMC CLARiiON controller with Access Logix installed:
      • The EMC CLARiiON controller is not connected to the SAN Volume Controller
      • You have a RAID controller with LUs and you have identified which LUs you want to present to the SAN Volume Controller

      You must complete the following tasks to configure an EMC CLARiiON controller with Access Logix installed:
      • Register the SAN Volume Controller ports with the EMC CLARiiON
      • Configure storage groups
      The association between the SAN Volume Controller and the LU is formed when you create a storage group that contains both the LU and the SAN Volume Controller.
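
      As a rough illustration of the storage-group task, something along these lines can be done with the Navisphere CLI (a sketch only; the use of naviseccli here is an assumption, and the SP address, group name, host name and LUN numbers are placeholders):

      naviseccli -h <sp_ip> storagegroup -create -gname SVC_SG
      naviseccli -h <sp_ip> storagegroup -addhlu -gname SVC_SG -hlu 0 -alu 20
      naviseccli -h <sp_ip> storagegroup -connecthost -host SVC_NODE1 -gname SVC_SG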

      EMC Clariion : Access Logix


         Access Logix is an optional feature of the firmware code that provides the functionality that is known as LUN Mapping or LUN Virtualization.
          You can use the software tab in the storage systems properties page of the EMC Navisphere GUI to determine if Access Logix is installed.
        After Access Logix is installed it can be disabled but not removed. The following are the two modes of operation for Access Logix:
       
      • Access Logix not installed: In this mode of operation, all LUNs are accessible from all target ports by any host. Therefore, the SAN fabric must be zoned to ensure that only the SAN Volume Controller can access the target ports.
      • Access Logix enabled: In this mode of operation, a storage group can be formed from a set of LUNs. Only the hosts that are assigned to the storage group are allowed to access these LUNs.

      Monday, August 1, 2011

      RAID Technology in DMX / Symmetrix Continued



      RAID [Redundant Array of Independent (Inexpensive) Disk]

      After reading a couple of blogs last week about RAID technology from StorageSearch and StorageIO, I decided to elaborate more on the technology behind RAID and its functionality across storage platforms.

      After I had almost finished writing this post, I ran into a Wikipedia article explaining RAID technology at much greater length, covering different types of RAID like RAID 2, RAID 4, RAID 10, RAID 50, etc.

      For example purposes, let's say we need 5 TB of space; each disk in this example is 1 TB.
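
      The overhead percentages quoted below are just the ratio of parity (or mirror) drives to total drives in a group; here is a throwaway shell helper to reproduce the parity cases (a sketch; it assumes bash and bc are available, and bc truncates rather than rounds):

      overhead_pct() {   # usage: overhead_pct <data_drives> <parity_drives>
          echo "scale=1; $2 * 100 / ($1 + $2)" | bc
      }

      overhead_pct 3 1   # RAID S (3+1)      -> 25.0
      overhead_pct 7 1   # RAID (7+1)        -> 12.5
      overhead_pct 5 2   # RAID 6, 5+2 group -> 28.5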


      RAID 0

      Technology: Striping Data with No Data Protection.

      Performance: Highest

      Overhead: None

      Minimum Number of Drives: 2 (since data is striped)

      Data Loss: Upon one drive failure

      Example: 5TB of usable space can be achieved through 5 x 1TB of disk.

      Advantages: High Performance

      Disadvantages: Guaranteed Data loss

      Hot Spare: Upon a drive failure, a hot spare can be invoked, but there will be no data to copy over. Hot Spare is not a good option for this RAID type.

      Supported: Clariion, Symmetrix, Symmetrix DMX (Meta BCV’s or DRV’s)

      In RAID 0, the data is written / striped across all of the disks. This is great for performance, but if one disk fails, the data will be lost because there is no protection of that data.


      RAID 1

      Technology: Mirroring and Duplexing

      Performance: Highest

      Overhead: 50%

      Minimum Number of Drives: 2

      Data Loss: A single drive failure causes no data loss; if both drives of a mirrored pair fail, all the data is lost.

      Example: 5TB of usable space can be achieved through 10 x 1TB of disk.

      Advantages: Highest Performance, One of the safest.

      Disadvantages: High overhead, additional load on the storage subsystem. Upon a drive failure it becomes RAID 0 (unprotected).

      Hot Spare: A Hot Spare can be invoked and data can be copied over from the surviving paired drive using Disk copy.

      Supported: Clariion, Symmetrix, Symmetrix DMX

      The exact data is written to two disks at the same time. Upon a single drive failure there is no data loss and no degradation, performance or data integrity issues. It is one of the safest forms of RAID, but with high overhead. In the old days, all Symmetrix arrays supported RAID 1 and RAID S. Highly recommended for high-end, business-critical applications.

      The controller must be able to perform two concurrent separate Reads per mirrored pair or two duplicate Writes per mirrored pair. One Write or two Reads are possible per mirrored pair. Upon a drive failure only the failed disk needs to be replaced.


      RAID 1+0

      Technology: Mirroring and Striping Data

      Performance: High

      Overhead: 50%

      Minimum Number of Drives: 4

      Data Loss: Upon a single drive (M1) failure, no issues. With multiple drive failures confined to one side of the mirror (the M1s) across the stripe, no issues. With failure of both the M1 and M2 of the same pair, data loss is certain.

      Example: 5TB of usable space can be achieved through 10 x 1TB of disk.

      Advantages: Fault tolerance similar to RAID 5; because of striping, high I/O is achievable.

      Disadvantages: Upon a drive failure, it becomes RAID 0.

      Hot Spare: Hot Spare is a good option with this RAID type, since with a failure the data can be copied over from the surviving paired device.

      Supported: Clariion, Symmetrix, Symmetrix DMX

      RAID 1+0 is implemented as a mirrored array whose segments are RAID 0 arrays.



      RAID 3

      Technology: Striping Data with dedicated Parity Drive.

      Performance: High

      Overhead: 33% Overhead with Parity (in the example above), more drives in Raid 3 configuration will bring overhead down.

      Minimum Number of Drives: 3

      Data Loss: Upon 1 drive failure, Parity will be used to rebuild data. Two drive failures in the same Raid group will cause data loss.

      Example: 5TB of usable space would be achieved through 9 x 1TB disks.

      Advantages: Very high Read data transfer rate. Very high Write data transfer rate. Disk failure has an insignificant impact on throughput. Low ratio of ECC (Parity) disks to data disks which converts to high efficiency.

      Disadvantages: Transaction rate will be equal to the single Spindle speed

      Hot Spare: A Hot Spare can be configured and invoked upon a drive failure which can be built from parity device. Upon drive replacement, hot spare can be used to rebuild the replaced drive.

      Supported: Clariion


      RAID 5

      Technology: Striping Data with Distributed Parity, Block Interleaved Distributed Parity

      Performance: Medium

      Overhead: 20% in our example, with additional drives in the Raid group you can substantially bring down the overhead.

      Minimum Number of Drives: 3

      Data Loss: With one drive failure, no data loss, with multiple drive failures in the Raid group data loss will occur.

      Example: For 5TB of usable space, we might need 6 x 1 TB drives

      Advantages: It has the highest Read data transaction rate and with a medium write data transaction rate. A low ratio of ECC (Parity) disks to data disks which converts to high efficiency along with a good aggregate transfer rate.

      Disadvantages: Disk failure has a medium impact on throughput. It also has the most complex controller design. It is often difficult to rebuild in the event of a disk failure (as compared to RAID level 1), and the individual block data transfer rate is the same as a single disk. Ask the PSEs about RAID 5 issues and data loss.

      Hot Spare: Similar to RAID 3, where a Hot Spare can be configured and invoked upon a drive failure which can be built from parity device. Upon drive replacement, hot spare can be used to rebuild the replaced drive.

      Supported: Clariion, Symmetrix DMX code 71

      RAID Level 5 also relies on parity information to provide redundancy and fault tolerance using independent data disks with distributed parity blocks. Each entire data block is written onto a data disk; parity for blocks in the same rank is generated on Writes, recorded in a distributed location and checked on Reads.

      This would qualify as the most popular RAID technology in use today.



      RAID 6

      Technology: Striping Data with Double Parity, Independent Data Disk with Double Parity

      Performance: Medium

      Overhead: 28% in our example, with additional drives you can bring down the overhead.

      Minimum Number of Drives: 4

      Data Loss: No data loss with one or even two drive failures in the same RAID group. Very reliable.

      Example: For 5 TB of usable space, we might need 7 x 1TB drives

      Advantages: RAID 6 is essentially an extension of RAID level 5 which allows for additional fault tolerance by using a second independent distributed parity scheme (two-dimensional parity). Data is striped on a block level across a set of drives, just like in RAID 5, and a second set of parity is calculated and written across all the drives; RAID 6 provides for an extremely high data fault tolerance and can sustain multiple simultaneous drive failures which typically makes it a perfect solution for mission critical applications.

      Disadvantages: Very poor Write performance in addition to requiring N+2 drives to implement because of two-dimensional parity scheme.

      Hot Spare: Hot Spare can be invoked against a drive failure, built it from parity or data drives and then upon drive replacement use that hot spare to build the replaced drive.

      Supported: Clariion Flare 26, 28, Symmetrix DMX Code 72, 73

      Clariion Flare Code 26 supports RAID 6. It is also implemented with the 72 code on the Symmetrix DMX. The simplest explanation of RAID 6 is double the parity: it allows a RAID 6 RAID group to sustain two drive failures in the group while maintaining access to the data.


      RAID S (3+1)

      Technology: RAID Symmetrix

      Performance: High

      Overhead: 25%

      Minimum Number of Drives: 4

      Data Loss: Upon two drive failures in the same Raid Group

      Example: For 5 TB of usable space, 8 x 1 TB drives

      Advantages: High Performance on Symmetrix Environment

      Disadvantages: Proprietary to EMC. RAID S can be implemented on Symmetrix 8000, 5000 and 3000 Series. Known to have backend issues with director replacements, SCSI Chip replacements and backend DA replacements causing DU or offline procedures.

      Hot Spare: Hot Spare can be invoked against a failed drive, data can be built from the parity or the data drives and upon a successful drive replacement, the hot spare can be used to rebuild the replaced drive.

      Supported: Symmetrix 8000, 5000, 3000. With the DMX platform it is just called RAID (3+1)

      EMC Symmetrix / DMX disk arrays use an alternate, proprietary method for parity RAID that they call RAID-S: three data drives along with one parity device. RAID-S is proprietary to EMC but seems to be similar to RAID-5, with some performance enhancements as well as the enhancements that come from having a high-speed disk cache on the disk array.

      The data protection feature is based on a Parity RAID (3+1) volume configuration (three data volumes to one parity volume).

      RAID (7+1)

      Technology: RAID Symmetrix

      Performance: High

      Overhead: 12.5%

      Minimum Number of Drives: 8

      Data Loss: Upon two drive failures in the same Raid Group

      Example: For 5 TB of usable space, 8 x 1 TB drives (which will actually give you 7 TB of usable space)

      Advantages: High Performance on Symmetrix Environment

      Disadvantages: Proprietary to EMC. Available only on the Symmetrix DMX series. Known to have a lot of backend issues with director replacements and backend DA replacements, since you have to verify the spindle locations. A cause of concern for DU.

      Hot Spare: Hot Spare can be invoked against a failed drive, data can be built from the parity or the data drives and upon a successful drive replacement, the hot spare can be used to rebuild the replaced drive.

      Supported: With the DMX platform it is just called RAID (7+1). Not supported on the older (pre-DMX) Symmetrix models.

      EMC DMX disk arrays use an alternate, proprietary method for parity RAID that is simply called RAID (7+1): seven data drives along with one parity device. It is proprietary to EMC but similar to RAID-S and RAID 5, with some performance enhancements as well as the enhancements that come from having a high-speed disk cache on the disk array.

      The data protection feature is based on a Parity RAID (7+1) volume configuration (seven data volumes to one parity volume).

