
Thursday, September 29, 2011

EMC Symmetrix: VCMDB and ACLX

VCMDB: Volume Control Manager Database

ACLX: Access Control Logix

VCM: Volume Control Manager device (where the database resides)

VCM Gatekeeper: Volume Control Manager Gatekeeper (database doesn’t reside on these devices)

SFS Volumes: Symmetrix File System Volumes

.
        If you work with EMC Symmetrix systems, you know the importance of the VCMDB (Volume Control Manager Database). It was introduced with Symmetrix 4.0 and has been used in every generation since. On the latest generation of systems the VCM device is at times also referred to as the VCM Gatekeeper.

      VCMDB is a relatively small device created on the Symmetrix that controls host access to the various devices on the Symmetrix. It keeps an inventory of which host HBAs have access to which devices. Without a VCMDB in place, host systems will not be able to access the Symmetrix. The VCMDB should be backed up at regular intervals; a good backup will be invaluable on a rainy day.

          The VCMDB device size grew with each new generation of Symmetrix, primarily to keep track of the larger number of supported devices (hypers/splits) on those platforms. With the introduction of Symmetrix V-Max, the VCMDB concept has evolved into ACLX (Access Control Logix); Access Logix has been used on CLARiiON systems for years.

.

Here are a few things to consider with VCMDB
  • On the older Symmetrix systems (4.0, 4.8, 5.0 and 5.5), the VCMDB device is mapped to all channels/hosts
  • On these systems, access to the VCMDB is typically restricted by Volume Logix or ACLs (access control lists)
  • With the Symmetrix DMX and DMX-2 systems (Enginuity 5670, 5671), the VCM device only needs to be mapped to the management stations
  • Management stations include the SYMCLI server, Ionix Control Center server and Symmetrix Management Console
  • On the DMX and DMX-2 platforms, the VCMDB must be mapped to at least one management station at all times to perform online SDR changes; if no host has the device mapped, the PSE lab can also resolve this
  • Mapping the VCMDB to multiple hosts and channels can make the device vulnerable to crashes, tampering, and unintended changes to device attributes and data
  • You can write-disable the VCMDB to reduce these risks (see the sketch after this list)
  • On these systems, hosts communicate with the VCMDB via syscalls
  • The VCM Edit director flag (fibrepath) needs to be enabled for management stations to see the VCM device
  • The device masking database of the VCMDB resides on the SFS volumes. This feature was introduced with DMX-3 / DMX-4 (5772 microcode). A 6-cylinder VCM Gatekeeper device is fine with these versions of microcode.
  • Starting with Symmetrix V-Max systems, the concept of ACLX was introduced to support Auto-provisioning and related features
  • VCM volumes must be mirrored devices, like SFS volumes
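
As a hedged sketch of that backup and write-disable step (the device number 007F is hypothetical, and exact options vary by Solutions Enabler version, so verify against your environment before running anything):

symmaskdb -sid <symmid> backup -file vcmdb_backup          (take a backup first, as described above)
symdev -sid <symmid> write_disable 007F -SA ALL            (write-disable the VCM device on the front-end directors)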
.

 Various types of VCMDB

Type 0, Type 1, Type 2, Type 3, Type 4, Type 5, Type 6 :


  • Type 0: Symmetrix 4.0, 32-director system, 16-cylinder device, Volume Logix 2.x
  • Type 1: Symmetrix 4.8, 64-director system, 16-cylinder device, ESN Manager 1.x
  • Type 2: Symmetrix 5.0/5.5, 64-director system, 16-cylinder device, ESN Manager 2.x
  • Type 3: Symmetrix DMX, supports 32 fibre / 32 iSCSI initiator records per port, 24-cylinder device, Enginuity 5669, Solutions Enabler 5.2, supports 8,000 devices
  • Type 4: Symmetrix DMX/DMX-2, supports 64 fibre / 128 iSCSI initiator records per port, 48-cylinder device, Enginuity 5670, Solutions Enabler 5.3, supports 8,000 devices
  • Type 5: Symmetrix DMX/DMX-2, supports 64 fibre / 128 iSCSI initiator records per port, 96-cylinder device, Enginuity 5671, Solutions Enabler 6.0, supports 16,000 devices
  • Type 6: Symmetrix DMX-3/DMX-4, supports 256 fibre / 512 iSCSI initiator records per port, 96-cylinder device, Enginuity 5771/5772, Solutions Enabler 6.0, supports 64,000 devices
.

Notes about various Types of VCMDB

  • A Type 3 VCMDB can be converted to a Type 4 VCMDB (code upgrade from 5669 to 5670 to 5671)
  • Solutions Enabler 5.2 and Solutions Enabler 5.3 can read/write a Type 3 VCMDB
  • Solutions Enabler 5.3 can read/write a Type 4 VCMDB
  • The VCMDB device should be the recommended size, but it is okay to use a larger device if no other choice is available.
.

Converting various types of VCMDB using SymCLI

  • If the device cylinder size matches the target type, the following steps will convert your VCMDB from type x to type y.
    • Back up the device:
    • symmaskdb -sid <symmid> backup -file backup
    • Check the VCMDB type:
    • symmaskdb -sid <symmid> list database
    • Convert from type 4 to type 5:
    • symmaskdb -sid <symmid> convert -vcmdb_type 5 -file Convertfilename
.

To initialize VCMDB for the first time on a Symmetrix System

Within Ionix Control Center

  • Click on the Symmetrix array whose VCMDB you are initializing
  • Select Masking, then VCMDB Management, then Initialize
  • Select a new backup and create a file name
  • Give the file a .sdm extension
  • Click Activate the VCMDB
  • VCMDB backups are stored at \home\ecc_inf\data\hostname\data\backup\symmserial\
  • They are also viewable within Ionix Control Center under Systems/Symmetrix/VCMDB Backups/
.

With SymCLI


  • To query the VCMDB database:
    • symmaskdb -sid <symmid> list database
    • To back up and initialize an existing VCMDB database (a restore example follows below):
      • symmaskdb -sid <symmid> init -file backup
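
If you ever need to bring a saved database back, the matching restore (a hedged example; confirm the exact options against your Solutions Enabler version) is:

  symmaskdb -sid <symmid> restore -file backup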

More technical deep dive coming soon on various other topics…including ACLX.
Cheers

Wednesday, September 21, 2011

EMC Symmetrix DMX-4: Supported Drive Types

In this blog post we will discuss the supported drive models for the EMC Symmetrix DMX-4. Right before the release of the Symmetrix V-Max systems, in early February 2009, we saw added support for EFDs (Enterprise Flash Drives) on the Symmetrix DMX-4 platform: the denser 200GB and 400GB EFDs.
The following drive sizes are supported on Symmetrix DMX-4 systems at the current microcode 5773: 73GB, 146GB, 200GB, 300GB, 400GB, 450GB, 500GB and 1000GB. Rotational speeds are 10K or 15K RPM, and the interface is either 2Gb or 4Gb.
The drives can auto-negotiate to the backplane speed. If the drive LED is green, the speed is 2Gb; if it is neon blue, the interface is running at 4Gb.
See the separate blog post for the supported drive types on the EMC Symmetrix V-Max system.

The following are details on the drives for the Symmetrix DMX-4 Systems. You will find details around Drive Types, Rotational Speed, Interface, Device Cache, Access times, Raw Capacity, Open Systems Formatted Capacity and Mainframe Formatted Capacity.



Drive Type        Speed   Interface   Device Cache   Access Time    Raw Capacity   Open Sys Fmt Cap   Mainframe Fmt Cap
----------------  ------  ----------  -------------  -------------  -------------  -----------------  -----------------
73GB FC           10K     2Gb / 4Gb   16MB           4.7 - 5.4 ms   73.41 GB       68.30 GB           72.40 GB
73GB FC           15K     2Gb / 4Gb   16MB           3.5 - 4.0 ms   73.41 GB       68.30 GB           72.40 GB
146GB FC          10K     2Gb / 4Gb   32MB           4.7 - 5.4 ms   146.82 GB      136.62 GB          144.81 GB
146GB FC          15K     2Gb / 4Gb   32MB           3.5 - 4.0 ms   146.82 GB      136.62 GB          144.81 GB
300GB FC          10K     2Gb / 4Gb   32MB           4.7 - 5.4 ms   300.0 GB       279.17 GB          295.91 GB
300GB FC          15K     2Gb / 4Gb   32MB           3.6 - 4.1 ms   300.0 GB       279.17 GB          295.91 GB
400GB FC          10K     2Gb / 4Gb   16MB           3.9 - 4.2 ms   400.0 GB       372.23 GB          394.55 GB
450GB FC          15K     2Gb / 4Gb   16MB           3.4 - 4.1 ms   450.0 GB       418.76 GB          443.87 GB
500GB SATA II     7.2K    2Gb / 4Gb   32MB           8.5 - 9.5 ms   500.0 GB       465.29 GB          493.19 GB
1000GB SATA II    7.2K    2Gb / 4Gb   32MB           8.2 - 9.2 ms   1000.0 GB      930.78 GB          986.58 GB
73GB EFD          N/A     2Gb         N/A            1 ms           73.0 GB        73.0 GB            73.0 GB
146GB EFD         N/A     2Gb         N/A            1 ms           146.0 GB       146.0 GB           146.0 GB
200GB EFD         N/A     2Gb / 4Gb   N/A            1 ms           200.0 GB       196.97 GB          191.21 GB
400GB EFD         N/A     2Gb / 4Gb   N/A            1 ms           400.0 GB       393.84 GB          382.33 GB
Support for 73GB and 146GB EFDs has been dropped on the Symmetrix V-Max systems; they are still supported on the Symmetrix DMX-4, which in addition to the 73GB and 146GB drives also supports 200GB and 400GB EFDs.

Monday, September 19, 2011

EMC Timefinder Commands

The following are the TimeFinder procedural commands.
They outline everything that needs to be done from start to finish. Realize that for routine operations some of these steps won't be needed; they are included here for the sake of completeness.
Prepare EMC structures

1. Create a Symmetrix disk group

symdg -t [ Regular | RDF1 | RDF2 ] create ${group}

2. Add devices to the disk group

symld -g ${group} add pd /dev/dsk/c#t#d#

symld -g ${group} add dev 01a

3. Associate BCV devices to the disk group

symbcv -g ${group} associate pd ${bcv_ctd}

symbcv -g ${group} associate dev ${bcv_dev}


Establish BCV mirrors

1. Identify the logical device names: TimeFinder defaults to using the logical device names. You can list them with:

symmir -g ${group} query

2. First time establish, execute a full establish:

symmir -g ${group} -full establish ${std_log_dev} bcv ${bcv_log_dev}

3. Use symmir query to monitor progress.

symmir -g ${group} query


Break BCV mirrors

1. Types of splits:

1. Instant split: Split is performed in the background after the completion of the split I/O request.

2. Force split: Splits the pair during establish or restore operations; invalid tracks may exist.

3. Reverse split: Resyncs the BCV with the full data copy from its local or remote mirror.

4. Reverse differential split: Enables a copy of only out-of-sync tracks to the BCV from its mirror.

5. Differential split: Enables a copy of only the updated tracks to the BCV’s mirror.

2. Commands:

symmir -g ${group} split

symmir -g ${group} split -instant

symmir -g ${group} split -differential

symmir -g ${group} reverse split -differential


Reestablish or restore BCV mirrors

1. Restore copies data from the BCV back to the standard device. Reestablish, on the other hand, does a
differential update of the BCV from the standard device.

2. Commands:

symmir -g ${group} establish          (differential reestablish from the standard device to the BCV)

symmir -g ${group} -full restore      (full restore of all tracks from the BCV to the standard device)

symmir -g ${group} restore            (differential restore of BCV data to the standard device)
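
Putting the pieces together, here is a condensed run-through of a typical cycle. This is only a sketch: the group name backupdg and the device numbers 01a and 0f0 are hypothetical, and the establish below relies on default pairing rather than the explicit STD/BCV arguments shown earlier.

symdg create backupdg                      (create a regular device group)
symld -g backupdg add dev 01a              (add the standard device)
symbcv -g backupdg associate dev 0f0       (associate a BCV)
symmir -g backupdg -full establish         (first-time full establish)
symmir -g backupdg query                   (wait until the pair shows Synchronized)
symmir -g backupdg split -differential     (split so the BCV copy can be used for backup or test)
symmir -g backupdg establish               (later, differential re-sync of the BCV)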


The Timefinder Strategies are as follows

1. Maintain BCV mirrors with the standard device; break the mirrors when you want to backup, test,
or develop on a copy of the original.

This is probably the most common way of running TimeFinder. The advantage is that the split operation happens almost instantly because the mirrors are fully synced all the time. The disadvantage is that anything that happens to the standard device will be reflected in the BCV mirror.

2. Maintain the BCV as a split device to keep an online backup of the original data.

Thursday, September 15, 2011

How to Pair Standard Devices with Concurrent BCVs

In order to use the control functions of Solutions Enabler, you must create device groups and add/associate Symmetrix devices with the group. The following example shows how to create a device group, add a standard device to it and associate two BCV devices to the group.
The following commands will create a device group using the default type (regular). Next we will add a device to the device group and assign it a logical name. Then we associate two BCV devices with the device group so we can switch back and forth with the BCV devices.

symdg create mygroup
symld -g mygroup add dev 000 STD000
symbcv -g mygroup associate dev 110 BCV000
symbcv -g mygroup associate dev 111 BCV001

NOTE: At this point you have only added/associated devices with a device group. These actions do not in any way describe which devices should actually be paired. This may be confusing as the documentation is not very explicit. The fact is that the symmetrix may already have BCV pair information about these devices depending on how they were used in the past.
Now issue the commands to define the STD/BCV pair and actually synchronize the pair with a full establish.

symmir -g mygroup -full establish STD000 BCV dev 110
or
symmir -g mygroup -full establish STD000 BCV ld BCV000

This explicit definition of the STD device and the particular BCV device causes any existing pair information to be disregarded; the new information is used to create the pair. This is comparable to the older TimeFinder command line interface "bcv -f filename", where the file "filename" consisted of one-line entries pairing STD devices with BCV devices. Finally, split this TimeFinder pair and synchronize the STD device with a different BCV device.
symmir -g mygroup split
symmir -g mygroup -full establish STD000 BCV dev 111

Another method to establish pairs is the "-exact" option (available in V3.2-73-06 and higher). The -full -exact options on the symmir command instruct SYMCLI to define the STD/BCV pairs in the same order they were entered into the device group.

symdg create mygroup
symld -g mygroup add dev 000 STD000
symld -g mygroup add dev 001 STD001
symbcv -g mygroup associate dev 110 BCV000
symbcv -g mygroup associate dev 111 BCV001
symmir -g mygroup -full -exact establish

This will pair the first STD device (STD000) with the first BCV (BCV000) entered into the device group, and pair the second STD device (STD001) with the second BCV (BCV001) entered into the device group.
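
To confirm how the group is actually paired after any of these operations, the query shown earlier works here too:

symmir -g mygroup query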

Monday, September 12, 2011

EMC Symmetrix DMX Models by Cabinet Type:

Below is a breakdown of EMC Symmetrix and EMC DMX machines by the type of cabinet each uses.

Starting with the Symm 3.0, EMC offered half-height cabinets, a single full cabinet and a three-cabinet machine. The same ideas carried into the Symm 4.0 and 4.8.

With the Symm 5.0 and 5.5, EMC introduced the Badger cabinets, which were much slimmer and about 5 ft tall; those cabinets were a disaster and hardly anyone bought them.

In the DMX generation, the DMX800 and DMX1000 are single-cabinet systems, while the DMX2000 comes in two cabinets and the DMX3000 in a three-cabinet style.

Also, if you ever wondered where those Symm model numbers came from:

1st digit: 3 = open systems, 5 = mainframe, 8 = mixed.
2nd digit: related to cabinet size, dependent on generation.
Remaining digits: 00 = 5¼" drives, 30 = 3½" drives.

For example, a Symmetrix 8830 is a mixed open-systems/mainframe machine with 3½" drives.

The DMX uses 3½" Fibre Channel drives.

Saturday, September 3, 2011

EMC Symmetrix V-Max: Enginuity 5874

          
           EMC Symmetrix V-Max systems were introduced in April 2009. With this new generation of Symmetrix came a new name, V-Max, and a new Enginuity microcode family, 5874.

          With the 5874 microcode family there are several major areas of enhancement, as listed below.

Base enhancements

Management interface enhancements

SRDF functionality changes

TimeFinder performance enhancements

Open Replicator support and enhancements

Virtualization enhancements


    
         With the Enginuity 5874 family you also need Solutions Enabler 7.0. The initial Enginuity release was 5874.121.102; a month in we saw a new emulation and SP release, 5874.122.103; and the latest release as of June 18, 2009 is 5874.123.104. These emulation and SP releases add no new features to the microcode, just patches and fixes related to maintenance, DU/DL and environmentals. Based on EMC's initial list of enhancements and a few more we heard at EMC World 2009, here is a summary.


   RVA: RAID Virtual Architecture:


        With Enginuity 5874, EMC introduced the concept of single mirror positions. It has always been challenging to manage mirror positions, since they cap out at 4. With RAID 5 (3D+1P, 7D+1P), RAID 6 (6D+2P, 14D+2P) and RAID 1 devices now consuming fewer mirror positions, this opens the door to further migration and data-movement opportunities in SRDF and RAID environments.

Large Volume Support:

       With this version of Enginuity we see a maximum volume size of 240GB for open systems and 223GB for mainframe, with 512 hypers per drive. The largest drive supported on a Symmetrix V-Max system is the 1TB SATA II drive; the largest EFD supported is 400GB.

Dynamic Provisioning:

        Enhancements related to SRDF and BCV device attributes improve overall efficiency during configuration management and provide methods for faster provisioning.

Concurrent Configuration Changes:

        Enhancements to concurrent configuration changes allow the customer and the customer engineer to combine certain procedures and changes, performed through the Service Processor or through Solutions Enabler, into a single script rather than running them as a series of separate changes.

Service Processor IP Interface:

        All Service Processors attached to Symmetrix V-Max systems have Symmetrix Management Console 7.0 installed, allowing customers to log in and perform Symmetrix management functions. The Service Processor can also be managed through the customer's existing IP network. Symmetrix Management Console must now be licensed and purchased from EMC for V-Max systems; prior versions of SMC were free. SMC can now also be opened through a web interface.

SRDF Enhancements:

       Building on the RAID 5 and RAID 6 devices introduced on the previous generation of Symmetrix (DMX-4), the V-Max offers up to 300% better performance with TimeFinder and other SRDF layered applications, making the process very efficient and resilient.

Enhanced Virtual LUN Technology:

        Enhancements to Virtual LUN technology allow customers to non-disruptively change the physical or logical location of data on disk, further simplifying migrations across systems.

Virtual Provisioning:

        Virtual Provisioning can now be used with RAID 5 and RAID 6 devices, something that was restricted in previous versions of Symmetrix.

Autoprovisioning Groups:

       Using Autoprovisioning Groups, customers can now perform device masking by grouping host initiators, front-end ports and storage volumes. There was an EMC challenge at the EMC World 2009 Symmetrix corner to auto-provision a Symm with the minimum number of clicks. Autoprovisioning Groups are supported through Symmetrix Management Console. Those are the highlights of EMC Symmetrix V-Max Enginuity 5874; as new versions of the microcode are released later in the year, stay plugged in for more info.

Monday, August 22, 2011

EMC Symmetrix Management Console (SMC – For Symmetrix V-Max Systems)

        The Symmetrix Management Console is a very important step toward allowing customers to take control of their Symmetrix V-Max systems. With the new Symmetrix V-Max comes a new version of Symmetrix Management Console, allowing customers to manage their EMC Symmetrix V-Max systems through a GUI web browser interface with plenty of new features and wizards for usability.


       The Symmetrix Management Console was originally developed as a GUI to view a customer's Symmetrix DMX environment; over the years it has evolved into a functional, operational tool used not only for data gathering but also to perform changes. EMC Solutions Enabler SYMCLI is a CLI-based interface to the DMX and V-Max systems, and SMC complements the CLI by allowing customers to perform more or less the same functions through a GUI. The look and feel of SMC resembles ECC (EMC Control Center), and customers sometimes refer to SMC as ECC-lite.




EMC Symmetrix Management Console in action monitoring EMC Symmetrix V-Max Systems

Some of the important features and benefits of the SMC for V-Max are listed below:

1)    Allows customers to manage multiple EMC Symmetrix V-Max Systems

2)    Increases management efficiency by using Symmetrix Management Console to automate or perform functions with a few clicks

3)    The Symmetrix Management Console 7.0 only works with Symmetrix V-Max systems

4)    The Symmetrix Management Console is installed on the Service Processor of the V-Max System and can also be installed on a host in the SAN environment.

5)    Customers can now do trending, performance reporting, planning and consolidation using SMC

6)    SMC will help customers reduce their TCO with V-Max Systems

7)    It takes minutes to install. A Windows environment running Windows Server 2003 along with IIS is the best choice.

8)    The interface customers work with is a GUI. It has the look and feel of ECC, and the Console also integrates with ECC.

9)    New Symmetrix V-Max systems are configured and managed through the Symmetrix Management Console.

10) SMC also manages user, host permissions and access controls

11) Alert Management

12) From a free product, SMC now becomes a licensed product, which customers will have to pay for

13) It allows customers to perform configuration changes such as creating, mapping and masking devices, changing device attributes, flag settings, etc.

14) Perform replication functions using SMC like Clone, Snap, Open Replicator, etc

15) SMC enables Virtual Provisioning with the Symmetrix V-Max arrays

16) Enables Virtual LUN technology for automated policies and tiering.

17) Auto Provisioning Group technology is offered through wizards in SMC

18) Dynamic Cache Partitioning: Allocates and deallocates cache based on policies and utilization.

19) Symmetrix Priority Controls

20) From the SMC, customers can now launch SPA (Symmetrix Performance Analyzer), which is along the lines of Workload Analyzer, a standard component of the ECC suite. This allows customers to view and monitor storage and application performance. SPA can be obtained as an add-on product from EMC based on licensing.




Virtual LUN Technology at work, using a wizard

21) The SMC gives the customer capabilities for Discovery, Configuration, Monitoring, Administration and Replication Management.

22) SMC can be obtained from EMC Powerlink or through your account manager from EMC if you have an active contract in place with EMC for hardware/software maintenance or if your systems are under warranty.

A highly recommended management tool for SAN admins, and yes, it's not free anymore for V-Max systems.

 

EMC Symmetrix: Calculations for Heads, Tracks, Cylinders, GB

Here is the quick and dirty math for converting EMC Symmetrix heads, tracks and cylinder sizes into actual usable GB of space.
Based on the different generations of Symmetrix systems, here is how the conversions work.
Before we jump into each model type, let's look at the basics with the following relationships.
.
There are s number of splits (hyper) per physical device.
There are n number of cylinders per split (hyper)
There are 15 tracks per cylinder (heads)
There are either 64 or 128 blocks of 512 bytes per track
.
All the calculations discussed here are for Open Systems (FBA) device types. Different device emulations like 3380K, 3390-1, 3390-2, 3390-3, 3390-4, 3390-27, 3390-54 have different bytes/track, different bytes/cylinder and cylinders/volume.
.

Symmetrix 8000/DMX/DMX-2 Series

Enginuity Code: 5567, 5568, 5669, 5670, 5671
Includes EMC Symmetrix 8130, 8230, 8430, 8530, 8730, 8830, DMX1000, DMX2000, DMX3000 and various different configurations within those models.
GB = Cylinders * 15 * 64 * 512 / 1024 / 1024 / 1024
e.g.: a 6140-cylinder device equates to 2.81 GB of usable data:
6140 * 15 * 64 * 512 / 1024 / 1024 / 1024 = 2.81 GB

Cylinders = GB / 15 / 64 / 512 * 1024 * 1024 * 1024
Where
15 = tracks per cylinder
64 = blocks per track
512 = bytes per block
1024 = conversions of bytes to kb to mb to gb.
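
If you would rather script this than do the math by hand, here is a minimal sketch using bc from any POSIX shell (the 6140-cylinder and 10 GB figures are just examples):

echo "scale=2; 6140 * 15 * 64 * 512 / 1024 / 1024 / 1024" | bc        (prints 2.81, i.e. 2.81 GB)
echo "10 * 1024 * 1024 * 1024 / 512 / 64 / 15" | bc                   (prints 21845; round up to 21846 cylinders for a full 10 GB)

For the DMX-3/DMX-4 and V-Max formulas below, substitute 128 for 64 blocks per track.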
.

Symmetrix DMX-3/DMX-4 Series

Enginuity Code: 5771, 5772, 5773
Includes EMC Symmetrix DMX-3, DMX-4 and various different configurations within those models.
GB = Cylinders * 15 * 128 * 512 / 1024 / 1024 / 1024
e.g.: a 65520-cylinder device equates to roughly 59.98 GB of usable data:
65520 * 15 * 128 * 512 / 1024 / 1024 / 1024 ≈ 59.98 GB

Cylinders = GB / 15 / 128 / 512 * 1024 * 1024 * 1024
15 = tracks per cylinder
128 = blocks per track
512 = bytes per block
1024 = conversions of bytes to kb to mb to gb
.

Symmetrix V-Max

Enginuity Code: 5874
Includes EMC Symmetrix V-Max and various different configurations within this model.
GB = Cylinders * 15 * 128 * 512 / 1024 / 1024 / 1024
e.g.: a 262668-cylinder device equates to 240.47 GB of usable data:
262668 * 15 * 128 * 512 / 1024 / 1024 / 1024 = 240.47 GB

Cylinders = GB / 15 / 128 / 512 * 1024 * 1024 * 1024
15 = tracks per cylinder
128 = blocks per track
512 = bytes per block
8 = bytes per block (of the 520-byte format) used for T10-DIF
1024 = conversions of bytes to KB to MB to GB
The drive format on a V-Max is 520 bytes per block, of which 8 bytes are used for T10-DIF (see the separate post on DMX-4 and V-Max differences).

Friday, August 19, 2011

EMC Symmetrix: Volume Logix

      The order for getting Fibre Channel-based hypervolume extensions (HVEs) visible on host systems, particularly SUN systems, is as follows:


1. Appropriately zone so the Host Bus Adapter (HBA) can see the EMC Fibre Adapter (FA).

2. Reboot the system so it can see the vcm database disk on the FA OR

    1. SUN:

            1. drvconfig -i sd; disks; devlinks (SunOS <= 5.7)

            2. devfsadm -i sd (SunOS >= 5.7 (w/patches))
     2. HP:
             
            1. ioscan -f # Note the new hw address
           

            2. insf -e -H ${hw}

3. Execute vcmfind to ensure the system sees the Volume Logix database.

4. Identify mapped information

         1. Map HVEs to the FA if not already done.
     
         2. symdev list -SA ${fa} to see what’s mapped.

         3. symdev show ${dev} to ID the lun that ${dev} is mapped as. The display should look   something like:
 Front Director Paths (4):

{

------------------------------------------------------------------
             POWERPATH          DIRECTOR             PORT
             -----------        -----------------    --------------
PdevName     Type               Num   Type  Num Sts  VBUS  TID  LUN
------------------------------------------------------------------
Not Visible  N/A                03A   FA    0   RW   000   00   70
Not Visible  N/A                14A   FA    0   NR   000   00   70
Not Visible  N/A                03B   FA    0   NR   000   00   70
Not Visible  N/A                14B   FA    0   NR   000   00   70

}

    The number you're looking for is under the LUN column. Remember, it's hex, so the LUN that will show up in the ctd is (0x70 = 112) c#t#d112

       5. On SUN systems, modify the /kernel/drv/sd.conf file so the system will see the new disks. You’ll need to do a reconfig reboot after modifying this file. If the system doesn’t see it on a reconfig reboot, this file is probably the culprit!
      
       6. fpath adddev -w ${hba_wwn} -f ${fa} -r "${list_of_EMC_devs}"
You can specify multiple EMC device ranges; just separate them by spaces, not commas
      
       7. Reboot the system so it can see the new disks on the FA OR

1. SUN:

      1. drvconfig -i sd; disks; devlinks (SunOS <= 5.7)

      2. devfsadm -i sd (SunOS >= 5.7 (w/patches))

2. HP:

       1. ioscan -f # Note the new hw address
       

       2. insf -e -H ${hw} 

Thursday, August 18, 2011

EMC Symmetrix LUN Allocation


Minimum Requirements:

 

         Knowledge of basic Symmetrix architecture

             Operating systems knowledge

             A test Symmetrix and hosts to try things on


      Symmetrix Allocation Steps 

       Step 1: Create symmetrix devices from the free space.

                    

            To create a Symmetrix device, we first need to know what type of device to create, for example RAID-5, RAID-1, etc. Here I'll show the RAID-1 example. To start, create a simple text file and add the line below to it.

      filename: raid1.txt

      create dev count=xx, size=17480, emulation=FBA, config=2-way-mir, disk_group=x;


      Command Explanation:

      dev count=xx
      (replace xx with the number of devices we need to create)

      emulation=FBA 
      (FBA = Fixed Block Architecture, used for open systems such as Solaris, HP-UX and Windows)

      config=2-way-mir
      (Configures the devices as RAID-1, one of the oldest configurations available on all Symmetrix models)

      disk_group=x 
      (Disk groups are created to differentiate tiers, performance and capacity. Based on the requirements, we select the desired disk group number in which to create the new Symmetrix devices)


                Once you have added the above line to the text file, save it and check the syntax. To make any configuration change on the Symmetrix we need to run the commands below. Ensure that the raid1.txt file is in your current working directory.
        

      symconfigure -sid xxxx -f raid1.txt preview -V 

      symconfigure -sid xxxx -f raid1.txt prepare -V 

      symconfigure -sid xxxx -f raid1.txt commit -V



      Command Explanation:


       symconfigure
      (This command is used to manage major configuration changes, display Symmetrix capacity, and manage dynamic (hot) spares and device reservations)

      -sid
      (Symmetrix ID, always prefix with hyphen (-) )

      -f
      (filename; the command file to use, in this example raid1.txt)

      preview
      (The preview argument verifies the syntax and correctness of each individual change defined, and then terminates the session without change execution.)

      prepare
      (The prepare argument performs the preview checks and also verifies the appropriateness of the resulting configuration definition against the current state of the Symmetrix array)

      commit
      (The commit argument completes all stages and executes the changes in the specified Symmetrix array.)
      -V (Yes, you're right, it's verbose mode)

                That's it! After running the above commands the new devices are created; that is how to create Symmetrix devices from SYMCLI. Let us assume the new device IDs are 001 through 00A (device IDs are hexadecimal).
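
      To sanity-check that the devices exist before moving on (a hedged example; listing options and output columns vary by Solutions Enabler version):

      symdev -sid xxxx list

      The new devices 001 through 00A should appear in the output with a 2-Way Mir configuration.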


      Step 2: Search for a free LUN ID on the FAs (Fibre Adapters)

       

            After creating the devices, we need to map them to the Fibre Adapters (on legacy Symmetrix these are SCSI Adapters, SA). To do this from ECC (EMC Control Center, now called Ionix): a. right-click on the device, b. go to 'Configure', c. select 'SDR Device Mapping' and follow the wizard. Here I'll write the commands to do the same.


      symcfg -sid xxxx list -available -address -fa xy -p n |more

      Command Explanation:

      symcfg
      (Discovers or displays Symmetrix configuration information)

      list
      (Lists brief or detailed information about your Symmetrix configuration.)

      -available -address
      (Requests the next available Vbus, TID, or LUN address be appended to the output list. Used with the -address option.)

      -fa
      (Confines the action to a Fibre Adapter (FA) director number)

      xy
      (x is the director number eg. 8 and y is the processor number eg. a or b)

      -p n
      (p is the port and n is the number eg. 0 or 1)


            Up to the DMX-4 we follow Rule 17 (FAs are paired so that their director numbers add up to 17), so repeat the command for the other FA, in this example FA 9.


      Step 3: Mapping a device to the FA

       

             Now, let's assume that LUN IDs 52 onward are free on both FAs 8a and 9a, and that the file map.txt contains the commands below. After saving the file, run the symconfigure commands as shown in Step 1. The commands below map device 0001 to FAs 8a:0 and 9a:0, where it will take LUN ID 52 on each FA.



      map dev 0001  to dir 8a:0 target=0,lun=52;

       

      map dev 0001  to dir 9a:0 target=0,lun=52;

       

       

      Command Explanation:

      map
      (map a device to fa)

      dev
      (symmetrix device ID)

      dir 8a
      (FA:port no#)

      target
      (The SCSI target ID (hex value))

      lun
      (Specifies the LUN addresses to be used for each device that is to be added for the host HBA.)


      Step 4: LUN Masking a device to the FA

       

       

              The next-to-last step is LUN masking, which performs control and monitoring operations on the device masking environment. Running the commands below gives the server, which has a two-port HBA configuration, RW access to device 0001.

      symmask -sid xxxx -wwn 10000000c130880a -dir 8a -p 0 add dev 0001

       

       

      symmask -sid xxxx -wwn 10000000c131084a -dir 9a -p 0 add dev 0001

       

       

      Command Explanation:

      symmask
      (Sets up or modifies Symmetrix device masking functionality.)

      -wwn
       (World Wide Name of the Host Bus Adapter (HBA) zoned with the FA)

      add
      (Adds devices to the device masking record in the database with the matching WWN)




      Step 5: Update and refresh the Symmetrix database VCMDB 

       

      (Volume Control Manager Database). This step is performed from the host running SymCLI, in our case the HP-UX server.

       

       

                The command below looks simple, but it is a very important one: it refreshes the VCMDB so that all of the masking changes just made take effect on the Symmetrix.

      symmask -sid xxxx refresh

       

       

      refresh
      (updates and refreshes the VCMDB)


                To confirm whether the Symmetrix allocation was done properly, we can run the symcfg command shown above in Step 2. The output should show LUN ID 52 occupied by Symmetrix device 0001.
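
      You can also verify the masking record itself (a hedged check; the WWN below is the example HBA from Step 4, and option spellings vary slightly by Solutions Enabler version):

      symmaskdb -sid xxxx list devs -wwn 10000000c130880a

      Device 0001 should appear in the list for that initiator.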


       

      Steps to perform from HP-UX server 

       

              Once we finish the task on the EMC end, we need to scan for the new LUN from the operating system. First we'll deal with an HP-UX server. The following commands need to be executed in this order on the server to start using the new device.


      #ioscan -fnC disk

       

      The ioscan command displays a list of system disks with identifying information, location, and size.

      #insf -e

       

                The insf command installs special files in the devices directory, normally /dev. If required, insf creates any subdirectories that are defined for the resulting special file. After running this command the new devices will be added to the /dev directory as special device files.

      #powermt config


      Configures logical devices as PowerPath devices. It searches for the EMC devices and adds them as PowerPath devices.

      #powermt display dev=all


            Displays the configured PowerPath devices. If the previous command ran successfully we should see the new device in the output list.

      #powermt save


      Saves a custom PowerPath configuration. Once we see the new device in the previous command's output, we can save the PowerPath configuration database.


      After successful completion of the above steps we can use the devices. Using a volume manager, we can then add them to an existing or a new volume group.



       

       

      To allocate LUNs on a V-Max array, the following steps have to be performed (courtesy: Sanjeev Tanwar):

       

       

      1. Create a storage group (containing Symm devices)

      2. Create a port group (one or more director/port combinations)

      3. Create an initiator group (one or more host WWNs)

      4. Create a masking view containing the storage group, port group and initiator group.
      When a masking view is created, the devices are automatically masked and mapped.

       

      Creating Storage Group

       

      #symaccess create -sid xxx -name SG1 -type storage devs 01c,03c

       

      Creating Port Group

       

      #symaccess create -sid xxx -name PG1 -type port -dirport 6d:0,7e:1

       

      Creating Initiator Group

       

      #symaccess create -sid xxx -name IG1 -type initiator -file txt1

       

      txt1 contains

      wwn:21000000008b090

      wwn:21000000008c090


      Creating a masking view

       

       

      #symaccess create view -name test_view -sg SG1 -pg PG1 -ig IG1
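
      To verify the view after creating it (a hedged example; symaccess output formats vary by Solutions Enabler version):

      #symaccess -sid xxx show view test_view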

       

       

      Monday, August 15, 2011

      EMC Symmetrix: DRV Devices (Dynamic Reallocation Volume)

              A Dynamic Reallocation Volume is a non-user-addressable Symmetrix device used by Symmetrix Optimizer to temporarily hold user data while a reorganization of devices is being executed. Typically it is used in logical volume swapping operations.


              The diagram below illustrates the stages and status of volumes during a complete volume swap (volume 1 with volume 4).
           


      Volume Stages of DRV SWAP
       



      EMC Symmetrix : Device Masking (VCM) Devices

      Symmetrix device masking or VCM devices are Symmetrix devices that have been masked for visibility to certain hosts. The device masking database (VCMDB), which holds the device masking records, typically resides on a small disk device (such as a 48-cylinder, 24 MB device). For more information, see the EMC Solutions Enabler Symmetrix Device Masking CLI Product Guide.

      Sunday, August 14, 2011

      EMC Symmetrix : Dynamic RDF Devices

      Since Enginuity version 5568, devices can be configured as dynamic RDF-capable devices. Dynamic RDF enables you to create, delete and swap SRDF pairs while the Symmetrix array is in operation.

      You can also establish SRDF device pairs from non-SRDF devices and then synchronize and manage them in the same way as statically configured SRDF pairs.

      The dynamic RDF configuration state of the Symmetrix array must be enabled in SymmWin or via the Configuration Manager, and the devices must be designated as dynamic RDF-capable devices.
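
      As an illustration of the dynamic capability, a pair could be created along these lines (a hedged sketch only; the pair file name, RDF group number and options are hypothetical and must match your environment):

      symrdf createpair -sid xxxx -file device_pairs.txt -rdfg 10 -type RDF1 -establish

      Here device_pairs.txt lists the local and remote Symmetrix device numbers, one pair per line.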


      Thank you.
      Don't forget to leave your valuable comment.

      SAN : Clariion CX4 Series

      CX4
                                                   


      Thursday, August 4, 2011

      Configuring the EMC CLARiiON controller with Access Logix installed


      The SAN Volume Controller does not have access to the storage controller logical units (LUs) if Access Logix is installed on the EMC CLARiiON controller. You must use the EMC CLARiiON configuration tools to associate the SAN Volume Controller and LU.
      The following prerequisites must be met before you can configure an EMC CLARiiON controller with Access Logix installed:
      • The EMC CLARiiON controller is not connected to the SAN Volume Controller
      • You have a RAID controller with LUs and you have identified which LUs you want to present to the SAN Volume Controller

      You must complete the following tasks to configure an EMC CLARiiON controller with Access Logix installed:
      • Register the SAN Volume Controller ports with the EMC CLARiiON
      • Configure storage groups
      The association between the SAN Volume Controller and the LU is formed when you create a storage group that contains both the LU and the SAN Volume Controller.

      EMC Clariion : Access Logix


         Access Logix is an optional feature of the firmware code that provides the functionality that is known as LUN Mapping or LUN Virtualization.
          You can use the software tab in the storage systems properties page of the EMC Navisphere GUI to determine if Access Logix is installed.
        After Access Logix is installed it can be disabled but not removed. The following are the two modes of operation for Access Logix:
       
      • Access Logix not installed: In this mode of operation, all LUNs are accessible from all target ports by any host. Therefore, the SAN fabric must be zoned to ensure that only the SAN Volume Controller can access the target ports.
      • Access Logix enabled: In this mode of operation, a storage group can be formed from a set of LUNs. Only the hosts that are assigned to the storage group are allowed to access these LUNs.

      Monday, August 1, 2011

      RAID Technology in DMX / Symmetrix Continued



      RAID [Redundant Array of Independent (Inexpensive) Disk]

      After reading a couple of blog posts from last week regarding RAID technology from StorageSearch and StorageIO, I decided to elaborate on the technology behind RAID and its functionality across storage platforms.

      After I had almost finished writing this post, I ran into a Wikipedia article explaining RAID technology at much greater length, covering RAID types such as RAID 2, RAID 4, RAID 10, RAID 50, etc.

      For example purposes, let's say we need 5 TB of space; each disk in this example is 1 TB.


      RAID 0

      Technology: Striping Data with No Data Protection.

      Performance: Highest

      Overhead: None

      Minimum Number of Drives: 2 since striping

      Data Loss: Upon one drive failure

      Example: 5TB of usable space can be achieved through 5 x 1TB of disk.

      Advantages: High performance

      Disadvantages: Guaranteed Data loss

      Hot Spare: Upon a drive failure, a hot spare can be invoked, but there will be no data to copy over. Hot Spare is not a good option for this RAID type.

      Supported: Clariion, Symmetrix, Symmetrix DMX (Meta BCV’s or DRV’s)

      In RAID 0, the data is striped across all of the disks. This is great for performance, but if one disk fails the data is lost, because there is no protection of that data.


      RAID 1

      Technology: Mirroring and Duplexing

      Performance: Highest

      Overhead: 50%

      Minimum Number of Drives: 2

      Data Loss: 1 Drive failure will cause no data loss. 2 drive failures, all the data is lost.

      Example: 5TB of usable space can be achieved through 10 x 1TB of disk.

      Advantages: Highest Performance, One of the safest.

      Disadvantages: High Overhead, Additional overhead on the storage subsystem. Upon a drive failure it becomes RAID 0.

      Hot Spare: A Hot Spare can be invoked and data can be copied over from the surviving paired drive using Disk copy.

      Supported: Clariion, Symmetrix, Symmetrix DMX

      The exact data is written to two disks at the same time. Upon a single drive failure, no data is lost, no degradation, performance or data integrity issues. One of the safest forms of RAID, but with high overhead. In the old days, all the Symmetrix supported RAID 1 and RAID S. Highly recommended for high end business critical applications.

      The controller must be able to perform two concurrent separate Reads per mirrored pair or two duplicate Writes per mirrored pair. One Write or two Reads are possible per mirrored pair. Upon a drive failure only the failed disk needs to be replaced.


      RAID 1+0

      Technology: Mirroring and Striping Data

      Performance: High

      Overhead: 50%

      Minimum Number of Drives: 4

      Data Loss: Upon a single drive (M1) failure, no issues. With multiple drive failures confined to one side of the mirror (M1), no issues. With failure of both the M1 and its M2, data loss is certain.

      Example: 5TB of usable space can be achieved through 10 x 1TB of disk.

      Advantages: Similar Fault Tolerance to RAID 5, Because of striping high I/O is achievable.

      Disadvantages: Upon a drive failure, it becomes RAID 0.

      Hot Spare: Hot Spare is a good option with this RAID type, since with a failure the data can be copied over from the surviving paired device.

      Supported: Clariion, Symmetrix, Symmetrix DMX

      RAID 1+0 is implemented as a mirrored array whose segments are RAID 0 arrays.



      RAID 3

      Technology: Striping Data with dedicated Parity Drive.

      Performance: High

      Overhead: 33% with parity (in the example above); more drives in a RAID 3 configuration will bring the overhead down.

      Minimum Number of Drives: 3

      Data Loss: Upon 1 drive failure, Parity will be used to rebuild data. Two drive failures in the same Raid group will cause data loss.

      Example: 5TB of usable space would be achieved through 9 x 1TB disks.

      Advantages: Very high Read data transfer rate. Very high Write data transfer rate. Disk failure has an insignificant impact on throughput. Low ratio of ECC (Parity) disks to data disks which converts to high efficiency.

      Disadvantages: Transaction rate will be equal to that of a single spindle

      Hot Spare: A Hot Spare can be configured and invoked upon a drive failure which can be built from parity device. Upon drive replacement, hot spare can be used to rebuild the replaced drive.

      Supported: Clariion


      RAID 5

      Technology: Striping Data with Distributed Parity, Block Interleaved Distributed Parity

      Performance: Medium

      Overhead: 20% in our example, with additional drives in the Raid group you can substantially bring down the overhead.

      Minimum Number of Drives: 3

      Data Loss: With one drive failure, no data loss, with multiple drive failures in the Raid group data loss will occur.

      Example: For 5TB of usable space, we might need 6 x 1 TB drives

      Advantages: It has the highest Read data transaction rate and with a medium write data transaction rate. A low ratio of ECC (Parity) disks to data disks which converts to high efficiency along with a good aggregate transfer rate.

      Disadvantages: Disk failure has a medium impact on throughput. It also has the most complex controller design. It is often difficult to rebuild in the event of a disk failure (as compared to RAID level 1), and the individual block data transfer rate is the same as a single disk. Ask the PSEs about RAID 5 issues and data loss.

      Hot Spare: Similar to RAID 3, where a Hot Spare can be configured and invoked upon a drive failure which can be built from parity device. Upon drive replacement, hot spare can be used to rebuild the replaced drive.

      Supported: Clariion, Symmetrix DMX code 71

      RAID Level 5 also relies on parity information to provide redundancy and fault tolerance using independent data disks with distributed parity blocks. Each entire data block is written onto a data disk; parity for blocks in the same rank is generated on Writes, recorded in a distributed location and checked on Reads.

      This is arguably the most popular RAID technology in use today.
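
      To make the overhead math concrete with the 5+1 example above: six 1 TB drives give 6 TB raw, of which 1 TB holds parity and 5 TB is usable. Measured against the usable capacity, that parity is 1/5 = 20% overhead; measured against the raw capacity, it is 1/6, roughly 17%.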



      RAID 6

      Technology: Striping Data with Double Parity, Independent Data Disk with Double Parity

      Performance: Medium

      Overhead: 28% in our example, with additional drives you can bring down the overhead.

      Minimum Number of Drives: 4

      Data Loss: No data loss with one or even two drive failures in the same RAID group. Very reliable.

      Example: For 5 TB of usable space, we might need 7 x 1TB drives

      Advantages: RAID 6 is essentially an extension of RAID level 5 which allows for additional fault tolerance by using a second independent distributed parity scheme (two-dimensional parity). Data is striped on a block level across a set of drives, just like in RAID 5, and a second set of parity is calculated and written across all the drives; RAID 6 provides for an extremely high data fault tolerance and can sustain multiple simultaneous drive failures which typically makes it a perfect solution for mission critical applications.

      Disadvantages: Very poor Write performance in addition to requiring N+2 drives to implement because of two-dimensional parity scheme.

      Hot Spare: Hot Spare can be invoked against a drive failure, built it from parity or data drives and then upon drive replacement use that hot spare to build the replaced drive.

      Supported: Clariion Flare 26, 28, Symmetrix DMX Code 72, 73

      Clariion Flare code 26 supports RAID 6. It is also implemented with the 72 code on the Symmetrix DMX. The simplest explanation of RAID 6 is double parity: a RAID 6 group can sustain two drive failures in the RAID group while maintaining access to the data.


      RAID S (3+1)

      Technology: RAID Symmetrix

      Performance: High

      Overhead: 25%

      Minimum Number of Drives: 4

      Data Loss: Upon two drive failures in the same Raid Group

      Example: For 5 TB of usable space, 8 x 1 TB drives

      Advantages: High Performance on Symmetrix Environment

      Disadvantages: Proprietary to EMC. RAID S can be implemented on Symmetrix 8000, 5000 and 3000 Series. Known to have backend issues with director replacements, SCSI Chip replacements and backend DA replacements causing DU or offline procedures.

      Hot Spare: Hot Spare can be invoked against a failed drive, data can be built from the parity or the data drives and upon a successful drive replacement, the hot spare can be used to rebuild the replaced drive.

      Supported: Symmetrix 8000, 5000, 3000. With the DMX platform it is just called RAID (3+1)

      EMC Symmetrix / DMX disk arrays use an alternate, proprietary method for parity RAID that they call RAID-S. Three Data Drives (X) along with One Parity device. RAID-S is proprietary to EMC but seems to be similar to RAID-5 with some performance enhancements as well as the enhancements that come from having a high-speed disk cache on the disk array.

      The data protection feature is based on a Parity RAID (3+1) volume configuration (three data volumes to one parity volume).

      RAID (7+1)

      Technology: RAID Symmetrix

      Performance: High

      Overhead: 12.5%

      Minimum Number of Drives: 8

      Data Loss: Upon two drive failures in the same Raid Group

      Example: For 5 TB of usable space, 8 x 1 TB drives (you will actually get 7 TB usable)

      Advantages: High Performance on Symmetrix Environment

      Disadvantages: Proprietary to EMC. Available only on Symmetrix DMX Series. Known to have a lot of backend issues with director replacements, backend DA replacements since you have to verify the spindle locations. Cause of concern with DU.

      Hot Spare: Hot Spare can be invoked against a failed drive, data can be built from the parity or the data drives and upon a successful drive replacement, the hot spare can be used to rebuild the replaced drive.

      Supported: With the DMX platform it is just called RAID (7+1). Not supported on the Symms.

      EMC DMX disk arrays use an alternate, proprietary method for parity RAID that is simply called RAID (7+1): seven data drives along with one parity device. It is proprietary to EMC but seems similar to RAID-S or RAID 5, with some performance enhancements as well as those that come from having a high-speed disk cache on the array.

      The data protection feature is based on a Parity RAID (7+1) volume configuration (seven data volumes to one parity volume).

