The following steps are for migrating a Windows 2008 R2 box from an EMC VNX to an HP 3PAR. The tool offers migration for Oracle (RAC included), Windows (clusters included), Linux, and Hyper-V. You also have three migration types: online, minimally disruptive migration (MDM), and offline. I will be using the minimally disruptive migration (MDM). Please review the official HP documentation before attempting this, as I will not be covering every single pre- and post-migration item in this guide.
1. Build/select a machine to install the 3PAR Online Import Utility.
   a. Must be a Windows-based machine.
2. Install EMC SMI-S Provider (Solutions Enabler).
   a. Install version 4.6.2.9.
   b. Use the default destination folder for the install location.
   c. Select Array Provider under the Provider List.
   d. Leave the defaults for the daemon list.
3. Configure the EMC SMI-S Provider.
   a. Open a Windows command prompt on the server where the EMC SMI-S Provider software was installed.
   b. Change the directory to the location of the EMC SMI-S Provider installation. The default is C:\Program Files\EMC\ECIM\ECOM\bin.
   c. Start the testsmiprovider program.
   d. Accept all default settings when prompted.
   e. Attach the VNX storage system to the EMC SMI-S Provider (an illustrative session is sketched after this step).
      i. Use the addsys command.
      ii. Pick #1 for Clar (Array Type).
      iii. Enter the IP address for SPA, then SPB.
         1. You don't need anything for array id 2, as you only have two SP IPs.
      iv. For address types, pick the default of 2.
      v. Type in the user name and password.
         1. This needs to be an EMC VNX global administrator account.
      vi. If the output reads 0, you have established communication.
      vii. You can also run the dv command to verify communication to the VNX.
      viii. If SMI-S has client IP filtering enabled, you will need to add the 3PAR Online Import Utility server's IP to the trusted list.
         1. Please refer to the HP documentation for further instructions.
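For reference, an addsys session in testsmiprovider looks roughly like the sketch below. The IP addresses and credentials are placeholders, and the exact prompt wording varies by Solutions Enabler version, so treat this as an approximation rather than a verbatim transcript.

   cd "C:\Program Files\EMC\ECIM\ECOM\bin"
   testsmiprovider                                  (accept the defaults for host, port, and user)
   (localhost:5988) ? addsys
   Add System {y|n} [n]: y
   ArrayType (1=Clar, 2=Symm) [1]: 1
   IP address or hostname or array id 0: 10.0.0.1   (SPA, placeholder)
   IP address or hostname or array id 1: 10.0.0.2   (SPB, placeholder)
   IP address or hostname or array id 2:             (leave blank)
   Address Type: 2                                   (take the default of 2 for each entry)
   User: vnxglobaladmin                              (placeholder VNX global administrator)
   Password: ********
   ++++ EMCAddSystem ++++
   OUTPUT : 0
   (localhost:5988) ? dv

An OUTPUT of 0 plus a dv listing that shows the VNX are the two signs you are looking for.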
4. Install the 3PAR Online Import Utility.
   a. Install the 1.3.0 version of the online utility.
   b. Choose the default settings.
      i. Pick the client and server for install.
         1. This guide covers the online utility install on a single server. Refer to the HP documentation if you wish to split up the roles.
      ii. This guide does not cover the CA certificate section of the installer. If you require this, please refer to the HP documentation for further instructions.
5. Configure the 3PAR Online Import Utility.
   a. Once installation is complete, you will need to add an account with local machine administrative privileges as a member of the HP Storage Migration Admins group (a command-line option is shown after this item).
      i. This group was created on the local machine during the installation process.
      ii. The account added here will be used to run the migration utility.
      iii. You will also see an HP Storage Migration Users group. This group will not be needed for the purposes of this guide.
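If you prefer the command line to the Computer Management GUI, the group membership can be added with net localgroup; the domain and account name below are placeholders for whatever account you intend to run the utility with.

   net localgroup "HP Storage Migration Admins" MYDOMAIN\migrationsvc /add
   net localgroup "HP Storage Migration Admins"

The second command simply lists the group members so you can confirm the add took effect.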
   b. Launch the tool.
      i. You can use LOCALHOST for the IP address if the client and server portions were installed on the same machine (both were installed on the same machine for this guide).
      ii. Enter the credentials of the account added to the HP Storage Migration Admins local group above.
6. Migration types and pre-planning.
   a. Please refer to the HP documentation for migration types and pre-planning steps.
   b. This guide will use minimally disruptive migration (MDM) for the migration type.
7. Zoning requirements
   a. Two unique paths (only two paths) must be zoned between the source and destination storage systems. To create two paths, two controller ports on the source EMC Storage system must be connected to two peer ports on the destination HP 3PAR StoreServ Storage system.
   b. The peer ports on the destination HP 3PAR StoreServ Storage system must be on adjacent nodes: 0/1, 2/3, 4/5, or 6/7.
   c. Zone the source and destination systems together and make sure they are visible to each other before zoning hosts to the destination system.
   d. Do not unzone the source and destination systems from each other until the data migration is complete.
8. Zoning creation
   a. On the destination HP 3PAR StoreServ Storage system, configure two free ports as peer ports using adjacent nodes: 0/1, 2/3, 4/5, or 6/7 (a CLI sketch follows this step).
      i. Set the port connection type to point.
      ii. Set the port connection mode to peer.
      iii. The WWN of a host port changes when it is set to become a peer port. Use the new WWN of the peer port in the zoning.
   b. Create two zones between the source EMC Storage system and the destination HP 3PAR StoreServ Storage system, ensuring that one EMC host port is in the same zone with one HP 3PAR peer port.
   c. Each zone should contain only two ports: one from each storage system. Adjacent HP 3PAR peer nodes should be zoned to separate EMC controllers in a one-to-one mapping. For example, zone the peer port on node 0 on the HP 3PAR StoreServ Storage system to controller A on the EMC Storage system, and zone the peer port on node 1 on the HP 3PAR StoreServ Storage system to controller B on the EMC Storage system.
   d. Verify that the EMC Storage system is shown as a connected device on both peer ports of the destination HP 3PAR StoreServ Storage system.
      i. To check peer port connections from the HP 3PAR StoreServ Storage, issue the showtarget command with the -rescan option, and then issue the showtarget command.
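As an illustration, the peer port configuration and the verification from 8.d usually look something like the sketch below on the 3PAR CLI. The port positions (0:2:1 and 1:2:1) are placeholders for the two free ports you picked on adjacent nodes, and the exact controlport arguments should be confirmed against the HP documentation for your 3PAR OS version.

   controlport offline 0:2:1
   controlport config peer -ct point 0:2:1
   controlport rst 0:2:1
   (repeat the three commands for the second peer port, e.g. 1:2:1)

   showtarget -rescan
   showtarget

After the rescan, showtarget should list the EMC controller ports behind both peer ports; if it does not, recheck the zoning before moving on.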
9. Add the Source Storage System
   a. From the HP 3PAR Online Import Utility, issue the addsource command.
      i. addsource -type VNX -mgmtip XX.XX.XX.XX -user admin -password adminpw -uid xxxxxxxxxxxxxxxx
         1. XX.XX.XX.XX is the IP address of the EMC SMI-S Provider server.
         2. xxxxxxxxxxxxxxxx is the WWN of the EMC Storage system.
         3. -user and -password are the account used on the EMC SMI-S Provider.
   b. Issue the showsource command to verify the source storage system information.
      i. showsource -type VNX
10. Add the Destination Storage System
   a. From the HP 3PAR Online Import Utility, issue the adddestination command.
      i. adddestination -mgmtip XX.XX.XX.XX -user 3paradm -password 2Password
         1. XX.XX.XX.XX is the HP 3PAR management port IP address.
         2. Use a 3PAR super user account for -user and -password.
   b. If a certificate validation error occurs on the adddestination command, first run the installcertificate command, then run the adddestination command again.
      i. installcertificate -mgmtip xx.xx.xx.xx
   c. Issue the showdestination command to verify the destination storage system information.
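For completeness, the verification in 10.c is just the bare command below; as far as I recall it needs no arguments in this single-destination setup, but check the utility's help output if it complains.

   showdestination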
11. Preparing the Host for Data Migration
   a. Stop all applications on the host that may be accessing the LUNs to be migrated. This also includes any replication and backup applications running against the LUNs.
   b. Set the LUNs to be migrated to an offline/unmounted state (see the diskpart sketch after this step).
   c. Modify the startup scripts for all applications and services that use the migrating LUNs to prevent them from starting at the next reboot. They should remain disabled until after the migration has started and the documentation instructs you to bring your host back online.
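On Windows 2008 R2, one way to take a data LUN offline for 11.b is with diskpart, roughly as sketched below. The disk number is a placeholder, so match it against Disk Management first and repeat per migrating disk.

   diskpart
   DISKPART> list disk
   DISKPART> select disk 2
   DISKPART> offline disk
   DISKPART> exit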
12. Creating the Data Migration Task
   a. The host and volumes must be in a supported storage group configuration on the source EMC Storage system.
   b. On a VNX source storage system, two HP 3PAR initiators are added to the storage group that contains the hosts or volumes being migrated. The host name of the HP 3PAR initiators is unknown. The initiator name appears as the WWN of the peer port on the HP 3PAR StoreServ Storage.
   c. For an MDM migration, the host or hosts being migrated are created on the destination storage system. A host set is also created on the destination storage system.
   d. The createmigration command performs the following checks before creating the migration:
      i. No migration task for the specified source storage system exists. Only one migration at a time can be provisioned for a given source EMC Storage system.
      ii. The source storage system group configuration is valid.
         1. The volumes or hosts specified in the createmigration command are mapped to a storage group on the associated source EMC Storage system. All LUNs and all hosts in the mapped storage group will be migrated, even if only a subset is entered in the createmigration command.
      iii. EMC SMI-S Provider version.
      iv. Source storage system model number.
      v. LUN migration eligibility:
         1. The protocol must be Fibre Channel.
         2. A LUN under replication cannot be migrated.
   e. Using the HP 3PAR Online Import Utility, issue the createmigration command with the -migtype MDM option.
      i. createmigration -sourceuid 50067890C6E059EE -srchost "HPDL585-01" -destcpg FC_r5 -destprov thin -migtype MDM -persona "WINDOWS_2008_R2"
         1. For supported persona types and syntax, please refer to the HP documentation.
         2. Make a note of the migration ID, as it will be used in commands to track migration progress.
   f. Issue the showmigration command to verify that the data migration task preparation has completed successfully.
      i. This may take some time. Upon successful creation of the createmigration task, the STATUS column in the showmigration command output will indicate preparationcomplete(100%). When this status is indicated, continue to the next step.
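The check in 12.f is the same showmigration call used again in step 14; the ID shown is this guide's example migration ID, so substitute the one returned by your createmigration.

   showmigration -migrationid 1395864499741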
13. Updating Host Multipath Software and Unzoning from the Source EMC Storage System
   a. Windows 2008 R2 requires Windows native MPIO to manage the paths.
   b. EMC PowerPath must not be installed during the migration process.
      i. Stop all applications before removing EMC PowerPath.
      ii. Uninstall EMC PowerPath.
         1. If you are uninstalling EMC PowerPath from a Windows host that is booting over SAN from the source system, you will be prompted to restart before removal is completed. This restart is required and must be completed.
      iii. After removal, you will be prompted to restart the host in order for the changes to take effect. Do not restart at this point.
      iv. The removal of EMC PowerPath multipath software may have also disabled the MPIO installation without removing it. Manually check whether the native Microsoft MPIO is still loaded, and if it is, fully remove it at this time. Now restart the host.
   c. Zone the host to the destination HP 3PAR StoreServ Storage system to establish communication.
      i. Using the HP 3PAR Management Console, verify that the host whose LUNs are under migration has paths to as many HP 3PAR controller nodes as are zoned in the SAN.
   d. Configure MPIO multipath software on the host (a hedged command sketch follows this item).
      i. Enable the Windows native MPIO (if it is not already enabled).
      ii. Register HP 3PAR LUN types with MPIO by configuring MPIO to use 3PARdataVV (case sensitive) as the device hardware ID.
         1. You will be prompted to reboot the host in order for the change to take effect. Do not reboot at this point, since the MDM procedure will require stopping the Windows host.
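One way to do 13.d from an elevated command prompt is sketched below. I am assuming the ServerManager feature name Multipath-IO and the mpclaim -n flag (suppress the reboot prompt) here; if either differs on your build, the MPIO control panel (mpiocpl.exe, MPIO Devices tab) achieves the same result interactively.

   rem Enable the native MPIO feature if it is not already present
   powershell -command "Import-Module ServerManager; Add-WindowsFeature Multipath-IO"

   rem Register the 3PAR device string without triggering an immediate reboot
   mpclaim -n -i -d "3PARdataVV"

Either way, hold off on the reboot itself, since the host is about to be shut down for the MDM cutover anyway.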
   e. Shut down the Windows host. Leave the host offline until the migration is started.
   f. Unzone the source EMC Storage system from the host.
14. Starting the Data Migration from the Source EMC Storage System to the Destination HP 3PAR StoreServ Storage System
   a. From the HP 3PAR Online Import Utility, issue the startmigration command.
      i. startmigration -migrationid 1395864499741
      ii. The -migrationid (in the example above, 1395864499741) will have been assigned by the createmigration command.
   b. To view the status of the migration, issue the showmigration command.
      i. showmigration -migrationid 1395864499741
   c. Issue the showmigrationdetails command to verify the volumes being migrated.
      i. showmigrationdetails -migrationid 1395864499741
      ii. In the preparation stage, PROGRESS for each volume will be 0% and the TASK_ID will be unknown.
      iii. A task ID will be assigned and the percentage of progress will be shown while the task is being executed.
   d. The STATUS column in the showmigration command output will indicate success when all volumes have been migrated successfully.
15. Bringing the Windows Host Back Online
   a. Verify that the import has started and confirm that a TASK_ID is available for each volume by issuing the showmigrationdetails command.
      i. It is unsafe to bring the hosts back online if the TASK_ID number is not yet available in the output of showmigrationdetails.
   b. After the import has started for the volumes, bring the Windows host that was shut down back online.
      i. For a Windows host that is booting over SAN from the source system, the HBA BIOS must be reconfigured during host startup to select the HP 3PAR boot device. If this is not done, the host will not start up.
   c. Check the disk status by running diskpart.exe or by opening Disk Management.
      i. Depending on the Windows SAN Policy, the disks that were migrated will be offline or online. If offline, use diskpart or Disk Management to bring them online (a sketch follows this step).
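If the migrated disks come up offline, a diskpart sequence roughly like the one below brings one back. The disk number is a placeholder, the read-only clear is only needed if the SAN policy set that attribute, and you repeat the sequence per disk.

   diskpart
   DISKPART> list disk
   DISKPART> select disk 2
   DISKPART> attributes disk clear readonly
   DISKPART> online disk
   DISKPART> exit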
   d. You can now restart the host and restart all applications and services. The host may resume normal operations.