The VNX File has built-in replication that allows you to create full copies of a file system, either locally or to a remote VNX, for disaster recovery or for migrations. Replicating the Virtual Data Movers as well preserves the CIFS share and CIFS server information. Here, we will briefly go over the steps required to replicate a NAS file system for CIFS.
The first step is to confirm licensing for all of the required products.
[nasadmin@bobnas ~]$ nas_license -l
key status value
site_key online 54 f0 f5 3a
nfs online
cifs online
snapsure online
replicatorV2 online
You will need the snapsure and replicatorV2 licenses to complete this task. Contact your sales representative if you need to acquire them.
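If you just want a quick pass/fail check for those two licenses, you can filter the list with grep on the Control Station; the pattern here is just an example, and both lines should come back with a status of online.
[nasadmin@bobnas ~]$ nas_license -l | grep -E 'snapsure|replicatorV2'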
To set up replication properly, you will need to follow
these steps.
- Create a NAS-to-NAS relationship
- Create a Data Mover Interconnect
- Configure Usermapper
- Start VDM Replication
- Start File Systems Replication
Create a NAS-to-NAS Relationship
The first step is to tell each VNX about the other. You will use the ‘nas_cel’ command so that each VNX Control Station can send administrative commands to the other. The connection must be established in both directions, so that each VNX can query its peer.
Here is the syntax for the command.
[nasadmin@bobnas ~]$ nas_cel
usage: nas_cel
  -list
  | -delete { <cel_name> | id=<cel_id> } [-Force]
  | -info { <cel_name> | id=<cel_id> }
  | -update { <cel_name> | id=<cel_id> }
  | -modify { <cel_name> | id=<cel_id> }
      { [-passphrase <passphrase>] [-name <new_name>] [-ip <ipaddr>] }
  | -create <cel_name> -ip <ipaddr> -passphrase <passphrase>
  | -interconnect <interconnect_options>
To get detailed options for interconnect, please type "nas_cel -interconnect".
Breaking it down, here is what you need to create the
relationship.
nas_cel -create <remote VNX name> -ip <IP address of the remote Control Station> -passphrase <passphrase for the relationship>
The remote VNX name doesn’t have to be the DNS name, but it is a good idea to use the same name that the VNX is known by in DNS, just for simplicity’s sake.
The IP address is the IP of the Control Station. If you have two Control Stations in the VNX, the alias IP address can be used instead. This alias is set with the ‘nas_cs’ command.
The passphrase is used to match up the connections between the VNX systems. It will need to match when you create the NAS-to-NAS relationship on both sides of the replication. It is sent over the wire as plain text, so do not use one of your more secure passwords. As a rule, I have always used ‘nasadmin’ as my passphrase, and I encourage my customers to do the same.
In practice, here is what you should see on the VNX.
Source VNX
[nasadmin@bobnas ~]$ nas_cel -create dr_bobnas -ip 192.168.1.117 -passphrase nasadmin
operation in progress (not interruptible)...
id         = 2
name       = dr_bobnas
owner      = 0
device     =
channel    =
net_path   = 192.168.1.117
celerra_id = BB000C294BE3BE0000
passphrase = nasadmin
Target VNX
[nasadmin@dr_bobnas ~]$ nas_cel -create bobnas -ip 192.168.1.118 -passphrase nasadmin
operation in progress (not interruptible)...
id         = 2
name       = bobnas
owner      = 0
device     =
channel    =
net_path   = 192.168.1.118
celerra_id = BB000C293BA1980000
passphrase = nasadmin
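To confirm the relationship from either side, you can run the -list switch shown in the nas_cel usage output above. Each system should show an entry for the peer you just created (dr_bobnas on the source, bobnas on the target).
[nasadmin@bobnas ~]$ nas_cel -list
[nasadmin@dr_bobnas ~]$ nas_cel -list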
Create the Data Mover Interconnect
The next task is to create the Data Mover to Data Mover Interconnect. This is the actual IP link that the data will flow over. When possible, I like to use a dedicated Interface (VNX speak for an IP address) for replication, and I call this Interface ‘rep’ on both sides. It makes the purpose of the Interface easy to identify.
So, let’s check the syntax of the command.
[nasadmin@bobnas ~]$ nas_cel -interconnect
nas_cel -interconnect
  { -create <name>
      -source_server <movername>
      -destination_system {<cel_name> | id=<cel_id>}
      -destination_server <movername>
      -source_interfaces {<name_service_interface_name> | ip=<ipaddr>}
        [,{<name_service_interface_name> | ip=<ipaddr>},...]
      -destination_interfaces {<name_service_interface_name> | ip=<ipaddr>}
        [,{<name_service_interface_name> | ip=<ipaddr>},...]
      [-bandwidth <bandwidthSched>]
  | -modify {<name> | id=<interConnectId>}
      {[-source_interfaces {<name_service_interface_name> | ip=<ipaddr>},...]
       [-destination_interfaces {<name_service_interface_name> | ip=<ipaddr>},...]
       [-bandwidth <bandwidthSched>]
       [-name <newName>]}
  | -pause {<name> | id=<interConnectId>}
  | -resume {<name> | id=<interConnectId>}
  | -delete {<name> | id=<interConnectId>}
  | -info {<name> | id=<interConnectId> | -all}
  | -list [-destination_system {<cel_name> | id=<cel_id>}]
  | -validate {<name> | id=<interConnectId>}
  }
To create the Data Mover to Data Mover Interconnect, you
will use the following:
nas_cel -interconnect -create <interconnect name> -source_server <local data mover name> -destination_system <remote VNX system name> -destination_server <remote data mover name> -source_interfaces ip=<IP of local interface> -destination_interfaces ip=<IP of remote interface>
Breaking down the command.
-create <interconnect name> is the name of the interconnect. This is just a human-friendly name, but it is still a good idea to use a name that is descriptive and makes sense to you. I like to use the format SourceSystem_DMx_DestSystem_DMx so that it is clear what the source and target systems and Data Movers are.
-source_server <local Data Mover name> is the local
Data Mover you are configuring for IP Replication. This is server_2 in many environments, but if
you have more than one active Data Mover, you may be creating Interconnects for
server_3 or server_4.
-destination_system <remote VNX System name> is the
name of the remote system you defined in the NAS-to-NAS relationship step.
-destination_server <remote data mover name> is the name of the Data Mover you are replicating to. Again, this is normally server_2, but it may be server_3, server_4, etc., on larger arrays.
-source_interfaces ip=<IP of local interface> is where you define the IP address on the source array that will be linked up in the Interconnect. If you are typing this command out, remember that ‘interfaces’ is plural in this case.
-destination_interfaces ip=<IP of remote interface> is where you define the IP address on the remote Data Mover for the Interconnect. Again, if you are typing the command out, make sure you put the ‘s’ on ‘interfaces’.
In action, we should see the following.
Source VNX
[nasadmin@bobnas ~]$ nas_cel -interconnect -create bobnas_dm2_dr_bobnas_dm2 -source_server server_2 -destination_system dr_bobnas -destination_server server_2 -source_interfaces ip=192.168.1.120 -destination_interfaces ip=192.168.1.121
operation in progress (not interruptible)...
id = 20003
name = bobnas_dm2_dr_bobnas_dm2
source_server = server_2
source_interfaces = 192.168.1.120
destination_system = dr_bobnas
destination_server = server_2
destination_interfaces = 192.168.1.121
bandwidth schedule = uses available bandwidth
crc enabled = yes
number of configured replications = 0
number of replications in transfer = 0
status = The interconnect is OK.
Target VNX
[nasadmin@dr_bobnas ~]$ nas_cel -interconnect -create dr_bobnas_dm2_bobnas_dm2 -source_server server_2 -destination_system bobnas -destination_server server_2 -source_interfaces ip=192.168.1.121 -destination_interfaces ip=192.168.1.120
operation in progress (not interruptible)...
id = 20003
name = dr_bobnas_dm2_bobnas_dm2
source_server = server_2
source_interfaces = 192.168.1.121
destination_system = bobnas
destination_server = server_2
destination_interfaces = 192.168.1.120
bandwidth schedule = uses available bandwidth
crc enabled = yes
number of configured replications = 0
number of replications in transfer = 0
status = The interconnect is OK.
To verify that all connectivity is in place, there is a validate option. It runs a few network checks on your Interconnect and ensures that data can flow over it, just as your actual replication sessions will. This is a great way to catch any routing issues or firewalls that might impact your replication sessions.
nas_cel -interconnect -validate <Interconnect name>
So, to check, we would run the following.
Source VNX
[nasadmin@bobnas ~]$ nas_cel -interconnect -validate bobnas_dm2_dr_bobnas_dm2
bobnas_dm2_dr_bobnas_dm2: has 1 source and 1 destination interface(s); validating - please wait...ok
Target VNX
[nasadmin@dr_bobnas ~]$ nas_cel -interconnect -validate dr_bobnas_dm2_bobnas_dm2
dr_bobnas_dm2_bobnas_dm2: has 1 source and 1 destination interface(s); validating - please wait...ok
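If you ever need to review the Interconnects defined on a system, the -list option from the same usage output will show them, which comes in handy later when nas_replicate asks for an Interconnect name or id.
[nasadmin@bobnas ~]$ nas_cel -interconnect -list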
Configure Usermapper
Usermapper is one of the lesser-known parts of the VNX File. It is not as critical to CIFS operations as it once was, since many of its duties were offloaded to Secmap, which is stored inside the VDM, but some VNX operations, such as user quotas, still use the Usermapper database.
EMC recommends only one active primary Usermapper database in a single environment; all other VNXs are set as secondary Usermappers pointing to that single primary. I go one step further and put my primary Usermapper at my DR site and have all of my production Usermapper databases point to it, which means the production site is set as a secondary Usermapper.
My reasoning is that in most cases, the primary Usermapper at the DR site will be up and available. Any new user that is not in the Usermapper database at the production site will trigger a query to the DR site, which is running the primary Usermapper. The new user gets its SID-to-UID entry, and that entry is then cached by the secondary Usermapper at the production site. All subsequent requests are satisfied from the local cached copy. If there is an event that requires DR to come online, you just activate the DR side; new and existing users will be in the database and won’t notice any issues. When normal operation returns and the production VNX is back online, it should already be set (or, in the case of a full disaster, reconfigured) as a secondary Usermapper, and data access and Usermapper entries will continue as normal. Simple failover and failback with no issues.
So, let’s talk about what happens if we let the source side be the primary Usermapper. Users will populate the primary Usermapper on the source side, and the remote Usermapper will not populate, since it is never queried. Come failover time, the DR Usermapper database, which is empty, will need to be promoted to primary to give users access. In most cases (i.e., no quotas in use), this won’t cause any issues for end users. But once you bring the source back online, you then have two Usermapper databases with different entries for your SIDs. Your data will be fine, but you may have access and security issues due to the mismatched databases. It is fixable via EMC support, but it will take some time. This whole scenario can be avoided entirely by using a single primary Usermapper database at the DR site.
These steps are best done during off hours, since there is a blip in service on the production side while you are in between commands.
The steps for properly configuring Usermapper for DR are:
- Disable Usermapper on the DR side.
- Export the Usermapper group and user database to a file.
- Copy the files to the DR side.
- Disable the Usermapper Database on the Source side (This
will cut access for a few moments)
- Import and start the Usermapper Database on the DR side.
- Point the Source Usermapper Database to the DR side
(Services are restored at this point.)
Here is the syntax to run the Usermapper commands.
[nasadmin@bobnas ~]$ server_usermapper
usage: server_usermapper { <movername> | ALL }
  | -enable [primary=<ip>] [config=<path>]
  | -disable
  | -remove -all
  | -Import { -user | -group } [ -force ] pathname
  | -Export { -user | -group } pathname
So, in practice, here is how we configure Usermapper.
Disable Usermapper on the DR side.
[nasadmin@dr_bobnas ~]$ server_usermapper server_2 -d
server_2 : done
Next, we go to the source side and export the Usermapper
user and group files. The E in Export has to be a capital E.
[nasadmin@bobnas ~]$ server_usermapper server_2 -E -user user.mapper
server_2 : done
[nasadmin@bobnas ~]$ server_usermapper server_2 -E -group group.mapper
server_2 : done
Now, copy the exports to the DR side. I use the SCP tool, but you are welcome to
use any file copying software you like.
[nasadmin@bobnas ~]$ scp *.mapper nasadmin@192.168.1.117:/home/nasadmin/
The authenticity of host '192.168.1.117 (192.168.1.117)' can't be established.
RSA key fingerprint is 7c:2a:b5:23:24:40:e7:60:ce:3d:e8:33:46:30:cb:48.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.1.117' (RSA) to the list of known hosts.
(EMC EULA removed for brevity's sake.)
EMC VNX Control Station Linux release 3.0 (NAS 8.1.0)
nasadmin@192.168.1.117's password:
group.mapper                 100%    0     0.0KB/s   00:00
user.mapper                  100%    0     0.0KB/s   00:00
Disable Usermapper on the source side. If you are doing this on a production file system, do it during off hours, as this will cause an interruption until you finish the Usermapper configuration in the environment.
[nasadmin@bobnas ~]$ server_usermapper server_2 -d
server_2 : done
Now, import the Usermapper group and users to the database.
[nasadmin@dr_bobnas ~]$ server_usermapper server_2 -I -g group.mapper
server_2 : done
[nasadmin@dr_bobnas ~]$ server_usermapper server_2 -I -u user.mapper
server_2 : done
You can then start the Usermapper database on the DR side.
[nasadmin@dr_bobnas ~]$ server_usermapper server_2 -e
server_2 : done
Finally, to restore service and to get the Usermapper
database set properly, point the Source Usermapper database to the DR side. The IP used here is the DR Data Mover
IP. I normally use the IP used for
replication, but any IP on the DM will work.
[nasadmin@bobnas ~]$ server_usermapper server_2 -e primary=192.168.1.121
server_2 : done
If you were using the source for production data, services are restored at this point.
To check your work, you can just type the following.
server_usermapper server_2
Source VNX
[nasadmin@bobnas ~]$ server_usermapper server_2
server_2 : Usrmapper service: Enabled
Service Class: Secondary
Primary = 192.168.1.121
Target VNX
[nasadmin@dr_bobnas ~]$ server_usermapper server_2
server_2 : Usrmapper service: Enabled
Service Class: Primary
Configure VDM Replication
Now we are all set to start copying our Virtual Data Movers. The Virtual Data Movers, or VDMs, store all of the CIFS configuration. This includes the CIFS servers themselves, the Interface names that the CIFS servers use (but not the actual IPs; more on that in a moment), the shares, the local user and group databases, and other details that make the CIFS servers easy to protect. They do not contain any of the actual file data, since that is stored in the file systems.
What is important here is the Interface of the CIFS server. Let’s take our CIFS server, ‘bobstuff’, as an example.
Source VNX
[nasadmin@bobnas ~]$ server_cifs bobstuff
bobstuff :
64 Cifs threads started
Security mode = NT
Max protocol = SMB3.0
I18N mode = ASCII
CIFS service of VDM bobstuff (state=loaded)
Home Directory Shares DISABLED
Usermapper auto broadcast enabled
Usermapper[0] = [127.0.0.1] state:active (auto discovered)
Enabled interfaces: (All interfaces are enabled)
Disabled interfaces: (No interface disabled)
Unused Interface(s):
 if=rep l=192.168.1.120 b=192.168.1.255 mac=0:c:29:3b:a1:98
CIFS Server BOBSTUFF[] RC=2
 Full computer name=bobstuff.corp.anexinetdemo.com realm=CORP.ANEXINETDEMO.COM
 Comment='EMC-SNAS:T8.1.0.38015'
 if=bobstuff l=192.168.1.122 b=192.168.1.255 mac=0:c:29:3b:a1:98
 FQDN=bobstuff.corp.anexinetdemo.com
 Password change interval: 0 minutes
Look at the ‘if=bobstuff’ line: the CIFS server is using the Interface ‘bobstuff’. Disregard what the IP of the CIFS server is at the moment, and just keep in mind that the CIFS server needs the Interface ‘bobstuff’ to operate.
In order to be able to bring up the CIFS server on the DR side, we will need a matching environment there. The CIFS server details will come over in the VDM replication, and the data will come over in the file system replication. It is our job to ensure that the DR VNX is in the proper state to let the VDM bring up the CIFS servers. We do this by creating the Interface ‘bobstuff’ on the DR side, obviously with a different IP address.
Target VNX
[nasadmin@dr_bobnas ~]$ server_ifconfig server_2 -a
server_2 :
el30 protocol=IP device=el30
    inet=128.221.252.2 netmask=255.255.255.0 broadcast=128.221.252.255
    UP, Ethernet, mtu=1500, vlan=0, macaddr=44:41:52:54:0:5 netname=localhost
el31 protocol=IP device=el31
    inet=128.221.253.2 netmask=255.255.255.0 broadcast=128.221.253.255
    UP, Ethernet, mtu=1500, vlan=0, macaddr=44:41:52:54:0:6 netname=localhost
loop6 protocol=IP6 device=loop
    inet=::1 prefix=128
    UP, Loopback, mtu=32768, vlan=0, macaddr=0:0:0:0:0:0 netname=localhost
loop protocol=IP device=loop
    inet=127.0.0.1 netmask=255.0.0.0 broadcast=127.255.255.255
    UP, Loopback, mtu=32768, vlan=0, macaddr=0:0:0:0:0:0 netname=localhost
bobstuff protocol=IP device=cge0
    inet=192.168.1.123 netmask=255.255.255.0 broadcast=192.168.1.255
    UP, Ethernet, mtu=1500, vlan=0, macaddr=0:c:29:4b:e3:be
rep protocol=IP device=cge0
    inet=192.168.1.121 netmask=255.255.255.0 broadcast=192.168.1.255
    UP, Ethernet, mtu=1500, vlan=0, macaddr=0:c:29:4b:e3:be
Here, we see the ‘bobstuff’ Interface, but it has a different IP address. This means that when we fail over the replication environment, the CIFS server will start on the DR side with this IP address. If we are using Active Directory DNS with dynamic updates enabled, the DNS change will be automatic, and most users won’t even notice that CIFS services are running on the DR side. They may need to run an ‘ipconfig /flushdns’ or, at worst, reboot their workstation to pick up the new IP address, but in most cases they can reconnect once the failover is complete. If your DNS server does not support dynamic updates, a DNS change will be required before users can reconnect to the share.
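If the Interface does not already exist on the DR Data Mover, you can create it with server_ifconfig. The device name (cge0), IP, netmask, and broadcast below are just the values from this lab, and the switch layout is from memory, so substitute your own values and double-check against the server_ifconfig usage on your system.
[nasadmin@dr_bobnas ~]$ server_ifconfig server_2 -create -Device cge0 -name bobstuff -protocol IP 192.168.1.123 255.255.255.0 192.168.1.255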
Once we have the Interface created on the target side, we
are all set to start our VDM replication. The command to start it off is
nas_replicate, and it does a lot.
[nasadmin@bobnas ~]$ nas_replicate
Usage:
nas_replicate
  -list [ -id ]
  | -info { -all | id=<sessionId> | <name> }
  | -create <name>
      -source
        -fs { <fsName> | id=<fsId> }
        [ -sav { <srcSavVolStoragePool> | id=<srcSavVolStoragePoolId> }
          [ -storage <srcSavStorageSerialNumber> ] ]
      -destination
        { -fs { id=<dstFsId> | <existing_dstFsName> }
        | -pool { id=<dstStoragePoolId> | <dstStoragePool> }
          [ -storage <dstStorageSerialNumber> ] }
        [ -vdm <dstVdmName> ]
        [ -sav { id=<dstSavVolStoragePoolId> | <dstSavVolStoragePool> }
          [ -storage <dstSavStorageSerialNumber> ] ]
      -interconnect { <name> | id=<interConnectId> }
      [ -source_interface { ip=<ipAddr> | <nameServiceInterfaceName> } ]
      [ -destination_interface { ip=<ipAddr> | <nameServiceInterfaceName> } ]
      [ { -max_time_out_of_sync <maxTimeOutOfSync> | -manual_refresh } ]
      [ -overwrite_destination ]
      [ -tape_copy ]
      [ -background ]
  | -create <name>
      -source
        -vdm <vdmName>
      -destination
        { -vdm <existing_dstVdmName>
        | -pool { id=<dstStoragePoolId> | <dstStoragePool> }
          [ -storage <dstStorageSerialNumber> ] }
      -interconnect { <name> | id=<interConnectId> }
      [ -source_interface { ip=<ipAddr> | <nameServiceInterfaceName> } ]
      [ -destination_interface { ip=<ipAddr> | <nameServiceInterfaceName> } ]
      [ { -max_time_out_of_sync <maxTimeOutOfSync> | -manual_refresh } ]
      [ -overwrite_destination ]
      [ -background ]
  | -start { <name> | id=<sessionId> }
      [ -interconnect { <name> | id=<interConnectId> } ]
      [ -source_interface { ip=<ipAddr> | <nameServiceInterfaceName> } ]
      [ -destination_interface { ip=<ipAddr> | <nameServiceInterfaceName> } ]
      [ { -max_time_out_of_sync <maxTimeOutOfSync> | -manual_refresh } ]
      [ -overwrite_destination ]
      [ -reverse ]
      [ -full_copy ]
      [ -background ]
  | -modify { <name> | id=<sessionId> }
      [ -name <new name> ]
      [ -source_interface { ip=<ipAddr> | <nameServiceInterfaceName> } ]
      [ -destination_interface { ip=<ipAddr> | <nameServiceInterfaceName> } ]
      [ { -max_time_out_of_sync <maxTimeOutOfSync> | -manual_refresh } ]
  | -stop { <name> | id=<sessionId> }
      [ -mode { source | destination | both } ]
      [ -background ]
  | -delete { <name> | id=<sessionId> }
      [ -mode { source | destination | both } ]
      [ -background ]
  | -failover { <name> | id=<sessionId> }
      [ -background ]
  | -switchover { <name> | id=<sessionId> }
      [ -background ]
  | -reverse { <name> | id=<sessionId> }
      [ -background ]
  | -refresh { <name> | id=<sessionId> }
      [ -source { <ckptName> | id=<ckptId> } -destination { <ckptName> | id=<ckptId> } ]
      [ -background ]
Don’t concern yourself with all of these switches at this
time. We are just going to use a few.
The syntax on our VDM replication is:
nas_replicate -create <replication name> -source -vdm <vdm to replicate> -destination -pool <destination pool name> -interconnect <interconnect name> -source_interface ip=<IP address of Source VNX Interconnect> -destination_interface ip=<IP address of Target VNX Interconnect>
So this command is not that complicated.
-create <replication name> is just the friendly name for our replication session. I tend to name all of my replication sessions rep_<thing I’m replicating>.
-vdm <vdm to replicate> is the name of the VDM we are replicating. With this command syntax, make sure you do not already have a VDM with the same name on the target side. If you do, things will still work, but the replica will show up as ‘bobstuff_replica1’ on the target side, which does not look as clean.
-pool <destination pool name> is the storage pool on the target side that the VDM will be carved from. It could be ‘Pool 0’ or ‘myPool’. Remember, if your pool name has a space in it, you may need to surround it with quotes for the command to work.
-interconnect <interconnect name> is the interconnect
that we created in the earlier steps.
-source_interface ip=<IP address of Source VNX Interconnect> is the source Interconnect IP you wish to use. Notice that the word ‘interface’, in this case, is singular.
-destination_interface ip=<IP address of Target VNX Interconnect> is the target Interconnect IP that we defined earlier. This one is singular as well.
So, let’s run this against our source VDM.
[nasadmin@bobnas ~]$ nas_replicate -create rep_vdm_bobstuff -source -vdm bobstuff -destination -pool clar_r5_performance -interconnect bobnas_dm2_dr_bobnas_dm2 -source_interface ip=192.168.1.120 -destination_interface ip=192.168.1.121
OK
The OK means that it created the VDM on the target side
successfully and started to copy the VDM over the Interconnect. VDMs are relatively
small, so they do not take long to sync.
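If you want to confirm that the replica VDM was actually created on the DR side, listing the VDMs there should now show bobstuff; the -list -vdm switches below are from memory, so verify them against the nas_server usage on your system.
[nasadmin@dr_bobnas ~]$ nas_server -list -vdm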
We can check the status of the VDM replication with this
command.
Source VNX
[nasadmin@bobnas ~]$ nas_replicate -l
Name                   Type    Local Mover   Interconnect            Celerra      Status
rep_vdm_bobstuff       vdm     server_2      -->bobnas_dm2_dr_bo+    dr_bobnas    OK
[nasadmin@bobnas ~]$ nas_replicate -i rep_vdm_bobstuff
ID = 110_BB000C293BA198_0000_110_BB000C294BE3BE_0000
Name = rep_vdm_bobstuff
Source Status = OK
Network Status = OK
Destination Status = OK
Last Sync Time = Tue Mar 31 22:09:31 EDT 2015
Type = vdm
Celerra Network Server = dr_bobnas
Dart Interconnect = bobnas_dm2_dr_bobnas_dm2
Peer Dart Interconnect = dr_bobnas_dm2_bobnas_dm2
Replication Role = source
Source VDM = bobstuff
Source Data Mover = server_2
Source Interface = 192.168.1.120
Source Control Port = 0
Source Current Data Port = 0
Destination VDM = bobstuff
Destination Data Mover = server_2
Destination Interface = 192.168.1.121
Destination Control Port = 5085
Destination Data Port = 8888
Max Out of Sync Time (minutes) = 5
Current Transfer Size (KB) = 0
Current Transfer Remain (KB) = 0
Estimated Completion Time =
Current Transfer is Full Copy = No
Current Transfer Rate (KB/s) = 0
Current Read Rate (KB/s) = 0
Current Write Rate (KB/s) = 0
Previous Transfer Rate (KB/s) = 5781
Previous Read Rate (KB/s) = 1985
Previous Write Rate (KB/s) = 508
Average Transfer Rate (KB/s) = 5781
Average Read Rate (KB/s) = 1985
Average Write Rate (KB/s) = 508
If you see a date in the Last Sync Time field, then the VDM is in sync; by default, a VDM replication session stays within 5 minutes of sync (the Max Out of Sync Time shown above).
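If you ever want to push a sync outside of that schedule, the -refresh option shown in the nas_replicate usage above forces an update of a session, typically run from the source side, for example:
[nasadmin@bobnas ~]$ nas_replicate -refresh rep_vdm_bobstuff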
Now we are ready to start the replication of the actual data. The syntax will seem very similar.
nas_replicate -create <replication name> -source -fs <fs to replicate> -destination -pool <destination pool name> -vdm <vdm to mount the file system to> -interconnect <interconnect name> -source_interface ip=<IP address of Source VNX Interconnect> -destination_interface ip=<IP address of Target VNX Interconnect>
With minimal changes to the command we used for VDM
replication, we can start up a file system replication.
-create <replication name> Again, this is where we
have our friendly name for the replication session. I still start my replications with rep_.
-fs <fs to replicate> This is where we define the name
of the file system we want to replicate.
-pool <destination pool name> is where we define the storage pool that the destination file system will be carved from. Make sure you have enough space on the target pool for the creation of the file system (see the quick size check after this list). If the source is thin, the target will be thin; if the source is thick, the target will be thick. It will also be exactly the same size, as this is a requirement for Replicator.
-vdm <vdm to mount the file system to> is where we tell the VNX to mount the destination file system on the VDM.
-interconnect <interconnect name> for the name of the
Interconnect we wish to use.
-source_interface ip=<ip of the source VNX
Interconnect> to enter the source IP of the Interconnect to use.
-destination_interface ip=<IP address of Target VNX
Interconnect> to enter the target IP of the Interconnect to use.
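Before kicking off the replication, it is worth comparing the source file system size against the free space in the target pool. The -size switches below are from memory, so confirm them against the usage output of nas_fs and nas_pool on your systems.
[nasadmin@bobnas ~]$ nas_fs -size bobstuff
[nasadmin@dr_bobnas ~]$ nas_pool -size clar_r5_performance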
So in practice, here is the file system replication in action. When you run this command, do not panic if it takes some time to complete. Remember, to get the OK, the system needs to carve out the file system on the target side, and if you have a large, thick file system, it can take a while to report back the OK. Afterwards, you can get the status of the replication with the nas_replicate -info command.
Source VNX
[nasadmin@bobnas ~]$ nas_replicate -create rep_fs_bobstuff -source -fs bobstuff -destination -pool clar_r5_performance -vdm bobstuff -interconnect bobnas_dm2_dr_bobnas_dm2 -source_interface ip=192.168.1.120 -destination_interface ip=192.168.1.121
OK
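As a quick sanity check, the DR side should now show a file system named bobstuff of the same size. A plain nas_fs -list on the DR Control Station (your output will differ) will confirm that it was created.
[nasadmin@dr_bobnas ~]$ nas_fs -list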
And now to check up on the status.
[nasadmin@bobnas ~]$ nas_replicate -i rep_fs_bobstuff
ID = 113_BB000C293BA198_0000_116_BB000C294BE3BE_0000
Name = rep_fs_bobstuff
Source Status = OK
Network Status = OK
Destination Status = OK
Last Sync Time = Tue Mar 31 22:28:18 EDT 2015
Type = filesystem
Celerra Network Server = dr_bobnas
Dart Interconnect = bobnas_dm2_dr_bobnas_dm2
Peer Dart Interconnect = dr_bobnas_dm2_bobnas_dm2
Replication Role = source
Source Filesystem = bobstuff
Source Data Mover = server_2
Source Interface = 192.168.1.120
Source Control Port = 0
Source Current Data Port = 0
Destination Filesystem = bobstuff
Destination Data Mover = server_2
Destination Interface = 192.168.1.121
Destination Control Port = 5085
Destination Data Port = 8888
Max Out of Sync Time (minutes) = 10
Current Transfer Size (KB) = 0
Current Transfer Remain (KB) = 0
Estimated Completion Time =
Current Transfer is Full Copy = No
Current Transfer Rate (KB/s) = 0
Current Read Rate (KB/s) = 0
Current Write Rate (KB/s) = 0
Previous Transfer Rate (KB/s) = 12738
Previous Read Rate (KB/s) = 6759
Previous Write Rate (KB/s) = 807
Average Transfer Rate (KB/s) = 12738
Average Read Rate (KB/s) = 6759
Average Write Rate (KB/s) = 807
From this we can see that our file system is in sync. If your file system is not in sync yet, don’t worry; it can take some time to move the data over the Interconnect, depending on the speed of the link and the amount of data it needs to copy.
At this point, give yourself a pat on the back: you have successfully protected your file data by replicating it to a remote VNX. I hope this guide gave you a glimpse of how VNX File replication works and how to configure it from the VNX command line.
There are many other features to IP replication, such as the different failover types, local replication, NFS replication, bandwidth throttling, migrations, cascading replication, and DR testing, to name a few.
Many of these topics can be found in the EMC
guide, Using VNX Replicator Release 8.1.
Of course, feel free to reach out to Anexinet; one of our consultants would be happy to develop a VNX File DR plan to fit your business data needs.