Well, I’m happy to say that replication to my offsite facility is finally up and running. Let me share the final steps that wrapped up this project.
You might recall that in my previous offsite replication posts, I had a few extra challenges. We were a single-site organization, so in order to get replication up and running, infrastructure at a second site had to be designed and built. My topology still reflects what I described in the first installment, but simple pictures don’t convey the work that went into setting this up. It was certainly a good exercise in keeping my networking skills sharp, and it renewed my appreciation for the folks who specialize in complex network configurations and address management. They probably seldom hear words of thanks for, say, a well-designed subnetting strategy. They are an underappreciated bunch for sure.
My replication had been running for some time, but all within the same internal SAN network. While other projects prevented me from completing this sooner, the delay gave me a good opportunity to observe how replication works.
Here is the way my topology looks fully deployed.
Most colocation facilities or datacenters give you about two square feet to move around in (only a slight exaggeration), so it’s not the place you want to be contemplating why something isn’t working. It’s also no fun realizing you don’t have the remote access you need to make the necessary modifications, and you can’t just drive to the CoLo. My plan for getting this second site running was simple: build up everything locally (switchgear, firewalls, SAN, etc.) and change the topology at my primary site to emulate the second site.
Here is the way it was running while I worked out the kinks.
All replication traffic occurs over TCP port 3260, and both locations had to accommodate this. I also had to ensure I could manage the array living offsite. Testing this out with the modified infrastructure at my primary site allowed me to verify traffic was flowing correctly.
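One quick way to verify the port is open end to end is to test from a host on the source SAN segment before anything moves offsite. A minimal check, using a hypothetical address for the target group:

telnet 10.20.1.10 3260

If the firewalls are passing the traffic, you’ll get a blank session rather than a connection timeout.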
The steps to transition two SAN replication partners from a single network to two networks (still onsite) were:
- Verify that all replication is running correctly while the two replication partners are in the same SAN network.
- You will need a way to split the feed from your ISP, so if you don’t have one already, place a temporary switch at the primary site on the outside of your existing firewall. This will allow you to emulate the physical topology of the real site, while having the convenience of all of the equipment located at the primary site.
- After the 2nd firewall (destined for the CoLo) is built and configured, place it on that temporary switch at the primary site.
- Place something (a spare computer, perhaps) on the SAN segment of the 2nd firewall so you can test basic connectivity (to ensure routing is functioning, etc.) between the two SAN networks. A quick sketch of this check appears after the list.
- Pause replication on both ends, and take the target array and its switchgear offline.
- Plug the target array’s Ethernet ports into the SAN switchgear for the second site, then change the IP addressing of the array/group so that it’s running under the correct netblock.
- Re-enable replication and run test replicas, starting with the Group Manager, then ASM/VE, then ASM/ME.
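For the connectivity test mentioned above, simple ping and tracert runs from the spare computer are enough to confirm that routing between the two SAN networks is working. A minimal sketch, using a hypothetical address for a host on the other SAN network:

ping 10.10.1.5
tracert 10.10.1.5

If tracert shows the expected hops through both firewalls, the route relationships are doing their job.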
It would be crazy not to take this one step at a time, as you learn a little at each step and can identify issues more easily. Step 3 introduced the most problems, because traffic has to traverse routers that are also secure gateways. Not only do you have to consider a couple of firewalls, you also run into other considerations that may be undocumented. For instance:
- ASM/VE replication occurs courtesy of vCenter, but ASM/ME replication is configured inside the VM. Sure, it’s obvious, but so obvious that it’s easy to overlook. That means any topology change will require adjustments in each VM that utilizes guest-attached volumes. You will need to re-run the “Remote Setup Wizard” to adjust the IP address of the target group you will be replicating to.
- ASM/ME also uses a VSS control channel to talk with the array. If you changed the target array’s group and interface IP addresses, you will probably need to adjust which IP range is allowed for VSS control.
- Not so fast, though. VMs that use guest-initiated iSCSI volumes typically have those dedicated iSCSI virtual network cards configured with no default gateway, and you never want to enter more than one default gateway in this sort of situation. The proper way to handle this is to add a persistent static route, and it needs to be done before you run the Remote Setup Wizard mentioned above. Fortunately, the method hasn’t changed in at least a decade (a filled-in example follows this list). Just type:
route -p add [destination network] mask [subnet mask] [gateway] metric [metric]
- Certain kinds of traffic that pass almost without a trace across a layer 2 segment show up right away when pushed through very sophisticated firewalls whose default stance is to deny everything unless explicitly allowed. Fortunately, Dell puts out a nice document on their EqualLogic arrays.
- If possible, it will be easiest to configure your firewalls with route relationships between the source SAN and the target SAN. It may complicate your rulesets (NAT relationships are a little more intelligent when it comes to rulesets in TMG), but it simplifies how the nodes see each other. This is not to say that NAT won’t work, but it might introduce issues that aren’t documented.
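To make that persistent static route concrete, here is a filled-in example with hypothetical values: a target SAN network of 10.20.1.0/24 reachable through a gateway at 192.168.50.1 on the iSCSI segment:

route -p add 10.20.1.0 mask 255.255.255.0 192.168.50.1 metric 1

Run route print afterward to confirm the entry; the -p switch makes the route persistent, so it survives a reboot.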
Step 7 exposed an unexpected issue: terribly slow replicas. Slow even though the traffic wasn’t yet going across a WAN link. We’re talking VERY slow, as in 1/300th the speed I was expecting. The good news is that this problem had nothing to do with the EqualLogic arrays. It was the upstream switch I was using to split my feed from my ISP; the temporary switch was not negotiating correctly, which caused packet fragmentation. Once that switch was replaced, all was good.
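If you run into something similar, one quick way to check for fragmentation along the path is a ping with the don’t-fragment flag set. A minimal sketch with a hypothetical target address (1472 bytes of payload plus 28 bytes of headers fills a standard 1500-byte Ethernet frame):

ping -f -l 1472 10.20.1.10

If that fails while a smaller payload succeeds, something in the path is fragmenting or dropping your full-size frames.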
The other strange issue was that even though replication was running great in this test environment, I was getting errors with VSS. ASM/ME would indicate “No control volume detected” at startup, and even though replicas were running, they couldn’t be accessed, used, or managed in any way. After a significant amount of experimentation, I eventually opened a case with Dell Support. Running out of time to troubleshoot, I decided to move the equipment offsite so that I could meet my deadline. When I came back to the office, VSS control magically worked. I suspect the array simply needed to be restarted after I changed the IP addressing assigned to it.
My CoLo facility is an impressive site. Located in the Westin Building in Seattle, it is also home to the Seattle Internet Exchange (SIX). Some might think of it as another insignificant building in Seattle’s skyline, but it plays an important part in efficient peering for major service providers. Much of the building has been converted from a hotel into a top-tier, highly secure datacenter, and a location where ISPs get to bridge over to other ISPs without hitting the backbone. It has dedicated water and power supplies, full facility failover, and elevator shafts remodeled to provide nothing but risers for all of the cabling. Having a CoLo facility that is also an Internet exchange point for your ISP is a nice combination.
Since I had emulated the offsite topology internally, I was able to simply plug in the equipment and turn it on, confident that it would work. It did.
My early measurements on my feed to the CoLo are quite good. Since the replication times include buildup and teardown of the sessions, a more accurate measurement of sustained throughput would come from larger replicas. The early numbers show my 30 Mbps circuit translating into replication rates in the neighborhood of 10 to 12 GB per hour (about 205 MB per minute, or 3.4 MB per second). If multiple jobs are running at the same time, each job’s rate is affected by the others, but the overall throughput appears to be about the same. Other traffic coming to and from our site will also affect speeds.
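As a sanity check on those numbers:

30 Mbps / 8 = 3.75 MB per second theoretical maximum
3.4 MB per second observed / 3.75 MB per second = roughly 90% of line rate

Not bad at all, considering those rates include session buildup and teardown.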
There is still a bit of work to do. I will monitor the resources and tweak the scheduling to minimize overlap between the replication jobs. In past posts, I’ve mentioned that I’ve been considering separating the guest OS swap files from the VMs in an effort to reduce replication size. Apparently I’m not the only one thinking about this, as I stumbled upon this article. It’s interesting, but a fair amount of work. I’m not sure I want to go down that road yet.
I hope this series has helped someone with their plans to deploy replication. Not only was it fun, but it is a relief to know that my data, and the VMs that serve up that data, are being automatically replicated to an offsite location.
