
Five Workload-to-Cloud Migration Methods: Part 5

January 6, 2014

Software agent-based data replication between old and new operating-system and application environments

Today's subject in our six-post series about migration to the cloud is … well, too long to repeat (it's the heading above).

Hosting business applications in the cloud offers many companies a very cost-effective and flexible IT platform. But how can you get your existing applications and data into the cloud without halting processing on production servers or moving large data volumes in one shot? Data replication tools such as Double-Take let you replicate over time without bringing systems down for prolonged periods.

Consider agent-based replication when you are migrating large data sets, time is not an issue, and your cloud services provider (CSP) is far away. Because the replication runs continuously in the background, the minor latency caused by network distance is irrelevant.

Install the replication software on the old server and the new destination server. Let the data trickle through during business hours, when Internet usage is at its peak, and flow freely when there are no users around to complain about how slow things are. All the while, production servers continue to run, and you can maintain this synchronized state indefinitely. When the new server catches up with the old one, stop the replication and test the new servers (testing breaks the sync between old and new). Once you're satisfied that all is well, restart synchronization until the new server catches up again. When replication and testing are complete, fail over to the new environment.
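
To make the trickle-by-day, flood-by-night pattern concrete, here is a minimal sketch of a time-of-day throttling schedule in Python. The function names and limits are illustrative assumptions; real products such as Double-Take expose equivalent scheduling through their own management consoles, not through code like this.

    from datetime import datetime

    BUSINESS_HOURS = range(8, 18)   # 08:00-17:59 local time, Mon-Fri
    BUSINESS_LIMIT_MBPS = 10        # trickle while users are on the network
    OFF_HOURS_LIMIT_MBPS = 0        # 0 means unthrottled overnight

    def current_throttle_mbps(now=None):
        """Return the bandwidth cap (Mbps) the agent should apply right now."""
        now = now or datetime.now()
        if now.weekday() < 5 and now.hour in BUSINESS_HOURS:
            return BUSINESS_LIMIT_MBPS
        return OFF_HOURS_LIMIT_MBPS

    print(f"Current replication cap: {current_throttle_mbps()} Mbps")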

More so than with the other methods explored in this blog series, software agent-based replication carries extra cost: the software itself and the skill sets to manage it.

Internet bandwidth at both ends needs to be properly sized so that it can keep up with the rate of data change, which may bring additional cost, especially in rapidly changing environments such as databases. The complexity of this method increases with the number of servers. However, the risks are lowered by the fact that you can stop the replication process and test at any time.
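
A quick back-of-the-envelope calculation shows how the change rate drives the link size. The figures below are illustrative assumptions, not vendor guidance:

    def required_mbps(change_gb_per_day, replication_hours):
        """Bandwidth needed to push one day's changes through the nightly window."""
        bits_per_day = change_gb_per_day * 8 * 1000**3   # GB/day -> bits/day
        return bits_per_day / (replication_hours * 3600) / 1e6

    # Example: 50 GB of daily change replicated in a 10-hour overnight
    # window calls for roughly an 11 Mbps sustained link at each end.
    print(f"{required_mbps(50, 10):.1f} Mbps")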

The maintenance window is short, and the actual failover from old to new doesn't take long. One caveat: unexpectedly large changes to source-system data, such as those caused by disk defragmentation programs, effectively restart the replication process from the beginning. In the extreme, the two systems may never reach parity without changing the conditions.
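
The same arithmetic explains the caveat. Here is a hedged sketch, assuming a fixed backlog and steady rates, of when the two systems can and cannot reach parity:

    def hours_to_parity(backlog_gb, change_gb_per_hour, throughput_gb_per_hour):
        """Hours until the replica catches up, or None if it never can."""
        if throughput_gb_per_hour <= change_gb_per_hour:
            return None   # the backlog grows faster than it drains
        return backlog_gb / (throughput_gb_per_hour - change_gb_per_hour)

    print(hours_to_parity(200, 2, 5))   # ~66.7 hours to catch up
    print(hours_to_parity(200, 6, 5))   # None: parity unreachable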

Next week we will close out this series when we look at using software agents for full server failover from the original OS into the cloud.

