
“Flying the Kite” or Physically Connecting to the Cloud in a Hybrid IT World


Whether or not Ben Franklin actually flew a kite into a storm is debatable, but the story has stood through time as the best example of human beings physically connecting with a cloud…until now.

The “cloud” buzzword has made its way out of tech magazines and deep into the culture of our globally connected world. As such, it has us either connected or trying to connect to this amorphous, powerful resource. But what about those people who expended their capital on physical assets for financial reasons, or whose workload requirements preclude them from benefiting from this ethereal entity? Are they destined for a life of power cords, memory sticks and hard drive outages?

Of course not. They just need to understand how and in what ways the cloud can supplement their existing infrastructure. Virtual infrastructures are not an “all-or-nothing” play. More and more companies are finding that a balance between physical and virtual assets is required to deliver the performance, user experience and/or compliance requirements they need. So let’s talk a bit about some ways that can be technically accomplished.

Scenario One:

Virtual Web Front End – Physical Application or Database Back End

One of the most common deployments of a hybrid infrastructure involves placing your web servers (and load balancer) in a virtual infrastructure and tying physical application servers and/or database servers to them via the physical switch layer that lives between them.

Normally configured on 1 Gbps (minimum) or 10 Gbps ports, this design (Fig. 1.1) allows extremely high-I/O or otherwise demanding application workloads to be crunched by physical machines with significant memory and CPU resources dedicated to those tasks. Larger databases that are now being configured to run “in-memory” can become cost-inefficient to deploy in a “shared resource” world, so this design works well. Furthermore, it allows for quick deployment of additional front-end servers should an application or site be featured on websites like Ars Technica or Wired.
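As a rough sketch of this design, the virtual web tier might front the physical back end with a reverse-proxy configuration like the one below. The upstream names, addresses and ports are hypothetical placeholders, not a prescribed layout; substitute the physical application servers that actually sit across your switch layer.

```nginx
# Hypothetical: virtual load balancer/web tier forwarding requests
# across the physical switch layer to physical application servers.
upstream physical_app_tier {
    server 10.0.20.11:8080;   # physical app server 1
    server 10.0.20.12:8080;   # physical app server 2
    keepalive 32;             # reuse connections over the 1/10 Gbps links
}

server {
    listen 80;
    location / {
        proxy_pass http://physical_app_tier;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
```

Because the virtual front end is stateless here, adding capacity during a traffic spike is just cloning more of these proxy/web instances; the physical tier behind the upstream block doesn’t change.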

You are poised to capture all the market share for your line of business. Is there a lightning strike danger moment possible in this design? You bet. If you don’t take care to diversify your virtual front end and just assume the cloud will always be available, then you risk downtime despite your physical infrastructure humming along with no issues. Before venturing into this design, make sure your provider is accounting for outages on individual components within its cloud. “What happens when a single host fails?” “What happens when one of the storage processors in the SAN dies?” “How much capacity is available in case of ramp needs?” Those are just some of the questions you need to address prior to launch.

(Fig. 1.1) Colocation Environment - Cloud Environment Chart

Scenario Two:

Physical Production Environment – Cloud DR Environment

IT happens, right? By that I mean: something is bound to break. We often use the expression “hard drives do three things: read, write and fail.” Power outages, cable cuts, natural disasters, Bob from HR messing with something in the closet…whatever the issue, something will go wrong. Disaster Recovery (DR) has long been the most talked-about, least acted-upon initiative in the IT world. One of the reasons for this has been capital cost: it has traditionally taken a 2X+ investment to protect what you have.

With the cloud, there are now options where you can leave your servers right where they are but recreate a workable environment off-site, via replication to virtual instances of the same machines. There are many ways to do this (covering them all would exceed the scope of this post), but we are willing to discuss any of them with you.

What you need to know from this post is that, increasingly, what you are running in production is becoming less relevant to your ability to replicate off-site. Many of the tools used to solve these DR scenarios are agnostic in their approach and allow for “in-line conversion.” That means that just because you deployed on physical assets, Hyper-V or VMware within your server room, you are not stuck protecting yourself in a like-for-like environment. The ability to leverage large cloud deployments is how you can reduce your spend from 2X+ to something much more tolerable.

Is there a lightning strike danger moment possible here? Again, you bet. It’s more of a caveat than a lightning strike, but replication (especially of large data sets) requires bandwidth (BW), luckily an ever-decreasing cost per megabit, and consistency. This means that if the connection fails for extended periods of time, you might be playing catch-up forever and never return to a protected state without a bit of manual intervention.

Be sure to ask your provider, “What is the plan during an extended connectivity outage?” so you can be aligned on what might be required. Also, please, please, please be aware of your data change rate for all the servers being protected. This is the #1 piece of information (followed by BW capacity) that will determine success or failure in this solution.
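The interplay between change rate and bandwidth is easy to sanity-check with arithmetic before you sign anything. The figures below (50 GB of daily change, a 100 Mbps link, 70% usable efficiency) are hypothetical examples, not recommendations; plug in your own measurements.

```python
# Back-of-envelope check: can the replication link keep up with the
# daily data change rate? All numbers here are hypothetical.

def replication_hours_per_day(daily_change_gb, link_mbps, efficiency=0.7):
    """Hours of transfer needed per day to ship the daily change set.

    daily_change_gb: data changed per day across all protected servers (GB)
    link_mbps:       replication link speed in megabits per second
    efficiency:      fraction of the link usable for replication traffic
    """
    usable_mbps = link_mbps * efficiency
    megabits = daily_change_gb * 8 * 1024   # GB -> megabits
    return megabits / usable_mbps / 3600.0  # seconds -> hours

# Example: 50 GB/day of change over a 100 Mbps link at 70% efficiency
hours = replication_hours_per_day(50, 100)
print(round(hours, 1))  # → 1.6
```

If that number approaches 24, you can never catch up after an outage without manual intervention (seeding drives, throttling lower-priority servers), which is exactly the “protected state” conversation to have with your provider up front.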

Scenario Three:

Non-sensitive Information in the Cloud – Sensitive Information Behind the Curtain

We do a fair amount of construction work here at Peak 10 with all of the data center builds we are involved in. One acronym we know well from the construction world is “AHJ,” or authority having jurisdiction. Basically, it means that some codes and rules will be “interpreted” at the time of inspection; you simply have to adhere to the inspector’s take on the wording. Such is the world of IT compliance.

“AHJ” is replaced by “discretion of the auditor,” but the outcome is the same. Every day the rules seem to slide one way or the other on PCI or HIPAA data and whether it can be in the cloud or not. Sometimes the way you have chosen to store, process and transfer this type of data forces you into a physical world. You are required to greatly reduce the physical and virtual exposure to said data, so you lock it away behind a physical curtain of your making (please don’t use an actual curtain though as that will not pass an audit).

Doing this doesn’t mean you are completely stuck. Many times we see (much like in Scenario One) a portion of the environment deployed out in the cloud, with the connection between the cloud and the physical data store fully encrypted (we can’t stress this enough). The sensitive data resides on your glass-cased physical box. Again, this solution means you can leverage the quick scale or efficiency gains of the cloud without sacrificing your compliance protection.
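A minimal sketch of what “fully encrypted” means in practice for that cloud-to-data-store hop: the client in the cloud tier should refuse any connection that is unverified or uses a legacy protocol. The host name, port and CA bundle below are hypothetical placeholders, and real deployments typically layer this under a VPN or database driver rather than raw sockets.

```python
# Sketch: enforce a verified, modern-TLS-only channel from the cloud
# tier to the physical data store. Names below are placeholders.
import socket
import ssl

def make_strict_tls_context(ca_file=None):
    """Build a TLS context that refuses unverified or legacy connections."""
    ctx = ssl.create_default_context(cafile=ca_file)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # no SSLv3/TLS 1.0/1.1
    ctx.check_hostname = True                     # verify server identity
    ctx.verify_mode = ssl.CERT_REQUIRED           # never accept plaintext fallback
    return ctx

def connect_to_data_store(host, port, ca_file=None):
    """Open the encrypted channel to the physical database server."""
    raw = socket.create_connection((host, port), timeout=5)
    return make_strict_tls_context(ca_file).wrap_socket(
        raw, server_hostname=host
    )
```

The point for an audit is that the policy (certificate verification, minimum protocol version) lives in code or configuration you can show the auditor, rather than being an informal property of the network.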

Lightning strike moment? It’s more of a caveat than a strike, but this design can extend the scope of your audit. It will be imperative to select a provider that can ease the audit process by being able to explain its infrastructure to the satisfaction of the auditor as opposed to making that “your problem.”

So go ahead. Touch the cloud. It has many benefits and opportunities to solve your real business problems. Just be sure to ask the right questions when reaching out so that you can avoid being shocked.
