Get Smart: The Case for Intelligent Application Mobility in the Cloud

Realizing the full benefits of cloud computing

Traditional approaches to scaling and distributing transactional applications need careful consideration - the phrase "walking on eggshells" springs to mind - and are not amenable to frequent reconfiguration. Consequently, these approaches are only suitable for situations where you are master of all you survey and where you are prepared to over-provision resources in order to reduce the need for reconfiguration. Worse, they are major stumbling blocks to taking full advantage of the elasticity and cost benefits of cloud computing, whether you choose to work in private, public or hybrid clouds. To remove these roadblocks, a new approach to scaling and distributing applications is required. We need to get smart and exploit intelligent application mobility.

To realize the full benefits of cloud computing, application services must be built in a way that gives cloud providers the freedom to deploy them in the most efficient way while respecting any business constraints. Any technical constraints that introduce rigidity into the application are sure to impede the cloud provider's ability to do this.

The principle of building systems for flexible deployment has been well understood for many years by developers of web applications, who use short-lived stateless event processing to great effect. The beauty of this approach is that it doesn't matter which instance of a web server handles any given request. Consequently, you can have as many instances of a web server as are needed to handle any given workload, with a load-balancer dynamically spreading the workload across those instances. Your web server farm can be scaled out or scaled back very easily - a great example of "elasticity." The flexibility and location transparency inherent in web applications make them ideally suited to running in the cloud. The cloud provider can create as many or as few instances of an application service as are required to meet the demand at any given time, thereby delivering the elasticity and pay-per-usage advantages of cloud computing.
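To make this concrete, here is a minimal sketch in Java (all names hypothetical) of why stateless instances are so easy to balance: since no request depends on a particular server, a trivial round-robin choice is enough.

    import java.util.List;
    import java.util.concurrent.atomic.AtomicInteger;

    // Minimal sketch, assuming a non-empty list of identical stateless instances.
    public class RoundRobinBalancer {
        private final List<String> instances;      // grows or shrinks with demand
        private final AtomicInteger next = new AtomicInteger();

        public RoundRobinBalancer(List<String> instances) {
            this.instances = instances;
        }

        // Any instance will do; no request is tied to a particular server.
        public String pick() {
            return instances.get(Math.floorMod(next.getAndIncrement(), instances.size()));
        }
    }

Scaling out or back is then just a matter of adding entries to, or removing them from, the list.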

The picture changes radically for the applications that enterprises typically use to run their businesses. These applications are stateful and transactional, typically with significant data contention that needs to be carefully managed. Consequently, there are many more constraints on processing, and hence on how transactional applications are designed and scaled. For example, whenever a trade is approved and successfully executed, a trader's limits and the bank's overall risk position change, and this must be taken into account when approving further trades. Maintaining these constraints is rarely a problem when there are only a handful of occasional users. When you need to scale out your application in order to handle increased volumes, or to service users in multiple geographies, the constraints required by transactional applications cause them to become increasingly rigid in order to achieve the required scale while maintaining transactional integrity.

A traditional means of scaling transactional applications is to statically partition resources, and to route subsets of the workload to each partition. A typical configuration might involve dedicating the first server machine to requests relating to items in the range A-E, the second to F-J, the third to K-N, and so on - with a message broker sitting in front of this group of server machines to route requests to the correct machine. A single application is scaled out by partitioning it across multiple server machines. Not only is this partitioning static, but you have to ensure you allocate enough processing resources to each partition for it to handle its maximum workload. By definition, this "just in case" provisioning of resources is wasteful most of the time. It also means you have to constantly monitor actual usage in order to identify when the current partitioning scheme is becoming inadequate. Generally, the only way to scale out further when you get to this point is to stop work while you repartition the application, so that more resources can be allocated to the application as a whole. The static partitioning model is wasteful in operation and expensive to maintain. It is especially wasteful in cloud environments, as the rigidity of the approach makes it impossible to realize the full elasticity and cost benefits of cloud computing. You could, of course, scale up partitions by adding virtual compute resources to a VM (or removing them), but this is only a partial solution at best.
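The rigidity is easy to see in code. The following Java sketch (server names hypothetical, and assuming alphabetic item keys) shows a static key-range router: the ranges are fixed at deployment time, and changing them means stopping work and repartitioning the data behind each server.

    import java.util.NavigableMap;
    import java.util.TreeMap;

    // Minimal sketch of static key-range partitioning.
    public class StaticPartitionRouter {
        // Maps the lowest key of each range to the server that owns it.
        private final NavigableMap<String, String> rangeToServer = new TreeMap<>();

        public StaticPartitionRouter() {
            // Fixed at deployment time; repartitioning requires downtime.
            rangeToServer.put("A", "server-1"); // A-E
            rangeToServer.put("F", "server-2"); // F-J
            rangeToServer.put("K", "server-3"); // K-N
            rangeToServer.put("O", "server-4"); // O-Z
        }

        // Routes a request to the server that statically owns its key range.
        public String route(String itemKey) {
            return rangeToServer.floorEntry(itemKey.toUpperCase()).getValue();
        }
    }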

An alternative means of scaling enterprise applications is often referred to as "instance-on-demand" or "replication everywhere." In this design pattern there is no partitioning of applications or application services. Instead, additional instances of the entire application or the entire application service are created whenever you need to scale out. Conceptually this approach re-creates the flexibility and location transparency of traditional web applications, because any instance of an application service can handle any request, and you can scale out by adding as many instances of the application service as needed. However, it is naïve in the extreme, as the constraints required by transactional applications invariably limit its effectiveness. In order to ensure that all instances have a consistent, up-to-date view of the world before processing any transaction, any change made by an application service needs to be simultaneously propagated to all other instances. There are many products that attempt to address this problem of distributed data contention, typically by introducing a caching tier, but this inevitably creates inefficiencies that limit scalability - which again limits the elasticity and cost benefits that cloud computing can bring.
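To see why the pattern hits a wall, consider the simplest possible version of it, sketched below in Java (interfaces hypothetical): every write must reach every instance synchronously, so the cost of each transaction grows with every instance you add.

    import java.util.List;

    interface Replica {
        void apply(String update);
    }

    // Sketch of the "replication everywhere" write path.
    class ReplicatedService {
        private final List<Replica> replicas;

        ReplicatedService(List<Replica> replicas) {
            this.replicas = replicas;
        }

        void write(String update) {
            // Synchronous fan-out: coordination cost grows with each instance,
            // which is exactly what caps the elasticity of this pattern.
            for (Replica r : replicas) {
                r.apply(update);
            }
        }
    }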

Removing these impediments requires a new approach to scaling and distributing transactional application services, one that revisits what is fundamentally required based on first principles.

1. Much finer-grained scalability and distribution are required
Decomposing an application into coarse-grained services so that services can be individually distributed across multiple machines is a well-established pattern for scaling a transactional application. Effectively the service is the unit-of-scalability. The finer-grained the decomposition, the greater the number of services and the greater the number of machines a transactional application can scale across before partitioning or replication are required.

Inevitably, the number of services into which an application can be decomposed is limited; therefore the scalability achievable in this way is equally limited. Within a service, however, there is a potentially unlimited number of segments. If the segment becomes the unit-of-scalability, then a service can itself scale out by distributing individual segments across multiple machines - thereby dramatically increasing scalability, potentially by orders of magnitude.

For example, a stock trading application consists of a number of services, one of them being the "order-book" service (where bids and asks for stocks are recorded). The order-book service contains any number of individual order-books, one for each stock being traded. The order-book is an example of a segment. Hence if a segment (rather than a service) is the unit-of-scalability, then the order-book service is dramatically more scalable - anywhere up to the point where a machine is allocated to each individual order-book!
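As an illustrative sketch (hypothetical names, not a real trading system), segment placement can be modeled as a simple registry from stock symbol to host machine. Routing becomes a per-segment lookup, so in the limit every order-book can live on its own machine.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Sketch: the segment (one order-book per symbol), not the whole
    // order-book service, is the unit of scalability.
    public class OrderBookPlacement {
        // symbol -> machine currently hosting that order-book segment.
        private final Map<String, String> placement = new ConcurrentHashMap<>();

        public void place(String symbol, String machine) {
            placement.put(symbol, machine);
        }

        public String hostFor(String symbol) {
            return placement.get(symbol);
        }
    }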

2. Each segment must be mobile
The potential of much finer-grained scalability can only be fully realized if segments are mobile, i.e., can be dynamically migrated to different resources. Without mobility, the way you initially deploy the segments is the way that they stay deployed, which effectively provides nothing more than an alternative (albeit enhanced) mechanism for statically partitioning a service.

The importance of mobility is well established - as evidenced by VMotion from VMware and Live Application Mobility from IBM. These forms of mobility apply at the infrastructure level, not the service or application level, and they are extremely coarse-grained (an entire virtual machine or LPAR), whereas what's now needed is mobility for very fine-grained segments of a service.

Segment-mobility enables the configuration of segments to change dynamically. When a service needs to scale out, individual segments can be moved to one or more additional server machine(s) in order to increase the processing power and throughput available to the service as a whole.
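Continuing the sketch above (again, hypothetical names), migrating a segment then amounts to changing one entry in the placement registry: only the migrated segment's routing changes, and nothing else is repartitioned or redeployed.

    // Sketch: scaling out by migrating individual segments.
    public class SegmentMover {
        private final OrderBookPlacement placement;

        public SegmentMover(OrderBookPlacement placement) {
            this.placement = placement;
        }

        public void migrate(String symbol, String targetMachine) {
            // A real implementation must also transfer the segment's live
            // state; one possible handover is sketched under point 3 below.
            placement.place(symbol, targetMachine);
        }
    }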

3. Mobility must not interrupt or degrade the service
A limitation of conventional mobility technologies is that services are temporarily paused when they are moved, and moving entire virtual machines or LPARs can take quite some time, even within the same data center.

One of the advantages of mobility at the segment level is that it offers the possibility of fine-grained resource-optimization and very precise scalability. Individual segments can be redistributed dynamically between available resources in order to continually achieve the best allocation of resources to the total workload.

Continual resource optimization is only feasible if frequent reconfiguration is possible. That requires the ability to move transactional segments around with zero interruption or degradation to service, which in turn means segments must be movable while they are still running - and, above all, without pausing them.
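One plausible way to achieve this - offered purely as an illustration, not as the author's protocol - is a copy-and-replay handover: the segment keeps serving on its source machine while its state is copied, requests that arrive mid-copy are replayed on the target, and routing only flips once the target has caught up.

    import java.util.Queue;

    // Sketch of a zero-pause handover (an assumed design, not a product API).
    public class LiveHandover {

        interface Segment {
            byte[] snapshotState();          // copy current state
            void restoreState(byte[] state); // load state on the target
            void apply(String request);      // process one request
        }

        public void migrate(Segment source, Segment target, Queue<String> inFlight) {
            // 1. Copy state while the source keeps processing requests.
            target.restoreState(source.snapshotState());

            // 2. Replay requests captured during the copy so the target
            //    catches up with the still-running source.
            while (!inFlight.isEmpty()) {
                target.apply(inFlight.poll());
            }

            // 3. Flip routing to the target (not shown). At no point was
            //    the segment paused or unavailable.
        }
    }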

4. Mobility must be near-instantaneous
Another advantage of mobility at the segment level is that moving segments is orders of magnitude faster than moving entire virtual machines or LPARs - moving even large numbers of segments typically takes just milliseconds. If you have the ability to move segments with zero interruption or degradation to service, then you have the potential to be near-instantly responsive to changes in workload and to precisely match resource usage to rapidly fluctuating workloads.

This capability is critical for realizing the full elasticity and pay-per-use benefits of cloud computing.

5. Ideally, full mobility should be available across the wide area network
There are several scenarios where mobility over the wide area network is tremendously valuable. For example:

  • Being able to dynamically move processing to the location(s) where the greatest demand is being generated
  • Relocating segments closer to users in order to reduce latency
  • "Follow-the-sun" processing
  • Using the most cost-effective computing resource, perhaps in data centers that are under-utilized, or clouds that are offering the best spot-prices

Again, segment-level mobility makes it very fast to relocate some or all of your applications across geographies and clouds, and when this is coupled with zero interruption or degradation to service, then resource optimization and precise scalability of transactional applications becomes possible at the global level, thereby tapping into additional benefits of cloud computing.

6. Automatic control and governance
With the capabilities described above, we are looking at the potential to fully exploit the benefits of cloud computing via highly mobile application segments that can dynamically and continuously reconfigure themselves, including across geographies, in response to changing workloads, resource availability, user demand, performance criteria and costs.

With so many factors in play, it is only possible to transform the potential benefits into real benefits by fully automating the management of these capabilities. This requires:

  • The continuous, real-time monitoring of workloads, capacities, resource availability, user demand, locations, costs, etc.
  • Matching this monitoring data against business policies in order to determine when, where and how to respond to changes in any of the above

Using a policy-based framework to automate the management of a dynamic application is not just essential for driving down the cost of management and driving up the elasticity benefits of cloud computing. It's also essential for ensuring that good governance is enforced. For example, in many industries there are regulations governing where data processing and storage must and must not take place. A policy framework that is geo-location aware can ensure compliance with such regulations even as data and processing dynamically move around a cloud.
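As a simple illustration (hypothetical names), a geo-location-aware policy can be as little as a whitelist of permitted regions that the automation layer consults before every migration:

    import java.util.Set;

    // Sketch of a data-residency policy checked before each segment move.
    public class DataResidencyPolicy {
        private final Set<String> permittedRegions;

        public DataResidencyPolicy(Set<String> permittedRegions) {
            this.permittedRegions = permittedRegions;
        }

        // A migration is only allowed if the target region complies.
        public boolean allowsMigrationTo(String targetRegion) {
            return permittedRegions.contains(targetRegion);
        }
    }

For instance, a policy constructed with only European regions would veto moving a segment to a US data center even when spot prices there were lower.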

Bringing It All Together: Intelligent Application Mobility
In order to liberate transactional applications from the constraints of traditional approaches to scaling and distribution, so that they can take full advantage of the elasticity and cost benefits of cloud computing, a new approach is needed - one that is based on the high-speed mobility of fine-grained application service components, where their deployment is automatically managed in real-time by user-defined policies in order to ensure continual optimization and compliance.

This is a combination of:

  • Dynamic, fine-grained, near-instant scalability
  • The ability to move processing - even across wide-area networks - with zero interruption or degradation to service
  • Automatic, real-time responsiveness to change driven by user-defined policies

We refer to these capabilities collectively as intelligent application mobility.

Intelligent application mobility is essential if we're to make the full elasticity and cost-model benefits of cloud computing available to transactional applications. In short, we need to get smart.

About the Author

Duncan Johnston-Watt, Founder & Chief Executive Officer of Cloudsoft, is a serial entrepreneur and industry visionary with over 20 years of experience in the software industry. Immediately prior to founding Cloudsoft, Duncan was CTO at Enigmatec Corporation, the enterprise data center automation company he founded in 2001.

A Computerworld Smithsonian Laureate for his pioneering work introducing Java Enterprise to financial services, Duncan holds an MSc in Computation from Oxford University.
