




Blog Post

Cloud Computing Standards – Not This Year

It’s clear that end users of cloud computing would like to see true interoperability

I started out writing a blog post about the state of cloud computing, intending to review how things have evolved in the cloud space over the last year (2010 was a good year for cloud computing), but I got sidetracked thinking about how clouds are converging, or in reality, not converging.

It’s clear that end users of cloud computing would like to see true interoperability.  Companies want the freedom to pick a cloud that meets their needs, without worrying that choices made today will cost them big in the future or lock them in.  Interoperability would mean that a company could choose a cloud for a given workload, and if conditions change, they could opt to bring the workload back in-house or move to another cloud environment – without requiring a major engineering project or a shift to a different computing paradigm.



However, there are several things working against this interoperability, making it unlikely to happen anytime soon based on emerging industry standards:

  1. There are many types of requirements from end users feeding into the cloud definition; customers are looking for architectures in the cloud that match their application configurations, performance requirements, geographic locations, and security concerns.  They want specific infrastructure capabilities (think SANs, network gear, and hypervisors) because these are existing enterprise standards, and look for specific flavors of architecture/topologies/OS that most closely match what they already have.

  2. This range of customer requirements creates opportunities for cloud providers to differentiate based on features and services that let them serve specific market segments better than their competitors – think security, performance, specialties (like government or medical), or even different hypervisors (for compatibility with in-house platforms), networking architectures, and pricing models.

  3. The competition among cloud providers in turn leads to intense “land grabs” by technology vendors in the cloud market. This includes the big guys like VMware, Microsoft, and Citrix as well as startups like Eucalyptus, Cloud.com, and Nimbula.  It also includes most of the networking players and many of the IT ops providers. Each of these vendors has a different view on how cloud infrastructure should be built and managed (using their solutions and core components), and these differences alter the design of the cloud as well as the attributes of the cloud that the end users can control.

In the end, although everyone is talking about standards and converging models for cloud computing, the customers, cloud providers, and technology vendors are in fact all working against standards – not because they don’t want or believe in standards, but because market forces make this inevitable. Customers demand variety and flexibility from the cloud to meet their specific needs, while technology and cloud providers rush to deliver what their customers want, to differentiate themselves and create “unfair advantage” in an infrastructure market that might otherwise commoditize them.

So how will all of this play out?  Today, we see some basic moves around standardizing APIs (such as Eucalyptus/Amazon, the vCloud efforts, etc.). These only scratch the surface of interoperability, without addressing the underlying complexity of cloud infrastructure. It is possible that in a few “cloud generations” the industry will mature enough for some of the grand unification computing models to come into existence.  These are very cool models where the workloads are self-descriptive, and the cloud will accept or reject a load based on its ability to satisfy the complete requirements encapsulated within the workload.  I love this vision, but it requires a lot of different groups (software vendors, cloud providers, hypervisor vendors, application developers, and infrastructure component vendors) to get together and optimize for the whole instead of for their particular product or business.  What this really means is: not in the near future.
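To make the "self-descriptive workload" idea concrete, here is a minimal sketch of how admission might work: the workload carries its own requirements, and a cloud accepts or rejects it by checking them against its advertised capabilities. All of the field names and values below are invented for illustration; they do not correspond to any real cloud API or standard.

```python
# Hypothetical self-descriptive workload: it declares everything it needs.
WORKLOAD = {
    "hypervisor": "kvm",
    "min_cpus": 8,
    "region": "us-east",
    "compliance": {"hipaa"},
}

# Hypothetical capability advertisement from a cloud provider.
CLOUD_CAPABILITIES = {
    "hypervisors": {"kvm", "xen"},
    "max_cpus": 64,
    "regions": {"us-east", "eu-west"},
    "compliance": {"hipaa", "fips"},
}

def accepts(cloud: dict, workload: dict) -> bool:
    """Accept the workload only if every declared requirement is satisfied."""
    return (
        workload["hypervisor"] in cloud["hypervisors"]
        and workload["min_cpus"] <= cloud["max_cpus"]
        and workload["region"] in cloud["regions"]
        and workload["compliance"] <= cloud["compliance"]  # set-subset check
    )

print(accepts(CLOUD_CAPABILITIES, WORKLOAD))  # prints True for this pair
```

The hard part, of course, is not the matching logic but getting every vendor to agree on a common vocabulary of requirements and capabilities, which is exactly the convergence the post argues is not imminent.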

Where does this leave those who want to use the cloud?  Fortunately, there are a number of "cloud enablement" players out there focused on orchestration and interoperability, whose goal is to make it easy for companies to take advantage of cloud computing without worrying about all the differences between clouds.  At CloudSwitch, we believe true interoperability lies well beyond simple API aggregation – what enterprises need is a solution that lets them create and migrate workloads in the cloud that are not only position-independent, but also hypervisor- and cloud-provider-agnostic.
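As a rough sketch of what a provider-agnostic layer implies (this is an illustration of the general pattern, not CloudSwitch's actual product or API): application code talks to one interface, per-provider adapters hide the differences, and migration becomes a launch-then-terminate handoff. All class and method names here are invented for the example.

```python
from abc import ABC, abstractmethod

class CloudProvider(ABC):
    """Minimal interface an orchestration layer might standardize on."""

    @abstractmethod
    def launch(self, image: str, cpus: int) -> str:
        """Start a workload and return an opaque handle."""

    @abstractmethod
    def terminate(self, handle: str) -> None:
        """Stop the workload identified by the handle."""

class ProviderA(CloudProvider):
    def launch(self, image: str, cpus: int) -> str:
        return f"a:{image}:{cpus}"   # a real adapter would call provider A's API
    def terminate(self, handle: str) -> None:
        pass                         # ...and tear the instance down here

class ProviderB(CloudProvider):
    def launch(self, image: str, cpus: int) -> str:
        return f"b:{image}:{cpus}"   # a real adapter would call provider B's API
    def terminate(self, handle: str) -> None:
        pass

def migrate(image: str, cpus: int,
            src: CloudProvider, dst: CloudProvider, handle: str) -> str:
    """Move a workload between clouds without touching application code."""
    new_handle = dst.launch(image, cpus)
    src.terminate(handle)
    return new_handle
```

The point of the pattern is that `migrate` never mentions a specific cloud; everything provider-specific lives in the adapters, which is the kind of decoupling the post argues simple API aggregation does not deliver.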

Read the original blog entry...

More Stories By John Considine

John Considine is Co-Founder & CTO of CloudSwitch. He brings two decades of technology vision and proven experience in complex enterprise system development, integration and product delivery to CloudSwitch. Before founding CloudSwitch, he was Director of the Platform Products Group at Sun Microsystems, where he was responsible for the 69xx virtualized block storage system, 53xx NAS products, the 5800 Object Archive system, as well as the next-generation NAS portfolio.

Considine came to Sun through the acquisition of Pirus Networks, where he was part of the early engineering team responsible for the development and release of the Pirus NAS product, including advanced development of parallel NAS functions and the Segmented File System. He has started and bootstrapped a number of start-ups with breakthrough technology in high-performance distributed systems and image processing. He has been granted patents for RAID and distributed file system technology. He began his career as an engineer at Raytheon Missile Systems, and holds a BS in Electrical Engineering from Rensselaer Polytechnic Institute.