Are All Public Clouds Created Equally? – Forbes Blog by John Webster

By John Webster, Wednesday, November 18th, 2015

Categories: Analyst Blogs

Tags: cloud, Forbes, John Webster

Cloud computing is often spoken of in “as a service” terms like IT as a Service, Infrastructure as a Service, Software as a Service, and Platform as a Service. And each has its well-known XaaS acronym to go along with it—ITaaS, IaaS, SaaS, PaaS. Public clouds that fit into these descriptors tend to get lumped into an amorphous cloud blob where all appear to offer the same value propositions like IT agility, “infinite” scale, and lower cost per unit of compute power.

Unfortunately, the acronyms beget a tendency to bypass the qualities that differentiate public cloud service providers, and that matters because differentiation is becoming a big deal. In fact, as a public cloud user, you might want to consider these differences and their potential impact on the applications you want to run in them. You may find that some are better for certain applications than others—an important thing to know if your IT organization’s stated direction is onward to the hybrid cloud.

How would you differentiate one public cloud from another on a per-application basis? Cloud benchmark testing offers a good place to start your research. Suppose, for example, you wanted to accelerate Internet of Things (IoT) application development by starting in the cloud, and you wanted to know how the different service providers would handle such an app. We now have an example of one such benchmark run that can give you some answers.

Tim Callaghan of CrunchTime Information Systems recently ran the Yahoo! Cloud Serving Benchmark (YCSB) using VoltDB as the in-memory database platform on four popular cloud service providers (CSPs)—Amazon AWS, Google Cloud Platform, IBM SoftLayer, and Microsoft Azure. I’m using IoT as an application example because in-memory is a good computing environment for IoT, where users are often looking for real-time performance.
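For readers curious about what such a run actually involves, a YCSB test boils down to loading a dataset and then driving a timed read/update mix through YCSB’s command-line launcher. The sketch below is a minimal illustration, not Callaghan’s actual harness; the “voltdb” binding name, the server addresses, and all of the workload parameters are assumptions you would substitute for your own environment.

```python
# Minimal sketch of driving a YCSB load/run cycle from Python.
# Assumptions: YCSB is installed with a VoltDB client binding named
# "voltdb" and the target cluster hosts are reachable. All values
# below are illustrative placeholders, not the benchmark's settings.
import subprocess

YCSB = "./bin/ycsb"               # path to the YCSB launcher script
BINDING = "voltdb"                # assumed name of the VoltDB client binding
WORKLOAD = "workloads/workloada"  # the 50/50 read/update mix shipped with YCSB

common = [
    "-P", WORKLOAD,
    "-p", "recordcount=1000000",     # rows to load (placeholder)
    "-p", "operationcount=5000000",  # operations to execute (placeholder)
    "-p", "voltdb.servers=10.0.0.1,10.0.0.2",  # assumed connection property
]

# Phase 1: populate the database.
subprocess.run([YCSB, "load", BINDING] + common, check=True)

# Phase 2: execute the measured workload with 64 client threads.
subprocess.run([YCSB, "run", BINDING, "-threads", "64"] + common, check=True)
```

The run phase is where the throughput and latency numbers discussed below come from; the load phase simply seeds the target database.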

One of the really interesting things about this benchmark, and what makes it comparable to other performance benchmarking standards like TPC, is that the results are expressed in a number of very usable metrics. These include network performance (important as a latency measurement), YCSB operations per second, and a “price-adjusted” operations-per-second metric that indicates which of the four demonstrated the best price for performance. Another interesting aspect of this benchmark was the selection of IBM SoftLayer’s Bare Metal option vs. the other three running virtual environments.
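To make the “price-adjusted” idea concrete: the metric is essentially measured throughput divided by the hourly cost of the instances that produced it. Here is a minimal sketch of that arithmetic; the provider names and figures are illustrative placeholders, not the report’s results.

```python
# Sketch of the price-adjusted throughput calculation.
# The figures are made-up placeholders, NOT the benchmark's results;
# substitute each provider's measured ops/sec and hourly cluster cost.
results = {
    # provider: (YCSB ops/sec, total cluster cost in $/hour)
    "Provider A": (140_000, 4.80),
    "Provider B": (120_000, 3.90),
    "Provider C": (110_000, 5.20),
    "Provider D": (100_000, 4.10),
}

for provider, (ops_per_sec, dollars_per_hour) in results.items():
    # Price-adjusted metric: operations per second per dollar per hour.
    price_adjusted = ops_per_sec / dollars_per_hour
    print(f"{provider}: {ops_per_sec:,} ops/s at ${dollars_per_hour}/hr "
          f"-> {price_adjusted:,.0f} ops/s per $/hr")
```

Under that framing, a cheaper environment can out-rank a nominally faster one, which is exactly the dynamic the results below show for Azure.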

You can review the testing rationale, environments, and results here. As the final analysis says, IBM SoftLayer’s Bare Metal option delivered the best price/performance ratio, followed by Google Cloud Platform. Azure was a solid performer as well, but suffered when price became part of the metric. However, price for performance may be only one point to consider in the selection of a CSP, so this report will be updated shortly with a comparative review of the provisioning, security, and networking issues that were experienced during the benchmark runs.

Read the blog on Forbes.com here
