Not so fast, there, buddy.
CloudSleuth has released a new tool that allows users and customers to see how the different cloud providers stack up. As of 6:45 PM EST on August 1, Microsoft’s Azure services come out ahead by about 0.08 seconds.
0.08 seconds per what?
It turns out to be the time to request and receive the same files from each service provider. Is this a meaningful metric for gauging the speed of cloud service providers? Arguably, yes. There was considerable variance across the different cloud providers, but that’s also to be expected, since internet usage patterns differ across different parts of the world.
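You can run a rough version of this benchmark yourself. The sketch below times repeated fetches of the same file and reports the median; the provider URLs are placeholders, not real endpoints, so substitute the static assets you actually want to compare.

```python
import time
from statistics import median
from urllib.request import urlopen

def measure_latency(fetch, attempts=5):
    """Call `fetch` several times and return the median wall-clock
    duration in seconds. Using the median damps one-off network blips."""
    samples = []
    for _ in range(attempts):
        start = time.perf_counter()
        fetch()
        samples.append(time.perf_counter() - start)
    return median(samples)

# Hypothetical object URLs -- swap in the same file hosted on each
# provider you want to compare.
PROVIDERS = {
    "provider-a": "https://provider-a.example.com/static/test.png",
    "provider-b": "https://provider-b.example.com/static/test.png",
}

if __name__ == "__main__":
    for name, url in PROVIDERS.items():
        latency = measure_latency(lambda: urlopen(url).read())
        print(f"{name}: {latency:.3f}s")
```

Note that a number like CloudSleuth’s depends heavily on where the measurement runs from; a single laptop in one city is a much narrower view than their distributed probes.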
More importantly, using speed as a metric highlights a bigger issue: as we develop more and more applications in the cloud, we will need new benchmarks to determine the effectiveness of an application. We can no longer look only at raw query performance and raw application response time; we need to consider things like global availability and global response times. In the past, we could get away with storing data on a remote storage server and caching commonly used files in memory on our web servers or in a separate reverse proxy server. Moving to the cloud makes some of this advanced performance tuning more difficult to accomplish. Arguably, our cloud service providers should be handling these things for us, but that may not be the case.
With the proliferation of cheaper and cheaper cloud-based hosting (did you know you can host basic apps for free with Heroku?), the problems that used to plague only large customers can now bother smaller ones. As soon as you begin distributing data and applications across data centers, you can run into all kinds of problems. If you are sharding data based on geographic location, you may have problems when a data center goes down. Worse than that, because you’re in the cloud, you may not even notice.
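To make the geographic-sharding failure concrete, here is a minimal sketch of region-based routing with a fallback. The region names, hostnames, and health check are all illustrative assumptions, not any particular provider’s API:

```python
# Map each region to a primary data center and a fallback in another
# region. All hostnames here are hypothetical placeholders.
SHARDS = {
    "us-east": {"primary": "db-us-east.example.com",
                "fallback": "db-eu-west.example.com"},
    "eu-west": {"primary": "db-eu-west.example.com",
                "fallback": "db-us-east.example.com"},
}

def pick_host(region, is_healthy):
    """Return the primary host for a region, or its cross-region
    fallback when the primary fails the supplied health check."""
    shard = SHARDS[region]
    if is_healthy(shard["primary"]):
        return shard["primary"]
    # The primary data center is down. Requests quietly fail over to a
    # distant region -- and latency climbs -- which is exactly the kind
    # of degradation you may never notice without explicit monitoring.
    return shard["fallback"]
```

The failover itself is easy; the hard part is the monitoring that tells you it happened, since users in the affected region just see everything get slower.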
This isn’t meant to be fear mongering; I’ve come to the conclusion that the cloud is a great thing, and it opens up a lot of exciting options that weren’t available before. But moving applications into the cloud carries some risks and requires new ways of looking at application performance. Latency is no longer as simple as measuring the time to render a page or the time to return a database query. We need to be aware of the differences between cloud providers’ offerings, and why performance on one provider may not directly translate to performance on another.