The Cloud's Hidden Lock-in: Network Latency
by Tom Hughes-Croucher, Principal Consultant / Owner at Jetpacks for Dinosaurs on Nov 20, 2009
Every war is different but everyone prepares for the last war. If there was one lesson to learn from the Browser & OS Wars, it’s that open APIs and data formats are not negotiable. For every API there are several wrappers and compatibility layers. For every closed format there are reverse engineers waging constant guerrilla action to force it open.
Cloud computing, and the Platform War it will bring, is different because there are fundamental problems that you can’t code your way out of. Network latency is one of them. The poor quality of inter-cloud data exchange creates an inherent bias towards using a complete solution stack from a single vendor. This lock-in is especially devilish because no one can be blamed for actively creating it, and every vendor gains by ignoring it.
Network latency is (roughly) the time it takes to send a packet of data from point A to point B, and it directly impacts the utility and cost of any distributed system. Cloud vendors put a lot of effort into reducing latency within and between their datacenters. But between vendors, data travels over the open internet, where bandwidth and latency degrade considerably. So the customer is charged twice (egress fees at one vendor, ingress at the other) for degraded service, while intra-cloud data exchange is essentially free.
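You can see the gap for yourself with a few lines of code. The sketch below times a TCP connection handshake as a rough proxy for round-trip latency; the hostnames are placeholders standing in for an "intra-cloud" hop and a cross-vendor hop, not real services.

```python
import socket
import time

def tcp_rtt(host, port=80, timeout=5.0):
    """Time a TCP connection handshake in milliseconds (a rough latency proxy)."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000

# Placeholder hostnames: substitute a same-vendor service and a
# cross-vendor service to compare the two hops.
for host in ("db.same-vendor.internal", "db.other-vendor.example.com"):
    try:
        print(f"{host}: {tcp_rtt(host):.1f} ms")
    except OSError as err:
        print(f"{host}: unreachable ({err})")
```

Run against real endpoints, the same measurement typically shows single-digit milliseconds inside one vendor's datacenter and tens of milliseconds (or worse) across the open internet.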
Thus through neglect, vendors can create lock-in. If you stay within the confines of a single vendor everything is cheap and fast. If you stray outside of that vendor’s cloud everything becomes expensive and slow. It is infeasible to use (say) one vendor’s virtual hosts with another vendor’s database service—not because they are “incompatible” but because, in network terms, they are too far away. Latency reduces customers’ negotiating leverage: switching vendors becomes an all-or-nothing proposition.
We propose that the cloud vendors work out peering agreements to establish fast and cheap communications between their datacenters. We envision these working similarly to network peering agreements which reduce the friction of sending data anywhere on the internet. There are already real-world examples of this, such as the special pipe between Joyent and Facebook for hosted Facebook Apps.
We propose that CTOs who are being wooed by cloud vendors demand interoperability not just of APIs but also in the transfer of services. Right now, before they hand over their data, CTOs hold the most leverage they will ever have. They shouldn’t budge until the latency trap is disarmed.
We will also talk about how smaller companies can minimize risk and preserve their leverage:
* Servers are cheap: have deployable copies of software ready to switch to alternate vendors. Host your development site on a separate cloud. Carlos has tips and experience on this from his work at Archivd, Spock, Terespondo (Yahoo Search Marketing), and irs.gov.
* Space is cheap: Keep continuous backups of data in three places: Cloud A, Cloud B and your office.
* Talk is cheap: demand real progress on the issue every time you talk with your vendors.
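The "space is cheap" tip above can be sketched in a few lines. This is a minimal illustration, not a production backup tool: the destination paths are hypothetical stand-ins for Cloud A, Cloud B, and an office NAS, and the key design point is that a failure at one destination must not stop replication to the others.

```python
import datetime
import pathlib
import shutil

# Hypothetical mount points standing in for "Cloud A, Cloud B and your office".
DESTINATIONS = [
    pathlib.Path("/mnt/cloud-a-backups"),
    pathlib.Path("/mnt/cloud-b-backups"),
    pathlib.Path("/mnt/office-nas-backups"),
]

def replicate(snapshot, destinations=DESTINATIONS):
    """Copy one backup snapshot to every destination independently.

    Returns a per-destination status dict; one failed copy never
    prevents the remaining copies.
    """
    stamp = datetime.datetime.now(datetime.timezone.utc).strftime("%Y%m%dT%H%M%S")
    results = {}
    for dest in destinations:
        target = dest / f"{snapshot.name}-{stamp}"
        try:
            shutil.copytree(snapshot, target)
            results[dest] = "ok"
        except OSError as err:
            results[dest] = f"failed: {err}"
    return results

# Usage (with real mount points in place):
#   replicate(pathlib.Path("/var/backups/db-snapshot"))
```

In practice you would layer real tooling (rsync, vendor storage APIs) on top, but the structure stays the same: one snapshot, several independent destinations, and a report of which copies succeeded.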
Cloud peering will also have implications for “traditional” web services. Few companies base their operations on third-party web services precisely because they are slow compared to in-house systems. On the other hand, few companies are big enough to negotiate peering directly with the web services vendors. So here’s how having lots of businesses in a Cloud can help. The cloud vendors (e.g. Amazon) can negotiate with the web services vendors (e.g. Yahoo) on a fast Amazon-Yahoo pipe, and everyone wins.
Cloud vendors are well-placed to do for web services what Content Delivery Networks (CDNs) do for images and video: bring them closer to their consumers while also reducing costs for the original publisher. There will be less incentive for, say, a website to maintain its own stale currency-conversion tables when up-to-the-second rates are available with low latency.
The benefits don’t stop there, however.