Last week’s S3 outage gave the cloud world a timely reminder that no matter how big your infrastructure is, you are going to have downtime at some point. With the continued migration to cloud solutions, not just for B2C and B2B but increasingly for intranet apps, the question for many organisations is: do the cost savings justify the risk, and is that risk acceptable?
Where do we head next? Metacloud? Distribution across clouds? Do we look to home-hosted solutions? Or all of these?
One thing to bear in mind is that cloud-scale solutions only work because they are built on ubiquitous technology. The mechanisms that make them work can be leveraged by a business of any size, given the know-how. This is as true of distributed computing as it is of file systems, though with storage you also need to think about replication, if you can afford it.
So when we talk about the cloud, we need not fear migration or lock-in. We should use it as we use our own servers, our desktops, or even our Raspberry Pi. By combining solutions we gain resiliency and availability. Nothing is bulletproof, but by anticipating failure modes during planning we can reduce downtime. Nor should we discount local solutions, both for backups and for providing redundancy and uptime. The cloud, or any geographically distinct storage solution, no matter how big, is only part of the solution, and like any part, if you rely on it completely then at some point you will have no other option.
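The combination-of-solutions idea can be sketched in a few lines. This is a minimal illustration, not a production tool: it fans a backup file out to several independent storage targets (the mount-point paths are hypothetical stand-ins for, say, a local disk and two different providers), verifies each copy by checksum, and skips an unreachable destination rather than letting one outage block the rest.

```python
import hashlib
import shutil
from pathlib import Path

# Hypothetical mount points: a local disk plus two independently
# hosted remote volumes (e.g. different providers). Illustrative
# names only, not a real deployment.
DESTINATIONS = [
    Path("/backups/local"),
    Path("/mnt/provider_a"),
    Path("/mnt/provider_b"),
]


def sha256_of(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in 1 MiB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def replicate(source: Path, destinations=DESTINATIONS) -> list[Path]:
    """Copy `source` to every destination and verify each copy.

    A failed destination is logged and skipped, so one unavailable
    provider does not stop the backup reaching the others.
    """
    expected = sha256_of(source)
    written = []
    for dest_dir in destinations:
        try:
            dest_dir.mkdir(parents=True, exist_ok=True)
            dest = dest_dir / source.name
            shutil.copy2(source, dest)
            if sha256_of(dest) != expected:
                raise IOError(f"checksum mismatch at {dest}")
            written.append(dest)
        except OSError as err:
            print(f"skipping {dest_dir}: {err}")
    return written
```

The point is the shape, not the code: any one target, cloud or local, can vanish without taking the data with it, which is exactly the failure mode the S3 outage exposed.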