The other key is to avoid cloud-provider-specific features like RDS. This is killing some AWS customers right now: RDS seems to be having an especially hard time recovering, and there is no RDS equivalent in other cloud providers' feature inventories to fail over to.
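One way to stay portable is to keep the application on the standard database wire protocol and push provider differences into configuration. The sketch below is hypothetical (the hostnames and names are invented): because RDS speaks plain PostgreSQL on the wire, an app that only ever builds a standard DSN can point at another provider's Postgres by swapping configuration rather than rewriting code.

```python
# Keep provider-specific details in config; application code sees only a
# standard PostgreSQL DSN, so failing over to another provider's database
# is a configuration change, not a rewrite. Hostnames below are invented.

DB_ENDPOINTS = {
    # Hosted RDS instance (primary) -- still plain PostgreSQL on the wire.
    "aws": {"host": "mydb.abc123.us-east-1.rds.amazonaws.com", "port": 5432},
    # Self-managed replica at another provider (failover target).
    "other": {"host": "pg-replica.example-cloud.net", "port": 5432},
}

def build_dsn(provider: str, user: str, dbname: str) -> str:
    """Return a standard libpq-style DSN for the chosen provider."""
    ep = DB_ENDPOINTS[provider]
    return f"host={ep['host']} port={ep['port']} user={user} dbname={dbname}"

primary = build_dsn("aws", "app", "orders")
fallback = build_dsn("other", "app", "orders")
```

The point is not the helper itself but the boundary: nothing above the DSN knows which vendor is underneath.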
Following the major outage that AWS suffered in its US East Coast facility this week, now that the dust has settled, what lessons can we actually learn from the events of the last week? Here are the five key lessons we've highlighted:
Lesson 1: Both Cloud and Dedicated Computing Have Single Points of Failure
Lesson 2: Size is No Protection from Outages without Redundancy
Lesson 3: All Data Centres Are Not Equal
Lesson 4: The Price-Performance-Reliability Metric
Lesson 5: Achieving a Highly Robust Set-up is Cheaper and Easier Than You Might Think
Customers need openness from vendors about their infrastructure choices and locations in order to make meaningful comparisons between clouds. This is a key development if customers are to make the right decisions and create set-ups in line with their computing needs in the cloud.
Customers already have a wide choice of locations within Amazon EC2. If you looked at the service health dashboard during the outage, only one location of many was affected. So if you have to engineer your apps for disaster recovery, the first quick and easy way is to implement that DR functionality within one cloud (be it AWS or another provider, as long as it has multiple locations), where you have uniform interfaces and formats, so re-engineering efforts do not sacrifice the reliability and interoperability of the final solution.
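A minimal sketch of that single-cloud DR idea, under stated assumptions: the region names are AWS-style but the endpoints and health data are invented, and the health check is stubbed out. Because every location within one provider exposes the same interface, failing over is just picking the next healthy location from a preference list.

```python
# Failover within a single cloud: every location speaks the same API, so
# switching regions needs no re-engineering. Region names are AWS-style,
# but the endpoints and health data here are invented for illustration.

REGION_ENDPOINTS = [
    ("us-east-1", "https://api.us-east-1.example.com"),  # preferred
    ("us-west-1", "https://api.us-west-1.example.com"),
    ("eu-west-1", "https://api.eu-west-1.example.com"),
]

def pick_endpoint(healthy_regions: set) -> str:
    """Return the endpoint of the first healthy region, in preference order."""
    for region, endpoint in REGION_ENDPOINTS:
        if region in healthy_regions:
            return endpoint
    raise RuntimeError("no healthy region available")

# During the outage only one location was down; traffic moves to the next.
survivor = pick_endpoint({"us-west-1", "eu-west-1"})
```

In a real deployment the `healthy_regions` set would come from an actual health check, but the failover logic itself stays this simple precisely because the interfaces are uniform across locations.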