Wednesday, July 25, 2012

Why is it a bad idea for an SP to compete with Amazon? (*)


(*) SP here means a connectivity service provider that offers managed services to its customers.

Following Amazon's success with EC2, everyone wants to jump on the cloud bandwagon and repeat that success. SPs are no different. They suffer from declining margins and are looking for new opportunities to secure revenue. Unfortunately, an "EC2 type" of cloud (called a "commodity cloud" from here on) is a bad idea for them to pursue. This does not mean there is no opportunity in the cloud for SPs. Of course there is, but in a somewhat different type of cloud.
Why is it a bad idea for SPs to compete in the "commodity cloud" space?
First we need to define who the SP's regular customer is. In most cases it's an enterprise or SMB customer that already subscribes to a number of services for its business. It is used to quality service support and willing to pay a premium for a good SLA, because that SLA guarantees business continuity. If we look at Amazon's regular customer, in most cases it's a software developer, either at a startup or at a well-established software house. It's not hard to imagine that these two types of customers have very different expectations and demands. Developers are hardly willing to pay a premium; they will figure out hundreds of ways to customize their applications in order to avoid it. In short, the commodity cloud consumer is looking for a relatively simple and inexpensive service.
The second thing is commodity cloud compatibility with enterprise-class customer demands. In most cases enterprise customers run "legacy applications", which are very often only vertically scalable. These kinds of workloads are hardly portable to a commodity cloud, because they simply don't fit its architecture.
Going further, we need to look at scale. Commodity clouds leverage economies of scale. Amazon lowers its prices a few times a year, and it can really afford to. It has reached critical mass and its run-rate business is approaching $1B. Quite a lot, but let's see how many servers are needed to run this cloud. Amazon does not share this data, but according to some analysts it manages over 500k physical servers and over 1.5M IP addresses. No wonder it benefits from economies of scale.
Last but not least: innovation rate. Amazon introduces a couple of new features per month. This is something of an oddity for SPs providing Managed Services, whose processes are usually built to introduce a new feature once every few months.
As we can see, there are a number of incompatibilities between the commodity cloud and SPs. So where's "the beef" for SPs?
I see an opportunity in selling VDCs (Virtualized Data Centers), where enterprise customers can leverage the new consumption and operational model that the cloud provides (the cloud is an operational model; technology is only the enabler). Enterprise customers can request, by themselves, a chunk of infrastructure - I'll call it a container - which is compatible with a container in their own datacenter. Such a container should include the elements currently used in an enterprise datacenter: multiple L2 segments (to host multi-tier applications), firewalls and load balancers. Some of them can be virtualized, but some must remain physical (for security and audit reasons). We shouldn't forget that the container must stand on fully redundant infrastructure (something missing in the commodity cloud). As we can see, there's a whole lot of infrastructure automation involved - cloud is all about automation.
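To make the idea a bit more concrete, here is a minimal sketch of what such a self-service container request could look like if the SP exposed it through a simple provisioning API. The request structure and the provision_container() call are purely hypothetical illustrations, not any real SP's portal or API; the VLANs, subnets and sizes are made up.

```python
# Hypothetical, simplified description of a VDC "container" request.
# The structure and provision_container() are illustrative only;
# a real SP portal/API and orchestration layer would differ.

container_request = {
    "name": "erp-prod",
    "l2_segments": [                     # multiple L2 segments for a multi-tier app
        {"name": "web-tier", "vlan": 110, "subnet": "10.10.1.0/24"},
        {"name": "app-tier", "vlan": 120, "subnet": "10.10.2.0/24"},
        {"name": "db-tier",  "vlan": 130, "subnet": "10.10.3.0/24"},
    ],
    "firewall": {"type": "physical", "reason": "security & audit"},
    "load_balancer": {"type": "virtual", "vip_subnet": "10.10.0.0/28"},
    "compute": {"vcpus": 64, "ram_gb": 256, "storage_tb": 10},
    "redundancy": "fully-redundant",     # dual power, dual fabric, dual uplinks
}

def provision_container(request):
    """Hypothetical automation entry point: walk the request and drive
    the orchestration workflows that carve out the container."""
    for segment in request["l2_segments"]:
        print("creating L2 segment %(name)s (VLAN %(vlan)s, %(subnet)s)" % segment)
    print("attaching %s firewall and %s load balancer"
          % (request["firewall"]["type"], request["load_balancer"]["type"]))
    print("reserving compute: %(vcpus)s vCPUs / %(ram_gb)s GB RAM" % request["compute"])

provision_container(container_request)
```

The point of the sketch is the consumption model: the customer orders the whole container in one self-service call, and automation does the rest.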
Why is this needed? Because cloud is a journey. Sooner or later everyone will migrate to the commodity cloud, but before that happens enterprises must rewrite their applications to be "cloud compatible". This will not happen overnight. Therefore I see an opportunity for SPs to help enterprises transition into that space.
Will SPs make millions on it? I really doubt it. But we need to take into account that customer loyalty and stickiness are very important for SPs - the flip side of what is often referred to as "churn". The more good services you provide to a customer, the less likely the customer is to move to another SP. This is something we shouldn't underestimate.

Tuesday, July 24, 2012

Inevitable cloud outages

A few weeks ago the market was boiling hot with news and analysis of the Amazon EC2 outage. Microsoft Azure was no different; they faced an outage as well. I'd bet that smaller cloud providers face this even more often, but due to its local nature we do not see much of it in the media. There were a number of blogs and analyses trying to figure out whether the cloud could be made more robust or redundant. Some blogs pitched hybrid clouds as a remedy. Some asked for governance improvements (and yes, most of the outages were caused by the human factor).
Let's face the truth: cloud infrastructure is designed to fail!
If we look at the architecture designs, the most important aspects are scalability and cost, which, I'm afraid, do not go hand in hand with redundancy. This does not mean that applications running in the cloud will be impacted. It's really up to the application developer to take the cloud architecture into account and design the application so that it can cope with a cloud outage. If we take Amazon EC2, for instance, they offer a number of mechanisms which, used properly, should yield robust applications: multiple regions, availability zones, load balancers, etc. There's a very nice whitepaper describing how to build fault-tolerant applications on AWS. As we can see, responsibility has shifted from the infrastructure provider to the application developer - and this is a major change that comes with the cloud.
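As an illustration of that shift, here is a minimal sketch of spreading EC2 instances across the availability zones of a region so that the loss of a single zone does not take the whole fleet down. It uses the boto3 SDK purely as an example; the AMI ID and instance type are placeholders, and error handling is omitted.

```python
# Minimal sketch: spread identical EC2 instances across availability zones
# so that losing one zone does not take down the whole fleet.
# AMI_ID and the instance type are placeholders; error handling is omitted.
import boto3

AMI_ID = "ami-12345678"   # placeholder
REGION = "us-east-1"

ec2 = boto3.client("ec2", region_name=REGION)

# Discover the zones available in this region instead of hard-coding them.
zones = [z["ZoneName"]
         for z in ec2.describe_availability_zones()["AvailabilityZones"]
         if z["State"] == "available"]

# Launch one instance per zone; a real deployment would also register them
# behind a load balancer and use Auto Scaling to replace failed instances.
for zone in zones:
    ec2.run_instances(
        ImageId=AMI_ID,
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
        Placement={"AvailabilityZone": zone},
    )
```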
If we take a legacy datacenter application and try to move it into the cloud, we will get a very unpleasant surprise. Legacy applications are mostly designed with the assumption that they run on redundant infrastructure. Of course some of them are clustered in order to withstand a single server failure, but in most cases legacy applications are vertically scalable monoliths. Cloud applications are different beasts. They're horizontally scalable entities, and that makes the difference. When we add distributed load balancers that spread traffic across different regions, we should be safe even when a single availability zone or a whole region goes down. Of course it comes at a price, but that's a different story ;)
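A trivial, hypothetical illustration of that multi-region mindset at the application level: a client that fails over between regional endpoints instead of assuming a single always-up server. The endpoint URLs are made up; a real setup would typically rely on DNS-based or anycast load balancing rather than a hard-coded list.

```python
# Hypothetical sketch of application-level failover across regional endpoints.
# The URLs are made up; production systems would usually front this with
# DNS-based or anycast load balancing instead of a hard-coded list.
import urllib.request
import urllib.error

REGIONAL_ENDPOINTS = [
    "https://api.us-east.example.com/health",
    "https://api.eu-west.example.com/health",
    "https://api.ap-south.example.com/health",
]

def fetch_with_failover(endpoints, timeout=2):
    """Try each regional endpoint in turn; return the first successful response."""
    last_error = None
    for url in endpoints:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except (urllib.error.URLError, OSError) as err:
            last_error = err   # this region is down or unreachable, try the next one
    raise RuntimeError("all regions failed") from last_error

# Usage: the application keeps working as long as at least one region is up.
# data = fetch_with_failover(REGIONAL_ENDPOINTS)
```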
With this distinction in mind, I often characterize the cloud in two flavors: commodity cloud and enterprise-class cloud. The first is targeted at developers, who need to design applications with the given cloud architecture in mind. Commodity cloud infrastructure is built in a very specific way, and it's designed to fail. Enterprise-class cloud is a bit different. It uses the same consumption and operational model ("as a service, on demand, self-service"), but it's designed to host legacy applications. Its infrastructure is redundant - perhaps less scalable, but certainly more robust.
If you're interested in how commodity clouds are built, there are nice resources on www.referencearchitecture.org - especially the networking part, as in fact it's the network that makes the difference. There's also a very good lecture from one of the OpenStack Summits: Discover Diablo Networking Mode.

Summarizing: commodity clouds are designed to fail, but that does not mean it's something bad. We simply have a responsibility shift - it's now up to the developer to cope with it. It's like the power supply to our homes. Who has redundant cables from separate power companies coming into their house? Likely no one. It's up to us to secure continuity for the servers we run at home, hence we buy UPSes. Cloud is a utility. Let's face it :)