In the last blog entry I talked about the business, financial and contractual side of enterprise cloud. Let's change gears and discuss the technology.
Basically, there aren't a lot of differences, and the premise is the same - use virtualization to provide a different ROI (not necessarily lower) for the computing needs of the enterprise. If we take a look at enterprise workloads, we get a little more insight into the characteristics that become more important for an enterprise vs. a Web 2.0/developer customer.
Many enterprise cloud deployments are "project"-based - that is, the enterprise needs to add a new capability - CRM, for example - needs it up quickly, and wants to avoid capital expense. Extreme elasticity rarely features in enterprise workloads. Sure, having some extra horsepower once a month to close the books is great, but it's not the hour-by-hour swing of multiples of the environment we might see from a social media site. There is also a need for short-term, project-based solutions - e.g. a development environment for a few months to build out a new version of an app, or a trial of a new software package.
When enterprises do take the plunge, they need more than just a flat pool of virtual machines. They look for a higher-level construct - a group, or an application. Grouping allows an architecture to be defined, blueprints created, and run books produced. They also look for "topology": virtual appliances - such as load balancers - need to connect to firewalls and web servers via VLANs and virtual switches. They want the same degree of isolation they have with today's dedicated infrastructure. The good news is we are seeing a high degree of innovation in this space, with the likes of Citrix, Vyatta, Nicira, and Altor really pushing virtualization into the network.
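To make "grouping" and "topology" a little more concrete, here's a rough sketch of what an application blueprint might capture. The class and field names are purely illustrative - not any particular vendor's API - but the idea is the same: tiers of VMs on isolated VLANs, stitched together by virtual appliances.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Tier:
    """A group of identical virtual machines (e.g. web, app, db)."""
    name: str
    vm_count: int
    vlan: str                      # isolated network segment for the tier

@dataclass
class Appliance:
    """A virtual network appliance such as a load balancer or firewall."""
    name: str
    kind: str                      # "loadbalancer", "firewall", ...
    connects: List[str]            # VLANs / segments this appliance bridges

@dataclass
class Blueprint:
    """The higher-level construct: a whole application, not a flat VM pool."""
    application: str
    tiers: List[Tier] = field(default_factory=list)
    appliances: List[Appliance] = field(default_factory=list)

# A three-tier app with the same isolation a dedicated environment would give
crm = Blueprint(
    application="crm",
    tiers=[
        Tier("web", vm_count=4, vlan="vlan-web"),
        Tier("app", vm_count=2, vlan="vlan-app"),
        Tier("db",  vm_count=2, vlan="vlan-db"),
    ],
    appliances=[
        Appliance("edge-fw", "firewall",     connects=["internet", "vlan-web"]),
        Appliance("lb-1",    "loadbalancer", connects=["vlan-web", "vlan-app"]),
    ],
)
```

From a blueprint like this you can stamp out the whole environment repeatedly - which is exactly what the run books and project-based deployments above call for.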
Another big difference is the applications themselves. A couple of days ago I had a really interesting conversation with a prospective customer. They have a very cool app that automates document processing using OCR, bar code reading and assisted processing. When they onboard new customers, there is a large ramp to digitize the historical data. This company created a very smart queue-based solution that decouples the collection of the raw data from the processing. So far, so good - the perfect candidate for some form of capacity on demand. But there is a catch - a big one - and one we see all too often when architecting enterprise solutions: the software license terms.
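I don't know the internals of their system, but the pattern itself is simple enough to sketch: put the raw scans on a queue, and let a pool of OCR workers - sized to the backlog rather than the intake rate - drain it. Here's a minimal illustration, where process_document is just a stand-in for the real OCR step:

```python
import queue
import threading

# Decouple collection (producers) from processing (consumers) with a queue,
# so the number of workers can track the backlog instead of the intake rate.
backlog: "queue.Queue[str]" = queue.Queue()

def collect(documents):
    """Intake side: drop raw scans on the queue as they arrive."""
    for doc in documents:
        backlog.put(doc)

def process_document(doc: str):
    """Stand-in for the real OCR / bar-code / assisted-processing step."""
    print(f"processing {doc}")

def ocr_worker(worker_id: int):
    """Processing side: each worker drains the queue independently."""
    while True:
        try:
            doc = backlog.get(timeout=5)
        except queue.Empty:
            return                       # no backlog left, worker can retire
        process_document(doc)
        backlog.task_done()

# During an onboarding ramp, start more workers; steady state needs only a few.
collect([f"scan-{i}.tif" for i in range(10)])
workers = [threading.Thread(target=ocr_worker, args=(i,)) for i in range(4)]
for w in workers:
    w.start()
backlog.join()
```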
The OCR package is licensed on a per-processor basis - a traditional perpetual license. Scaling up is no issue - just purchase more licenses - but scaling down isn't an option with this vendor. Today they optimize for the worst-case scenario, and the cost of the compute in this equation is tiny compared to the cost of the licenses. Until ISVs offer license models that fit the infrastructure deployment models, we will have a cadence mismatch. We spend a lot of time looking for creative options to solve this - e.g. rather than scaling out, scaling virtual machines up by dynamically adding more memory or, if it's a per-server license, adding more CPUs.
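The mismatch is easy to see with some back-of-the-envelope math. The numbers below are made up purely for illustration, but the shape of the problem is real: the perpetual licenses have to be sized for the peak, while the compute could otherwise shrink back to the steady state.

```python
# Illustrative numbers only - showing why the license, not the compute,
# dominates when you size for the worst case under per-processor licensing.
peak_processors = 32          # needed during an onboarding ramp
steady_processors = 4         # typical day-to-day load
license_per_processor = 5000  # one-time, perpetual; cannot be handed back
compute_per_proc_hour = 0.10  # hourly infrastructure cost per processor

license_cost = peak_processors * license_per_processor            # fixed at peak
yearly_compute_elastic = steady_processors * compute_per_proc_hour * 24 * 365

print(f"perpetual licenses (sized for peak): ${license_cost:,.0f}")
print(f"elastic compute for a whole year:    ${yearly_compute_elastic:,.0f}")
```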
One area where we are seeing good traction is the virtualization of disaster recovery. Providing offsite data storage via the cloud, then applications on demand on a pay-per-use basis, is hitting the mark. If we go back to the software licensing schemes for a second, this is a place where we have some degree of synchronization. Many vendors provide a "DR" license that covers a limited amount of time per year. Now we can finally show compliance based on the actual number of hours a DR instance was active.
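As a simple illustration - with a hypothetical allowance and activation log - compliance becomes a matter of summing the hours the DR instances actually ran:

```python
from datetime import datetime

# Minimal sketch: proving DR-license compliance from actual activation records.
# The allowance and events are hypothetical; the point is that usage is measurable.
DR_ALLOWANCE_HOURS = 240           # e.g. what a vendor's DR clause might permit per year

activations = [                    # (started, stopped) for each DR test or failover
    (datetime(2011, 3, 12, 8, 0), datetime(2011, 3, 12, 20, 0)),
    (datetime(2011, 9, 4, 6, 0),  datetime(2011, 9, 5, 6, 0)),
]

hours_used = sum((stop - start).total_seconds() / 3600 for start, stop in activations)
status = "compliant" if hours_used <= DR_ALLOWANCE_HOURS else "over allowance"
print(f"DR instances active {hours_used:.0f}h of {DR_ALLOWANCE_HOURS}h allowed -> {status}")
```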
So, enterprise cloud is a little more than "enterprises using the cloud". To support mission-critical workloads, enterprises look for the right blend of people, process and technology. The cloud isn't one-size-fits-all - if it were, the battle for the cloud would be over. Thank goodness there is plenty of room for innovation.