In the past year, I have been fortunate to see many different views of cloud services – from SaaS, PaaS, and ITaaS to WuPaaS. Ok, the last one is a Warm-up Party as a Service, but you get my drift. It seems every vendor has a slightly different take on cloud. But, in an effort to simplify, I believe one thing holds true across all offerings today: cloud is a radically different way of handling computing for companies, and there are complex concepts that must be understood in moving towards a cloud.
In my mind, and from checking with my peers, the cloud is almost the polar opposite of the established IT department. Cloud today seems to work best for new companies that don’t have an established IT infrastructure. Cloud offers a major cost advantage compared with building your own IT shop from scratch, with pay-as-you-go pricing and on-demand availability. I’m not sure how much value cloud offers to a heavily virtualized, established IT shop with storage, compute and networking already in place.
For established IT groups, moving towards IT as a Service is the main goal – to replace and augment the traditional IT functions performed by internal system administration and storage groups. Testing cloud services with a Software as a Service offering is probably the easiest first step, because it involves only a subset of users (sales, for instance). But the nirvana goal would be paying for on-demand usage of computing and storage from a service provider and eliminating most or all capital investment in IT equipment.
I titled this post ‘crossing the great divide’ because I think the established IT department is going to be forced to make major changes to how it operates to move towards cloud. Some of these changes simply don’t make sense in many organizations, however. For instance, ERP and CRM systems often represent an organization’s bread-and-butter systems, but how many times does an IT group actually deploy these systems? I’d say, not often. And how many of these identical servers does IT deploy? Again, probably not many. Yet, to really build these in the cloud, IT will first have to define and create installation workflows that can be deployed within a cloud service offering. The cloud offerings I have surveyed all require the definition of set, repeatable workflows that deploy services in a layered approach. These repeatable definitions make the most sense for a service provider, not for end-user IT shops.
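To make that concrete, here is a rough sketch of what one of those repeatable, layered workflow definitions might look like if expressed as data. This is only an illustration in Python – the layer names, versions and installer scripts are all made up, not taken from any vendor’s actual catalog format:

```python
# Hypothetical layered workflow definition; every name here is illustrative.
erp_workflow = {
    "name": "erp-app-server",
    "base_template": "rhel6-x64",            # OS-only base template
    "layers": [
        {"name": "java-runtime", "installer": "install_jdk.sh",  "version": "1.6"},
        {"name": "app-binaries", "installer": "install_erp.sh",  "version": "9.1"},
        {"name": "site-config",  "installer": "apply_config.sh", "version": "current"},
    ],
}

def deploy(workflow):
    """Walk the layers in order; each step must already be scripted and repeatable."""
    print(f"Provisioning from template: {workflow['base_template']}")
    for layer in workflow["layers"]:
        print(f"  applying layer {layer['name']} ({layer['version']}) via {layer['installer']}")

deploy(erp_workflow)
```

The catch is that every one of those installer steps has to exist and be repeatable before the catalog can deploy anything – and that is exactly the work a shop that deploys an ERP system once every few years has never had a reason to do.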
Another example from my company: we run a business analytics package. I can tell you from personal experience with two installations/upgrades, I cannot imagine a service catalog being able to deploy this service. Each installation has taken weeks of tweaks and changes to get it working properly. Maybe even a little voodoo. The thing is, even if a vendor provides a best-practices scripted installation, I’m sure every company will have variations it must make, and I’m not sure how many administrators would be happy running a provided install script against their business-critical applications.
So packaging software, updates and deployments is probably the first step that established shops will need to take. Virtualization brought the goodness of template servers for easier, managed deployment. Packaging takes things a step further. A template may already have the OS, virtualization tools, antivirus and other miscellaneous software pre-installed. In a cloud offering, the template is likely OS-only, with ancillary software chosen from a list of options at deployment time. Think of it as a layer cake with each layer being optional. Updates to each layer would then allow upgrades to be pushed into deployed assets. If these were baked into the template instead, your long-term management options would be limited.
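Here’s a toy illustration of why the layer cake matters for long-term management: an update to one layer can be pushed to every deployed instance that carries it, which you cannot do once the software is baked into the template. All the hosts, layers and versions below are made up:

```python
# Made-up inventory: which layers each deployed instance carries.
deployed_instances = [
    {"host": "app01", "layers": {"os": "rhel6", "antivirus": "5.2", "agent": "1.0"}},
    {"host": "app02", "layers": {"os": "rhel6", "agent": "1.0"}},  # antivirus layer was optional
]

def update_layer(instances, layer, new_version):
    """Upgrade one layer wherever it is deployed; skip hosts that never took it."""
    for inst in instances:
        if layer in inst["layers"]:
            old = inst["layers"][layer]
            inst["layers"][layer] = new_version
            print(f"{inst['host']}: {layer} {old} -> {new_version}")

update_layer(deployed_instances, "antivirus", "5.3")
```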
Packaging presents problems of its own. For instance, Microsoft products come in the handy, dandy MSI format, and System Center and other management products do well with their own software, but they sometimes struggle with third-party installers (like InstallAnywhere). The same holds true with cloud. With so many different ways to install, administrators will be forced to test packages against a number of options and scenarios before they install properly.
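Just to show how quickly those combinations pile up, here’s a trivial sketch of the kind of test matrix an administrator ends up with – the installer types and OS targets are illustrative:

```python
import itertools

# Illustrative lists; a real shop's matrix would be larger.
installer_types = ["msi", "install-anywhere", "shell-script"]
os_targets = ["win2008r2", "rhel5", "rhel6"]

# In practice, each combination would be exercised in a scratch VM.
for installer, target in itertools.product(installer_types, os_targets):
    print(f"test: {installer} package on {target}")
```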
Packaging of custom-written software is another issue. Many companies don’t package their custom-written software today. With Java WAR deployments and other drop-in executable replacements, companies must figure out how to deploy these onto servers through scripted installations, along with the database-layer software and configuration they require.
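As a hypothetical example of what such a scripted installation might look like, here’s a minimal Python sketch that applies a database schema and then drops a WAR into a Tomcat container. The file names, paths and use of psql are all assumptions for illustration, not a recipe:

```python
import shutil
import subprocess
from pathlib import Path

# Assumed container location; adjust for the actual environment.
TOMCAT_WEBAPPS = Path("/opt/tomcat/webapps")

def deploy_war(war_file: str, db_schema_script: str) -> None:
    """Apply the database layer first, then drop in the WAR."""
    # Create the tables the application expects to find on startup.
    subprocess.run(["psql", "-f", db_schema_script], check=True)
    # Drop-in replacement: the container expands the WAR on its own.
    shutil.copy(war_file, TOMCAT_WEBAPPS / Path(war_file).name)

# Hypothetical artifact names.
deploy_war("ordertracker.war", "ordertracker_schema.sql")
```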
In our environment today, the systems administration and database administration teams handle a lot of these supporting software installations before systems are turned over to developers. The tables will turn with the cloud: developers will have to handle more of these functions, or the teams will need to collaborate far more deeply so that packages can be built with all the necessary requirements in place. This will mean more time from many team members in testing and quality assurance than is required today.
I don’t think cloud presents a clearly defined cost-savings proposition when compared against the sales case for virtualization. The value it provides is on-demand growth and bursting. For an established IT organization that may be considering an internal cloud, I’m not sure where the value comes in – if anything, it only creates new packaging work for teams. Hybrid or public cloud is really where the value seems to lie; however, I think most organizations will have some assets that cannot move outside the campus firewall, requiring some sort of internal cloud – or maybe legacy (traditional) configurations on-site.
Troubleshooting presents its own issues. As soon as a user or administrator logs into a deployed compute instance, they will make changes that are not controlled from the service catalog, impeding maintenance and upgradability of the deployed server. Everything must be deployed through the service catalog to remain truly maintainable, which will be a change for administrators.
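To illustrate the drift problem, here’s a small made-up example comparing what a service catalog thinks is on a server against what the hand-edited server actually reports:

```python
# Both records are invented for illustration.
catalog_record = {"java-runtime": "1.6", "app-binaries": "9.1"}
actual_state = {"java-runtime": "1.7", "app-binaries": "9.1", "hotfix-tool": "0.3"}

def report_drift(expected, actual):
    """Flag packages changed or added outside the service catalog."""
    for pkg, ver in actual.items():
        if pkg not in expected:
            print(f"unmanaged addition: {pkg} {ver}")
        elif expected[pkg] != ver:
            print(f"version drift: {pkg} expected {expected[pkg]}, found {ver}")

report_drift(catalog_record, actual_state)
```

Once the catalog’s record and the server disagree like this, the next catalog-driven upgrade is no longer safe to apply.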
Control of systems will also be an issue. The enterprise’s internal IT today is owned and controlled by the company. When bringing in a cloud partner, there will be some dependency on their resources and IT professionals to diagnose and resolve problems. A lot of control could potentially be lost if there were storage- or virtualization-layer problems in their cloud offering. And I think administrators will be frustrated when their hands are tied and they are unable to help resolve performance problems or malfunctions on outsourced systems.
And then there is security. This is a great challenge as things move outside of the corporate-controlled datacenter. There is data in transit and all sorts of threats in the wild that must be addressed. I’ve not seen compelling answers to the security questions yet, but it’s an evolving field and I think they will eventually be answered.
These are my thoughts at the present time. I think cloud is evolving as a topic, and there will be big changes during 2012 from vendors like VMware, VCE, HP and others. It’s a space of radical growth and potential, but I think there are so many definitions and interpretations of how to deploy clouds that it makes the customer’s job of evaluating which solution is right for their enterprise difficult. And once that decision is made, moving towards initial deployments on the cloud may be impaired by the challenges I have discussed. As with the transition to virtualization, the low-hanging fruit will be the first to move, followed by the more complex and critical systems later. But vendors have the difficult job of answering WHY companies should move to cloud in the coming year. And that’s an answer I’m interested to hear.