I’m at Interop Las Vegas today presenting a talk entitled How to Manage Your Cloud of Choice. The motivation for this talk comes from our customers trying to wrap their heads around hybrid cloud and understand how to make the best use of it.
The most basic question customers ask about hybrid cloud is whether it makes sense for their organization to adopt or not. As you can imagine, there are tradeoffs in security, performance, SLA guarantees, cost, and much more between private, hybrid, and public clouds. Each organization will have different constraints and priorities. Thus the decision of whether to move to hybrid or public cloud – or more specifically what apps, data, and services to move – will be different for each organization.
In my session, I talk about how to architect your datacenter to provide the flexibility to make changes over time. The right architecture will enable you to move between private, hybrid, and public cloud seamlessly, without impacting services and in a way that is completely transparent to users. And this is exactly the point of hybrid cloud: it’s about flexibility and choice. But in order to avoid lock-in, you need the right architecture.
But what does the right architecture look like? We’ve found that the answer is a self-service architecture. While it may seem a bit of a leap from hybrid cloud to self-service, the point is that the cloud that you use to run your infrastructure should be considered an implementation decision, not an architectural decision. And in order to achieve that, you need the right abstraction layer between your users and services and the backend infrastructure they utilize.
Ok, so how does self-service achieve this? It’s instructive to look at non-technical, real-life examples for inspiration. In my talk at CloudConnect Silicon Valley last month, I used the example of FedEx. As a customer, FedEx gives me a very simple interface with three input variables: where my package is going, how much it weighs, and when I want it to get there.
Assuming the price is right, I print my shipping label, put it on my box, and drop it at a drop location. This is all I see as a customer. However, on the backend, there is an amazing amount of complicated logistics to get my package to its destination on-time. But really, as a customer, do I care if FedEx uses a truck versus a plane to get my package to its destination? Do I care that they optimize to avoid making left turns? No – so long as my SLA is met, I don’t. Thus exactly how FedEx gets my package to its destination is an implementation detail that I as the customer don’t see or care about. More importantly, they can change this implementation detail as frequently as they’d like without affecting me (so long as my SLA is met!).
The question before us is how we can architect a datacenter such that we have the same clean separation between the customer (user) and the backend implementation chosen by IT. We believe that self-service is the answer our customers are looking for.
So what should you be thinking about if you want to build a self-service datacenter? The first piece of the puzzle is a self-service portal. This is a place where users can provision new services and manage their existing services. Like FedEx, each service would have an associated cost, and users would need to decide if their business needs justify the cost of the service. If they do, then the user can start the process for provisioning that service. Whether that service is provisioned into the private cloud or the public cloud is of little concern to the user so long as the SLA they specified is met. And that’s exactly the point: all the user sees is the self-service portal; they’re unaware of exactly where the service has been provisioned.
This self-service interface deserves special emphasis. In today’s typical IT environment, all requests for new services are made through a ticketing system. Within this system, the user requesting the service must provide very detailed specs for every aspect of their service, sometimes including the physical hardware or virtualization solution their service will run on. Because the user is deeply involved in specifying the backend infrastructure for running their service, IT has little wiggle room if it decides to change vendors or technologies. This significantly ties IT’s hands and results in “silos” being created – different technology stacks for different applications (e.g. the Windows stack differs top-to-bottom from the Linux stack, which differs from the Tier 1 apps stack).
In the end, users are good at creating and running their services, not managing the underlying infrastructure their services run on. In the self-service model, IT handles the infrastructure choices and users can focus on their services. This gives IT the freedom to change technologies, move from the private cloud to the public cloud or vice-versa, test out new ideas on a small percentage of the services before rolling the change out to all of them, etc. And there are many benefits with having a clean separation between service and infrastructure implementation!
So what else should you be thinking about in building a self-service datacenter? Well, questions like “how can I automate the provisioning of services?” or “how can I prevent this self-service datacenter from turning into total chaos?” or “how do I operate a self-service datacenter?” should be top of mind. The answer in two words to all of these questions is: management tools. In particular, you’re looking for two types of management tools: cloud automation and cloud operations.
Cloud automation tools take care of wiring up and connecting all the disparate services you have in your datacenter. To provision a service, you need to contact vCenter Server to deploy the VM from a template, talk to the networking gear to set up a VLAN and portgroups and configure network settings for the service, connect with the storage backend to provision a new LUN or NAS mount, talk to vShield or some other security service to configure firewalls and anti-virus, and much more. Previously, all of these tasks were performed manually by IT admins, which could take days. Cloud automation allows you to automate all of this so that it can be done in seconds. At the same time, to prevent the “total chaos” outcome so many IT admins worry about with self-service, you need to be able to specify and apply policies to different services and workflows.
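To make the orchestration idea concrete, here is a minimal sketch of the provisioning sequence described above. The function names are illustrative stand-ins for the real compute, network, storage, and security APIs (they are not actual vCenter or vShield calls); the point is simply that an automation tool chains these steps into one repeatable workflow.

```python
# Hypothetical provisioning workflow: each helper stands in for a call to a
# real infrastructure API (compute, network, storage, security).

def deploy_vm_from_template(name):
    # Stand-in for deploying a VM from a template via the compute layer.
    return f"vm:{name}"

def configure_network(vm):
    # Stand-in for creating a VLAN/portgroup and assigning network settings.
    return f"net:{vm}"

def provision_storage(vm):
    # Stand-in for carving out a LUN or NAS mount for the service.
    return f"lun:{vm}"

def configure_security(vm):
    # Stand-in for applying firewall rules and anti-virus policy.
    return f"fw:{vm}"

def provision_service(name):
    """Run every infrastructure step in order, as an automation tool would."""
    vm = deploy_vm_from_template(name)
    return [vm, configure_network(vm), provision_storage(vm), configure_security(vm)]

print(provision_service("web01"))
# → ['vm:web01', 'net:vm:web01', 'lun:vm:web01', 'fw:vm:web01']
```

Done by hand, each of these steps is a ticket and a wait; done this way, the whole chain runs in seconds and identically every time.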
VMware’s vCloud Automation Center (vCAC) solves the cloud automation problem. It is designed specifically for hybrid cloud scenarios, where a company’s datacenter spans private cloud, public cloud, and even (gasp!) physical servers. There are three primary components to it: a fully-configurable self-service portal, automated workflows, and policy-based governance. The self-service portal is straightforward enough – this is the web page where users go to provision and manage services and you have very granular control over the look and functionality of it. In terms of automated workflows and policy-based governance, vCAC allows you to create “blueprints” that specify the steps that are taken after a user selects a service in the self-service portal. Sometimes that service request will need approval, say from the user’s manager or the LOB’s VP. Other times the request is small enough that no approval is needed. This is completely configurable by IT, and this is just one example of how the right cloud automation tool can allow you to avoid the theoretical chaos associated with self-service architectures by inserting the right control points. Assuming the request is approved, vCAC then provisions the service. As mentioned in the preceding paragraph, this involves connecting to APIs for many different infrastructure components, including virtualization, network, storage, and security. Again, this is all automatic based on the blueprint specification.
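The approval-policy idea in a blueprint can be sketched in a few lines. The blueprint structure and field names below are hypothetical (not actual vCAC syntax); they just illustrate how a policy can auto-approve small requests while routing larger ones for managerial approval, with the target cloud kept as a detail the user never sees.

```python
# Illustrative blueprint with policy-based governance. The structure and
# field names are made up for this sketch, not real vCAC configuration.

BLUEPRINT = {
    "service": "small-linux-vm",
    "target_cloud": "private",     # IT can flip this to "public" transparently
    "auto_approve_under": 500,     # requests under this monthly cost need no approval
}

def handle_request(blueprint, monthly_cost, manager_approves=False):
    """Return the disposition of a service request under the blueprint's policy."""
    if monthly_cost < blueprint["auto_approve_under"]:
        return "provisioned"
    return "provisioned" if manager_approves else "pending approval"

print(handle_request(BLUEPRINT, 100))    # → provisioned
print(handle_request(BLUEPRINT, 2000))   # → pending approval
```

The control point (the cost threshold and who must approve) lives entirely in the blueprint, so IT can tighten or relax it without touching anything the user sees.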
vCAC and other cloud automation tools are the glue that holds the system together. Moreover, they abstract away the specifics of the infrastructure from the user. All the user sees is the self-service portal. Behind the scenes, vCAC orchestrates a lot of components and potential complexity, but because blueprints are data-driven, it is easy for IT to manage it all. You simply specify what components are involved and what you want to happen, and vCAC takes care of it for you. This is where the benefits of the architecture start to show themselves. While a blueprint might call for deployment in a private cloud, the admin could easily change it to a public cloud without any users knowing. The user would only see the service name and some of its characteristics, but would not know where it was being provisioned. Thus vCAC provides that separation layer that gives IT flexibility in moving between private cloud and public cloud (and back!).
Once services are provisioned, you then need to think about how you’ll manage their ongoing operations. This is where cloud operations tools come in. vCenter Operations Management Suite (vC Ops) is VMware’s solution in this space. Like vCAC, it was designed from the ground up for hybrid cloud scenarios – both private and public cloud (and yes – even physical servers too!). vC Ops covers a wide range of functionality, including performance, capacity, configuration, compliance, log analysis, in-guest monitoring, and much more. For this discussion, the most important items to think about are performance, capacity, and compliance. Performance issues are certainly top of mind in hybrid cloud scenarios. Public clouds often give you an SLA, but how do you know the provider is meeting that SLA? Or if there is a problem, is it on the provider’s side or yours? vC Ops has powerful analytics to help answer these questions.
With regard to capacity management, a self-service model requires you to think differently. Typically, capacity management is done by IT on a per-request basis: a new request comes in, IT works with the user to understand infrastructure requirements, and then new hardware is bought and provisioned for that service. This can take months. If we want to provision new services in seconds, we need to have the physical hardware ready before the request comes in. But how can we do that? Well, FedEx does it. Have you ever gone to a FedEx drop location and found that all their trucks were full and they couldn’t take any more packages? Of course not! FedEx analyzes historical trends to understand customer demand and ensures that enough trucks are ready as the packages come, even for busy times like the winter holidays. Similarly, IT should start doing capacity trending – analyzing usage to understand when and where capacity shortfalls will occur. vC Ops provides these capabilities and will accurately forecast future capacity availability.
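The core of capacity trending can be shown with a toy sketch: fit a linear trend to historical usage and estimate how many periods remain before capacity is exhausted. This is a simplified illustration of the idea, not how vC Ops computes its forecasts, and the usage numbers are invented.

```python
# Minimal capacity-trending sketch: least-squares trend over evenly spaced
# usage samples, then a forecast of periods until capacity is reached.

def linear_trend(usage):
    """Least-squares slope and intercept for evenly spaced samples."""
    n = len(usage)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(usage) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, usage))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

def periods_until_full(usage, capacity):
    """Forecast how many more periods until usage hits capacity."""
    slope, intercept = linear_trend(usage)
    if slope <= 0:
        return None  # usage is flat or shrinking; no shortfall forecast
    current = slope * (len(usage) - 1) + intercept
    return (capacity - current) / slope

usage = [40, 44, 48, 52, 56]   # e.g. percent of cluster capacity, month by month
print(periods_until_full(usage, 100))
# → 11.0 (months of headroom at the current growth rate)
```

With a forecast like this in hand, IT can order and rack hardware before the shortfall arrives, which is exactly what makes seconds-level provisioning possible.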
Compliance is another big issue for the hybrid cloud. IT needs to ensure that its datacenters are in compliance, irrespective of whether they’re in a private or public cloud. vC Ops provides automated configuration information collection and compliance assessment against that configuration. It will automatically flag items out of compliance and can automatically remediate them if IT so desires. This way admins can be assured that no matter where a workload is running, it will be in compliance.
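The assess-and-remediate loop can be sketched as follows. The setting names and baseline are hypothetical examples, and real tools work against far richer configuration models, but the shape of the logic is the same: compare collected configuration to a desired baseline, flag drift, and optionally reset it.

```python
# Hedged sketch of automated compliance assessment and remediation.
# The baseline and setting names are invented for illustration.

BASELINE = {"ssh_root_login": "disabled", "firewall": "enabled", "ntp": "enabled"}

def assess(config, baseline=BASELINE):
    """Return the settings that have drifted from the baseline."""
    return {k: v for k, v in config.items() if baseline.get(k) != v}

def remediate(config, baseline=BASELINE):
    """Reset drifted settings back to their baseline values."""
    fixed = dict(config)
    fixed.update({k: baseline[k] for k in assess(config, baseline)})
    return fixed

host = {"ssh_root_login": "enabled", "firewall": "enabled", "ntp": "enabled"}
print(assess(host))      # → {'ssh_root_login': 'enabled'}  (the drifted setting)
print(remediate(host))   # config restored to the baseline
```

Because the check runs against the collected configuration rather than against any particular cloud's API, the same baseline can be enforced wherever the workload happens to be running.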
You also no longer need different management tools for different environments. vCAC and vC Ops work across all your environments – private, hybrid, and public clouds – so you can create blueprints or check compliance in the same way and from the same screen regardless of the underlying infrastructure. This means that your admins can quickly get up-to-speed on new environments.
Together vCloud Automation Center and vCenter Operations Management Suite are crucial for enabling a self-service datacenter and with it, offer greater flexibility in your hybrid cloud strategy. Have you given them a try?