It was at VMworld Barcelona 2017 that we got a glimpse, in the keynote, of what was then called VMware Automation Services. Today is a big day: what was shown 12 months ago is now a reality with the launch of Cloud Assembly, Service Broker and Code Stream, which join the VMware Cloud Services family as SaaS services.
There is going to be some confusion: where does vRA sit now? Is it being deprecated? What if I want these services on-prem?
I will try my best to answer these questions as I interpret them. Either way, vRA was and still is good, but Cloud Assembly and Service Broker are leapfrogging a generation ahead of all the CMPs out there, and I cannot wait to see the cool things the VMware community does with these awesome services.
There are three different services I am going to focus on; each can be run independently, or you can use them together for a richer experience. These are:
- Cloud Assembly
- Service Broker
- Code Stream
I guarantee many people will go around simply saying that Cloud Assembly / Service Broker are the SaaS version of vRA, or that this is vRA in the cloud. That is going to add confusion because, while there is overlap in functionality, as of right now they are different products that fill different use cases in the short term. If you are licensed for vRA, you do not currently get these services.
The image below shows where the two products started and where they are going, which may clear things up.
As you can see, the direction of the two products has been different. vRA will still have better governance and extensibility for the near future, along with stronger private cloud capabilities, whereas VMware's Cloud Automation Services are stronger in the public cloud and in infrastructure as code, more suited to a developer audience. My understanding is that in the future the two products will merge, giving a choice of on-prem (vRA 8) or SaaS, which will be fantastic to see.
Let's get a couple of quick questions many might be asking out of the way:
- If I have vRA can I get these new services? – Sure, if you pay for them; your current licensing and SnS do not entitle you to these services.
- Is vRA going away? – No, it has a bright future.
- Can I run both managing the same clouds and vSphere endpoints? – Yes.
- If I don't have vRA yet, which should I get? – If you're only looking to automate and manage mostly vSphere endpoints on-premises, go with vRA; if you're purely looking to go more public cloud, these new services are currently the better fit.
- When does it go GA? – By the end of 2018.
- Is it available now? – It is in IA (Initial Availability) for select IA customers.
- I notice Extensibility says Beta, why? – It is Beta during IA and will be production-ready at GA.
Now that we have gotten all that out of the way, let's dive into Cloud Assembly for this post. Just a note: I am bound to miss some things here because there is so much to cover, but I will do my best to give a good overview.
In simple terms, Cloud Assembly is the blueprint and infrastructure configuration and management part of the story. In vRA terms, it is basically everything under the Infrastructure and Blueprints tabs (at a basic comparison level).
The first thing to mention here is the infrastructure side. Cloud Assembly has a tag-based placement engine with abstracted layers to allow for truly agnostic cloud provisioning. I have done up a diagram below to illustrate how it all hangs together, based on my experience with it so far.
Everything is tag based, and this is how choices are made at allocation time; if there were no tags, it would round-robin between cloud endpoints when more than one is available. Tags are whatever you make them. Say I had a tag called "cloud": I could give my vSphere endpoint the value "cloud:private" and my AWS endpoint "cloud:public". If I then specify "cloud:private" in my blueprint as a constraint, it will be provisioned to my vSphere endpoint; changing that to "cloud:public" would provision to AWS. A very simple example, but you can start to see how great this is.
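To make that concrete, here is a minimal sketch of how a constraint tag shows up in the blueprint YAML. Treat the exact property names as illustrative of my environment at IA rather than a definitive schema:

```yaml
resources:
  app-server:
    type: Cloud.Machine
    properties:
      image: centos-7
      flavor: small
      constraints:
        # Matches the "cloud:private" tag on my vSphere endpoint;
        # switching this to cloud:public would land the machine on AWS.
        - tag: 'cloud:private'
```

The blueprint never names a cloud directly; the tag is the only thing steering placement.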
Tags are inherited from parent resources: place generic ones at the top, like cloud:public or platform:aws, and within availability zones or clusters you can add others like availabilityzone:a. (You make your own tags, but it is important to come up with a good tagging policy.)
The first thing that needs to be configured is a cloud account; as of launch there are six types (AWS, Azure, VMC, vCenter, NSX-V, NSX-T). For those familiar with vRA, these are the "endpoints" equivalent.
There is not much to these: for all the endpoints you supply credentials, a name and any tags, and select the regions or datacenters that you want to link the account to. I chose ap-southeast-2 for AWS as that's my local AWS region, but you could select every single one. If you're connecting a vCenter endpoint you will need to download and configure a data collector; this acts as the proxy agent for Cloud Assembly.
After the cloud account has been created, if you select the "create cloud zones for each region" button you will already have some Cloud Zones pre-populated. These, to keep with the vRA analogy, are the "reservations" equivalent.
Under the Summary tab, we select the placement policy (round robin etc.) and add any relevant tags.
Moving on to the Compute tab, we can add additional tags to the availability zones for AWS and Azure, or the clusters for vCenter.
The Projects tab is where we assign the cloud zone to a project. I will not go into this here, as there is a projects section further down.
Flavor mappings are really cool, and the part where you can start to see how well this works in making blueprints cloud agnostic. We create a new flavor and give it a name; this could be "Small" or "XLarge", really anything you want. Then we add the associated cloud sizes from all of the cloud accounts/regions. As you can see in the image below, I have assigned the AWS, Azure and vSphere sizes that I want associated with my flavor (obviously it is best to keep the sizes as close as possible).
Image mappings are very similar to flavor mappings. When creating a new image mapping we choose a name, this could be CentOS-7 or Windows 2016, then we assign the image from each endpoint we want associated with this cloud-agnostic image reference. In the image below you can see that I have assigned CentOS images from AWS, Azure and vCenter. For public cloud, these can be either marketplace or private images.
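This is where the mappings pay off in the blueprint: you reference the mapping names, never cloud-specific AMIs, Azure image URNs or vSphere templates. A hedged sketch (the mapping names here are the ones I created above, and the schema reflects what I saw at IA):

```yaml
resources:
  web-vm:
    type: Cloud.Machine
    properties:
      # "centos-7" and "small" are mapping names, not cloud-specific IDs.
      # At allocation time the placement engine resolves them to the right
      # AMI / Azure image / vSphere template and size for whichever
      # endpoint wins placement.
      image: centos-7
      flavor: small
```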
Network profiles are self-explanatory: create one and assign it to an account/region, then give it a name and any tags that you want.
Under the Networks tab, we add in the networks we want to use, and assign tags to these as shown in the below image.
Under Network Policies, we set the details around on-demand networks and security groups. The below image is of a vSphere one I have configured.
The Security tab is where we select the security policies we want to use with this network profile.
Storage profiles are where we assign a storage policy. For public cloud, this is where we specify what disk type to use, like managed or unmanaged disks in Azure, or gp2 or io1 disks in AWS, and whether to use encryption, etc. Create as many as required and tag appropriately.
I am not going to go through Resources in this post; under this section are all the resources that have been provisioned on, or discovered from, your accounts and regions.
Projects, to use another vRA term, are the equivalent construct to a Business Group. This is where you assign access to people and assign cloud zones ("reservations") to projects. I like the project construct; it is something I have pushed for with vRA in the past, but there I had to custom-build the capability. Moving forward, projects will have governance around costs, resources and people.
I will not make this blog post a novel, and will be covering Cloud Assembly in more detail in future posts, but let's finally look at provisioning a simple blueprint which has a network, a server and a load balancer.
From the image below you will notice this is a similar layout to what many have seen in vRA. We have the design canvas directly in the middle of the screen, then our building-block components on the left; you will notice both cloud-agnostic as well as cloud-specific components. Finally, on the right we have the blueprint code, which is in YAML. If you drag components from the left and drop them onto the canvas it will automatically build the YAML, or you can write directly in the code section and watch the canvas draw as you're coding the components.
You may have noticed in the above image that we can deploy directly from the design canvas, as well as version the blueprint internally to Cloud Assembly. This allows publishing multiple versions, but if using a release pipeline most will version the blueprint in source control and submit the YAML directly.
In this simple example I have hard-coded the image and flavor but have made the constraint tag, as well as the number of machines, an input. This will allow me to provision this blueprint into AWS, Azure or vSphere just by changing the tag on submission.
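For reference, the blueprint I described looks roughly like the sketch below. The resource and property names follow what I saw in my environment at IA, so treat the exact schema as illustrative:

```yaml
inputs:
  count:
    type: integer
    default: 1
  platform:
    type: string
    default: 'platform:aws'
resources:
  app-net:
    type: Cloud.Network
    properties:
      networkType: existing
  app-server:
    type: Cloud.Machine
    properties:
      image: centos-7          # hard-coded image mapping
      flavor: small            # hard-coded flavor mapping
      count: '${input.count}'
      constraints:
        - tag: '${input.platform}'   # decides AWS / Azure / vSphere
      networks:
        - name: '${app-net.name}'
  app-lb:
    type: Cloud.LoadBalancer
    properties:
      network: '${app-net.name}'
      instances:
        - '${app-server.id}'
      routes:
        - protocol: HTTP
          port: 80
          instanceProtocol: HTTP
          instancePort: 80
```

Note how the `${input.platform}` binding is the only thing that decides which cloud the whole stack lands on.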
When I deploy, I can choose a new deployment or an existing one. This follows the concept of immutable infrastructure: if we choose an existing deployment, it will act on the difference between what is deployed and the new blueprint. We then select the version to be deployed.
The inputs are pulled from the YAML, and we can see they are my machine count and platform. I will submit to AWS and hit Deploy.
After deploying, we are taken to the Deployments tab; for yet another vRA reference, think of this as both the Items tab and the Requests tab in one. The below image shows that my simple deployment from above has been successful.
We can then select this deployment and dive into it. The image below shows the canvas of the deployment as well as all the deployment data.
If we look at the History tab within this deployment we can see all the actions that have been taken. If we were to deploy to this deployment again (deploy to existing), we would see the different requests down the left-hand side. This is also how we resume: we don't resume a failed deployment, we re-deploy to the existing failed deployment; anything that was successful and still needed is left alone, and anything that failed is re-created.
This is really cool, though hard to show readably in a screenshot: within the events in the above image you will notice little boxes with arrows against some tasks, specifically the allocation ones. If we select these, we are taken to the request under the Requests tab and can see why it chose AWS over the other platforms.
In the image below you can see that Azure and vSphere could not match the "platform:aws" constraint I put in.
I now look in AWS and can see my machine and load balancer.
Now, just to make sure, let's change this blueprint to go to Azure by changing the tag constraint to "platform:azure".
Now we can see below that Azure was chosen.
This is really exciting: being able to deploy as code to any cloud provider, from simple machines and load balancers to complex multi-tiered applications. I cannot wait to see what can be done with this service going forward.