
Extending vRA with ETCD – Part 2

Hey Hey,

There has been great feedback from Part 1 of this series, so this part will carry on and extend it.

I will point out that a lot of the data I'm putting into ETCD, or the key value store (KVS), can be sourced by making calls to the vRA and Application Services APIs. The issue is that these APIs are heavy, particularly the Application Services API, meaning a lot of data gets pulled back whether you want it or not. I have some deployments in Application Services which return 100 pages of JSON that are many, many levels deep, and there is no simple way to pull out a single piece of information.
Using these additional services allows us to store the data in a consistent manner, and it is extremely lightweight in both processing and data. It has allowed us to deliver quite complex solutions with very simple code at very fast speeds, with some added bonuses like locking and long polling (waiting for values to change), allowing the customer to continue using and developing it long after we have implemented it.
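To make the long-polling bonus concrete, here is a minimal sketch of how a client could watch a key in etcd's v2 HTTP API. The base URL and key path are hypothetical; the `wait=true` / `waitIndex` query parameters are from the v2 keys API.

```python
# Sketch of long-polling ("watching") a key via etcd's v2 HTTP API.
# The base URL and key layout below are assumptions for illustration.

def watch_request(base_url, key, wait_index=None):
    """Build the GET URL for an etcd v2 long-poll (watch) on a key.

    The request blocks server-side until the key changes; passing
    wait_index lets the caller resume from a known modifiedIndex so
    no updates are missed between polls.
    """
    url = "%s/v2/keys/%s?wait=true" % (base_url.rstrip("/"), key.strip("/"))
    if wait_index is not None:
        url += "&waitIndex=%d" % wait_index
    return url
```

A vRO workflow (or any HTTP client) would simply GET this URL and sit on the open connection until the value changes.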

For those who are not in the mood to read, head to the bottom: I have linked a video covering most of this, but unfortunately you will have to listen to me 🙂

To get started, I modified one of the OOTB demo applications called Nanotrader (or SpringTrader). Below is a screenshot of what the stock application looks like.

Now let's apply a little key value store and PaaS ID love, and we get something like the image below.

Let's now walk through the highlighted services in the image above:

A) This is what I call the PaaS_ID service. It is a generic service that can be attached to any application, and it has a single input: the unique ID, or PaaS ID. If this ID is in an optimum format for your environment, it provides the building blocks for almost all the other would-be inputs, such as:

  • Environment
  • Project
  • Application
  • Instance
  • Release
  • Domain
  • User account format
  • Load balancer name
  • Application variables
  • Which firewalls to configure
  • etc., etc., etc.

Sure, this won't fit every use case, but it keeps things consistent and eliminates a lot of fat-finger errors.
This service also creates the top folder in the key value store and populates it with all of these derived values, along with other folders like Deployment.
In this case, just for something extra, I am deriving the F5 load balancer name, which in turn becomes the FQDN in DNS.
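As a rough sketch of the kind of derivation the PaaS_ID service performs: given an ID in a known format, split out the building blocks and derive the load balancer name and FQDN. The ID format (`ENV-PROJECT-APP-INSTANCE`), naming convention, and DNS suffix here are all assumptions for illustration, not the customer's real standard.

```python
# Hypothetical sketch: derive the would-be inputs from a PaaS ID.
# Assumes an ID format of ENV-PROJECT-APP-INSTANCE, e.g. "DEV-TRADE-NANO-01".

def derive_from_paas_id(paas_id, dns_suffix="example.local"):
    env, project, app, instance = paas_id.split("-")
    # illustrative naming convention for the F5 load balancer
    lb_name = "lb-%s-%s-%s" % (env.lower(), app.lower(), instance)
    return {
        "environment": env,
        "project": project,
        "application": app,
        "instance": instance,
        "loadbalancer": lb_name,
        # the LB name in turn becomes the FQDN in DNS
        "fqdn": "%s.%s" % (lb_name, dns_suffix),
    }
```

Everything returned here would then be written under the deployment's top folder in the KVS.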

B) This service populates the host details in the KVS. It is dragged onto every server and will store data like:

  • Node type
  • IP address
  • Domain
  • Hostname
  • Password ID for Secret Server
  • Any other details like CPU or memory, or anything that could be of value

As you can see from the image below, this is what I will be putting in for this demo. You will also notice it takes in the PaaSID value, so the service knows where to place this information; in this case it will go under "Nodes" under the "PaaSID".
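The key layout service B writes can be sketched as follows: each host's details land under `paas/<PaaSID>/Nodes/<hostname>`. The paths and field names are illustrative, not the exact keys used in the demo.

```python
# Sketch of the node key layout written by service B.
# Key paths and field names are assumptions for illustration.

def node_entries(paas_id, hostname, details):
    """Flatten a host's details into (etcd_key, value) pairs,
    rooted under paas/<PaaSID>/Nodes/<hostname>."""
    base = "paas/%s/Nodes/%s" % (paas_id, hostname)
    return [("%s/%s" % (base, k), str(v)) for k, v in sorted(details.items())]
```

Each pair would then be written with a simple PUT to etcd's v2 keys endpoint.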

C) This service both generates a password using Secret Server and stores that password within Secret Server. This makes every deployment unique, so we are not reusing the same account across deployments. Many customers I work with are very security conscious, and this has been a fantastic fit. As you can see from the image below, this returns the Secret ID and the password. The Secret ID is stored in the KVS so we can programmatically call on it to run tasks like day-2 operations, and the end users never need to know the password.
You may also notice from the properties that this service calls Git, which then calls vRO to run a workflow to make this happen.

D) This external service is a generic service that takes in arbitrary data. In this instance it takes in database, load balancer, and password information. I have set it up to allow any data to be entered, for flexibility.

So we kick off a Nanotrader build, and what do we get?

We get a new entry in ETCD. The image below is from a custom web interface that lets the data be browsed in a human-readable way, created by a very smart guy at one of my customer sites. It lets me show you the information nicely.
The image below shows the deployment as a folder under the top folder, called paas. This is what the PaaSID service (A above) created.

I now expand that and look under the Nodes folder, which service B above created and populated.

The rest of the data shown below is from services C and D above. This is all data that is very useful and very easy to pull back for a myriad of other services or functions.

For a quick example of how this can be used, I whipped up a vRO workflow which takes in the PaaS ID and snapshots all the machines associated with that PaaS ID.
I published this through vRA, and the image below shows the catalogue item. This could also be a day-2 action, in which case no ID would need to be entered at all, as I can grab the PaaS ID based on vRA IDs etc.

Request it and enter the PaaS ID, and it will automatically populate the machines from that deployment.
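The machine lookup behind this workflow can be sketched as a recursive GET on `paas/<PaaSID>/Nodes`: in etcd's v2 API the response is a nested JSON document, and the directory names one level down are the host names. The response shape below matches the v2 keys API; the key names themselves are assumptions.

```python
# Sketch: extract host names from an etcd v2 recursive GET response on
# paas/<PaaSID>/Nodes. The key layout is an assumption for illustration.

def hostnames_from_nodes(response):
    """Given the parsed JSON of GET /v2/keys/paas/<id>/Nodes?recursive=true,
    return the host names (the last path segment of each child key)."""
    node = response["node"]
    return sorted(n["key"].rsplit("/", 1)[1] for n in node.get("nodes", []))
```

The workflow would then loop over these names and fire a snapshot task per machine.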

A very simple example, but another could be an IIS farm: once it is deployed, we can create a new website as a day-2 action, with all the data at our fingertips to make it work with very little input. I am literally coming across a new way to use the KVS and vRA every day, and what's great is that when the clients currently using this say they want to do X, the KVS is the first avenue I look at, and the answer is generally "sure, that will be easy!"

OK, so we will now move on to one last demo. This is something a client is about to implement at very large scale, to get deployment times down as far as possible. There was a big hurdle holding up a deployment: the provisioning of an Oracle database, on either an Exadata for production or a general x86 implementation for development. On the Exadata, a database would take 23 to 36 minutes to create depending on the load; on x86 it was more like 40 minutes. Even if we kicked off the database provisioning right at the start of an application deployment, everything came to a halt for about 20 minutes waiting for the database to be created.

Solution: have many databases in a pool waiting to be consumed. Using ETCD we can keep a pre-provisioned pool, which a vRO workflow or another service monitors; if it drops below a threshold, more databases are provisioned. We can then use a service in vRA Application Services to query this pool, take the details, and plumb them into the application... bang, 20 to 30 minutes shaved off provisioning time with very little effort. Instead of calling OEM to provision the database and waiting, we call ETCD for the next available database.
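One detail worth sketching: when two deployments race for the pool, they must not grab the same database. etcd's v2 API supports this with compare-and-swap, a PUT with a `prevValue` condition that only succeeds if the key still holds the expected value. The key layout and state values below are assumptions.

```python
# Sketch of atomically claiming a pool entry with etcd v2 compare-and-swap.
# Key layout (paas/dbpool/<db>/state) and the free/used values are
# hypothetical; the CAS query parameter prevValue is from the v2 API.

def claim_request(base_url, pool_key):
    """Return (url, form_data) for a PUT that flips a pool entry from
    "free" to "used". etcd rejects the PUT with a 412 if another
    deployment already claimed it, so only one caller can win."""
    url = "%s/v2/keys/%s?prevValue=free" % (base_url.rstrip("/"),
                                            pool_key.strip("/"))
    return url, {"value": "used"}
```

On a 412 response, the caller simply moves on and tries the next entry in the pool.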

For this demo I have used another OOTB demo application, Duke's Bank. Below is the stock version of this application.

Now I have removed the database and put in my own external service, which pulls from a pool of databases. This pool is created from an additional vRA Application Services blueprint; the deployments are monitored by vRO and add themselves into ETCD when the counter gets low.
As we can see from the images below, this service returns all the details the Duke's Bank application requires.

Now we deploy one of these, and we can see that the data it returned matches the data in the KVS, and the entry has moved from the pool to used. Note the secretid in this instance is actually the password 🙂

I hope this post was able to get across just how flexible using a key value store like ETCD with vRA can be. They complement each other very well, allowing more for less.

I have also added a video of this in action below.



1. v1gnesh, October 2, 2015

      “I have some Deployments in Application Services which return 100 pages of JSON that are many many levels deep and there is no simple way to pull out a single piece of information.”

      It’s very likely that you’d have come across this already but here it is anyway – a way to parse JSON and get only the bits you need –

      Hope this helps!

• Scott Norris, October 2, 2015

        Hey v1gnesh,

Cheers for the reply. Yep, I have seen jq; I have a customer working with it now, and similarly the JSON.parse() command in vRO achieves the same result.
The issue isn't so much parsing the JSON; the issue is why pull 100 pages, which is process heavy, when I can pull a single line or less, which is light.

Also, when working with many different services, it's handy having all the required values in a single location in a consistent format, instead of calling multiple APIs to get the same data in forms that are generally not consistent.

• v1gnesh, October 3, 2015

          I’m sure you’d have already spoken about the heavy calls to the vROps folk within VMware..

          The scenario you’ve explained seems very intense. I can only imagine how fun it must have been for you to get this going! 🙂
