
Force Success in Application Services Deployment

I have had the fun of playing with Application Services, previously known as Application Director or AppD. There have been lots of improvements over the last 12 months; one of the greatest for me is the resume feature delivered in 6.1. This was a godsend, especially when you have four-hour deployments with multi-tiered, multi-external-service deployments and one fails three hours in because the database service was down. Resume saved so much time in developing these deployments.

The next biggest gripe, which has not been addressed, is the inability to do an assisted teardown on a failed deployment. Almost all my deployments configure firewalls, load balancers, account creation, password creation and storage, database creation, and IP address allocation, all of which need to be cleaned up on a teardown. When a deployment fails three-quarters of the way through and needs to be torn down, only a quick teardown is available, meaning a lot of manual work to clean up what has been created.

This started me on the path of playing with the Application Services database, a PostgreSQL database located on the Application Services appliance.

The problem…
I have a large application deployment that failed two hours in and cannot be resumed. It has created many artifacts that will need to be manually torn down, because we cannot run an assisted teardown on a failed Application Services deployment. This manual teardown is fiddly: multiple domains, multiple firewalls, databases, allocated IP addresses and more. Not only does this take time, we also have to trust it was torn down completely and nothing was missed.

So I thought: why not force AppD to think it was successful…

Connecting to this database from an external source requires some configuration changes. The details provided in THIS KB will help you configure the local PostgreSQL database. A side note that had me stumped for a while: /home/darwin/pgsql/data/postgresql.conf has two listen_addresses entries, and both must be changed.
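As a sketch of those changes (file paths as on the appliance above; the address and subnet below are example placeholders you should replace with your own):

```
# /home/darwin/pgsql/data/postgresql.conf
# Both listen_addresses entries must be changed, or the server
# keeps listening on localhost only.
listen_addresses = '*'

# /home/darwin/pgsql/data/pg_hba.conf
# Allow md5 password authentication from the admin workstation's
# subnet (192.168.1.0/24 is an example placeholder).
host    all    all    192.168.1.0/24    md5
```

Restart the PostgreSQL service afterwards for the changes to take effect.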

Background on how it fits together in the database:

Each attempt to deploy, or to resume a deployment, creates a deployment task. The image below is an example from the Application Services deployment summary view; the deployment id highlighted in the deployment name is the key to the deployment table in the database.
appd deployment

Each deployment task is related to its own instances of deployment nodes, which have one or more deployment node instances. Each deployment node instance can have many deployment node tasks. This portion of the execution plan for the first of the above deployment tasks shows these relationships:
AppD_Deployment_Nodes_Display

Each of the objects described above has its own table in the database, and they can be linked together by performing joins on the appropriate id values. Some fields that are of particular interest to us are highlighted:
AppD_Deployment_Task_Relationships
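As a rough illustration, those joins could look like the following. The foreign-key column names (deployment_id, deployment_task_id, deployment_node_id) are assumptions based on the table naming pattern and may differ in your schema version:

```sql
-- Walk from a deployment's tasks down to its node tasks.
-- Join-key column names are assumed and may differ per version.
SELECT dt.id  AS deployment_task_id,
       dnt.id AS node_task_id,
       dnt.run_state_type_id
FROM   deployment_task      dt
JOIN   deployment_node      dn  ON dn.deployment_task_id  = dt.id
JOIN   deployment_node_task dnt ON dnt.deployment_node_id = dn.id
WHERE  dt.deployment_id = 9999;  -- the id highlighted in the deployment name
```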

States:
The state of tasks and deployments is represented by a code. The tables below, which are available to view in the database, list these codes.

deployment_state_type

State | Name | Description
1 | Task Scheduled | The deployment task has been scheduled.
2 | Task In Progress | The deployment task is in progress.
3 | Deployment Success | The deployment is in a successful state.
4 | Deployment With Issues | One or more deployment tasks for the deployment have failed.
5 | Deployment Torn Down | The deployment is torn down.

deployment_task_state_type

State | Name | Description
1 | Scheduled | The deployment task has been scheduled.
2 | In Progress | The deployment task is in progress.
3 | Success | The deployment task completed successfully.
4 | Failed | The deployment task failed.
5 | Stopped | The deployment task stopped.
6 | Stopping | The deployment task is being stopped.

run_state_type:

State | Name | Description
1 | Unknown | Unknown.
2 | Not Started | The task has not started.
3 | Starting | The task is starting.
4 | Running | The task is running.
5 | Rebooting | The VM is rebooting.
6 | Completed | The task has completed.
7 | Stopped | The task is stopped.
8 | Stopping | The task is stopping.
9 | Waiting | The task is waiting on other task(s) to complete.
10 | Failed | The task has failed.
11 | Did Not Run | The task did not run.
12 | Initializing | The task is initializing.
13 | Provisioning VM | The VM is being provisioned.
14 | VM Provisioned | The VM is provisioned.
15 | Starting VM | The VM is starting.
16 | VM Started | The VM is started.
17 | Stopping VM | The VM is stopping.
18 | VM Stopped | The VM is stopped.
19 | Deprovisioning VM | The VM is being deprovisioned.
20 | VM Deprovisioned | The VM is deprovisioned.
21 | Error | Error.
22 | Unsupported | Unsupported.
23 | Not started on server | Not started on server.

So now we have a failed deployment that we need to mark as a success, which means changing its state. We could just mark the whole deployment as a success.
Using a tool like pgAdmin we can connect in, change the deployment_state_type_id in the deployment_task table to 3, and save it. This is the bare minimum to allow an assisted teardown. (Note: you will need to destroy any external service VMs manually if they were used in the deployment.)
pgadmin

Below is an SQL command that could be run, where the id is the task id of the deployment.

UPDATE deployment_task
SET run_state_type_id = 6 -- Completed
   ,deployment_task_state_type_id = 3 -- Success
WHERE id = 9999;

This will mark the deployment as complete and allow us to run an assisted teardown. Woohoo!
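If you do not know the task id for the WHERE clause, it can be looked up first. A hedged example, assuming deployment_task carries a deployment_id column as in the relationships shown earlier (1234 stands in for the deployment id from the deployment name):

```sql
-- Find the tasks for a deployment; the most recent task
-- is normally the failed attempt.
SELECT id, deployment_task_state_type_id, run_state_type_id
FROM   deployment_task
WHERE  deployment_id = 1234
ORDER  BY id DESC;
```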

But say we had a deployment failure right at the end because of something trivial: everything is fine, but the deployment failed on the very last step. We can mark not only the deployment as a success, but also the failed task.
We can run this command to update the node task:


UPDATE deployment_node_task
SET run_state_type_id = 6 -- Completed
   ,log_description = 'Log is missing because the task was manually marked as completed'
WHERE id = 999999;
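To find which node tasks actually failed, so that only those rows are touched, you can filter on the failed run state. 10 is the Failed code from the run_state_type table above:

```sql
-- List failed node tasks; 10 = Failed in run_state_type.
SELECT id, run_state_type_id, log_description
FROM   deployment_node_task
WHERE  run_state_type_id = 10;
```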

Now when looking at the deployment overview, everything will show as a success.

Cheers
