
Terminate Lifecycle Hook

This post is part of a bigger topic Autoscaling Publishers in AWS.

In a previous post, I mentioned the lifecycle termination hook for our Auto Scaling Group. In this post, we look at this hook in more detail and see how it is used to gracefully shut down the services on an instance that is about to be terminated.

As per the earlier post, we defined one termination hook on the Auto Scaling Group, named 'sdl_terminate_publisher':
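For reference, a roughly equivalent hook could be defined with 'boto3' as sketched below; this is only a sketch, and the heartbeat timeout and default result are assumed values, not necessarily the ones used in the actual setup.

import boto3

# Sketch only: register a termination lifecycle hook on the Auto Scaling Group.
# HeartbeatTimeout and DefaultResult are assumed values.
asgClient = boto3.client('autoscaling')
asgClient.put_lifecycle_hook(
    LifecycleHookName = 'sdl_terminate_publisher',
    AutoScalingGroupName = 'sdl_publisher-asg',
    LifecycleTransition = 'autoscaling:EC2_INSTANCE_TERMINATING',
    HeartbeatTimeout = 300,
    DefaultResult = 'ABANDON'
)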


Next, we use a CloudWatch Events rule to trigger a Lambda function that performs the graceful shutdown of the Publisher service on the instance and then releases the termination hook, so the instance can be terminated properly.

In CloudWatch, create a new Rule with the settings below (a scripted equivalent is sketched after the list):

  • Event Source: Event Pattern
  • Based on service: Auto Scaling
  • Event Type: Instance Launch and Terminate
  • Specific event: EC2 Instance-terminate Lifecycle Action
  • Specific group: sdl_publisher-asg
Target a Lambda function to be executed when this event triggers:
  • SDL_CleanUpPublisher
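
The rule above is created in the CloudWatch console, but a rough 'boto3' equivalent is sketched below. The rule name, region and account in the Lambda ARN are placeholders, not values from this setup.

import boto3
import json

# Sketch only: create the CloudWatch Events rule and point it at the Lambda function.
# The rule name and the Lambda ARN placeholders are assumptions.
eventsClient = boto3.client('events')

eventPattern = {
    'source': ['aws.autoscaling'],
    'detail-type': ['EC2 Instance-terminate Lifecycle Action'],
    'detail': { 'AutoScalingGroupName': ['sdl_publisher-asg'] }
}

eventsClient.put_rule(
    Name = 'sdl_terminate_publisher_rule',
    EventPattern = json.dumps(eventPattern)
)

eventsClient.put_targets(
    Rule = 'sdl_terminate_publisher_rule',
    Targets = [{
        'Id': 'SDL_CleanUpPublisher',
        'Arn': 'arn:aws:lambda:<region>:<account-id>:function:SDL_CleanUpPublisher'
    }]
)

Note that the Lambda function also needs a resource-based permission allowing events.amazonaws.com to invoke it; when the target is added from the console, that permission is created automatically.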



Lastly, we need the Lambda function that performs the actual graceful stopping of the Publisher service. This function uses the 'boto3' SSM client to send a Run Command to the instance, executing a cleanup script that was placed there in advance. The script stops the Publisher service, releases its license and removes the server from the AD domain.

The Lambda function waits until the script execution finishes and only then releases the lifecycle termination hook, which allows the instance to be terminated.

import boto3
import logging
import time

def lambda_handler(event, context):
    # The CloudWatch event carries the lifecycle action details
    message = event['detail']
    instanceId = str(message['EC2InstanceId'])

    # Run the cleanup script on the instance via SSM Run Command
    ssmClient = boto3.client('ssm')
    ssmCommand = ssmClient.send_command(
        InstanceIds = [ instanceId ],
        DocumentName = 'AWS-RunPowerShellScript',
        TimeoutSeconds = 270,
        Parameters = { 'commands': ['D:\\scripts\\cleanup.ps1'] },
        OutputS3BucketName = 'sdl-log',
        OutputS3KeyPrefix = 'CleanUpPublisher'
    )

    # Poll until the command leaves the Pending/InProgress states
    status = ssmCommand['Command']['Status']
    while status == 'Pending' or status == 'InProgress':
        time.sleep(3)
        status = ssmClient.list_commands(
            CommandId = ssmCommand['Command']['CommandId']
        )['Commands'][0]['Status']

    # Continue the termination only if the cleanup succeeded
    actionResult = "CONTINUE"
    if status != 'Success':
        actionResult = "ABANDON"

    # Release the lifecycle hook so the Auto Scaling Group can proceed
    asgClient = boto3.client('autoscaling')
    lifeCycleHook = message['LifecycleHookName']
    autoScalingGroup = message['AutoScalingGroupName']

    response = asgClient.complete_lifecycle_action(
        LifecycleHookName = lifeCycleHook,
        AutoScalingGroupName = autoScalingGroup,
        LifecycleActionResult = actionResult,
        InstanceId = instanceId
    )

    return None
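
If the cleanup can take longer than the heartbeat timeout configured on the hook, the polling loop could also send a heartbeat to keep the lifecycle action from timing out. A minimal sketch, assuming the same 'message' and 'instanceId' values as in the handler above (the helper name is hypothetical):

import boto3

def extendHeartbeat(message, instanceId):
    # Sketch only: extend the lifecycle hook's heartbeat while the cleanup script is still running
    asgClient = boto3.client('autoscaling')
    asgClient.record_lifecycle_action_heartbeat(
        LifecycleHookName = message['LifecycleHookName'],
        AutoScalingGroupName = message['AutoScalingGroupName'],
        InstanceId = instanceId
    )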


Below are the highlights of the 'cleanup.ps1' script invoked above:

# Stop the SDL publishing services
Stop-Service TcmPublisher
Stop-Service TCDTransportService

# Mark the license held by this host as available again
# ($dataSource, $user and $pwd are defined in the full script, not shown here)
$hostname = HostName
$database = "sdl_licenses"
$connectionString = "Server=$dataSource;uid=$user; pwd=$pwd;Database=$database;Integrated Security=False;"
$connection = New-Object System.Data.SqlClient.SqlConnection
$connection.ConnectionString = $connectionString
$connection.Open()
$query = "UPDATE licenses SET licenses.available = 'True' WHERE licenses.hostname = '$hostname'"
$command = $connection.CreateCommand()
$command.CommandText = $query
$adapter = New-Object System.Data.SqlClient.SqlDataAdapter $command
$dataset = New-Object System.Data.DataSet
$adapter.Fill($dataset) | out-null
$connection.Close()
echo "Unlocked"

# Remove the server from the AD domain
# ($username and $password are defined in the full script, not shown here)
$credential = New-Object System.Management.Automation.PSCredential($username,$password)
Remove-Computer -UnjoinDomainCredential $credential -ComputerName $hostname -Force -PassThru -Verbose
echo "Removed"

