
Publishing Queue metrics in CloudWatch

This post is part of a bigger topic Autoscaling Publishers in AWS.

In order to define autoscaling for our servers, we need metrics on which to base the autoscaling logic, i.e. when to spin up new instances and when to terminate them. A good measure for this, in Tridion terms, is the size of the Publishing Queue. For Publisher autoscaling specifically, it's useful to look at the number of items in the Publishing Queue that are in the state "Waiting for Publish".

The approach is to read this metric from the Tridion Content Manager database and make it available in AWS, so that we can use it later. AWS CloudWatch provides a way to define scheduled or event-driven rules that trigger code execution. The code they execute reads the Publishing Queue and pushes the item counts into CloudWatch as custom metrics.

1. Define Lambda Function

This function contains the code executed by the CloudWatch rule. It reads the size of the Publishing Queue and pushes the counts as custom metrics into CloudWatch.

The languages available in AWS Lambda at the moment include .NET Core 1 and Python 2.7. I tried writing a nice .NET application that uses Tridion's CoreService client to read the Publishing Queue metrics I needed. Unfortunately, I had to give this up after running into the limitations of .NET Core 1 regarding connectivity to WCF services. Connecting to a service is apparently still a big deal in 2017 -- you need a ton of DLLs!

Instead, I wrote the Lambda code in Python 2.7, using direct DB access to read the metrics from the Tridion CM database. Definitely not the nicest approach, but it seems to be the only workable one. Also, since the DB is an RDS instance in the same VPC, I wasn't too concerned about security.

After a few iterations and optimizations, the code looks like this:

from os import getenv
import pymssql
import boto3

# CloudWatch client, created once per Lambda container and reused across invocations
client = boto3.client('cloudwatch')

def handler(event, context):

    # DB connection settings come from the Lambda environment variables
    server = getenv("PYMSSQL_SERVER")
    user = getenv("PYMSSQL_USERNAME")
    password = getenv("PYMSSQL_PASSWORD")
    database = getenv("PYMSSQL_DB")

    # Count the Publish Transactions grouped by state:
    # state 1 = Waiting for Publish, state 4 = Waiting for Deployment
    conn = pymssql.connect(server, user, password, database)
    cursor = conn.cursor()
    cursor.execute('select STATE, COUNT(*) from PUBLISH_TRANSACTIONS where STATE=1 or STATE=4 group by STATE')

    # Default both counts to 0, so a value is pushed even when the queue is empty
    metrics = {'Waiting for Publish': 0, 'Waiting for Deployment': 0}

    for row in cursor.fetchall():
        count = row[1]
        if row[0] == 1:
            metrics['Waiting for Publish'] = count
        elif row[0] == 4:
            metrics['Waiting for Deployment'] = count

    print 'Metrics', metrics

    # Push each count into CloudWatch as a custom metric under the 'SDL Web' namespace
    for metric in metrics:
        client.put_metric_data(
            Namespace='SDL Web',
            MetricData=[
                {
                    'MetricName': metric,
                    'Value': metrics[metric],
                    'Unit': 'Count',
                },
            ]
        )

    conn.close()

I used environment variables in order to keep the code portable and clean. These variables are specified in the AWS console.
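For completeness, the same variables can also be set from code. Below is a minimal sketch using boto3's Lambda client; the function name and the values are placeholders for illustration, not the actual ones from this setup:

import boto3

lambda_client = boto3.client('lambda')

# Hypothetical function name and DB settings -- replace with your own
lambda_client.update_function_configuration(
    FunctionName='publish-queue-metrics',
    Environment={
        'Variables': {
            'PYMSSQL_SERVER': 'tridion-cm.example.rds.amazonaws.com',
            'PYMSSQL_USERNAME': 'metrics_reader',
            'PYMSSQL_PASSWORD': 'secret',
            'PYMSSQL_DB': 'Tridion_cm'
        }
    }
)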

The code reads 2 values:
  • number of items in Waiting for Publish state;
  • number of items in Waiting for Deployment state;

Since I'm going to implement autoscaling for Deployers, I might as well read the relevant metrics in one go.

The code uses the pymssql library to interact with the CM database, and the boto3 CloudWatch client to push the custom metrics into CloudWatch.

2. Define Rule in CloudWatch

CloudWatch rules can be defined based on a time schedule (like a cron job) or based on events raised elsewhere.

In this situation, a scheduled rule made sense, so I created one that fires every minute.

You also associate a target with the rule, which specifies what happens when the rule fires. In my case, it executes the Lambda function created in step 1.


Give the rule a name and a description and save it.
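For those who prefer scripting over the console, the rule and its target can also be created with boto3. This is a minimal sketch; the rule name, function name, and ARN are illustrative assumptions:

import boto3

events = boto3.client('events')
lambda_client = boto3.client('lambda')

# Hypothetical ARN -- replace with the ARN of the Lambda function from step 1
function_arn = 'arn:aws:lambda:eu-west-1:123456789012:function:publish-queue-metrics'

# Scheduled rule that fires every minute
rule_arn = events.put_rule(
    Name='publish-queue-metrics-every-minute',
    ScheduleExpression='rate(1 minute)',
    Description='Pushes Publishing Queue counts into CloudWatch',
    State='ENABLED'
)['RuleArn']

# Allow CloudWatch Events to invoke the function
lambda_client.add_permission(
    FunctionName='publish-queue-metrics',
    StatementId='cloudwatch-events-invoke',
    Action='lambda:InvokeFunction',
    Principal='events.amazonaws.com',
    SourceArn=rule_arn
)

# Attach the Lambda function as the rule's target
events.put_targets(
    Rule='publish-queue-metrics-every-minute',
    Targets=[{'Id': '1', 'Arn': function_arn}]
)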


3. Visualize Data in Dashboard

One can inspect the new custom metrics in CloudWatch and use them to create alarms (presented in a later post) or place them in a dashboard.
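Such a dashboard can also be defined programmatically. Below is a minimal sketch with boto3; the dashboard name, region, and layout are assumptions for illustration:

import json
import boto3

cloudwatch = boto3.client('cloudwatch')

# Hypothetical dashboard showing both queue metrics on one graph
cloudwatch.put_dashboard(
    DashboardName='publishing-queue',
    DashboardBody=json.dumps({
        'widgets': [{
            'type': 'metric',
            'x': 0, 'y': 0, 'width': 12, 'height': 6,
            'properties': {
                'title': 'Publishing Queue',
                'region': 'eu-west-1',
                'metrics': [
                    ['SDL Web', 'Waiting for Publish'],
                    ['SDL Web', 'Waiting for Deployment']
                ],
                'period': 60,
                'stat': 'Maximum'
            }
        }]
    })
)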


