
Using Amazon SQS for Out-Scaled Deployers

When a Deployer Receiver receives a transport package, it notifies the Deployer Workers that there is 'work' for them to do. This notification takes the form of JMS messages sent over some kind of queuing mechanism. This post describes using Amazon Simple Queue Service (SQS) to send these notifications. The Deployer Workers receive these messages from SQS and start deploying/undeploying the package.
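
The Deployer handles this messaging internally, but purely for illustration, receiving such a notification from SQS through the JMS API (here via the Amazon SQS Java Messaging Library) could look roughly like the sketch below. The queue name, region, and client setup are assumptions, not the Deployer's actual code:

import com.amazon.sqs.javamessaging.ProviderConfiguration;
import com.amazon.sqs.javamessaging.SQSConnectionFactory;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;

import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;

public class SqsNotificationListenerSketch {
    public static void main(String[] args) throws Exception {
        // JMS connection factory backed by Amazon SQS
        SQSConnectionFactory factory = new SQSConnectionFactory(
                new ProviderConfiguration(),
                AmazonSQSClientBuilder.standard().withRegion("eu-west-1").build());

        Connection connection = factory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // 'mihai-content' is the content notification queue used in the configuration below
        MessageConsumer consumer = session.createConsumer(session.createQueue("mihai-content"));
        connection.start();

        // Wait briefly for a 'work available' notification, then hand it off to a worker
        Message notification = consumer.receive(200);
        if (notification != null) {
            System.out.println("Received deployment notification: " + notification.getJMSMessageID());
        }
        connection.close();
    }
}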

In order to set up this notification system, we must first create the SQS queues and then configure them across the Deployer Receiver and all Deployer Workers.

Start by creating Standard Queues with all default properties. We need three queues: commit, content, and prepare.
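
For illustration, the three Standard Queues could also be created programmatically with the AWS SDK for Java. This is a sketch, and the region is an assumption; the queue names match the configuration used later:

import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;

public class CreateDeployerQueuesSketch {
    public static void main(String[] args) {
        AmazonSQS sqs = AmazonSQSClientBuilder.standard()
                .withRegion("eu-west-1")
                .build();

        // Standard queues created with all default properties
        for (String name : new String[] {"mihai-commit", "mihai-content", "mihai-prepare"}) {
            String queueUrl = sqs.createQueue(name).getQueueUrl();
            System.out.println("Created queue: " + queueUrl);
        }
    }
}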



Once the SQS queues are in place, we can configure the Deployers to use them in the file deployer-config.xml. Amazon gives us the queue URLs. We specify the URL base separately, and then simply name the queues individually, as per below:

<Queues>
    <!-- Each queue maps one or more deployment verbs to an SQS queue identified by Id -->
    <Queue Default="true" Verbs="Content" Adapter="JMS" Id="mihai-content">
        <Property Name="Workers" Value="16"/>
    </Queue>
    <Queue Verbs="Commit,Rollback" Adapter="JMS" Id="mihai-commit">
        <Property Name="Workers" Value="16"/>
    </Queue>
    <Queue Verbs="Prepare" Adapter="JMS" Id="mihai-prepare">
        <Property Name="Workers" Value="16"/>
    </Queue>

    <!-- JMS adapter backed by Amazon SQS; the queue Ids above are appended to the JMSUri base -->
    <Adapter Id="JMS">
        <Property Name="JMSConnectionFactoryBuilderClass"
                  Value="com.sdl.delivery.spring.configuration.jms.AmazonSQSConnectionFactoryBuilder"/>
        <Property Name="JMSUri" Value="https://sqs.eu-west-1.amazonaws.com/692321"/>
        <Property Name="Username" Value="username"/>
        <Property Name="Password" Value="password"/>
        <Property Name="ReceiveTimeout" Value="200"/>
    </Adapter>
</Queues>

The Username and Password properties hold the AWS account credentials (the access key and secret key); they can be found in the user's security credentials settings in AWS.

The Workers property specifies the number of worker threads for each queue. Values around 10-20 worker threads perform best: performance degrades with lower values, and there is no significant gain with higher values.


