
Posts

Showing posts from July, 2017

Using File System Queues for Out-Scaled Deployers

When a Deployer Receiver receives a transport package, it notifies the Deployer Workers that there is 'work' for them to do. This notification can take the form of files on a shared File System. The Deployer Workers monitor the file system and, upon noticing a change, start deploying/undeploying the package. In order to set up this File System notification, we must first create a file system shared across the Deployer Receiver and all Deployer Workers. For that, have a look at an earlier post, Using Elastic File System for Out-Scaled Deployers. Once the EFS is in place, for example under folder /efs01, we can configure the Deployers to use this shared file system in the file deployer-conf.xml, as shown below: <Queues> <Queue Default="true" Verbs="Content" Adapter="FileSystem" Id="ContentQueue"> <Property Name="Destination" Value="/efs01/deployer-queu
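
For reference, a minimal sketch of what the completed Queue entry could look like once the truncated excerpt above is filled in. The folder /efs01/deployer-queues/content is a placeholder path on the EFS mount, and any additional queues for other verbs supported by your Deployer version would follow the same pattern:

    <Queues>
        <!-- Default queue handling Content verbs; notifications are exchanged as files on the shared EFS volume -->
        <Queue Default="true" Verbs="Content" Adapter="FileSystem" Id="ContentQueue">
            <!-- Placeholder folder under the shared /efs01 mount; all Receiver and Worker nodes must see the same path -->
            <Property Name="Destination" Value="/efs01/deployer-queues/content"/>
        </Queue>
    </Queues>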

Using ElastiCache (Redis) for Out-Scaled Deployers

In a scaled-out scenario for Content Delivery Deployers, it is possible to set up an AWS ElastiCache Redis DB as the Binary Storage medium for incoming transport packages. The Deployer Receiver writes these transport package zip files into the Binary Storage Redis instance. It is then up to the Deployer Workers to read these zip files as they deploy/undeploy the content. Below, we present the configuration for an AWS ElastiCache Redis instance acting as the storage medium for transport packages. Start by simply creating an ElastiCache Redis instance in the AWS console. This whole step might take you 5 minutes :) AWS will give you the hostname where the Redis DB is available and the port, usually 6379. In your deployer-conf.xml, set up this Redis instance using the BinaryStorage node format as below: <BinaryStorage Id="RedisStorage" Adapter="RedisBlobStorage"> <Property Name="Host" Value="10.10.2.232"/> <Prop
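
A minimal sketch of how that BinaryStorage node might look once completed is shown below. The Host value is the example IP from the excerpt (in practice, use the ElastiCache endpoint hostname AWS gives you), and the Port property carries the default 6379 mentioned above; treat the exact property name for the port as an assumption to verify against your Deployer version:

    <BinaryStorage Id="RedisStorage" Adapter="RedisBlobStorage">
        <!-- ElastiCache Redis endpoint; replace with your own hostname or IP -->
        <Property Name="Host" Value="10.10.2.232"/>
        <!-- Default Redis port mentioned in the post; the "Port" property name is assumed -->
        <Property Name="Port" Value="6379"/>
    </BinaryStorage>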