
JMS Cache Channel Setup

In my current project, we are implementing the Tridion Cache Channel Service using JMS. Never having installed it before, I used to regard this as a little bit of voodoo.

I started doing my due diligence research on the topic, only to find out that actually there are quite a few resources out there on JMS in general (doh!), as well as in conjunction with Tridion specifically.

There are some great blog posts on JMS consumption and setup from Bruce Snyder, as well as a concise but excellent post from Julian on enabling JMS for a Tridion Deployer.
[Diagram: the Deployer sends 'flush' messages to the Cache Channel Service, which relays them as 'invalidate' messages to all connected web applications.]
In a classic configuration (i.e. one using RMI), the Cache Channel Service (CCS) is running as a separate process, most likely a small stand-alone Java application.

In a JMS configuration, the CCS is in fact replaced by a messaging server, such as ActiveMQ. The overall logic is still the same; only the communication is handled a bit differently. Namely, the Deployer contains the JMS producer logic: a JMS connection factory that knows about the messaging server it connects to. Upon publishing/unpublishing, the Deployer produces JMS messages and sends them to the messaging server.
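To make the producer side concrete, the sketch below builds the kind of invalidation payload a Deployer might send for a deployment transaction. The message format ("flush:" plus item URIs) and all names here are hypothetical, purely for illustration; the real Deployer serializes its own event objects and hands them to a JMS MessageProducer.

```java
import java.util.List;

// Hypothetical sketch of the producer side: turn the items touched by a
// deployment transaction into a single invalidation message. The format
// used here is made up for illustration, not Tridion's actual wire format.
public class InvalidationMessageSketch {

    // Build one message body listing every item updated in the transaction.
    static String buildFlushMessage(List<String> updatedItemUris) {
        return "flush:" + String.join(",", updatedItemUris);
    }

    public static void main(String[] args) {
        String msg = buildFlushMessage(List.of("tcm:1-234", "tcm:1-567"));
        // In a real setup this payload would be sent via a JMS
        // MessageProducer bound to the configured topic on the broker.
        System.out.println(msg); // flush:tcm:1-234,tcm:1-567
    }
}
```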

Next, the Web-Applications register themselves with the messaging server using JMS listeners and tell it that they are interested in receiving messages on a certain topic or queue.

Upon publish/unpublish activity, the Deployer produces JMS messages that it sends to the messaging server, indicating it should relay them to all listening web-apps. The messages contain notifications on the items that have been updated during the deployment transaction.

Upon receiving such notifications, the web-apps must react by flushing those items from their internal caches, if present.

Configure Deployer with JMS Cache Channel

This is one of the easiest parts one would have to do. It involves modifying the configuration of your Tridion Deployer to instruct it to use JMS rather than RMI when sending the cache flush/invalidate messages.

My assumption is that you already have a working, configured Deployer and that you are only about to add the JMS Cache Channel configuration to it.

First, you'll need to update your cd_storage_conf.xml. As per Julian's post, make sure that a) the ObjectCache is enabled, and b) the RemoteSynchronization element is configured.

<RemoteSynchronization>
    <Connector Class="com.tridion.cache.JMSCacheChannelConnector" Topic="Tridion"> 
        <JndiContext>
            <Property Name="java.naming.factory.initial"
                Value="org.apache.activemq.jndi.ActiveMQInitialContextFactory"/>
            <Property Name="java.naming.provider.url"
                Value="tcp://localhost:61616?soTimeout=30000"/>
            <Property Name="topic.Tridion" Value="TridionCCS"/>
        </JndiContext>
    </Connector>
</RemoteSynchronization>

Pay close attention to the Topic attribute value "Tridion" and to the property Name attribute value "topic.Tridion". You must prefix the property name with "topic." in order to make the lookup work.

The physical topic name TridionCCS and the logical topic name Tridion are just arbitrary names, so you can choose whatever you feel is appropriate.
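To make the "topic." prefix convention concrete, the sketch below builds the JNDI environment that the XML above describes. With ActiveMQ's JNDI support, an entry named "topic.<logicalName>" registers a topic under <logicalName> whose physical broker name is the entry's value. The helper method here is hypothetical; in a real application the environment would be passed to new InitialContext(env), and ctx.lookup("Tridion") would return a javax.jms.Topic named "TridionCCS".

```java
import java.util.Hashtable;

// Sketch of the JNDI environment produced by the RemoteSynchronization
// config above. physicalTopicName() is a hypothetical helper that mimics
// how ActiveMQ's JNDI context resolves a logical topic name.
public class JndiTopicSketch {

    static String physicalTopicName(Hashtable<String, String> env, String logicalName) {
        return env.get("topic." + logicalName);
    }

    public static void main(String[] args) {
        Hashtable<String, String> env = new Hashtable<>();
        env.put("java.naming.factory.initial",
                "org.apache.activemq.jndi.ActiveMQInitialContextFactory");
        env.put("java.naming.provider.url", "tcp://localhost:61616?soTimeout=30000");
        env.put("topic.Tridion", "TridionCCS");

        // Logical name "Tridion" (the Connector's Topic attribute) maps to
        // the physical broker topic "TridionCCS".
        System.out.println(physicalTopicName(env, "Tridion")); // TridionCCS
    }
}
```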

In case your messaging server is ActiveMQ, you need to specify the URL it is available at. In the configuration snippet above, this is tcp://localhost:61616, with a 30-second socket timeout appended as soTimeout=30000.

Next, as per Raimond's comment below, set the cached attribute to true in the item type mappings.

<ItemTypes defaultStorageId="brokerdb" cached="true">
    <!-- your mappings go here -->
</ItemTypes>

Finally, you would also need the JAR files necessary to make the connection to the messaging server and to handle the communication. Again, if your JMS server is ActiveMQ (I used version 5.10.0), these are the files (make them available in your Deployer's class-path):
  • activemq-client-5.10.0.jar
  • geronimo-j2ee-management_1.1_spec-1.0.1.jar
  • geronimo-jms_1.1_spec-1.1.1.jar
  • hawtbuf-1.10.jar
  • jms-1.1.jar
The Deployer configuration is now complete, but the setup as a whole is not ready to use yet. In my next post, we'll go through installing and configuring ActiveMQ as your messaging server.


Comments

Raimond said…
One important thing to add: in the Storage configuration for the Deployer you *must* set cached="true" for the ItemTypes.
