A Donut Cache Implementation for DD4T

In a previous post, "A Simple Output Cache", I presented a Java Servlet filter that cached requests to DD4T pages and subsequently served them from cache. In this post I enhance the output cache idea to make it more flexible.

Namely, this post presents a donut cache solution that allows caching only parts of the page output. In Tridion terminology, the donut cache lets a Tridion developer specify, at Component Presentation level, which CPs to cache and which not to cache. To make it even more flexible, the Page Template itself can be configured to allow or prevent caching of the entire page.

The control over which PT/CT output is cached is specified in the Tridion CM by means of metadata on the respective template. To this end, I defined a metadata field "cache" and added it to the CT/PT Metadata Schema. The field accepts only two values, based on a predefined list -- "true" or "false". When "false", it indicates that the output of the template should not be cached by the Output Cache filter. The default value is "true".

Component Template Metadata field 'cache'
The configuration in web.xml is as follows:
<filter>
    <filter-name>OutputCacheFilter</filter-name>
    <filter-class>com.anchorage.web.filters.OutputCacheFilter</filter-class>
</filter>

<filter-mapping>
    <filter-name>OutputCacheFilter</filter-name>
    <url-pattern>*.jsp</url-pattern>
    <dispatcher>ERROR</dispatcher>
    <dispatcher>FORWARD</dispatcher>
    <dispatcher>INCLUDE</dispatcher>
    <dispatcher>REQUEST</dispatcher>
</filter-mapping>
Notice the filter mapping is no longer on *.html (otherwise it would cache the entire output of the page). Instead, we cache the output from the *.jsp views.
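To make this concrete, here is a hypothetical page view that dispatches each Component Presentation to its own *.jsp view. Because the filter is mapped to *.jsp with the INCLUDE dispatcher, it fires once for the page view and once per CP view. The model attribute and view paths below are assumptions for illustration, not taken from the original solution:

```jsp
<%-- Hypothetical page view. Each Component Presentation is dispatched to
     its own *.jsp view, so OutputCacheFilter intercepts every include
     separately and can cache or skip each CP on its own. --%>
<%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>
<html>
<body>
    <c:forEach var="cp" items="${PageModel.componentPresentations}">
        <jsp:include page="/WEB-INF/views/${cp.componentTemplate.viewName}.jsp" />
    </c:forEach>
</body>
</html>
```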

The code in the Output Cache filter is almost identical to the original, with a few minor additions and modifications, described below:
public void doFilter(ServletRequest servletRequest, ServletResponse servletResponse, FilterChain chain)
        throws IOException, ServletException {
    HttpServletRequest request = (HttpServletRequest) servletRequest;
    HttpServletResponse response = (HttpServletResponse) servletResponse;

    CachedResponse cachedResponse;
    boolean useCache = isCacheEnabled(request);
    if (useCache) {
        String key = getKey(request);
        CacheElement<CachedResponse> cacheElement = cacheProvider.loadFromLocalCache(key);

        if (cacheElement.isExpired()) {
            synchronized (cacheElement) {
                if (cacheElement.isExpired()) {
                    CharResponseWrapper responseWrapper = new CharResponseWrapper(response);
                    chain.doFilter(request, responseWrapper);
                    cachedResponse = new CachedResponse(responseWrapper);
                    cacheElement.setPayload(cachedResponse);
                    RepositoryLocalItem model = getModel(request);
                    if (model == null) {
                        cacheProvider.storeInItemCache(key, cacheElement);
                    } else {
                        TCMURI tcmuri = new TCMURI(model.getId());
                        cacheProvider.storeInItemCache(key, cacheElement, tcmuri.getPublicationId(), tcmuri.getItemId());
                    }
                } else {
                    cachedResponse = cacheElement.getPayload();
                }
            }
        } else {
            cachedResponse = cacheElement.getPayload();
        }
    } else { // no cache
        CharResponseWrapper responseWrapper = new CharResponseWrapper(response);
        chain.doFilter(request, responseWrapper);
        cachedResponse = new CachedResponse(responseWrapper);
    }
    sendCachedResponse(response, cachedResponse);
}
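The getKey method is not shown above; its key-construction logic can be sketched independently of the Servlet API. The class and method names below are hypothetical, assuming the key is derived from the request URI and query string:

```java
// Hypothetical sketch of the cache key construction. The actual getKey in
// the filter works on HttpServletRequest; here the inputs are reduced to
// plain strings for illustration.
public class CacheKeyUtil {

    // Builds a cache key from the request URI and an optional query string,
    // so that /news.jsp?page=1 and /news.jsp?page=2 are cached separately.
    public static String buildKey(String requestUri, String queryString) {
        StringBuilder key = new StringBuilder("Output:");
        key.append(requestUri);
        if (queryString != null && !queryString.isEmpty()) {
            key.append('?').append(queryString);
        }
        return key.toString();
    }
}
```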
The first change is the addition of the isCacheEnabled method. This method (presented below) checks whether the current request should be cached, by verifying the presence of a special request attribute. The Page or Component controller sets this attribute on the request, depending on the value of the template metadata field 'cache'.
private boolean isCacheEnabled(HttpServletRequest request) {
    Object noCacheAttribute = request.getAttribute("NO_CACHE");
    request.removeAttribute("NO_CACHE");
    // Cache unless the controller explicitly set NO_CACHE to Boolean.TRUE
    return !Boolean.TRUE.equals(noCacheAttribute);
}
The second change is in the getModel method. In contrast to the first Output Cache filter, the donut cache filter can be called for both Pages and Components. The getPage method therefore had to be replaced by getModel -- a more generic method that attempts to look up the model of a Component or a Page from the attributes of the current request.
private RepositoryLocalItem getModel(HttpServletRequest request) {
    Object model = request.getAttribute(ComponentUtils.COMPONENT_NAME);
    if (model == null) {
        model = request.getAttribute(Constants.PAGE_MODEL_KEY);
    }

    if (model instanceof RepositoryLocalItem) {
        return (RepositoryLocalItem) model;
    }

    return null;
}

The third and last change is the else branch of the "if (useCache)" statement. When caching is disabled, the caching logic is completely bypassed: the normal filter chain is invoked and the output is sent immediately in the response.

The other changes to the solution are in the Page and Component controllers. Their code had to be enhanced with logic that reads the value of the template metadata field 'cache' and sets the request attribute NO_CACHE when the field value is 'false'.
boolean useCache = getCacheStatus(pageModel);
if (useCache) {
    request.removeAttribute("NO_CACHE");
} else {
    request.setAttribute("NO_CACHE", true);
}

And the getCacheStatus method is the one that actually reads the metadata field:
public boolean getCacheStatus(final GenericPage page) {
    PageTemplate pageTemplate = page.getPageTemplate();
    Map<String, Field> metadata = pageTemplate.getMetadata();

    if (metadata != null && metadata.containsKey("cache")) {
        String useCache = (String) metadata.get("cache").getValues().get(0);
        if (StringUtils.isNotEmpty(useCache)) {
            return !useCache.equalsIgnoreCase("false");
        }
    }

    return true;
}
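The Component controller needs an equivalent check against the Component Template metadata. The lookup itself can be sketched independently of the DD4T types. This is a simplified sketch with the metadata flattened to plain strings; the real code reads Field objects from ComponentTemplate.getMetadata(), and the class name here is hypothetical:

```java
import java.util.List;
import java.util.Map;

// Simplified sketch of the metadata check, with DD4T's Field values
// flattened to Map<String, List<String>>. The real code works on
// componentTemplate.getMetadata() and Field.getValues().
public class CacheMetadataUtil {

    // Returns false only when the 'cache' metadata field is explicitly
    // set to "false"; in all other cases the output is cached.
    public static boolean getCacheStatus(Map<String, List<String>> metadata) {
        if (metadata != null && metadata.containsKey("cache")) {
            List<String> values = metadata.get("cache");
            if (values != null && !values.isEmpty()) {
                String useCache = values.get(0);
                if (useCache != null && !useCache.isEmpty()) {
                    return !useCache.equalsIgnoreCase("false");
                }
            }
        }
        return true;
    }
}
```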



