A Donut Cache Implementation for DD4T

In a previous post, "A Simple Output Cache", I presented a Java Servlet filter that cached requests to DD4T pages and subsequently served them from cache. In this post I enhance the output cache idea to make it more flexible.

Namely, this post presents a donut cache solution that allows caching parts of the page output. In Tridion terminology, the donut cache allows a Tridion developer to specify, at Component Presentation level, which CPs to cache and which not to cache. To make it even more flexible, the Page Template itself can be configured to allow its output to be cached entirely or not.

Control over which PT/CT is cached is specified in the Tridion CM by means of metadata on the respective template. To that end, I defined a metadata field "cache" and added it to the CT/PT Metadata Schema. The field can take only two values, based on a predefined list: "true" or "false". When "false", it indicates that the output of the template should not be cached by the Output Cache filter. The default value is "true".

Component Template Metadata field 'cache'
The configurations in web.xml are as follows:
<filter>
    <filter-name>OutputCacheFilter</filter-name>
    <filter-class>com.anchorage.web.filters.OutputCacheFilter</filter-class>
</filter>

<filter-mapping>
    <filter-name>OutputCacheFilter</filter-name>
    <url-pattern>*.jsp</url-pattern>
    <dispatcher>ERROR</dispatcher>
    <dispatcher>FORWARD</dispatcher>
    <dispatcher>INCLUDE</dispatcher>
    <dispatcher>REQUEST</dispatcher>
</filter-mapping>
Notice the filter is no longer mapped to *.html (that would cache the entire output of the page). Instead, we cache the output of the individual *.jsp views.

The code of the Output Cache filter is almost identical to the original, with a few minor changes, discussed below the listing:
public void doFilter(ServletRequest servletRequest, ServletResponse servletResponse, FilterChain chain)
        throws IOException, ServletException {
    HttpServletRequest request = (HttpServletRequest) servletRequest;
    HttpServletResponse response = (HttpServletResponse) servletResponse;

    CachedResponse cachedResponse;
    boolean useCache = isCacheEnabled(request);
    if (useCache) {
        String key = getKey(request);
        CacheElement<CachedResponse> cacheElement = cacheProvider.loadFromLocalCache(key);

        if (cacheElement.isExpired()) {
            synchronized (cacheElement) {
                if (cacheElement.isExpired()) {
                    CharResponseWrapper responseWrapper = new CharResponseWrapper(response);
                    chain.doFilter(request, responseWrapper);
                    cachedResponse = new CachedResponse(responseWrapper);
                    cacheElement.setPayload(cachedResponse);
                    RepositoryLocalItem model = getModel(request);
                    if (model == null) {
                        cacheProvider.storeInItemCache(key, cacheElement);
                    } else {
                        TCMURI tcmuri = new TCMURI(model.getId());
                        cacheProvider.storeInItemCache(key, cacheElement, tcmuri.getPublicationId(), tcmuri.getItemId());
                    }
                } else {
                    cachedResponse = cacheElement.getPayload();
                }
            }
        } else {
            cachedResponse = cacheElement.getPayload();
        }
    } else { // no cache
        CharResponseWrapper responseWrapper = new CharResponseWrapper(response);
        chain.doFilter(request, responseWrapper);
        cachedResponse = new CachedResponse(responseWrapper);
    }
    sendCachedResponse(response, cachedResponse);
}
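The getKey helper is carried over from the original filter and is not shown here. A minimal sketch, assuming the key is simply derived from the request URI plus the query string (the class and method names below are illustrative only, not the actual implementation):

```java
public class CacheKeys {

    // Hypothetical key derivation: the request URI plus query string
    // identifies one cacheable view rendering.
    public static String getKey(String requestUri, String queryString) {
        if (queryString == null || queryString.isEmpty()) {
            return requestUri;
        }
        return requestUri + "?" + queryString;
    }
}
```

In the filter this would be invoked as getKey(request.getRequestURI(), request.getQueryString()); note that for INCLUDE-dispatched JSPs the original URI is exposed via the javax.servlet.include.request_uri request attribute, which may be the more accurate source.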
The first change is the new isCacheEnabled method. This method (presented below) checks whether the current request should be cached, by verifying the presence of a special request attribute. The Page or Component controller sets this attribute on the request, depending on the value of the template metadata field 'cache'.
private boolean isCacheEnabled(HttpServletRequest request) {
    Object noCacheAttribute = request.getAttribute("NO_CACHE");
    request.removeAttribute("NO_CACHE");
    boolean result = noCacheAttribute == null || !noCacheAttribute.equals(Boolean.TRUE);
    return result;
}
The second change is in the getModel method. In contrast to the first Output Cache filter, the donut cache filter can be called for both Pages and Components. Method getPage therefore had to be replaced by getModel -- a more generic method that attempts to look up the model of a Component or a Page from the attributes of the current request.
private RepositoryLocalItem getModel(HttpServletRequest request) {
    Object model = request.getAttribute(ComponentUtils.COMPONENT_NAME);
    if (model == null) {
        model = request.getAttribute(Constants.PAGE_MODEL_KEY);
    }

    if (model instanceof RepositoryLocalItem) {
        return (RepositoryLocalItem) model;
    }

    return null;
}

The third and last change is the else branch of the "if (useCache)" statement. When caching is disabled, the caching logic is bypassed completely: the normal filter chain is invoked and the output is sent in the response immediately.
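Both branches construct a CachedResponse from the response wrapper. That class is not listed in this post; a minimal sketch, assuming it merely snapshots the wrapper's buffered character output and content type (the fields and constructor shape are hypothetical -- in the real filter it would be built from the CharResponseWrapper):

```java
public class CachedResponse {

    private final String content;
    private final String contentType;

    // Hypothetical snapshot of one rendered view, taken after
    // chain.doFilter has written into the response wrapper.
    public CachedResponse(String content, String contentType) {
        this.content = content;
        this.contentType = contentType;
    }

    public String getContent() {
        return content;
    }

    public String getContentType() {
        return contentType;
    }
}
```

Keeping the snapshot immutable is what makes it safe to store in the cache and replay concurrently to multiple requests.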

The remaining changes are in the Page and Component controllers. Their code had to be enhanced with logic that reads the value of the template metadata field 'cache' and sets the request attribute NO_CACHE when the field value is 'false'.
boolean useCache = getCacheStatus(pageModel);
if (useCache) {
    request.removeAttribute("NO_CACHE");
} else {
    request.setAttribute("NO_CACHE", true);
}

And the method getCacheStatus is the one actually looking at the metadata field:
public boolean getCacheStatus(final GenericPage page) {
    PageTemplate pageTemplate = page.getPageTemplate();
    Map<String, Field> metadata = pageTemplate.getMetadata();

    if (metadata != null && metadata.containsKey("cache")) {
        String useCache = (String) metadata.get("cache").getValues().get(0);
        if (StringUtils.isNotEmpty(useCache)) {
            return !"false".equalsIgnoreCase(useCache);
        }
    }

    return true;
}
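The net effect is that only an explicit "false" disables caching; a missing field, an empty value, or any other value leaves caching on. That decision can be reduced to a small pure helper (a hypothetical name, mirroring the logic of getCacheStatus above):

```java
public class CacheFlag {

    // Mirrors getCacheStatus: only the literal value "false"
    // (case-insensitive) disables caching; null, empty, or any
    // other value means the template output is cacheable.
    public static boolean isCacheable(String fieldValue) {
        if (fieldValue == null || fieldValue.isEmpty()) {
            return true;
        }
        return !"false".equalsIgnoreCase(fieldValue);
    }
}
```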



