
Toolkit - Cache Invalidation

This post is part of a series about the File System Toolkit - a custom content delivery API for SDL Tridion.

In the previous post, I explained the use of a Cache Factory. This post describes the very simple cache invalidation mechanism the Toolkit uses to make sure it does not serve stale values (at least not for long).

Given the nature of the Toolkit storage, i.e. files on a file system, it is very easy to check when an item was last published. The last-modified attribute of the JSON model file records the very moment that model was created or updated, which is also its last publish timestamp.
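To illustrate, reading that timestamp is a single call on java.io.File. The path below is hypothetical; in the Toolkit it would be resolved by the PathMapper:

File modelFile = new File("/data/models/example.json");

// lastModified() returns the publish timestamp in milliseconds since the epoch,
// or 0L if the file does not exist (e.g. the item was unpublished)
long lastPublished = modelFile.lastModified();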

When a CacheElement is placed in cache, it is also given a last-check timestamp, which is initially set to now. This last-check will be greater than the JSON file's last-modified attribute. The moment the JSON model is republished, the file's last-modified attribute becomes greater than the last-check. When this happens, we know the CacheElement is stale and we remove it from cache.
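For reference, here is a simplified sketch of what such a CacheElement wrapper could look like. Only the getPayload, getLastCheck and setLastCheck accessors are taken from the isStale code further down; the rest is an assumption:

public class CacheElement<T> {

    private final T payload;
    private long lastCheck;

    public CacheElement(T payload) {
        this.payload = payload;
        // initially set to 'now', the moment the element enters the cache
        this.lastCheck = System.currentTimeMillis();
    }

    public T getPayload() {
        return payload;
    }

    public long getLastCheck() {
        return lastCheck;
    }

    public void setLastCheck(long lastCheck) {
        this.lastCheck = lastCheck;
    }
}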

The stale logic also checks for the existence of the JSON model file. An unpublished (i.e. missing) file is immediately considered stale and, as such, removed from cache.

In order to make the cache invalidation check more performant and to minimize I/O operations against the file system, we only check for stale elements (a) on a get-from-cache operation and (b) when a certain interval has passed since the previous stale check. This interval is configurable in the Toolkit configuration under the property name cacheMonitorSeconds. The value represents the period of time during which it is acceptable to retrieve potentially stale objects. Typically this value should be low, e.g. 5, 10 or 30 seconds, or as long as you are comfortable with. A value of 0 performs the stale check on every call to the cache.get() method.

More information about possible values in the toolkit.properties file is available in the post Installation and Configuration.
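To illustrate where the check happens, below is a minimal sketch of what the get-from-cache path could look like. It assumes the Ehcache-style Cache and Element types suggested by the code; the actual Toolkit CacheFactory may differ. The staleness test itself is the real isStale method shown right after it:

public Object get(String key) {
    Element element = cache.get(key);
    if (element == null) {
        return null; // not cached
    }

    // The stale check happens only on get; isStale itself decides whether
    // enough time has passed to actually touch the file system
    if (isStale(element)) {
        cache.remove(key);
        return null; // forces the caller to rebuild the model from the JSON file
    }

    CacheElement<Object> cacheElement = (CacheElement<Object>) element.getObjectValue();
    return cacheElement.getPayload();
}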

private boolean isStale(Element element) {
    CacheElement<Object> cacheElement = (CacheElement<Object>) element.getObjectValue();
    long lastCheck = cacheElement.getLastCheck();
    long now = System.currentTimeMillis();

    // Only hit the file system if the configured interval has elapsed
    // since the previous stale check on this element
    if (now - lastCheck > cacheMonitorInterval) {
        cacheElement.setLastCheck(now);
        Object value = cacheElement.getPayload();

        if (value instanceof IdentifiableObject) {
            IdentifiableObject identifiableObject = (IdentifiableObject) value;
            TcmUri tcmUri = identifiableObject.getTcmUri();
            PathMapper pathMapper = new PathMapper();
            File file = new File(pathMapper.getModelAbsolutePath(tcmUri));

            // Stale if the JSON model was unpublished (file missing) or
            // republished (modified) after the last check
            return !file.exists() || file.lastModified() > lastCheck;
        }
    }

    return false;
}


