
Toolkit - Performance

This post is part of a series about the File System Toolkit - a custom content delivery API for SDL Tridion.

This post presents performance data captured, with and without caching, for each major piece of functionality: Linking, Component Presentation Assembler, Component Presentation Factory, Dynamic Content Queries, and the Model Factory.

The data was captured on a 2014 MacBook Pro 15", 16 GB RAM, 2.6 GHz Intel Core i7 running OS X El Capitan.

Test methodology: each test was run for 3 minutes and the total number of successful Toolkit API calls was measured. The number of calls per second was then computed for the 'with cache' and 'without cache' test runs, and a cache boost factor was calculated by dividing (number of API calls with cache) by (number of API calls without cache).
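The calls-per-second and boost-factor arithmetic above can be sketched as follows. This is a minimal illustration in Python (the Toolkit itself is a Java API); `api_call` stands in for any Toolkit operation:

```python
import time

def run_benchmark(api_call, duration_seconds=180):
    """Invoke api_call repeatedly for a fixed wall-clock duration
    and return the number of successful calls per second."""
    deadline = time.monotonic() + duration_seconds
    successful = 0
    while time.monotonic() < deadline:
        if api_call():
            successful += 1
    return successful / duration_seconds

def cache_boost(calls_per_sec_with_cache, calls_per_sec_without_cache):
    """Boost factor: cached throughput divided by uncached throughput."""
    return calls_per_sec_with_cache / calls_per_sec_without_cache

# Using the Model Factory figures reported in this post
# (~1.9m reads/s cached vs ~26k reads/s uncached):
print(round(cache_boost(1_900_000, 26_000)))  # 73
```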

Each cache test was executed 3 times, with different cache time-to-live (TTL) values: 1 second, 5 seconds, and 0 seconds (eternal cache, no expiration). The rationale was to see what impact different cache expiration/eviction settings have on overall performance.
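A minimal sketch of the TTL convention used in these tests, where a TTL of 0 means entries never expire (the semantics are assumed for illustration; this is not the Toolkit's actual cache implementation):

```python
import time

class TTLCache:
    """Toy cache with a time-to-live per entry. A TTL of 0 is
    treated as 'eternal' (no expiration), matching the convention
    used in the tests described above."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (value, inserted_at)

    def put(self, key, value):
        self.store[key] = (value, time.time())

    def get(self, key):
        entry = self.store.get(key)
        if entry is None:
            return None  # miss
        value, inserted_at = entry
        if self.ttl > 0 and time.time() - inserted_at > self.ttl:
            del self.store[key]  # expired: evict and report a miss
            return None
        return value
```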

Finally, the entire suite of tests was run twice in order to calculate averages.

Model Factory

Only model read operations were measured. The reasoning is that the Deployer extension is the only component that creates/writes the JSON model files, so writes are much less frequent than the reads performed constantly during normal operation of the Toolkit.
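To illustrate the read path being measured, here is a minimal sketch of reading a JSON model from the file system. The path layout and function name are assumptions for illustration only, not the Toolkit's actual API:

```python
import json
from pathlib import Path

def read_model(models_root, tcm_uri):
    """Read a JSON model previously written by the Deployer extension.
    Assumes one file per item, named after its TcmUri (hypothetical
    layout; the real Toolkit's storage scheme may differ)."""
    path = Path(models_root) / (tcm_uri.replace(":", "_") + ".json")
    return json.loads(path.read_text())
```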

Models read per second: 1.9m (with cache) and 26k (without cache)
Cache boost: 73x

Model Factory read performance

Link Factory

The test resolved component, page, and binary links. Each LinkFactory method call counts as one API call.

Links resolved per second: 383k (with cache) and 26k (without cache)
Cache boost: 15x

Link Factory resolve performance

Component Presentation Factory

The test executed the getComponentPresentation and getComponentPresentationWithHighestPriority factory calls. Each call counts as one API call. Each factory call makes one ModelFactory read call, so the times below also include the ModelFactory call duration.

CPFactory calls per second: 3.2m (with cache) and 24k (without cache)
Cache boost: 135x

Component Presentation Factory read performance

Component Presentation Assembler

The test executes the getComponentPresentation factory method, which counts as one API call. The call includes a CPFactory call, so the times below also include the CPFactory call duration.

CPAssembler calls per second: 55k (with cache) and 17k (without cache)
Cache boost: 3x
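The layering described above (Assembler wraps Factory, Factory wraps Model read) can be sketched as follows. The class and method names mirror the post, but the bodies are placeholders, so treat this as an assumed structure rather than the real implementation:

```python
class ModelFactory:
    def __init__(self, models):
        self.models = models  # stand-in for the JSON files on disk

    def get_model(self, tcm_uri):
        return self.models[tcm_uri]

class ComponentPresentationFactory:
    def __init__(self, model_factory):
        self.model_factory = model_factory

    def get_component_presentation(self, tcm_uri):
        # One factory call makes one ModelFactory read, so a timed
        # CPFactory call includes the model read duration.
        return self.model_factory.get_model(tcm_uri)

class ComponentPresentationAssembler:
    def __init__(self, cp_factory):
        self.cp_factory = cp_factory

    def get_component_presentation(self, tcm_uri):
        cp = self.cp_factory.get_component_presentation(tcm_uri)
        # Assembly (e.g. executing the CP's rendering logic) would
        # happen here, so Assembler timings include the CPFactory call.
        return cp
```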

Component Presentation Assembler performance

Dynamic Content Query

The dynamic content queries are not cached, so the 'with cache' and 'without cache' times are nearly identical. The tests were executed using different combinations of query criteria (custom metadata, schema, date ranges, pagination, and complex AND vs OR queries). Each query counted as one API call, and the number below represents the average across these queries. No models were created during the dynamic queries; the queries simply returned the TcmUris of the matching items.

Dynamic queries per second: 6.8k
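The kinds of criteria combinations mentioned above can be modeled as simple predicate combinators. This is a hypothetical, simplified sketch (the real Toolkit query API differs): each criterion is a predicate over an item's metadata, AND/OR combine them, and the query returns only TcmUris with pagination:

```python
def and_(*criteria):
    return lambda item: all(c(item) for c in criteria)

def or_(*criteria):
    return lambda item: any(c(item) for c in criteria)

def schema(schema_uri):
    return lambda item: item["schema"] == schema_uri

def custom_meta(key, value):
    return lambda item: item.get("meta", {}).get(key) == value

def query(items, criterion, page=0, page_size=10):
    """Return only the TcmUris of matching items (no models are
    created), with simple pagination."""
    matches = [i["tcmuri"] for i in items if criterion(i)]
    start = page * page_size
    return matches[start:start + page_size]
```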

Dynamic Query performance

