
Purge Publish Transaction Script

Ok, it's time to do some maintenance and performance tuning on your Tridion CMS. One of the things you should keep an eye on is the size of the Publishing Queue. Depending on how much publishing goes on daily, it can grow quite large over time.

Tridion comes with a very handy utility, the Purge Tool, which allows trimming the size of the Publishing Queue. I won't go into everything the tool can do; there is documentation for that.

The following little script is perfect to save as a Windows batch file and schedule to execute periodically. As written, it deletes Publish Transactions that completed more than one year before the current date.

@echo off
setlocal EnableDelayedExpansion

rem Build last year's date as YYYY/MM/DD
rem (assumes the US date format "Ddd MM/DD/YYYY" in %DATE%)
set /a LastYear=%DATE:~10,4%-1
set OneYearAgo=%LastYear%/%DATE:~4,5%
echo OneYearAgo=%OneYearAgo%

rem Recreate purge.xml from the template, replacing XXX with the cutoff date
type nul > "%TRIDION_HOME%bin\purge.xml"
for /f "tokens=* delims= " %%a in (purgeTemplate.xml) do (
    set str=%%a
    set str=!str:XXX=%OneYearAgo%!
    >> "%TRIDION_HOME%bin\purge.xml" echo !str!
)

cd /d "%TRIDION_HOME%bin"
PurgeTool.exe purge.xml /Purge
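The date arithmetic above simply keeps the current month and day and subtracts one from the year. As a minimal sketch of the same logic in Python (the function name is mine, not part of any Tridion tooling), including the one edge case the naive subtraction mishandles:

```python
from datetime import date

def one_year_ago(today: date) -> str:
    """Subtract one year, keeping month and day, like the batch script does."""
    try:
        cutoff = today.replace(year=today.year - 1)
    except ValueError:
        # today is Feb 29; the previous year has no Feb 29, so clamp to Feb 28
        cutoff = today.replace(year=today.year - 1, day=28)
    # Same YYYY/MM/DD shape the script builds for the Before attribute
    return cutoff.strftime("%Y/%m/%d")
```

For example, `one_year_ago(date(2012, 3, 15))` yields `"2011/03/15"`.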

For what it's worth, it is interesting to notice how last year's date is constructed from substrings of the %DATE% variable, and then how the find and replace against the template file takes place.

The batch script makes use of the following template file, which simply provides default values for the purge job. The cutoff date is given by the placeholder XXX:

        <Items Purge="false">
            <ProcessHistories Purge="false" Before=""/>
            <PublishTransactions Purge="true" Before="XXX">
                <State Purge="true">Success</State>
                <State Purge="true">Failed</State>
            </PublishTransactions>
            <Root Purge="False"/>
        </Items>
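The token replacement the batch for-loop performs is nothing more than a literal string substitution. A minimal Python equivalent (a hypothetical helper for illustration, not part of the Purge Tool):

```python
def render_purge_config(template: str, cutoff: str) -> str:
    # Equivalent of the batch for-loop: substitute the XXX placeholder
    # with the computed cutoff date, leaving everything else untouched.
    return template.replace("XXX", cutoff)

rendered = render_purge_config(
    '<PublishTransactions Purge="true" Before="XXX">', "2011/03/15"
)
print(rendered)  # <PublishTransactions Purge="true" Before="2011/03/15">
```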

To execute the script, simply call the batch file from a Windows command prompt or from a Scheduled Task. The script requires the environment variable TRIDION_HOME to be set. Obviously, run the script on the Tridion CMS server itself.
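As a sketch of the Scheduled Task setup, assuming the batch file is saved at C:\Scripts\purgeTransactions.bat (path and task name are placeholders of my choosing), a daily task could be registered from an elevated command prompt:

```
schtasks /Create /SC DAILY /ST 02:00 /TN "Tridion Purge Transactions" /TR "C:\Scripts\purgeTransactions.bat"
```

Scheduling it outside business hours keeps the purge from competing with regular publishing activity.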


Anonymous said…
Hi, the batch script works perfectly. But I want to purge all the transactions queued by a specific user. Is there some way to provide the user name, so that I can script it to delete all the successfully published transactions by that user?
