Deployer Extension - Handle Binary Unpublish

If you have ever written SDL Tridion Content Delivery Deployer extensions, you have probably noticed there is no easy way of intercepting the unpublish/undeploy of a binary. This blog post shows how to intercept such an action and execute your custom code on it.

The reason it is hard to intercept a binary undeploy is that the binary removal does not actually happen at the Deployer level; rather, it takes place at the storage level. So the extension point to use is not a Deployer extension, but a storage (File System or JPA) extension.

File System

The following code implements a storage extension that intercepts the removal of a binary from the File System Content Data Store (formerly known as the File System broker):

package com.tridion.storage.toolkit;

import com.tridion.broker.StorageException;
import com.tridion.storage.dao.BinaryContentDAO;
import com.tridion.storage.filesystem.FSBinaryContentDAO;
import org.springframework.context.annotation.Scope;
import org.springframework.stereotype.Component;

@Component("FSBinaryDAOExtension")
@Scope("prototype")
public class FSBinaryDAOExtension extends FSBinaryContentDAO implements BinaryContentDAO {

    @Override
    public void remove(int publicationId, int binaryId, String variantId,
            String relativePath) throws StorageException {
        // Your custom code goes here (the binary is still present in storage)

        super.remove(publicationId, binaryId, variantId, relativePath);

        // or here (the binary has been removed from storage)
    }
}

Notice that the package must start with com.tridion.storage. Without it, the class will not be found during storage module initialization.

Depending on your requirements, you can place your custom code before or after the call to super.remove.
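As a Tridion-free illustration of this before/after hook pattern, the sketch below uses a hypothetical BaseDAO in place of FSBinaryContentDAO; the class and method names are invented for the example:

```java
// Minimal sketch of the override pattern: BaseDAO stands in for
// FSBinaryContentDAO, and AuditingDAO plays the role of the extension.
class BaseDAO {
    public void remove(int publicationId, int binaryId) {
        System.out.println("removing binary " + binaryId
                + " from publication " + publicationId);
    }
}

class AuditingDAO extends BaseDAO {
    @Override
    public void remove(int publicationId, int binaryId) {
        // "before" hook: runs while the binary still exists in storage
        System.out.println("about to remove binary " + binaryId);

        super.remove(publicationId, binaryId);

        // "after" hook: runs only if the removal did not throw
        System.out.println("removed binary " + binaryId);
    }
}

public class Demo {
    public static void main(String[] args) {
        new AuditingDAO().remove(10, 123);
    }
}
```

Because the "after" hook only executes when super.remove returns normally, any cleanup that must happen even on failure belongs in a try/finally around the super call.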

Database Storage (JPA)

The code below implements the equivalent storage extension for when the Database (JPA) storage is used for Content Delivery:

package com.tridion.storage.toolkit;

import com.tridion.broker.StorageException;
import com.tridion.storage.dao.BinaryContentDAO;
import com.tridion.storage.persistence.JPABinaryContentDAO;
import org.springframework.context.annotation.Scope;
import org.springframework.stereotype.Component;

@Component("JPABinaryDAOExtension")
@Scope("prototype")
public class JPABinaryDAOExtension extends JPABinaryContentDAO implements BinaryContentDAO {

    @Override
    public void remove(int publicationId, int binaryId, String variantId,
            String relativePath) throws StorageException {
        // Your custom code goes here

        super.remove(publicationId, binaryId, variantId, relativePath);

        // or here
    }
}

Again, note that the package must start with com.tridion.storage.

Bundle XML Descriptor

We must configure the custom classes in a bundle XML descriptor file. Below is such a file, in my case called toolkit_dao_bundle.xml:

<?xml version="1.0" encoding="UTF-8"?>
<StorageDAOBundles>
    <!-- Filesystem mappings -->
    <StorageDAOBundle type="filesystem">
        <StorageDAO typeMapping="Binary" 
                    class="com.tridion.storage.toolkit.FSBinaryDAOExtension"/>
    </StorageDAOBundle>

    <!-- Java Persistence API mappings -->
    <StorageDAOBundle type="persistence">
        <StorageDAO typeMapping="Binary" 
                    class="com.tridion.storage.toolkit.JPABinaryDAOExtension"/>
    </StorageDAOBundle>
</StorageDAOBundles>

Place the bundle configuration XML file either on the class-path of your Deployer, or package it at the root of your extension JAR.

Final Configuration

The final configuration is made in the cd_storage_conf.xml of your Deployer. Add the following line inside the Global / Storages / StorageBindings node:

<Storages>
    <StorageBindings>
        <Bundle src="toolkit_dao_bundle.xml"/>
    </StorageBindings>
    ...
</Storages>

Restart the Deployer.

With all this configuration and code in place, your custom binary-handling code will be called when a binary is unpublished/undeployed. Remember that in Tridion, binaries are only unpublished when they are no longer referenced by any published Component.
