
Toolkit - Tricks with Memory Byte Buffer

This post is part of a series about the File System Toolkit - a custom content delivery API for SDL Tridion.

In the previous post, Writing My Own Database Engine, I briefly mentioned the use of MappedByteBuffer from Java NIO, which provides fast access to a file by mapping its content into memory. This post goes into more detail about some tricks that came up with that implementation.

There is an issue with MappedByteBuffer: once it is created by calling the FileChannel.map method, it cannot be unmapped, closed, or discarded. The byte buffer exists until it is garbage collected.

From the JavaDoc:

A mapping, once established, is not dependent upon the file channel that was used to create it. Closing the channel, in particular, has no effect upon the validity of the mapping.

A mapped byte buffer and the file mapping that it represents remain valid until the buffer itself is garbage-collected.
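The quoted behavior is easy to demonstrate: the mapping stays readable even after the channel that created it is closed. A small self-contained sketch (class and file names are mine, for illustration):

```java
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MappingOutlivesChannel {

    // Maps the whole file, closes the channel, then reads from the mapping,
    // which remains valid until the buffer is garbage collected.
    static String readAfterClose(Path file) throws Exception {
        MappedByteBuffer buffer;
        try (FileChannel channel = FileChannel.open(file, StandardOpenOption.READ)) {
            buffer = channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
        } // channel is closed here

        byte[] bytes = new byte[buffer.capacity()];
        buffer.get(bytes);
        return new String(bytes);
    }

    public static void main(String[] args) throws Exception {
        Path file = Files.createTempFile("mapping-demo", ".bin");
        Files.write(file, "hello".getBytes());
        System.out.println(readAfterClose(file)); // prints "hello"
    }
}
```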


The issue affects Java implementations on Windows, where the mapping keeps portions of the mapped file locked and unavailable for modification. The index file must be modifiable at any time, because publish/unpublish activities must be able to update the index.

Attempting to modify the index file on Windows results in a FileNotFoundException (The requested operation cannot be performed on a file with a user-mapped section open).

To work around this issue, I resorted to a big hack: a factory of my own that creates and destroys memory byte buffers.

public enum MemoryBufferFactory {

    INSTANCE;
    MemoryBufferFactory() {
    }

    public MemoryBuffer getBuffer(FileChannel fileChannel, FileChannel.MapMode mode) throws IOException {
        return new MappedMemoryBuffer(fileChannel, mode);
    }
}

MemoryBuffer is an interface that defines the operations on a wrapped byte buffer:

public interface MemoryBuffer {
    int capacity();
    byte get(int position);
    void close();
}
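A nice side effect of hiding the buffer behind this interface is that a non-mapped implementation can be dropped in, for tests or for platforms where mapping causes locking problems. A minimal heap-backed sketch; HeapMemoryBuffer is my illustrative name, not part of the toolkit:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

interface MemoryBuffer {
    int capacity();
    byte get(int position);
    void close();
}

// Heap-backed implementation: reads the whole file once and holds no file
// handle, so the underlying file stays modifiable at all times.
class HeapMemoryBuffer implements MemoryBuffer {

    private final byte[] bytes;

    HeapMemoryBuffer(Path file) throws IOException {
        this.bytes = Files.readAllBytes(file);
    }

    public int capacity() {
        return bytes.length;
    }

    public byte get(int position) {
        return bytes[position];
    }

    public void close() {
        // nothing to release
    }
}
```

The trade-off is memory: the whole file is loaded on the heap, which is fine for small indexes but defeats the purpose of memory mapping for large ones.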

The MappedMemoryBuffer implements the MemoryBuffer interface and wraps a java.nio.MappedByteBuffer object. The only trick it performs is in its close() method, where it uses reflection to call the unsupported, private unmap method declared on the FileChannel implementation class (sun.nio.ch.FileChannelImpl).

public class MappedMemoryBuffer implements MemoryBuffer {

    private final MappedByteBuffer buffer;
    private final FileChannel fileChannel;

    public MappedMemoryBuffer(FileChannel fileChannel, FileChannel.MapMode mode) throws IOException {
        this.fileChannel = fileChannel;
        buffer = fileChannel.map(mode, 0, fileChannel.size());
    }

    public int capacity() {
        return buffer.capacity();
    }

    public byte get(int position) {
        return buffer.get(position);
    }

    public void close() {
        try {
            // sun.nio.ch.FileChannelImpl declares a private static unmap(MappedByteBuffer)
            Class<?> clazz = fileChannel.getClass();
            Method method = clazz.getDeclaredMethod("unmap", MappedByteBuffer.class);
            method.setAccessible(true);
            method.invoke(null, buffer);
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException("Unable to unmap buffer", e);
        }
    }
}
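A word of caution: on newer JDKs (9+), reflecting into sun.nio.ch is restricted by module encapsulation, and on JDK 16+ the setAccessible call above fails outright. A commonly used alternative on JDK 9+ is sun.misc.Unsafe.invokeCleaner, which releases the mapping directly. A sketch under that assumption (requires JDK 9 or later; class and file names are mine):

```java
import java.lang.reflect.Field;
import java.lang.reflect.Method;
import java.nio.ByteBuffer;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class CleanerUnmap {

    // Unmaps a MappedByteBuffer via sun.misc.Unsafe.invokeCleaner (JDK 9+).
    // The buffer must not be touched after this call.
    static void unmap(MappedByteBuffer buffer) throws ReflectiveOperationException {
        Class<?> unsafeClass = Class.forName("sun.misc.Unsafe");
        Field theUnsafe = unsafeClass.getDeclaredField("theUnsafe");
        theUnsafe.setAccessible(true);
        Object unsafe = theUnsafe.get(null);
        Method invokeCleaner = unsafeClass.getMethod("invokeCleaner", ByteBuffer.class);
        invokeCleaner.invoke(unsafe, buffer);
    }

    public static void main(String[] args) throws Exception {
        Path file = Files.createTempFile("cleaner-demo", ".bin");
        Files.write(file, new byte[]{42});

        MappedByteBuffer buffer;
        try (FileChannel channel = FileChannel.open(file, StandardOpenOption.READ)) {
            buffer = channel.map(FileChannel.MapMode.READ_ONLY, 0, 1);
        }

        unmap(buffer);      // releases the mapping immediately
        Files.delete(file); // now succeeds on Windows as well
    }
}
```

Either way, this remains unsupported internal API; the only fully supported option is to let the garbage collector reclaim the buffer.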


