
Executing in Memory a JSP DCP from Database

This is a follow-up to my previous post Executing a JSP DCP Stored in the Database, where I presented a proof-of-concept for executing a string representing a JSP DCP stored in the Content Delivery Database.

I wasn't too happy with the previous solution, because it was writing the compiled .class to the file system and there was no caching at all (the .class would be recompiled with every request). I can do better than that! :-)

So the new version compiles everything into an in-memory byte array, then executes the in-memory class. The compiled class is placed into an LRU (least recently used) cache in order to optimize performance.
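For illustration, an LRU cache can be sketched in a few lines using LinkedHashMap in access order; this is only a hypothetical stand-in -- the project's actual cache implementation may differ.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of an LRU cache built on LinkedHashMap in access
// order; the project's actual ClassLoaderCache may be implemented differently.
public class LruCache<K, V> extends LinkedHashMap<K, V> {

    private final int maxEntries;

    public LruCache(int maxEntries) {
        super(16, 0.75f, true); // accessOrder = true gives LRU iteration order
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Evict the least recently used entry once capacity is exceeded
        return size() > maxEntries;
    }
}
```

With a capacity of two, inserting a third entry evicts whichever of the first two was accessed least recently.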

The entire example is available as a Google Code project; below are just the highlights:

Custom JSP Tag

This is the calling code that initiates the execution of the DCP.

public void doTag() throws JspException {
    try {
        DcpClassLoader loader = new DcpClassLoader(componentUri, componentTemplateUri);
        Executable dcpExecutable = loader.getExecutable();
        String result = dcpExecutable.execute();

        JspContext context = getJspContext();
        JspWriter out = context.getOut();
        out.write(result);
    } catch (Exception e) {
        throw new JspException(e);
    }
}


DcpClassLoader

This class extends ClassLoader and is responsible for finding the dynamic in-memory class for each DCP. It uses an LRU cache to store the recently created dynamic DCP classes. It also takes into account the lastPublicationDate of the DCP, which is used to invalidate the cache.

This class also instantiates the retrieved class (an implementation of the Executable interface).

If the DCP class is not found in the cache, this is the entry point into the in-memory compiler (method createExecutorClass).
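The CacheEntry type referenced by findClass below is not shown in the snippets; a minimal sketch consistent with its usage (the getClazz and getLastModified accessors, plus the publication-date check) might look like this -- the isValid helper is hypothetical and named here only for illustration.

```java
// Hypothetical sketch of the CacheEntry value type implied by findClass():
// a compiled class plus the timestamp at which it was created.
public class CacheEntry {

    private final Class<?> clazz;
    private final long lastModified;

    public CacheEntry(Class<?> clazz, long lastModified) {
        this.clazz = clazz;
        this.lastModified = lastModified;
    }

    public Class<?> getClazz() {
        return clazz;
    }

    public long getLastModified() {
        return lastModified;
    }

    // Illustrative helper: the entry is stale if the DCP was republished
    // after this entry was cached.
    public boolean isValid(long dcpLastPublished) {
        return lastModified > dcpLastPublished;
    }
}
```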

public DcpClassLoader(String componentUri, String componentTemplateUri) throws ParseException {
    cp = getComponentPresentation(componentUri, componentTemplateUri);
    lastModified = getLastPublicationDate(cp);
    className = String.format("Dcp_%d_%d_%d", cp.getPublicationId(), cp.getComponentId(),
            cp.getComponentTemplateId());
}

protected Class<?> findClass(String name) throws ClassNotFoundException {
    ClassLoaderCache cache = ClassLoaderCache.getInstance();
    CacheEntry cacheEntry = cache.get(name);
    if (cacheEntry != null && cacheEntry.getLastModified() > lastModified) {
        return cacheEntry.getClazz();
    }

    Class<?> clazz = createExecutorClass(name);
    cache.put(name, clazz);

    return clazz;
}

public Executable getExecutable() throws ClassNotFoundException {
    try {
        Class<?> clazz = loadClass(className);
        return (Executable) clazz.newInstance();
    } catch (Exception e) {
        log.error("Exception occurred", e);
    }

    return null;
}

private Class<?> createExecutorClass(String name) throws ClassFormatError {
    DcpExecutor executor = new DcpExecutor(name, cp.getContent());
    byte[] classBytes = executor.compile();

    return defineClass(name, classBytes, 0, classBytes.length);
}


DcpExecutor

This class puts the DCP content into a Java class context (the sourceSkeleton), compiles it in memory, and returns the byte code of the compiled class.

The sourceSkeleton in fact implements the Executable interface (which defines only a public String execute() method). In order to compile the dynamic Java source, the resource mitza.dynamic.compile.Executable needs to be made available to the compiler (on the classpath). Therefore, the compiler accepts an Iterable<Class> containing the classpath elements.
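Based on the description above, the Executable contract is a single-method interface (the package declaration mitza.dynamic.compile is omitted here for brevity):

```java
// The Executable contract: every generated DCP class implements this
// single-method interface (declared in package mitza.dynamic.compile).
public interface Executable {

    String execute();
}
```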

private static final String sourceSkeleton =
        "import java.io.ByteArrayOutputStream;\r\n" +
        "import java.io.PrintStream;\r\n" +
        "import mitza.dynamic.compile.Executable;\r\n" +
        "\r\n" +
        "public class %s implements Executable {\r\n" +
        "\r\n" +
        "    public String execute() {\r\n" +
        "        ByteArrayOutputStream output = new ByteArrayOutputStream();\r\n" +
        "        System.setOut(new PrintStream(output));\r\n" +
        "        %s\r\n" +
        "        return output.toString();\r\n" +
        "    }\r\n" +
        "}\r\n";

private final String className;
private final String javaSource;

public DcpExecutor(String className, String javaSource) {
    this.className = className;
    this.javaSource = String.format(sourceSkeleton, className, javaSource);
}

public byte[] compile() {
    JavaMemoryCompiler compiler = new JavaMemoryCompiler();
    JavaFileObject javaObject = new JavaMemoryObject(className, javaSource);
    List<Class<? extends Object>> classPath = new ArrayList<Class<? extends Object>>();
    classPath.add(Executable.class);

    return compiler.compile(javaObject, classPath);
}


JavaMemoryObject

This class serves two purposes:
  • it holds the source Java code that will be compiled -- the compiler reads the source from the member javaSource via getCharContent;
  • it holds the compiled Java byte code (in a ByteArrayOutputStream) -- the compiler calls the openOutputStream method and writes the byte code into it.

public class JavaMemoryObject extends SimpleJavaFileObject {

    private final String javaSource;
    private final ByteArrayOutputStream bos = new ByteArrayOutputStream();

    public JavaMemoryObject(String className, String javaSource) {
        super(getUri(className, Kind.SOURCE), Kind.SOURCE);
        this.javaSource = javaSource;
    }

    public JavaMemoryObject(String className, Kind kind) {
        super(getUri(className, kind), kind);
        this.javaSource = null;
    }

    public byte[] getBytes() {
        return bos.toByteArray();
    }

    @Override
    public CharSequence getCharContent(boolean ignoreEncodingErrors) throws IOException {
        return javaSource;
    }

    @Override
    public OutputStream openOutputStream() throws IOException {
        return bos;
    }

    private static URI getUri(String className, Kind kind) {
        return URI.create("string:///" + className.replace('.', '/') + kind.extension);
    }
}


JavaMemoryCompiler

This class performs the actual compilation. It uses the standard Java Compiler API obtained from ToolProvider.

It uses a special JavaMemoryManager (detailed below) to trigger the writing of the resulting byte-code into a JavaMemoryObject.

Note also the conversion of the classpath elements from Iterable<Class> to a String containing a semicolon (;) delimited sequence of .class or .jar file locations, which contain the actual resources named in the Iterable.

public byte[] compile(JavaFileObject javaObject, Iterable<Class<?>> classPaths) {
    JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
    JavaMemoryManager manager = new JavaMemoryManager(compiler.getStandardFileManager(null, null, null));
    Iterable<JavaFileObject> javaObjects = Arrays.asList(javaObject);

    List<String> options = new ArrayList<String>();
    if (classPaths != null) {
        String classPath = buildClassPath(classPaths);
        options.add("-classpath");
        options.add(classPath);
    }

    CompilationTask task = compiler.getTask(null, manager, null, options, null, javaObjects);
    if (!task.call()) { // compile error
        throw new RuntimeException("Compilation error");
    }

    return manager.getBytes();
}

private String buildClassPath(Iterable<Class<?>> classPaths) {
    Set<String> pathSet = new HashSet<String>();
    for (Class<?> clazz : classPaths) {
        pathSet.add(getCompilationPath(clazz));
    }

    StringBuilder classPathBuilder = new StringBuilder();
    for (String path : pathSet) {
        classPathBuilder.append(path).append(';');
    }

    return classPathBuilder.toString();
}

private String getCompilationPath(Class<?> clazz) {
    String className = clazz.getName();
    className = className.replace('.', '/') + ".class";
    URL classUrl = getClass().getClassLoader().getResource(className);
    String filePath = classUrl.getPath();

    try {
        int exclamationIndex = filePath.indexOf("!");
        if (exclamationIndex >= 0) { // is jar
            filePath = filePath.substring(0, exclamationIndex);
            classUrl = new URL(filePath);
        } else { // is class
            int extensionIndex = filePath.lastIndexOf(className);
            filePath = filePath.substring(0, extensionIndex);
            classUrl = new URL(classUrl.getProtocol(), classUrl.getHost(), filePath);
        }

        File classFile = new File(classUrl.toURI());
        String path = classFile.getPath();

        return path;
    } catch (Exception e) {
        log.error("Exception occurred", e);
    }

    return null;
}


JavaMemoryManager

This class is a wrapper around the JavaMemoryObject. The compiler calls this class's getJavaFileForOutput method when it is about to write the generated byte code into the .class file. Instead of a real file, this implementation supplies a JavaMemoryObject backed by a byte array stream. Therefore, the entire compiled byte array is written into this stream, which can be accessed later.

Hence, all the compilation happens in-memory -- without any files being written to the file system.

public class JavaMemoryManager extends ForwardingJavaFileManager<StandardJavaFileManager> {

    private JavaMemoryObject javaMemoryObject;

    protected JavaMemoryManager(StandardJavaFileManager fileManager) {
        super(fileManager);
    }

    @Override
    public JavaFileObject getJavaFileForOutput(Location location, String className, Kind kind, FileObject sibling)
            throws IOException {
        javaMemoryObject = new JavaMemoryObject(className, kind);

        return javaMemoryObject;
    }

    public byte[] getBytes() {
        return javaMemoryObject.getBytes();
    }
}
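To see the whole pipeline end to end, here is a self-contained sketch (the class names are mine, not from the project) that compiles a source string into byte code in memory with the Java Compiler API, then defines, loads, and invokes the resulting class. It requires a JDK at runtime, since ToolProvider.getSystemJavaCompiler() returns null on a plain JRE.

```java
import javax.tools.*;
import java.io.ByteArrayOutputStream;
import java.io.OutputStream;
import java.net.URI;
import java.util.Arrays;

// Self-contained demonstration of the in-memory compile/load cycle
// described in this post (names here are illustrative, not the project's).
public class InMemoryCompileDemo {

    // Java source held in a String instead of a .java file
    static class MemorySource extends SimpleJavaFileObject {
        final String code;
        MemorySource(String className, String code) {
            super(URI.create("string:///" + className.replace('.', '/') + Kind.SOURCE.extension), Kind.SOURCE);
            this.code = code;
        }
        @Override
        public CharSequence getCharContent(boolean ignoreEncodingErrors) {
            return code;
        }
    }

    // Compiled byte code captured in a byte array instead of a .class file
    static class MemoryClass extends SimpleJavaFileObject {
        final ByteArrayOutputStream bos = new ByteArrayOutputStream();
        MemoryClass(String className, Kind kind) {
            super(URI.create("mem:///" + className.replace('.', '/') + kind.extension), kind);
        }
        @Override
        public OutputStream openOutputStream() {
            return bos;
        }
    }

    public static void main(String[] args) throws Exception {
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
        final MemoryClass[] output = new MemoryClass[1];

        // Redirect the compiler's .class output into the in-memory object
        JavaFileManager manager = new ForwardingJavaFileManager<StandardJavaFileManager>(
                compiler.getStandardFileManager(null, null, null)) {
            @Override
            public JavaFileObject getJavaFileForOutput(JavaFileManager.Location location,
                    String className, JavaFileObject.Kind kind, FileObject sibling) {
                output[0] = new MemoryClass(className, kind);
                return output[0];
            }
        };

        String source = "public class Hello { public String greet() { return \"hello\"; } }";
        boolean ok = compiler.getTask(null, manager, null, null, null,
                Arrays.asList(new MemorySource("Hello", source))).call();
        if (!ok) {
            throw new IllegalStateException("Compilation error");
        }

        // Define and instantiate the freshly compiled class
        final byte[] bytes = output[0].bos.toByteArray();
        ClassLoader loader = new ClassLoader() {
            @Override
            protected Class<?> findClass(String name) {
                return defineClass(name, bytes, 0, bytes.length);
            }
        };
        Object hello = loader.loadClass("Hello").newInstance();
        System.out.println(hello.getClass().getMethod("greet").invoke(hello));
    }
}
```

Running main compiles the Hello source, loads it through the anonymous class loader, and prints the result of greet(), with no file ever touching the disk.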

