
Executing in Memory a JSP DCP from Database

This is a follow-up to my previous post, Executing a JSP DCP Stored in the Database, where I presented a proof of concept for executing a string representing a JSP DCP stored in the Content Delivery Database.

I wasn't too happy with the previous solution, because it was writing the compiled .class to the file system and there was no caching at all (the class was recompiled on every request). I can do better than that! :-)

So the new version compiles everything into an in-memory byte array, then executes the in-memory class. The compiled class is placed into an LRU cache in order to optimize performance.

The entire example is available as a Google Code project; below are just the highlights:

Custom JSP Tag

This is the calling code that initiates the execution of the DCP.

public void doTag() throws JspException {
    try {
        DcpClassLoader loader = new DcpClassLoader(componentUri, componentTemplateUri);
        Executable dcpExecutable = loader.getExecutable();
        String result = dcpExecutable.execute();

        JspContext context = getJspContext();
        JspWriter out = context.getOut();
        out.write(result);
    } catch (Exception e) {
        throw new JspException(e);
    }
}
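
The componentUri and componentTemplateUri values used above are not declared in this snippet; they presumably arrive as tag attributes. A minimal sketch of the surrounding tag class (the class name is an assumption of mine) could look like this:

import javax.servlet.jsp.tagext.SimpleTagSupport;

// Hypothetical tag class skeleton; only the attribute setters are shown here
public class DcpTag extends SimpleTagSupport {

    private String componentUri;
    private String componentTemplateUri;

    // Setters called by the JSP container for the tag attributes
    public void setComponentUri(String componentUri) {
        this.componentUri = componentUri;
    }

    public void setComponentTemplateUri(String componentTemplateUri) {
        this.componentTemplateUri = componentTemplateUri;
    }

    // doTag() as listed above
}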

DcpClassLoader

The class extends ClassLoader and is responsible for finding the dynamic in-memory class for each DCP. It uses an LRU cache to store the recently created dynamic DCP classes. It also takes into account the lastPublicationDate of the DCP, which is used to invalidate the cache.

This class also instantiates the retrieved class (an implementation of the Executable interface).

If the DCP class is not found in the cache, the class loader becomes the entry point into the in-memory compiler (method createExecutorClass).

public DcpClassLoader(String componentUri, String componentTemplateUri) throws ParseException {
    super(DcpClassLoader.class.getClassLoader());

    cp = getComponentPresentation(componentUri, componentTemplateUri);
    lastModified = getLastPublicationDate(cp);
    className = String.format("Dcp_%d_%d_%d", cp.getPublicationId(), cp.getComponentId(),
            cp.getComponentTemplateId());
}

@Override
protected Class<?> findClass(String name) throws ClassNotFoundException {
    ClassLoaderCache cache = ClassLoaderCache.getInstance();
    CacheEntry cacheEntry = cache.get(name);
    if (cacheEntry != null && cacheEntry.getLastModified() > lastModified) {
        return cacheEntry.getClazz();
    }

    Class<?> clazz = createExecutorClass(name);
    cache.put(name, clazz);

    return clazz;
}

public Executable getExecutable() throws ClassNotFoundException {
    try {
        Class<?> clazz = loadClass(className);
        return (Executable) clazz.newInstance();
    } catch (Exception e) {
        log.error("Exception occurred", e);
    }

    return null;
}

private Class<?> createExecutorClass(String name) throws ClassFormatError {
    DcpExecutor executor = new DcpExecutor(name, cp.getContent());
    byte[] classBytes = executor.compile();

    return defineClass(name, classBytes, 0, classBytes.length);
}
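
The ClassLoaderCache and CacheEntry classes used in findClass are not listed here. A minimal LRU sketch based on an access-ordered LinkedHashMap, consistent with the calls above (the capacity, the nesting of CacheEntry and the synchronization are assumptions on my side), could look like this:

import java.util.LinkedHashMap;
import java.util.Map;

public class ClassLoaderCache {

    private static final int MAX_ENTRIES = 64; // assumed capacity
    private static final ClassLoaderCache INSTANCE = new ClassLoaderCache();

    // Access-ordered map; the least recently used entry is evicted first
    private final Map<String, CacheEntry> cache =
            new LinkedHashMap<String, CacheEntry>(MAX_ENTRIES, 0.75f, true) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<String, CacheEntry> eldest) {
                    return size() > MAX_ENTRIES;
                }
            };

    private ClassLoaderCache() {
    }

    public static ClassLoaderCache getInstance() {
        return INSTANCE;
    }

    public synchronized CacheEntry get(String name) {
        return cache.get(name);
    }

    public synchronized void put(String name, Class<?> clazz) {
        // Record the caching moment; findClass compares it to the DCP lastPublicationDate
        cache.put(name, new CacheEntry(clazz, System.currentTimeMillis()));
    }

    public static class CacheEntry {

        private final Class<?> clazz;
        private final long lastModified;

        public CacheEntry(Class<?> clazz, long lastModified) {
            this.clazz = clazz;
            this.lastModified = lastModified;
        }

        public Class<?> getClazz() {
            return clazz;
        }

        public long getLastModified() {
            return lastModified;
        }
    }
}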

DcpExecutor

This class puts the DCP content into a Java class context (the sourceSkeleton), compiles it in memory, and returns the byte code of the compiled class.

The sourceSkeleton in fact implements the Executable interface (which defines only a public String execute() method). In order to compile the dynamic Java source, the resource mitza.dynamic.compile.Executable needs to be available to the compiler (on the classpath). Therefore, the compiler accepts an Iterable<Class> containing the classpath elements.
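
For reference, a minimal sketch of that interface, consistent with the description above:

package mitza.dynamic.compile;

// Contract implemented by every dynamically generated DCP class
public interface Executable {

    String execute();
}

The sourceSkeleton itself looks like this: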

private static final String sourceSkeleton =
        "import java.io.ByteArrayOutputStream;\r\n" +
        "import java.io.PrintStream;\r\n" +
        "import mitza.dynamic.compile.Executable;\r\n" +

        "public class %s implements Executable {\r\n" +

        "    public String execute() {\r\n" +
        "        ByteArrayOutputStream output = new ByteArrayOutputStream();\r\n" +
        "        System.setOut(new PrintStream(output));\r\n" +
        "        %s\r\n" +
        "        return output.toString();\r\n" +
        "    }\r\n" +
        "}";

private final String className;
private final String javaSource;

public DcpExecutor(String className, String javaSource) {
    this.className = className;
    this.javaSource = String.format(sourceSkeleton, className, javaSource);
}

public byte[] compile() {
    JavaMemoryCompiler compiler = new JavaMemoryCompiler();
    JavaFileObject javaObject = new JavaMemoryObject(className, javaSource);
    List<Class<? extends Object>> classPath = new ArrayList<Class<? extends Object>>();
    classPath.add(Executable.class);

    return compiler.compile(javaObject, classPath);
}
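
For illustration, assuming a DCP whose content is the single statement System.out.println("Hello world"); and a class name of Dcp_1_2_3 (both values are made up here), the javaSource produced by String.format would look roughly like this:

import java.io.ByteArrayOutputStream;
import java.io.PrintStream;
import mitza.dynamic.compile.Executable;

public class Dcp_1_2_3 implements Executable {

    public String execute() {
        ByteArrayOutputStream output = new ByteArrayOutputStream();
        System.setOut(new PrintStream(output));
        System.out.println("Hello world");
        return output.toString();
    }
}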

JavaMemoryObject

This class serves two purposes:
  • it holds the Java source code to be compiled. I pass this object to the compiler, which reads the source from the javaSource member;
  • it holds the compiled Java byte code (in a ByteArrayOutputStream). The compiler calls the openOutputStream method and writes the byte code into it.

public class JavaMemoryObject extends SimpleJavaFileObject {

    private final String javaSource;
    private final ByteArrayOutputStream bos = new ByteArrayOutputStream();

    public JavaMemoryObject(String className, String javaSource) {
        super(getUri(className, Kind.SOURCE), Kind.SOURCE);
        this.javaSource = javaSource;
    }

    public JavaMemoryObject(String className, Kind kind) {
        super(getUri(className, kind), kind);
        this.javaSource = null;
    }

    public byte[] getBytes() {
        return bos.toByteArray();
    }

    @Override
    public CharSequence getCharContent(boolean ignoreEncodingErrors) throws IOException {
        return javaSource;
    }

    @Override
    public OutputStream openOutputStream() throws IOException {
        return bos;
    }

    private static URI getUri(String className, Kind kind) {
        return URI.create("string:///" + className.replace('.', '/') + kind.extension);
    }
}

JavaMemoryCompiler

This class performs the actual compilation. It uses the standard Java Compiler API obtained via ToolProvider. Note that ToolProvider.getSystemJavaCompiler() returns null when only a JRE is present, so the machine running this code needs a full JDK.

It uses a special JavaMemoryManager (detailed below) to trigger the writing of the resulting byte code into a JavaMemoryObject.

Note also the conversion of classpath elements from Iterable<Class> to a String containing a semicolon (;) delimited sequence of class folders or .jar files that contain the actual classes named in the Iterable (the semicolon is the Windows path separator; File.pathSeparator would be the portable choice).

public byte[] compile(JavaFileObject javaObject, Iterable<Class<?>> classPaths) {
    JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
    JavaMemoryManager manager = new JavaMemoryManager(compiler.getStandardFileManager(null, null, null));
    Iterable<JavaFileObject> javaObjects = Arrays.asList(javaObject);

    List<String> options = new ArrayList<String>();
    if (classPaths != null) {
        options.add("-cp");
        String classPath = buildClassPath(classPaths);
        options.add(classPath);
    }

    CompilationTask task = compiler.getTask(null, manager, null, options, null, javaObjects);
    if (!task.call()) { // compile error
        throw new RuntimeException("Compilation error");
    }

    return manager.getBytes();
}

private String buildClassPath(Iterable<Class<?>> classPaths) {
    Set<String> pathSet = new HashSet<String>();
    for (Class<?> clazz : classPaths) {
        pathSet.add(getCompilationPath(clazz));
    }
    StringBuilder classPathBuilder = new StringBuilder();
    for (String path : pathSet) {
        classPathBuilder.append(path).append(";");
    }

    return classPathBuilder.toString();
}

private String getCompilationPath(Class<?> clazz) {
    String className = clazz.getName();
    className = className.replace('.', '/') + ".class";
    URL classUrl = getClass().getClassLoader().getResource(className);
    String filePath = classUrl.getPath();

    try {
        int exclamationIndex = filePath.indexOf("!");
        if (exclamationIndex >= 0) { // is jar
            filePath = filePath.substring(0, exclamationIndex);
            classUrl = new URL(filePath);
        } else { // is class
            int extensionIndex = filePath.lastIndexOf(className);
            filePath = filePath.substring(0, extensionIndex);
            classUrl = new URL(classUrl.getProtocol(), classUrl.getHost(), filePath);
        }

        File classFile = new File(classUrl.toURI());
        String path = classFile.getPath();

        return path;
    } catch (Exception e) {
        log.error("Exception occurred", e);
    }

    return null;
}

JavaMemoryManager

This class is a wrapper around the JavaMemoryObject. The compiler calls its getJavaFileForOutput method when it is about to write the generated byte code into a .class file. Instead of a real file, this implementation returns a JavaMemoryObject that wraps a byte array stream. Therefore, the entire compiled byte array is written into this stream and can be accessed later.

Hence, all compilation happens in memory, without any files being written to the file system.

public class JavaMemoryManager extends ForwardingJavaFileManager<StandardJavaFileManager> {

    private JavaMemoryObject javaMemoryObject;

    protected JavaMemoryManager(StandardJavaFileManager fileManager) {
        super(fileManager);
    }

    @Override
    public JavaFileObject getJavaFileForOutput(Location location, String className, Kind kind, FileObject sibling)
            throws IOException {
        javaMemoryObject = new JavaMemoryObject(className, kind);

        return javaMemoryObject;
    }

    public byte[] getBytes() {
        return javaMemoryObject.getBytes();
    }
}

