
Extending OData Query Functionality

Yes, OData is extensible... :) or I should rather call it the Content Delivery Web Service...

In its current state, the OData WS is pretty limited when it comes to querying for content. I have written the following article about it: Tridion OData -- What's Possible and What Not?. The goal of this exercise is to enhance OData's search capabilities.

The Content Delivery API provides the full querying functionality for content. Hence, my plan is to extend OData with another collection, QueryComponents (in fact with two more collections -- QueryComponentPresentations and QueryPages -- but the query handling is identical to querying for Components), to which I can pass a $filter parameter using the OData syntax, grammar and operators.

The web service would parse the $filter parameter and construct a com.tridion.broker.querying.criteria.Criteria object from it. It would then execute the query and return Component entries using the OData response and all its formats.

/odata.svc/QueryComponents?$filter=Publication eq 1 and (SchemaTitle like 'Artic%' or CustomMetaValue eq 'Press Release')
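To give an idea of the first step such a $filter parser has to perform, here is a minimal, self-contained tokenizer sketch. This is purely illustrative -- the class name `FilterTokenizer` is made up, and the real CriteriaBuilder in the extension does considerably more (operator precedence, building Criteria objects) -- but splitting the expression into tokens while respecting quoted literals and parentheses looks roughly like this:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative only: a tiny tokenizer for OData-style $filter expressions.
public class FilterTokenizer {

    public static List<String> tokenize(String filter) {
        List<String> tokens = new ArrayList<String>();
        StringBuilder current = new StringBuilder();
        boolean inQuotes = false;
        for (char c : filter.toCharArray()) {
            if (c == '\'') {                       // string literal delimiter
                inQuotes = !inQuotes;
                current.append(c);
            } else if (!inQuotes && (c == '(' || c == ')')) {
                flush(tokens, current);            // parentheses are tokens themselves
                tokens.add(String.valueOf(c));
            } else if (!inQuotes && c == ' ') {
                flush(tokens, current);            // whitespace separates tokens
            } else {
                current.append(c);                 // spaces inside quotes are kept
            }
        }
        flush(tokens, current);
        return tokens;
    }

    private static void flush(List<String> tokens, StringBuilder current) {
        if (current.length() > 0) {
            tokens.add(current.toString());
            current.setLength(0);
        }
    }

    public static void main(String[] args) {
        System.out.println(tokenize(
            "Publication eq 1 and (SchemaTitle like 'Artic%' or CustomMetaValue eq 'Press Release')"));
    }
}
```

Note that 'Press Release' survives as a single token because the space falls inside a quoted literal.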

The full implementation (for the Java CD Web Service), including sources and deliverables, is available on my GitHub project OData Query Extension.

My approach covers the following:
  • extending the Content Delivery Web Service (OData);
  • providing a generic mechanism to query for content using the Content Delivery API;

Extending OData Web Service

A huge disclaimer on this topic: at the moment of writing this (i.e. June 2013 on Tridion 2013 GA), the OData extension mechanism is not public, not documented, nor supported by SDL R&D.

The OData extension mechanism is quite elegant -- it uses resource files that the framework loads dynamically using class-loaders. The resource file contains the names of the classes implementing the extension logic. The file must be called com.tridion.webservices.extension.WebserviceExtension and placed in the folder /META-INF/services inside your JAR. In this file, I provide the name of my extension class: net.mitza.odata.services.QueryRegistrationService -- more about this class further down.
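The /META-INF/services convention is the same one used by java.util.ServiceLoader: one fully-qualified class name per line, with blank lines and '#' comments ignored. The actual Tridion loading code is not public, so the sketch below only illustrates the convention (class name `ExtensionFileParser` is made up):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of how a framework typically reads a /META-INF/services style
// resource file. Illustrative only -- not the Tridion implementation.
public class ExtensionFileParser {

    public static List<String> parseClassNames(String fileContent) {
        List<String> classNames = new ArrayList<String>();
        for (String line : fileContent.split("\\r?\\n")) {
            int comment = line.indexOf('#');
            if (comment >= 0) {
                line = line.substring(0, comment);  // strip trailing comment
            }
            line = line.trim();
            if (!line.isEmpty()) {
                classNames.add(line);
            }
        }
        return classNames;
    }

    public static void main(String[] args) {
        String content = "# extension classes\nnet.mitza.odata.services.QueryRegistrationService\n";
        System.out.println(parseClassNames(content));
        // A framework would then instantiate each listed class reflectively,
        // e.g. Class.forName(className).newInstance();
    }
}
```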

It is possible to extend several aspects of the OData WS:
  • ODataResourceBundleRegistrationService - interface allowing the registration of a "ResourceResolver". This mechanism provides the Read capability for some information; it defines the data model, relations between entities, entity properties, etc.;
  • ODataServiceOperation - provides a 'servlet'-like approach to OData request/response. The interface has a handleRequest method that accepts a request object and produces a response object;
  • ODataWritableEntryService - provides the Insert, Create, Delete capabilities for writable entities;
  • ODataWritableStreamEntryService - provides Insert, Create capabilities, but allows the payload to be transported as byte arrays;

QueryRegistrationService Class

My extensibility needs are quite simple: provide a new OData collection, read-only, using predefined entities (i.e. Components). The part I want to change is the logic that retrieves these entities. Therefore, I proceeded to implement the interface ODataResourceBundleRegistrationService.

public class QueryRegistrationService implements ODataResourceBundleRegistrationService {
   ...
}

The important bits of this class are the definition of a new OData type (i.e. the top-level collections that are available when accessing the root /odata.svc/). My code defines a QueryComponents type and uses the predefined ComponentPropertyWrapper. This makes my life a lot easier, since I don't have to define yet another entity. Instead, I piggyback on the already existing Component entity and reuse all its properties and relations to other entities.

    private static final String AREA_NAME = "QUERY_AREA";
    public static final ODataInputElementType QUERY_COMPONENT;
    private static final List<Class<?>> PROPERTY_WRAPPERS;

    static {
        QUERY_COMPONENT = new ODataInputElementType("QueryComponents",
            QueryComponentResolver.class, true, AREA_NAME);
        PROPERTY_WRAPPERS = new ArrayList<Class<?>>();
        PROPERTY_WRAPPERS.add(ComponentPropertyWrapper.class);
    }

    public List getRegistrationEntries() {
        List types = new ArrayList();
        types.add(QUERY_COMPONENT);
        return types;
    }

    public List getPropertyWrappers() {
        return PROPERTY_WRAPPERS;
    }

QueryComponentResolver Class

The only thing that I want to change is how these Component entities are retrieved -- which Components, in which order, how many, etc. This is where an OData Resolver comes in. I defined my own resolver QueryComponentResolver class by extending the ResolverBase.

Again, I'm reusing the ComponentEntryBuilder (the predefined builder), since my output will consist of Component entities and I'm not planning to reinvent the wheel.

public class QueryComponentResolver extends ResolverBase {

    public QueryComponentResolver(ODataInputElement oDataInputElement) {
        super(oDataInputElement, new ComponentEntryBuilder());
    }
  ...
}

The ResolverBase abstract class defines three methods that need to be implemented:

resolveEntry()

Resolves an entity by its primary key. For Components, that would be PublicationId and ItemId. However, when querying for content, there is no primary key, so this method doesn't really apply to my case. I chose to delegate its implementation to resolveCollection().

    @Override
    protected ODataBase resolveEntry() {
        return resolveCollection();
    }

resolveCollection()

Resolves a collection of entities based on certain criteria. This method is the entry point into the logic that performs the search for content. The oDataInputElement object provides access to the identifiers present in the RESTful URL. Method getRequestParametersMap() provides access to the URL and POST parameters present in the request, including the $filter parameter. I pass these objects to the Criteria builder, which parses the values and creates a Criteria object -- more about that later.

    @Override
    protected ODataBase resolveCollection() {
        CriteriaBuilder builder = new CriteriaBuilder(oDataInputElement,
                getRequestParametersMap());
        Criteria criteria = builder.getCriteria();
        return buildQueryEntries(getComponents(criteria));
    }

The actual querying takes place in the getComponents() method, which executes the query using the CD API and returns a List of ComponentMeta objects.

    private List<ComponentMeta> getComponents(Criteria criteria) {
        List<ComponentMeta> result = new ArrayList<ComponentMeta>();
        ItemTypeCriteria componentTypeCriteria = new ItemTypeCriteria(ItemTypes.COMPONENT);
        if (criteria == null) {
            criteria = componentTypeCriteria;
        } else {
            criteria.addCriteria(componentTypeCriteria);
        }

        Query query = new Query(criteria);
        Item[] items = query.executeEntityQuery();
        for (Item item : items) {
            ComponentMeta componentMeta =
                    MapperFactory.mapComponentMeta((com.tridion.meta.ComponentMeta) item);
            result.add(componentMeta);
        }

        return result;
    }

Now that we have the data, we can build the OData response. This is done in method buildQueryEntries(). An ODataFeed is created that will contain the actual OData response. We set a title, an update date (now) and an id (the actual URL that was requested). Then, iterating over the ComponentMeta list, we add each entry, again piggybacking on the buildEntry() method from ResolverBase, which calls the ComponentEntryBuilder as defined in the constructor.

    private ODataFeed buildQueryEntries(List<ComponentMeta> items) {
        ODataFeed feed = new ODataFeed(ODataFeed.COMPONENT_TITLE,
                getRequestUrl(), getBaseUrl());
        feed.setUpdated(new Date());
        feed.setId(getRequestUrl());
        for (ComponentMeta componentMeta : items) {
            feed.addEntry(buildEntry(componentMeta, false));
        }

        return feed;
    }

resolveRelation()

Since I have not defined any new relations, the framework should not call this method at all. In case it is somehow called, it simply outputs an error response with an error message.

    @Override
    protected ODataBase resolveRelation(ODataEntry previouslyResolvedEntry) {
        return new ODataErrorResponse("", "Relation is not supported");
    }

Installing the OData Extension

This is one of the best parts of the entire extension mechanism -- there is no installation necessary :)

We do, however, need to copy the deliverable JAR file into the OData web application's WEB-INF/lib folder (or anywhere on the class-path).

The JAR is available on my GitHub project OData Query Extension, linked above.

Restart the OData web application and the extension will be loaded automatically.

Example Queries

In the follow-up post URL Syntax for OData Query Extension, I write about the parser, syntax and grammar used to access the /QueryComponents service.

For now, a quick peek at some accepted RESTful URLs:
  • /odata.svc/QueryComponents(Publication=123,ItemSchema=456), which is equivalent to:
  • /odata.svc/QueryComponents?$filter=Publication eq 123 and ItemSchema eq 456
  • /odata.svc/QueryComponents?$filter=PublicationTitle like 'Web%' and
            (ItemLastPublishedDate gt '2012-06-20 16:30:25.000' or Multimedia eq true)
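When calling the service programmatically, keep in mind that the $filter value must be URL-encoded (the spaces in particular). A quick sketch -- the host, port and class name `QueryUrlBuilder` are placeholders of my choosing, not part of the extension:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

// Builds a request URL for the QueryComponents collection.
public class QueryUrlBuilder {

    public static String buildQueryUrl(String baseUrl, String filter) {
        try {
            // URLEncoder encodes spaces as '+', which is fine for query strings
            return baseUrl + "/odata.svc/QueryComponents?$filter="
                    + URLEncoder.encode(filter, "UTF-8");
        } catch (UnsupportedEncodingException e) {
            throw new IllegalStateException("UTF-8 is always supported", e);
        }
    }

    public static void main(String[] args) {
        System.out.println(buildQueryUrl("http://localhost:8080",
                "Publication eq 123 and ItemSchema eq 456"));
    }
}
```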


