
SDL Web 8 - The "Discovery" Microservice

One of the microservices I described in my post Content Delivery Microservices is the Discovery Service. This post explains what the service does and provides a step-by-step installation guide.

The Discovery service is a new addition to the set of functionality in Tridion. It is a REST service that provides information about the capabilities of a given content delivery environment. A capability represents a Content Delivery module that exposes itself as a REST service (i.e. one of the other CD microservices of SDL Web 8). The Discovery service knows about each capability in a given environment and exposes this information through its methods.

For example, if a content delivery environment has the Deployer and Dynamic Content capabilities, the Discovery service provides calling clients with information about their endpoints. Other systems can thus discover which capabilities are available in a given environment and where their endpoints are, so that they can communicate with these modules. We will see in a subsequent post how this works in more detail, using the new way of publishing and the Topology Manager.
Top level XML response of Discovery service
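The top-level response shown above is an Atom/OData-style service document listing the available collections. As a rough sketch of its shape (the port, path and collection names below are assumptions for illustration, not taken from an actual installation):

```xml
<!-- Illustrative sketch only; the actual base URL and collection names
     depend on your installation -->
<service xmlns="http://www.w3.org/2007/app"
         xmlns:atom="http://www.w3.org/2005/Atom"
         xml:base="http://localhost:8082/discovery.svc/">
  <workspace>
    <atom:title>Default</atom:title>
    <collection href="Environment">
      <atom:title>Environment</atom:title>
    </collection>
  </workspace>
</service>
```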

The Environment metadata provides the actual access to the individual capabilities:
Details of the available capabilities

The Discovery service is a server role, shipped as a Spring Boot package that runs either standalone or as a Windows service. By default, it contains an embedded Apache Tomcat instance, which makes it very easy to install and get running.

The Discovery service uses a Content Delivery database to persist the information about each configured capability in its environment. Therefore, we need to configure the Storage node in cd_storage_conf.xml, as well as each installed capability. A capability is configured with a URL endpoint that defines it as a REST service, a client id and, optionally, an OAuth secret.
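To make this concrete, here is a hedged sketch of what the relevant parts of cd_storage_conf.xml might look like. The element and attribute names, server names and credentials below are illustrative assumptions; check the configuration file shipped with your version for the exact schema:

```xml
<!-- Illustrative fragment: the database Storage the Discovery service
     persists into (names and values are assumptions) -->
<Storage Type="persistence" Id="defaultdb" dialect="MSSQL"
         Class="com.tridion.storage.persistence.JPADAOFactory">
  <DataSource Class="com.microsoft.sqlserver.jdbc.SQLServerDataSource">
    <Property Name="serverName" Value="DBSERVER"/>
    <Property Name="portNumber" Value="1433"/>
    <Property Name="databaseName" Value="Tridion_Discovery"/>
    <Property Name="user" Value="TridionDiscoveryUser"/>
    <Property Name="password" Value="*****"/>
  </DataSource>
</Storage>

<!-- Illustrative capability entries: each one is a URL endpoint plus
     OAuth client credentials (element names are assumptions) -->
<Roles>
  <Role Name="DeployerCapability" Url="http://localhost:8084/httpupload"/>
  <Role Name="ContentServiceCapability" Url="http://localhost:8081/content.svc"/>
</Roles>
```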

To install it as a Windows service, execute the .\installService.ps1 PowerShell script:
PowerShell installation script

It is also possible to run the Discovery service in secured mode, using the OAuth protocol. To configure this, modify the file cd_ambient_conf.xml.
OAuth configuration section cd_ambient_conf.xml
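The OAuth section shown in the screenshot pairs client ids with shared secrets that calling services must present. A hedged sketch of what that section roughly looks like (element names and values are illustrative; use the cd_ambient_conf.xml shipped with the product as the authoritative reference):

```xml
<!-- Illustrative fragment only; verify against your shipped cd_ambient_conf.xml -->
<Security OAuthEnabled="true">
  <Clients>
    <!-- Client id/secret pairs other services use to authenticate -->
    <Client Id="cduser" Secret="*****"/>
    <Client Id="cmuser" Secret="*****"/>
  </Clients>
</Security>
```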

It is also possible to serve it over HTTPS; however, due to the additional hassle with certificates, SDL recommends instead placing a reverse proxy in front of it and securing the connection at the proxy level.
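One way to secure the connection at the proxy level is to terminate TLS in an nginx reverse proxy and forward plain HTTP to the service. This is only a sketch; the hostname, port and certificate paths are assumptions:

```nginx
server {
    listen 443 ssl;
    server_name discovery.example.com;

    # Certificate paths are illustrative
    ssl_certificate     /etc/nginx/certs/discovery.crt;
    ssl_certificate_key /etc/nginx/certs/discovery.key;

    location / {
        # Forward decrypted traffic to the plain-HTTP Discovery service
        proxy_pass http://localhost:8082;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```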

Lastly, we can start the SDL Delivery Discovery Services Windows Service.
Discovery Service running as Windows Service

Once all configuration is done and the Discovery service is up and running, we need to store the information about each capability in this environment into the Discovery Content Delivery database. To do so, we use a utility JAR named discovery-registration.jar, with the following command line (I ran it from the /config folder of the Discovery service installation):

    java -jar discovery-registration.jar

This utility reads the capability configurations defined earlier in cd_storage_conf.xml and performs an HTTP POST to the Discovery service, which then stores them in the Content Delivery database. The fact that it is possible to send these configurations over the wire and instruct the Discovery service to persist them in its database represents a nice step towards modular, self-configuring services, and it opens up scenarios where this can be used in the future.
Execution of the discovery-registration.jar
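A quick way to check that the registration succeeded is to query the service again and inspect the Environment entity. The URL below assumes the default port and service path, so adjust it to your installation:

```shell
# Port and path are assumptions; adjust to your environment
curl http://localhost:8082/discovery.svc/Environment
```

If the POST worked, the response should list the registered capabilities and their endpoint URLs.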

For the inner geek, here is a little snapshot of how the configurations look when stored in the Discovery database:
Capabilities persisted in Discovery Service database


