Pipelines
Every feed has an associated translation. The translation is used to convert the input text or XML into event logging XML format.
XSLT is used to translate from XML to event logging XML.
In Stroom reference data is primarily used to decorate events using stroom:lookup()
calls in XSLTs.
For example you may have a reference data feed that associates the FQDN of a device to its physical location.
You can then perform a stroom:lookup()
in the XSLT to decorate an event with the physical location of a device by looking up the FQDN found in the event.
Reference data is time sensitive and each stream of reference data has an Effective Date set against it. This allows reference data lookups to be performed using the date of the event to ensure the reference data that was actually effective at the time of the event is used.
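The effective-date selection described above can be sketched as follows. This is an illustrative model only, not Stroom's implementation: for a given event time, the stream whose Effective Date is the latest one at or before the event time is used.

```python
from datetime import datetime

def pick_effective_date(effective_dates, event_time):
    """Return the latest effective date at or before event_time, or None."""
    candidates = [d for d in effective_dates if d <= event_time]
    return max(candidates) if candidates else None

# One entry per available reference stream of a feed.
stream_dates = [datetime(2020, 1, 1), datetime(2020, 6, 1), datetime(2021, 1, 1)]

# An event occurring on 2020-07-15 uses the stream effective from 2020-06-01.
chosen = pick_effective_date(stream_dates, datetime(2020, 7, 15))
```

Note that an event older than all available effective dates yields no stream at all, which is why lookups can fail for very old events.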
Using reference data involves the following steps/processes:
- Ingesting the raw reference data into Stroom.
- Converting the raw reference data into reference-data:2 format XML.
- Loading the cooked reference data into the reference data store via a reference loader pipeline.
- Performing lookups against the store from the XSLT of an event pipeline.
The process of creating a reference data pipeline is described in the HOWTO linked at the top of this document.
A reference data entry essentially consists of the following:
- A map name, used to group entries of the same type.
- A key (or a key range), used to find the entry within the map.
- A value, which can be a simple string or an XML fragment.
The following is an example of some reference data that has been converted from its raw form into reference-data:2 XML.
In this example the reference data contains four entries, each belonging to a different map.
The first two have simple single-element values, the third has a simple string value associated with a key range, and the last has a namespaced XML fragment value.
<?xml version="1.1" encoding="UTF-8"?>
<referenceData
    xmlns="reference-data:2"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:stroom="stroom"
    xmlns:evt="event-logging:3"
    xsi:schemaLocation="reference-data:2 file://reference-data-v2.0.xsd"
    version="2.0.1">

  <!-- A simple string value -->
  <reference>
    <map>FQDN_TO_IP</map>
    <key>stroomnode00.strmdev00.org</key>
    <value>
      <IPAddress>192.168.2.245</IPAddress>
    </value>
  </reference>

  <!-- A simple string value -->
  <reference>
    <map>IP_TO_FQDN</map>
    <key>192.168.2.245</key>
    <value>
      <HostName>stroomnode00.strmdev00.org</HostName>
    </value>
  </reference>

  <!-- A key range -->
  <reference>
    <map>USER_ID_TO_COUNTRY_CODE</map>
    <range>
      <from>1</from>
      <to>1000</to>
    </range>
    <value>GBR</value>
  </reference>

  <!-- An XML fragment value -->
  <reference>
    <map>FQDN_TO_LOC</map>
    <key>stroomnode00.strmdev00.org</key>
    <value>
      <evt:Location>
        <evt:Country>GBR</evt:Country>
        <evt:Site>Bristol-S00</evt:Site>
        <evt:Building>GZero</evt:Building>
        <evt:Room>R00</evt:Room>
        <evt:TimeZone>+00:00/+01:00</evt:TimeZone>
      </evt:Location>
    </value>
  </reference>
</referenceData>
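To make the entry structure concrete, the following Python sketch (illustrative only; not code Stroom uses) pulls the map/key pairs out of a cut-down reference-data:2 document like the one above:

```python
import xml.etree.ElementTree as ET

# A cut-down reference-data:2 document, modelled on the example above.
DOC = """<?xml version="1.0" encoding="UTF-8"?>
<referenceData xmlns="reference-data:2" version="2.0.1">
  <reference>
    <map>IP_TO_FQDN</map>
    <key>192.168.2.245</key>
    <value><HostName>stroomnode00.strmdev00.org</HostName></value>
  </reference>
</referenceData>"""

NS = {"ref": "reference-data:2"}
root = ET.fromstring(DOC)

# Each <reference> carries a map name, a key (or range) and a value.
entries = [
    {
        "map": ref.findtext("ref:map", namespaces=NS),
        "key": ref.findtext("ref:key", namespaces=NS),
    }
    for ref in root.findall("ref:reference", NS)
]
```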
When XML reference data values are created, as in the example XML above, the XML values must be qualified with a namespace to distinguish them from the reference-data:2
XML that surrounds them.
In the above example the XML fragment will become as follows when injected into an event:
<evt:Location xmlns:evt="event-logging:3">
  <evt:Country>GBR</evt:Country>
  <evt:Site>Bristol-S00</evt:Site>
  <evt:Building>GZero</evt:Building>
  <evt:Room>R00</evt:Room>
  <evt:TimeZone>+00:00/+01:00</evt:TimeZone>
</evt:Location>
Even if evt is already declared in the XML that the fragment is being injected into, a declaration on the reference fragment will be repeated explicitly in the destination.
While duplicate namespacing may appear odd, it is valid XML.
The namespacing can also be achieved like this:
<?xml version="1.1" encoding="UTF-8"?>
<referenceData
    xmlns="reference-data:2"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:stroom="stroom"
    xsi:schemaLocation="reference-data:2 file://reference-data-v2.0.xsd"
    version="2.0.1">

  <!-- An XML value -->
  <reference>
    <map>FQDN_TO_LOC</map>
    <key>stroomnode00.strmdev00.org</key>
    <value>
      <Location xmlns="event-logging:3">
        <Country>GBR</Country>
        <Site>Bristol-S00</Site>
        <Building>GZero</Building>
        <Room>R00</Room>
        <TimeZone>+00:00/+01:00</TimeZone>
      </Location>
    </value>
  </reference>
</referenceData>
This reference data will be injected into the event XML exactly as it is, i.e.:
<Location xmlns="event-logging:3">
  <Country>GBR</Country>
  <Site>Bristol-S00</Site>
  <Building>GZero</Building>
  <Room>R00</Room>
  <TimeZone>+00:00/+01:00</TimeZone>
</Location>
Reference data is stored in two different places on a Stroom node. Reference data is only visible to the node where it is located, so each node performing reference data lookups needs to load and store its own reference data. While this results in duplicate data being held by nodes, it makes the storage of reference data and its subsequent lookup very performant.
The On-Heap store is the reference data store that is held in memory in the Java Heap. This store is volatile and will be lost on shut down of the node. The On-Heap store is only used for storage of context data.
The Off-Heap store is the reference data store that is held in memory outside of the Java Heap and is persisted to local disk. Because the store is persisted to local disk, the reference data will survive the shutdown of the Stroom instance. Storing the data off-heap means Stroom can run with a much smaller Java Heap size.
The Off-Heap store is based on the Lightning Memory-Mapped Database (LMDB). LMDB makes use of the Linux page cache to ensure that hot portions of the reference data are held in the page cache (making use of all available free memory). Infrequently used portions of the reference data will be evicted from the page cache by the Operating System. Given that LMDB utilises the page cache for holding reference data in memory the more free memory the host has the better as there will be less shifting of pages in/out of the OS page cache. When storing large amounts of data you may experience the OS reporting very little free memory as a large amount will be in use by the page cache. This is not an issue as the OS will evict pages when memory is needed for other applications, e.g. the Java Heap.
The Off-Heap store is intended to be located on local disk on the Stroom node.
The location of the store is set using the property stroom.pipeline.referenceData.localDir
.
Using LMDB on remote storage is NOT advised, see http://www.lmdb.tech/doc.
If you are running Stroom on AWS EC2 instances then you will need to attach some local instance storage to the host, e.g. SSD, to use for the reference data store. In tests, EBS storage was found to be VERY slow. It should be noted that AWS instance storage is not persistent across instance stops, terminations and hardware failures. However, any loss of the reference data store just means that the next time Stroom boots a new store will be created and reference data will be loaded on demand as normal.
LMDB is a transactional database with ACID semantics. All interaction with LMDB is done within a read or write transaction. There can only be one write transaction at a time so if there are a number of concurrent reference data loads then they will have to wait in line. Read transactions, i.e. lookups, are not blocked by each other or by write transactions so once the data is loaded and is in memory lookups can be performed very quickly.
When data is read from the store, if the data is not already in the page cache then it will be read from disk and added to the page cache by the OS.
Read-ahead is the process of speculatively reading more pages into the page cache than were requested, on the basis that future requests for data may need those pages.
If the reference data store is very large or is larger than the available memory then it is recommended to turn read-ahead off as the result will be to evict hot reference data from the page cache to make room for speculative pages that may not be needed.
It can be turned off with the system property stroom.pipeline.referenceData.readAheadEnabled
.
When reference data is created, care must be taken to ensure that the Key used for each entry is less than 507 bytes. For keys containing only ASCII characters this means less than 507 characters; non-ASCII characters take up more than one byte per character, so the maximum key length in characters will be lower. This is a limitation inherent to LMDB.
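A quick way to sanity-check key lengths is shown below. This is a sketch that assumes keys are measured as UTF-8 bytes, which is where the multi-byte-character caveat comes from:

```python
MAX_KEY_BYTES = 507  # LMDB key size limit described above

def key_byte_length(key: str) -> int:
    # Length in bytes, not characters: non-ASCII characters
    # occupy more than one byte when encoded as UTF-8.
    return len(key.encode("utf-8"))

ascii_key = "stroomnode00.strmdev00.org"
accented_key = "caf\u00e9"   # 4 characters but 5 bytes in UTF-8
```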
The property stroom.pipeline.referenceData.maxPutsBeforeCommit
controls the number of entries that are put into the store between each commit.
As there can only be one transaction writing to the store at a time, committing periodically allows other processes to jump in and make writes.
There is a trade off though as reducing the number of entries put between each commit can seriously affect performance.
For the fastest single process performance a value of zero should be used which means it will not commit mid-load.
This however means all other processes wanting to write to the store will need to wait.
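The commit batching behaviour can be modelled like this. This is a sketch of the semantics only, not Stroom's loader code; `put` and `commit` are stand-ins for the real store operations:

```python
def load_entries(entries, put, commit, max_puts_before_commit=0):
    """Commit every N puts; 0 means commit only once, at the end of the load."""
    puts_since_commit = 0
    for key, value in entries:
        put(key, value)
        puts_since_commit += 1
        if max_puts_before_commit and puts_since_commit >= max_puts_before_commit:
            commit()
            puts_since_commit = 0
    commit()  # final commit covers any remaining puts

store, commit_sizes = {}, []
load_entries(
    [("a", 1), ("b", 2), ("c", 3)],
    put=store.__setitem__,
    commit=lambda: commit_sizes.append(len(store)),
    max_puts_before_commit=2,
)
```

With three entries and a batch size of two, one mid-load commit happens after the second put and the final commit picks up the third, illustrating the trade-off: smaller batches yield more commits (and more chances for other writers to get in) at the cost of speed.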
If you are provisioning a new Stroom node it is possible to copy the off-heap store from another node.
Stroom should not be running on the node being copied from.
Simply copy the contents of stroom.pipeline.referenceData.localDir
into the same configured location on the other node.
The new node will use the copied store and have access to its reference data.
Due to the way LMDB works the store can only grow in size, it will never shrink, even if data is deleted. Deleted data frees up space for new writes to the store so will be reused but never freed. If there is a regular process of purging old data and adding new reference data then this should not be an issue as the new reference data will use the space made available by the purged data.
If store size becomes an issue then it is possible to compact the store.
lmdb-utils
is available on some package managers and this has an mdb_copy
command that can be used with the -c
switch to copy the LMDB environment to a new one, compacting it in the process.
This should be done when Stroom is down to avoid writes happening to the store while the copy is happening.
Given that the store is essentially a cache and all data can be re-loaded another option is to delete the contents of stroom.pipeline.referenceData.localDir
when Stroom is not running.
On boot Stroom will create a brand new store and reference data will be re-loaded as required.
Reference data is loaded into the store on demand during the processing of a stroom:lookup()
method call.
Reference data will only be loaded if it does not already exist in the store, however it is always loaded as a complete stream, rather than entry by entry.
The test for existence in the store is based on the following criteria:
- The UUID of the reference loader pipeline.
- The version of the reference loader pipeline.
- The Stream ID of the effective reference data stream.
If a reference stream has already been loaded matching the above criteria then no additional load is required.
IMPORTANT: It should be noted that as the version of the reference data pipeline forms part of the criteria, if the reference loader pipeline is changed, for whatever reason, then this will invalidate ALL existing reference data associated with that reference loader pipeline.
Typically the reference loader pipeline is very static so this should not be an issue.
Standard practice is to convert raw reference data into reference-data:2 XML on receipt using a pipeline separate to the reference loader.
The reference loader is then only concerned with reading cooked reference-data:2 XML into the Reference Data Filter.
In instances where reference data streams are infrequently used it may be preferable to not convert the raw reference on receipt but instead to do it in the reference loader pipeline.
The Reference Data Filter pipeline element has a property overrideExistingValues.
If set to true, an entry found in an effective stream with the same key as an entry already loaded will overwrite the existing one; if set to false, the existing entry will be kept.
Entries are loaded in the order they are found in the reference-data:2 XML document.
If warnOnDuplicateKeys is set to true then a warning will be logged for any duplicate keys, whether an overwrite happens or not.
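The duplicate-key behaviour can be modelled as follows. This is an illustrative sketch of the semantics described above, not the Reference Data Filter implementation:

```python
def put_entry(store, key, value, override_existing_values=True,
              warn_on_duplicate_keys=False, warnings=None):
    """Model of the overrideExistingValues / warnOnDuplicateKeys behaviour."""
    if key in store:
        if warn_on_duplicate_keys and warnings is not None:
            warnings.append(f"duplicate key: {key}")
        if not override_existing_values:
            return  # keep the existing entry
    store[key] = value

store, warnings = {}, []
put_entry(store, "host-a", "first", warnings=warnings)
# Second entry with the same key: warned about, but not overwritten.
put_entry(store, "host-a", "second", override_existing_values=False,
          warn_on_duplicate_keys=True, warnings=warnings)
```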
Only unique values are held in the store to reduce the storage footprint. This is useful given that typically, reference data updates may be received daily and each one is a full snapshot of the whole reference data. As a result this can mean many copies of the same value being loaded into the store. The store will only hold the first instance of duplicate values.
The reference data store can be queried within a Dashboard in Stroom by selecting Reference Data Store
in the data source selection pop-up.
Querying the store is currently an experimental feature and is mostly intended for use in fault finding.
Given the localised nature of the reference data store the dashboard can currently only query the store on the node that the user interface is being served from.
In a multi-node environment where some nodes are UI only and most are processing only, the UI nodes will have no reference data in their store.
Reference data loading and purging is done at the level of a reference stream. Whenever a reference lookup is performed the last accessed time of the reference stream in the store is checked. If it is older than one hour then it will be updated to the current time. This last access time is used to determine reference streams that are no longer in active use and thus can be purged.
The Stroom job Ref Data Off-heap Store Purge is used to perform the purge operation on the Off-Heap reference data store.
No purge is required for the On-Heap store as that only holds transient data.
When the purge job is run it checks the time since each reference stream was accessed against the purge cut-off age.
The purge age is configured via the property stroom.pipeline.referenceData.purgeAge
.
It is advised to schedule this job for quiet times when it is unlikely to conflict with reference loading operations as they will fight for access to the single write transaction.
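The last-access tracking and purge described above can be sketched like this. It is an illustrative model only; the stream dicts and field names are hypothetical stand-ins for the store's internal state:

```python
from datetime import datetime, timedelta

ONE_HOUR = timedelta(hours=1)

def record_access(stream, now):
    # The stored last-access time is only rewritten if it is
    # more than an hour old, as described above.
    if now - stream["last_accessed"] > ONE_HOUR:
        stream["last_accessed"] = now

def purge(streams, now, purge_age):
    """Keep only streams accessed within purge_age of now."""
    return [s for s in streams if now - s["last_accessed"] <= purge_age]

now = datetime(2021, 1, 10)
streams = [
    {"id": 1, "last_accessed": datetime(2021, 1, 9)},   # recently used
    {"id": 2, "last_accessed": datetime(2020, 11, 1)},  # stale
]
kept = purge(streams, now, purge_age=timedelta(days=30))
```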
Lookups are performed in XSLT Filters using the XSLT functions.
In order to perform a lookup one or more reference feeds must be specified on the XSLT Filter pipeline element.
Each reference feed is specified along with a reference loader pipeline that will ingest the specified feed (optionally converting it into reference-data:2 XML if it is not already) and pass it into a Reference Data Filter pipeline element.
In the XSLT Filter pipeline element multiple combinations of feed and reference loader pipeline can be specified. There must be at least one in order to perform lookups. If there are multiple then when a lookup is called for a given time, the effective stream for each feed/loader combination is determined. The effective stream for each feed/loader combination will be loaded into the store, unless it is already present.
When the actual lookup is performed Stroom will try to find the key in each of the effective streams that have been loaded and that contain the map in the lookup call. If the lookup is unsuccessful in the effective stream for the first feed/loader combination then it will try the next, and so on until it has tried all of them. For this reason if you have multiple feed/loader combinations then order is important. It is possible for multiple effective streams to contain the same map/key so a feed/loader combination higher up the list will trump one lower down with the same map/key. Also if you have some lookups that may not return a value and others that should always return a value then the feed/loader for the latter should be higher up the list so it is searched first.
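The ordered search across feed/loader combinations behaves like the following sketch (an illustrative model of the semantics; the stream contents are hypothetical):

```python
def ordered_lookup(effective_streams, map_name, key):
    """Try each loaded effective stream in configured order; first hit wins."""
    for stream in effective_streams:
        value = stream.get(map_name, {}).get(key)
        if value is not None:
            return value
    return None

# One dict per feed/loader combination, highest priority first.
effective_streams = [
    {"FQDN_TO_IP": {"host-a": "10.0.0.1"}},
    {"FQDN_TO_IP": {"host-a": "192.168.0.1", "host-b": "192.168.0.2"}},
]
```

Because the first stream is tried first, its value for host-a trumps the second stream's, while host-b is only found by falling through to the second stream.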
Standard key/value lookups consist of a simple string key and a value that is either a simple string or an XML fragment.
Standard lookups are performed using the various forms of the stroom:lookup()
XSLT function.
Range lookups consist of a key that is an integer and a value that is either a simple string or an XML fragment.
For more detail on range lookups see the XSLT function stroom:lookup()
.
Nested map lookups involve chaining a number of lookups with the value of each map being used as the key for the next.
For more detail on nested lookups see the XSLT function stroom:lookup()
.
A bitmap lookup is a special kind of lookup that actually performs a lookup for each enabled bit position of the passed bitmap value.
For more detail on bitmap lookups see the XSLT function stroom:bitmap-lookup()
.
Values can either be a simple string or an XML fragment.
Some event streams have a Context stream associated with them. Context streams allow the system sending the events to Stroom to supply an additional stream of data that provides context to the raw event stream. This can be useful when the system sending the events has no control over the event content but needs to supply additional information. The context stream can be used in lookups as a reference source to decorate events on receipt. Context reference data is specific to a single event stream so is transient in nature, therefore the On Heap Store is used to hold it for the duration of the event stream processing only.
Typically the reference loader for a context stream will include a translation step to convert the raw context data into reference-data:2 XML.
The reference data store has an API to allow other systems to access the reference data store.
The lookup
endpoint requires the caller to provide details of the reference feed and loader pipeline so if the effective stream is not in the store it can be loaded prior to performing the lookup.
Below is an example of a lookup request.
{
  "mapName": "USER_ID_TO_LOCATION",
  "effectiveTime": "2020-12-02T08:37:02.772Z",
  "key": "jbloggs",
  "referenceLoaders": [
    {
      "loaderPipeline": {
        "name": "Reference Loader",
        "uuid": "da1c7351-086f-493b-866a-b42dbe990700",
        "type": "Pipeline"
      },
      "referenceFeed": {
        "name": "USER_ID_TOLOCATION-REFERENCE",
        "uuid": "60f9f51d-e5d6-41f5-86b9-ae866b8c9fa3",
        "type": "Feed"
      }
    }
  ]
}
When outputting files with Stroom, the output file names and paths can include various substitution variables to form the file and path names.
The following replacement variables are specific to the current processing context.
- ${feed} - The name of the feed that the stream being processed belongs to
- ${pipeline} - The name of the pipeline that is producing output
- ${sourceId} - The id of the input data being processed
- ${partNo} - The part number of the input data being processed where data is in aggregated batches
- ${searchId} - The id of the batch search being performed. This is only available during a batch search
- ${node} - The name of the node producing the output
The following replacement variables can be used to include aspects of the current time in UTC:
- ${year} - Year in 4 digit form, e.g. 2000
- ${month} - Month of the year padded to 2 digits
- ${day} - Day of the month padded to 2 digits
- ${hour} - Hour padded to 2 digits using 24 hour clock, e.g. 22
- ${minute} - Minute padded to 2 digits
- ${second} - Second padded to 2 digits
- ${millis} - Milliseconds padded to 3 digits
- ${ms} - Milliseconds since the epoch
System variables (environment variables) can also be used, e.g. ${TMP}.
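The variables above can be modelled with Python's string.Template, which happens to use the same ${name} syntax. This is a sketch of the substitution behaviour only, not Stroom's own substitution engine:

```python
from string import Template

def substitute(template, context):
    # safe_substitute leaves any unknown ${...} names untouched
    # rather than raising an error.
    return Template(template).safe_substitute(context)

name = substitute(
    "${feed}_${year}${month}${day}.log",
    {"feed": "MY_FEED", "year": "2020", "month": "12", "day": "02"},
)
```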
rolledFileName in RollingFileAppender can use references to the fileName to incorporate parts of the non rolled file name.
- ${fileName} - The complete file name
- ${fileStem} - Part of the file name before the file extension, i.e. everything before the last ‘.’
- ${fileExtension} - The extension part of the file name, i.e. everything after the last ‘.’
- ${uuid} - A randomly generated UUID to guarantee unique file names
The following capabilities are available to parse input data:
## Context File

### Input File:
<?xml version="1.0" encoding="UTF-8"?>
<SomeData>
  <SomeEvent>
    <SomeTime>01/01/2009:12:00:01</SomeTime>
    <SomeAction>OPEN</SomeAction>
    <SomeUser>userone</SomeUser>
    <SomeFile>D:\TranslationKit\example\VerySimple\OpenFileEvents.txt</SomeFile>
  </SomeEvent>
</SomeData>
### Context File:

<?xml version="1.0" encoding="UTF-8"?>
<SomeContext>
  <Machine>MyMachine</Machine>
</SomeContext>
### Context XSLT:

<?xml version="1.0" encoding="UTF-8" ?>
<xsl:stylesheet
    xmlns="reference-data:2"
    xmlns:evt="event-logging:3"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    version="2.0">

  <xsl:template match="SomeContext">
    <referenceData
        xsi:schemaLocation="event-logging:3 file://event-logging-v3.0.0.xsd reference-data:2 file://reference-data-v2.0.1.xsd"
        version="2.0.1">
      <xsl:apply-templates/>
    </referenceData>
  </xsl:template>

  <xsl:template match="Machine">
    <reference>
      <map>CONTEXT</map>
      <key>Machine</key>
      <value><xsl:value-of select="."/></value>
    </reference>
  </xsl:template>
</xsl:stylesheet>
### Context XML Translation:

<?xml version="1.0" encoding="UTF-8"?>
<referenceData xmlns:evt="event-logging:3"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns="reference-data:2"
               xsi:schemaLocation="event-logging:3 file://event-logging-v3.0.0.xsd reference-data:2 file://reference-data-v2.0.1.xsd"
               version="2.0.1">
  <reference>
    <map>CONTEXT</map>
    <key>Machine</key>
    <value>MyMachine</value>
  </reference>
</referenceData>
### Input File:

<?xml version="1.0" encoding="UTF-8"?>
<SomeData>
  <SomeEvent>
    <SomeTime>01/01/2009:12:00:01</SomeTime>
    <SomeAction>OPEN</SomeAction>
    <SomeUser>userone</SomeUser>
    <SomeFile>D:\TranslationKit\example\VerySimple\OpenFileEvents.txt</SomeFile>
  </SomeEvent>
</SomeData>
### Main XSLT (Note the use of the context lookup):

<?xml version="1.0" encoding="UTF-8" ?>
<xsl:stylesheet
    xmlns="event-logging:3"
    xmlns:s="stroom"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    version="2.0">

  <xsl:template match="SomeData">
    <Events xsi:schemaLocation="event-logging:3 file://event-logging-v3.0.0.xsd" Version="3.0.0">
      <xsl:apply-templates/>
    </Events>
  </xsl:template>

  <xsl:template match="SomeEvent">
    <xsl:if test="SomeAction = 'OPEN'">
      <Event>
        <EventTime>
          <TimeCreated>
            <xsl:value-of select="s:format-date(SomeTime, 'dd/MM/yyyy:hh:mm:ss')"/>
          </TimeCreated>
        </EventTime>
        <EventSource>
          <System>Example</System>
          <Environment>Example</Environment>
          <Generator>Very Simple Provider</Generator>
          <Device>
            <IPAddress>182.80.32.132</IPAddress>
            <Location>
              <Country>UK</Country>
              <Site><xsl:value-of select="s:lookup('CONTEXT', 'Machine')"/></Site>
              <Building>Main</Building>
              <Floor>1</Floor>
              <Room>1aaa</Room>
            </Location>
          </Device>
          <User><Id><xsl:value-of select="SomeUser"/></Id></User>
        </EventSource>
        <EventDetail>
          <View>
            <Document>
              <Title>UNKNOWN</Title>
              <File>
                <Path><xsl:value-of select="SomeFile"/></Path>
              </File>
            </Document>
          </View>
        </EventDetail>
      </Event>
    </xsl:if>
  </xsl:template>
</xsl:stylesheet>
### Main Output XML:

<?xml version="1.0" encoding="UTF-8"?>
<Events xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xmlns="event-logging:3"
        xsi:schemaLocation="event-logging:3 file://event-logging-v3.0.0.xsd"
        Version="3.0.0">
  <Event Id="6:1">
    <EventTime>
      <TimeCreated>2009-01-01T00:00:01.000Z</TimeCreated>
    </EventTime>
    <EventSource>
      <System>Example</System>
      <Environment>Example</Environment>
      <Generator>Very Simple Provider</Generator>
      <Device>
        <IPAddress>182.80.32.132</IPAddress>
        <Location>
          <Country>UK</Country>
          <Site>MyMachine</Site>
          <Building>Main</Building>
          <Floor>1</Floor>
          <Room>1aaa</Room>
        </Location>
      </Device>
      <User>
        <Id>userone</Id>
      </User>
    </EventSource>
    <EventDetail>
      <View>
        <Document>
          <Title>UNKNOWN</Title>
          <File>
            <Path>D:\TranslationKit\example\VerySimple\OpenFileEvents.txt</Path>
          </File>
        </Document>
      </View>
    </EventDetail>
  </Event>
</Events>
Some input XML data may be missing an XML declaration and root level enclosing elements. This data is not a valid XML document and must be treated as an XML fragment. To use XML fragments the input type for a translation must be set to ‘XML Fragment’. A fragment wrapper must be defined in the XML conversion that tells Stroom what declaration and root elements to place around the XML fragment data.
Here is an example:
<?xml version="1.1" encoding="UTF-8"?>
<!DOCTYPE records [
  <!ENTITY fragment SYSTEM "fragment">
]>
<records
    xmlns="records:2"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="records:2 file://records-v2.0.xsd"
    version="2.0">
  &fragment;
</records>
During conversion Stroom replaces the fragment text entity with the input XML fragment data. Note that XML fragments must still be well formed so that they can be parsed correctly.
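The substitution Stroom performs can be demonstrated with the following sketch. Because Python's ElementTree does not resolve external entities, the entity reference is modelled here as a plain placeholder replacement, and the fragment content itself is hypothetical:

```python
import xml.etree.ElementTree as ET

# Stand-in for the &fragment; entity: the placeholder is replaced with
# the fragment text before parsing, producing a well-formed document.
WRAPPER = """<?xml version="1.0" encoding="UTF-8"?>
<records xmlns="records:2" version="2.0">
__FRAGMENT__
</records>"""

# A hypothetical fragment with no XML declaration or root element.
fragment = '<record><data name="user" value="userone"/></record>'

doc = WRAPPER.replace("__FRAGMENT__", fragment)
root = ET.fromstring(doc)  # parses only because the wrapper closes it
```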
Once the text file has been converted into Intermediary XML (or the feed is already XML), XSLT is used to translate the XML into event logging XML format.
Event Feeds must be translated into the events schema and Reference into the reference schema. You can browse documentation relating to the schemas within the application.
Here is an example XSLT:
<?xml version="1.0" encoding="UTF-8" ?>
<xsl:stylesheet
    xmlns="event-logging:3"
    xmlns:s="stroom"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    version="2.0">

  <xsl:template match="SomeData">
    <Events
        xsi:schemaLocation="event-logging:3 file://event-logging-v3.0.0.xsd"
        Version="3.0.0">
      <xsl:apply-templates/>
    </Events>
  </xsl:template>

  <xsl:template match="SomeEvent">
    <xsl:variable name="dateTime" select="SomeTime"/>
    <xsl:variable name="formattedDateTime" select="s:format-date($dateTime, 'dd/MM/yyyy:hh:mm:ss')"/>
    <xsl:if test="SomeAction = 'OPEN'">
      <Event>
        <EventTime>
          <TimeCreated>
            <xsl:value-of select="$formattedDateTime"/>
          </TimeCreated>
        </EventTime>
        <EventSource>
          <System>Example</System>
          <Environment>Example</Environment>
          <Generator>Very Simple Provider</Generator>
          <Device>
            <IPAddress>3.3.3.3</IPAddress>
          </Device>
          <User>
            <Id><xsl:value-of select="SomeUser"/></Id>
          </User>
        </EventSource>
        <EventDetail>
          <View>
            <Document>
              <Title>UNKNOWN</Title>
              <File>
                <Path><xsl:value-of select="SomeFile"/></Path>
              </File>
            </Document>
          </View>
        </EventDetail>
      </Event>
    </xsl:if>
  </xsl:template>
</xsl:stylesheet>
By including the following namespace:
xmlns:s="stroom"
E.g.
<?xml version="1.0" encoding="UTF-8" ?>
<xsl:stylesheet
    xmlns="event-logging:3"
    xmlns:s="stroom"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    version="2.0">
The following functions are available to aid your translation:
- bitmap-lookup(String map, String key) - Bitmap based look up against reference data map using the period start time
- bitmap-lookup(String map, String key, String time) - Bitmap based look up against reference data map using a specified time, e.g. the event time
- bitmap-lookup(String map, String key, String time, Boolean ignoreWarnings) - Bitmap based look up against reference data map using a specified time, e.g. the event time, and ignore any warnings generated by a failed lookup
- bitmap-lookup(String map, String key, String time, Boolean ignoreWarnings, Boolean trace) - Bitmap based look up against reference data map using a specified time, e.g. the event time, ignore any warnings generated by a failed lookup and get trace information for the path taken to resolve the lookup.
- classification() - The classification of the feed for the data being processed
- col-from() - The column in the input that the current record begins on (can be 0).
- col-to() - The column in the input that the current record ends at.
- current-time() - The current system time
- current-user() - The current user logged into Stroom (only relevant for interactive use, e.g. search)
- decode-url(String encodedUrl) - Decode the provided url.
- dictionary(String name) - Loads the contents of the named dictionary for use within the translation
- encode-url(String url) - Encode the provided url.
- feed-name() - Name of the feed for the data being processed
- format-date(String date, String pattern) - Format a date that uses the specified pattern using the default time zone
- format-date(String date, String pattern, String timeZone) - Format a date that uses the specified pattern with the specified time zone
- format-date(String date, String patternIn, String timeZoneIn, String patternOut, String timeZoneOut) - Parse a date with the specified input pattern and time zone and format the output with the specified output pattern and time zone
- format-date(String milliseconds) - Format a date that is specified as a number of milliseconds since a standard base time known as “the epoch”, namely January 1, 1970, 00:00:00 GMT
- get(String key) - Returns the value associated with a key that has been stored using put()
- hash(String value) - Hash a string value using the default SHA-256 algorithm and no salt
- hash(String value, String algorithm, String salt) - Hash a string value using the specified hashing algorithm and supplied salt value. Supported hashing algorithms include SHA-256, SHA-512 and MD5.
- hex-to-dec(String hex) - Convert hex to dec representation
- hex-to-oct(String hex) - Convert hex to oct representation
- json-to-xml(String json) - Returns an XML representation of the supplied JSON value for use in XPath expressions
- line-from() - The line in the input that the current record begins on (1 based).
- line-to() - The line in the input that the current record ends at.
- link(String url) - Creates a stroom dashboard table link.
- link(String title, String url) - Creates a stroom dashboard table link.
- link(String title, String url, String type) - Creates a stroom dashboard table link.
- log(String severity, String message) - Logs a message to the processing log with the specified severity
- lookup(String map, String key) - Look up a reference data map using the period start time
- lookup(String map, String key, String time) - Look up a reference data map using a specified time, e.g. the event time
- lookup(String map, String key, String time, Boolean ignoreWarnings) - Look up a reference data map using a specified time, e.g. the event time, and ignore any warnings generated by a failed lookup
- lookup(String map, String key, String time, Boolean ignoreWarnings, Boolean trace) - Look up a reference data map using a specified time, e.g. the event time, ignore any warnings generated by a failed lookup and get trace information for the path taken to resolve the lookup.
- meta(String key) - Lookup a meta data value for the current stream using the specified key. The key can be Feed, StreamType, CreatedTime, EffectiveTime, Pipeline or any other attribute supplied when the stream was sent to Stroom, e.g. meta(‘System’).
- numeric-ip(String ipAddress) - Convert an IP address to a numeric representation for range comparison
- parse-uri(String URI) - Returns an XML structure of the URI providing authority, fragment, host, path, port, query, scheme, schemeSpecificPart and userInfo components if present.
- part-no() - The current part within a multi part aggregated input stream (AKA the substream number) (1 based)
- pipeline-name() - Name of the current processing pipeline using the XSLT
- put(String key, String value) - Store a value for use later on in the translation
- random() - Get a system generated random number between 0 and 1.
- record-no() - The current record number within the current part (substream) (1 based).
- search-id() - Get the id of the batch search when a pipeline is processing as part of a batch search
- source() - Returns an XML structure with the stroom-meta namespace detailing the current source location.
- source-id() - Get the id of the current input stream that is being processed
- stream-id() - An alias for source-id included for backward compatibility.

The bitmap-lookup() function looks up a bitmap key against reference or context data, returning a value (which can be an XML node set) for each set bit position and adding it to the resultant XML.
bitmap-lookup(String map, String key)
bitmap-lookup(String map, String key, String time)
bitmap-lookup(String map, String key, String time, Boolean ignoreWarnings)
bitmap-lookup(String map, String key, String time, Boolean ignoreWarnings, Boolean trace)
- map - The name of the reference data map to perform the lookup against.
- key - The bitmap value to look up. This can either be represented as a decimal integer (e.g. 14) or as hexadecimal by prefixing with 0x (e.g. 0xE).
- time - Determines which set of reference data was effective at the requested time. If no reference data exists with an effective time before the requested time then the lookup will fail. Time is in the format yyyy-MM-dd'T'HH:mm:ss.SSSXX, e.g. 2010-01-01T00:00:00.000Z.
- ignoreWarnings - If true, any lookup failures will be ignored, else they will be reported as warnings.
- trace - If true, additional trace information is output as INFO messages.

If the lookup fails no result will be returned.
The key is a bitmap expressed as either a decimal integer or a hexadecimal value, e.g. `14`/`0xE`, which is `1110` as a binary bitmap. For each bit position that is set (i.e. has a binary value of `1`) a lookup will be performed using that bit position as the key. In this example, positions 1, 2 and 3 are set, so a lookup would be performed for each of these bit positions. The results of the lookups are concatenated together in bit position order, separated by a space. If `ignoreWarnings` is true then any lookup failures will be ignored and the value(s) for the bit positions it was able to look up will be returned.
This function can be useful when you have a set of values that can be represented as a bitmap and you need them to be converted back to individual values. For example, if you have a set of additive account permissions (e.g. Admin, ManageUsers, PerformExport, etc.), each of which is associated with a bit position, then a user’s permissions could be defined as a single decimal/hex bitmap value. Thus a bitmap lookup with this value would return all the permissions held by the user.
For example the reference data store may contain:
| Key (Bit position) | Value |
|---|---|
| 0 | Administrator |
| 1 | Manage_Users |
| 2 | Perform_Export |
| 3 | View_Data |
| 4 | Manage_Jobs |
| 5 | Delete_Data |
| 6 | Manage_Volumes |
The following are example lookups using the above reference data:
| Lookup Key (decimal) | Lookup Key (Hex) | Bitmap | Result |
|---|---|---|---|
| 0 | 0x0 | 0000000 | - |
| 1 | 0x1 | 0000001 | Administrator |
| 74 | 0x4A | 1001010 | Manage_Users View_Data Manage_Volumes |
| 2 | 0x2 | 0000010 | Manage_Users |
| 96 | 0x60 | 1100000 | Delete_Data Manage_Volumes |
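As an illustrative sketch (the map name `PERMISSIONS` and the variable `$formattedDateTime` are assumptions for this example, not names defined by this document), a bitmap lookup of the decimal key `74` might be performed in an XSLT like this:

```xml
<!-- 'PERMISSIONS' is a hypothetical map name; $formattedDateTime is
     assumed to hold the event time in yyyy-MM-dd'T'HH:mm:ss.SSSXX format -->
<Permissions>
  <xsl:value-of select="s:bitmap-lookup('PERMISSIONS', '74', $formattedDateTime)"/>
</Permissions>
```

Given reference data like the table above, the lookups for the set bit positions (1, 3 and 6) would be concatenated into a single space-separated value.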
The dictionary() function gets the contents of the specified dictionary for use during translation. The main use for this function is to abstract the management of a set of keywords away from the XSLT, so that users can make quick alterations to a dictionary used by a translation without needing to understand the complexities of XSLT.
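As a hedged sketch (the dictionary name `BlacklistedHosts` and the `$host` variable are illustrative assumptions, not names from this document), a dictionary could be used to test whether an event's host appears in a user-maintained keyword list:

```xml
<!-- 'BlacklistedHosts' is a hypothetical dictionary name;
     $host is assumed to hold the hostname extracted from the event -->
<xsl:variable name="blacklist" select="s:dictionary('BlacklistedHosts')"/>
<xsl:if test="contains($blacklist, $host)">
  <data name="Blacklisted" value="true"/>
</xsl:if>
```

Users can then add or remove hostnames from the dictionary without touching the XSLT itself.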
The format-date() function takes a Pattern and optional TimeZone arguments and replaces the parsed contents with an XML standard Date Format. The pattern must be a Java based SimpleDateFormat. If the optional TimeZone argument is present the pattern must not include the time zone pattern tokens (z and Z). A special time zone value of “GMT/BST” can be used to guess the time based on the date (BST during British Summer Time).
E.g. Convert a GMT date time “2009/08/01 12:34:11”
<xsl:value-of select="s:format-date('2009/08/01 12:34:11', 'yyyy/MM/dd HH:mm:ss')"/>
E.g. Convert a GMT or BST date time “2009/08/01 12:34:11”
<xsl:value-of select="s:format-date('2009/08/01 12:34:11', 'yyyy/MM/dd HH:mm:ss', 'GMT/BST')"/>
E.g. Convert a GMT+1:00 date time “2009/08/01 12:34:11”
<xsl:value-of select="s:format-date('2009/08/01 12:34:11', 'yyyy/MM/dd HH:mm:ss', 'GMT+1:00')"/>
E.g. Convert a date time specified as milliseconds since the epoch “1269270011640”
<xsl:value-of select="s:format-date('1269270011640')"/>
The time zone must be as per the rules defined in SimpleDateFormat under General Time Zone syntax.
Create a string that represents a hyperlink for display in a dashboard table.
link(url)
link(title, url)
link(title, url, type)
Example
link('http://www.somehost.com/somepath')
> [http://www.somehost.com/somepath](http://www.somehost.com/somepath)
link('Click Here','http://www.somehost.com/somepath')
> [Click Here](http://www.somehost.com/somepath)
link('Click Here','http://www.somehost.com/somepath', 'dialog')
> [Click Here](http://www.somehost.com/somepath){dialog}
link('Click Here','http://www.somehost.com/somepath', 'dialog|Dialog Title')
> [Click Here](http://www.somehost.com/somepath){dialog|Dialog Title}
Type can be one of:

- `dialog` : Display the content of the link URL within a stroom popup dialog.
- `tab` : Display the content of the link URL within a stroom tab.
- `browser` : Display the content of the link URL within a new browser tab.
- `dashboard` : Used to launch a stroom dashboard internally with parameters in the URL.

If you wish to override the default title or URL of the target link in either a tab or dialog you can. Both `dialog` and `tab` types allow titles to be specified after a `|`, e.g. `dialog|My Title`.
The log() function writes a message to the processing log with the specified severity. Severities of INFO, WARN, ERROR and FATAL can be used. Severities of ERROR and FATAL will result in records being omitted from the output if a RecordOutputFilter is used in the pipeline. The counts for RecWarn and RecError will be affected by warnings or errors generated in this way, so this function is useful for adding business rules to XML output.
E.g. Warn if a SID is not the correct length.
<xsl:if test="string-length($sid) != 7">
<xsl:value-of select="s:log('WARN', concat($sid, ' is not the correct length'))"/>
</xsl:if>
The lookup() function looks up from reference or context data a value (which can be an XML node set) and adds it to the resultant XML.
lookup(String map, String key)
lookup(String map, String key, String time)
lookup(String map, String key, String time, Boolean ignoreWarnings)
lookup(String map, String key, String time, Boolean ignoreWarnings, Boolean trace)
- `map` - The name of the reference data map to perform the lookup against.
- `key` - The key to lookup. The key can be a simple string, an integer value in a numeric range or a nested lookup key.
- `time` - Determines which set of reference data was effective at the requested time. If no reference data exists with an effective time before the requested time then the lookup will fail. Time is in the format `yyyy-MM-dd'T'HH:mm:ss.SSSXX`, e.g. `2010-01-01T00:00:00.000Z`.
- `ignoreWarnings` - If true, any lookup failures will be ignored, else they will be reported as warnings.
- `trace` - If true, additional trace information is output as INFO messages.

If the lookup fails no result will be returned. By testing the result, a default value may be output when no result is returned.
E.g. Look up a SID given a PF
<xsl:variable name="pf" select="PFNumber"/>
<xsl:if test="$pf">
<xsl:variable name="sid" select="s:lookup('PF_TO_SID', $pf, $formattedDateTime)"/>
<xsl:choose>
<xsl:when test="$sid">
<User>
<Id><xsl:value-of select="$sid"/></Id>
</User>
</xsl:when>
<xsl:otherwise>
<data name="PFNumber">
<xsl:attribute name="Value"><xsl:value-of select="$pf"/></xsl:attribute>
</data>
</xsl:otherwise>
</xsl:choose>
</xsl:if>
Reference data entries can either be stored with a single string key or a key range that defines a numeric range, e.g. 1-100. When a lookup is performed the passed key is looked up as if it were a normal string key. If that lookup fails Stroom will try to convert the key to an integer (long) value. If it can be converted to an integer then a second lookup will be performed against entries with key ranges to see if there is a key range that includes the requested key.
Range lookups can be used for looking up an IP address where the reference data values are associated with ranges of IP addresses.
In this use case, the IP address must first be converted into a numeric value using `numeric-ip()`, e.g.:

stroom:lookup('IP_TO_LOCATION', numeric-ip($ipAddress))
Similarly the reference data must be stored with key ranges whose bounds were created using this function.
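As a sketch of what such a range entry might look like, the fragment below assumes a `range` element with `from`/`to` bounds inside a `reference` entry; the exact element names should be verified against the `reference-data:2` schema, and the map name and bounds here are illustrative only:

```xml
<!-- Element names assumed, verify against the reference-data:2 schema.
     Bounds are numeric-ip() values: 3232235520 = 192.168.0.0,
     3232235775 = 192.168.0.255 -->
<reference>
  <map>IP_TO_LOCATION</map>
  <range>
    <from>3232235520</from>
    <to>3232235775</to>
  </range>
  <value>London Data Centre</value>
</reference>
```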
The lookup function allows you to perform chained lookups using nested maps.
For example you may have a reference data map called USER_ID_TO_LOCATION that maps user IDs to some location information for that user and a map called USER_ID_TO_MANAGER that maps user IDs to the user ID of their manager.
If you wanted to decorate a user’s event with the location of their manager you could use a nested map to achieve the lookup chain.
To perform the lookup, set the `map` argument to the list of maps in the lookup chain, separated by a `/`, e.g. `USER_ID_TO_MANAGER/USER_ID_TO_LOCATION`.
This will perform a lookup against the first map in the list using the requested key.
If a value is found the value will be used as the key in a lookup against the next map.
The value from each map lookup is used as the key in the next map all the way down the chain.
The value from the last lookup is then returned as the result of the `lookup()` call.
If no value is found at any point in the chain then that results in no value being returned from the function.
In order to use nested map lookups each intermediate map must contain simple string values. The last map in the chain can either contain string values or XML fragment values.
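Using the map names described above, a chained lookup decorating an event with the manager's location might look like this (a sketch; `$userId` and `$formattedDateTime` are assumed to have been set earlier in the translation):

```xml
<!-- Chained lookup: user ID -> manager's user ID -> manager's location.
     $userId and $formattedDateTime are assumed variables. -->
<ManagerLocation>
  <xsl:copy-of select="s:lookup('USER_ID_TO_MANAGER/USER_ID_TO_LOCATION', $userId, $formattedDateTime)"/>
</ManagerLocation>
```

`xsl:copy-of` is used here rather than `xsl:value-of` so that an XML fragment value from the final map is copied intact.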
You can put values into a map using the put() function. These values can then be retrieved later using the get() function. Values are stored against a key name so that multiple values can be stored. These functions can be used for many purposes but are most commonly used to count a number of records that meet certain criteria.
An example of how to count records is shown below:
<!-- Get the current record count -->
<xsl:variable name="currentCount" select="number(s:get('count'))" />
<!-- Increment the record count -->
<xsl:variable name="count">
<xsl:choose>
<xsl:when test="$currentCount">
<xsl:value-of select="$currentCount + 1" />
</xsl:when>
<xsl:otherwise>
<xsl:value-of select="1" />
</xsl:otherwise>
</xsl:choose>
</xsl:variable>
<!-- Store the count for future retrieval -->
<xsl:value-of select="s:put('count', $count)" />
<!-- Output the new count -->
<data name="Count">
<xsl:attribute name="Value" select="$count" />
</data>
The parse-uri() function takes a Uniform Resource Identifier (URI) in string form and returns an XML node with a namespace of `uri` containing the URI’s individual components of `authority`, `fragment`, `host`, `path`, `port`, `query`, `scheme`, `schemeSpecificPart` and `userInfo`. See either RFC 2396: Uniform Resource Identifiers (URI): Generic Syntax or Java’s java.net.URI Class for details regarding the components.
The following xml
<!-- Display and parse the URI contained within the text of the rURI element -->
<xsl:variable name="u" select="s:parse-uri(rURI)" />
<URI>
<xsl:value-of select="rURI" />
</URI>
<URIDetail>
<xsl:copy-of select="$u"/>
</URIDetail>
given the rURI text contains
http://foo:bar@w1.superman.com:8080/very/long/path.html?p1=v1&p2=v2#more-details
would provide
<URI>http://foo:bar@w1.superman.com:8080/very/long/path.html?p1=v1&p2=v2#more-details</URI>
<URIDetail>
<authority xmlns="uri">foo:bar@w1.superman.com:8080</authority>
<fragment xmlns="uri">more-details</fragment>
<host xmlns="uri">w1.superman.com</host>
<path xmlns="uri">/very/long/path.html</path>
<port xmlns="uri">8080</port>
<query xmlns="uri">p1=v1&p2=v2</query>
<scheme xmlns="uri">http</scheme>
<schemeSpecificPart xmlns="uri">//foo:bar@w1.superman.com:8080/very/long/path.html?p1=v1&p2=v2</schemeSpecificPart>
<userInfo xmlns="uri">foo:bar</userInfo>
</URIDetail>
You can use an XSLT import to include XSLT from another translation. E.g.
<xsl:import href="ApacheAccessCommon" />
This would include the XSLT from the ApacheAccessCommon translation.