3.2 - XSLT Functions
Custom XSLT functions available in Stroom.
These functions are made available to a stylesheet by declaring the following namespace:
xmlns:stroom="stroom"
E.g.
<?xml version="1.0" encoding="UTF-8" ?>
<xsl:stylesheet
xmlns="event-logging:3"
xmlns:stroom="stroom"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
version="2.0">
The following functions are available to aid your translation:
bitmap-lookup(String map, String key)
- Bitmap based look up against reference data map using the period start time
bitmap-lookup(String map, String key, String time)
- Bitmap based look up against reference data map using a specified time, e.g. the event time
bitmap-lookup(String map, String key, String time, Boolean ignoreWarnings)
- Bitmap based look up against reference data map using a specified time, e.g. the event time, and ignore any warnings generated by a failed lookup
bitmap-lookup(String map, String key, String time, Boolean ignoreWarnings, Boolean trace)
- Bitmap based look up against reference data map using a specified time, e.g. the event time, and ignore any warnings generated by a failed lookup and get trace information for the path taken to resolve the lookup.
cidr-to-numeric-ip-range()
- Converts a CIDR IP address range to an array of numeric IP addresses representing the start and end addresses of the range.
classification()
- The classification of the feed for the data being processed
col-from()
- The column in the input that the current record begins on (can be 0).
col-to()
- The column in the input that the current record ends at.
current-time()
- The current system time
current-unixTime()
- The current system time shown as milliseconds since the epoch
current-user()
- The current user logged into Stroom (only relevant for interactive use, e.g. search)
decode-url(String encodedUrl)
- Decode the provided url.
dictionary(String name)
- Loads the contents of the named dictionary for use within the translation
encode-url(String url)
- Encode the provided url.
feed-attribute(String attributeKey)
- NOTE: This function is deprecated, use meta(String key) instead. The value for the supplied feed attributeKey.
feed-name()
- Name of the feed for the data being processed
fetch-json(String url)
- Simplistic version of http-call that sends a request to the passed url and converts the JSON response body to XML using json-to-xml. Currently does not support SSL configuration like http-call does.
format-date(String date, String pattern)
- Format a date that uses the specified pattern using the default time zone
format-date(String date, String pattern, String timeZone)
- Format a date that uses the specified pattern with the specified time zone
format-date(String date, String patternIn, String timeZoneIn, String patternOut, String timeZoneOut)
- Parse a date with the specified input pattern and time zone and format the output with the specified output pattern and time zone
format-date(String milliseconds)
- Format a date that is specified as a number of milliseconds since a standard base time known as “the epoch”, namely January 1, 1970, 00:00:00 GMT
format-dateTime(DateTime dateTime)
- Format a dateTime with the default pattern
format-dateTime(DateTime dateTime, String pattern)
- Format a dateTime with the specified pattern
format-dateTime(DateTime dateTime, String pattern, String timeZone)
- Format a dateTime with the specified pattern and time zone
from-unixTime(Integer milliseconds)
- Returns the specified number of milliseconds since the epoch as a dateTime
get(String key)
- Returns the value associated with a key that has been stored in a map using the put() function. The map is in the scope of the current pipeline process so values do not live after the stream has been processed.
hash(String value)
- Hash a string value using the default SHA-256 algorithm and no salt
hash(String value, String algorithm, String salt)
- Hash a string value using the specified hashing algorithm and supplied salt value. Supported hashing algorithms include SHA-256, SHA-512 and MD5.
hex-to-dec(String hex)
- Convert hex to dec representation.
hex-to-oct(String hex)
- Convert hex to oct representation.
hex-to-string(String hex, String textEncoding)
- Convert hex to string using the specified text encoding.
host-address(String hostname)
- Convert a hostname into an IP address.
host-name(String ipAddress)
- Convert an IP address into a hostname.
http-call(String url, String headers, String mediaType, String data, String clientConfig)
- Makes an HTTP(S) request to a remote server.
ip-in-cidr(String ipAddress, String cidr)
- Return whether an IPv4 address is within the specified CIDR (e.g. 192.168.1.0/24).
json-to-xml(String json)
- Returns an XML representation of the supplied JSON value for use in XPath expressions
line-from()
- The line in the input that the current record begins on (1 based).
line-to()
- The line in the input that the current record ends at.
link(String url)
- Creates a stroom dashboard table link.
link(String title, String url)
- Creates a stroom dashboard table link.
link(String title, String url, String type)
- Creates a stroom dashboard table link.
log(String severity, String message)
- Logs a message to the processing log with the specified severity
lookup(String map, String key)
- Look up a reference data map using the period start time
lookup(String map, String key, String time)
- Look up a reference data map using a specified time, e.g. the event time
lookup(String map, String key, String time, Boolean ignoreWarnings)
- Look up a reference data map using a specified time, e.g. the event time, and ignore any warnings generated by a failed lookup
lookup(String map, String key, String time, Boolean ignoreWarnings, Boolean trace)
- Look up a reference data map using a specified time, e.g. the event time, ignore any warnings generated by a failed lookup and get trace information for the path taken to resolve the lookup.
meta(String key)
- Lookup a meta data value for the current stream using the specified key. The key can be Feed, StreamType, CreatedTime, EffectiveTime, Pipeline or any other attribute supplied when the stream was sent to Stroom, e.g. meta('System').
meta-keys()
- Returns an array of meta keys for the current stream. Each key can then be used to retrieve its corresponding meta value, by calling meta($key).
numeric-ip(String ipAddress)
- Convert an IP address to a numeric representation for range comparison
part-no()
- The current part within a multi part aggregated input stream (AKA the substream number) (1 based)
parse-dateTime(String dateTime)
- Returns the dateTime of a specified ISO 8601 formatted string
parse-dateTime(String dateTime, String pattern)
- Returns the dateTime for a specified string using the pattern
parse-dateTime(String dateTime, String pattern, String timeZone)
- Returns the dateTime for a specified string using the pattern and time zone
parse-uri(String URI)
- Returns an XML structure of the URI providing the authority, fragment, host, path, port, query, scheme, schemeSpecificPart and userInfo components if present.
pipeline-name()
- Get the name of the pipeline currently processing the stream.
pointIsInsideXYPolygon(Number xPos, Number yPos, Number[] xPolyData, Number[] yPolyData)
- Return whether the point (xPos, yPos) lies inside the polygon defined by the xPolyData and yPolyData coordinate arrays.
random()
- Get a system generated random number between 0 and 1.
record-no()
- The current record number within the current part (substream) (1 based).
search-id()
- Get the id of the batch search when a pipeline is processing as part of a batch search
source()
- Returns an XML structure with the stroom-meta namespace detailing the current source location.
source-id()
- Get the id of the current input stream that is being processed
stream-id()
- An alias for source-id included for backward compatibility.
to-unixTime(DateTime dateTime)
- Returns milliseconds since the epoch for a specified dateTime
put(String key, String value)
- Store a value for use later on in the translation
bitmap-lookup()
The bitmap-lookup() function looks up a bitmap key from reference or context data and, for each set bit position, retrieves a value (which can be an XML node set) and adds it to the resultant XML.
bitmap-lookup(String map, String key)
bitmap-lookup(String map, String key, String time)
bitmap-lookup(String map, String key, String time, Boolean ignoreWarnings)
bitmap-lookup(String map, String key, String time, Boolean ignoreWarnings, Boolean trace)
map
- The name of the reference data map to perform the lookup against.
key
- The bitmap value to lookup. This can either be represented as a decimal integer (e.g. 14) or as hexadecimal by prefixing with 0x (e.g. 0xE).
time
- Determines which set of reference data was effective at the requested time. If no reference data exists with an effective time before the requested time then the lookup will fail. Time is in the format yyyy-MM-dd'T'HH:mm:ss.SSSXX, e.g. 2010-01-01T00:00:00.000Z.
ignoreWarnings
- If true, any lookup failures will be ignored, else they will be reported as warnings.
trace
- If true, additional trace information is output as INFO messages.
If the look up fails no result will be returned.
The key is a bitmap expressed as either a decimal integer or a hexadecimal value, e.g. 14/0xE is 1110 as a binary bitmap. For each bit position that is set (i.e. has a binary value of 1) a lookup will be performed using that bit position as the key. In this example, positions 1, 2 & 3 are set so a lookup would be performed for these bit positions. The results of each lookup for the bitmap are concatenated together in bit position order, separated by a space. If ignoreWarnings is true then any lookup failures will be ignored and it will return the value(s) for the bit positions it was able to lookup.
This function can be useful when you have a set of values that can be represented as a bitmap and you need them to be converted back to individual values.
For example, if you have a set of additive account permissions (e.g. Admin, ManageUsers, PerformExport, etc.), each of which is associated with a bit position, then a user's permissions could be defined as a single decimal/hex bitmap value.
Thus a bitmap lookup with this value would return all the permissions held by the user.
For example the reference data store may contain:
| Key (Bit position) | Value          |
| ------------------ | -------------- |
| 0                  | Administrator  |
| 1                  | Manage_Users   |
| 2                  | Perform_Export |
| 3                  | View_Data      |
| 4                  | Manage_Jobs    |
| 5                  | Delete_Data    |
| 6                  | Manage_Volumes |
The following are example lookups using the above reference data:
| Lookup Key (decimal) | Lookup Key (Hex) | Bitmap  | Result                                |
| -------------------- | ---------------- | ------- | ------------------------------------- |
| 0                    | 0x0              | 0000000 | -                                     |
| 1                    | 0x1              | 0000001 | Administrator                         |
| 74                   | 0x4A             | 1001010 | Manage_Users View_Data Manage_Volumes |
| 2                    | 0x2              | 0000010 | Manage_Users                          |
| 96                   | 0x60             | 1100000 | Delete_Data Manage_Volumes            |
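The bit decomposition described above can be sketched in Python (illustrative only; the reference map mirrors the example table and this is not Stroom's implementation):

```python
# The reference map below mirrors the example table; each set bit
# position triggers a lookup, and the results are concatenated in
# bit-position order, separated by a space.
REF_MAP = {
    0: "Administrator",
    1: "Manage_Users",
    2: "Perform_Export",
    3: "View_Data",
    4: "Manage_Jobs",
    5: "Delete_Data",
    6: "Manage_Volumes",
}

def bitmap_lookup(key: int) -> str:
    # Find every bit position with a binary value of 1.
    positions = [pos for pos in range(key.bit_length()) if key >> pos & 1]
    return " ".join(REF_MAP[pos] for pos in positions)

print(bitmap_lookup(74))  # 0x4A = 1001010 -> 'Manage_Users View_Data Manage_Volumes'
```

This matches the 74/0x4A row of the example table: bits 1, 3 and 6 are set.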
cidr-to-numeric-ip-range()
Converts a CIDR IP address range to an array of numeric IP addresses representing the start and end (broadcast) of the range.
When storing the result in a variable, ensure you indicate the type as a string array (xs:string*
), as shown in the below example.
Example XSLT
<xsl:variable name="range" select="stroom:cidr-to-numeric-ip-range('192.168.1.0/24')" as="xs:string*" />
<Range>
<Start><xsl:value-of select="$range[1]" /></Start>
<End><xsl:value-of select="$range[2]" /></End>
</Range>
Example output
<Range>
<Start>3232235776</Start>
<End>3232236031</End>
</Range>
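The arithmetic behind this conversion can be sketched in Python using the standard ipaddress module (illustrative, not Stroom code):

```python
import ipaddress

def cidr_to_numeric_ip_range(cidr: str):
    # The start is the network address and the end is the broadcast
    # address, both expressed as unsigned integers.
    net = ipaddress.ip_network(cidr)
    return int(net.network_address), int(net.broadcast_address)

print(cidr_to_numeric_ip_range("192.168.1.0/24"))  # (3232235776, 3232236031)
```

The printed pair matches the Start/End values in the example output above.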
dictionary()
The dictionary() function gets the contents of the specified dictionary for use during translation.
Its main use is to abstract the management of a set of keywords away from the XSLT, so that users can quickly alter a dictionary used by the XSLT without needing to understand the complexities of XSLT itself.
format-date()
The format-date() function combines parsing and formatting of date strings.
In its simplest form it will parse a date string and return the parsed date in the XML standard Date Format.
It also supports supplying a custom format pattern to output the parsed date in a specified format.
Function Signatures
The following are the possible forms of the format-date
function.
<!-- Convert time in millis to standard date format -->
format-date(long millisSinceEpoch)
<!-- Convert inputDate to standard date format -->
format-date(String inputDate, String inputPattern)
<!-- Convert inputDate to standard date format using specified input time zone -->
format-date(String inputDate, String inputPattern, String inputTimeZone)
<!-- Convert inputDate to a custom date format using optional input time zone inputTimeZone -->
format-date(String inputDate, String inputPattern, String inputTimeZone, String outputPattern)
<!-- Convert inputDate to a custom date format using optional input time zone and a specified output time zone -->
format-date(String inputDate, String inputPattern, String inputTimeZone, String outputPattern, String outputTimeZone)
millisSinceEpoch
- The date/time expressed as the number of milliseconds since the UNIX epoch.
inputDate
- The input date string, e.g. 2009/08/01 12:34:11.
inputPattern
- The pattern that defines the structure of inputDate (see Custom Date Formats).
inputTimeZone
- Optional time zone of the inputDate. If null then the UTC/Zulu time zone will be used. If inputTimeZone is present, the inputPattern must not include the time zone pattern tokens (z and Z).
outputPattern
- The pattern that defines the format of the output date (see Custom Date Formats).
outputTimeZone
- Optional time zone of the output date. If null then the UTC/Zulu time zone will be used.
Time Zones
The following is a list of some common time zone values:
| Values                                           | Zone Name                                                      |
| ------------------------------------------------ | -------------------------------------------------------------- |
| GMT/BST                                          | A Stroom specific value for UK daylight saving time (see below) |
| UTC, UCT, Zulu, Universal, +00:00, -00:00, +00, +0 | Coordinated Universal Time (UTC)                              |
| GMT, GMT0, Greenwich                             | Greenwich Mean Time (GMT)                                      |
| GB, GB-Eire, Europe/London                       | British Time                                                   |
| NZ, Pacific/Auckland                             | New Zealand Time                                               |
| Australia/Canberra, Australia/Sydney             | Eastern Australia Time                                         |
| CET                                              | Central European Time                                          |
| EET                                              | Eastern European Time                                          |
| Canada/Atlantic                                  | Atlantic Time                                                  |
| Canada/Central                                   | Central Time                                                   |
| Canada/Pacific                                   | Pacific Time                                                   |
| US/Central                                       | Central Time                                                   |
| US/Eastern                                       | Eastern Time                                                   |
| US/Mountain                                      | Mountain Time                                                  |
| US/Pacific                                       | Pacific Time                                                   |
| +02:00, +02, +2                                  | UTC +2hrs                                                      |
| -03:00, -03, -3                                  | UTC -3hrs                                                      |
A special time zone value of GMT/BST can be used when the inputDate is in local wall clock time with no time zone information. In this case, the date/time will be used to determine whether the date is in British Summer Time or in GMT and adjust the output accordingly. See the examples below.
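The GMT/BST behaviour can be approximated in Python with the Europe/London zone (an illustration of the rule only, not Stroom code; assumes tzdata is available on the system):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # requires tzdata to be available

def to_utc(wall_clock: datetime) -> datetime:
    # Treat the naive wall-clock time as Europe/London, which switches
    # between GMT and BST, then normalise the result to UTC.
    return wall_clock.replace(tzinfo=ZoneInfo("Europe/London")).astimezone(timezone.utc)

# June is in British Summer Time, so the clock is one hour ahead of UTC.
print(to_utc(datetime(2009, 6, 1, 12, 34, 11)))  # 2009-06-01 11:34:11+00:00
# February is in GMT, so the wall-clock time is already UTC.
print(to_utc(datetime(2009, 2, 1, 12, 34, 11)))  # 2009-02-01 12:34:11+00:00
```

These two cases correspond to the GMT/BST parsing examples further down.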
Parsing Examples
The following table shows various examples of calls to stroom:format-date()
with their output.
The stroom:format-date
part has been omitted for brevity.
<!-- Date in millis since UNIX epoch -->
stroom:format-date('1269270011640')
-> '2010-03-22T15:00:11.640Z'
<!-- Simple date UK style date -->
stroom:format-date('29/08/24', 'dd/MM/yy')
-> '2024-08-29T00:00:00.000Z'
<!-- Simple date US style date -->
stroom:format-date('08/29/24', 'MM/dd/yy')
-> '2024-08-29T00:00:00.000Z'
<!-- ISO date with no delimiters -->
stroom:format-date('20010801184559', 'yyyyMMddHHmmss')
-> '2001-08-01T18:45:59.000Z'
<!-- Standard output, no TZ -->
stroom:format-date('2001/08/01 18:45:59', 'yyyy/MM/dd HH:mm:ss')
-> '2001-08-01T18:45:59.000Z'
<!-- Standard output, date only, with TZ -->
stroom:format-date('2001/08/01', 'yyyy/MM/dd', '-07:00')
-> '2001-08-01T07:00:00.000Z'
<!-- Standard output, with TZ -->
stroom:format-date('2001/08/01 01:00:00', 'yyyy/MM/dd HH:mm:ss', '-08:00')
-> '2001-08-01T09:00:00.000Z'
<!-- Standard output, with TZ -->
stroom:format-date('2001/08/01 01:00:00', 'yyyy/MM/dd HH:mm:ss', '+01:00')
-> '2001-08-01T00:00:00.000Z'
<!-- Single digit day and month, no padding -->
stroom:format-date('2001 8 1', 'yyyy MM dd')
-> '2001-08-01T00:00:00.000Z'
<!-- Double digit day and month, no padding -->
stroom:format-date('2001 12 28', 'yyyy MM dd')
-> '2001-12-28T00:00:00.000Z'
<!-- Single digit day and month, with optional padding -->
stroom:format-date('2001 8 1', 'yyyy ppMM ppdd')
-> '2001-08-01T00:00:00.000Z'
<!-- Double digit day and month, with optional padding -->
stroom:format-date('2001 12 31', 'yyyy ppMM ppdd')
-> '2001-12-31T00:00:00.000Z'
<!-- With abbreviated day of week month -->
stroom:format-date('Wed Aug 14 2024', 'EEE MMM dd yyyy')
-> '2024-08-14T00:00:00.000Z'
<!-- With long form day of week and month -->
stroom:format-date('Wednesday August 14 2024', 'EEEE MMMM dd yyyy')
-> '2024-08-14T00:00:00.000Z'
<!-- With 12 hour clock, AM -->
stroom:format-date('Wed Aug 14 2024 10:32:58 AM', 'E MMM dd yyyy hh:mm:ss a')
-> '2024-08-14T10:32:58.000Z'
<!-- With 12 hour clock, PM (lower case) -->
stroom:format-date('Wed Aug 14 2024 10:32:58 pm', 'E MMM dd yyyy hh:mm:ss a')
-> '2024-08-14T22:32:58.000Z'
<!-- Using minimal symbols -->
stroom:format-date('2001 12 31 22:58:32.123', 'y M d H:m:s.S')
-> '2001-12-31T22:58:32.123Z'
<!-- Optional time portion, with time -->
stroom:format-date('2001/12/31 22:58:32.123', 'yyyy/MM/dd[ HH:mm:ss.SSS]')
-> '2001-12-31T22:58:32.123Z'
<!-- Optional time portion, without time -->
stroom:format-date('2001/12/31', 'yyyy/MM/dd[ HH:mm:ss.SSS]')
-> '2001-12-31T00:00:00.000Z'
<!-- Optional millis portion, with millis -->
stroom:format-date('2001/12/31 22:58:32.123', 'yyyy/MM/dd HH:mm:ss[.SSS]')
-> '2001-12-31T22:58:32.123Z'
<!-- Optional millis portion, without millis -->
stroom:format-date('2001/12/31 22:58:32', 'yyyy/MM/dd HH:mm:ss[.SSS]')
-> '2001-12-31T22:58:32.000Z'
<!-- Optional millis/nanos portion, with nanos -->
stroom:format-date('2001/12/31 22:58:32.123456', 'yyyy/MM/dd HH:mm:ss[.SSS]')
-> '2001-12-31T22:58:32.123Z'
<!-- Fixed text -->
stroom:format-date('Date: 2001/12/31 Time: 22:58:32.123', "'Date: 'yyyy/MM/dd 'Time: 'HH:mm:ss.SSS")
-> '2001-12-31T22:58:32.123Z'
<!-- GMT/BST date that is BST -->
stroom:format-date('2009/06/01 12:34:11', 'yyyy/MM/dd HH:mm:ss', 'GMT/BST')
-> '2009-06-01T11:34:11.000Z'
<!-- GMT/BST date that is GMT -->
stroom:format-date('2009/02/01 12:34:11', 'yyyy/MM/dd HH:mm:ss', 'GMT/BST')
-> '2009-02-01T12:34:11.000Z'
<!-- Time zone offset -->
stroom:format-date('2009/02/01 12:34:11', 'yyyy/MM/dd HH:mm:ss', '+01:00')
-> '2009-02-01T11:34:11.000Z'
<!-- Named timezone -->
stroom:format-date('2009/02/01 23:34:11', 'yyyy/MM/dd HH:mm:ss', 'US/Eastern')
-> '2009-02-02T04:34:11.000Z'
Note
Parsing is done in lenient mode, so the count of each symbol is not critical, e.g. you can parse the year 2024 with y, yy, yyy or yyyy. Despite this, it is advisable to use a pattern that matches the known format of the input dates (in this example yyyy) to avoid confusing anyone else reading your XSLT. The count of each symbol is, however, critical when it comes to formatting.
<!-- Specific output, no input or output TZ -->
stroom:format-date('2001/08/01 14:30:59', 'yyyy/MM/dd HH:mm:ss', null, "E dd MMM yyyy HH:mm (s 'secs')")
-> 'Wed 01 Aug 2001 14:30 (59 secs)'
<!-- Specific output, UTC input, no output TZ -->
stroom:format-date('2001/08/01 14:30:59', 'yyyy/MM/dd HH:mm:ss', 'UTC', "E dd MMM yyyy HH:mm (s 'secs')")
-> 'Wed 01 Aug 2001 14:30 (59 secs)'
<!-- Specific output, no output TZ -->
stroom:format-date('2001/08/01 14:30:59', 'yyyy/MM/dd HH:mm:ss', '+01:00', "E dd MMM yyyy HH:mm (s 'secs')")
-> 'Wed 01 Aug 2001 13:30 (59 secs)'
<!-- Specific output, with input and output TZ -->
stroom:format-date('2001/08/01 14:30:59', 'yyyy/MM/dd HH:mm:ss', '+01:00', 'E dd MMM yyyy HH:mm', '+02:00')
-> 'Wed 01 Aug 2001 15:30'
<!-- Padded 12 hour clock output -->
stroom:format-date('2001/08/01 14:07:05.123', 'yyyy/MM/dd HH:mm:ss.SSS', 'UTC', 'E dd MMM yyyy pph:ppm:pps a')
-> 'Wed 01 Aug 2001 2: 7: 5 PM'
<!-- Padded 12 hour clock output -->
stroom:format-date('2001/08/01 22:27:25.123', 'yyyy/MM/dd HH:mm:ss.SSS', 'UTC', 'E dd MMM yyyy pph:ppm:pps a')
-> 'Wed 01 Aug 2001 10:27:25 PM'
<!-- Non-Padded 12 hour clock output -->
stroom:format-date('2001/08/01 14:07:05.123', 'yyyy/MM/dd HH:mm:ss.SSS', 'UTC', 'E dd MMM yyyy h:m:s a')
-> 'Wed 01 Aug 2001 2:7:5 PM'
<!-- Long form text -->
stroom:format-date('2001/08/01 14:07:05.123', 'yyyy/MM/dd HH:mm:ss.SSS', 'UTC', 'EEEE d MMMM yyyy HH:mm:ss')
-> 'Wednesday 1 August 2001 14:07:05'
Reference Time
When parsing a date string that does not contain a full zoned date and time, certain assumptions will be made. If there is no time zone in inputDate and no inputTimeZone argument has been passed, then the input date will be assumed to be in the UTC time zone.
If any of the date parts are not present, e.g. an input of 28 Oct, then Stroom will use a reference date to fill in the gaps. The reference date is the first of these values that is non-null:
- The create time of the stream being processed by the XSLT.
- The current time, i.e. now().
For example, for a call of stroom:format-date('28 Oct', 'dd MMM') and a stream created in 2024, it will return 2024-10-28T00:00:00.000Z.
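The reference-date rule can be sketched in Python (illustrative; parse_with_reference is a hypothetical helper, not a Stroom function):

```python
from datetime import datetime

def parse_with_reference(day_month: str, reference: datetime) -> datetime:
    # Parse a partial date such as '28 Oct' and borrow the missing
    # year from the reference date (e.g. the stream create time).
    parsed = datetime.strptime(day_month, "%d %b")
    return parsed.replace(year=reference.year)

ref = datetime(2024, 12, 1)  # stands in for the stream create time
print(parse_with_reference("28 Oct", ref))  # 2024-10-28 00:00:00
```

Only the missing parts (here, the year) come from the reference date; the day and month come from the input.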
format-dateTime()
Formats the dateTime as a string according to the specified pattern and time zone.
Function Signatures
The following are the possible forms of the format-dateTime
function.
<!-- Format dateTime to standard date format -->
format-dateTime(DateTime dateTime)
<!-- Format dateTime to a custom date format-->
format-dateTime(DateTime dateTime, String pattern)
<!-- Format dateTime to a custom date format using the specified time zone -->
format-dateTime(DateTime dateTime, String pattern, String timeZone)
dateTime
- The input dateTime.
pattern
- The pattern that defines the format of the output string (see Custom Date Formats).
timeZone
- Optional time zone of the output. If null then the UTC/Zulu time zone will be used.
Examples
<!-- Default format -->
stroom:format-dateTime('xs:dateTime("2024-08-29T00:00:00Z")')
-> '2024-08-29T00:00:00.000Z'
<!-- Default format +2hr zone offset -->
stroom:format-dateTime('xs:dateTime("2001-08-01T18:45:59.123+02:00")')
-> '2001-08-01T16:45:59.123Z'
<!-- Default format +2hr30min zone offset -->
stroom:format-dateTime('xs:dateTime("2001-08-01T18:45:59.123+02:30")')
-> '2001-08-01T16:15:59.123Z'
<!-- Default format -3hr zone offset -->
stroom:format-dateTime('xs:dateTime("2001-08-01T18:45:59.123-03:00")')
-> '2001-08-01T21:45:59.123Z'
<!-- Simple date format UK style date -->
stroom:format-dateTime('xs:dateTime("2024-08-29T00:00:00Z")', 'dd/MM/yy')
-> '29/08/24'
<!-- Simple date format US style date -->
stroom:format-dateTime('xs:dateTime("2024-08-29T00:00:00Z")', 'MM/dd/yy')
-> '08/29/24'
<!-- With no delimiters -->
stroom:format-dateTime('xs:dateTime("2001-08-01T18:45:59Z")', 'yyyyMMddHHmmss')
-> '20010801184559'
<!-- Standard output, no TZ -->
stroom:format-dateTime('xs:dateTime("2001-08-01T18:45:59Z")', 'yyyy/MM/dd HH:mm:ss')
-> '2001/08/01 18:45:59'
<!-- Format with nanos -->
stroom:format-dateTime('xs:dateTime("2010-01-01T23:59:59.123456Z")', "yyyy-MM-dd'T'HH:mm:ss.SSSSSSXX")
-> '2010-01-01T23:59:59.123456Z'
<!-- Standard output, with TZ -->
stroom:format-dateTime('xs:dateTime("2001-08-01T09:00:00Z")', 'yyyy/MM/dd HH:mm:ss', '-08:00')
-> '2001/08/01 01:00:00'
<!-- Standard output, with TZ -->
stroom:format-dateTime('xs:dateTime("2001-08-01T00:00:00Z")', 'yyyy/MM/dd HH:mm:ss', '+01:00')
-> '2001/08/01 01:00:00'
<!-- GMT/BST date that is BST -->
stroom:format-dateTime('xs:dateTime("2009-06-01T11:34:11Z")', 'yyyy/MM/dd HH:mm:ss', 'GMT/BST')
-> '2009/06/01 12:34:11'
<!-- GMT/BST date that is GMT -->
stroom:format-dateTime('xs:dateTime("2009-02-01T12:34:11Z")', 'yyyy/MM/dd HH:mm:ss', 'GMT/BST')
-> '2009/02/01 12:34:11'
<!-- Named timezone -->
stroom:format-dateTime('xs:dateTime("2009-02-02T04:34:11Z")', 'yyyy/MM/dd HH:mm:ss', 'US/Eastern')
-> '2009/02/01 23:34:11'
hex-to-string()
For a hexadecimal input string, decode it using the specified character set to its original form.
Valid character set names are listed at: https://www.iana.org/assignments/character-sets/character-sets.xhtml.
Common examples are: ASCII
, UTF-8
and UTF-16
.
Example
<string><xsl:value-of select="stroom:hex-to-string('74 65 73 74 69 6e 67 20 31 32 33', 'UTF-8')" /></string>
Output
<string>testing 123</string>
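The decoding performed here is equivalent to the following Python sketch (illustrative, not Stroom code):

```python
def hex_to_string(hex_str: str, encoding: str) -> str:
    # Strip the byte separators, convert the hex pairs to bytes,
    # then decode with the requested character set.
    return bytes.fromhex(hex_str.replace(" ", "")).decode(encoding)

print(hex_to_string("74 65 73 74 69 6e 67 20 31 32 33", "UTF-8"))  # testing 123
```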
http-call()
Executes an HTTP(S) request to a remote server and returns the response.
http-call(String url, [String headers], [String mediaType], [String data], [String clientConfig])
The arguments are as follows:
url
- The URL to send the request to.
headers
- A newline (\n) delimited list of HTTP headers to send. Each header is of the form key:value.
mediaType
- The media (or MIME) type of the request data, e.g. application/json. If not set, application/json; charset=utf-8 will be used.
data
- The data to send. The data type should be consistent with mediaType. Supplying the data argument means a POST request method will be used rather than the default GET.
clientConfig
- A JSON object containing the configuration for the HTTP client to use, including any SSL configuration.
The function returns the response as XML with namespace stroom-http. The XML includes the body of the response in addition to the status code, success status, message and any headers.
The XML includes the body of the response in addition to the status code, success status, message and any headers.
clientConfig
The client can be configured using a JSON object containing various optional configuration items.
The following is an example of the client configuration object with all keys populated.
{
"callTimeout": "PT30S",
"connectionTimeout": "PT30S",
"followRedirects": false,
"followSslRedirects": false,
"httpProtocols": [
"http/2",
"http/1.1"
],
"readTimeout": "PT30S",
"retryOnConnectionFailure": true,
"sslConfig": {
"keyStorePassword": "password",
"keyStorePath": "/some/path/client.jks",
"keyStoreType": "JKS",
"trustStorePassword": "password",
"trustStorePath": "/some/path/ca.jks",
"trustStoreType": "JKS",
"sslProtocol": "TLSv1.2",
"hostnameVerificationEnabled": false
},
"writeTimeout": "PT30S"
}
If you are using two-way SSL then you may need to set the protocol to HTTP/1.1
.
"httpProtocols": [
"http/1.1"
],
Example output
The following is an example of the XML returned from the http-call
function:
<response xmlns="stroom-http">
<successful>true</successful>
<code>200</code>
<message>OK</message>
<headers>
<header>
<key>cache-control</key>
<value>public, max-age=600</value>
</header>
<header>
<key>connection</key>
<value>keep-alive</value>
</header>
<header>
<key>content-length</key>
<value>108</value>
</header>
<header>
<key>content-type</key>
<value>application/json;charset=iso-8859-1</value>
</header>
<header>
<key>date</key>
<value>Wed, 29 Jun 2022 13:03:38 GMT</value>
</header>
<header>
<key>expires</key>
<value>Wed, 29 Jun 2022 13:13:38 GMT</value>
</header>
<header>
<key>server</key>
<value>nginx/1.21.6</value>
</header>
<header>
<key>vary</key>
<value>Accept-Encoding</value>
</header>
<header>
<key>x-content-type-options</key>
<value>nosniff</value>
</header>
<header>
<key>x-frame-options</key>
<value>sameorigin</value>
</header>
<header>
<key>x-xss-protection</key>
<value>1; mode=block</value>
</header>
</headers>
<body>{"buildDate":"2022-06-29T09:22:41.541886118Z","buildVersion":"SNAPSHOT","upDate":"2022-06-29T11:06:26.869Z"}</body>
</response>
Example usage
This is an example of how to use the function call in your XSLT.
It is recommended to place the clientConfig JSON in a Dictionary to make it easier to edit and to avoid having to escape all the quotes.
...
<xsl:template match="record">
...
<!-- Read the client config from a Dictionary into a variable -->
<xsl:variable name="clientConfig" select="stroom:dictionary('HTTP Client Config')" />
<!-- Make the HTTP call and store the response in a variable -->
<xsl:variable name="response" select="stroom:http-call('https://reqbin.com/echo', null, null, null, $clientConfig)" />
<!-- Apply 'response' templates to the response -->
<xsl:apply-templates mode="response" select="$response" />
...
</xsl:template>
<xsl:template mode="response" match="http:response">
<!-- Extract just the body of the response -->
<val><xsl:value-of select="./http:body/text()" /></val>
</xsl:template>
...
link()
Create a string that represents a hyperlink for display in a dashboard table.
link(url)
link(title, url)
link(title, url, type)
Example
link('https://www.somehost.com/somepath')
> [https://www.somehost.com/somepath](https://www.somehost.com/somepath)
link('Click Here','https://www.somehost.com/somepath')
> [Click Here](https://www.somehost.com/somepath)
link('Click Here','https://www.somehost.com/somepath', 'dialog')
> [Click Here](https://www.somehost.com/somepath){dialog}
link('Click Here','https://www.somehost.com/somepath', 'dialog|Dialog Title')
> [Click Here](https://www.somehost.com/somepath){dialog|Dialog Title}
Type can be one of:
dialog: Display the content of the link URL within a stroom popup dialog.
tab: Display the content of the link URL within a stroom tab.
browser: Display the content of the link URL within a new browser tab.
dashboard: Used to launch a stroom dashboard internally with parameters in the URL.
To override the default title of the target tab or dialog, both the dialog and tab types allow a title to be specified after a |, e.g. dialog|My Title.
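The strings produced by link() can be modelled with this Python sketch (illustrative; the real function is a Stroom XSLT extension):

```python
def link(title: str, url: str, link_type=None) -> str:
    # The link is encoded as [title](url), optionally followed by
    # {type} where type may also carry a title after a '|'.
    result = f"[{title}]({url})"
    if link_type:
        result += "{" + link_type + "}"
    return result

print(link("Click Here", "https://www.somehost.com/somepath", "dialog|Dialog Title"))
# [Click Here](https://www.somehost.com/somepath){dialog|Dialog Title}
```

The output matches the example link strings shown above.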
log()
The log() function writes a message to the processing log with the specified severity. Severities of INFO, WARN, ERROR and FATAL can be used. Severities of ERROR and FATAL will result in records being omitted from the output if a RecordOutputFilter is used in the pipeline. The counts for RecWarn and RecError will be affected by warnings or errors generated in this way, which makes this function useful for adding business rules to XML output.
E.g. Warn if a SID is not the correct length.
<xsl:if test="string-length($sid) != 7">
<xsl:value-of select="stroom:log('WARN', concat($sid, ' is not the correct length'))"/>
</xsl:if>
The same functionality can also be achieved using the standard xsl:message element, see <xsl:message>.
lookup()
The lookup() function looks up from reference or context data a value (which can be an XML node set) and adds it to the resultant XML.
lookup(String map, String key)
lookup(String map, String key, String time)
lookup(String map, String key, String time, Boolean ignoreWarnings)
lookup(String map, String key, String time, Boolean ignoreWarnings, Boolean trace)
map
- The name of the reference data map to perform the lookup against.
key
- The key to lookup. The key can be a simple string, an integer value in a numeric range or a nested lookup key.
time
- Determines which set of reference data was effective at the requested time. If no reference data exists with an effective time before the requested time then the lookup will fail. Time is in the format yyyy-MM-dd'T'HH:mm:ss.SSSXX, e.g. 2010-01-01T00:00:00.000Z.
ignoreWarnings
- If true, any lookup failures will be ignored, else they will be reported as warnings.
trace
- If true, additional trace information is output as INFO messages.
If the lookup fails, no result will be returned. By testing the result, a default value can be output instead.
E.g. Look up a SID given a PF
<xsl:variable name="pf" select="PFNumber"/>
<xsl:if test="$pf">
<xsl:variable name="sid" select="stroom:lookup('PF_TO_SID', $pf, $formattedDateTime)"/>
<xsl:choose>
<xsl:when test="$sid">
<User>
<Id><xsl:value-of select="$sid"/></Id>
</User>
</xsl:when>
<xsl:otherwise>
<data name="PFNumber">
<xsl:attribute name="Value"><xsl:value-of select="$pf"/></xsl:attribute>
</data>
</xsl:otherwise>
</xsl:choose>
</xsl:if>
Range lookups
Reference data entries can either be stored with a single string key or with a key range that defines a numeric range, e.g. 1-100. When a lookup is performed the passed key is first looked up as if it were a normal string key. If that lookup fails Stroom will try to convert the key to an integer (long) value. If it can be converted to an integer then a second lookup will be performed against entries with key ranges to see if there is a key range that includes the requested key.
Range lookups can be used for looking up an IP address where the reference data values are associated with ranges of IP addresses.
In this use case, the IP address must first be converted into a numeric value using numeric-ip(), e.g.:
stroom:lookup('IP_TO_LOCATION', numeric-ip($ipAddress))
Similarly the reference data must be stored with key ranges whose bounds were created using this function.
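The two-step fallback described above can be sketched like this (a hypothetical helper using plain Python structures, not Stroom's implementation):

```python
def range_lookup(string_entries, range_entries, key):
    """Sketch of the range-lookup fallback described above.

    string_entries: dict of exact string keys to values.
    range_entries: list of ((from_key, to_key), value) with inclusive bounds.
    """
    # 1. Try the key as a plain string first.
    if key in string_entries:
        return string_entries[key]
    # 2. If that fails, try to interpret the key as an integer.
    try:
        numeric_key = int(key)
    except ValueError:
        return None
    # 3. Look for a key range that includes the numeric key.
    for (lo, hi), value in range_entries:
        if lo <= numeric_key <= hi:
            return value
    return None
```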
Nested Maps
The lookup function allows you to perform chained lookups using nested maps.
For example you may have a reference data map called USER_ID_TO_LOCATION that maps user IDs to some location information for that user and a map called USER_ID_TO_MANAGER that maps user IDs to the user ID of their manager.
If you wanted to decorate a user’s event with the location of their manager you could use a nested map to achieve the lookup chain.
To perform the lookup, set the map argument to the list of maps in the lookup chain, separated by a /, e.g. USER_ID_TO_MANAGER/USER_ID_TO_LOCATION.
This will perform a lookup against the first map in the list using the requested key.
If a value is found the value will be used as the key in a lookup against the next map.
The value from each map lookup is used as the key in the next map all the way down the chain.
The value from the last lookup is then returned as the result of the lookup() call.
If no value is found at any point in the chain then that results in no value being returned from the function.
In order to use nested map lookups each intermediate map must contain simple string values.
The last map in the chain can either contain string values or XML fragment values.
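The lookup chain described above can be sketched as follows (a hypothetical helper using plain dicts, not Stroom's implementation):

```python
def nested_lookup(stores, map_chain, key):
    """Chain lookups through the maps named in map_chain.

    stores maps map names to plain dicts; map_chain is e.g.
    'USER_ID_TO_MANAGER/USER_ID_TO_LOCATION'.
    """
    value = key
    for map_name in map_chain.split('/'):
        # The value from each map becomes the key into the next map.
        value = stores.get(map_name, {}).get(value)
        if value is None:
            return None  # chain broken: no value returned at all
    return value
```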
parse-dateTime()
Parses a string to a dateTime according to the specified pattern and time zone.
Function Signatures
The following are the possible forms of the parse-dateTime function.
<!-- Converts inputDate to a dateTime -->
parse-dateTime(String inputDate)
<!-- Converts inputDate to a dateTime using a custom date format -->
parse-dateTime(String inputDate, String pattern)
<!-- Converts inputDate to a dateTime using a custom date format in the specified time zone -->
parse-dateTime(String inputDate, String pattern, String timeZone)
inputDate
- The input string.
pattern
- The pattern that defines the format of the input string (see Custom Date Formats).
timeZone
- Optional time zone of the input string (applied when the pattern does not capture a zone offset).
If null then the UTC/Zulu time zone will be used.
Examples
<!-- ISO 8601 -->
stroom:parse-dateTime('2024-08-29T00:00:00Z')
-> '2024-08-29T00:00:00Z'
<!-- ISO 8601 with microseconds -->
stroom:parse-dateTime('2010-01-01T23:59:59.123456Z')
-> '2010-01-01T23:59:59.123456Z'
<!-- ISO 8601 with millis -->
stroom:parse-dateTime('2010-01-01T23:59:59.123Z')
-> '2010-01-01T23:59:59.123Z'
<!-- ISO 8601 Zulu/UTC -->
stroom:parse-dateTime('2001-08-01T18:45:59.123+00:00')
-> '2001-08-01T18:45:59.123Z'
<!-- ISO 8601 +2hr zone offset -->
stroom:parse-dateTime('2001-08-01T18:45:59.123+02')
-> '2001-08-01T16:45:59.123Z'
<!-- ISO 8601 +2hr zone offset -->
stroom:parse-dateTime('2001-08-01T18:45:59.123+02:00')
-> '2001-08-01T16:45:59.123Z'
<!-- ISO 8601 +2hr30min zone offset -->
stroom:parse-dateTime('2001-08-01T18:45:59.123+02:30')
-> '2001-08-01T16:15:59.123Z'
<!-- ISO 8601 -3hr zone offset -->
stroom:parse-dateTime('2001-08-01T18:45:59.123-03:00')
-> '2001-08-01T21:45:59.123Z'
<!-- Simple UK style date -->
stroom:parse-dateTime('29/08/24', 'dd/MM/yy')
-> '2024-08-29T00:00:00Z'
<!-- Simple US style date -->
stroom:parse-dateTime('08/29/24', 'MM/dd/yy')
-> '2024-08-29T00:00:00Z'
<!-- ISO date with no delimiters -->
stroom:parse-dateTime('20010801184559', 'yyyyMMddHHmmss')
-> '2001-08-01T18:45:59Z'
<!-- Standard output, no TZ -->
stroom:parse-dateTime('2001/08/01 18:45:59', 'yyyy/MM/dd HH:mm:ss')
-> '2001-08-01T18:45:59Z'
<!-- Standard output, date only, with TZ -->
stroom:parse-dateTime('2001/08/01', 'yyyy/MM/dd', '-07:00')
-> '2001-08-01T07:00:00Z'
<!-- Standard output, with TZ -->
stroom:parse-dateTime('2001/08/01 01:00:00', 'yyyy/MM/dd HH:mm:ss', '-08:00')
-> '2001-08-01T09:00:00Z'
<!-- Standard output, with TZ -->
stroom:parse-dateTime('2001/08/01 01:00:00', 'yyyy/MM/dd HH:mm:ss', '+01:00')
-> '2001-08-01T00:00:00Z'
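As a cross-check of the zone-offset arithmetic in the examples above, the same conversion can be sketched in Python (this is not Stroom code): the input is interpreted as a local time in the -08:00 zone and then normalised to UTC.

```python
from datetime import datetime, timedelta, timezone

# Parse '2001/08/01 01:00:00' as a local time in the -08:00 zone,
# then normalise to UTC, mirroring the final examples above.
local = datetime.strptime('2001/08/01 01:00:00', '%Y/%m/%d %H:%M:%S')
aware = local.replace(tzinfo=timezone(timedelta(hours=-8)))
utc = aware.astimezone(timezone.utc)
print(utc.isoformat())  # 2001-08-01T09:00:00+00:00
```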
put() and get()
You can put values into a map using the put() function.
These values can then be retrieved later using the get() function.
Values are stored against a key name so that multiple values can be stored.
These functions can be used for many purposes but are most commonly used to count a number of records that meet certain criteria.
The map is in the scope of the current pipeline process so values do not live after the stream has been processed.
Also, the map will only contain entries that were put() within the current pipeline process.
An example of how to count records is shown below:
<!-- Get the current record count -->
<xsl:variable name="currentCount" select="number(stroom:get('count'))" />
<!-- Increment the record count -->
<xsl:variable name="count">
<xsl:choose>
<xsl:when test="$currentCount">
<xsl:value-of select="$currentCount + 1" />
</xsl:when>
<xsl:otherwise>
<xsl:value-of select="1" />
</xsl:otherwise>
</xsl:choose>
</xsl:variable>
<!-- Store the count for future retrieval -->
<xsl:value-of select="stroom:put('count', $count)" />
<!-- Output the new count -->
<data name="Count">
<xsl:attribute name="Value" select="$count" />
</data>
meta-keys()
Returns the keys of all meta attributes associated with the current stream.
When calling this function and assigning the result to a variable, you must specify the variable data type of xs:string* (array of strings).
The following fragment is an example of using meta-keys() to emit all meta values for a given stream into an Event/Meta element:
<Event>
<xsl:variable name="metaKeys" select="stroom:meta-keys()" as="xs:string*" />
<Meta>
<xsl:for-each select="$metaKeys">
<string key="{.}"><xsl:value-of select="stroom:meta(.)" /></string>
</xsl:for-each>
</Meta>
</Event>
parse-uri()
The parse-uri() function takes a Uniform Resource Identifier (URI) in string form and returns an XML node with a namespace of uri containing the URI's individual components of authority, fragment, host, path, port, query, scheme, schemeSpecificPart and userInfo.
See either RFC 2396: Uniform Resource Identifiers (URI): Generic Syntax or Java's java.net.URI class for details regarding the components.
The following XSLT
<!-- Display and parse the URI contained within the text of the rURI element -->
<xsl:variable name="u" select="stroom:parse-uri(rURI)" />
<URI>
<xsl:value-of select="rURI" />
</URI>
<URIDetail>
<xsl:copy-of select="$u"/>
</URIDetail>
given the rURI text contains
http://foo:bar@w1.superman.com:8080/very/long/path.html?p1=v1&p2=v2#more-details
would provide
<URI>http://foo:bar@w1.superman.com:8080/very/long/path.html?p1=v1&amp;p2=v2#more-details</URI>
<URIDetail>
<authority xmlns="uri">foo:bar@w1.superman.com:8080</authority>
<fragment xmlns="uri">more-details</fragment>
<host xmlns="uri">w1.superman.com</host>
<path xmlns="uri">/very/long/path.html</path>
<port xmlns="uri">8080</port>
<query xmlns="uri">p1=v1&amp;p2=v2</query>
<scheme xmlns="uri">http</scheme>
<schemeSpecificPart xmlns="uri">//foo:bar@w1.superman.com:8080/very/long/path.html?p1=v1&amp;p2=v2</schemeSpecificPart>
<userInfo xmlns="uri">foo:bar</userInfo>
</URIDetail>
pointIsInsideXYPolygon()
Returns true if the specified point is inside the specified polygon.
Useful for determining if a user is inside a physical zone based on their location and the boundary of that zone.
pointIsInsideXYPolygon(Number xPos, Number yPos, Number[] xPolyData, Number[] yPolyData)
Arguments:
xPos
- The X value of the point to be tested.
yPos
- The Y value of the point to be tested.
xPolyData
- A sequence of X values that define the polygon.
yPolyData
- A sequence of Y values that define the polygon.
The list of values supplied for xPolyData must correspond with the list of values supplied for yPolyData.
The points that define the polygon must be provided in order, i.e. starting from one point on the polygon and then traveling round the path of the polygon until it gets back to the beginning.
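One common way to implement such a test is the ray-casting algorithm; the sketch below (illustrative, not Stroom's actual code) counts how many polygon edges a horizontal ray from the point crosses, toggling the inside flag on each crossing.

```python
def point_is_inside_xy_polygon(x, y, x_poly, y_poly):
    """Ray-casting point-in-polygon test over paired X/Y sequences."""
    inside = False
    n = len(x_poly)
    j = n - 1
    for i in range(n):
        # Does a horizontal ray from (x, y) cross the edge (j -> i)?
        if ((y_poly[i] > y) != (y_poly[j] > y)) and \
           (x < (x_poly[j] - x_poly[i]) * (y - y_poly[i]) /
                (y_poly[j] - y_poly[i]) + x_poly[i]):
            inside = not inside
        j = i
    return inside
```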
5 - Reference Data
Performing temporal reference data lookups to decorate event data.
In Stroom reference data is primarily used to decorate events using stroom:lookup()
calls in XSLTs.
For example you may have a reference data feed that associates the FQDN of a device with its physical location.
You can then perform a stroom:lookup()
in the XSLT to decorate an event with the physical location of a device by looking up the FQDN found in the event.
Reference data is time sensitive and each stream of reference data has an Effective Date set against it.
This allows reference data lookups to be performed using the date of the event to ensure the reference data that was actually effective at the time of the event is used.
Using reference data involves the following steps/processes:
- Ingesting the raw reference data into Stroom.
- Creating (and processing) a pipeline to transform the raw reference data into reference-data:2 format XML.
- Creating a reference loader pipeline with a Reference Data Filter element to load cooked reference data into the reference data store.
- Adding reference pipeline/feeds to an XSLT Filter in your event pipeline.
- Adding the lookup call to the XSLT.
- Processing the raw events through the event pipeline.
The process of creating a reference data pipeline is described in the HOWTO linked at the top of this document.
Reference Data Structure
A reference data entry essentially consists of the following:
- Effective time - The date/time that the entry was effective from, i.e. the time the raw reference data was received.
- Map name - A unique name for the key/value map that the entry will be stored in.
The name only needs to be unique within all map names that may be loaded within an XSLT Filter.
In practice it makes sense to keep map names globally unique.
- Key - The text that will be used to lookup the value in the reference data map.
Mutually exclusive with Range.
- Range - The inclusive range of integer keys that the entry applies to.
Mutually exclusive with Key.
- Value - The value can either be simple text, e.g. an IP address, or an XML fragment that can be inserted into another XML document.
XML values must be correctly namespaced.
The following is an example of some reference data that has been converted from its raw form into reference-data:2 XML.
In this example the reference data contains three entries that each belong to a different map.
Two of the entries are simple text values and the last has an XML value.
<?xml version="1.1" encoding="UTF-8"?>
<referenceData
xmlns="reference-data:2"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:stroom="stroom"
xmlns:evt="event-logging:3"
xsi:schemaLocation="reference-data:2 file://reference-data-v2.0.xsd"
version="2.0.1">
<!-- A simple string value -->
<reference>
<map>FQDN_TO_IP</map>
<key>stroomnode00.strmdev00.org</key>
<value>
<IPAddress>192.168.2.245</IPAddress>
</value>
</reference>
<!-- A simple string value -->
<reference>
<map>IP_TO_FQDN</map>
<key>192.168.2.245</key>
<value>
<HostName>stroomnode00.strmdev00.org</HostName>
</value>
</reference>
<!-- A key range -->
<reference>
<map>USER_ID_TO_COUNTRY_CODE</map>
<range>
<from>1</from>
<to>1000</to>
</range>
<value>GBR</value>
</reference>
<!-- An XML fragment value -->
<reference>
<map>FQDN_TO_LOC</map>
<key>stroomnode00.strmdev00.org</key>
<value>
<evt:Location>
<evt:Country>GBR</evt:Country>
<evt:Site>Bristol-S00</evt:Site>
<evt:Building>GZero</evt:Building>
<evt:Room>R00</evt:Room>
<evt:TimeZone>+00:00/+01:00</evt:TimeZone>
</evt:Location>
</value>
</reference>
</referenceData>
Reference Data Namespaces
When XML reference data values are created, as in the example XML above, the XML values must be qualified with a namespace to distinguish them from the reference-data:2
XML that surrounds them.
In the above example the XML fragment will become as follows when injected into an event:
<evt:Location xmlns:evt="event-logging:3" >
<evt:Country>GBR</evt:Country>
<evt:Site>Bristol-S00</evt:Site>
<evt:Building>GZero</evt:Building>
<evt:Room>R00</evt:Room>
<evt:TimeZone>+00:00/+01:00</evt:TimeZone>
</evt:Location>
Even if evt is already declared in the XML that the fragment is being injected into, a declaration on the reference fragment will be explicitly repeated in the destination.
While duplicate namespacing may appear odd, it is valid XML.
The namespacing can also be achieved like this:
<?xml version="1.1" encoding="UTF-8"?>
<referenceData
xmlns="reference-data:2"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:stroom="stroom"
xsi:schemaLocation="reference-data:2 file://reference-data-v2.0.xsd"
version="2.0.1">
<!-- An XML value -->
<reference>
<map>FQDN_TO_LOC</map>
<key>stroomnode00.strmdev00.org</key>
<value>
<Location xmlns="event-logging:3">
<Country>GBR</Country>
<Site>Bristol-S00</Site>
<Building>GZero</Building>
<Room>R00</Room>
<TimeZone>+00:00/+01:00</TimeZone>
</Location>
</value>
</reference>
</referenceData>
This reference data will be injected into the event XML exactly as is, i.e.:
<Location xmlns="event-logging:3">
<Country>GBR</Country>
<Site>Bristol-S00</Site>
<Building>GZero</Building>
<Room>R00</Room>
<TimeZone>+00:00/+01:00</TimeZone>
</Location>
Reference Data Storage
Reference data is stored in two different places on a Stroom node.
All reference data is only visible to the node where it is located.
Each node that is performing reference data lookups will need to load and store its own reference data.
While this will result in duplicate data being held by nodes it makes the storage of reference data and its subsequent lookup very performant.
On-Heap Store
The On-Heap store is the reference data store that is held in memory in the Java Heap.
This store is volatile and will be lost on shut down of the node.
The On-Heap store is only used for storage of context data.
Off-Heap Store
The Off-Heap store is the reference data store that is held in memory outside of the Java Heap and is persisted to local disk.
Because the store is persisted to local disk, the reference data will survive the shutdown of the Stroom instance.
Storing the data off-heap means Stroom can run with a much smaller Java Heap size.
The Off-Heap store is based on the Lightning Memory-Mapped Database (LMDB).
LMDB makes use of the Linux page cache to ensure that hot portions of the reference data are held in the page cache (making use of all available free memory).
Infrequently used portions of the reference data will be evicted from the page cache by the Operating System.
Given that LMDB utilises the page cache for holding reference data in memory the more free memory the host has the better as there will be less shifting of pages in/out of the OS page cache.
When storing large amounts of data you may experience the OS reporting very little free memory as a large amount will be in use by the page cache.
This is not an issue as the OS will evict pages when memory is needed for other applications, e.g. the Java Heap.
Local Disk
The Off-Heap store is intended to be located on local disk on the Stroom node.
The location of the store is set using the property stroom.pipeline.referenceData.localDir.
Using LMDB on remote storage is NOT advised, see http://www.lmdb.tech/doc.
Using the fastest storage available (e.g. fast SSDs) is advised to reduce load times and the cost of lookups for data that is not in memory.
Warning
If you are running stroom on AWS EC2 instances then you will need to attach some local instance storage to the host, e.g. SSD, to use for the reference data store.
In tests EBS storage was found to be VERY slow.
It should be noted that AWS instance storage is not persistent between instance stops, terminations and hardware failure.
However any loss of the reference data store will mean that the next time Stroom boots a new store will be created and reference data will be loaded on demand as normal.
Transactions
LMDB is a transactional database with ACID semantics.
All interaction with LMDB is done within a read or write transaction.
There can only be one write transaction at a time so if there are a number of concurrent reference data loads then they will have to wait in line.
Read transactions, i.e. lookups, are not blocked by each other but may be blocked by a write transaction depending on the value of the system property stroom.pipeline.referenceData.lmdb.readerBlockedByWriter.
LMDB can operate such that readers are not blocked by writers, but if a read transaction is open while a write transaction is writing data to the store, the writer is unable to reuse free space (from previous deletes, see Store Size & Compaction), so the store will increase in size.
If read transactions are likely while writes are taking place then this can lead to excessive growth of the store.
Setting stroom.pipeline.referenceData.lmdb.readerBlockedByWriter
to true
will block all reads while a load is happening so any free space can be re-used, at the cost of making all lookups wait for the load to complete.
Use of this setting will depend on how likely it is that loads will clash with lookups and the store size should be monitored.
Read-Ahead Mode
When data is read from the store, if the data is not already in the page cache then it will be read from disk and added to the page cache by the OS.
Read-ahead is the process of speculatively reading ahead to load more pages into the page cache than were requested.
This is on the basis that future requests for data may need the pages speculatively read into memory as it is more efficient to read multiple pages at once.
If the reference data store is very large or is larger than the available memory then it is recommended to turn read-ahead off as the result will be to evict hot reference data from the page cache to make room for speculative pages that may not be needed.
It can be turned off with the system property stroom.pipeline.referenceData.readAheadEnabled.
Key Size
When reference data is created, care must be taken to ensure that the Key used for each entry is less than 507 bytes.
For keys consisting of simple ASCII characters this means less than 507 characters.
If non-ASCII characters are present in the key then these will take up more than one byte per character, so the maximum key length in characters will be lower.
This is a limitation inherent to LMDB.
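Since the limit applies to encoded bytes rather than characters, a simple way to validate candidate keys (a sketch, not a Stroom API) is to check the UTF-8 byte length:

```python
MAX_KEY_BYTES = 507  # limit stated above, inherited from LMDB

def key_is_valid(key: str) -> bool:
    """True if the key's UTF-8 encoding is under the size limit."""
    return len(key.encode('utf-8')) < MAX_KEY_BYTES

# ASCII characters encode to one byte each; a character such as 'é'
# encodes to two bytes, so fewer characters fit within the limit.
```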
Commit intervals
The property stroom.pipeline.referenceData.maxPutsBeforeCommit controls the number of entries that are put into the store between each commit.
As there can be only one transaction writing to the store at a time, committing periodically allows other processes to jump in and make writes.
There is a trade off though as reducing the number of entries put between each commit can seriously affect performance.
For the fastest single process performance a value of 0 should be used, which means it will not commit mid-load.
This however means all other processes wanting to write to the store will need to wait.
Low values (e.g. in the hundreds) mean very frequent commits so will hamper performance.
Cloning The Off Heap Store
If you are provisioning a new stroom node it is possible to copy the off heap store from another node.
Stroom should not be running on the node being copied from.
Simply copy the contents of stroom.pipeline.referenceData.localDir from the existing node into the same configured location on the new node.
The new node will use the copied store and have access to its reference data.
Store Size & Compaction
Due to the way LMDB works the store can only grow in size; it will never shrink, even if reference data is deleted.
Deleted data frees up space for new writes to the store, so the space will be reused, but it will never be returned to the operating system.
If there is a regular process of purging old data and adding new reference data then this should not be an issue as the new reference data will use the space made available by the purged data.
If store size becomes an issue then it is possible to compact the store.
lmdb-utils is a package that is available in some package managers; it provides an mdb_copy command that can be used with the -c switch to copy the LMDB environment to a new one, compacting it in the process.
This should be done when Stroom is down to avoid writes happening to the store while the copy is happening.
The following is an example of how to compact the store assuming Stroom has been shut down first.
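For example (the paths below are placeholders; use the configured value of stroom.pipeline.referenceData.localDir as the source directory):

```shell
# Create the destination directory for the compacted copy.
mkdir /stroom/reference_data_compacted

# Copy the LMDB environment, compacting it in the process (-c).
mdb_copy -c /stroom/reference_data /stroom/reference_data_compacted

# Swap the compacted store into place.
mv /stroom/reference_data /stroom/reference_data_old
mv /stroom/reference_data_compacted /stroom/reference_data
```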
Now you can re-start Stroom and it will use the new compacted store, creating a lock file for it.
The compaction process is fast.
A test compaction of a 4Gb store, compacted down to 1.6Gb took about 7s on non-flash HDD storage.
Alternatively, given that the store is essentially a cache and all data can be re-loaded, another option is to delete the contents of stroom.pipeline.referenceData.localDir when Stroom is not running.
On boot Stroom will create a brand new empty store and reference data will be re-loaded as required.
This approach means all data has to be re-loaded, so lookups will be slower until the required streams have been loaded.
The Loading Process
Reference data is loaded into the store on demand during the processing of a stroom:lookup() function call.
Reference data will only be loaded if it does not already exist in the store, however it is always loaded as a complete stream, rather than entry by entry.
The test for existence in the store is based on the following criteria:
- The UUID of the reference loader pipeline.
- The version of the reference loader pipeline.
- The Stream ID for the stream of reference data that has been deemed effective for the lookup.
- The Stream Number (in the case of multi part streams).
If a reference stream has already been loaded matching the above criteria then no additional load is required.
IMPORTANT: It should be noted that as the version of the reference data pipeline forms part of the criteria, if the reference loader pipeline is changed, for whatever reason, then this will invalidate ALL existing reference data associated with that reference loader pipeline.
Typically the reference loader pipeline is very static so this should not be an issue.
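The existence check can be sketched as a set keyed on the four criteria above (the names are illustrative, not Stroom's internals):

```python
loaded_streams = set()

def ensure_loaded(loader_uuid, loader_version, stream_id, stream_no, load_fn):
    """Load the reference stream only if this exact combination of
    loader pipeline (UUID + version) and stream (ID + part number)
    has not been loaded before."""
    key = (loader_uuid, loader_version, stream_id, stream_no)
    if key not in loaded_streams:
        load_fn()  # always a complete stream load, never entry by entry
        loaded_streams.add(key)
```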
Standard practice is to convert raw reference data into reference-data:2 XML on receipt, using a pipeline separate from the reference loader.
The reference loader is then only concerned with reading the cooked reference-data:2 XML into the Reference Data Filter.
In instances where reference data streams are infrequently used it may be preferable not to convert the raw reference data on receipt but instead to do it in the reference loader pipeline.
Duplicate Keys
The Reference Data Filter pipeline element has a property overrideExistingValues which, if set to true, means that if an entry is found in an effective stream with the same key as an entry already loaded then it will overwrite the existing one.
Entries are loaded in the order they are found in the reference-data:2 XML document.
If set to false then the existing entry will be kept.
If warnOnDuplicateKeys is set to true then a warning will be logged for any duplicate keys, whether an overwrite happens or not.
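The duplicate-key behaviour described above can be sketched as follows (illustrative only, not the actual filter code):

```python
def put_entry(store, key, value,
              override_existing_values, warn_on_duplicate_keys, warnings):
    """Apply one entry to the store, honouring the two properties above."""
    if key in store:
        if warn_on_duplicate_keys:
            warnings.append(f'Duplicate key: {key}')
        if not override_existing_values:
            return  # keep the entry that was loaded first
    store[key] = value
```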
Value De-Duplication
Only unique values are held in the store to reduce the storage footprint.
This is useful given that typically, reference data updates may be received daily and each one is a full snapshot of the whole reference data.
As a result this can mean many copies of the same value being loaded into the store.
The store will only hold the first instance of duplicate values.
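A minimal sketch of this de-duplication idea (illustrative names, not Stroom's internals): values are interned in a shared table and entries reference them by id, so repeated values cost almost nothing extra.

```python
class DedupStore:
    """Illustrative value-interning store: each distinct value held once."""

    def __init__(self):
        self.values = {}   # value -> value id
        self.entries = {}  # (map name, key) -> value id

    def put(self, map_name, key, value):
        value_id = self.values.setdefault(value, len(self.values))
        self.entries[(map_name, key)] = value_id

    def get(self, map_name, key):
        value_id = self.entries.get((map_name, key))
        if value_id is None:
            return None
        # Reverse lookup kept simple for the sketch.
        return next(v for v, i in self.values.items() if i == value_id)
```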
Querying the Reference Data Store
The reference data store can be queried within a Dashboard in Stroom by selecting Reference Data Store in the data source selection pop-up.
Querying the store is currently an experimental feature and is mostly intended for use in fault finding.
Given the localised nature of the reference data store the dashboard can currently only query the store on the node that the user interface is being served from.
In a multi-node environment where some nodes are UI only and most are processing only, the UI nodes will have no reference data in their store.
Purging Old Reference Data
Reference data loading and purging is done at the level of a reference stream.
Whenever a reference lookup is performed the last accessed time of the reference stream in the store is checked.
If it is older than one hour then it will be updated to the current time.
This last access time is used to determine reference streams that are no longer in active use and thus can be purged.
The Stroom job Ref Data Off-heap Store Purge is used to perform the purge operation on the Off-Heap reference data store.
No purge is required for the On-Heap store as that only holds transient data.
When the purge job is run it checks the time since each reference stream was accessed against the purge cut-off age.
The purge age is configured via the property stroom.pipeline.referenceData.purgeAge.
It is advised to schedule this job for quiet times when it is unlikely to conflict with reference loading operations as they will fight for access to the single write transaction.
Lookups
Lookups are performed in XSLT Filters using the XSLT functions.
In order to perform a lookup one or more reference feeds must be specified on the XSLT Filter pipeline element.
Each reference feed is specified along with a reference loader pipeline that will ingest the specified feed (optionally converting it into reference-data:2 XML if it is not already) and pass it into a Reference Data Filter pipeline element.
Reference Feeds & Loaders
In the XSLT Filter pipeline element multiple combinations of feed and reference loader pipeline can be specified.
There must be at least one in order to perform lookups.
If there are multiple then when a lookup is called for a given time, the effective stream for each feed/loader combination is determined.
The effective stream for each feed/loader combination will be loaded into the store, unless it is already present.
When the actual lookup is performed Stroom will try to find the key in each of the effective streams that have been loaded and that contain the map in the lookup call.
If the lookup is unsuccessful in the effective stream for the first feed/loader combination then it will try the next, and so on until it has tried all of them.
For this reason if you have multiple feed/loader combinations then order is important.
It is possible for multiple effective streams to contain the same map/key so a feed/loader combination higher up the list will trump one lower down with the same map/key.
Also if you have some lookups that may not return a value and others that should always return a value then the feed/loader for the latter should be higher up the list so it is searched first.
Effective Streams
Reference data lookups have the concept of Effective Streams.
An effective stream is the most recent stream for a given Feed that has an effective date that is less than or equal to the date used for the lookup.
When performing a lookup, Stroom will search the stream store to find all the effective streams in a time bucket that surrounds the lookup time.
These sets of effective streams are cached so if a new reference stream is created it will not be used until the cached set has expired.
To rectify this you can clear the Reference Data - Effective Stream Cache on the Caches screen.
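The selection rule described above can be sketched as follows (a simplification that ignores the time-bucket search and caching):

```python
def effective_stream(streams, lookup_time):
    """Pick the stream effective at lookup_time.

    streams is a list of (effective_time, stream_id) pairs; the winner
    is the most recent stream whose effective time is <= lookup_time.
    """
    candidates = [(t, sid) for t, sid in streams if t <= lookup_time]
    if not candidates:
        return None  # nothing was effective yet, so the lookup fails
    return max(candidates)[1]
```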
Standard Key/Value Lookups
Standard key/value lookups consist of a simple string key and a value that is either a simple string or an XML fragment.
Standard lookups are performed using the various forms of the stroom:lookup() XSLT function.
Note
If the key is not found and the key is an integer then it will attempt a range lookup using the same key.
This is to allow for maps that contain a mixture of key/value pairs and range/value pairs.
Range Lookups
Range lookups consist of a key that is an integer and a value that is either a simple string or an XML fragment.
For more detail on range lookups see the XSLT function stroom:lookup().
Note
The lookup will initially look for a single key that matches the lookup key.
If an exact match is not found then it will look for a range that contains the key.
This is to allow for maps that contain a mixture of key/value pairs and range/value pairs.
Nested Map Lookups
Nested map lookups involve chaining a number of lookups with the value of each map being used as the key for the next.
For more detail on nested lookups see the XSLT function stroom:lookup().
Bitmap Lookups
A bitmap lookup is a special kind of lookup that actually performs a lookup for each enabled bit position of the passed bitmap value.
For more detail on bitmap lookups see the XSLT function stroom:bitmap-lookup().
Values can either be a simple string or an XML fragment.
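The behaviour described above can be sketched in Python (illustrative only): each set bit position in the looked-up value triggers its own lookup against the map.

```python
def bitmap_lookup(store, value):
    """For each set bit in value, look up that bit position in store.

    store maps bit positions (as string keys) to values; results are
    returned in ascending bit-position order.
    """
    results = []
    position = 0
    while value:
        if value & 1:
            hit = store.get(str(position))
            if hit is not None:
                results.append(hit)
        value >>= 1
        position += 1
    return results
```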
Context data lookups
Some event streams have a Context stream associated with them.
Context streams allow the system sending the events to Stroom to supply an additional stream of data that provides context to the raw event stream.
This can be useful when the system sending the events has no control over the event content but needs to supply additional information.
The context stream can be used in lookups as a reference source to decorate events on receipt.
Context reference data is specific to a single event stream so is transient in nature, therefore the On Heap Store is used to hold it for the duration of the event stream processing only.
Typically the reference loader for a context stream will include a translation step to convert the raw context data into reference-data:2 XML.
Reference Data API
See Reference Data API.