Glossary
- 1: A
- 1.1: Account
- 1.2: API
- 1.3: API Key
- 1.4: Application permission
- 2: B
- 2.1: Byte order mark
- 3: C
- 3.1: Character encoding
- 3.2: Condition
- 3.3: Content
- 3.4: Context data
- 3.5: Cron
- 3.6: CSV
- 4: D
- 4.1: Dashboard
- 4.2: Data source
- 4.3: Data splitter
- 4.4: Dictionary
- 4.5: Doc Ref
- 4.6: Document
- 4.7: Document permission
- 5: E
- 5.1: Elasticsearch
- 5.2: ELFF
- 5.3: Entity
- 5.4: Event
- 5.5: Events
- 5.6: Explorer tree
- 5.7: Expression tree
- 6: F
- 6.1: Feed
- 6.2: Field
- 6.3: Filter
- 6.4: Fully Qualified Domain Name (FQDN)
- 7: G
- 7.1: Git
- 7.2: Group (users)
- 8: H
- 9: I
- 9.1: Identity Provider (IDP)
- 9.2: Index
- 9.3: IP address
- 9.4: ISO 8601
- 10: J
- 11: K
- 12: L
- 13: M
- 14: N
- 14.1: Namespace
- 15: O
- 16: P
- 16.1: Parser
- 16.2: Pipeline
- 16.3: Pipeline element
- 16.4: Processor
- 16.5: Processor filter
- 16.6: Property
- 17: Q
- 17.1: Query
- 18: R
- 18.1: Raw Events
- 18.2: Re-processing
- 18.3: Records
- 18.4: REST
- 19: S
- 19.1: Search extraction
- 19.2: Searchable
- 19.3: Stepper
- 19.4: Stream
- 19.5: Stream Type
- 19.6: StroomQL
- 20: T
- 20.1: Table
- 20.2: Transport Layer Security (TLS)
- 20.3: Token
- 20.4: Tracker
- 21: U
- 21.1: Unix Epoch
- 21.2: User
- 21.3: Coordinated Universal Time (UTC)
- 21.4: UUID
- 22: V
- 22.1: Visualisation
- 22.2: Volume
- 22.3: Volume group
- 23: W
- 24: X
- 24.1: XML
- 24.2: XML Schema
- 24.3: XPath
- 24.4: XSLT
- 25: Y
- 25.1: YAML
- 26: Z
- 26.1: ZIP
1 - A
1.1 - Account
See Also
- User Accounts.
- Identity Provider (IDP)
1.2 - API
Application Programming Interface. An interface that one system can present so other systems can use it to communicate. Stroom has a number of APIs, e.g. its many REST APIs and its /datafeed interface for data receipt.
See Also
1.3 - API Key
API Keys should therefore be protected carefully and treated like a password. If you are using an external Identity Provider then tokens for use with the Identity Provider are generated by that external Identity Provider.
See Also
- API
- Token
- Identity Provider (IDP)
1.4 - Application permission
Application permissions are generally associated with a screen or functional area of the Stroom application. Many application permissions apply mainly to system administrators, but they allow fine-grained control of the different functional areas in Stroom so that these functions can be devolved to other users.
Examples of application permissions are Manage Users, Pipeline Stepping and Data - View.
See Also
2 - B
2.1 - Byte order mark
See Also
3 - C
3.1 - Character encoding
Common examples of character encodings are ASCII, UTF-8 and UTF-16.
Each Feed has a defined character encoding for its data and its Context data. This allows Stroom to decode the data sent into that Feed.
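As a brief illustration (plain Python, not Stroom code) of why the declared character encoding matters when decoding received bytes:

```python
# Illustrative only: shows why Stroom needs to know a Feed's character
# encoding before it can decode a received byte stream.
raw = "café".encode("utf-8")  # bytes as they might arrive from a sending system

# Decoding with the correct encoding recovers the original text...
assert raw.decode("utf-8") == "café"

# ...while the wrong encoding silently produces mojibake.
assert raw.decode("latin-1") == "cafÃ©"
```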
See Also
3.2 - Condition
A Condition in a query expression term, e.g. =, >, in, etc.
3.3 - Content
See Also
3.4 - Context data
This can be useful where the sending system has no control over the data in the event stream and the event stream does not contain contextual information such as what machine it is running on or the location of that machine.
The contextual information (such as hostname, FQDN, physical location, etc.) can be sent in a Context Stream so that the two can be combined together during pipeline processing using stroom:lookup().
See Also
3.5 - Cron
Stroom uses a scheduler called Quartz which supports cron expressions for scheduling. The full details of the cron syntax supported by Quartz can be found in the Quartz documentation.
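A Quartz cron expression has six or seven fields (seconds, minutes, hours, day-of-month, month, day-of-week, and an optional year). A few illustrative examples (see the Quartz documentation for the authoritative syntax):

```text
0 0/10 * * * ?        every 10 minutes, on the minute
0 15 10 ? * MON-FRI   at 10:15 every weekday
0 0 0 1 * ?           at midnight on the first day of every month
```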
See Also
3.6 - CSV
A format for text data where fields are delimited with a comma ,. Fields may be optionally enclosed with double quotes, though there is no fixed standard for CSV data, particularly when it comes to escaping of double quotes and/or commas.
4 - D
4.1 - Dashboard
See Also
- Dashboard User Guide
- Data source
4.2 - Data source
There are three types of Data source:
- Lucene based search index data sources.
- Stroom’s SQL Statistics data sources.
- Searchable data sources for searching the internals of Stroom.
A data source will have a Doc Ref to identify it and will define the set of Fields that it presents. Each Field will have:
- A name
- A set of Conditions that it supports, e.g. a Feed field would likely support is but not >.
- A flag to indicate whether it is queryable or not, i.e. a queryable field can be referenced in the query expression tree and in a Dashboard table, but a non-queryable field can only be referenced in the Dashboard table.
See Also
- Query
- Dashboard
- Field
4.3 - Data splitter
See Also
- User Guide
- Pipeline Element Reference
- Pipeline
4.4 - Dictionary
Dictionaries can be used in query expressions with the in dictionary condition. They can also be used to hold arbitrary text for use in XSLT with the dictionary function.
See Also
- XSLT
- dictionary()
4.5 - Doc Ref
It is comprised of the following parts:
- UUID - A Universally Unique Identifier to uniquely identify the document/entity.
- Type - The type of the document/entity, e.g. Index, XSLT, Dashboard, etc.
- Name - The name given to the document/entity.
Doc Refs are used heavily in the REST API for identifying the document/entity to be acted on.
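In the REST API a Doc Ref is typically serialised as a simple object carrying the three parts listed above. The values below are illustrative placeholders only; check the API reference for the exact serialisation:

```json
{
  "type": "XSLT",
  "uuid": "4ffeb895-53c9-40d6-bf33-3ef025401ad3",
  "name": "MY_XSLT"
}
```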
See Also
4.6 - Document
See Also
4.7 - Document permission
See Also
- Document
- Document Permissions
5 - E
5.1 - Elasticsearch
See Also
5.2 - ELFF
See Also
5.3 - Entity
See Also
5.4 - Event
In a Raw Events stream, an event is typically represented as a block of XML or JSON, or a single line for CSV data.
In an Events Stream, an event is identified by its Event ID, which is its position in that stream (as a one-based number).
The Event ID combined with a Stream ID provides a unique identifier for an event within a Stroom instance.
See Also
- Stream
- Raw Events
- Events
5.5 - Events
Typically in Stroom an Events stream will contain data conforming to the event-logging XML Schema, which provides a normalised form for all Raw Events to be transformed into.
See Also
- Stream Type
5.6 - Explorer tree
It can also be used to control the access permissions of entities and folders. The tree can be filtered using the quick filter, see Finding Things for more details.
See Also
5.7 - Expression tree
For example:
AND (
Feed is CSV_FEED
Type = Raw Events
)
Expression Trees are used in Processor Filters and Query expressions.
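As a minimal sketch, not Stroom's actual implementation, an expression tree like the example above can be modelled as nested nodes and evaluated recursively; the dict shape here is invented for illustration:

```python
# A minimal, hypothetical model of an Expression Tree: operator nodes
# (AND/OR/NOT) group term nodes that each test one field of a stream.

def evaluate(node, stream):
    """Recursively evaluate an expression node against a stream's metadata."""
    op = node["op"]
    if op == "AND":
        return all(evaluate(child, stream) for child in node["children"])
    if op == "OR":
        return any(evaluate(child, stream) for child in node["children"])
    if op == "NOT":
        return not evaluate(node["children"][0], stream)
    if op in ("is", "="):
        return stream.get(node["field"]) == node["value"]
    raise ValueError(f"Unknown operator: {op}")

# The example tree from above: Feed is CSV_FEED AND Type = Raw Events
tree = {"op": "AND", "children": [
    {"op": "is", "field": "Feed", "value": "CSV_FEED"},
    {"op": "=", "field": "Type", "value": "Raw Events"},
]}

assert evaluate(tree, {"Feed": "CSV_FEED", "Type": "Raw Events"}) is True
assert evaluate(tree, {"Feed": "OTHER", "Type": "Raw Events"}) is False
```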
See Also
- Query
- Expression functions
6 - F
6.1 - Feed
See Also
- Stream
- Pipeline
6.2 - Field
See Also
- Data source
- Feed
- Index
- Metadata
- Stream
6.3 - Filter
See Also
- Processor filter
- Pipeline
6.4 - Fully Qualified Domain Name (FQDN)
For example, server57.some.domain.com.
See Also
7 - G
7.1 - Git
The source code for the Stroom software is stored in a Git repository. Stroom also uses Git for managing user content that is held in one or more Git repositories.
See Also
7.2 - Group (users)
See Also
8 - H
9 - I
9.1 - Identity Provider (IDP)
Examples of identity providers are Google, Cognito, Keycloak and Microsoft Azure/Entra AD. Stroom has its own built in IDP or can be configured to use a 3rd party IDP.
9.3 - IP address
For example, 192.168.0.1. Typically an IP address is assumed to be an IPv4 address.
9.4 - ISO 8601
Valid examples of ISO 8601 dates/times are:
2010-01-01T23:59Z
2010-01-01T23:59:59Z
2010-01-01T23:59:59.123Z
2010-01-01T23:59:59+02:00
2010-01-01T23:59:59.123+02
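For illustration, timestamps like those above can be parsed with Python's standard library (note that `datetime.fromisoformat` only accepts the trailing `Z` suffix from Python 3.11 onwards; earlier versions need `+00:00` instead):

```python
from datetime import datetime, timedelta, timezone

# Parse one of the valid ISO 8601 examples above.
dt = datetime.fromisoformat("2010-01-01T23:59:59+02:00")

assert dt.utcoffset() == timedelta(hours=2)
# The same instant expressed in UTC is two hours earlier on the clock.
assert dt.astimezone(timezone.utc).hour == 21
```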
See Also
10 - J
10.1 - JAR
10.2 - JSON
See Also
11 - K
12 - L
13 - M
13.1 - Markdown
Stroom uses the Showdown markdown converter to render users’ markdown content into formatted text.
Note
Markdown is a somewhat loose standard, so different markdown processors support different amounts of markdown syntax. For a definitive guide to the syntax supported in Stroom, see the above link.
See Also
13.2 - Metadata
14 - N
14.1 - Namespace
For example, an XSLT may transform XML in the records:2 Namespace into XML in the event-logging:3 Namespace. An XSLT will define short aliases for Namespaces to make them easier to reference within the XSLT document.
For example, in this snippet of an XML document, the aliases are: stroom, evt, xsl, xsi.
<xsl:stylesheet
xmlns="event-logging:3"
xpath-default-namespace="records:2"
xmlns:stroom="stroom"
xmlns:evt="event-logging:3"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
version="2.0">
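The same alias-to-namespace mapping applies when querying namespaced XML programmatically. As an illustrative sketch (the XML fragment below is invented), using Python's standard library:

```python
import xml.etree.ElementTree as ET

# A hypothetical fragment in the records:2 Namespace, for illustration only.
xml = """<records xmlns="records:2">
  <record><data name="host" value="server57"/></record>
</records>"""

root = ET.fromstring(xml)
# As in XSLT, a short alias ("rec") is mapped to the full namespace name.
ns = {"rec": "records:2"}
values = [d.get("value") for d in root.findall(".//rec:data", ns)]
assert values == ["server57"]
```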
See Also
- XSLT
15 - O
16 - P
16.1 - Parser
See Also
- Pipeline Element Reference
- Pipeline
- Raw Events
- Records
- Field
16.2 - Pipeline
See Also
16.3 - Pipeline element
See Also
- Pipeline Element Reference
- Pipeline
16.4 - Processor
A Processor can be enabled or disabled to control whether data is processed through the Pipeline. A Processor will have one or more Processor Filters associated with it.
See Also
16.5 - Processor filter
For example, a typical Processor Filter would have an Expression Tree that selects all Streams of type Raw Events in a particular Feed. A filter could also select a single Stream by its ID, e.g. when Re-processing a Stream.
A Pipeline can have multiple Processor Filters. Filters can be enabled/disabled independently of their parent Processor to control processing.
See Also
- Expression tree
- Feed
- Pipeline
- Re-processing
- Stream
16.6 - Property
Properties can be set in the user interface or via the config.yml configuration file.
See Also
17 - Q
17.1 - Query
See Also
- User Guide
- Dashboard
- Expression tree
18 - R
18.1 - Raw Events
See Also
- Stream Type
- Parser
18.2 - Re-processing
See Also
18.3 - Records
This is a Stream Type for Streams containing data conforming to the records:2 XML Schema. It also refers more generally to any XML conforming to the records:2 XML Schema, which is used in a number of places in Stroom, including as the output format for the DSParser and input for the IndexingFilter.
See Also
- Stream Type
- Stream
- records:2 XML Schema
18.4 - REST
19 - S
19.1 - Search extraction
See Also
- User Guide
- Field
- Event
19.2 - Searchable
See Also
19.3 - Stepper
The parsers and translations can be edited while in the Stepper with the element output updating to show the effect of the change. The stepper will not write data to the file system or stream stores.
See Also
19.4 - Stream
See Also
- Streams Concept
- Events
- Stream Type
19.5 - Stream Type
All Streams must have a Stream Type. The list of Stream Types is configured using the Property stroom.data.meta.metaTypes. Additional Stream Types can be added, however the list must include the following built-in types:
- Context
- Error
- Events
- Meta
- Raw Events
- Raw Reference
- Reference
Some Stream Types, such as Meta and Context, only exist as child streams within another Stream.
See Also
- Streams Concept
- Stream
- Events
- Raw Events
- Property
19.6 - StroomQL
See Also
20 - T
20.1 - Table
20.2 - Transport Layer Security (TLS)
TLS is typically used in Stroom for communications between Stroom-Proxy and Stroom, between Stroom nodes, and when communicating with external systems (e.g. an Elasticsearch cluster or a HttpPostFilter destination).
20.3 - Token
Tokens are generally set in the HTTP header Authorization with a value of the form Bearer TOKEN_GOES_HERE.
Tokens may contain information, e.g. a JSON Web Token (JWT), or may simply be long strings of random characters (essentially a very secure password), like API Keys.
Tokens are associated with a Stroom User so have the same or fewer permissions than that user. Tokens also typically have an expiry time after which they will no longer work.
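For illustration only, a client would typically present such a token in the Authorization header as described above; the endpoint URL and key below are placeholders, not real Stroom values:

```python
import urllib.request

# Placeholder credentials and URL, for illustration only.
api_key = "EXAMPLE_API_KEY"  # in practice, a long random string or a JWT

# Build (but do not send) a request carrying the bearer token.
request = urllib.request.Request(
    "https://stroom.example.com/api/some-endpoint",
    headers={"Authorization": f"Bearer {api_key}"},
)
assert request.get_header("Authorization") == "Bearer EXAMPLE_API_KEY"
```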
See Also
20.4 - Tracker
See Also
- Processor filter
- Stream
21 - U
21.1 - Unix Epoch
For example, 1738331628276, which may be referred to as epoch ms or epoch milliseconds.
21.2 - User
See Also
- Account
- Identity Provider (IDP)
- Users and Groups
21.3 - Coordinated Universal Time (UTC)
UTC has a zero offset (+00:00) and does not change for daylight saving. All international time zones are relative to UTC.
Stroom currently works internally in UTC, though it is possible to change the display time zone via User Preferences to display times in another time zone.
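A short illustration (plain Python, not Stroom code) of the distinction between the stored UTC instant and its display time zone:

```python
from datetime import datetime, timezone, timedelta

# An instant stored in UTC.
instant = datetime(2010, 1, 1, 21, 59, 59, tzinfo=timezone.utc)

# Rendering the same instant in a +02:00 display time zone (as a user
# preference might) changes the wall-clock time but not the instant.
local = instant.astimezone(timezone(timedelta(hours=2)))
assert local.hour == 23
assert local == instant  # still the same moment in time
```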
See Also
21.4 - UUID
An example of a UUID is 4ffeb895-53c9-40d6-bf33-3ef025401ad3.
See Also
- Doc Ref
- User Guide
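For illustration, a version 4 UUID like the documented example can be generated and checked with Python's standard library:

```python
import uuid

# Generate a random (version 4) UUID.
new_id = uuid.uuid4()

# UUIDs have a fixed 8-4-4-4-12 hex layout, 36 characters in all.
assert len(str(new_id)) == 36
assert str(new_id).count("-") == 4

# The documented example parses as a valid version 4 UUID too.
assert uuid.UUID("4ffeb895-53c9-40d6-bf33-3ef025401ad3").version == 4
```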
22 - V
22.1 - Visualisation
See Also
22.2 - Volume
Stroom has two types of Volume; Index Volumes and Data Volumes.
- Index Volume - Where the Lucene Index Shards are written to. An Index Volume must belong to a Volume group.
- Data Volume - Where streams are written to.
When writing Stream data, Stroom will pick a data volume using a volume selector as configured by the Property stroom.data.filesystemVolume.volumeSelector.
See Also
22.3 - Volume group
When Stroom is writing data to a Volume Group, it will choose which of the Volumes in the group to write to using a volume selector as configured by the Property stroom.volumes.volumeSelector.
See Also
23 - W
24 - X
24.1 - XML
See Also
24.2 - XML Schema
The event-logging XML Schema is an example of an XML Schema.
See Also
24.3 - XPath
24.4 - XSLT
All data is converted into a basic form of XML and then XSLTs are used to decorate and transform it into a common form. XSLTs are also used to transform XML Events data into non-XML forms, or into XML with a different schema, for indexing, statistics or for sending to other systems.
See Also
25 - Y
25.1 - YAML
YAML files typically have the file extension .yaml or .yml.