Version 7.9
- 1: New Features
- 2: Preview Features (experimental)
- 3: Breaking Changes
- 4: Upgrade Notes
- 5: Change Log
1 - New Features
2 - Preview Features (experimental)
Data Feed Keys
Data Feed Keys are a new authentication mechanism for data receipt into both Stroom-Proxy and Stroom. They allow a set of hashed, short-lived keys to be placed in a directory accessible to Stroom-Proxy/Stroom, against which data receipt requests are authenticated.
The following is an example of a Data Feed Key file:
{
"dataFeedKeys" : [ {
"expiryDateEpochMs" : 1744279266584,
"hash" : "4KP8qRFigfKHAMQgWVHCvgXtf293wsETMwWbaUJcgC2JSqP6qF1YHacyUe8C7CrAAa",
"hashAlgorithmId" : "000",
"streamMetaData" : {
"AccountId" : "1000",
"MetaKey2" : "MetaKey2Val-1000",
"MetaKey1" : "MetaKey1Val-1000"
}
}, {
"expiryDateEpochMs" : 1744279266584,
"hash" : "8U7roYFWcj3Ht8cGbAavemQPm2P93xEC9QnvdUCuth4EKmbvEjVM2NWje1bPkKWx7s",
"hashAlgorithmId" : "000",
"streamMetaData" : {
"AccountId" : "1001",
"MetaKey2" : "MetaKey2Val-1001",
"MetaKey1" : "MetaKey1Val-1001"
}
  } ]
}
A file can contain many hashed keys.
One or more files of this format are placed in the directory defined by the Stroom-Proxy/Stroom config property .receive.dataFeedKeysDir.
This allows for generating Data Feed Keys with a life of, say, 26 hours, adding a new file every day and deleting files older than two days.
The file(s) will be read on boot and all hashed keys will be stored in memory for receipt authentication. Files added to this directory while Stroom-Proxy/Stroom is running will be read and added to the in-memory store of hashed keys. Files deleted from this directory will result in all entries associated with the file path being removed from the in-memory store of hashed keys.
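As a rough illustration of the rotation scheme described above, the following is a minimal sketch (not part of Stroom) that writes a new key file each day and prunes files older than two days. The directory name matches the default dataFeedKeysDir value, but the file naming convention is an assumption.

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.time.Instant;
import java.time.LocalDate;
import java.time.temporal.ChronoUnit;

public class DataFeedKeyFileRotation {

    // Assumed to match the .receive.dataFeedKeysDir property (default "data_feed_keys").
    private static final Path KEY_DIR = Path.of("data_feed_keys");

    /**
     * Writes today's file of hashed keys (JSON in the format shown above) and
     * deletes key files older than two days. Stroom/Stroom-Proxy should pick up
     * the new file and drop the entries belonging to any deleted files.
     */
    public static void rotate(final String todaysKeyFileJson) throws IOException {
        Files.createDirectories(KEY_DIR);
        final Path todaysFile = KEY_DIR.resolve("keys-" + LocalDate.now() + ".json");
        Files.writeString(todaysFile, todaysKeyFileJson);

        final Instant cutoff = Instant.now().minus(2, ChronoUnit.DAYS);
        try (DirectoryStream<Path> files = Files.newDirectoryStream(KEY_DIR, "keys-*.json")) {
            for (final Path file : files) {
                if (Files.getLastModifiedTime(file).toInstant().isBefore(cutoff)) {
                    Files.delete(file);
                }
            }
        }
    }
}
```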
The accountId is typically an identifier for a client team that may have one or more systems that require one or more Feeds in Stroom. This JSON key, which identifies the ownership of the Data Feed Key, is configured via the property .receive.dataFeedKeyOwnerMetaKey.
Data Feed Keys have an expiry date after which they will no longer work.
An accountId can have many active Data Feed Keys.
Multiple files can be placed in the directory and all valid keys will be loaded.
The hashAlgorithmId is the identifier for the hash algorithm used to hash the key. The system creating the hashed Data Feed Keys must use the same hash algorithm and parameters as Stroom will use when it hashes the key presented on data receipt to validate it.
streamMetaData provides a means to add meta attributes to data received with a Data Feed Key. The attributes in streamMetaData will overwrite any matching attribute keys in the received data.
Currently the only hash algorithm available for use is Argon2, with an ID of 000 and the following parameters:
- Hash length: 48
- Iterations: 2
- Memory KB: 65536
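For illustration only, the following is a minimal sketch of hashing a Data Feed Key with those parameters using Bouncy Castle's Argon2 implementation. The Argon2 variant, parallelism value, salt handling and output encoding are assumptions, not Stroom's confirmed implementation, so verify them before generating keys this way.

```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;

import org.bouncycastle.crypto.generators.Argon2BytesGenerator;
import org.bouncycastle.crypto.params.Argon2Parameters;

public class DataFeedKeyHasher {

    /**
     * Hashes a plain Data Feed Key with the parameters listed above
     * (hash length 48, 2 iterations, 65536 KB memory).
     * Variant, parallelism and salt here are assumptions; they must match
     * whatever Stroom uses for hash algorithm ID 000.
     */
    public static byte[] hash(final String dataFeedKey, final byte[] salt) {
        final Argon2Parameters params = new Argon2Parameters.Builder(Argon2Parameters.ARGON2_id)
                .withIterations(2)
                .withMemoryAsKB(65536)
                .withParallelism(1)
                .withSalt(salt)
                .build();
        final Argon2BytesGenerator generator = new Argon2BytesGenerator();
        generator.init(params);
        final byte[] hash = new byte[48];
        generator.generateBytes(dataFeedKey.getBytes(StandardCharsets.UTF_8), hash);
        return hash; // In the key file example above the hash value appears Base58 encoded.
    }

    public static void main(final String[] args) {
        final byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);
        System.out.println(hash("sdk_000_exampleKeyValue", salt).length); // 48
    }
}
```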
A Data Feed Key takes the following form:
sdk_<3 char hash algorithm ID>_<128 char random Base58 string>
The regular expression pattern for a Data Feed Key is
^sdk_[0-9]{3}_[A-HJ-NP-Za-km-z1-9]{128}$
Data Feed Keys are used in the same way as API Keys or OAuth2 tokens, i.e. using the header Authorization: Bearer <data feed key>.
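The sketch below (assuming a placeholder host and illustrative header values) shows a client validating a key against the pattern above and presenting it in the Authorization header on a /datafeed request:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.regex.Pattern;

public class DataFeedKeyClient {

    // The Data Feed Key pattern quoted above.
    private static final Pattern DATA_FEED_KEY_PATTERN =
            Pattern.compile("^sdk_[0-9]{3}_[A-HJ-NP-Za-km-z1-9]{128}$");

    public static void send(final String dataFeedKey, final String payload) throws Exception {
        if (!DATA_FEED_KEY_PATTERN.matcher(dataFeedKey).matches()) {
            throw new IllegalArgumentException("Not a well formed Data Feed Key");
        }
        // Placeholder host/path for your Stroom-Proxy /datafeed endpoint.
        final HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://stroom-proxy.example.com/stroom/datafeed"))
                .header("Authorization", "Bearer " + dataFeedKey)
                .header("Feed", "MY_FEED")    // Optional if feed name generation is enabled
                .header("AccountId", "1000")  // The owner header configured by dataFeedKeyOwnerMetaKey
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();
        final HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Receipt status: " + response.statusCode());
    }
}
```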
Content Auto-Generation
In an effort to streamline the process of getting client systems to send data to Stroom we have added auto-generation of content and removed the need to supply a feed name. The aim is that client systems can provide a number of mandatory headers with their data that will then be used to auto-generate a feed name, create the feed and create content from a template to process that feed’s data.
Note
This feature is optional and controlled with configuration. If not enabled, the Feed header will be mandatory as before.
The default Stroom configuration for controlling Data Feed Keys, auto content creation and feed name generation is:
appConfig:
autoContentCreation:
additionalGroupTemplate: "grp-${accountid}-sandbox"
createAsSubjectId: "Administrators"
createAsType: "GROUP"
destinationExplorerPathTemplate: "/Feeds/${accountid}"
enabled: false
groupTemplate: "grp-${accountid}"
templateMatchFields:
- "accountid"
- "accountname"
- "component"
- "feed"
- "format"
- "schema"
- "schemaversion"
receive: # applicable to both appConfig: and proxyConfig:
allowedCertificateProviders: []
authenticatedDataFeedKeyCache:
#...
authenticationRequired: true
dataFeedKeyOwnerMetaKey: "AccountId"
dataFeedKeysDir: "data_feed_keys"
enabledAuthenticationTypes: # Also DATA_FEED_KEY and TOKEN
- "CERTIFICATE"
feedNameGenerationEnabled: false
feedNameGenerationMandatoryHeaders:
- "AccountId"
- "Component"
- "Format"
- "Schema"
feedNameTemplate: "${accountid}-${component}-${format}-${schema}"
metaTypes:
# ...
Feed Name Generation
When the property .receive.feedNameGenerationEnabled is set to true, the Feed header is no longer required.
When data is supplied without the Feed header, the meta keys specified in .receive.feedNameGenerationMandatoryHeaders become mandatory.
The property .receive.feedNameTemplate is used to control the format of the generated Feed name. The template uses values from the headers, so it should be configured in tandem with .receive.feedNameGenerationMandatoryHeaders. If a template parameter is not present in the headers, it is replaced with an empty string.
If enabled, Feed name generation happens on data receipt in both Stroom-Proxy and Stroom.
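The template expansion itself is simple: each ${param} placeholder is replaced with the matching header value, or an empty string if the header is absent. The following is a minimal sketch of that behaviour (not Stroom's actual implementation, which may also normalise case or characters); the header values are hypothetical.

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class FeedNameTemplate {

    private static final Pattern PARAM_PATTERN = Pattern.compile("\\$\\{([^}]+)}");

    /** Expands ${param} placeholders using header values; missing headers become "". */
    public static String expand(final String template, final Map<String, String> headers) {
        final Matcher matcher = PARAM_PATTERN.matcher(template);
        final StringBuilder sb = new StringBuilder();
        while (matcher.find()) {
            final String value = headers.getOrDefault(matcher.group(1).toLowerCase(), "");
            matcher.appendReplacement(sb, Matcher.quoteReplacement(value));
        }
        matcher.appendTail(sb);
        return sb.toString();
    }

    public static void main(final String[] args) {
        // Hypothetical header values, keyed lowercase to match the template parameters.
        final Map<String, String> headers = Map.of(
                "accountid", "1000",
                "component", "WEBSERVER",
                "format", "JSON",
                "schema", "EVENT-LOGGING");
        System.out.println(expand("${accountid}-${component}-${format}-${schema}", headers));
        // Prints: 1000-WEBSERVER-JSON-EVENT-LOGGING
    }
}
```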
Feed and Group Creation
The .autoContentCreation.enabled property controls whether auto-generation of content will take place on data receipt.
Content auto-generation happens as part of the feed status check, so it is triggered either when Stroom-Proxy requests a feed status check from Stroom or when Stroom receives data directly.
If the generated Feed name does not exist, it will be created in the explorer path defined by the template in .autoContentCreation.destinationExplorerPathTemplate.
If the Feed name does exist then no further content creation will happen.
One or two user groups will be created and granted permissions on the created content.
This provides user groups through which users of the client system can access the data.
The names of these groups are controlled by groupTemplate and additionalGroupTemplate respectively.
The former group has only VIEW permission on the created content, while the latter has EDIT permission.
The latter group is intended to allow clients to manage the translation of their data while it is going through a take-on-work style process.
If the /datafeed request was authenticated (e.g. using a token, certificate, API key or Data Feed Key) then the identifier of the subject will be set in the UploadUserId header. It will also ensure that a user exists in Stroom for this identity and that the user is added to the two groups created above.
Content Templates
In Stroom a new Content Templates screen has been added, accessible from the menu.
This screen allows a user with the Manage Content Templates application permission to create a number of content templates.
Each template has an expression that will be used to match on the headers when auto-generation of content has been triggered.
The template expressions are evaluated in order from the top.
If a template’s expression matches, content will be created according to the template. A template currently has two modes:
- INHERIT_PIPELINE - A new pipeline will be created that inherits from the existing pipeline specified in the template. The new pipeline will be created in the folder defined by .autoContentCreation.destinationExplorerPathTemplate.
- PROCESSOR_FILTER - A new processor filter will be added to the existing pipeline specified in the template.
3 - Breaking Changes
Warning
Please read this section carefully in case any of the changes affect you.
Stroom
Feed Status Check
If any proxy instances call into Stroom to make a feed status check using an API key, then the owner of those API keys will now need to be granted the Check Receipt Status application permission.
Stroom-Proxy
4 - Upgrade Notes
Warning
Please read this section carefully in case any of it is relevant to your Stroom instance.
Java Version
Stroom v7.9 requires Java 21. This is the same Java version as Stroom v7.8. Ensure the Stroom and Stroom-Proxy hosts are running the latest patch release of Java 21.
Configuration File Changes
Stroom’s config.yml
Cache Config
A new property statisticsMode has been added to the standard cache config structure used by all caches. This controls whether and how cache statistics are recorded.
Possible values are:
- NONE - Do not capture any statistics on cache usage. This means no statistics will be available for this cache in the Stroom UI. Capturing statistics has a minor performance penalty.
- INTERNAL - Uses the internal mechanism for capturing statistics. This is only suitable for Stroom as these can be viewed through the UI.
- DROPWIZARD_METRICS - Uses Dropwizard Metrics for capturing statistics. This allows the cache stats to be accessed via tools such as Graphite/Collectd in addition to the Stroom UI. This adds a very slight performance overhead over INTERNAL. This is suitable for both Stroom and Stroom-Proxy.
Annotations
The following standard cache configuration blocks have been added for annotations, along with retention configuration.
appConfig:
annotation:
annotationFeedCache:
expireAfterAccess: "PT10M"
expireAfterWrite: null
maximumSize: 1000
refreshAfterWrite: null
statisticsMode: "INTERNAL"
annotationTagCache:
expireAfterAccess: "PT10M"
expireAfterWrite: null
maximumSize: 1000
refreshAfterWrite: null
statisticsMode: "INTERNAL"
defaultRetentionPeriod: "5y"
physicalDeleteAge: "P7D"
Content Auto-Generation
A new branch autoContentCreation has been added for content auto-generation; see the Content Auto-Generation section under Preview Features above.
appConfig:
autoContentCreation:
additionalGroupTemplate: "grp-${accountid}-sandbox"
createAsSubjectId: "Administrators"
createAsType: "GROUP"
destinationExplorerPathTemplate: "/Feeds/${accountid}"
enabled: false
groupTemplate: "grp-${accountid}"
templateMatchFields:
- "accountid"
- "accountname"
- "component"
- "feed"
- "format"
- "schema"
- "schemaversion"
Data Formats
A new property has been added to control the list of data format names that can be assigned to a feed. The property must include at least the values below.
appConfig:
data:
meta:
dataFormats:
- "FIXED_WIDTH_NO_HEADER"
- "INI"
- "CSV"
- "JSON"
- "TEXT"
- "XML_FRAGMENT"
- "YAML"
- "PSV_NO_HEADER"
- "PSV"
- "CSV_NO_HEADER"
- "XML"
- "TSV"
- "SYSLOG"
- "TSV_NO_HEADER"
- "FIXED_WIDTH"
- "TOML"
Data Receipt
The following properties have been added.
appConfig:
receive:
allowedCertificateProviders: []
authenticatedDataFeedKeyCache:
expireAfterAccess: null
expireAfterWrite: "PT5M"
maximumSize: 1000
refreshAfterWrite: null
statisticsMode: "DROPWIZARD_METRICS"
authenticationRequired: true
dataFeedKeyOwnerMetaKey: "AccountId"
dataFeedKeysDir: "data_feed_keys"
enabledAuthenticationTypes:
- "CERTIFICATE"
- "TOKEN"
- "DATA_FEED_KEY"
feedNameGenerationEnabled: false
feedNameGenerationMandatoryHeaders:
- "AccountId"
- "Component"
- "Format"
- "Schema"
feedNameTemplate: "${accountid}-${component}-${format}-${schema}"
x509CertificateDnHeader: "X-SSL-CLIENT-S-DN"
x509CertificateHeader: "X-SSL-CERT"
The following properties have been removed.
appConfig:
receive:
tokenAuthenticationEnabled: false
certificateAuthenticationEnabled: true
Content Security Policy
The default value for the contentSecurityPolicy property has changed from this:
appConfig:
security:
webContent:
contentSecurityPolicy: "default-src 'self'; script-src 'self' 'unsafe-eval'\
\ 'unsafe-inline'; img-src 'self' data:; style-src 'self' 'unsafe-inline';\
\ connect-src 'self' wss:; frame-ancestors 'self';"
To this:
appConfig:
security:
webContent:
contentSecurityPolicy: "default-src 'self'; script-src 'self' 'unsafe-eval'\
\ 'unsafe-inline'; img-src 'self' data:; style-src 'self' 'unsafe-inline';\
\ frame-ancestors 'self';"
Stroom-Proxy’s config.yml
Cache Config
A new property statisticsMode has been added to the standard cache config structure used by all caches. This controls whether and how cache statistics are recorded.
Possible values are:
- NONE - Do not capture any statistics on cache usage. This means no statistics will be available for this cache in the Stroom UI. Capturing statistics has a minor performance penalty.
- DROPWIZARD_METRICS - Uses Dropwizard Metrics for capturing statistics. This allows the cache stats to be accessed via tools such as Graphite/Collectd in addition to the Stroom UI. This adds a very slight performance overhead over INTERNAL. This is suitable for both Stroom and Stroom-Proxy.
Aggregation
The default aggregation frequency has changed from PT1M to PT10M.
proxyConfig:
aggregator:
aggregationFrequency: "PT10M"
The property maxOpenFiles has been replaced with a standard cache configuration branch.
proxyConfig:
aggregator:
openFilesCache:
expireAfterAccess: null
expireAfterWrite: null
maximumSize: 100
refreshAfterWrite: null
statisticsMode: "DROPWIZARD_METRICS"
Data Receipt
Proxy uses the same data receipt config structure as Stroom, so see above for details of the changes.
Database Migrations
When Stroom boots for the first time with a new version it will run any required database migrations to bring the database schema up to the correct version.
Warning
It is highly recommended to ensure you have a database backup in place before booting Stroom with a new version. This is to mitigate against any problems with the migration. It is also recommended to test the migration against a copy of your database to ensure that there are no problems when you do it for real.
On boot, Stroom will ensure that the migrations are only run by a single node in the cluster. This will be the node that reaches that point in the boot process first. All other nodes will wait until that is complete before proceeding with the boot process.
It is recommended however to use a single node to execute the migration.
To avoid Stroom starting up and beginning processing, you can use the migrate command to just migrate the database without fully booting Stroom. See the migrate command for more details.
Migration Scripts
For information purposes only, the following are the database migrations that will be run when upgrading to 7.9.0 from the previous minor version.
Note: the legacy module will run first (if present), then the other modules will run in no particular order.
Module stroom-annotation
Script V07_09_00_001__annotation2.sql
Path: stroom-annotation/stroom-annotation-impl-db/src/main/resources/stroom/annotation/impl/db/migration/V07_09_00_001__annotation2.sql
-- ------------------------------------------------------------------------
-- Copyright 2023 Crown Copyright
--
-- Licensed under the Apache License, Version 2.0 (the "License");
-- you may not use this file except in compliance with the License.
-- You may obtain a copy of the License at
--
-- http://www.apache.org/licenses/LICENSE-2.0
--
-- Unless required by applicable law or agreed to in writing, software
-- distributed under the License is distributed on an "AS IS" BASIS,
-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-- See the License for the specific language governing permissions and
-- limitations under the License.
-- ------------------------------------------------------------------------
-- Stop NOTE level warnings about objects (not)? existing
SET @OLD_SQL_NOTES=@@SQL_NOTES, SQL_NOTES=0;
CREATE TABLE IF NOT EXISTS annotation_tag (
id int(11) NOT NULL AUTO_INCREMENT,
uuid varchar(255) NOT NULL,
type_id tinyint NOT NULL,
name varchar(255) NOT NULL,
style_id tinyint DEFAULT NULL,
deleted tinyint NOT NULL DEFAULT '0',
PRIMARY KEY (id),
UNIQUE KEY `annotation_tag_type_id_name_idx` (`type_id`, `name`),
UNIQUE KEY `annotation_tag_uuid_idx` (`uuid`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
CREATE TABLE IF NOT EXISTS annotation_tag_link (
id bigint(20) NOT NULL AUTO_INCREMENT,
fk_annotation_id bigint(20) NOT NULL,
fk_annotation_tag_id int(11) NOT NULL,
PRIMARY KEY (id),
UNIQUE KEY fk_annotation_id_fk_annotation_tag_id (fk_annotation_id, fk_annotation_tag_id),
CONSTRAINT annotation_tag_link_fk_annotation_id FOREIGN KEY (fk_annotation_id) REFERENCES annotation (id),
CONSTRAINT annotation_tag_link_fk_annotation_tag_id FOREIGN KEY (fk_annotation_tag_id) REFERENCES annotation_tag (id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
CREATE TABLE IF NOT EXISTS annotation_link (
id bigint(20) NOT NULL AUTO_INCREMENT,
fk_annotation_src_id bigint(20) NOT NULL,
fk_annotation_dst_id bigint(20) NOT NULL,
PRIMARY KEY (id),
UNIQUE KEY fk_annotation_src_id_fk_annotation_dst_id (fk_annotation_src_id, fk_annotation_dst_id),
CONSTRAINT annotation_link_fk_annotation_src_id FOREIGN KEY (fk_annotation_src_id) REFERENCES annotation (id),
CONSTRAINT annotation_link_fk_annotation_dst_id FOREIGN KEY (fk_annotation_dst_id) REFERENCES annotation (id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
CREATE TABLE IF NOT EXISTS annotation_subscription (
id bigint(20) NOT NULL AUTO_INCREMENT,
fk_annotation_id bigint(20) NOT NULL,
user_uuid varchar(255) NOT NULL,
PRIMARY KEY (id),
UNIQUE KEY fk_annotation_id_user_uuid (fk_annotation_id, user_uuid),
CONSTRAINT annotation_subscription_fk_annotation_id FOREIGN KEY (fk_annotation_id) REFERENCES annotation (id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
CREATE TABLE IF NOT EXISTS `annotation_feed` (
`id` int NOT NULL AUTO_INCREMENT,
`name` varchar(255) NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `annotation_feed_name` (`name`)
) ENGINE=InnoDB DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci;
DROP PROCEDURE IF EXISTS V07_09_00_001_annotation;
DELIMITER $$
CREATE PROCEDURE V07_09_00_001_annotation ()
BEGIN
DECLARE object_count integer;
--
-- Add logical delete
--
SELECT COUNT(1)
INTO object_count
FROM information_schema.columns
WHERE table_schema = database()
AND table_name = 'annotation'
AND column_name = 'deleted';
IF object_count = 0 THEN
ALTER TABLE `annotation` ADD COLUMN `deleted` tinyint NOT NULL DEFAULT '0';
END IF;
--
-- Add description
--
SELECT COUNT(1)
INTO object_count
FROM information_schema.columns
WHERE table_schema = database()
AND table_name = 'annotation'
AND column_name = 'description';
IF object_count = 0 THEN
ALTER TABLE `annotation`
ADD COLUMN `description` longtext;
END IF;
--
-- Add data retention time
--
SELECT COUNT(1)
INTO object_count
FROM information_schema.columns
WHERE table_schema = database()
AND table_name = 'annotation'
AND column_name = 'retention_time';
IF object_count = 0 THEN
ALTER TABLE `annotation` ADD COLUMN `retention_time` bigint(20) DEFAULT NULL;
END IF;
--
-- Add data retention unit
--
SELECT COUNT(1)
INTO object_count
FROM information_schema.columns
WHERE table_schema = database()
AND table_name = 'annotation'
AND column_name = 'retention_unit';
IF object_count = 0 THEN
ALTER TABLE `annotation` ADD COLUMN `retention_unit` tinyint DEFAULT NULL;
END IF;
--
-- Add data retention until
--
SELECT COUNT(1)
INTO object_count
FROM information_schema.columns
WHERE table_schema = database()
AND table_name = 'annotation'
AND column_name = 'retain_until_ms';
IF object_count = 0 THEN
ALTER TABLE `annotation`
ADD COLUMN `retain_until_ms` bigint DEFAULT NULL;
END IF;
--
-- Add parent id
--
SELECT COUNT(1)
INTO object_count
FROM information_schema.columns
WHERE table_schema = database()
AND table_name = 'annotation'
AND column_name = 'parent_id';
IF object_count = 0 THEN
ALTER TABLE `annotation`
ADD COLUMN `parent_id` bigint(20) DEFAULT NULL;
END IF;
--
-- Add entry type id
--
SELECT COUNT(1)
INTO object_count
FROM information_schema.columns
WHERE table_schema = database()
AND table_name = 'annotation_entry'
AND column_name = 'type_id';
IF object_count = 0 THEN
ALTER TABLE annotation_entry ADD COLUMN type_id tinyint NOT NULL;
UPDATE annotation_entry SET type_id = 0 WHERE type = "Title";
UPDATE annotation_entry SET type_id = 1 WHERE type = "Subject";
UPDATE annotation_entry SET type_id = 2 WHERE type = "Status";
UPDATE annotation_entry SET type_id = 3 WHERE type = "Assigned";
UPDATE annotation_entry SET type_id = 4 WHERE type = "Comment";
UPDATE annotation_entry SET type_id = 5 WHERE type = "Link";
UPDATE annotation_entry SET type_id = 6 WHERE type = "Unlink";
ALTER TABLE annotation_entry DROP COLUMN type;
END IF;
--
-- Add entry logical delete
--
SELECT COUNT(1)
INTO object_count
FROM information_schema.columns
WHERE table_schema = database()
AND table_name = 'annotation_entry'
AND column_name = 'deleted';
IF object_count = 0 THEN
ALTER TABLE `annotation_entry` ADD COLUMN `deleted` tinyint NOT NULL DEFAULT '0';
END IF;
--
-- Remove status
--
SELECT COUNT(1)
INTO object_count
FROM information_schema.columns
WHERE table_schema = database()
AND table_name = 'annotation'
AND column_name = 'status';
IF object_count > 0 THEN
INSERT INTO annotation_tag (uuid, type_id, name, style_id, deleted)
SELECT uuid(), 0, status, null, 0
FROM annotation
WHERE status NOT IN (SELECT name FROM annotation_tag WHERE type_id = 0)
GROUP BY status;
INSERT INTO annotation_tag_link (fk_annotation_id, fk_annotation_tag_id)
SELECT a.id, at.id
FROM annotation a
JOIN annotation_tag at ON (a.status = at.name AND at.type_id = 0)
WHERE a.id NOT IN (SELECT fk_annotation_id FROM annotation_tag_link);
ALTER TABLE `annotation` DROP COLUMN `status`;
END IF;
--
-- Remove comment
--
SELECT COUNT(1)
INTO object_count
FROM information_schema.columns
WHERE table_schema = database()
AND table_name = 'annotation'
AND column_name = 'comment';
IF object_count > 0 THEN
ALTER TABLE `annotation` DROP COLUMN `comment`;
END IF;
--
-- Remove history
--
SELECT COUNT(1)
INTO object_count
FROM information_schema.columns
WHERE table_schema = database()
AND table_name = 'annotation'
AND column_name = 'history';
IF object_count > 0 THEN
ALTER TABLE `annotation` DROP COLUMN `history`;
END IF;
--
-- Add feed
--
SELECT COUNT(1)
INTO object_count
FROM information_schema.columns
WHERE table_schema = database()
AND table_name = 'annotation_data_link'
AND column_name = 'feed_id';
IF object_count = 0 THEN
ALTER TABLE `annotation_data_link` ADD COLUMN `feed_id` int DEFAULT NULL;
END IF;
END $$
DELIMITER ;
CALL V07_09_00_001_annotation;
DROP PROCEDURE IF EXISTS V07_09_00_001_annotation;
SET SQL_NOTES=@OLD_SQL_NOTES;
-- vim: set shiftwidth=4 tabstop=4 expandtab:
5 - Change Log
- Fix primitive value conversion of query field types.
- Issue #4940 : Fix duplicate store error log.
- Issue #4941 : Fix annotation data retention.
- Issue #4968 : Improve Plan B file receipt.
- Issue #4956 : Add error handling to duplicate check deletion.
- Issue #4967 : Fix SQL deadlock.
- Fix compile issues.
- Issue #4875 : Fix select *.
- Issue #3928 : Add merge filter for deeply nested data.
- Issue #4211 : Prevent stream status in processing filters.
- Issue #4927 : Fix TOKEN data feed auth when DATA_FEED_KEY is enabled.
- Issue #4862 : Add select * to StroomQL.
- Annotations 2.0.
- Uplift BCrypt lib to 0.4.3.
- Add BCrypt as a hashing algorithm to data feed keys. Change Data Feed Key auth to require the header as configured by dataFeedKeyOwnerMetaKey. Change hashAlgorithmId to hashAlgorithm in the data feed keys JSON file.
- Issue #4109 : Add receive config properties x509CertificateHeader, x509CertificateDnHeader and allowedCertificateProviders to control the use of certificates and DNs placed in the request headers by load balancers or reverse proxies that are doing the TLS termination. Header keys were previously hard coded. allowedCertificateProviders is an allow list of FQDNs/IPs that are allowed to use the cert/DN headers.
- Add Dropwizard Metrics to proxy.
- Change proxy to use the same caching as stroom.
- Remove unused proxy config property maxAggregateAge. aggregationFrequency controls the aggregation age/frequency.
- Stroom-Proxy instances that are making remote feed status requests using an API key or token will now need to hold the application permission Check Receipt Status in Stroom. This prevents anybody with an API key from checking feed statuses.
- Issue #4312 : Add Data Feed Keys to proxy and stroom to allow their use in data receipt authentication. Replace proxyConfig.receive.(certificateAuthenticationEnabled|tokenAuthenticationEnabled) with proxyConfig.receive.enabledAuthenticationTypes, which takes the values DATA_FEED_KEY|TOKEN|CERTIFICATE (where TOKEN means an OAuth token or an API key). The feed status check endpoint /api/feedStatus/v1 has been deprecated. Proxies with a version >= v7.9 should now use /api/feedStatus/v2.
- Replace proxy config prop proxyConfig.eventStore.maxOpenFiles with proxyConfig.eventStore.openFilesCache.
- Add optional auto-generation of the Feed attribute using property proxyConfig.receive.feedNameGenerationEnabled. This is used alongside properties proxyConfig.receive.feedNameTemplate (which defines a template for the auto-generated feed name using meta keys and their values) and feedNameGenerationMandatoryHeaders (which defines the mandatory meta headers that must be present for auto-generation of the feed name to be possible).
- Add a new Content Templates screen to stroom (requires the Manage Content Templates application permission). This screen is used to define rules for matching incoming data where the feed does not exist and creating content to process data for that feed.
- Feed status check calls made by a proxy into stroom now require the application permission Check Receipt Status. This is to stop anyone with an API key from discovering the feeds available in stroom. Any existing API keys used for feed status checks on proxy will need to have Check Receipt Status granted to the owner of the key.
- Issue #4839 : Change record count filter to allow counting of records at a custom depth to match split filter.
- Issue #4828 : Fix recursive cache invalidation for index load.
- Issue #4827 : Fix NPE when opening the Nodes screen.
- Issue #4733 : Fix report shutdown error.
- Issue #4778 : Improve menu text rendering.
- Issue #4552 : Update dynamic index fields via the UI.
- Issue #3921 : Make QuickFilterPredicateFactory produce an expression tree.
- Issue #3820 : Add OR conditions to quick filter.
- Issue #3551 : Fix character escape in quick filter.
- Issue #4553 : Fix word boundary matching.
- Issue #4776 : Fix column value >, >=, <, <= filtering.
- Issue #4772 : Uplift GWT to version 2.12.1.
- Issue #4773 : Improve cookie config.
- Issue #4692 : Add table column filtering by unique value selection.