Rules
Rules are the unit of work in redborder CEP. You can add, remove, list, or synchronize rules using the REST API. A rule looks like this:
```json
{
  "id": "rule1",
  "input": ["inputTopic"],
  "output": { "outputStream": "outputTopic" },
  "executionPlan": "from inputTopic select fieldA, fieldB insert into outputStream"
}
```
Rules are composed of the following fields:

| name | value |
|---|---|
| id | string id of the rule; must be unique across all rules |
| version (optional) | integer version of the rule (default 0) |
| input | a JSON list of topics |
| output | a JSON hash of outputs |
| filters (optional) | a JSON hash of filters (default `{}`) |
| executionPlan | a string in Siddhi Query Language |
The id is a unique string that identifies the rule. If you try to add a rule with an id that is already present on the system, you'll get an error, unless you specify a greater version (see below for an explanation of the version field).

The version of a rule lets you tell newer and older rules apart. When you add a rule whose id is already defined, the existing rule won't be overwritten unless the version of the new rule is greater than the version of the old one. The default version is 0, so a rule without a version can never overwrite another rule without a version.
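For example, assuming a rule with id `rule1` and version 1 is already loaded, the following rule would replace it because it carries a greater version (the field names and topic names here are illustrative):

```json
{
  "id": "rule1",
  "version": 2,
  "input": ["inputTopic"],
  "output": { "outputStream": "outputTopic" },
  "executionPlan": "from inputTopic select fieldA, fieldB insert into outputStream"
}
```

Submitting the same rule again with `"version": 2` (or lower) would be rejected, since the version is not greater than the one already stored.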
The input field is a list of Kafka topics that are mapped to Siddhi streams, so you can use them freely in your execution plans. Every Kafka topic must be declared as an input in the config file before you can use it. The available attributes of each input, and their types, are also specified in the config file.
The output field is a JSON hash that maps Siddhi streams to Kafka topics: each key is a Siddhi stream name, and the value associated with that key is the Kafka topic tied to it. Every time Siddhi inserts an event into a stream marked as an output, a message with all the attributes from the stream is produced to Kafka.
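Since the output hash can hold more than one entry, a single rule can write to several Kafka topics. A sketch, using hypothetical stream and topic names:

```json
{
  "alerts": "alertsTopic",
  "metrics": "metricsTopic"
}
```

With this output hash, events inserted into the Siddhi stream `alerts` are produced to the Kafka topic `alertsTopic`, and events inserted into `metrics` go to `metricsTopic`.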
A filter is a hash that lets you filter the events consumed from Kafka. For example, if you specify the filter `{ "type": "test" }`, only events whose field `type` has the value `test` will be added to the input streams.
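A complete rule using the filters field might look like the sketch below (the field and topic names are illustrative):

```json
{
  "id": "rule2",
  "input": ["inputTopic"],
  "output": { "outputStream": "outputTopic" },
  "filters": { "type": "test" },
  "executionPlan": "from inputTopic select fieldA, fieldB insert into outputStream"
}
```

Here, events arriving on `inputTopic` whose `type` field is not `test` are dropped before they ever reach the Siddhi stream.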
Execution plans are the body of the rules. They are written in a language called Siddhi Query Language. This project currently uses Siddhi 3.0.x; you can find the specification of the language at WSO2's site.
You don't need to define any of your input streams, as they are already defined automatically for you. You can use them without any prior definition. Take a look at the rule above for an example.
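Because the input streams come pre-defined, an execution plan can go straight to querying them. As a sketch, a rule could use a Siddhi window filter in its plan to forward only events whose (hypothetical) attribute `fieldA` exceeds a threshold:

```json
{
  "id": "rule3",
  "input": ["inputTopic"],
  "output": { "outputStream": "outputTopic" },
  "executionPlan": "from inputTopic[fieldA > 10] select fieldA, fieldB insert into outputStream"
}
```

The `[fieldA > 10]` condition is standard Siddhi filter syntax; the attributes available on `inputTopic` (and their types) are whatever the config file declares for that input.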