All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
- Support publishing events consumed from NATS topics. See the documentation for how to get started. #297
- Added validation for the reverse proxy configuration. RIG now crashes on start when the configuration is invalid, and the REST API returns `400` when used to update the configuration with invalid data. #277
- Added basic distributed tracing support following the W3C Trace Context specification, with Jaeger and OpenZipkin exporters. RIG opens a span at the API Gateway and emits trace context in CloudEvents following the distributed tracing spec. #281
- Added the possibility to set the response code for `response_from` messages in the reverse proxy (`kafka` and `http_async`). #321
- Added a new version - `v3` - for internal endpoints to support the response code in the `/responses` endpoint
- Added a Helm v3 template to the `deployment` folder #288
- Added a detailed features summary on the website with architecture diagrams. #284
- Added documentation section for the JWT Blacklist feature. #156
- Added long-polling examples to the `/examples` folder #235
- Support JSON and Google Cloud Logger (GCL) log formats. Set the new environment variable `LOG_FMT` to `JSON` or `GCL` to see this in action. #298
- Added CORS headers to unauthenticated proxy requests #344
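For example, switching RIG's logs to the structured JSON format comes down to setting the new variable before starting RIG; how you pass it (shell, Docker `-e` flag, Kubernetes manifest) depends on your deployment. A minimal sketch:

```shell
# LOG_FMT selects RIG's log output format; JSON and GCL are the
# supported structured formats per the changelog entry above.
export LOG_FMT=JSON
```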
- Added rate limiting for the maximum number of WS + SSE + long-polling connections per minute. Configurable via the `MAX_CONNECTIONS_PER_MINUTE` env var; by default it's 5000. #257
- Added a basic setup for the Phoenix LiveDashboard. It's now accessible at `:4010/dashboard`. #301
- Added Prometheus metrics for events, subscriptions and the blacklist. Check the wiki for more info. #157
- Added custom Grafana dashboard. Check the wiki for more info. #222
- Incorporated cloudevents-ex to properly handle binary and structured modes for the Kafka protocol binding. This introduces some breaking changes:
  - Binary mode now uses the `ce_` prefix for CloudEvents context attribute headers (previously it was `ce-`) - done according to the Kafka protocol binding.
  - The change above also affects the `"response_from": "kafka"` proxy functionality. RIG will forward only the Kafka body to clients, no headers. This means that, when using binary mode, clients receive only the data part, without CloudEvents context attributes.
- Changed the `response_from` handler to expect a message in binary format, NOT a CloudEvent (`kafka` and `http_async`). #321
- Updated the Helm v2 template, the kubectl YAML file and the instructions in the `deployment` folder #288
- Publish the Helm chart to GitHub Pages. With this change, we can simply install the chart using `helm repo add accenture https://accenture.github.io/reactive-interaction-gateway` followed by `helm install rig accenture/reactive-interaction-gateway`. For more information, follow the deployment README. #319
- Make the README smaller, easier to read, and highlight features. #284
- Updated the Phoenix LiveDashboard setup to also show metrics based on the Prometheus metrics (for now only proxy and events metrics). #157
- Updated the Channels example to use KafkaJS and Node.js 14. Updated the smoke tests to use Node.js 14.
- Updated the Kafka SSL configuration to allow setting (or skipping) only specific SSL certificates. This is needed to allow connecting to Azure Event Hubs. See more in the docs. #376
- Fixed a bug where distributed set processes would crash when one of their peers has died but hasn't been removed yet from the pg2 group.
- Fixed wrong endpoint validation for the reverse proxy. It now correctly checks for `path` or `path_regex`; before, it would require `path` even with `path_regex` in place. #334
- Removed deprecated or unused code/functionality; these are breaking changes #278:
  - Removed the deprecated internal API `/v1`.
  - Removed deprecated environment variables: `PROXY_KAFKA_REQUEST_AVRO`, `PROXY_KAFKA_REQUEST_TOPIC`, `PROXY_KINESIS_REQUEST_STREAM`. This means that you can set the topic and schema for publishing to event streams only in the proxy config, as described in the docs.
  - Removed the experimental Firehose feature (forwarding events to an HTTP endpoint).
  - Removed the `path` field in the proxy configuration. The reason is that the `path_regex` field already covers the `path` functionality, so it doesn't make sense to have both of them. This should cause less confusion and improve maintainability.
    - Migration: `"path": "/foo"` -> `"path_regex": "/foo"` and `"path": "/foo/{id}"` -> `"path_regex": "/foo/(.+)"` - or pretty much whatever regex you need (e.g. a UUID pattern)
- Update to Erlang/OTP 23.2.2, which fixes a critical TLS certificate verification issue.
- Updated dependencies to support OTP 23. We've also replaced the versions file with `.tool-versions`, which makes it easier for those using the asdf package manager - just run `asdf install` to obtain the correct versions of Erlang and Elixir. #341
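For those using asdf, the `.tool-versions` file simply pins one runtime version per line. Based on the versions mentioned elsewhere in this changelog (Erlang/OTP 23.2.2, Elixir 1.10), it would look roughly like the following - the exact patch versions here are illustrative, not the ones pinned by RIG:

```
erlang 23.2.2
elixir 1.10.4
```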
## [2.4.0] - 2020-05-07
- Added the possibility to define the Kafka/Kinesis topic and schema per reverse proxy endpoint. The current solution using environment variables is deprecated, but still used as a fallback -- it will be removed in version 3.0. #229
- Added Kinesis + Localstack example. #229
- Upgraded the Elixir version to 1.10 for source code and Docker images. Upgraded versions of multiple dependencies. #285
- Added Slackin integration for easier Slack access - check the main page badge! #240
## [2.3.0] - 2019-12-13
- In addition to SSE and WebSocket, RIG now also supports HTTP long-polling for listening to events. Frontends should only use this as a fallback in situations where neither SSE nor WebSocket is supported by the network. #217
- When terminating an SSE connection after its associated session has been blacklisted, RIG now sends out a `rig.session_killed` event before closing the socket. For WebSocket connections, the closing frame contains "Session killed." as its payload. #261
- New API for querying and updating the session blacklist: `/v2/session-blacklist`, which introduces the following breaking changes (`/v1/session-blacklist` is unaffected) #261:
  - When a session has been added to the session blacklist successfully, the endpoint now uses the correct HTTP status code "201 Created" instead of "200 Ok".
  - When using the API to blacklist a session, `validityInSeconds` should now be passed as an integer value (using a string still works though).
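For illustration, a request body against the new blacklist API might look like the following sketch; `validityInSeconds` is from the entry above, while the session identifier field name is a placeholder, not necessarily the documented one:

```json
{
  "validityInSeconds": 3600,
  "sessionId": "<session-jti>"
}
```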
- Fixed usage of the external check for `SUBMISSION_CHECK` and `SUBSCRIPTION_CHECK`. #241
- Logging incoming HTTP requests to Kafka works again and now also supports Apache Avro. #170
- Fixed the HTTP responses for `DELETE 4010/v1/apis/api_id` and `DELETE 4010/v2/apis/api_id` to correctly return `204` and no content.
- Removed the `JWT_BLACKLIST_DEFAULT_EXPIRY_HOURS` environment variable (deprecated since 2.0.0-beta.2). #260
- A connection is now associated to its session right after the connection is established, given the request carries a JWT in its authorization header. Previously, this was only done by the subscriptions endpoint, which could cause a connection to remain active even after blacklisting its authorization token. #260
- Upgrade the Elixir and Erlang versions for source code and Docker images. #211
- Automated UI-tests using Cypress make sure that all examples work and that code changes do not introduce any unintended API changes. #227
- Refactor JWT related code in favor of `RIG.JWT`. #244
- Fix flaky Cypress tests; this shouldn't be an issue anymore when running Travis builds. #265
## [2.2.1] - 2019-06-21
- Increased maximum number of Erlang ports from 4096 to 65536 to allow more HTTP connections.
## [2.2.0] - 2019-06-17
- New Prometheus metric: `rig_proxy_requests_total`. For details see `metrics-details.md`. #157
- The respond-via-Kafka feature uses a correlation ID for associating the response with the original request. This correlation ID is now cryptographically verified, which prevents an attacker on the internal network from rerouting responses to other users connected to RIG. #218
- Apache Avro is now supported for consuming from, and producing to, Kafka. The implementation uses the Confluent Schema Registry to fetch Avro schemas.
- Added a new set of topics in the documentation about the API Gateway, event streams and scaling.
- Added examples section to the documentation website.
- Added a new `response_from` option -- `http_async` -- together with a new internal `POST` endpoint `/v1/responses`. You can send a correlated response to `/v1/responses` and complete the initial proxy request. #213
- Implement HTTP Transport Binding for CloudEvents v0.2. A special fallback to "structured mode" in case the content type is "application/json" and the "ce-specversion" header is not set ensures this change is backward compatible with existing setups. #153
- New request body format for endpoints with `kafka` and `kinesis` targets; see Deprecated below.
- The environment variable `KAFKA_GROUP_ID` has been replaced with the following environment variables, each of which has a distinct default value: `KAFKATOFILTER_KAFKA_GROUP_ID`, `KAFKATOHTTP_KAFKA_GROUP_ID`, `PROXY_KAFKA_RESPONSE_KAFKA_GROUP_ID`. #206
- The default Kafka source topic for the Kafka-to-HTTP event stream has been changed to `rig`. The feature was introduced to forward all incoming events to an (external) HTTP endpoint, so it makes sense to use the default topic for incoming events here too.
- Changed the `:refresh_subscriptions` GenServer handler from `call` to `cast` to improve performance. #224
- Fixed a bug that caused the subscriptions endpoint to return an internal server error when running RIG in a clustered setup. #194
- Support for forwarding HTTP/1.1 responses over an HTTP/2 connection by dropping connection-related HTTP headers. #193
- Added the missing `id` field to the Swagger spec for the `message` API.
- Fixed random generation of group IDs for Kafka consumer groups, which led to wrong partition distribution when using multiple RIG nodes. Consumers now have the same ID, which can be changed via an environment variable and defaults to `rig`.
- When forwarding an HTTP request, the `Host` request header is now set to the `target_url` defined by the proxy configuration. #188
- Fixed the missing `swagger.json` file in the production Docker image.
- Added missing CORS headers for the Kafka/Kinesis target type when not using `response_from`.
- Fixed schema registry validation when using binary messages in the Kafka consumer. #202
- Forwarding events to HTTP did not contain (all) Kafka messages, as the Kafka consumer group ID was shared with the consumer for forwarding events to frontends. #206
- Endpoints configured with target `kafka` or `kinesis` now expect a different body format (that is, the previous format is deprecated). This aligns the request body format with the other endpoints that accept CloudEvents. For example, instead of using this:

  ```json
  {
    "partition": "the-partition-key",
    "event": {
      "specversion": "0.2",
      "type": "what_has_happened",
      "source": "ui",
      "id": "123"
    }
  }
  ```

  you should put the partition key in the CloudEvent's "rig" extension instead:

  ```json
  {
    "specversion": "0.2",
    "rig": { "target_partition": "the-partition-key" },
    "type": "what_has_happened",
    "source": "ui",
    "id": "123"
  }
  ```
## [2.1.1] - 2019-03-27
- When using the proxy, RIG will now add an additional `Forwarded` header. #113
- Increased the maximum length of a header value in HTTP requests to 16384 to support long tokens for SAML.
- HTTPS certificates may now be passed using absolute paths. (Previously, the locations of the HTTPS certificates were limited to the OTP applications' `priv` directories `rig_api/priv/cert` and `rig_inbound_gateway/priv/cert`.) Additionally, for security reasons we no longer include the self-signed certificate with the Docker image. Please adapt your environment configuration accordingly. #151 #182
- Validation errors for SSE & WS connections and the subscriptions endpoint should now be a lot more helpful. Invalid JWTs, as well as invalid subscriptions, cause the endpoints to respond with an error immediately. #54 #164
- Parsing of JSON files in the proxy module - `api.id` was expected to be an atom, but when using files it's a string.
- Kinesis: Support for CloudEvents versions 0.1 and 0.2.
- Fixed the channels example to work with the latest RIG API changes.
- Fixed the SSE/WS examples to use JWT-inferred subscriptions correctly.
## [2.1.0] - 2019-02-15
- Prometheus monitoring endpoint. #96
- The proxy configuration can now also be passed as a JSON string. This makes it possible to run the Docker image in environments where mounting a file in a container is not possible. #159
- Rate limiting. #144
## [2.0.2] - 2019-01-20
- Upgraded a dependency to fix the Docker build. #149
## [2.0.1] - 2019-01-20
- A library upgrade caused idle SSE connections to time out after 60 seconds. This timeout is now disabled. #148
## [2.0.0] - 2019-01-16
- HTTP/2 and HTTPS support. #34
- The SSE and WebSocket endpoints now take a "subscriptions" parameter that allows creating (manual) subscriptions (a JSON-encoded list). This has the same effect as establishing a connection and calling the subscriptions endpoint afterwards.
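As a sketch, the parameter is passed JSON-encoded in the query string when establishing the connection; the exact connection path and the event type shown here are illustrative, not taken from the docs:

```
/_rig/v1/connection/sse?subscriptions=[{"eventType":"greeting"}]
```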
- OpenAPI (Swagger) documentation for RIG's internal API. #116
- Support for the CloudEvents v0.2 format. #112
- In API definitions regular expressions can now be used to define matching request paths. Also, request paths can be rewritten (see api.ex for an example). #88
- The SSE and WebSocket endpoints' "token" parameter is renamed to "jwt" (to not confuse it with the connection token).
- When forwarding requests, RIG related meta data (e.g. correlation ID) in CloudEvents is now put into an object under the top-level key "rig". Note that in terms of the current CloudEvents 0.2 specification this makes "rig" an extension. Also, all RIG related keys have been renamed from snake_case to camelCase.
- Previously, API definitions for the proxy turned on the security check for endpoints with `not_secured: false`, which is a bit confusing -- changed to the more readable form `secured: true`.
- The "Bearer" token type is no longer assumed when no access token type is prepended in the Authorization header. Consequently, a client is expected to explicitly use "Bearer" when sending its JWT authorization token. For more details, see RFC 6749.
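For illustration, an endpoint in an API definition would then carry the new flag like this; the surrounding structure is sketched from memory and may not match the exact schema:

```json
{
  "id": "my-api",
  "version_data": {
    "default": {
      "endpoints": [
        { "id": "get-foo", "method": "GET", "path": "/foo", "secured": true }
      ]
    }
  }
}
```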
- All events that RIG creates are now in CloudEvents v0.2 format (before: CloudEvents v0.1).
- When using Kafka or Kinesis as the target, connection related data is added to the event before publishing it to the respective topic/partition. With the introduction of CloudEvents v0.2, RIG now follows the CloudEvent extension syntax with all fields put into a common top-level object called "rig". Additionally, the object's field names have been changed slightly to prevent simple mistakes like case-sensitivity issues. Also, the expected request body fields have been renamed to be more descriptive. To that end, usage information returned as plaintext should help the API user in case of a Bad Request.
- Extractor configuration reload
- Fixed response to CORS related preflight request.
## [2.0.0-beta.2] - 2018-11-09
- JWT now supports RS256 algorithm in addition to HS256. #84
- Support Kafka SSL and SASL/Plain authentication. #79
- Add new endpoints at `/_rig/v1/` for subscribing to CloudEvents using SSE/WS, for creating subscriptions to specific event types, and for publishing CloudEvents. #90
- Expose setting for proxy response timeout. #91
- Subscriptions inference using JWT on SSE/WS connection and subscription creation. #90
- Allow publishing events to Kafka and Kinesis via reverse-proxy HTTP calls. Optionally, a response can be waited for (using a correlation ID).
- Simple event subscription examples for SSE and WS.
- Kafka/Kinesis firehose - set the topic/stream to consume from, and an HTTP request is invoked whenever an event is consumed.
- SSE heartbeats are now sent as comments rather than events, and events without data carry an empty data line to improve cross-browser compatibility. #64
- General documentation and outdated info.
- Previous SSE/WS communication via Phoenix channels.
- Events that don't follow the CloudEvents spec are no longer supported (easy migration: put your event in a CloudEvent's `data` field).
## [2.0.0-beta.1] - 2018-06-21
- Amazon Kinesis integration. #27
- Use lazy logger calls for debug logs.
- Format (most files) using Elixir 1.6 formatter.
- Add new endpoint `POST /messages` for sending messages (=> Kafka is no longer a hard dependency).
- Add a dedicated developer guide.
- Release configuration in `rel/config.exs` and custom `vm.args` (based on what distillery is using). #29
- Production configuration for peerage to use DNS discovery. #29
- Module for auto-discovery, using the `Peerage` library. #29
- Kubernetes deployment configuration file. #29
- Smoke tests setup and test cases for API Proxy and Kafka + Phoenix messaging. #42
- Kafka consumer ready check utility function. #42
- List of all environment variables possible to set in `guides/operator-guide.md`. #36
- Possibility to set logging level with env var `LOG_LEVEL`. #49
- Variations of Dockerfiles - basic version and AWS version. #44
- Helm deployment chart. #59
- Proxy is now able to do request header transformations. #76
- Endpoint for terminating a session no longer contains user id in path.
- Convert to umbrella project layout.
- Move documentation from `doc/` to `guides/` as the former is the default for ex_doc output.
- Revised request logging (currently Kafka and console as backends).
- Disable WebSocket timeout. #58
- Dockerfile to use custom `vm.args` file & removed `mix release.init` step. #29
- Make presence channel respect the `JWT_USER_FIELD` setting (previously hardcoded to "username").
- Set proper environment variable for Phoenix server: `INBOUND_PORT`. #38
- Set proper environment variable for Phoenix server: `API_PORT`. #38
- Channels example fixed to be compatible with version 2.0.0. #40
- User defined query auth values are no longer overridden by the `JWT` auth type.
- Handle content-type correctly. #61
- More strict regex match for routes in proxy. #76
- Downcased response headers to avoid duplicates in proxy. #76
## [1.1.0] - 2018-01-11
- Basic Travis configuration. #17
- Configuration ADR document. #19
- Websocket and SSE channels example. #22
- Maintain changelog file. #25
- Increase default rate limits. #16
- Make producing of Kafka messages in proxy optional (and turned off by default). #21
- Fix Travis by disabling credo rule `Design.AliasUsage`. #18
- Add `mix docs` script to generate documentation of code base. #6
- Add ethics documentation such as code of conduct and contribution guidelines. #6
- Update configuration to be able to modify almost anything by environment variables on RIG start. #5
- Rework Dockerfile to use multistage approach for building RIG Docker image. #9
- Update entire code base to use the `rig` keyword. #13
- Disable Origin checking. #12