- Delete event stream (#203).
- Introduce `mix event_store.migrations` task to list migration status (#207).
- Remove distributed registry (#210).
- Hibernate subscription process after inactivity (#214).
- Runtime event store configuration (#217); see the sketch after this list.
- Shared database connection pools (#216).
- Shared database connection for notifications (#225).
- Transient subscriptions (#215).
- Improve resilience when database connection is unavailable (#226).
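A minimal sketch of the runtime event store configuration mentioned above (#217), assuming the event store module supports an `init/1` callback invoked at startup; the module, application, and environment variable names are placeholders:

```elixir
# Hedged sketch only: runtime configuration via an assumed init/1 callback.
defmodule MyApp.EventStore do
  use EventStore, otp_app: :my_app

  # Amend the configuration when the event store starts, e.g. reading
  # credentials from the environment (placeholder variable name).
  def init(config) do
    {:ok, Keyword.put(config, :username, System.get_env("EVENTSTORE_USERNAME"))}
  end
end
```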
This release requires a database migration to be run. Please read the Upgrading an EventStore guide for details on how to migrate an existing database.
Usage of the `EventStore.Tasks.Init` task to initialise an event store database has been changed as follows:
Previous usage:
```elixir
:ok = EventStore.Tasks.Init.exec(MyApp.EventStore, config, opts)
```
Usage now:
```elixir
:ok = EventStore.Tasks.Init.exec(config)
:ok = EventStore.Tasks.Init.exec(config, opts)
```
- Support appending events to a stream with `:any_version` concurrently (#209).
- Support Postgres schemas (#182).
- Dynamic event store (#184).
- Add `timeout` option to config (#189); see the config sketch after this list.
- Namespace advisory lock to prevent clash with other applications (#166).
- Use database lock to prevent migrations from running concurrently (#204).
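A hedged config sketch of the Postgres schema (#182) and `timeout` (#189) options listed above, assuming an application-specific event store module; the application, module, and values are placeholders:

```elixir
# In config/config.exs — sketch only: config keys assumed for the features
# above, with placeholder names and values.
import Config

config :my_app, MyApp.EventStore,
  schema: "example",
  timeout: 15_000
```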
The following EventStore API functions have been changed where previously (in v1.0 and earlier) the last argument was an optional timeout (a non-negative integer or `:infinity`). This has been changed to an optional keyword list, which may include a timeout (e.g. `[timeout: 5_000]`). The `stream_forward` and `stream_all_forward` functions now also require the optional `read_batch_size` argument to be provided as part of the options keyword list.
These changes were required to support dynamic event stores where an event store name can be included in the options to each function. If you did not provide a timeout to any of these functions then you will not need to make any changes to your code. See the example usages below for details.
- `EventStore.append_to_stream`
- `EventStore.link_to_stream`
- `EventStore.read_stream_forward`
- `EventStore.read_all_streams_forward`
- `EventStore.stream_forward`
- `EventStore.stream_all_forward`
Previous usage:
```elixir
EventStore.append_to_stream(stream_uuid, expected_version, events, timeout)
EventStore.link_to_stream(stream_uuid, expected_version, events_or_event_ids, timeout)
EventStore.read_stream_forward(stream_uuid, start_version, count, timeout)
EventStore.read_all_streams_forward(start_version, count, timeout)
EventStore.stream_forward(stream_uuid, start_version, read_batch_size, timeout)
EventStore.stream_all_forward(start_version, read_batch_size, timeout)
```
Usage now:
```elixir
EventStore.append_to_stream(stream_uuid, expected_version, events, timeout: timeout)
EventStore.link_to_stream(stream_uuid, expected_version, events_or_event_ids, timeout: timeout)
EventStore.read_stream_forward(stream_uuid, start_version, count, timeout: timeout)
EventStore.read_all_streams_forward(start_version, count, timeout: timeout)
EventStore.stream_forward(stream_uuid, start_version, read_batch_size: read_batch_size, timeout: timeout)
EventStore.stream_all_forward(start_version, read_batch_size: read_batch_size, timeout: timeout)
```
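As noted above, the options keyword list can also include the event store name when using a dynamic event store. A hedged sketch, with `:eventstore1` as a placeholder name:

```elixir
# Sketch only: passing a dynamic event store name in the options keyword list,
# alongside an optional timeout (:eventstore1 is a placeholder).
EventStore.append_to_stream(stream_uuid, expected_version, events,
  name: :eventstore1,
  timeout: 5_000
)
```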
This release requires a database migration to be run. Please read the Upgrading an EventStore guide for details on how to migrate an existing database.
- Use event's stream version when appending events to a stream (#202).
- Prevent double supervision by starting / stopping supervisor manually (#194).
- Use `DynamicSupervisor` for subscriptions.
- Fix `EventStore.Registration.DistributedForwarder` state when running multiple nodes (#186).
- Support multiple event stores (#168).
- Add support for `queue_target` and `queue_interval` database connection settings (#172).
- Add support for `created_at` values to be of type `NaiveDateTime` (#175).
- Fix function clause error on `DBConnection.ConnectionError` (#167).
Follow the upgrade guide to define and use your own application-specific event store.
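A minimal sketch of an application-specific event store module (#168), assuming the `use EventStore` macro with an `otp_app` option; the module and application names are placeholders:

```elixir
# Sketch only: an application-specific event store module, as described in the
# upgrade guide (module and otp_app names are placeholders).
defmodule MyApp.EventStore do
  use EventStore, otp_app: :my_app
end
```

Its configuration then lives under the owning application, e.g. `config :my_app, MyApp.EventStore, ...`.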
Upgrade your existing EventStore database by running:
```
mix event_store.migrate
```
Note: The migrate command is idempotent and can be safely run multiple times.
You can drop and recreate an EventStore database by running:
```
mix do event_store.drop, event_store.create, event_store.init
```
- Fix issue with concurrent subscription partitioning (#162).
- Reliably start `EventStore.Notifications.Supervisor` on `:global` name clash (#165).
- Stop the Postgrex database connection process in the `mix event_store.init` and `mix event_store.migrate` tasks after use, to prevent IEx shutdown when the tasks are run together (as `mix do event_store.init, event_store.migrate`).
- Ensure the event store application doesn't crash when the database connection is lost (#159).
- Add `:socket` and `:socket_dir` config options (#132).
- Rename `uuid` dependency to `elixir_uuid` (#135).
- Subscription concurrency (#134); see the sketch after this list.
- Send a `:subscribed` message to all subscribers connected to a subscription (#136).
- Update to `postgrex` v0.14 (#143).
- Replace `:poison` with `:jason` for JSON event data & metadata serialization (#144).

  To support this change you will need to derive the `Jason.Encoder` protocol for all of your events. This can be done by adding `@derive Jason.Encoder` before defining the struct in every event module:

  ```elixir
  defmodule Event1 do
    @derive Jason.Encoder
    defstruct [:id, :data]
  end
  ```

  Or by using `Protocol.derive/2` for each event, as shown below:

  ```elixir
  require Protocol

  for event <- [Event1, Event2, Event3] do
    Protocol.derive(Jason.Encoder, event)
  end
  ```
- Use a timeout of `:infinity` for the migration task (`mix event_store.migrate`) to allow the database migration to run for longer than the default 15 seconds.
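A hedged sketch of subscription concurrency (#134), assuming a `concurrency_limit` subscription option; the subscription name and limit are placeholders:

```elixir
# Sketch only: a subscription shared by up to two concurrent subscribers
# (#134); the :concurrency_limit option name is assumed.
{:ok, subscription} =
  EventStore.subscribe_to_stream(stream_uuid, "example_subscription", self(),
    concurrency_limit: 2
  )
```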
- Socket closing causes the event store to never receive notifications (#130).
- Subscriptions with a selector function should notify pending events after all have been filtered (#131).
- Support system environment variables for all config (#115).
- Allow subscriptions to filter the events they receive (#114); see the sketch after this list.
- Allow callers to omit `event_type` when the event data is a struct (#118).
- Remove dependency on `psql` for the `event_store.create`, `event_store.init`, `event_store.migrate`, and `event_store.drop` mix tasks (#117).
- Support query parameters in the URL for the database connection (#119).
- Improve typespecs and include Dialyzer in Travis CI build (#121).
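A hedged sketch of the subscription event filter (#114), assuming a `selector` option that takes a predicate function; the subscription name and predicate are placeholders:

```elixir
# Sketch only: only events for which the selector function returns a truthy
# value are delivered to the subscriber (#114); the :selector option name is
# assumed.
{:ok, subscription} =
  EventStore.subscribe_to_stream(stream_uuid, "filtered_subscription", self(),
    selector: fn %EventStore.RecordedEvent{data: data} -> is_map(data) end
  )
```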
- Add JSONB support (#86).
- Add `:ssl` and `:ssl_opts` config params (#88).
- Make the `mix event_store.init` task do nothing if the events table already exists (#89).
- Timeout issue when using `EventStore.read_stream_forward` (#92).
- Replace `:info` level logging with `:debug` (#90).
- Dealing better with the Poison dependency (#91).
- Publish events directly to subscriptions (#93).
- Use PostgreSQL advisory locks to enforce only one subscription instance (#98).
- Remove stream process (#99).
- Use PostgreSQL's `NOTIFY`/`LISTEN` for event pub/sub (#100).
- Link existing events to another stream (#103); see the sketch after this list.
- Subscription notification message once successfully subscribed (#104).
- Transient subscriptions (#105).
- Transient subscription event mapping function (#108).
- Turn EventStore `mix` tasks into generic tasks for use with Distillery during deployment (#111).
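A hedged sketch of linking existing events to another stream (#103) and of a transient subscription with an event mapping function (#105, #108); the stream identifiers, `event_ids`, and the `mapper` option are assumptions based on the items above:

```elixir
# Sketch only: link references to existing events into another stream (#103);
# the stream identifier, expected version, and event_ids are placeholders.
:ok = EventStore.link_to_stream("audit-stream", 0, event_ids)

# Sketch only: a transient subscription whose mapper transforms each event
# before delivery (#105, #108); the :mapper option name is assumed.
:ok = EventStore.subscribe("example-stream", mapper: fn event -> event.data end)
```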
Upgrade your existing EventStore database by running:
```
mix event_store.migrate
```
You can drop and recreate an EventStore database by running:
```
mix do event_store.drop, event_store.create, event_store.init
```
- Use `Supervisor.child_spec` with an explicit `id` for Registry processes to support Elixir v1.5.0 and v1.5.1 (v1.5.2 contains a fix for this issue).
- EventStore migrate mix task reads migration SQL scripts from the app dir (`Application.app_dir(:eventstore)`).
- Use a UUID field for the `event_id` column, rename the existing field to `event_number` (#75).
- Use the `uuid` data type for event `correlation_id` and `causation_id` (#57).
- Mix task to migrate an existing EventStore database (`mix event_store.migrate`).
- Append to stream is limited to 7,281 events in a single request (#77).
Upgrade your existing EventStore database by running: `mix event_store.migrate`

Or you can drop and recreate the EventStore database by running: `mix do event_store.drop, event_store.create, event_store.init`
- Publisher only notifies first pending event batch (#81).
- Allow optimistic concurrency check on write to be optional (#31).
- Fix issue where subscription doesn't immediately receive events published while transitioning between catch-up and subscribed. Any missed events would be noticed and replayed upon next event publish.
- Support for running on a cluster of nodes using Swarm for process distribution (#53).
- Add a `stream_version` column to the `streams` table. It is used for stream info querying and optimistic concurrency checks, instead of querying the `events` table.
Run the schema migration `v0.11.0.sql` script against your event store database.
- Fix for ack of last seen event in stream subscription (#66).
- Writer per event stream (#55).
You must run the schema migration `v0.10.0.sql` script against your event store database.
- Use DBConnection's built-in support for connection pools (using poolboy).
- Add `causation_id` alongside `correlation_id` for events (#48). To migrate an existing event store database, execute the `v0.9.0.sql` script.
- Allow single stream, and all streams, subscriptions to provide a mapper function that maps every received event before sending it to the subscriber.

  ```elixir
  EventStore.subscribe_to_stream(stream_uuid, "subscription", subscriber, mapper: fn event -> event.data end)
  ```
- Subscribers now receive an `{:events, events}` tuple and should acknowledge receipt with:

  ```elixir
  EventStore.ack(subscription, events)
  ```
- Add Access functions to the `EventStore.EventData` and `EventStore.RecordedEvent` modules (#37).
- Allow the database connection URL to be provided as a system variable (#39).
- Writer not parsing database connection URL from config (#38).
- Stream events from a single stream forward.
- Subscriptions use Elixir streams to read events when catching up.
- Upgrade the `fsm` dependency to v0.3.0 to remove Elixir 1.4 compiler warnings.
- Stream all events forward (#34).
- Allow snapshots to be deleted (#26).
- Subscribe to a single stream, or all streams, from a specified start position (#17).
- Subscriptions that are at max capacity should wait until all pending events have been acknowledged by the subscriber before catching up with any unseen events.
- Use IO lists to build insert events SQL statement (#23).
- Use `NaiveDateTime` for each recorded event's `created_at` property.
- Read stream forward does not use count to limit returned events (#10).