
Real-time subscription

Real-time subscriptions are essential when building a successful event-sourced system. They support the following sequence of operations (sketched in code after the list):

  • The domain model emits a new event
  • The event is appended to the event store
  • The command is now handled, and the transaction is complete
  • All the read models get the new event from the store and project it if necessary
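
As a minimal sketch of that sequence (all names here, such as Booking, ImportBooking and _eventStore, are hypothetical and only mark where each step happens; this is not the Eventuous command service API):

// Hypothetical command handler covering steps 1-3 of the sequence above
public async Task Handle(ImportBooking cmd, CancellationToken ct) {
    // 1. The domain model emits new events
    var booking = Booking.Import(cmd.BookingId, cmd.RoomId, cmd.Period);

    // 2. The new events are appended to the event store
    await _eventStore.AppendEvents(cmd.BookingId, booking.Changes, ct);

    // 3. The command is handled and the transaction is complete.
    //    Step 4 happens asynchronously: a subscription delivers the stored events
    //    to every read model that needs to project them.
}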

Most of the time, subscriptions are used for two purposes:

  1. Deliver events to reporting models
  2. Emit integration events

Subscriptions can start from any position of a stream, but normally you'd subscribe from the beginning, which allows you to process all the historical events. For integration purposes, however, you would usually subscribe from "now", as emitting historical events to other systems might produce undesired side effects.
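
As a sketch of that difference (the types below are purely illustrative, not the actual Eventuous option set), the two cases differ only in the position a subscription starts from when it has no checkpoint yet:

// Illustrative only: the same subscription mechanism, two different starting positions
var readModels  = new SubscriptionSettings("ReadModels", StartPosition.Beginning);  // replay all history to build read models
var integration = new SubscriptionSettings("Integration", StartPosition.Now);       // only new events; don't re-emit history

enum StartPosition { Beginning, Now }
record SubscriptionSettings(string SubscriptionId, StartPosition StartFrom);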

One subscription can serve multiple event handlers. Normally, you would identify a group of event handlers that need to be triggered together and group them within one subscription. This way you avoid situations where, for example, two read models get updated at different speeds and show confusing information when both are displayed on the same screen.

caution

Example (real-life horror story)

Imagine an event stream that contains positions of a car as well as car state changes, like Parked, Moving or Idle. The system also has two independent subscriptions serving two read model projections: one is the last known car location, and the other one is the car state. Those subscriptions will, with high probability, get out of sync, simply because position events are much more frequent than state updates. When using both of those read models on a map, you easily get a situation where the car pointer is moving whilst the car is shown as parked.

By combining those two projections in one subscription, both might lag behind the stream, but the user experience will be much better, as it will never be confusing. We could also say that those two projections belong to the same projection group.
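
In Eventuous, grouping projections like this means registering both handlers on the same subscription. Below is a sketch that assumes the dependency-injection registration API of the EventStoreDB all-stream subscription; the handler names are made up:

// One subscription, one checkpoint, two handlers: both projections receive
// the same events in the same order and cannot drift apart from each other.
builder.Services.AddSubscription<AllStreamSubscription, AllStreamSubscriptionOptions>(
    "VehicleProjections",
    b => b
        .AddEventHandler<CarLocationProjection>() // last known car location
        .AddEventHandler<CarStateProjection>()    // Parked / Moving / Idle
);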

Subscriptions need to maintain their own checkpoints, so when the service that hosts a subscription restarts, it resumes receiving events from the last known position in the stream.
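
Conceptually, the checkpoint logic looks like the sketch below. The interface is simplified for illustration and is not the actual Eventuous checkpoint store abstraction:

// Simplified sketch of a checkpoint store used by a subscription
public interface ISubscriptionCheckpointStore {
    // The last processed position for a given subscription, or null if it never ran
    ValueTask<ulong?> GetLastPosition(string subscriptionId, CancellationToken ct);

    // Persist the position of the last successfully processed event (or batch)
    ValueTask StorePosition(string subscriptionId, ulong position, CancellationToken ct);
}

// On start, the subscription resumes from the stored position (or its configured default).
// After events are handled, it stores the new position, so a restarted service continues
// from the last known position instead of replaying the whole stream.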

Most often, you'd want to subscribe to the global event stream, so you can build read models that compose information from different aggregates. Eventuous offers the All stream subscription for this use case. In some cases, you'd need to subscribe to a regular stream using the stream subscription.
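
The registration for a single-stream subscription looks similar. Another sketch: it assumes the subscription builder exposes a Configure method for options and that the stream subscription options carry the stream name (exact option and type names may differ); the identifiers are made up:

// Subscribe to one specific stream instead of the global stream
builder.Services.AddSubscription<StreamSubscription, StreamSubscriptionOptions>(
    "PaymentDocuments",
    b => b
        .Configure(options => options.StreamName = "PaymentEvents")
        .AddEventHandler<PaymentDocumentProjection>()
);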

In Eventuous, subscriptions are specific to the event store implementation. In addition, Eventuous has subscriptions to message brokers like Google PubSub and RabbitMQ for integration purposes.

The wrong way

One of the common mistakes people make when building an event-sourced application is using an event store that is not capable of handling real-time subscriptions. It forces developers to engage some sort of message bus to deliver new events to subscribers. There are quite a few issues with that approach, but the most obvious one is the two-phase commit.

Bad bus

Read more about the Bad Bus →

When using two distinct pieces of infrastructure in one transaction, you risk one of those operations failing. Let's use the following example code, which is very common:

await _repository.Save(newEvents); // 1. append the new events to the event store
await _bus.Publish(newEvents);     // 2. publish the same events to a message bus (a separate, non-transactional step)

If the second operation fails, the command side of the application remains consistent. However, any read model that projects those events will not be updated. Essentially, the reporting model becomes inconsistent with the transactional model. The worst part is that the reporting model will never recover from the failure.

As mentioned, there are multiple issues with using a message bus as the transport to deliver events to reporting models, but we won't cover them on this page.

The easiest way to solve the issue is to use a database that supports real-time subscriptions to event streams out of the box. That's why we use EventStoreDB as the primary event store implementation.

The right way

It is not hard to avoid two-phase commits and deliver domain events reliably if you use a proper event store. The aim is to achieve the following architecture:

Consistent event flow

Why is this flow "consistent"? Because the command handling process always finishes with an append operation to the event store (unless the command fails). There is no explicit Publish or Produce call after the event is persisted. A proper event store must provide a way to subscribe and receive all new events as they appear in the store. As the append operation of the event store is transactional, such a subscription gets each event exactly once, provided the subscriber correctly acknowledges processed events back to the subscription.
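
Contrast this with the bad example above: the command handler's last side effect is the append, and there is no second infrastructure call that can fail independently.

await _repository.Save(newEvents); // the append is the only side effect of handling the command
// No _bus.Publish call follows: subscriptions pick up the new events from the store
// and deliver them to read models and integration handlers asynchronously.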

Subscriptions have many purposes. Most often, you would use a subscription to project domain events to your query models (read models). You can also use subscriptions to transform and publish integration events (see Gateway).

Subscriptions can also be used for integration purposes in combination with Producers.

Implementation

Eventuous implements subscriptions for different kinds of infrastructure (transports). As each transport is different, both the configuration and the way subscriptions work can differ. Carefully read the guidelines for each transport to understand which one fits your use case.