In order for reads not to miss events during concurrent writes, there needs to be some mechanism to serialise writes such that events being read are totally ordered.
Depending on use case and performance requirements, this total ordering could be at:
- log level;
- category level; or
- stream level.
The coarser-grained ordering levels implicitly provide ordering guarantees for the levels below them, i.e., log level ordering by definition provides category and stream level ordering. The finer-grained ordering levels remove ordering guarantees for the levels above them, i.e., if category level ordering is used, there can be no ordering guarantee for reads at the log level.
The choice of ordering level trades off write performance against read flexibility and freshness. To provide log level ordering, no two events can be written to the log in parallel; all writes must happen in series. This reduces the write throughput of the event store and potentially results in write queueing and therefore increased write latency. Reducing the ordering level increases write throughput but prevents reliable reads above that ordering level. For example, a category ordering level allows distinct categories to be written to in parallel but prevents reliable log level reads.
To represent total ordering within the event log, there needs to be some strictly increasing attribute associated with each event that represents its position in the sequence of events, say the sequence number.
There are a few ways to implement such reliable reads whilst keeping reads as up-to-date as possible:
In ACID-compliant databases, or those with transaction isolation support, the serializable isolation level can be used to serialise writes. This guarantees a strictly increasing sequence number, since conflicting transactions cannot interleave. Upon detecting conflicting writes, an exception is typically thrown, allowing the application to retry the transaction. This approach only supports full log level ordering rather than the lower ordering levels.
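The exception/retry model above can be sketched as a generic retry wrapper. This is an illustrative sketch, not the project's API: `SerializationConflict` stands in for whatever error the database driver raises on a serialization failure (e.g. SQLSTATE 40001 in Postgres), and `flaky_txn` is a fake transaction used only to demonstrate the retry behaviour.

```python
import random
import time


class SerializationConflict(Exception):
    """Stand-in for the error a driver raises when a SERIALIZABLE transaction aborts."""


def with_serialization_retry(txn, max_attempts=5, base_delay=0.01):
    """Run `txn` (a callable performing one whole transaction), retrying on conflict.

    The entire transaction is re-executed on each attempt, since the database
    rolls back everything when it detects a conflict.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return txn()
        except SerializationConflict:
            if attempt == max_attempts:
                raise
            # Exponential backoff with jitter before retrying the whole transaction.
            time.sleep(base_delay * (2 ** attempt) * random.random())


# Demonstration: a transaction that conflicts twice before committing.
attempts = {"count": 0}

def flaky_txn():
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise SerializationConflict()
    return "committed"

result = with_serialization_retry(flaky_txn)
```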
In databases that support application defined locks, such as Postgres' advisory locks, locks at log, category or stream level can be used. The locks result in a strictly increasing sequence number at the level of the lock, but not at higher levels. Upon concurrent writes, each write will block and wait (with an optional timeout) until a lock can be taken such that writes are serialised at the level of the lock. If a timeout of zero is used, an exception/retry model can be used. For log level locking, a table-level lock could also be used.
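As a rough in-process analogue of scoped advisory locking, the sketch below keys a lock on the ordering scope, so writes serialise only within that scope. The key shapes and class name are illustrative; in Postgres itself this role would be played by `pg_advisory_xact_lock` (blocking) or `pg_try_advisory_xact_lock` (the zero-timeout, exception/retry style).

```python
import threading
from collections import defaultdict


class ScopedLocks:
    """In-process analogue of advisory locks, keyed by ordering scope.

    A key of ("log",) serialises all writes; ("category", name) serialises
    writes within one category; ("stream", name) within one stream.
    """

    def __init__(self):
        self._guard = threading.Lock()
        self._locks = defaultdict(threading.Lock)

    def acquire(self, key, timeout=-1):
        with self._guard:
            lock = self._locks[key]
        # timeout=-1 blocks indefinitely; timeout=0 gives the non-blocking,
        # exception/retry style of behaviour described above.
        if not lock.acquire(timeout=timeout):
            raise TimeoutError(f"could not take lock for {key!r}")
        return lock


locks = ScopedLocks()
lock = locks.acquire(("category", "orders"))
# ... allocate the next sequence number for this category and write events ...
lock.release()
```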
If read freshness is not paramount, there are some alternative ways to implement this:
Reads can trail the head of the log / category / stream by some time t, such that no event with a time greater than now() - t is queried. In this model, reads are reliable so long as t is large enough that no write takes longer than it. This is not a guarantee, though, only an imperfect protection against missed reads. It does, however, allow for maximum write throughput and reliable reads at all levels.
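A minimal sketch of such a trailing read, assuming events carry a write timestamp (the event shape and function name are illustrative):

```python
import time


def trailing_read(events, t, now=None):
    """Return only events written at or before now() - t.

    `events` is an iterable of (write_time, payload) pairs; `t` is the trail
    window in seconds, chosen to exceed the longest expected write duration.
    """
    now = time.time() if now is None else now
    cutoff = now - t
    return [e for e in events if e[0] <= cutoff]


events = [(100.0, "a"), (104.0, "b"), (109.5, "c")]
# With now=110 and a 5 second trail, only events at or before 105 are visible;
# the event at 109.5 may belong to a write that is still in flight.
visible = trailing_read(events, t=5, now=110.0)
```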
Writes can be completely unordered at write time, with an independent order-labelling process used to retroactively label events with their sequence numbers. This allows the same write throughput as the time-delay-based reads without the risk of a write exceeding the allocated time. However, in this model there is no correlation between the sequence number and the write time, so no chronology can be inferred from the sequence number; it instead represents only a consistent read ordering.
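The labelling process might look like the following sketch. The event shape and the choice of an id-based tiebreak are assumptions for illustration; the only requirement is that the labeller applies some deterministic order consistently, since the resulting sequence numbers define read order, not time order.

```python
def label_order(unlabelled, next_seq=1):
    """Retroactively assign sequence numbers to events written without ordering.

    The sort key here (event id) is arbitrary but deterministic; the sequence
    numbers it produces give readers a consistent total order even though it
    may not match write chronology.
    """
    labelled = []
    for event in sorted(unlabelled, key=lambda e: e["id"]):
        labelled.append({**event, "sequence_number": next_seq})
        next_seq += 1
    return labelled
```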
So far, log-level ordering has been implemented through the use of an in-memory lock for the InMemoryStorageAdapter and through the use of a table level lock for the PostgresStorageAdapter.
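In the spirit of that in-memory approach, a single process-wide lock is enough to give strictly increasing sequence numbers across the whole log. This sketch uses illustrative names, not the project's actual InMemoryStorageAdapter API:

```python
import threading


class InMemoryLog:
    """Log-level ordering via one in-process lock (names are illustrative)."""

    def __init__(self):
        self._lock = threading.Lock()
        self._events = []

    def append(self, event):
        # All writers serialise on the one lock, so sequence numbers are
        # strictly increasing across the whole log.
        with self._lock:
            sequence_number = len(self._events) + 1
            self._events.append({**event, "sequence_number": sequence_number})
            return sequence_number

    def read(self, after=0):
        """Return events with a sequence number greater than `after`."""
        return [e for e in self._events if e["sequence_number"] > after]


log = InMemoryLog()
log.append({"type": "opened"})
```

A table-level lock, as used for the PostgresStorageAdapter, plays the same role on the database side: it serialises all appends to the events table, at the cost of ruling out the finer-grained category- and stream-level parallelism discussed above.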