Because every Wedding RSVP website needs to follow DDD, CQRS, Hexagonal Architecture, Event Sourcing, and be deployed on Lambda.
🌎 Website | 📷 Gallery | 🏗️ Infrastructure
This application (and associated infrastructure) documents an approach to building complex systems which require the benefits that DDD, Hexagonal Architecture and Event Sourcing provide. On top of this, it shows how such an application can be combined with Terraform and deployed in a Serverless manner.
Using PHP and the Symfony framework, it highlights how such an approach can be laid out, coupled with a sufficient testing strategy and local development environment. Some topics and features covered within this application are:
- Use of PHP 8.1 and Bref for Serverless Lambda environment.
- Docker-based local development environment, which replicates the intended Lambda platform.
- GNU make used to assist in running the application locally and performing CI-based tasks.
- CI pipeline developed using GitHub workflows, running the provided tests and deploying the application to the given stage-environments (staging and production).
- Implements the desired Message buses using Symfony Messenger, with asynchronous transport being handled by SQS/Lambda.
- Runtime secrets pulled in via Secrets Manager (and cached using APCu) using a Symfony environment variable processor (a sketch of such a processor follows this list).
- Email communication sent using Symfony Mailer (via Gmail), with local testing achieved using MailHog.
- Webpack Encore used to transpile and bundle assets (TypeScript, CSS and Images) used throughout the website.
- Event stream snapshots generated and validated within Application level tests, providing regression testing for present stream structures.
- Automated aggregate event diagrams created using the Application level Event stream snapshots, combined with Graphviz.
- Automated documentation/diagrams generated for the present Commands within the system.
- Use of Deptrac to ensure that the desired Hexagonal Architecture layering is maintained.
- Psalm and PHP Coding Standards Fixer employed to ensure correctness and coding standards maintained.
- DynamoDB configured to manage client sessions within a deployed stage-environment.
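Touching on the secrets handling mentioned above, here is a minimal sketch of what such an environment variable processor could look like, assuming the AsyncAws Secrets Manager client and the APCu extension; the class name, env var prefix and cache TTL below are illustrative, not the repository's actual API.

```php
<?php

declare(strict_types=1);

namespace App\Infrastructure\Symfony;

use AsyncAws\SecretsManager\SecretsManagerClient;
use Symfony\Component\DependencyInjection\EnvVarProcessorInterface;

// Illustrative processor: resolves "%env(secret:MY_SECRET)%" style variables
// by fetching the named secret from Secrets Manager, caching the value in
// APCu to avoid a network call on every Lambda invocation.
final class SecretsManagerEnvVarProcessor implements EnvVarProcessorInterface
{
    private const CACHE_TTL_SECONDS = 300; // assumed cache lifetime

    public function __construct(private SecretsManagerClient $client)
    {
    }

    public function getEnv(string $prefix, string $name, \Closure $getEnv): mixed
    {
        $secretId = $getEnv($name); // the underlying env var holds the secret's id/ARN

        $cached = apcu_fetch('secret:' . $secretId, $hit);
        if ($hit) {
            return $cached;
        }

        $value = $this->client
            ->getSecretValue(['SecretId' => $secretId])
            ->getSecretString();

        apcu_store('secret:' . $secretId, $value, self::CACHE_TTL_SECONDS);

        return $value;
    }

    public static function getProvidedTypes(): array
    {
        return ['secret' => 'string'];
    }
}
```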
Prerequisite: ensure you have Docker installed on your local machine.
```
make start
make open-web
make can-release
```
All available actions within the local development environment are available (and documented) within the Makefile by running `make help`.
The application follows CQRS for interaction between the Ui and Application, Hexagonal Architecture to decouple the Infrastructural concerns, and DDD/Event Sourcing to model the Domain.
Following Hexagonal Architecture, the layers have been defined like so:
Based on the above layers, we employ three distinct message buses (Command, Aggregate Event and Domain Event), modelling the Aggregates using Event Sourcing. The following diagram highlights how these three buses interact during a typical Command/Query lifecycle.
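To make the bus interaction concrete, below is a hedged sketch of how a Command Handler could publish resulting Aggregate Events; the command, event and handler names are invented for illustration, as are the bus names. With multiple buses defined in the Messenger configuration, Symfony autowires a specific bus by matching the camel-cased argument name (e.g. `$aggregateEventBus` for a bus named `aggregate.event.bus`).

```php
<?php

declare(strict_types=1);

namespace App\Application\FoodChoice;

use Symfony\Component\Messenger\MessageBusInterface;

// Hypothetical command and aggregate event; the real names will differ.
final class SubmitGuestFoodChoice
{
    public function __construct(public string $inviteId, public string $choice)
    {
    }
}

final class GuestFoodChoiceSubmitted
{
    public function __construct(public string $inviteId, public string $choice)
    {
    }
}

// Handlers registered on the Command bus receive each dispatched command;
// Symfony autowires the desired bus by matching the argument name.
final class SubmitGuestFoodChoiceHandler
{
    public function __construct(private MessageBusInterface $aggregateEventBus)
    {
    }

    public function __invoke(SubmitGuestFoodChoice $command): void
    {
        // ...load the aggregate, guard invariants, record the change...

        // Publish the recorded aggregate event; its handlers (projections,
        // process managers) may in turn emit domain events on their own bus.
        $this->aggregateEventBus->dispatch(
            new GuestFoodChoiceSubmitted($command->inviteId, $command->choice)
        );
    }
}
```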
There are two Aggregates within the Domain (FoodChoice and Invite); the Aggregate Event flow for both is as follows:
This diagram is automatically generated based on the current implementation, using testable Event snapshots at the Command level.
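As a rough illustration of the Event Sourced Aggregates behind these diagrams (the aggregate, method and event names below are invented, not the repository's actual Domain model): state changes are recorded as Aggregate Events and applied to in-memory state, so an Aggregate can be rebuilt by replaying its event stream.

```php
<?php

declare(strict_types=1);

namespace App\Domain\Invite;

// Hypothetical aggregate event.
final class InviteRsvpSubmitted
{
    public function __construct(public string $inviteId, public bool $attending)
    {
    }
}

// Minimal event-sourcing sketch: behaviour records events, apply() mutates
// state, and recorded events are released for the Event Store to persist.
final class Invite
{
    /** @var list<object> */
    private array $recordedEvents = [];

    private bool $attending = false;

    private function __construct(private string $id)
    {
    }

    /** Rebuild the aggregate by replaying its persisted event stream. */
    public static function fromEvents(string $id, iterable $events): self
    {
        $invite = new self($id);
        foreach ($events as $event) {
            $invite->apply($event);
        }

        return $invite;
    }

    public function submitRsvp(bool $attending): void
    {
        $this->record(new InviteRsvpSubmitted($this->id, $attending));
    }

    private function record(object $event): void
    {
        $this->recordedEvents[] = $event;
        $this->apply($event);
    }

    private function apply(object $event): void
    {
        if ($event instanceof InviteRsvpSubmitted) {
            $this->attending = $event->attending;
        }
    }

    /** @return list<object> Events recorded since load, for persistence/publishing. */
    public function releaseEvents(): array
    {
        [$events, $this->recordedEvents] = [$this->recordedEvents, []];

        return $events;
    }
}
```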
Application-level Commands which are available for the Ui to interact with the Domain are presented below:
Along with the Commands and Command Handlers, this also depicts the associated Domain Events which are emitted.
The testing strategy employed within this application aims to follow a Test Pyramid, favouring testing behaviour over implementation. In doing this, we exercise most of our behavioural assertions at the Application layer, testing the public API provided by the Commands and Query services. This provides a clear description of the application's intended behaviour, whilst reducing the brittleness of the tests, as only public contracts are used.
Testing has been broken up into a similar Hexagonal Architecture layered representation as the system itself, like so:
Domain
Low-level domain testing which is heavily coupled to the current implementation. This is used in cases where you wish to have a higher level of confidence in a given implementation which cannot be easily asserted at the Application level.
Application
Unit tests (a la unit of behaviour) which use the public API exposed by the Commands and Query services to assert correctness. These are isolated from any infrastructural concerns (via test doubles) and exercise the core business logic/behaviour that the application provides. This level provides the greatest balance between asserting that the current implementation achieves the desired behaviour, whilst not being so coupled to the implementation that the tests become brittle. Depending only on the public API within these tests allows us to refactor the underlying Domain implementation going forward whilst keeping the tests intact. As such, the majority of testing is found at this level.
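As an illustration of the shape such a test can take (all collaborator names below are hypothetical, with the Event Store port satisfied by an in-memory double assumed to exist elsewhere in the test suite):

```php
<?php

declare(strict_types=1);

namespace App\Tests\Application;

use PHPUnit\Framework\TestCase;

// Hypothetical Application-level test: only the public Command/Query API is
// exercised, so the underlying Domain can be refactored without breaking it.
final class SubmitInviteRsvpTest extends TestCase
{
    public function testSubmittedRsvpIsVisibleViaTheQueryService(): void
    {
        $eventStore = new InMemoryEventStore();

        $handler = new SubmitInviteRsvpHandler($eventStore);
        $handler(new SubmitInviteRsvp(inviteId: 'invite-1', attending: true));

        $query = new InviteStatusQuery($eventStore);
        self::assertTrue($query->isAttending('invite-1'));
    }
}
```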
Infrastructure
Contractual tests to assert that a given adaptor implementation fulfils the required port's responsibility; communicating with external infrastructure (such as a database) in isolation to achieve this.
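A common way to express such contracts (sketched here with invented names, using the Event Store port shape shown later in this README) is an abstract test case that each adaptor's test extends:

```php
<?php

declare(strict_types=1);

namespace App\Tests\Infrastructure;

use PHPUnit\Framework\TestCase;

// Hypothetical contract test: every adaptor of the Event Store port must
// pass the same behavioural assertions, so each concrete test case only
// supplies its own infrastructure-backed implementation (Postgres, DynamoDB...).
abstract class EventStoreContractTest extends TestCase
{
    /** Each adaptor's test case provides its concrete implementation. */
    abstract protected function createEventStore(): EventStore;

    public function testAppendedEventsCanBeStreamedBackInOrder(): void
    {
        $store = $this->createEventStore();

        $store->append('invite-1', 0, [new \stdClass(), new \stdClass()]);

        self::assertCount(2, [...$store->stream('invite-1')]);
    }
}
```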
Ui
Full-system tests which exercise the entire system by way of common use-cases. This provides confidence at the highest level that the application is achieving the desired behaviour.
The application uses the following linting tools to maintain the desired code quality and application correctness.
- Psalm - used to provide type-checking support within PHP (`app/psalm.xml`).
- PHP Coding Standards Fixer - ensures the desired PHP code styling is maintained (`app/.php-cs-fixer.php`).
- Deptrac - ensures we adhere to the strict Hexagonal Architecture layering boundaries we have imposed (`depfile.yml`).
- Local PHP Security Checker - ensures that no known vulnerable dependencies are used within the application.
- Prettier - ensures the desired JS code style is maintained (`app/package.json`).
These tools can be run locally using `make lint`, returning a non-zero status code upon failure. This process is also completed during a `make can-release` invocation.
The application is hosted on AWS Lambda, with transient infrastructure (which changes with each deployment) being provisioned using the Serverless Framework. Resources managed at this level include Lambda functions, API Gateways and SQS event integrations. Foundational infrastructural concerns (such as networking, databases, queues etc.) are provisioned using Terraform and can be found in the related repository.
Sharing between Terraform and Serverless Framework is unidirectional, with the application resources that Serverless Framework creates being built upon the foundation that Terraform resources provision. Parameters, secrets and shared resources which are controlled by Terraform are accessible to this application via SSM parameters and Secrets Manager secrets; providing clear responsibility separation.
The application consists of an Event Store which persists aggregate events produced by invoked Commands. It also includes Projections which persist materialised views of these aggregate events, accessible via Query services. These two responsibilities do not require a shared data-store, and each can manage its own state as it sees fit.
To highlight this, I have built several persistence implementations of both the Event Store and Projections. This demonstrates the importance of good abstraction, and the benefits of layering your application using Hexagonal Architecture (Ports and Adaptors). Although only one (Postgres) is used within the production setting, as a local demonstration it is interesting to see the differences.
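The Event Store is consumed through a port; its exact definition is not reproduced in this README, but a hypothetical shape could be as follows. The expected-version parameter is what enables the optimistic-concurrency checks described under each implementation below.

```php
<?php

declare(strict_types=1);

namespace App\Domain\Event;

// Hypothetical Event Store port: each adaptor below implements this
// interface, keeping persistence details out of the Domain and Application.
interface EventStore
{
    /** @return iterable<object> The aggregate's events, oldest first. */
    public function stream(string $aggregateId): iterable;

    /**
     * @param list<object> $events
     *
     * Implementations reject the write when the stream has already moved
     * past $expectedVersion, preventing competing client updates.
     */
    public function append(string $aggregateId, int $expectedVersion, array $events): void;
}
```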
Using the provided Event Store port interface, there are the following implementations:
- Postgres
  - Stores the events in sequential order within a single table.
  - Ensures domain aggregate invariants are upheld, using version constraints to prevent competing client updates.
  - Uses a transaction to ensure that all aggregate events are persisted to the event stream in one atomic operation (a sketch of this append path follows at the end of this section).
  - Using a single-table design (with sequential ordering) allows for trivial event store streaming capabilities.
- DynamoDB
  - Stores the events within a single table, with the aggregate identifier and version mapping well to DynamoDB's partition and sort keys respectively.
  - Ensures domain aggregate invariants are upheld, using DynamoDB write constraints to prevent competing client updates.
  - The aggregate events are currently not persisted within a single write transaction, as the underlying client (AsyncAWS) does not yet support this.
  - Sequential ordering is derived from the event creation time at microsecond precision. Due to DynamoDB's behavioural properties, you are unable to maintain a sequential auto-incrementing identifier; although not ideal, this is the best we can attain with such a means.
  - Currently, a hot partition key based on this event creation time provides the sequential ordering required for streaming. A future development I wish to consider is to instead store the events within S3 (possibly via DynamoDB streams for durability), and then query them using Athena when streaming operations are required; in doing this, we would move away from the hot key present in the current form.
- EventStoreDB
  - Communicates with the EventStoreDB over HTTP via Atom.
  - Separate event streams are created per aggregate, following the naming convention `#AGGREGATE_NAME#-#AGGREGATE_ID#`.
  - Ensures domain aggregate invariants are upheld, using the expected-version constraints on event streams provided by the persistence layer.
With the HTTP and Atom communication protocols now deprecated, work to replace this with a gRPC client written in PHP would need to be carried out before putting such an implementation into production.
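To close, a hedged sketch of the Postgres append path referenced above (the table and column names are assumptions): a unique constraint on `(aggregate_id, version)` rejects competing writers, while the surrounding transaction keeps the append atomic.

```php
<?php

declare(strict_types=1);

namespace App\Infrastructure\EventStore;

// Illustrative Postgres adaptor: version numbers continue on from the
// caller's expected version, so a concurrent writer that appended first
// triggers a unique-constraint violation and the transaction rolls back.
final class PostgresEventStore
{
    public function __construct(private \PDO $connection)
    {
    }

    /** @param list<array{name: string, payload: string}> $events */
    public function append(string $aggregateId, int $expectedVersion, array $events): void
    {
        $statement = $this->connection->prepare(
            'INSERT INTO event_store (aggregate_id, version, name, payload)
             VALUES (:aggregate_id, :version, :name, :payload)'
        );

        $this->connection->beginTransaction();

        try {
            foreach ($events as $offset => $event) {
                $statement->execute([
                    'aggregate_id' => $aggregateId,
                    'version' => $expectedVersion + $offset + 1,
                    'name' => $event['name'],
                    'payload' => $event['payload'],
                ]);
            }

            $this->connection->commit();
        } catch (\PDOException $exception) {
            $this->connection->rollBack();

            throw $exception; // a unique-constraint violation signals a competing update
        }
    }
}
```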