
Add consistency considerations to the docs #51

Open

codeliner opened this issue May 19, 2017 · 0 comments

Comments

@codeliner (Member)

From the chat:

@YuraLukashik Not exactly: rules are enforced by aggregates. Services may provide information to the aggregate so that it can make a decision. Pizza example again:
The user_pizza_count_guard just provides the number of pizzas ordered by a user, but the order aggregate decides whether it accepts an order or not.
Now the problem is that user_pizza_count_guard is eventually consistent, so it may provide outdated information and the aggregate makes a wrong decision.
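The guard/aggregate split described above could be sketched like this. This is a hedged illustration, not prooph code: `user_pizza_count_guard`, `PizzaWasOrdered`, and the order aggregate come from the chat, while the class shapes, the `MAX_OPEN_ORDERS` limit, and the event tuples are assumptions.

```python
# Sketch: the guard service only *provides information* (from an eventually
# consistent read model, so possibly stale); the aggregate *decides*.

MAX_OPEN_ORDERS = 3  # assumed business rule, not stated in the chat


class UserPizzaCountGuard:
    """Returns the number of open pizza orders for a user (eventually consistent)."""

    def __init__(self, read_model: dict):
        self._read_model = read_model  # user_id -> open order count

    def open_order_count(self, user_id: str) -> int:
        return self._read_model.get(user_id, 0)


class Order:
    """The order aggregate makes the decision; the guard only supplies data."""

    @staticmethod
    def place(user_id: str, guard: UserPizzaCountGuard) -> list:
        # The decision lives here, not in the guard. Events are plain tuples
        # for illustration.
        if guard.open_order_count(user_id) >= MAX_OPEN_ORDERS:
            return [("OrderWasRejected", user_id)]
        return [("PizzaWasOrdered", user_id)]
```

If the read model behind the guard lags behind, `open_order_count` returns a stale number and the aggregate can accept an order it should have rejected, which is exactly the problem the options below address.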
We have a couple of ways to solve that problem:

  1. This should not happen: it is so important that the aggregate always has the latest data available to make the correct decision that we need a larger aggregate. The drawbacks were already listed: locking issues, scaling/performance issues, and increased complexity of the aggregate due to too much responsibility.
  2. This may not happen: in 99% of cases user_pizza_count_guard has the latest data available, and if not, we have a second check in the read model (e.g. a unique constraint in a database) plus logging and notification in place, so that at least a programmer or admin is informed about the problem and can fix it manually or with a management UI etc.
  3. This can happen and is part of the business (preferred way to handle the problem): we don't model the unhappy paths with exceptions but with dedicated domain events. It could look something like this: the order aggregate accepts every order, no matter how many pizzas were ordered. In the frontend we try our best to prevent duplicate orders, but the backend accepts them. A process manager dispatches a second command PreparePizzaDelivery after each PizzaWasOrdered event. The pizza delivery aggregate now sees that two pizzas should be delivered to the same address at the same time, so it can stop one delivery and send an email to the user to either inform her about the duplicate order or even provide actions for the user to take, e.g. let the user decide which order should be canceled.

Alexander Miertsch @codeliner 18:50
4) Combination of 2) + 3): you have the user_pizza_count_guard in place in case a script kiddie wants to stress your backend system with a few hundred orders. You block them, but if duplicate orders make it into the system (in whatever way), you still have a path defined in your system to handle the situation. The latter is the important part. In a large and complex domain it is always a good thing if the system is aware of the unhappy paths and knows how to deal with them. Otherwise you block a lot of time of many people: users, support, business, PM, developers, ...
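The process-manager flow from option 3 could be sketched as below. Only PizzaWasOrdered and PreparePizzaDelivery appear in the chat; the class names, the address-based duplicate check, and the DuplicateOrderDetected / DeliveryWasPrepared events are illustrative assumptions.

```python
# Sketch of option 3: the backend accepts every order, and the unhappy path
# (a duplicate delivery) is modeled as a domain event instead of an exception.


class PizzaDelivery:
    """Delivery aggregate: detects two deliveries to the same address
    at the same time and stops one of them."""

    def __init__(self):
        self._pending = set()  # addresses with a delivery in progress

    def prepare(self, address: str) -> list:
        if address in self._pending:
            # Unhappy path as a dedicated domain event; a downstream handler
            # could email the user or offer a cancel action.
            return [("DuplicateOrderDetected", address)]
        self._pending.add(address)
        return [("DeliveryWasPrepared", address)]


class ProcessManager:
    """Dispatches a PreparePizzaDelivery command after each PizzaWasOrdered event."""

    def __init__(self, delivery: PizzaDelivery):
        self._delivery = delivery

    def on_event(self, event: tuple) -> list:
        name, address = event
        if name == "PizzaWasOrdered":
            # PreparePizzaDelivery is represented here as a direct method call.
            return self._delivery.prepare(address)
        return []
```

The key design point from the chat is that the duplicate is not an error condition: it produces a normal domain event, so the system itself knows the unhappy path and can react without blocking users, support, or developers.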
