This repository has been archived by the owner on Sep 26, 2019. It is now read-only.
@teozkr and I discussed this a bit today; it would be nice to bring it up for a larger discussion on Friday.
- Should we continue developing micro-/nanoservices, or should we aim for larger "domain"-grouped services? What are the difficulties and benefits of each strategy?
- What will actually need scaling? E.g. downloading images vs. normalizing data structures?
- How do we scale? Or, more basically, are we thinking about scaling at all at the moment?
- Do we aim to scale services by number of workers or by hardware? Perhaps differently for different services? How about Lambda or similar instead?
- How can we keep the development environment fast and clean without the current problems (compilation time, dependencies, overhead in Docker images)?
- How small should services be? At the current granularity the number of services will explode in the future, which creates a large overhead in Docker images, deployment handling, and so on.
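On the "what needs scaling" question, the two example workloads have different profiles, which a rough sketch can illustrate. Everything below is hypothetical (the function names and URLs are placeholders, not from our codebase); it just contrasts fanning out I/O-bound downloads across a worker pool against CPU-bound normalization, which would instead favour more processes or hardware.

```python
from concurrent.futures import ThreadPoolExecutor

def download_image(url):
    # I/O-bound: in practice this would be an HTTP request, so the
    # worker spends most of its time waiting -- threads scale well here.
    return f"bytes-of-{url}"

def normalize(record):
    # CPU-bound stand-in: a real normalizer would transform a data
    # structure; threads would not help, processes or hardware would.
    return record.strip().lower()

urls = ["img-1", "img-2", "img-3"]

# "Scaling by number of workers" for the I/O-bound step:
with ThreadPoolExecutor(max_workers=4) as pool:
    images = list(pool.map(download_image, urls))

records = [normalize(f"  {img}  ") for img in images]
print(records)  # → ['bytes-of-img-1', 'bytes-of-img-2', 'bytes-of-img-3']
```

The point for Friday: the right scaling knob (workers, processes, hardware, or Lambda) likely differs per service depending on which side of this divide it sits on.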
Just writing down some of the thoughts we discussed. Feel free to add more to this issue before Friday.
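On the Docker image overhead specifically: one mitigation worth discussing is multi-stage builds, so each per-service image ships only its binary rather than the whole toolchain. A minimal sketch, assuming a compiled (here Rust) service; the crate name `image-fetcher` and the base image tags are placeholders, not our actual setup.

```dockerfile
# Build stage: heavy toolchain, never shipped.
FROM rust:1.70 AS builder
WORKDIR /app
COPY . .
RUN cargo build --release

# Final stage: only the binary, which keeps the
# per-service image overhead down even as service
# count grows.
FROM debian:bookworm-slim
COPY --from=builder /app/target/release/image-fetcher /usr/local/bin/
ENTRYPOINT ["image-fetcher"]
```

This doesn't solve compilation time in development, but it does decouple image size from the number of services.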