GreenIT | V2 | V3 | V4 |
---|---|---|---|

Life cycle | Tiers | Responsible |
---|---|---|
2. Design | Datacenter | Software Architect/Developer |

Priority | Implementation difficulty | Ecological impact |
---|---|---|
3 | 3 | 4 |

Saved resources |
---|
Queries |
In most cases, application load is not constant over time. For example, there may be few or even no users connected at night. During these low-load hours, it is not necessary to maintain a large technical infrastructure.
The infrastructure size can be adjusted dynamically and automatically to fit the load, by pooling deployments (see rule #89 "Using virtualized servers"), especially in the cloud. This adjustment can follow a schedule (for example, shutting down at night applications that are only used during business hours) or react to the number of requests: when the load increases, new virtual machines or application instances (containers, processes, serverless functions, ...) are added, and they are decommissioned when the load decreases, as in the sketch below.
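The kind of decision a request-driven autoscaler makes can be sketched as follows. This is a minimal, hypothetical Python sketch: `current_request_rate` and `set_instance_count` are placeholders standing in for the monitoring and provisioning APIs of your platform, and the capacity figure is an assumption.

```python
# Hypothetical sketch of a request-driven scaling decision.
# current_request_rate() and set_instance_count() are placeholders for the
# monitoring and provisioning APIs of your platform or cloud provider.
import math

REQUESTS_PER_INSTANCE = 200  # assumed capacity of a single application instance
MIN_INSTANCES = 1            # always keep at least one instance running


def current_request_rate() -> float:
    """Placeholder: in a real system, query your monitoring stack."""
    return 42.0


def set_instance_count(count: int) -> None:
    """Placeholder: in a real system, call your orchestrator or cloud API."""
    print(f"scaling to {count} instance(s)")


def desired_instances(request_rate: float) -> int:
    """Instances needed to absorb the current request rate."""
    return max(MIN_INSTANCES, math.ceil(request_rate / REQUESTS_PER_INSTANCE))


# Executed periodically (for example every minute): instances are added when
# the load increases and decommissioned when it decreases.
set_instance_count(desired_instances(current_request_rate()))
```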
Tools like Docker can be used to create application images that can easily be deployed or decommissioned by orchestration tools like Kubernetes. Cloud providers generally offer services that take advantage of these tools.
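With Kubernetes, for example, this behaviour can be delegated to a HorizontalPodAutoscaler. Below is a minimal sketch using the official `kubernetes` Python client; the Deployment name `myapp`, the namespace and the thresholds are assumptions to adapt to your own context.

```python
# Minimal sketch, assuming a Kubernetes cluster and a Deployment named
# "myapp" already exist; uses the official "kubernetes" Python client.
from kubernetes import client, config

config.load_kube_config()  # read the local kubeconfig

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="myapp-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="myapp"
        ),
        min_replicas=1,                        # baseline kept during low load
        max_replicas=10,                       # ceiling during peaks
        target_cpu_utilization_percentage=70,  # add/remove pods around this CPU level
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```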
Testing and experimentation environments, in particular, can easily be turned off at night and on non-working days, as in the sketch below.
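As an illustration, the following sketch assumes the test environment runs as a Kubernetes Deployment named `myapp-test` in a `test` namespace, and that a scheduler (cron, CI job, ...) runs the script in the evening and runs it again in the morning with a non-zero replica count.

```python
# Minimal sketch: turn a test environment off by scaling its Deployment to 0.
# The Deployment name and namespace below are assumptions.
from kubernetes import client, config


def scale_deployment(name: str, namespace: str, replicas: int) -> None:
    """Set the replica count of a Deployment (0 = environment turned off)."""
    config.load_kube_config()
    apps = client.AppsV1Api()
    apps.patch_namespaced_deployment_scale(
        name=name,
        namespace=namespace,
        body={"spec": {"replicas": replicas}},
    )


# Evening job: free the resources used by the test environment.
scale_deployment("myapp-test", "test", replicas=0)
```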
Implementing an elastic architecture also creates cost savings, since fewer server resources are used and billed.
An elastic architecture has significant implementation costs in terms of workload and increased solution complexity. It is not essential if your application has a low or stable load.

The number of ... | is |
---|---|
resources deployed unnecessarily when the load is low | 0 |