This is only in the idea phase, but we would want a way to make HTTP POST requests idempotent, allowing for retry logic.
There's a somewhat conventional usage of the X-Request-ID header that can be used to provide idempotency on any request, provided that the server supports it. More is explained in https://stackoverflow.com/a/54356305
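For illustration only (this isn't tied to any existing wharf client code), a client could generate the request ID once and reuse it across retries, something like this Go sketch:

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"time"

	"github.com/google/uuid"
)

// postWithRetry sends the same POST request up to three times, reusing one
// request ID across attempts so an idempotency-aware server can deduplicate.
func postWithRetry(url string, body []byte) (*http.Response, error) {
	requestID := uuid.NewString() // generated once, reused on every retry
	var lastErr error
	for attempt := 0; attempt < 3; attempt++ {
		req, err := http.NewRequest(http.MethodPost, url, bytes.NewReader(body))
		if err != nil {
			return nil, err
		}
		req.Header.Set("X-Request-Id", requestID)
		resp, err := http.DefaultClient.Do(req)
		if err == nil && resp.StatusCode < http.StatusInternalServerError {
			return resp, nil
		}
		if err != nil {
			lastErr = err
		} else {
			resp.Body.Close()
			lastErr = fmt.Errorf("server error: %s", resp.Status)
		}
		time.Sleep(time.Second)
	}
	return nil, lastErr
}
```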
A basic implementation would be to add a RequestID field to builds. When wharf-api receives a POST /api/project/{projectId}/build, it would check recent builds to see whether the same request ID has already been used, and if so, return that build in the HTTP response instead of actually starting a new build.
Same goes for the other POST endpoints.
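A rough sketch of what that could look like on the handler side, assuming a Gin handler and a GORM-backed Build model with the new RequestID column; the model fields and setup here are made up for illustration, not the actual wharf-api code:

```go
package main

import (
	"errors"
	"net/http"

	"github.com/gin-gonic/gin"
	"gorm.io/gorm"
)

// Build is a stand-in for the real build model, with the proposed
// RequestID column added.
type Build struct {
	BuildID   uint `gorm:"primaryKey"`
	ProjectID uint
	RequestID string `gorm:"index"`
}

var db *gorm.DB // assumed to be initialized elsewhere

// postBuildHandler sketches the proposed behaviour for
// POST /api/project/{projectId}/build: if the X-Request-Id header matches a
// recent build, return that build instead of starting a new one.
func postBuildHandler(c *gin.Context) {
	requestID := c.GetHeader("X-Request-Id")
	if requestID != "" {
		var existing Build
		err := db.
			Where("request_id = ?", requestID).
			Order("build_id DESC").
			First(&existing).Error
		if err == nil {
			// Same request ID seen before: reuse the old build in the response.
			c.JSON(http.StatusOK, existing)
			return
		}
		if !errors.Is(err, gorm.ErrRecordNotFound) {
			c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
			return
		}
	}

	// Hypothetical: create and persist the new build, storing the request ID
	// so that a retry of the same request is recognized next time.
	build := Build{RequestID: requestID}
	if err := db.Create(&build).Error; err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}
	c.JSON(http.StatusOK, build)
}
```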
Alternatively, the wharf-api could hold a cache of recent request IDs and their HTTP responses in memory. But to support scaling the wharf-api, we would need some distributed cache such as Redis. Maybe still worth it? The implementation would be so much simpler and wouldn't bloat the database.
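A minimal sketch of that in-memory variant, assuming a plain map guarded by a mutex with a simple TTL; note this only works for a single wharf-api instance, which is exactly why scaling out would push us toward Redis or similar:

```go
package main

import (
	"sync"
	"time"
)

// cachedResponse is a stored copy of the HTTP response sent for a request ID.
type cachedResponse struct {
	statusCode int
	body       []byte
	storedAt   time.Time
}

// responseCache keeps recent request IDs and their responses in memory.
type responseCache struct {
	mu      sync.Mutex
	entries map[string]cachedResponse
	ttl     time.Duration
}

func newResponseCache(ttl time.Duration) *responseCache {
	return &responseCache{entries: map[string]cachedResponse{}, ttl: ttl}
}

// Get returns the cached response for a request ID, if one is still fresh.
func (c *responseCache) Get(requestID string) (cachedResponse, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	entry, ok := c.entries[requestID]
	if !ok || time.Since(entry.storedAt) > c.ttl {
		return cachedResponse{}, false
	}
	return entry, true
}

// Put stores the response that was sent for a request ID.
func (c *responseCache) Put(requestID string, statusCode int, body []byte) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.entries[requestID] = cachedResponse{statusCode, body, time.Now()}
}
```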