the problem

deployments to our infrastructure sometimes fail unexpectedly. one theory is that when we pull moderately sized docker images multiple times per day, the github container registry starts aggressively throttling download speeds, which causes the github actions deployment job to time out.
one possible solution to this would be to avoid including image assets in the docker image, thus drastically reducing its size.
we already exclude unoptimized source images for content collection entries from the resulting docker image. nevertheless, for the current (astro based) clariah.at website the docker image is still 1.52GB, and most of the space taken up by images comes from the clariah "project"'s additionalImages attachments:
```sh
/app/client/_astro $ find . -type f -regex '.*\(jpg\|jpeg\|png\)' -exec du -ch {} + | grep total$
684.2M total
/app/client/_astro $ find . -type f -name "*.webp" -exec du -ch {} + | grep total$
89.4M total
```
(webp are optimized content images, png/jpg are the project attachments, which are copied over as-is)
approach
keystatic by default saves images to the github repository.
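for reference, a minimal sketch of what an image field looks like with the default github storage (collection name, repo and paths below are made-up placeholders, not our actual config):

```ts
// keystatic.config.ts (sketch) - collection name, repo and paths are placeholders
import { config, collection, fields } from "@keystatic/core";

export default config({
  storage: { kind: "github", repo: "acdh-oeaw/example-website" },
  collections: {
    projects: collection({
      label: "Projects",
      slugField: "title",
      path: "content/projects/*",
      schema: {
        title: fields.slug({ name: { label: "Title" } }),
        // the image is committed to the github repository next to the entry
        image: fields.image({
          label: "Image",
          directory: "content/projects/images",
          publicPath: "/images/projects/",
        }),
      },
    }),
  },
});
```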
there is a cloudImage keystatic field, which is rather simplistic: it basically allows providing urls to images stored somewhere else instead of uploading images to github (source).
ideally, content creators would be able to upload images to an s3 bucket at https://s3.acdh-ch-dev.oeaw.ac.at/ via the keystatic cms - which would require developing our own custom field widget and uploading via a next.js api route.
alternatively, we could try to adapt the built-in cloudImage field, and require content creators to upload images via a separate media-library ui on top of s3.
detailed implementation
s3 object store
the keystatic cms client-side app needs a way to upload images to an acdh-ch ceph/s3 bucket. since the bucket only allows authorized requests, we need a next.js api route (TODO: check if we can use server actions in keystatic) to proxy requests to the s3 api. optionally, in addition to uploading images, we may also need api routes to delete an image and to list all images in the s3 bucket.
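a minimal sketch of what the upload route could look like, assuming the app router and the aws sdk; bucket, endpoint and env variable names are placeholders, and the auth check described below is omitted:

```ts
// app/api/images/route.ts (sketch) - env variable names and the bucket are
// placeholders; error handling and the auth check are reduced to a minimum.
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { NextResponse } from "next/server";

const s3 = new S3Client({
  region: "us-east-1", // ceph ignores the region but the sdk requires one
  endpoint: process.env.S3_ENDPOINT, // e.g. https://s3.acdh-ch-dev.oeaw.ac.at
  forcePathStyle: true,
  credentials: {
    accessKeyId: process.env.S3_ACCESS_KEY_ID!,
    secretAccessKey: process.env.S3_SECRET_ACCESS_KEY!,
  },
});

export async function POST(request: Request) {
  // TODO: verify the keystatic github token cookie here (see "auth" below)
  const formData = await request.formData();
  const file = formData.get("file");
  if (!(file instanceof File)) {
    return NextResponse.json({ message: "Missing file" }, { status: 400 });
  }
  const objectName = `${crypto.randomUUID()}-${file.name}`;
  await s3.send(
    new PutObjectCommand({
      Bucket: process.env.S3_BUCKET!,
      Key: objectName,
      Body: Buffer.from(await file.arrayBuffer()),
      ContentType: file.type,
    }),
  );
  // return the object name so the keystatic field can store it in mdx
  return NextResponse.json({ objectName });
}
```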
since the acdh-ch s3 api does not allow direct public access to uploaded objects, we generate signed urls with the acdh-ch imgproxy service, which is able to talk to s3. the signed urls can safely be used by any client-side code (e.g. referenced in <img src="">).
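signing an imgproxy url is just an hmac-sha256 over the salt plus the request path; a sketch (base url, key and salt env variable names are assumptions):

```ts
// sketch: build a signed imgproxy url for an object stored in s3.
// IMGPROXY_KEY and IMGPROXY_SALT are the hex-encoded values imgproxy is
// configured with; IMGPROXY_BASE_URL is the public imgproxy endpoint.
import { createHmac } from "node:crypto";

function signImgproxyUrl(objectName: string, width: number): string {
  const key = Buffer.from(process.env.IMGPROXY_KEY!, "hex");
  const salt = Buffer.from(process.env.IMGPROXY_SALT!, "hex");
  // processing options + s3 source url, e.g. /rs:fit:800:0/plain/s3://bucket/object@webp
  const path = `/rs:fit:${width}:0/plain/s3://${process.env.S3_BUCKET}/${objectName}@webp`;
  const signature = createHmac("sha256", key)
    .update(salt)
    .update(path)
    .digest("base64url");
  return `${process.env.IMGPROXY_BASE_URL}/${signature}${path}`;
}

// usage: <img src={signImgproxyUrl("my-image.png", 800)} />
```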
auth: keystatic manages auth (github oauth) on its own; the rest of the next.js app has no knowledge of it. since keystatic saves the github access token in a cookie, we can simply read it in the api route and confirm with the github api that the token belongs to a user account with write permissions to the repository. see the proof of concept: https://github.com/acdh-oeaw/template-app-next/tree/variant/with-image-upload
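a sketch of that permission check (the cookie name and repo are assumptions - the proof of concept has the actual details):

```ts
// sketch: confirm that the github token stored by keystatic belongs to a user
// with write access to the content repository.
import { cookies } from "next/headers";

const REPO = "acdh-oeaw/example-website"; // placeholder

export async function hasWriteAccess(): Promise<boolean> {
  const token = (await cookies()).get("keystatic-gh-access-token")?.value;
  if (!token) return false;

  // the `permissions` object is only returned for the authenticated user
  const response = await fetch(`https://api.github.com/repos/${REPO}`, {
    headers: {
      Authorization: `Bearer ${token}`,
      Accept: "application/vnd.github+json",
    },
  });
  if (!response.ok) return false;

  const repo = (await response.json()) as { permissions?: { push?: boolean } };
  return repo.permissions?.push === true;
}
```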
open questions

keystatic image field content: what should the keystatic field save to mdx (i.e. what should the api route return)? probably the object name (i.e. the s3 identifier), because from the imgproxy url alone there is no way to identify a resource, for example to delete it. also, should we expose all imgproxy image optimisation features and sign urls on the fly, or should we always return a fixed preset like { sm, md, lg }, which could be generated once when the image is uploaded? note that treating signed urls as ephemeral means we can easily rotate signing keys.
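for illustration, one possible shape for what the field could save (and the api route could return) if we go with the fixed-preset option; widths, names and the import path are assumptions:

```ts
// sketch: store the s3 object name (the stable identifier) in mdx, plus a fixed
// preset of signed urls generated once at upload time.
import { signImgproxyUrl } from "./imgproxy"; // the helper sketched above (path is hypothetical)

interface ImageRef {
  /** s3 object name - needed later to delete or re-sign the image */
  objectName: string;
  alt: string;
  /** ephemeral signed urls; regenerated whenever the signing key rotates */
  srcset: { sm: string; md: string; lg: string };
}

function buildImageRef(objectName: string, alt: string): ImageRef {
  return {
    objectName,
    alt,
    srcset: {
      sm: signImgproxyUrl(objectName, 640),
      md: signImgproxyUrl(objectName, 1024),
      lg: signImgproxyUrl(objectName, 1920),
    },
  };
}
```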
x-ref #15