
Releases: alephdata/servicelayer

v1.23.2

23 Nov 12:14
8fcdc9f

What's Changed

  • Check redis set membership properly by @stchris in #211
    This fixes a performance regression that was especially noticeable with more than 10,000 jobs queued; the pattern behind the fix is sketched after this list.
  • Use trusted publishing for PyPI releases
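
A minimal sketch of the pattern behind the membership fix, assuming a redis-py client; the key and task_id below are placeholders, not the exact code from #211:

```python
import redis

r = redis.Redis()

# Illustrative key holding the task_ids of a large pending set.
key = "tq:qdj:example:pending"

# Slow: SMEMBERS copies the entire set to the client before the membership
# test, which degrades badly once the set holds tens of thousands of entries.
is_member_slow = b"task-123" in r.smembers(key)

# Fast: SISMEMBER performs the membership check on the Redis server and
# returns a single integer result.
is_member_fast = r.sismember(key, "task-123")
```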

Full Changelog: v1.23.0...v1.23.2

v1.23.1

07 Nov 09:55
050fc2e

What's Changed

Bugfixes

Full Changelog: v1.23.0...v1.23.1

v1.23.0

09 Oct 12:17
131171c

⚠️ This release contains breaking changes

The custom messaging queue used by Aleph has been replaced with RabbitMQ. As of this version of servicelayer, Aleph uses a persistent messaging queue. Since making these changes, we have seen improvements in stability, predictability, and ease of debugging.

The implementation uses the default, direct exchange. RabbitMQ also provides a browser-accessible management interface for monitoring queue activity, provided the corresponding port is exposed.
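
As a hedged illustration of this messaging model (not servicelayer's actual code), publishing a task to the default direct exchange with pika looks roughly like this; the queue name and message body are made up:

```python
import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# With the default ("") direct exchange, a message is routed to the queue
# whose name exactly matches the routing key.
channel.queue_declare(queue="ingest", durable=True)
channel.basic_publish(
    exchange="",
    routing_key="ingest",
    body=json.dumps({"task_id": "task-123", "collection_id": 1}),
    properties=pika.BasicProperties(delivery_mode=2),  # persist the message
)
connection.close()
```

The management interface is typically served on port 15672 of the RabbitMQ container when that port is exposed.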

To populate the System Status view in Aleph, Redis is used to independently track the state of tasks. ⚠️ The structure of the status API response has changed in a breaking way: we no longer track job_ids, only tasks (task_ids). The structure of the Redis keys has also changed as follows:

Redis keys used by the Dataset object:

  • tq:qdatasets: set of all collection_ids of active datasets (a dataset is considered active when it has either running or pending tasks)
  • tq:qdj:<dataset>:taskretry:<task_id>: the number of times task_id was retried

All of the following keys refer to task_ids or to statistics about the tasks of a given dataset (collection_id):

  • tq:qdj:<dataset>:finished: number of tasks that have been marked as "Done" and for which an acknowledgement has also been sent by the worker over RabbitMQ.
  • tq:qdj:<dataset>:running: set of all task_ids of tasks currently running. A "Running" task is a task which has been checked out, and is being processed by a worker.
  • tq:qdj:<dataset>:pending: set of all task_ids of tasks currently pending. A "Pending" task has been added to a RabbitMQ queue (via a basic_publish call) by a producer (an API call, a UI action etc.).
  • tq:qdj:<dataset>:start: the UTC timestamp at which either the first task_id was added to a RabbitMQ queue (i.e. the first Pending task) or the first task_id was checked out (i.e. the first Running task). The start key is updated when the first task is handed to a worker.
  • tq:qdj:<dataset>:last_update: the UTC timestamp of the latest change to the state of tasks running for a certain collection_id. It is set whenever a task becomes Pending, Running, or Done, or is canceled.
  • tq:qds:<dataset>:<stage>: a set of all task_ids that are either running or pending, for a certain stage.
  • tq:qds:<dataset>:<stage>:finished: number of tasks that have been marked as "Done" for a certain stage.
  • tq:qds:<dataset>:<stage>:running: set of all task_ids of tasks currently running for a certain stage.
  • tq:qds:<dataset>:<stage>:pending: set of all task_ids of tasks currently pending for a certain stage.
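
A minimal sketch of how a status summary for one dataset could be assembled from these keys, assuming a redis-py client; the helper name and the placeholder dataset id are illustrative, not servicelayer's actual status code:

```python
import redis

r = redis.Redis(decode_responses=True)

def dataset_status(dataset: str) -> dict:
    """Summarise task state for one dataset using the key layout above."""
    prefix = f"tq:qdj:{dataset}"
    return {
        "is_active": r.sismember("tq:qdatasets", dataset),
        "pending": r.scard(f"{prefix}:pending"),
        "running": r.scard(f"{prefix}:running"),
        "finished": int(r.get(f"{prefix}:finished") or 0),
        "start": r.get(f"{prefix}:start"),
        "last_update": r.get(f"{prefix}:last_update"),
    }

print(dataset_status("123"))
```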

Tasks are assigned a random priority before being added to the appropriate queues to ensure a fair distribution of execution. The current implementation also allows admin users of Aleph to choose to assign a task either the global minimum or the global maximum priority.
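
A hedged sketch of this priority behaviour, assuming pika and a queue declared with an x-max-priority argument; the queue name, priority bounds, and helper function are illustrative:

```python
import random
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# Per-message priorities only take effect on queues declared with x-max-priority.
channel.queue_declare(queue="ingest", durable=True, arguments={"x-max-priority": 10})

def publish_task(body: bytes, lowest: bool = False, highest: bool = False) -> None:
    if lowest:
        priority = 0                      # global minimum priority
    elif highest:
        priority = 10                     # global maximum priority
    else:
        priority = random.randint(1, 9)   # random priority for fair scheduling
    channel.basic_publish(
        exchange="",
        routing_key="ingest",
        body=body,
        properties=pika.BasicProperties(priority=priority),
    )
```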

What's Changed

Dependency upgrades

Full Changelog: v1.22.1...v1.23.0

v1.22.2

22 Apr 12:43
252178b

⚠️ This release fixes a potential security vulnerability. We strongly encourage you to use this release and disregard previous ones. ⚠️

This release includes a fix for the archive functionality in servicelayer. Previously, the generate_url methods of the Google Cloud Storage archive adapter and the AWS S3 archive adapter were generating URLs instructing AWS S3 and Google Cloud Storage to send a Content-Disposition: inline header in the response.

When sending this header, most browsers will automatically open the file if the file’s MIME type is supported by the browser. This may not be desired in some cases, for example when downloading files from untrustworthy sources.

Starting with this version of servicelayer, the generated URLs will instead instruct AWS S3 and Google Cloud Storage to send a Content-Disposition: attachment header. Browsers won’t open files without user interaction if this header is set.
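
As a hedged illustration of the new behaviour (not servicelayer's actual generate_url implementation), a presigned S3 URL that forces a download rather than inline display can be produced with boto3 like so; bucket, key, and filename are placeholders:

```python
import boto3

s3 = boto3.client("s3")

# ResponseContentDisposition controls the Content-Disposition header S3 sends
# back; "attachment" stops the browser from opening the file automatically.
url = s3.generate_presigned_url(
    "get_object",
    Params={
        "Bucket": "my-archive-bucket",
        "Key": "documents/report.pdf",
        "ResponseContentDisposition": 'attachment; filename="report.pdf"',
    },
    ExpiresIn=3600,
)
print(url)
```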

v1.22.1

21 Nov 13:37
bd68c29

What's Changed

Full Changelog: v1.22.0...v1.22.1

v1.22.0

13 Oct 11:24
fddf4d7

What's Changed

Dependency upgrades

New Contributors

Full Changelog: v1.21.2...v1.22.0

v1.21.0

14 Jun 17:28
eb343c8

What's Changed

  • Add Sentry support to servicelayer workers by @stchris in #88

    This release adds support for sending error tracebacks to sentry.io (or a self-hosted instance). This is controlled by two environment variables: SENTRY_DSN and SENTRY_ENVIRONMENT; a sketch of how these are wired up follows after this list. Note that you also need to install the sentry_sdk package yourself.

  • Add and enforce linter (ruff) and code formatter (black) by @stchris in #89

    This updates the development environment and CI configuration to be closer to what we have in Aleph.
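
A minimal sketch of wiring the two environment variables into sentry_sdk, assuming the package has been installed separately as noted above; the example exception is illustrative:

```python
import os
import sentry_sdk

# With an unset DSN, sentry_sdk.init() is effectively a no-op, so this is
# safe to call unconditionally.
sentry_sdk.init(
    dsn=os.environ.get("SENTRY_DSN"),
    environment=os.environ.get("SENTRY_ENVIRONMENT"),
)

try:
    raise RuntimeError("worker task failed")
except RuntimeError:
    # The traceback is sent to sentry.io or the self-hosted instance the
    # DSN points at.
    sentry_sdk.capture_exception()
```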

Full Changelog: v1.20.7...v1.21.0

v1.20.7

02 May 12:43
7ce4108

What's Changed

  • Bump fakeredis to 2.11.2
  • Add release steps to README

New Contributors

Full Changelog: v1.20.6...v1.20.7

v1.20.6

25 Apr 14:20
ec9a07c

What's Changed

New Contributors

Full Changelog: v1.20.5...v1.20.6

v1.20.5

29 Mar 13:11

What's Changed

Full Changelog: v1.20.4...v1.20.5