
Convert entire image proxy route async #3388

Merged

merged 6 commits into main from add/async-image-proxy on Dec 3, 2023

Conversation

@sarayourfriend (Collaborator) commented Nov 22, 2023

Fixes

Fixes #2789 by @sarayourfriend

Description

This PR converts image_proxy::get to an asynchronous function. This makes the entire thumbnails route fully asynchronous, meaning we finally start to gain benefits from running the application under ASGI. We should see increased thumbnail throughput after this (though it might not be noticeable because we probably won't receive more traffic), and as a result we should be able to lower the number of thumbnail tasks in the meantime.

We should assume this will increase CPU and memory usage, however, as each thumbnails worker will now be able to handle multiple requests concurrently, which will require more context switching (read: increased memory). This is really the first "real" async benefit we get, though, because the thumbnails route is heavily I/O bound: the vast majority of the time our servers spend handling thumbnail requests is spent merely waiting for an upstream response! That means they're wasting cycles that could be spent preparing and serving additional requests to the proxy.
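To illustrate the shape of the change (this is a rough sketch, not the actual Openverse implementation; the function name, the session parameter, and upstream_url are all placeholders), an async upstream fetch looks roughly like this:

# Rough sketch only, not the actual image_proxy code; names are illustrative.
import aiohttp


async def fetch_upstream(session: aiohttp.ClientSession, upstream_url: str) -> bytes:
    # While this await is pending, the worker's event loop is free to start
    # handling other thumbnail requests instead of blocking a whole thread.
    async with session.get(upstream_url) as response:
        response.raise_for_status()
        return await response.read()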

This PR will finalise the Django ASGI project as far as the currently created issues go. However, I want to clarify something about the project, and have done so in the project thread.

Testing Instructions

Everything should continue to work as before. You should be able to follow the testing instructions in #3020, but on top of that you should also confirm that Redis tallying, in particular the "fire and forget" parts, is still working. You can do this via the Redis CLI: just exec cache redis-cli -n 3 and read the relevant keys before/after a request to ensure they've incremented as expected.
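If you'd rather check the tallies from a Django shell instead of the Redis CLI, something like the following should also work (a sketch; the "*" key pattern is illustrative and will list every key in the tallies database, not just the tally keys):

# Sketch for inspecting tally keys from a Django shell; the key pattern is
# illustrative and may not match the real tally key names exactly.
import django_redis

tallies = django_redis.get_redis_connection("tallies")
for key in tallies.scan_iter("*"):
    print(key, tallies.get(key))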

Check out the new implementation and ask questions as usual. Confirm that the unit test changes are only enough to make the existing unit tests work with the async implementation, and that they do not change the actual function of the tests (i.e., the overall behaviour has stayed the same because the unit tests still test the same outcomes).

Checklist

  • My pull request has a descriptive title (not a vague title like Update index.md).
  • My pull request targets the default branch of the repository (main) or a parent feature branch.
  • My commit messages follow best practices.
  • My code follows the established code style of the repository.
  • I added or updated tests for the changes I made (if applicable).
  • I added or updated documentation (if applicable).
  • I tried running the project locally and verified that there are no visible errors.
  • [N/A] I ran the DAG documentation generator (if applicable).

Developer Certificate of Origin

Version 1.1

Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
1 Letterman Drive
Suite D4700
San Francisco, CA, 94129

Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.


Developer's Certificate of Origin 1.1

By making a contribution to this project, I certify that:

(a) The contribution was created in whole or in part by me and I
    have the right to submit it under the open source license
    indicated in the file; or

(b) The contribution is based upon previous work that, to the best
    of my knowledge, is covered under an appropriate open source
    license and I have the right under that license to submit that
    work with modifications, whether created in whole or in part
    by me, under the same open source license (unless I am
    permitted to submit under a different license), as indicated
    in the file; or

(c) The contribution was provided directly to me by some other
    person who certified (a), (b) or (c) and I have not modified
    it.

(d) I understand and agree that this project and the contribution
    are public and that a record of the contribution (including all
    personal information I submit with it, including my sign-off) is
    maintained indefinitely and may be redistributed consistent with
    this project or the open source license(s) involved.

@github-actions bot added the 🧱 stack: api Related to the Django API label Nov 22, 2023
@openverse-bot added 🟧 priority: high Stalls work on the project or its dependents 💻 aspect: code Concerns the software code in the repository 🧰 goal: internal improvement Improvement that benefits maintainers, not users labels Nov 22, 2023
@sarayourfriend mentioned this pull request Nov 22, 2023
@sarayourfriend marked this pull request as ready for review November 22, 2023 15:08
@sarayourfriend requested a review from a team as a code owner November 22, 2023 15:08
Comment on lines +23 to +27
@pytest.fixture(scope="session")
def session_loop() -> asyncio.AbstractEventLoop:
    loop = asyncio.new_event_loop()
    yield loop
    loop.close()
Collaborator Author

This can't use get_new_loop because get_new_loop is intentionally function scoped, whereas the session loop, as its name implies, is scoped to the session, and therefore cannot depend on the function scoped fixture.

Comment on lines +61 to +76
@pytest.fixture
def photon_get(session_loop):
    """
    Run ``image_proxy.get`` and wait for all tasks to finish.
    """

    def do(*args, **kwargs):
        try:
            res = session_loop.run_until_complete(_photon_get(*args, **kwargs))
            return res
        finally:
            tasks = asyncio.all_tasks(session_loop)
            for task in tasks:
                session_loop.run_until_complete(task)

    yield do
Collaborator Author

This is a side effect of using fire_and_forget, which is a benefit of async code, but does make testing slightly more complex. We need to ensure those tasks settle before asserting anything about the side effects of the function.

We can easily abstract this into a re-usable pattern for the future, something like a task_aware_async_to_sync that accepts a function and returns a session-looped sync version of it for testing. The abstraction isn't necessary now, so I've avoided it, but just wanted to note that fire_and_forget makes this pattern necessary, and we'll need to reuse it anywhere we're testing code paths that utilise that behaviour.
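For the record, a sketch of what that hypothetical task_aware_async_to_sync helper could look like (not something this PR adds; it just mirrors the fixture above):

# Hypothetical helper, not part of this PR: wrap an async function so tests can
# call it synchronously and still observe the side effects of fired-off tasks.
import asyncio
import functools


def task_aware_async_to_sync(func, loop: asyncio.AbstractEventLoop):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return loop.run_until_complete(func(*args, **kwargs))
        finally:
            # Drain any fire-and-forget tasks before returning so assertions
            # about their side effects are reliable.
            for task in asyncio.all_tasks(loop):
                loop.run_until_complete(task)

    return wrapper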

Member

With fire_and_forget do you mean the use of the do_not_wait_for function or is it something else?

On the other hand, it's great that these tests didn't need to change too much with the fixture! 💯

Collaborator Author

Yes, I meant do_not_wait_for. I wrote this comment before Olga and I had decided on the better name for it. Everything is still accurate in the comment aside from the name.
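For anyone unfamiliar with the pattern, a fire-and-forget helper in asyncio generally looks something like this (a generic sketch, not necessarily the actual do_not_wait_for implementation):

# Generic fire-and-forget sketch; not necessarily the real do_not_wait_for.
import asyncio

# Keep strong references so the tasks aren't garbage collected mid-flight.
_background_tasks: set[asyncio.Task] = set()


def do_not_wait_for(coro) -> None:
    task = asyncio.ensure_future(coro)
    _background_tasks.add(task)
    task.add_done_callback(_background_tasks.discard)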

Contributor

@obulat left a comment

Tested well locally, all Redis keys are updated correctly, and the search keeps working as expected. Awesome work!

api/api/utils/image_proxy/__init__.py (outdated review comment, resolved)
@@ -88,9 +122,10 @@ def get(

logger = parent_logger.getChild("get")
tallies = django_redis.get_redis_connection("tallies")
tallies_incr = sync_to_async(tallies.incr)
Contributor

Sorry, I'm missing a lot of sync-to-async context since this is the first such PR that I'm reviewing. Why do we not use async def, but need a sync_to_async for tallies?

Collaborator Author

We are using async def, unless I unintentionally missed it somewhere. async def is only necessary when awaiting, though.

sync_to_async is necessary to prevent context issues in the Redis client, which assumes that only one thread is ever touching the connection pool. sync_to_async creates a safe context for such synchronous functions. This is relevant to the Django ORM as well. From Django's documentation:

The reason this is needed in Django is that many libraries, specifically database adapters, require that they are accessed in the same thread that they were created in. Also a lot of existing Django code assumes it all runs in the same thread, e.g. middleware adding things to a request for later use in views.

As an additional point of clarification, Django's own async cache support just wraps the synchronous versions of functions with sync_to_async, rather than using fully async clients. In the future we could switch to an asynchronous Redis client (like https://github.com/Andrew-Chen-Wang/django-async-redis, but I'm not sure that it's 100% stable), but for now it's not necessary and Django's own caching behaviour would still use the sync_to_async adapter approach for this anyway.

So the reason here is to create a new async version of tallies.incr that we can call later with await.
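In other words, simplified from the diff above (the wrapper function and the key name here are just illustrative), the pattern is:

# Simplified sketch of the pattern from the diff above; names are illustrative.
from asgiref.sync import sync_to_async

import django_redis


async def increment_tally(key: str) -> int:
    tallies = django_redis.get_redis_connection("tallies")
    # Wrap the blocking Redis call so it can be awaited without violating the
    # client's assumption that a single thread touches the connection pool.
    tallies_incr = sync_to_async(tallies.incr)
    return await tallies_incr(key)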

Collaborator Author

Let me know if I answered your question or if I misunderstood. The Django documentation on async adaption is a good read and important for understanding how and when it's necessary to adapt synchronous functions to an async context.

api/api/utils/image_proxy/__init__.py (review comment, resolved)
@openverse-bot
Collaborator

Based on the high urgency of this PR, the following reviewers are being gently reminded to review this PR:

@krysal
This reminder is being automatically generated due to the urgency configuration.

Excluding weekend days[1], this PR was ready for review 3 day(s) ago. PRs labelled with high urgency are expected to be reviewed within 2 weekday(s)[2].

@sarayourfriend, if this PR is not ready for a review, please draft it to prevent reviewers from getting further unnecessary pings.

Footnotes

  1. Specifically, Saturday and Sunday.

  2. For the purpose of these reminders we treat Monday - Friday as weekdays. Please note that the operation that generates these reminders runs at midnight UTC on Monday - Friday. This means that depending on your timezone, you may be pinged outside of the expected range.

Member

@krysal left a comment

Everything works the same for me in appearance, which is excellent! 😄 Exciting movement!

api/api/utils/image_proxy/__init__.py (outdated review comment, resolved)
Comment on lines +61 to +76
@pytest.fixture
def photon_get(session_loop):
"""
Run ``image_proxy.get`` and wait for all tasks to finish.
"""

def do(*args, **kwargs):
try:
res = session_loop.run_until_complete(_photon_get(*args, **kwargs))
return res
finally:
tasks = asyncio.all_tasks(session_loop)
for task in tasks:
session_loop.run_until_complete(task)

yield do
Copy link
Member

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

With fire_and_forget do you mean the use of the do_not_wait_for function or is it something else?

On the other hand, it's great that these tests didn't need to change too much with the fixture! 💯

@sarayourfriend merged commit 401cb8e into main Dec 3, 2023
43 checks passed
@sarayourfriend deleted the add/async-image-proxy branch December 3, 2023 22:02

sentry-io bot commented Dec 4, 2023

Suspect Issues

This pull request was deployed and Sentry observed the following issues:

  • ‼️ ClientOSError: [Errno 104] Connection reset by peer /v1/images/{identifier}/thumb/ View Issue
  • ‼️ TimeoutError /v1/images/{identifier}/thumb/ View Issue
  • ‼️ TimeoutError /v1/images/{identifier}/thumb/ View Issue
  • ‼️ ClientPayloadError: Response payload is not completed /v1/images/{identifier}/thumb/ View Issue

Did you find this useful? React with a 👍 or 👎

@sarayourfriend
Collaborator Author

That Sentry issue is fine. It only looks like a new issue because requests threw an OSError and aiohttp throws a ClientOSError, so our error handling, which reports the first occurrence of an error in the proxy handler and then only every nth occurrence after that, thinks they're all new errors.
