
fix: refresh token updates auth headers and sets realtime auth #1171

Open
Kabya002 wants to merge 3 commits into main from fix/auth-token-refresh
Conversation

@Kabya002 commented Jul 9, 2025

Summary

This PR fixes a critical bug where the Supabase client failed to update the Authorization header after the access token was refreshed, causing privileged requests such as supabase.auth.admin.list_users() to fail with a 403 error.

The bug was tracked in #1143.


What Was the Bug?

  • After the access token was refreshed (via on_auth_state_change), the Authorization header in client.options.headers was not being updated.
  • As a result, internal services (PostgREST, Storage, Functions, Realtime) continued using a stale or incorrect token.
  • Realtime connections were not re-authenticated.
  • Admin API calls, which rely on an up-to-date Authorization header, failed (a hypothetical reproduction follows this list).
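
For context, a hypothetical reproduction of the failure mode might look like this (URL, key, and credentials are placeholders; the exact trigger depends on when the refresh fires):

```python
# Hypothetical reproduction (placeholder URL, key, and credentials).
from supabase import create_client

supabase = create_client("https://xyz.supabase.co", "service-role-key")
supabase.auth.sign_in_with_password({"email": "user@example.com", "password": "secret"})

# ... time passes and the session's access token is refreshed
# (on_auth_state_change fires with TOKEN_REFRESHED) ...

# Before this fix, supabase.options.headers["Authorization"] still held the
# previous token, so services built from those headers reused it and
# privileged calls such as this one could fail with a 403:
supabase.auth.admin.list_users()
```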

What This Fix Does

  • Implements _listen_to_auth_events() on both SyncClient and AsyncClient.
  • Subscribes to auth.on_auth_state_change(...) in the client constructor.
  • On SIGNED_IN, TOKEN_REFRESHED, or SIGNED_OUT:
    • Updates the Authorization header with the new access token.
    • Resets PostgREST, Storage, and Functions clients to ensure fresh instantiation.
    • Re-authenticates the Realtime client using realtime.set_auth(...) (see the sketch after this list).
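
A minimal sketch of the shape of this listener, assuming the client keeps its headers in options.headers, builds its service clients lazily, and exposes the realtime client as realtime. Attribute and method names here are illustrative, not necessarily the exact ones used in supabase-py:

```python
# Minimal sketch only; attribute and method names are assumptions.
from types import SimpleNamespace
from typing import Optional


class Client:
    def __init__(self, supabase_key: str):
        self.supabase_key = supabase_key
        self.options = SimpleNamespace(headers={"Authorization": f"Bearer {supabase_key}"})
        # Service clients are built lazily from self.options.headers.
        self._postgrest = None
        self._storage = None
        self._functions = None
        self.realtime = None  # assigned once the realtime client is created
        # In the real client the subscription happens in the constructor:
        # self.auth.on_auth_state_change(self._listen_to_auth_events)

    def _listen_to_auth_events(self, event: str, session) -> None:
        if event not in ("SIGNED_IN", "TOKEN_REFRESHED", "SIGNED_OUT"):
            return
        # Drop the lazily-built service clients so the next access recreates
        # them with the fresh Authorization header.
        self._postgrest = None
        self._storage = None
        self._functions = None
        access_token: Optional[str] = session.access_token if session else None
        self._create_auth_header(access_token or self.supabase_key)

    def _create_auth_header(self, token: str) -> None:
        new_header = f"Bearer {token}"
        if self.options.headers.get("Authorization") == new_header:
            return  # no-op: the header already carries this token
        self.options.headers["Authorization"] = new_header
        if self.realtime is not None:
            # Re-authenticate the live realtime connection with the new token.
            self.realtime.set_auth(token)
```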

Tests Added

I added new unit tests in both sync and async test suites that verify:

  • The Authorization header updates correctly after a token refresh.
  • Internal service clients (postgrest, storage, functions) are set to None so they reinitialize with the correct headers.
  • The Realtime client receives the new token via set_auth(...).
  • No-op behavior when the header is already using the original service role key (prevents unnecessary work).

Test Files:

  • tests/_sync/test_auth_refresh_sync.py
  • tests/_async/test_auth_refresh_async.py

All tests pass.
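
A condensed, pytest-style illustration of what those tests check, written against the sketch Client above rather than the actual test files (which may differ in detail):

```python
# Illustrative only; the real tests live in tests/_sync/test_auth_refresh_sync.py
# and tests/_async/test_auth_refresh_async.py.
from types import SimpleNamespace
from unittest.mock import MagicMock


def test_token_refreshed_updates_header_and_realtime():
    client = Client("service-role-key")
    client.realtime = MagicMock()
    client._postgrest = client._storage = client._functions = object()
    session = SimpleNamespace(access_token="new-user-token")

    client._listen_to_auth_events("TOKEN_REFRESHED", session)

    assert client.options.headers["Authorization"] == "Bearer new-user-token"
    assert client._postgrest is None and client._storage is None and client._functions is None
    client.realtime.set_auth.assert_called_once_with("new-user-token")


def test_no_op_when_header_already_holds_the_service_role_key():
    client = Client("service-role-key")
    client.realtime = MagicMock()

    # SIGNED_OUT with no session falls back to the original key, which the
    # header already carries, so the realtime client is not re-authenticated.
    client._listen_to_auth_events("SIGNED_OUT", None)

    assert client.options.headers["Authorization"] == "Bearer service-role-key"
    client.realtime.set_auth.assert_not_called()
```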


Closes

@silentworks
Contributor

Hi @Kabya002 thanks for your first contribution. Can you provide some unit tests around the issue that this is fixing please?

@Kabya002 force-pushed the fix/auth-token-refresh branch from e22cf0a to dfd22c0 on July 16, 2025 at 17:53
@Kabya002
Author

  • Added tests for both sync and async client token refresh behavior.
  • Verified the tests pass locally and in CI.

Let me know if you’d like changes or more coverage. Thanks for the review! @silentworks

@Kabya002
Author

Thanks for the note, @silentworks!
I’ve updated my understanding: auth.admin.list_users() should always be called with the service_role key, so it shouldn't ever be affected by stale user tokens.
This PR focuses solely on resolving stale Authorization headers for user-authenticated clients after a token refresh via on_auth_state_change.
The admin client logic remains untouched.
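
A hedged illustration of that separation (placeholder URL and key): admin endpoints are meant to be called from a client built with the service_role key, independent of any user session refresh on a separate user-facing client.

```python
from supabase import create_client

# Admin operations go through a client constructed with the service_role key,
# so a user-token refresh elsewhere does not affect them.
admin_client = create_client("https://xyz.supabase.co", "service-role-key")
users = admin_client.auth.admin.list_users()
```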

@Kabya002
Author

I’m currently working on refactoring the fix to ensure that auth.admin.* routes always use the service_role key and aren't affected by user token refresh logic. I appreciate you pointing this out, and I’ll update the PR soon to reflect this improved approach.

If there are any references, patterns, or internal logic you'd recommend looking into (especially around how the client enforces separation of admin vs. user scopes), I’d really appreciate it!

Thanks again for the guidance — I’ll post an update shortly once I’ve implemented the fix.


Successfully merging this pull request may close these issues.

bug: client access token for auth admin API is not properly refreshed