useStableQueryArgs + defaultSerializeQueryArgs + serializeQueryArgs - avoiding attempts to serialize huge parameters #4995
We're not treating the user as dumb - if you look at the git blame, you see that this was introduced with #2844 to fix a user-reported bug in a specific valid usage scenario at #2816. Or in other words: it's perfectly valid to want even multiple different queries to override each other in the same cache key sometimes, and then this logic is needed so you can switch between them.
Hey @phryneas thanks for the reply! First of all, apologies if I sounded rude in my late-night anger against the poor comment. Obviously you'll know all the possible use cases where it'd matter better than me. I'll do my homework and read up on those issues, but my first thought, assuming you agree my use case is worth considering, would be whether it's possible to let the user override even this serialization at the endpoint level if they wanted to, or are you saying it'd never make sense to do that? My understanding of the contract with serializeQueryArgs is that I take on the responsibility of making sure I return a different value for two sets of parameters if they should be considered different. And I should clarify: I'm not changing the parameters and expecting the request not to trigger; rather, my parameters are consistent, and I'm trying to save you the work of comparing them, but you won't let me.
Cheers!
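For reference, a minimal sketch of the kind of endpoint-level serializeQueryArgs being described here, assuming the caller already has a compact unique identifier for the payload (the endpoint and field names are invented for illustration, not from the thread):

```ts
import { createApi, fakeBaseQuery } from '@reduxjs/toolkit/query/react'

// Hypothetical arg shape: a huge payload plus a precomputed unique id.
type CrunchArg = { datasetId: string; rows: unknown[] }

const api = createApi({
  baseQuery: fakeBaseQuery(),
  endpoints: (build) => ({
    crunch: build.query<number, CrunchArg>({
      // The caller takes on the contract: two different datasets must never
      // share a datasetId, otherwise they would collide in the cache.
      serializeQueryArgs: ({ endpointName, queryArgs }) =>
        `${endpointName}(${queryArgs.datasetId})`,
      queryFn: async (arg) => ({ data: arg.rows.length }),
    }),
  }),
})
```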
I do recall that comment in the code - it's definitely aimed at the internal contributors feeling silly themselves, not trying to treat the users of the library as dumb! It does seem ultra niche to want to override the behaviour here in a way that just adding additional logic to the endpoint in the skip or queryFn couldn't solve. If this is considered a good idea, I could see an easy implementation being basically an extra flag like you suggested, or exposing the defaultSerializeQueryArgs function at a createApi level? Another solution for your app's use case could be just using a fixed key and having a listener/middleware decide when the cache should be updated in response to state/serialisation changes?
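A rough sketch of that last idea, assuming an api slice created with `tagTypes: ['BigDataset']`, an endpoint whose serializeQueryArgs always returns a fixed key, and a hypothetical `dataRevisionChanged` action the app dispatches itself (none of these names are from the thread):

```ts
import { createAction, createListenerMiddleware } from '@reduxjs/toolkit'
import { api } from './api' // hypothetical api slice with tagTypes: ['BigDataset']

export const dataRevisionChanged = createAction('app/dataRevisionChanged')

export const listenerMiddleware = createListenerMiddleware()

listenerMiddleware.startListening({
  actionCreator: dataRevisionChanged,
  effect: (_action, { dispatch }) => {
    // The endpoint itself uses a fixed cache key
    // (serializeQueryArgs: () => 'bigDataset'), so staleness is signalled
    // here instead of by comparing huge arguments.
    dispatch(api.util.invalidateTags(['BigDataset']))
  },
})
```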
Hmm, tbh I'm not 100% sure we'd need to serialize in the first place - this could also be an option:

```ts
import { useEffect, useRef, useMemo } from 'react'
import { copyWithStructuralSharing } from '@reduxjs/toolkit/query'

export function useStableQueryArgs<T>(queryArgs: T) {
  // Keep the last stabilized args around between renders.
  const cache = useRef(queryArgs)
  // Reuse old references for any part of the new args that is deeply equal.
  const copy = useMemo(
    () => copyWithStructuralSharing(cache.current, queryArgs),
    [queryArgs],
  )
  useEffect(() => {
    if (cache.current !== copy) {
      cache.current = copy
    }
  }, [copy])
  return copy
}
```

In that case, …
would …
@EskiMojo14 depends on the data. If it's just 500 objects that have absurdly long strings inside them that make the stringified version so big, it would be a very big win.
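To illustrate why (a small made-up example, not from the thread): copyWithStructuralSharing lets subtrees that are deeply equal keep their previous object identity, so downstream memoization sees referentially equal args.

```ts
import { copyWithStructuralSharing } from '@reduxjs/toolkit/query'

const previous = { filters: { region: 'EU' }, rows: [{ id: 1, blob: 'x'.repeat(1_000_000) }] }
const next = { filters: { region: 'EU' }, rows: [{ id: 1, blob: 'x'.repeat(1_000_000) }] }

const merged = copyWithStructuralSharing(previous, next)
console.log(merged.filters === previous.filters) // true: the unchanged subtree reuses the old reference
console.log(merged === previous) // true: nothing actually changed, so the old object survives as-is
```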
Apparently I was the one who filed that PR and wrote that comment. I'll be honest, this is a classic "wait, …
I made a PR of what I understood @phryneas' suggestion to be. Not sure if there are other aspects of it that need covering though, so happy to make changes there.
I'll try to give it a go sometime this week. I've basically worked around it using a WeakMap to pass around these huge objects, since I already had unique identifiers for them.
In my case it's single-level-deep objects in the thousands, with something like 200 properties each. Do you think the proposed PR would help in this case?
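For context, a rough reconstruction of that kind of workaround: the comment above mentions a WeakMap, but this sketch uses a plain Map keyed by the identifier, which is one way to arrange it; all names here are made up.

```ts
// Registry that keeps the huge payloads out of the query args entirely.
const payloadRegistry = new Map<string, Record<string, unknown>[]>()

export function registerPayload(id: string, rows: Record<string, unknown>[]): string {
  payloadRegistry.set(id, rows)
  return id // the tiny id is what gets passed to the hook and used as the cache key
}

// Inside an endpoint, only the id ever reaches RTK Query:
//   queryFn: async (id: string) => {
//     const rows = payloadRegistry.get(id) ?? []
//     return { data: rows.length }
//   }
```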
Depends on whether they are at least partially referentially equal. Let's see.
I also came across this issue. In our case we pass an SDK client as a param to some of our queries. However, we got errors in the …
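One hedged sketch of keeping such a client out of the cache key, relying on the documented behaviour that a non-string return value from serializeQueryArgs is itself run through the default serializer (the `sdk` field and endpoint shape are invented for illustration):

```ts
import { createApi, fakeBaseQuery } from '@reduxjs/toolkit/query/react'

type Sdk = { fetchUser(id: string): Promise<{ name: string }> }

const sdkApi = createApi({
  reducerPath: 'sdkApi',
  baseQuery: fakeBaseQuery(),
  endpoints: (build) => ({
    user: build.query<{ name: string }, { sdk: Sdk; userId: string }>({
      // Drop the non-serializable client before the arg contributes to the key.
      serializeQueryArgs: ({ queryArgs: { sdk, ...rest } }) => rest,
      queryFn: async ({ sdk, userId }) => ({ data: await sdk.fetchUser(userId) }),
    }),
  }),
})
```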
I've got a use case where a query takes a bunch of data as input. Like, a lot. Since I don't want to end up with a cache key that's huge, I implemented serializeQueryArgs and provided a suitable replacement that uniquely represents the data in a manageable-size string identifier with no collisions, perfect for a cache key. Imagine my surprise when I'm seeing long-running attempts at memoizing a JSON the size of Texas, and when I trace it I find this:
The comment feels like it's treating the user as dumb. Is there a way to avoid this?
In short: I have a parameter that I need in queryFn, and I don't want the library to ever try to serialize it.