Support for "ignoring a list of specific fields" #32
Comments
Hi @hdw868, couldn't you preprocess the data yourself before passing it to the fixture? Btw, are you using
Hi @nicoddemus,
Hi @hdw868, while it might seem convenient to ignore just some top-level keys, this doesn't cover nested structures:

```
{
    "orders": [
        {
            "id": "aksjsnj-10291",
        }
    ],
    "product": {
        "id": "215",
    },
}
```

If a user wants to remove the nested `id` fields, a flat list of keys to ignore would not be enough.

For the simple use case, the proposal to ignore just a top-level key, the code:

```python
del data["id"]
data_regression.check(data)
```

Becomes:

```python
data_regression.check(data, ignore_key=["id"])
```

Which is not a big win IMHO. So I'm particularly 👎 on adding this, but let's hear what others think.
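For reference, a minimal sketch of the preprocessing approach applied to nested data, assuming a small helper (here called `strip_keys`, which is not part of pytest-regressions) that drops unwanted keys at any depth; the `"name"` field is only illustrative:

```python
def strip_keys(obj, keys):
    """Return a copy of obj with the given keys removed at any nesting level."""
    if isinstance(obj, dict):
        return {k: strip_keys(v, keys) for k, v in obj.items() if k not in keys}
    if isinstance(obj, list):
        return [strip_keys(item, keys) for item in obj]
    return obj


def test_orders(data_regression):
    data = {
        "orders": [{"id": "aksjsnj-10291"}],
        "product": {"id": "215", "name": "widget"},  # illustrative extra field
    }
    # Drop every "id", however deeply nested, before handing the data to the fixture.
    data_regression.check(strip_keys(data, {"id"}))
```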
We had the same issue using Below is a recipe for a new fixture based on
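Purely as an illustration of the shape such a wrapper fixture could take (the name `data_regression_ignore` and its behaviour are assumptions, not the recipe referenced above):

```python
import copy

import pytest


@pytest.fixture
def data_regression_ignore(data_regression):
    """Wrap data_regression.check so callers can list top-level keys to drop."""

    def check(data, ignore=(), **kwargs):
        cleaned = copy.deepcopy(data)
        for key in ignore:
            cleaned.pop(key, None)
        data_regression.check(cleaned, **kwargs)

    return check
```

A test would then call something like `data_regression_ignore(result, ignore=["instanceId"])`.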
I agree with you that we must find a more elegant way of addressing that. But I don't think that adding an
Yes, providing only an ignore option may make things complicated anyway; however, the workaround provided by @igortg seems to have the same problem. Just wondering if there is a more elegant way to handle this situation.
A more elegant way would be building your own serializer to convert objects to dicts without the unwanted fields. Pydantic has something like that, but you have to specify the attributes you want to ignore in the nested objects as well.
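A small sketch of that pydantic approach (pydantic v1 syntax; the models below are only illustrative):

```python
from pydantic import BaseModel


class Product(BaseModel):
    id: str
    name: str


class Order(BaseModel):
    id: str
    product: Product


order = Order(id="aksjsnj-10291", product=Product(id="215", name="widget"))

# The exclusion must spell out the unwanted attribute at every nesting level.
data = order.dict(exclude={"id": ..., "product": {"id"}})
# data == {"product": {"name": "widget"}}
```

The resulting dict could then be handed to `data_regression.check`.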
Hmm, would adding an exclude option like pydantic's be useful? I may try to create a PR if it's worth trying.
As an idea, I think it would be nice to have a more generic solution that can handle not just ignoring specific fields, but also custom comparison behaviour. For example, one may want to be able to
One way to achieve this could be by providing a way to override fields' values with any object that defines an `__eq__`:

```python
from unittest.mock import ANY

from pytest import approx

data = {
    "orders": [{
        "id": "aksjsnj-10291",
    }],
    "product": {
        "id": "215",
    },
    "gravity": 9.81,
    # ...100500 other fields
}

data_regression.check(
    data,
    {'product': {'id': ANY}, 'gravity': approx(10, 0.1)}
)
```
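For context, this would work because `unittest.mock.ANY` and `pytest.approx` both implement `__eq__`, so plain equality against the override values already behaves as wanted; the fixture would still need to merge these overrides into its comparison against the stored file. A standalone illustration:

```python
from unittest.mock import ANY

from pytest import approx

# ANY compares equal to any value; approx compares within the given tolerance.
assert {"id": "215"} == {"id": ANY}
assert 9.81 == approx(10, rel=0.1)
```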
At first glance I like this idea; it seems like a nice, general way to customize how things are compared.
I forked the library (https://github.com/adamzev/pytest-regressions) and implemented a solution that works for my use case. I focused just on data regression YAML files. For lines you want to ignore you can add a

If this would work for other people I'd be happy to look at what needs to be done to get this merged upstream.
FWIW, here is a relevant discussion in a similar library: syrusakbary/snapshottest#21. It is also worth pointing out that Jest does this with their "property matchers": https://jestjs.io/docs/en/snapshot-testing#property-matchers.
I was using this tool to generate our regression baselines; the result is a JSON string like:

The problem is that there are some fields, such as the `instanceId` field, that hold a UUID and will always change, and I don't want this false alert.

One workaround would be: we provide `ignores=["instanceId"]` to the `data_regression` function, and the value of this field is ignored when comparing the result. Or, we could replace the value of a specific field with a special string, such as `"{% ignore %}"` or something else.
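A rough sketch of the second idea as it can already be approximated today, by substituting the placeholder before calling the fixture (`get_instance_payload` is a hypothetical helper; only the `instanceId` field and the `{% ignore %}` string come from the description above):

```python
def test_instance(data_regression):
    data = get_instance_payload()  # hypothetical helper returning the JSON-like dict
    # Overwrite the volatile UUID with a stable placeholder so the baseline never changes.
    data["instanceId"] = "{% ignore %}"
    data_regression.check(data)
```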