Transformation action now executes after failed schema check #436
It's actually not a bug. We changed this behavior in v0.22.0. But we also added a second parameter to the transformation function that you can use to check whether any issues occurred. Note that the input is still correctly typed. |
Feel free to share the exact code of your schema with me. This might help me understand your problem better and find an appropriate solution for you. |
My transformation code expects the validation to pass before executing.

const schema = transform(
  string('Invalid string', [regex(/^[0-9]+ (milliseconds?|seconds?|minutes?|hours?|days?)/)]),
  (value) => {
    return spacialParseFunctionThatNeedsTheRegexToMatch(value)
  }
); |
The reason for this change is that with the previous implementation, it was not possible to execute the pipeline of an object if any issues occurred before. This leads to problems especially with form validation, because it is quite normal that some fields do not match the requirements when the form is submitted. With the previous implementation, it was not possible to show all issues at once. That's why we now distinguish between issues that affect the data type and issues that don't. This allows us to run the outer pipelines as soon as the types are correct, even if there are minor problems with specific fields. The disadvantage of this change is that we also need to run the transformation function even if issues occurred before. |
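The distinction described above can be sketched in plain JavaScript. This is an illustrative stand-in, not Valibot's actual implementation; the names (checkField, parseForm, the typed flag) are hypothetical. The point is that a type issue marks the result as untyped, while validation issues are merely collected, so a cross-field check can still run once both fields have the correct type and all form errors surface at once.

```javascript
// Hypothetical sketch of the "type issues vs. validation issues" idea.
// Not Valibot's real API; names are illustrative.
function checkField(value, validators) {
  if (typeof value !== 'string') {
    // A type issue: the result is untyped, dependent checks must not run.
    return { typed: false, issues: ['Expected a string'] };
  }
  // Validation issues: collect them, but the value is still a string.
  const issues = validators
    .filter((v) => !v.check(value))
    .map((v) => v.message);
  return { typed: true, value, issues };
}

function parseForm(input) {
  const password = checkField(input.password, [
    { check: (v) => v.length >= 8, message: 'Too short' },
  ]);
  const confirm = checkField(input.confirm, []);
  const issues = [...password.issues, ...confirm.issues];
  // The cross-field check runs as soon as both fields have the correct
  // type, so all form errors can be reported at once.
  if (password.typed && confirm.typed && password.value !== confirm.value) {
    issues.push('Passwords should match');
  }
  return issues;
}

console.log(parseForm({ password: 'short', confirm: 'other' }));
// Reports both the length issue and the mismatch issue together.
```

With the previous all-or-nothing behavior, the length issue on `password` would have suppressed the mismatch check entirely.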
Does the following change to your schema work for you?

import * as v from 'valibot';

const Schema = v.transform(
  v.string('Invalid string', [
    v.regex(/^[0-9]+ (milliseconds?|seconds?|minutes?|hours?|days?)/),
  ]),
  (value, { issues }) => {
    if (!issues) {
      return spacialParseFunctionThatNeedsTheRegexToMatch(value);
    }
    return value; // or something else
  }
);

This is the same with a bit less code:

import * as v from 'valibot';

const Schema = v.transform(
  v.string('Invalid string', [
    v.regex(/^[0-9]+ (milliseconds?|seconds?|minutes?|hours?|days?)/),
  ]),
  (value, { issues }) => (issues ? value : spacialFunction(value))
);

Note that you can also outsource the logic into your custom function to further reduce the code:

import * as v from 'valibot';

const Schema = v.transform(
  v.string('Invalid string', [
    v.regex(/^[0-9]+ (milliseconds?|seconds?|minutes?|hours?|days?)/),
  ]),
  spacialFunction
); |
The data type could still be wrong. Good luck fixing the types for that issue XD I need to think about this problem for a while. BTW, checking the issues list does solve the problem. |
I have a question about this. |
This solution also breaks the output type.

const Schema = transform(string([minLength(10)]), (value, { issues }) => {
  // We can't transform the data because we expected the validations to pass.
  if (issues) {
    return value;
  }
  return Number(value);
});

// The output type is now string | number instead of number
type SchemaOutput = Output<typeof Schema>; |
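The widening described above can also be observed at runtime with a plain-JavaScript stand-in for the transform callback (runTransform and action are illustrative names, not Valibot's API): when issues are present the callback returns the untouched string, otherwise a number, so every consumer has to handle both cases.

```javascript
// Stand-in for a transform action that receives the collected issues.
function runTransform(value, issues, action) {
  return action(value, { issues: issues.length ? issues : undefined });
}

// Same pattern as the schema above: skip the conversion when issues exist.
const action = (value, { issues }) => (issues ? value : Number(value));

const ok = runTransform('42', [], action);                 // number 42
const failed = runTransform('42', ['Too short'], action);  // still the string '42'

console.log(typeof ok, typeof failed); // 'number' 'string', i.e. string | number
```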
I don't see anything wrong with executing the pipeline even if an issue happened before. But distinguishing between data type issues and validation issues makes no sense to me. |
This would force us to type the input as unknown. Strictly speaking, we do not distinguish between data type issues and other issues. Instead, we set | This exchange is very important and I am grateful for your feedback. |
I would just pass the number |
That would do the trick.

const Schema = transform(object({...}), (value, { issues }) => {
  if (issues) {
    // Weird and unnecessary
    return createNullableVersionOfComplexObject()
    // Also suboptimal
    return undefined as unknown as ComplexType
  }
  return transformStringToComplexObject(value);
});

Exactly, transforming before type validating actually makes no sense. Ironically, typing the input as unknown might be less confusing now. |
It seems to me that we have three choices. We can try to make this behavior configurable. We can leave it as is, with weird behavior in special cases. Or we can skip transformations if there are issues and mark the input as untyped even if the type is valid. The disadvantage of the third option is that we will no longer be able to run outer pipelines. |
One question. Do you change the data type in |
I don't know what your thoughts are on this. I'm fine with option 1. Option 3 also seems fine.
Yep. |
I was reading about the problem some more. Skipping the .transform but still performing .refine seems like a bug.
To be honest, after realizing all the edge cases that arise from this feature, I think it might have been better not to support this. XD Keep it simple and predictable. |
In what case do you think this could lead to a security problem?
I agree. The next step would be to figure out the API for it. The implementation should be simple.
Could be, but then external pipelines cannot do their validation.
One of the biggest selling points of Valibot is its small and tree-shakeable bundle sizes. That's why a lot of developers use Valibot for form validation. I myself am the author of a form library called Modular Forms. That's why I prioritized it over the edge case for |
I don't have an example off hand, but unpredictable behavior is why bugs exist.
That makes a lot of sense. Let me think about it some more. |
Thank you for your feedback on this. I look forward to improving Valibot with you! |
Hi, sorry for interrupting the dialog, but I have a few comments. |
Yes, feel free to reach out on Discord (fabianhiller) or express your pain points and ideas here in the comments. |
Added you in Discord. |
I don't think we can solve the problem of showing every relevant error in a form with Valibot.

function assert_valid_form(input) {
  if (typeof input.password !== "string") {
    throw new Error('Password is required');
  }
  if (typeof input.confirm !== "string") {
    throw new Error('Confirm password is required');
  }
  if (input.password !== input.confirm) {
    throw new Error('Passwords should match');
  }
}

This assert function is fine but can't show all the relevant errors.

function assert_password_type(input) {
  if (typeof input.password !== "string") {
    throw new Error('Password is required');
  }
}

function assert_confirm_type(input) {
  if (typeof input.confirm !== "string") {
    throw new Error('Confirm password is required');
  }
}

function assert_if_password_equals_confirm(input) {
  if (typeof input.password !== "string" || typeof input.confirm !== "string" || input.password !== input.confirm) {
    throw new Error('Passwords should match');
  }
}

I broke up every relevant error into its own function so that we can execute them independently and catch every error. This problem doesn't magically go away when using Valibot. Now let's see how we could write this in Valibot.

const schema1 = object({
  password: string('Invalid password'),
  confirm: string('Invalid confirm password'),
})

// We would need a function to rewrite all error messages inside the schema.
const schema2 = rewriteError(object({
  password: string(),
  confirm: string(),
}, [
  custom((input) => {
    return input.password === input.confirm
  })
]), 'Passwords should match')

// We would need to validate multiple schemas at once.
parse([ schema1, schema2 ], input)

I think we should revert the code to its original behavior. |
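The multi-schema idea above can be sketched in plain JavaScript (the helpers typeSchema, matchSchema, and parseAll are hypothetical, not a proposed Valibot API): each schema is an independent validator returning its own issues, and a combinator merges them so every relevant error surfaces in one pass.

```javascript
// Each "schema" here is just a function returning a list of issues.
const typeSchema = (input) => {
  const issues = [];
  if (typeof input.password !== 'string') issues.push('Invalid password');
  if (typeof input.confirm !== 'string') issues.push('Invalid confirm password');
  return issues;
};

// Only meaningful when both fields are strings; otherwise it stays silent,
// because the type schema already reports those problems.
const matchSchema = (input) =>
  typeof input.password === 'string' &&
  typeof input.confirm === 'string' &&
  input.password !== input.confirm
    ? ['Passwords should match']
    : [];

// Validate multiple schemas at once and collect every issue.
function parseAll(schemas, input) {
  return schemas.flatMap((schema) => schema(input));
}

console.log(parseAll([typeSchema, matchSchema], { password: 'a', confirm: 'b' }));
// ['Passwords should match']
```

Note that the dependency between the two schemas (skip the match check when the types are wrong) still has to be encoded by hand inside matchSchema, which is exactly the relation the comment says Valibot cannot express.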
Can you explain why they are
For pipes, you can set |
@ivands the default value of an
That's probably true. With Valibot I try to find the best compromises for a very good overall experience. |
I'm using my own custom form library, with Vue.js and Quasar components.
About
This will lead to incorrect types in parent pipes if the child transform is not executed. But executing the transform without checking for valid input is not good either. |
It should still be an empty string, but I understand the problem, e.g. when using
Thanks for this feedback! Do you have any ideas on how we could design the API for use cases like this?
This is not necessarily true. We can change the implementation to run |
Right now the issue is the "type" check. It allows bypassing some input validation and causes transforms and pipes to run on potentially invalid data. I don't think it adds enough value to Valibot, and it causes confusion and inconsistency. I like |
It should be possible to mark issues as |
Can you provide code examples of Zod's API and the |
Zod refine/transform API https://zod.dev/?id=relationship-to-refinements:

const nameToGreeting = z
  .string()
  // Transform will not be called on non-strings. Matches valibot "type" check
  .transform((val) => val.toUpperCase())
  .refine((val) => val.length > 15)
  // Transform will not be called if length is <= 15
  .transform((val) => `Hello ${val}`)
  .refine((val) => val.indexOf("!") === -1);

Fatal issues https://zod.dev/?id=abort-early:

const schema = z.number()
  .superRefine((val, ctx) => {
    if (val < 10) {
      ctx.addIssue({
        code: z.ZodIssueCode.custom,
        message: "should be >= 10",
        // Mark issue as fatal preventing next refine/pipe execution
        fatal: true,
      });
      // Need to return custom marker type too
      return z.NEVER;
    }
  })
  // Will not be executed if val < 10
  .refine(val => val !== 12); |
I have some questions about this approach.
|
|
And what about outer pipelines? Example:

object({
  field: transform(string(), v => v.length),
}, [
  // This pipeline should probably also not execute because of the transform.
  custom(v => {
    return v.field === 10
  })
]) |
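The hazard in the example above can be made concrete with a small plain-JavaScript illustration (parseField and outerCheck are illustrative names, not Valibot internals): if the inner transform is skipped because of an earlier issue but the outer pipeline still runs, the custom check silently compares a raw string against a number.

```javascript
// Simulate a field whose transform maps a string to its length.
function parseField(value, { skipTransform }) {
  if (typeof value !== 'string') throw new Error('Expected a string');
  // If an earlier issue caused the transform to be skipped,
  // the raw string leaks through to the outer pipeline.
  return skipTransform ? value : value.length;
}

// The outer pipeline expects the transformed number.
const outerCheck = (field) => field === 10;

// Transform ran: the outer pipeline sees a number, as expected.
console.log(outerCheck(parseField('ten chars!', { skipTransform: false }))); // true
// Transform skipped: the outer pipeline compares a string to 10 and fails silently.
console.log(outerCheck(parseField('ten chars!', { skipTransform: true })));  // false
```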
Yep. With This leads to one of the downsides of I have contributed |
This isn't a downside of Zod though. |
It only exists when using exceptions:

function validate_password_type(input, issues) {
  if (typeof input.password !== "string") {
    issues['password'].push('Password is required')
  }
}

function validate_confirm_type(input, issues) {
  if (typeof input.confirm !== "string") {
    issues['confirm'].push('Confirm password is required')
  }
}

function validate_if_password_equals_confirm(input, issues) {
  if (issues['password'].length !== 0 || issues['confirm'].length !== 0) {
    return
  }
  if (input.password !== input.confirm) {
    issues['confirm'].push('Passwords should match');
  }
}

function validate_schema(input) {
  const issues = {
    'password': [],
    'confirm': []
  }
  validate_password_type(input, issues)
  validate_confirm_type(input, issues)
  validate_if_password_equals_confirm(input, issues)
  return issues
}

const result = validate_schema({
  'password': 'a',
  'confirm': 'b'
})
It is a nice idea. You don't even need to define schema twice, you can use
"fatal" flag would cause validations to terminate. But transforms should not be run anyway, I agree with you on this one. The reason both |
I'm having a hard time trying to explain this with just text and some example code.

function validate_password_type(input, issues) {
  if (typeof input.password !== "string") {
    issues['password'].push('Password is required')
  }
}

function validate_confirm_type(input, issues) {
  if (typeof input.confirm !== "string") {
    issues['confirm'].push('Confirm password is required')
  }
}

function validate_if_password_equals_confirm(input, issues) {
  // This part is the same as doing the type checks again.
  // Looking at the issues array to see if the type checks failed is another way of doing the same thing.
  if (issues['password'].length !== 0 || issues['confirm'].length !== 0) {
    return
  }
  // Trying to run this code without doing the type checks is what you're trying to do with Valibot.
  // But as you can see from this example, that's impossible.
  // You might think we can decide what kind of validations are FATAL and what kind aren't.
  // But I argue that's up to the developer writing the custom validation.
  // Hence, you might as well write a second schema.
  // Because the relation between different validations can't be defined in Valibot.
  if (input.password !== input.confirm) {
    issues['confirm'].push('Passwords should match');
  }
}

If you agree that transformations should only run when no issues surface, then that should also extend to outer pipelines.

object({
  field: string([ maxLength(5) ])
}, [
  // This pipeline shouldn't run because of the same problems with transform.
  // Users will have an expectation that the object schema passed.
  custom(v => v.field.length > 10)
])

I hope this makes sense. |
I agree with both of your examples. By default, it should be like you are saying, both transformations and outer pipelines should not run if any field is invalid.
It is not. You are not actually doing the work twice, the validations could be more complex, and you are not writing the code twice.
I'm not trying to do that. As you can see from my code example, I did not remove checks for issues. My suggestion is to make this check explicit:
const schema = object(
{
username: string(),
password: string([length(100)]),
confirm: string(),
},
[
customPartial(
// User explicitly saying that it is fine to run this validation with
// just "password" and "confirm" valid
['password', 'confirm'],
(input) => input.password === input.confirm
),
// This will not run if any field is invalid
custom(input => input.username.length > 5)
]
); |
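A hedged plain-JavaScript sketch of how a helper like the customPartial proposed above might behave (the helper, its signature, and the issues-by-field shape are all hypothetical, not an existing Valibot API): the check runs only when the fields it explicitly depends on produced no issues, regardless of other fields' validity.

```javascript
// Hypothetical customPartial: runs the check only when every listed
// dependency field has no issues of its own.
function customPartial(fields, check, message = 'Invalid input') {
  return (input, issuesByField) => {
    const dependenciesValid = fields.every(
      (field) => (issuesByField[field] ?? []).length === 0
    );
    // Skip silently when a dependency is invalid;
    // its own issue has already been reported.
    if (!dependenciesValid) return [];
    return check(input) ? [] : [message];
  };
}

const passwordsMatch = customPartial(
  ['password', 'confirm'],
  (input) => input.password === input.confirm,
  'Passwords should match'
);

// Runs even though "username" may be invalid elsewhere.
console.log(passwordsMatch({ password: 'a', confirm: 'b' }, { password: [], confirm: [] }));
// ['Passwords should match']
```

The key design point is that the dependency list makes the relation between validations explicit, instead of the library guessing which earlier issues should suppress the check.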
My apologies then. I was confused between the current solution that Fabian introduced and your proposal, I guess.

customPartial(
  [
    // You're being explicit about the validations you're relying on. (That's the part that was missing in the current solution.)
    // It could also support field paths
    ['deepObject', 'password'],
    ['deepObject', 'confirm'],
  ],
  (input) => input.deepObject.password === input.deepObject.confirm
)

And we could create a TS type that filters out everything except the relevant fields inside the input.

type Input = {
  deepObject: {
    password: string
    confirm: string
  }
} |
I had a call with @Demivan and thought about the current behavior of |
v0.29.0 is available |
After upgrading to 0.28.1 from 0.20.1 the transformation action executes after a failed schema check.
This should never happen and is a bug.