RFC: Refactoring transformers and Zod 3 #264
Comments
The only regret I have about
The problem with them is:
To shake off a lot of dead code in production,
|
That should be its own issue; it's not really related to this discussion |
Sorry, I thought it could be part of the transformers change. |
This is just a proposal at this stage, there may be a good reason not to do this that I'm not aware of. I understand what you're proposing above and I'm against switching everything to functions, mostly because you lose autocomplete. I don't want to degrade the developer experience for everyone to appease a tiny minority who care about a couple extra kb in their bundle. |
@o-alexandrov I just published a v3 branch with the new transformers implementation. There aren't many changes outside of ZodTransformer. I'd like to publish the v3 beta once we have ESM working. |
To provide a data point: when I saw that Zod v2 had transformers I expected them to work pretty much exactly how you describe here, and I was confused when they didn't. This proposal sounds grand to me. The only thing I'd like to flag is that it sounds like you're going to be implementing multiple transforms as lists of functions. The alternative would be to use function composition, which I suspect would be simpler and more performant. Have you considered that approach? |
👋 Hi, was trying out the proposed interface in
The latter works, but the former results in an error:
The former returns the result of
|
Sorry I haven't implemented the |
Here's an example of a transform for parsing duration strings: const duration = z
  .string()
  .transform(ms)
  .refine((val) => z.number().safeParse(val).success, {
    message: "invalid duration",
  });
The first thing I noticed is that I couldn't figure out how to reuse the existing checks from z.number(). If I want to further refine the schema (e.g. range constraints), I can't use the ZodNumber methods. Would it be possible to pass a schema to refine? For example: const duration = z
  .string()
  .transform(ms)
  .refine(z.number());
const durationRange = duration.positive().max(30_000); |
It's an interesting idea but I think it would get confusing fast. Also you can use the ZodNumber methods like this: const duration = z
  .string()
  .transform(ms)
  .refine((val) => z.number().positive().max(30_000).safeParse(val).success, {
    message: "invalid duration",
  });
It's not quite what you're looking for but it makes everything more explicit. To make this less verbose I could introduce a .refineWithSchema method: const duration = z
  .string()
  .transform(ms)
  .refineWithSchema(z.number().positive().max(30_000));
It's basically syntactic sugar over your suggestion. |
This seems like an incidental detail to me. Is the instance type significant to calling code? I have been reasoning about a schema as something that takes an input and returns a result of some type iff the input meets all the validation criteria. Knowing the type of the input isn't necessarily useful, since it often comes from deserializing an input. Having read back over the examples in the proposal, I think the problem isn't with the interface / behavior of ZodTransformer; IIUC, the existing implementation can already express these cases. Let's break down the original problem case: const stringToNumber = z.transformer(
z.string(),
z.number(),
val => parseFloat(val)
);
stringToNumber.default("3.14")
const defaultedStringToNumber = z.transformer(
stringToNumber.optional(),
stringToNumber,
val => val !== undefined ? val : "3.14"
)
I actually would have expected the default to be provided for the input (the head of the chain). Still, if we recognize that stringToNumber takes a string as its input, we can do this: z
.string()
.default("3.14")
.transform(stringToNumber, val => val)
.parse(undefined); // 3.14
We were able to reuse our stringToNumber transformer. If parsing a value is expensive and we'd prefer to provide a default for the tail instead of the head, we can do that with a union: const shortCircuit = z.union([
z.literal(undefined).transform(z.number(), x => 3.14),
stringToNumber
]);
shortCircuit.parse(undefined); // 3.14
shortCircuit.parse("10"); // 10
The composition facilities of Zod cover these cases. Going through these examples, the only significant fault I found with the current implementation is the return type of .transform. I think having an accurate transform return type and clarifying docs would be sufficient to avoid confusion while retaining the power of the existing functionality. |
Er... I forgot to try my original use case: stringToNumber.positive().max(5).parse("10")
// Property 'positive' does not exist on type 'ZodTransformer<ZodString, ZodNumber>' That doesn't work, presumably for the same reason why
Correction, I think the type |
[Content status: strong opinions held weakly; not passing value judgments, just offering suggestions]
To concur: IMO, a parsing library is (morally) a toolbox for building functions of type unknown => A. If I understand correctly, @colinhacks proposes factoring such functions into a validation step followed by a transformation step. The question of how to reverse-map downstream errors into a form that reflects the original input is an interesting one, but I don't think it should be a blocker for work on transformers. Just surfacing "validate, then transform" solves a ton of real-world use cases. We can talk about performing validation on the transformed values in a later proposal - IMO in the vast majority of cases it's going to be an anti-pattern because it will completely wreck your error messages and perform likely unnecessary work on invalid inputs. |
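For illustration, a minimal sketch of that "validate, then transform" shape, assuming the single-function .transform proposed in this RFC:
import * as z from "zod";
// validation runs against the raw input; the transformation only runs afterwards
const nonEmptyTrimmed = z
  .string()
  .refine((s) => s.trim().length > 0, { message: "must not be blank" })
  .transform((s) => s.trim());
nonEmptyTrimmed.parse("  hi  "); // "hi"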
Beautifully put. The simplicity and intuitiveness of "validate, then transform" is a big part of the appeal. It also harmonizes nicely with how Zod is generally used: as a source of typesafety that lets you confidently access/manipulate data later. I hadn't considered the problem of corresponding error messages to data structure 😬 Great point. I'm surprised no one's raised this issue since Zod 2 has been in beta. That concept is relevant to another idea I mentioned above: the idea of catching ZodErrors inside refinements and surfacing them to the user (currently Zod intentionally doesn't catch errors that occur inside refinement functions). But since refinements and transformations can be interleaved under this proposal, there's no guarantee that the structure of the data in a refinement bears any resemblance to the input data structure. So I'm probably not going to do that. @jstewmon composability is still possible with the new proposal, but you do it with functions instead of schemas. I think this is an acceptable if not preferable approach. const stringToNumber = (val: string) => parseFloat(val);
z
.string()
.default("3.14")
.transform(stringToNumber)
.parse(undefined); // 3.14 You could compose pipelines of transformation functions with existing FP tools externally to Zod as well. |
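For illustration, a sketch of composing transformation functions outside Zod before handing the composed function to .transform (compose is a hypothetical helper; the single-function .transform from this proposal is assumed):
import * as z from "zod";
const trim = (s: string) => s.trim();
const toNumber = (s: string) => parseFloat(s);
// generic left-to-right composition of two functions
const compose = <A, B, C>(f: (a: A) => B, g: (b: B) => C) => (a: A): C => g(f(a));
const trimmedNumber = z.string().transform(compose(trim, toNumber));
trimmedNumber.parse(" 3.14 "); // 3.14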
My original writeup described the ideal error type as "an indexed type family that reifies the structure of A." But I didn't want to derail. It would be an interesting conversation to have (elsewhere); personally I don't know enough about how people actually use error messages to confidently make any kind of concrete proposal. |
@LilRed thanks for the excellent clarifying summary. I think I agree in principle with the "validate, then transform" approach, but there are very common cases with inputs like durations and timestamps (e.g. ISO 8601), which require a transformation to be performed to determine validity. Reflecting on my original feedback, my desire was to leverage all of the existing ZodType checks and error messages as a matter of convenience for these cases where the input must be transformed before its validity can be determined. I think the result of doing so is what @LilRed referred to as "utterly incomprehensible error messages" 😅 , since, in my duration example, the input is required to be a string, but if the string isn't a valid duration, the error might be:
However, I wouldn't call this incomprehensible, I would call it subtle. Whether or not the result is comprehensible depends on whether or not the result allows the failed check to be disambiguated, which is possible through inference in many cases without being explicit. (Not to argue that being inferential is better than being explicit). @colinhacks you probably already thought of this, but for the sake of being complete, I want to try to elaborate on what I see as the shortcoming of using refine... Taking this example: const duration = z
.string()
.transform(ms)
.refine((val) => z.number().positive().max(30_000).safeParse(val).success, {
message: "invalid duration",
});
If any of the refine checks fail, we won't know which one because there's no way to percolate the error details. Using refine, we have to call refine with a predicate and options for every check we want to discretely identify. I think this shortcoming would be addressed by your previous suggestion to have a complementary .refineWithSchema.
I suppose you mean that, philosophically, performing validation after a transform shouldn't be supported. But, if refine can be used after transform, why not refine with a schema? |
After some thinking and discussion here are the three families of solutions I can think of, each with various pitfalls. The last one is my favorite. All code is pseudo-Zod because I don't remember the method signatures off the top of my head. The key issue with interleaving transformation and validation steps is that validation errors that come after one or more transformations are difficult to contextualize. Toy example: z.string()
.transform(s => s.slice(4))
.refine(s => s.length > 5, "expected string of length greater than 5")
.parse('foobar');
// Throws: {msg: "expected string of length greater than 5"}
This will throw even though the original input is six characters long, because the refinement ran on the transformed value ("ar"), so the error message makes no sense relative to the input. If we really want to support interleaved transformation and validation steps without sacrificing error clarity, we can draw inspiration from stack traces and have every operation provide a contextualization string. z.string()
.transform(s => s.slice(4), "strip first four characters")
.refine(s => s.length > 5, "expected string of length greater than 5")
.parse("foobar");
// Throws: { context: ["strip first four characters"], msg: "expected string of length greater than 5" } With this approach transformation and validation steps can be interleaved to arbitrary depth while still retaining enough information to figure out what happened. I can't say I would personally use this solution if it were available; I suspect it's overkill for almost every use case. Instead of supporting arbitrary interleaving, we could go the opposite way: no interleaving at all! You can have a chain of validation steps followed by a chain of transformation steps. This means that every step that could fail is operating on the original input, so there is no need for contextualizing errors. However, this sucks for the date parsing use case that @jstewmon mentions: z.string()
.refine(s => { try { myDateLib.parse(s); return true } catch { return false } }, "malformed date")
.map(s => myDateLib.parse(s)); Duplication of work, both in code and at runtime. Ew! Maybe in completely rejecting interleaving we went too far. Instead, we could go with the following design: a chain of validation steps, followed by ONE fallible transformation step, followed by a chain of infallible transformation steps. This still has the property that every fallible step is operating on the original input, so there is no need to contextualize errors. This is my favored solution, so let me elaborate on it a bit by sketching out some types (again pseudo-Zod): type ZodResult<T> = { success: true, value: T } | { success: false, errors: ZodError[] }
// Instances of ZodValidator include z.number(), z.string(), etc.
interface ZodValidator<T> {
refine(predicate: (v: T) => boolean, msg: string): ZodValidator<T>;
// More powerful alternative to refine?
refineAlternative(predicate: (v: T) => ZodError | undefined): ZodValidator<T>;
transform<U>(f: (v: T) => ZodResult<U>): ZodTransformer<T, U>;
map<U>(f: (v: T) => U): ZodTransformer<T, U>;
parse(v: unknown): T;
safeParse(v: unknown): ZodResult<T>;
}
type ZodTransformer<T, U> = {
map<V>(f: (v: U) => V): ZodTransformer<T, V>
parse(v: T): U;
safeParse(v: T): ZodResult<U>;
}
Usage: z.string().transform((s) => {
try {
return {value: myDateLib.parse(s), success: true};
} catch {
return {error: 'input string not a date', success: false};
}
}); |
You're right that the logical conclusion of that train of thought is disallowing post-transform refinements — something I'm not willing to do. There is an important difference though between post-transform refinements with .refine (a plain predicate) and post-transform validation with a full schema. As for the "incomprehensible vs subtle" debate...I'm not sure what I think. @LilRed This is a great summary of the options here. Your proposal for |
I find this to be the most appealing because I think it is a neat generalization that can satisfy the most use cases. I sketched out some more complex scenarios to reason about how the error paths might be represented: const csvInput = "1,2,E,4";
const csvInts = z
.string()
.transform(z.array(z.string()), val => val.split(','), "split")
.transform(z.array(z.number()), val => val.map(i => parseInt(i)), "map");
// ZodError { path: [{ xform: "split", path: [] }, { xform: "map", path: [{ property: 2 }] }] }
const csvIntsAlt = z
.string()
.transform(
z.array(
z.string().transform(z.number(), val => parseInt(val), "parseInt")
),
val => val.split(','),
"split"
);
// ZodError { path: [{ xform: "split", path: [{ xform: "parseInt", path: [{ property: 2 }] }] }] }
const embedJSONInput = {
name: "foo",
policy: JSON.stringify({ effect: "allow", principal: "jstewmon", resource: "*" })
}
const embedJSON = z
.object({
name: z.string(),
policy: z.string().transform(
z.object({
effect: z.enum(["allow", "deny"]),
principal: z.string(),
resource: z.string().refine(val => val !== "*", { message: "no wildcards" })
}),
JSON.parse,
"parse json"
)
});
// ZodError { path: [{ property: "policy", path: [{ xform: "parse json", path: ["resource"] }] }] }
Note that this would be a breaking change to the ZodError path structure. |
@LilRed @colinhacks I wanted to take a step back and say that I think this has been a productive discussion, but I worry that I've been playing the role of antagonist in it, possibly to the detriment of committing to a satisfactory solution and making progress towards the next major release. I know it can be difficult to actually gather feedback from users, so I've been happy to try out the proposed implementation, provide feedback on it, and consider alternatives. My intent isn't to disengage from the discussion, but I think I've adequately expressed my feedback on this RFC, and I wouldn't characterize my position on it as dissenting. After all, most of the examples we've used are contrived, and the practical differences for users border on pedantic. If you're satisfied with the mechanics of the original proposal and want to move forward with it, then you have my support. Maybe some of the ideas we've discussed make their way into the library in the future if needed. If you want to continue deliberating this RFC, I'll keep replying. 😄 Thanks for the amazing library 🙏 |
Please tell me if I'm missing something here. Doesn't this mean we can't introspect the shape of transformed types? If I do a transform, it seems I lose the object schema. I'm using the library for both parsing and for runtime schemas. I want to be able to introspect transformed types which is why I started to use this library in the first place. :( For example, I want to write an ORM function that replaces every property of an object that has an id with just that ID. Jump to the bottom of the code example first to see what I mean. const Person = z.object({
id: z.string(),
name: z.string(),
age: z.number(),
});
type Person = z.infer<typeof Person>;
const Address = z.object({
id: z.string(),
street: z.string(),
owner: Person
});
type Address = z.infer<typeof Address>;
interface HasPropertiesOfType<PropTypes> {
[index: string]: PropTypes
}
type MapPropertiesOfTypeToNewType<T, PropTypes, NewType> = T extends HasPropertiesOfType<PropTypes> ? {
[P in keyof T]: NewType
} : never;
type PickAndMapPropertiesOfTypeToNewType<T, PropTypes, NewType> = MapPropertiesOfTypeToNewType<FilterProps<T, PropTypes>, PropTypes, NewType>;
type PropsOfType<T, TPropTypes> = {
[K in keyof T]: T[K] extends TPropTypes ? K : never;
}[keyof T];
type FilterProps<T, TPropTypes> = Pick<T, PropsOfType<T, TPropTypes>>;
type ID = string;
interface Identified {
id: ID
}
type RelationalRep<T> = PickAndMapPropertiesOfTypeToNewType<T, Identified, ID>;
type AnyObject = Record<string, unknown>;
function isTypeIfPropDefined<T>(obj: Partial<T>, keyGetter: (x: typeof obj) => T[keyof T] | undefined): obj is T {
return keyGetter(obj) !== undefined;
}
const transformToRelational = <T extends AnyObject>(value: T) => {
const relational = {} as any;
// can use the actual Address type to pre-reflect and memoize the list of properties that have id properties.
for (const prop of Object.keys(value)) {
const propValue = Reflect.get(value, prop);
if (isTypeIfPropDefined<Identified>(propValue, x => x.id)) {
relational[prop] = propValue.id;
}
}
const result: RelationalRep<typeof value> = relational;
return result;
};
const AddressRelations = Address.transform(transformToRelational);
type AddressRelational = z.infer<typeof AddressRelations>;
// Resolves to
//
// type AddressRelational = {
// owner: string;
// }
AddressRelations.shape //does not exist :(
|
@akutruff this is an interesting example, and I learned a bunch of stuff from dissecting it. In order to facilitate discussion I've pared it down as much as I could while still preserving the essentials: import * as z from "zod";
const Person = z.object({
id: z.string(),
name: z.string(),
});
type Person = z.infer<typeof Person>;
const Address = z.object({
id: z.string(),
street: z.string(),
owner: Person,
});
type Address = z.infer<typeof Address>;
// Returns the foreign keys on the address object
const getRelationsOnAddress = (address: Address): { owner: string } => {
const relations = {} as any;
for (const prop of Object.keys(address)) {
const propValue = Reflect.get(address, prop);
if (typeof propValue === 'object' && propValue !== null && 'id' in propValue) {
relations[prop] = propValue.id;
}
}
const result: { owner: string } = relations;
return result;
};
const AddressRelations = Address.transform(getRelationsOnAddress);
AddressRelations.shape; //does not exist :(
Recall that the output of a transform is produced by an arbitrary function, so the resulting schema no longer carries an object shape to expose. It is not likely to be possible to implement this in the general case.
Would you mind elaborating on this point? |
Thanks for taking the time to wade through it! You hit the nail on the head - I totally want to work in the Schema domain. Just like a merge() produces a new schema. I tried taking a crack at writing something like the following, but the generics weren't working out. function makeRelational(myType: ZodType<Shape>) : ZodType<Shape>
{
/* ... */
} I tried to copy paste the
Rereading what I wrote, I realize that I shouldn't have mixed that one sentence into the rest of it. Basically, once I have a parsed domain object, I'm going to shuttle along the actual ZodType in some headers. The shape will be used to display UI for object field editing, etc. |
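For illustration, a sketch of the kind of schema-domain helper being described, using only public APIs like .shape and z.object() and reusing the Address schema from the example above; the runtime mapping is straightforward, while preserving a precise static type for the result is the hard part under discussion:
import * as z from "zod";
// Replace any property that is itself an object schema with an `id` field by z.string().
const makeRelational = <T extends z.ZodRawShape>(schema: z.ZodObject<T>) => {
  const newShape: z.ZodRawShape = {};
  for (const [key, value] of Object.entries(schema.shape)) {
    if (value instanceof z.ZodObject && "id" in value.shape) {
      newShape[key] = z.string(); // store just the foreign key
    } else {
      newShape[key] = value;
    }
  }
  return z.object(newShape); // note: result is a loosely typed ZodObject
};
const AddressRelations = makeRelational(Address);
AddressRelations.shape; // exists, because we stayed in the schema domain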
Since I wrote my last comment I played around with Zod internals, and I've come to the conclusion that the current architecture does not in fact support schema-domain computation well! I've opened #307 to further discuss. |
I know I'm late to the party, but there's one (at least to me) very important topic that didn't really get attention yet: two-way transformations. Since I'm using zod (and plan to use it even more) for transferring data between two systems in a typesafe way, zod is usually involved in two steps:
Or, like this:
Currently, we only ever talked about deserialization, or the decoding direction. If I understand it correctly, that's all a transform currently covers: one direction. I know that zod-graphql currently is only a placeholder. But considering that such a thing should become a reality at some point, there also needs to be a solution for implementing scalars. And scalars are also bi-directional. |
I second this! Currently working on making this work with two one-way schemas, but this would be a huge help. On a related note, while working on this I noticed a problem combining transforms with extending. My use-case is adding/removing fields to/from "API"/"client" schemas post parsing, in a scenario @VanCoding mentioned above. But since most fields in both schemas don't really need transforming (the most obvious use-case would be date-strings/Date objects), I was thinking to define just one of them and get the other one by extending and overriding. Both of these things work on their own, but I haven't found a way to combine them. Thoughts? |
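For illustration, a sketch of the two one-way schema approach with date strings, assuming the v3-style single-function .transform:
import * as z from "zod";
// Wire format uses ISO date strings; the app uses Date objects.
const ApiMessage = z.object({
  id: z.string(),
  createdAt: z.string().transform((s) => new Date(s)), // decode: wire -> app
});
const WireMessage = z.object({
  id: z.string(),
  createdAt: z.date().transform((d) => d.toISOString()), // encode: app -> wire
});
const decoded = ApiMessage.parse({ id: "1", createdAt: "2021-05-01T00:00:00.000Z" }); // createdAt is a Date
const encoded = WireMessage.parse(decoded); // createdAt is an ISO string again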
Often transforms aren't reversible, so requiring an "inverse transform" in all cases limits you to toy examples - scalars that can be converted into each other without loss of information. Even something simple like In general I think defining a base object schema, then extending the base with one-way transformers is the way to go. I understand it's not optimal DX.
@VanCoding what do you mean by that?
@Brodzko what do you mean by that? |
Yeah that's a good point. I guess we'd have to define both
Yeah, fairly verbose :/
Sorry, I now realise that was pretty ambiguous. I mean I cannot do this:
I guess the problem would be to reverse the final Now that I think about it, there also isn't a good way to enforce that |
I don't know if that was the correct terminology, but in the context of graphql, there are "scalars", which are decoded when received by the server and encoded again when sent to the client. And that wouldn't be currently possible with zod.
Especially with deeply nested structures, yes. But since encoding/decoding is not the main focus of this library I can somewhat understand your position. Maybe |
@VanCoding out of curiosity, have you tried implementing bidirectional transformers as an extension to the latest Zod v3 pre-release? @colinhacks just made some changes that should help with defining user-side abstractions on top of Zod. If you give this a shot I would love to hear back. |
I am also interested in bidirectional transformers; started using Zod recently on a project and it feels like a missing piece. Would be curious to hear if there are plans as part of core Zod, or whether someone has implemented them in userland! |
As @LilRed alluded to, anyone can now simply subclass ZodType. |
@colinhacks some examples of reversible, non-toy transforms:
string -> base64-decode -> JSON.parse, and { date: string } -> { date: Date }
I've used both of those with a hideous io-ts abstraction in the past, so that you can create AWS Kinesis record parsers really nice and easily, and they hydrate events like magic: export const UserRecords = kinesisEvent(t.type({ firstName: t.string, lastName: t.string }))
export const handler = async (_ev: unknown) => {
const ev = UserRecords.decode(_ev)
if (ev._tag === 'Left') throw new TypeError('...')
return ev.right.Records.map(r => r.firstName)
} The same type can then provide strong types for creating records too: UserRecords.encode({
Records: [{ firstName: 'Abc', lastName: 'def' }],
})
Pressed send earlier than I meant to... I meant to say, I'm not actually sure what the right balance is. I moved away from io-ts and I don't miss it, because it feels too much like doing maths homework to be very productive in. Implementing that |
I would like to bring attention to this issue: #586 Also, https://github.com/colinhacks/zod/blob/master/ERROR_HANDLING.md is misleading (not updated after the v3 release?). Consider this example: https://codesandbox.io/s/distracted-hooks-ehisd?file=/src/index.ts While I understand the motivation for this change, there should be a way to get the full list of errors. Currently I can't come up with any clean, non-hackish way to do so. |
Definitely understand your perspective and the pain felt there. I've made the comment in other contexts that I think it makes sense to write your parser in a way that actually reflects your input type, so if your input type is a form, you might consider writing a very stringy/permissive/optional-y schema with individual refinements, or lean on
Thanks for calling this out. We can update the docs to reflect the fact that parsing occurs before refinement. |
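For illustration, a sketch of that kind of permissive, refinement-per-rule form schema (field names are hypothetical):
import * as z from "zod";
// Model the form as it actually arrives (strings), attach one refinement per user-facing rule,
// then transform into the domain shape.
const SignupForm = z
  .object({
    email: z.string().email({ message: "invalid email" }),
    age: z.string().refine((s) => /^\d+$/.test(s), { message: "age must be a whole number" }),
  })
  .transform((form) => ({ email: form.email, age: parseInt(form.age, 10) }));
const result = SignupForm.safeParse({ email: "a@b.co", age: "42" });
// result.success === true; result.data.age === 42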
TLDR
Transformers are poorly designed. I want to reimplement them but there will be breaking changes. Under this proposal, Zod 2 will never come out of beta and we'll jump straight to Zod 3 with a better implementation of transformers. Skip to "A path forward" for details.
The reason Zod 2 has been in beta for so long (well, one of the reasons anyway!) is that I’ve been increasingly unsatisfied with Zod’s implementation of transformers. Multiple issues have cropped up that have led me to re-evaluate the current approach. At best, transformers are a huge footgun and at worst they’re fundamentally flawed.
Context
Previously (in v1) the ZodType base class tracked the inferred type in a generic parameter called Type. But transformers accept an Input of one type and return an Output of a different type. To account for this, Type was renamed to Output and a third generic parameter was added (Input). For non-transformers, Input and Output are the same. For transformers only, these types are different.
Let's look at stringToNumber as an example. For stringToNumber, Input is string and Output is number. Makes sense.
What happens when you pass a value into stringToNumber.parse? stringToNumber.parse:
1. calls the parse function of its input schema (z.string()). If there are parsing errors, it throws a ZodError
2. passes the result into the transformation function (val => parseFloat(val))
3. parses the result of the transformation function with the output schema (z.number())
Here's the takeaway: for a generic transformer z.transformer(A, B, func), where A and B are Zod schemas, the argument of func should be the Output type of A and the return type is the Input type of B. This lets you do things like this:
The problems with transformers
After implementing transformers, I realized transformers could be used to implement another much-requested feature: default values. Consider this:
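A sketch of such a default-as-transformer, following the z.transformer signature described above (illustrative):
const stringWithDefault = z.transformer(
  z.string().optional(), // Input: string | undefined
  z.string(),            // Output: string
  (val) => (typeof val !== "undefined" ? val : "trout")
);
stringWithDefault.parse(undefined); // "trout"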
Voila, a schema that accepts string | undefined and returns string, substituting the default "trout" if the input is ever undefined.
So I implemented the .default(val: T) method in the ZodType base class as below (partially simplified). Do you see the problem with that? I didn’t. Neither did anyone who read through the Transformers RFC which I left open for comment for a couple months before starting on implementation.
Basically this implementation doesn’t work at all when you use it on transformers.
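Concretely, stringToNumber.default("3.14") behaves roughly like this under that implementation (a sketch based on the walkthrough below):
const defaultedStringToNumber = z.transformer(
  stringToNumber.optional(),                  // input schema: "5" -> 5, undefined passes through
  stringToNumber,                             // output schema: its Input is string
  (val) => (val !== undefined ? val : "3.14") // but val is number | undefined at this point
);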
Let’s walk through why this fails. The input ("5") is first passed into the transformer input (stringToNumber.optional()). This converts the string "5" to the number 5. This is then passed into the transformation function. But wait: val is now number | undefined, but the transformer function needs to return a string. Otherwise, if we pass 5 into stringToNumber.parse it’ll throw. So we need to convert 5 back to "5". That may seem easy in this toy example but it’s not possible in the general case. Zod can’t know how to magically undo the transformation function.
In practice, the current definition of default in ZodType shouldn’t have even been possible. The only reason the type checker didn’t catch this bug is because there are a regrettable number of anys floating around in Zod. It’s not a simple matter to switch them all to unknowns either; I’ve had to use any in several instances to get type inference and certain generic methods to work properly. I’ve tried multiple times to reduce the number of anys but I’ve never managed to crack it.
It’s possible this is a one-off issue. I could find some other way to implement .default() that doesn’t involve transformers. Unfortunately this isn’t even the only problem in Zod’s implementation.
The .transform method
Initially the only way to define transformers was with z.transformer(A, B, func). Eventually I implemented a utility function you can use like this:
Some users were executing multiple transforms in sequence without changing the actual data type:
To reduce the boilerplate here, it was recommended that I overload the method definition to support this syntax:
If the first argument is a function instead of a Zod schema, Zod should assume that the transformation doesn’t transform the type. In other words, z.string().transform((val) => val.trim()) should be equivalent to z.string().transform(z.string(), (val) => val.trim()). Makes sense.
Consider using this method on a transformer:
What type signature do you expect for transformation_func? Most would probably expect (arg: number) => number. Some would expect (arg: string) => string. Neither of those are right; it’s (arg: number) => string. The transformation function expects an input of number (the Output of stringToNumber) and a return type of string (the Input of stringToNumber). This type signature is a natural consequence of a series of logical design decisions, but the end result is dumb. Intuitively, you should be able to append .transform(val => val) to any schema. Unfortunately, due to how transformers are implemented, that's not always possible.
More complicated examples
The fact that I incorrectly implemented both .transform and .default isn't even the problem. The problem is that transformers make it difficult to write any generic functions on top of Zod (of which .transform and .default are two examples). Others have encountered similar issues: #199 and #213 are more complicated examples of the same thing. Nested transformers in particular are a minefield.
A path forward
When I set out to implement transformers I felt strongly that each transformer should have a well-defined input schema and output schema. This led me to implement transformers as a separate subclass of ZodType (ZodTransformer) in an attempt to make transformers compose nicely with other schemas. This is the root of the issues I’ve laid out above.
Instead I think Zod should adopt a new approach. For the sake of differentiation I’ll use a new term "mods" instead of "transformations". Each Zod schema has a list of post-parse modification functions (analogous to Yup’s transform chain). When a value is passed into .parse, Zod will type check the value, then pass it through the mod chain.
Unlike before, Zod doesn’t validate the data type between each modification. We’re relying on the TypeScript engine to infer the correct type based on the function definitions. In this sense, Zod is behaving just like I intended; it’s acting as a reliable source of type safety that lets you confidently implement the rest of your application logic — including mods. Re-validating the type between each modification is overkill; TypeScript’s type checker already does that.
Each schema will still have an Input (the inferred type of the schema) and an Output (the output type of the last mod in the mod chain). But because we’re avoiding the weird hierarchy of ZodTransformers, everything behaves in a much more intuitive way.
One interesting ramification is that you could interleave mods and refinements. Zod could keep track of the order each mod/refinement was added and execute them all in sequence:
Compatibility
I was using the "mod" terminology above to avoid confusion with the earlier discussion. In reality I would implement the "mod chain" concept using the existing syntax/methods: .default, .transform, etc. In fact I think I could switch over to the "mod" approach without any major API changes.
- A.transform(func): instead of returning a ZodTransformer, this method would simply append func to the "mod chain"
- A.transform(B, func): this would return A.transform(func).refine(val => B.safeParse(val).success)
- z.transformer(A, B, func): this could just return A.transform(func).refine(val => B.safeParse(val).success)
- A.default(defaultValue): this is trickier but still possible. This function would instantiate A.optional().mod(val => typeof val !== "undefined" ? val : defaultValue). Then all the mods of A would be transferred over to the newly created schema
Under the hood things would be working very differently, but most projects could upgrade painlessly unless they explicitly use the ZodTransformer class in some way (which isn’t common).
I would still consider this to be a breaking change of course. If/when I make these changes, I plan to publish them as Zod v3. In this scenario Zod 2 would never leave beta, we’d jump straight to v3.
This transformers issue has caused me a lot of grief and headaches but I’m confident in the new direction; in fact I already have most of it working. I want to put this out for feedback from the community. The issues I’m describing are pretty subtle and not many people have run into them, but I believe the current implementation is untenable.