`gulp-cache` currently assumes the wrapped task is 1-to-1: it only passes one file to the task at a time, and only expects one file back. This means it doesn't work with tasks like `gulp-concat`, `gulp-closure-compiler`, etc. which need to operate on multiple files at once.
I’ve got a PR in the works, but thought I’d preface it with some discussion.
The implementation in my PR will pass all input files to `TaskProxy()`, which will base the cache key on all of them, pass them all into the proxied task, and cache all files returned by the task. This will handle 1-to-1, 1-to-many, many-to-1 and many-to-many tasks correctly.
However, it’s hard to do this without introducing a breaking change: the customizable `key`, `value` and `success` functions are currently only passed one file at a time, so it’s likely consumers’ implementations will only expect one file. For many-to-many tasks, all three of these will need to accept multiple files.
So, my questions:

1. Is there a reason it wasn’t done like this in the first place?
2. If not, and we agree this change is desirable: is this a good case for a breaking change? (The library’s not at 1.0.0 yet, so it might be hard to communicate this change.)
I could avoid a breaking change by making this new behaviour optional, e.g. requiring `manyToMany: true` to be set in the options argument, but this could be awkward to document… the README would have to say something like this, which isn’t the clearest:
> **`key`**
>
> [Optional] A function to determine the uniqueness of an input file, or set of files, for this task.
>
> If `manyToMany` is `true`, it will be passed an array of files as the first argument; otherwise it will be passed a single file.
>
> Can return a string, or a promise that resolves to a string. Optionally, it can accept a callback parameter for idiomatic Node-style asynchronous operations.
>
> The result of this method is converted to a unique MD5 hash automatically; there’s no need to do this yourself.
>
> Defaults to a concatenation of the current version of `gulp-concat` plus the contents of each input file whose contents is a Buffer.
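For illustration, a consumer’s `key` override under this proposed `manyToMany` mode might look like the sketch below. The option name and the array signature come from the proposal above; nothing here is a released API:

```javascript
// Hypothetical consumer-side `key` override for the proposed
// manyToMany mode: it receives an array of files, not one file.
// gulp-cache would MD5-hash the returned string itself.
function key(files) {
  return files
    .filter((file) => Buffer.isBuffer(file.contents))
    .map((file) => file.contents.toString('utf8'))
    .join('\n');
}

// It would be passed in the options argument, e.g. (not runnable,
// since this option doesn't exist yet):
//   cache(concat('all.js'), { manyToMany: true, key });

const result = key([
  { contents: Buffer.from('var a = 1;') },
  { contents: Buffer.from('var b = 2;') },
]);
// result is the two files' contents joined with a newline
```
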
Thoughts?
OK, so on reflection I realise why you’re doing it file-by-file: for 1-to-1 tasks, each file is cached independently, so if you only change one file it doesn’t have to recalculate them all.
So I’ll proceed with implementing it as a `manyToMany` mode.
Seems useful, thanks for reaching out about it. It seems pretty low risk if you’re able to do a `manyToMany` mode that just branches into new code and doesn’t change much of the existing API.