Question: Does undici.fetch support RFC 7234? #1146
Comments
Not at this point. I think it would be great to support it, but I would keep it optional, as it's unclear where to store the cache.
|
For prior art, see got's cache support: https://github.com/sindresorhus/got/blob/HEAD/documentation/cache.md

```js
const storageAdapter = new Map();
await got('https://sindresorhus.com', {cache: storageAdapter});
```

Maybe something like this:

```js
const fetch2 = undici.sudoCreateFetch({
  // all super-user options, outside whatwg fetch
  cache: storageAdapter
})
```
|
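To make the storage-adapter idea concrete, here is a minimal sketch of a Map-backed adapter with the async `get`/`set` shape this thread converges on, plus TTL-based expiry. All names here are illustrative, not part of undici's actual API:

```typescript
// Hypothetical sketch: a Map-backed cache storage adapter. Entries carry
// an absolute expiry time; reads evict expired entries and report a miss.
interface CacheEntry {
  value: string
  expiresAt: number // epoch milliseconds
}

class MapStorageAdapter {
  private store = new Map<string, CacheEntry>()

  async get (key: string): Promise<string | undefined> {
    const entry = this.store.get(key)
    if (!entry) return undefined
    if (Date.now() >= entry.expiresAt) {
      this.store.delete(key) // expired: evict and treat as a miss
      return undefined
    }
    return entry.value
  }

  async set (key: string, value: string, ttlMs: number): Promise<void> {
    this.store.set(key, { value, expiresAt: Date.now() + ttlMs })
  }
}
```

An adapter like this could back either a built-in cache or a user-supplied one, since the async interface hides whether storage is in-memory, on disk, or remote.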
I think if any caching layer is implemented, it should support an asynchronous API; that makes more sense to me. |
cc @jasnell |
It would make sense for any caching layer to be implemented as a plugin system. The interceptor receives the Request Opts, and the DispatcherHandler, and simply calls the events on the DispatcherHandler directly to fulfill the request. |
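The interceptor shape described above can be sketched as follows. The `Handler` and `Dispatch` types here are simplified stand-ins for undici's real types, and the logging interceptor is a toy example of the plugin idea:

```typescript
// Illustrative sketch: an interceptor wraps a dispatch function and may
// either forward the request or drive the handler's events itself.
type Handler = {
  onHeaders (status: number, headers: string[]): void
  onData (chunk: string): void
  onComplete (): void
}
type Dispatch = (opts: { path: string }, handler: Handler) => boolean

function createLoggingInterceptor (log: string[]) {
  return (dispatch: Dispatch): Dispatch => (opts, handler) => {
    log.push(`dispatching ${opts.path}`)
    return dispatch(opts, handler) // forward to the inner dispatcher
  }
}

// A fake inner dispatch that fulfills every request directly,
// the way a cache hit would: by calling the handler's events.
const fakeDispatch: Dispatch = (opts, handler) => {
  handler.onHeaders(200, ['content-type', 'text/plain'])
  handler.onData('hello')
  handler.onComplete()
  return true
}
```

The key property for a cache plugin is the second path: nothing forces an interceptor to call the inner `dispatch` at all, so a cache hit can be served purely by firing handler events.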
After #1338 is checked in, I've tried to integrate the npm package with a custom dispatcher/handler to implement caching. Currently, I think we still have a gap to implement caching gracefully. I want to hook the cache-checking operation right before the dispatch (pseudo code):

```ts
function createCacheInterceptor(cache: CacheStorage) {
  return (dispatch) => (opts, handler) => {
    // check cache
    const cachedRequest = cache.get(serializeOptions(opts)) // we cannot have a promise here... but normally a cache get is an async op
    if (!cachedRequest || cachedRequest.isExpire()) {
      // no cache or expired cache, we should dispatch the request
      return dispatch(opts, new CacheHandler(handler, cache))
    }
    // otherwise, we should mock the request result to the handler
    handler.onHeaders(cachedRequest.getHeader())
    handler.onData(cachedRequest.getData())
    handler.onComplete(cachedRequest.getTrailer())
    return false
  }
}
```

In the current design, the cache storage interface is:

```ts
interface CacheStorage {
  get(key: string): Promise<any>
  set(key: string, val: any): Promise<void>
}
```

I wonder if there is any chance to make the `dispatch` flow asynchronous.
|
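The `CacheHandler` referenced in the pseudocode above is not shown in the thread. A minimal sketch of what such a tee-ing handler wrapper could look like (all names hypothetical, with simplified stand-in types rather than undici's real ones):

```typescript
// Hypothetical CacheHandler sketch: forwards every event to the wrapped
// handler while buffering the response so it can be written to the cache
// once the response completes.
type Handler = {
  onHeaders (status: number, headers: string[]): void
  onData (chunk: string): void
  onComplete (): void
}

type StoredResponse = { status: number, headers: string[], body: string }

class CacheHandler implements Handler {
  private status = 0
  private headers: string[] = []
  private chunks: string[] = []

  constructor (
    private inner: Handler,
    private store: (entry: StoredResponse) => void
  ) {}

  onHeaders (status: number, headers: string[]) {
    this.status = status
    this.headers = headers
    this.inner.onHeaders(status, headers)
  }

  onData (chunk: string) {
    this.chunks.push(chunk) // buffer while streaming through
    this.inner.onData(chunk)
  }

  onComplete () {
    // response fully received: persist it, then complete downstream
    this.store({ status: this.status, headers: this.headers, body: this.chunks.join('') })
    this.inner.onComplete()
  }
}
```

Note that a production version would have to respect `Cache-Control` directives before storing, and handle `onError` by discarding the buffered entry.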
The response type of `dispatch` is correct: no error or promises should be propagated. |
Yes, that's what I mean. Currently, we can only have a `boolean` return type for `dispatch`. It's not possible to have async cache read ops in the `dispatch` function unless we support `Promise<boolean>` for `dispatch`. |
That isn't true: `dispatch` is async, it just isn't async over a `Promise`. `dispatch` uses old-school callbacks.
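The point that `dispatch` is "async over callbacks" can be made concrete by bridging a dispatch-style API into a `Promise`. This is a toy sketch with simplified stand-in types, not undici's actual API:

```typescript
// Toy sketch: a callback-based dispatch is still asynchronous; the
// asynchrony lives in the handler events rather than a returned Promise.
// Bridging one into the other shows the equivalence.
type Handler = {
  onData (chunk: string): void
  onComplete (): void
  onError (err: Error): void
}
type Dispatch = (opts: { path: string }, handler: Handler) => boolean

function dispatchAsPromise (dispatch: Dispatch, opts: { path: string }): Promise<string> {
  return new Promise((resolve, reject) => {
    const chunks: string[] = []
    dispatch(opts, {
      onData: (c) => { chunks.push(c) },
      onComplete: () => resolve(chunks.join('')), // body complete
      onError: (err) => reject(err)
    })
  })
}
```

The reverse direction is the hard one, and is exactly the gap discussed here: a Promise-based cache lookup cannot feed its result back into `dispatch`'s synchronous `boolean` return value.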
|
I agree that `dispatch` is async with old-school callbacks, but the return value of `dispatch` (the boolean) is used to determine if the dispatcher needs to drain, and we can't compute that after an async cache lookup:

```ts
function createCacheInterceptor(cache: CacheStorage) {
  return (dispatch) => (opts, handler) => {
    cache.getCache(serializeOptions(opts)).then((cachedRequest) => {
      if (!cachedRequest || cachedRequest.isExpire()) {
        dispatch(opts, handler) // ignore drain status...
      } else {
        handler.onHeaders(cachedRequest.getHeader())
        handler.onData(cachedRequest.getData())
        handler.onComplete(cachedRequest.getTrailer())
      }
    }, (error) => {
      // handle error
    })
    // return false anyway... this will break Pool and Agent, which rely on the return value
    return false
  }
}
```

Suppose we instead wrap a customized `Dispatcher`:

```ts
class CustomDispatcher extends Dispatcher {
  private cache: CacheStorage
  private dispatcher: Dispatcher // wrap another dispatcher

  dispatch(opts, handler) {
    this.cache.getCache(serializeOptions(opts)).then((cachedRequest) => {
      if (!cachedRequest || cachedRequest.isExpire()) {
        this.dispatcher.dispatch(opts, handler) // ignore drain status...
      } else {
        handler.onHeaders(cachedRequest.getHeader())
        handler.onData(cachedRequest.getData())
        handler.onComplete(cachedRequest.getTrailer())
      }
    }, (error) => {
      // handle error
    })
    const needDrain = this.dispatcher[kNeedDrainClient]
    if (needDrain !== undefined) {
      // see Client implementation
      return needDrain < 2
    }
    const needDrainP = this.dispatcher[kNeedDrainPool] // the Pool needDrain symbol is different from the Client one (core/symbols.js)
    if (needDrainP !== undefined) {
      // see PoolBase implementation
      return needDrainP
    }
    // returning false unconditionally would break Pool and Agent, which rely on the return value
    return false
  }
}
```

Still, this solution is not ideal... it relies on undici's internal `needDrain` symbols. |
I don't think you'll need to return anything but `false` in the custom dispatcher:

```ts
class CustomDispatcher extends Dispatcher {
  private cache: CacheStorage
  private dispatcher: Dispatcher // wrap another dispatcher

  dispatch(opts, handler) {
    this.cache.getCache(serializeOptions(opts)).then((cachedRequest) => {
      if (!cachedRequest || cachedRequest.isExpire()) {
        this.dispatcher.dispatch(opts, handler) // ignore drain status...
      } else {
        handler.onHeaders(cachedRequest.getHeader())
        handler.onData(cachedRequest.getData())
        handler.onComplete(cachedRequest.getTrailer())
      }
    }, (error) => {
      // handle error
    })
    return false
  }
}
```

In order for this to work properly, the wrapped dispatcher must be a full […]. FWIW, I think this shows a significant issue in the interceptor design. cc @ronag @kibertoad |
@mcollina Should we mark custom dispatchers as experimental in docs prior to release? |
I think we should mark custom interceptors as experimental prior to release. Custom dispatchers have been implemented for a while, and I don't think we can break them in any form. |
Implements bare-bones HTTP caching as per RFC 9111. Closes nodejs#3231, nodejs#2760, nodejs#2256, nodejs#1146. Signed-off-by: flakey5 <73616808+flakey5@users.noreply.github.com> Co-authored-by: Carlos Fuentes <me@metcoder.dev> Co-authored-by: Robert Nagy <ronagy@icloud.com> Co-authored-by: Isak Törnros <isak.tornros@hotmail.com>
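At the core of the RFC 9111 caching referenced above is a freshness calculation: a stored response may be reused while its current age is below the freshness lifetime the origin granted. A stripped-down sketch of that check, handling only the `max-age` directive (a real cache must also consider `Expires`, heuristic freshness, `Vary`, and revalidation):

```typescript
// Minimal sketch of an RFC 9111-style freshness check. Extracts max-age
// from a Cache-Control header value and compares it to the stored
// response's current age.
function parseMaxAge (cacheControl: string | undefined): number | undefined {
  if (!cacheControl) return undefined
  const m = /(?:^|,)\s*max-age=(\d+)/.exec(cacheControl)
  return m ? Number(m[1]) : undefined
}

function isFresh (storedAtMs: number, cacheControl: string | undefined, nowMs: number): boolean {
  const maxAge = parseMaxAge(cacheControl)
  if (maxAge === undefined) return false // no explicit lifetime: treat as stale
  const ageSeconds = (nowMs - storedAtMs) / 1000
  return ageSeconds < maxAge
}
```

A stale result doesn't necessarily mean refetching the whole response; RFC 9111 allows revalidation with a conditional request, which is one of the behaviors the "bare-bones" qualifier above leaves out.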
References:
- https://www.npmjs.com/package/http-cache-semantics
- RFC 7234
- RFC 5861