waitUntil() for FetchEvent? #584


Closed
slightlyoff opened this issue Dec 10, 2014 · 16 comments

Comments

@slightlyoff
Contributor

AKA: should FetchEvent be an ExtendableEvent?

It's possible today to write code that fetches from the network, adds to a cache, and then matches from the cache in order to feed a document. This seems a bit weird...why invoke the Cache matching mechanism for a Response you've already got a handle to and can simply .clone()?

The answer, today, is the lifetime concern: you want the respondWith promise to extend beyond the point where the Cache knows about the new entry. This is a bit awkward. Mightn't it be better to give FetchEvent a waitUntil()?
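A minimal sketch of the workaround described above: with no waitUntil(), the promise passed to respondWith() is stretched to cover the cache write, and the Response is clone()d rather than re-matched from the cache. The stubs below stand in for the service-worker globals so the shape is runnable outside a browser; they are illustrative, not the real Cache API.

```javascript
// Stand-ins for the service-worker globals (hypothetical, not the spec API).
const cacheStore = new Map();
const caches = {
  async open(name) {
    return {
      put: async (req, res) => { cacheStore.set(req, res); },
      match: async (req) => cacheStore.get(req),
    };
  },
};
const fetchFromNetwork = async (request) => ({
  body: `response for ${request}`,
  clone() { return { ...this }; },  // a Response body is one-shot; clone before caching
});

// The chain that would be passed to event.respondWith(...): the cache write
// sits inside the chain purely to keep the worker alive until it finishes.
async function respondWithChain(request) {
  const response = await fetchFromNetwork(request);
  const cache = await caches.open('v1');
  await cache.put(request, response.clone());  // respondWith() is held up by this
  return response;  // ...even though we already hold the Response
}
```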

@wanderview
Member

Is this a more realistic use case for waitUntil()?

http://jakearchibald.com/2014/offline-cookbook/#stale-while-revalidate

A cache.put() call that completes after the respondWith() promise has been resolved with an in-progress Response.
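A hedged sketch of the stale-while-revalidate shape referenced above, written against the proposed FetchEvent.waitUntil(). The stubs replace the service-worker globals so the control flow is visible outside a browser; names like networkFetch are illustrative, not spec text.

```javascript
// Stand-ins for the Cache API and network fetch (hypothetical stubs).
const store = new Map();
const cache = {
  match: async (req) => store.get(req),
  put: async (req, res) => { store.set(req, res); },
};
const networkFetch = async (req) => `fresh:${req}`;

// Respond with the (possibly stale) cached entry right away; waitUntil()
// keeps the worker alive until the revalidated response lands in the cache.
async function handle(event) {
  const cached = await cache.match(event.request);
  const revalidate = networkFetch(event.request)
    .then(res => cache.put(event.request, res).then(() => res));
  event.waitUntil(revalidate);           // the put() outlives respondWith()
  event.respondWith(cached || revalidate);
}
```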

@slightlyoff
Contributor Author

Yeah, you're right, that is more realistic. Analytics pings are another area where this might be useful.

@wanderview
Member

The question of how long to hold the worker alive while network requests are outstanding came up in our internal discussion again today. It would be nice to move forward on this or define some other life-cycle guidelines here.

I believe @annevk would like a fully defined life cycle model. While that would be nice, I personally think we can make progress by defining something like waitUntil().

@slightlyoff
Contributor Author

I'm not sure those are in opposition. waitUntil is the signal to the UA that we're not at an interruptible phase in the lifecycle, and so it can help us clarify what is allowed to happen when.

@wanderview
Member

I agree. I was trying to say I don't want to wait until we have a fully defined life cycle model. I'd like to clarify this particular use case sooner.

@slightlyoff
Contributor Author

Ah, right. Good call.

@annevk
Member

annevk commented Apr 5, 2015

@jungkees @jakearchibald?

@slightlyoff
Contributor Author

Interesting question: do you need to let it run at least as long as the TCP timeout, which is 60 seconds on modern Linux? I'd suggest that as a reasonable starting point.

Thoughts? @jakearchibald? @KenjiBaheux? @coonsta? @kinu?

@jungkees
Collaborator

jungkees commented Apr 7, 2015

@wanderview In your comment above, did you mean cache.put() in the example should have been called within e.waitUntil()?

@jakearchibald What patterns can devs be expected to use with FetchEvent.waitUntil()? I'm having a hard time working out the code patterns where e.respondWith(r) and e.waitUntil(p) are used together.

@kinu
Contributor

kinu commented Apr 7, 2015

I agree that there are probably situations where one wants to explicitly extend the lifetime of an event (and hence the ServiceWorker) up to a certain point after it returns a response, given that in general we want a ServiceWorker to return a response as early as possible for performance reasons.

Re: the TCP timeout: it's not clear we need to tie the lifetime of the SW to the completion of async network/cache operations. It seems we could say that an async network/cache operation triggered by a ServiceWorker will NOT be cancelled even after the ServiceWorker is killed (by the UA), just as we don't cancel Beacon API requests on page unload. (Currently the behavior of async operations upon SW termination doesn't seem to be really spec'ed?)

@wanderview
Member

@wanderview In your comment above, did you mean cache.put() in the example should have been called within e.waitUntil()?

Yes. I meant if we had a general waitUntil(), it could be used to keep the worker alive until cache.put() completes, even though respondWith() has already been called.

As a side note, I am changing the Gecko Cache implementation to keep the worker alive as long as there are Cache operations in flight. This is partly due to IPC implementation details, but also partly due to the lack of waitUntil().
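The semantics described above can be sketched in plain JavaScript. This is a toy model, not the spec's event classes: waitUntil() collects promises that a hypothetical UA awaits before tearing the worker down, so a cache write started after respondWith() still completes.

```javascript
// Toy model of the proposed lifetime semantics (illustrative names only).
class ExtendableEvent {
  constructor() { this._extensions = []; }
  waitUntil(promise) { this._extensions.push(promise); }
  // Stands in for the UA: the worker stays alive until these settle.
  extensionsSettled() { return Promise.all(this._extensions); }
}

class FetchEvent extends ExtendableEvent {
  constructor() {
    super();
    this.response = new Promise(resolve => { this._respond = resolve; });
  }
  respondWith(promise) { Promise.resolve(promise).then(this._respond); }
}

// Respond immediately, but keep the "worker" alive for a follow-up task
// (standing in for a cache.put()) via waitUntil().
async function demo() {
  const event = new FetchEvent();
  let cacheWritten = false;

  event.respondWith('hello');                  // the document is unblocked here
  event.waitUntil(
    new Promise(r => setTimeout(r, 10)).then(() => { cacheWritten = true; })
  );

  const response = await event.response;
  const writtenAtRespondTime = cacheWritten;   // false: the write is still pending
  await event.extensionsSettled();             // the "UA" waits it out
  return { response, writtenAtRespondTime, cacheWritten };
}
```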

@jungkees
Collaborator

jungkees commented Apr 8, 2015

but also partly due to the lack of waitUntil().

I believe the discussion above has settled on adding FetchEvent.waitUntil(). I'll work on it.

@jungkees jungkees added this to the Version 1 milestone Apr 8, 2015
@jungkees jungkees self-assigned this Apr 8, 2015
@jungkees
Collaborator

jungkees commented Apr 8, 2015

Re: the TCP timeout: it's not clear we need to tie the lifetime of the SW to the completion of async network/cache operations.

Yeah, so it sounds like it's an optimization point in the implementation rather than a constraint in the spec.

It seems we could say that an async network/cache operation triggered by a ServiceWorker will NOT be cancelled even after the ServiceWorker is killed (by the UA), just as we don't cancel Beacon API requests on page unload. (Currently the behavior of async operations upon SW termination doesn't seem to be really spec'ed?)

Yes, I think I need to add more text to make this clear. IMO, an ongoing fetch can simply be dropped, as there's no context left to process or relay the response. And for cache operations, responses already committed to the request-to-response map should probably be retained unless the operation fails. Thoughts?

@jungkees
Collaborator

jungkees commented Apr 9, 2015

ff93a46 made the FetchEvent an ExtendableEvent.

@jungkees
Collaborator

Opened #679 for the default lifetime limit issue.

Closing.
