allow service workers to create nested dedicated workers #1529
Comments
Yeah, I think workers are currently linked to a root document, so that needs to be changed to "document or service worker global", or something common between the two.
I think dedicated workers can be nested under shared workers right now?
Would the idea be that the dedicated worker lives as long as the service worker is running?
Correct. Its lifetime would be bound by the lifetime of its parent, just like when creating a dedicated worker in a window or iframe.
FWIW, it would be useful for the new Manifest V3 browser extensions (currently implemented only in Chrome), which are now built on service workers and so have completely lost the ability to create workers in the background script. whatwg/html#411 looks related.
I would love to see this implemented in the service worker spec; I think it would be beneficial for computationally heavy fetch requests.
I created a similar issue over in WHATWG: whatwg/html#8362
A while back in whatwg/html#8362, I made several comments describing real-world use cases for service workers spawning dedicated workers. I'm concisely summarizing them here too, since I figure this space might be more likely to increase implementer interest. An MDN webpage on offline progressive web apps says, "If you have heavy calculations to do, you can offload them from the main thread and do them in the [service] worker, and receive results as soon as they are available." In actuality, whenever a service worker needs to do CPU-expensive work, it must resort to complex coordination with at least one dedicated worker belonging to its origin's browser tab(s), e.g., using Web Locks and some leader-election scheme. (This can get especially complicated when the leader tab gets closed, destroying its dedicated worker, at which point the newly elected leader tab must restore that dedicated worker's state.) It would be considerably simpler for web developers if service workers could "simply" spawn dedicated workers, to which they could offload CPU-intensive tasks. This complexity affects all of the use cases summarized in whatwg/html#8362.
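The coordination described above can be sketched roughly as follows. This is a hypothetical illustration, not code from the thread: the `RUN_IN_WORKER` message type, the relay protocol, and the response shape are all invented, and the page would still need its own script to relay the request to its dedicated worker.

```javascript
// Sketch of today's workaround: a service worker cannot call new Worker(),
// so it forwards CPU-heavy work to a dedicated worker owned by some
// controlled tab, using a MessageChannel for the reply.

// Pure helper: pick any window client as the delegate (testable anywhere).
function pickDelegate(clients) {
  return clients.find((c) => c.type === 'window') ?? null;
}

// The wiring below only runs inside a real ServiceWorkerGlobalScope.
if (typeof ServiceWorkerGlobalScope !== 'undefined' &&
    self instanceof ServiceWorkerGlobalScope) {
  self.addEventListener('fetch', (event) => {
    event.respondWith((async () => {
      const all = await self.clients.matchAll({ type: 'window' });
      const delegate = pickDelegate(all);
      if (!delegate) return fetch(event.request); // no tab open: give up

      const { port1, port2 } = new MessageChannel();
      const reply = new Promise((resolve) => (port1.onmessage = resolve));
      // The page must listen for this message and relay it to its worker.
      delegate.postMessage(
        { type: 'RUN_IN_WORKER', url: event.request.url },
        [port2]
      );
      const { data } = await reply;
      return new Response(data.body, { headers: data.headers });
    })());
  });
}
```

Note the failure mode the comment mentions: if the delegate tab closes mid-request, the reply never arrives and the worker's state is lost, which is exactly the coordination burden the proposal would remove.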
Excellent comment. One more reason service workers need to be able to spawn workers is that service workers cannot do dynamic imports. If they could, that would partially remove the need to spawn a worker, but it would still be best if they could spawn workers.
I am concerned about the complexity of detecting script changes in service workers when dynamic imports and dedicated workers are involved. Currently, service workers have a mechanism to automatically detect and reload when their scripts are updated. However, I believe that the introduction of dynamic imports and dedicated workers may pose a challenge to this mechanism. How can service workers reliably detect changes in scripts when they use dynamic imports or include dedicated workers?
I'm not quite following what your concern is. The main thread and web workers themselves can spawn dedicated workers. Could you please elaborate on what unique challenges service workers present for detecting script changes, and how this relates to them being able to spawn dedicated workers? (As for dynamic imports: again, I only brought them up because I'd like to be able to spawn a worker from the service worker and do dynamic imports there. Allowing dynamic imports in the service worker itself is a separate issue, #1585.)
I think the most practical approach is to require dedicated workers spawned by service workers to be controlled by the service worker that spawned them, including using the routes for that registration. The service worker would be responsible for caching the worker scripts, ideally as part of the install phase, but it doesn't seem worth it to try to add enforcement mechanisms for that like we have for service worker scripts, or to change the update check mechanism. The spec would also need to be clear that the controlled worker clients should not extend the underlying lifetime of the service worker, similar to how ServiceWorker.postMessage from one service worker to another should not extend the lifetime of the recipient beyond the lifetime of the sender.
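The install-phase caching responsibility described above might look like the sketch below, assuming the proposal as stated. The cache name and script paths (`/heavy-work.js`, `/decode-helpers.js`) are invented for illustration; the key idea is that worker-script fetches from the SW's own nested workers would flow back through its fetch handler, so updates ride the SW's normal update cycle.

```javascript
// Hypothetical: a service worker pre-caches the scripts for the dedicated
// workers it intends to spawn, then serves them cache-first.
const WORKER_CACHE = 'worker-scripts-v1';
const WORKER_SCRIPTS = ['/heavy-work.js', '/decode-helpers.js'];

// The wiring below only runs inside a real ServiceWorkerGlobalScope.
if (typeof ServiceWorkerGlobalScope !== 'undefined' &&
    self instanceof ServiceWorkerGlobalScope) {
  self.addEventListener('install', (event) => {
    event.waitUntil(
      caches.open(WORKER_CACHE).then((cache) => cache.addAll(WORKER_SCRIPTS))
    );
  });

  self.addEventListener('fetch', (event) => {
    // Requests whose destination is 'worker' are worker-script loads;
    // serving them cache-first keeps nested workers consistent with the
    // version cached at install time.
    if (event.request.destination === 'worker') {
      event.respondWith(
        caches.match(event.request).then((hit) => hit ?? fetch(event.request))
      );
    }
  });
}
```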
You make a good point about the lifetime: a dedicated worker spawned by a service worker probably should be tied to the SW's lifetime, which I understand to be somewhat unpredictable/ephemeral. At least as far as my use case goes, I think that if Android Chromium could ever get around to adding support for shared workers, then many issues like this would be largely irrelevant. I, and many others, are doing a lot of convoluted dances between different workers/threads, with leader election and various ways to message between them, all because shared workers are not a viable option when they are missing for 50% of web visitors. If we had shared workers, then instead of a SW needing to spawn a dedicated (or shared) worker, it could just pass messages to one that was already created by one of the main threads (which might not even be active anymore). Shared workers really are the holy grail of all of this; the rest is largely just terrible "polyfill" kludges. (Dedicated workers could still be used effectively, though, each tied to its main thread.)
What about letting a service worker connect to an already running SharedWorker's port, if that's trivial to implement? It'll be kinda weird though that
The clients API already exposes and identifies shared worker clients.
Again, the problem with shared workers is that 50% of devices don't support them. Perhaps not an issue for some applications, but it's a non-starter for most: https://caniuse.com/mdn-api_sharedworker. Hence we need various convoluted dances (dedicated + service workers, leader election with Web Locks, a BroadcastChannel to communicate, etc.) to create our own "shared worker polyfill". Hence issues like this one, about being able to spawn workers from service workers.
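The "shared worker polyfill" dance mentioned above can be sketched roughly like this. Everything here is illustrative, not from the thread: the lock name, channel name, and `/heavy-work.js` script are invented, and real implementations need much more (state handoff when the leader dies, request IDs, reply routing).

```javascript
// Every tab races for the same Web Lock; whichever tab holds it becomes the
// leader and owns the single dedicated worker. Other tabs send it work over
// a BroadcastChannel.
const LOCK_NAME = 'app-leader';

function runAsLeaderWhenElected() {
  // navigator.locks.request grants the lock to exactly one context at a
  // time; the callback's never-settling promise keeps leadership until the
  // tab closes, which releases the lock and triggers re-election elsewhere.
  return navigator.locks.request(LOCK_NAME, () => {
    const worker = new Worker('/heavy-work.js'); // hypothetical script
    const bus = new BroadcastChannel('work-queue');
    bus.onmessage = (e) => worker.postMessage(e.data);
    worker.onmessage = (e) => bus.postMessage({ result: e.data });
    // When this tab closes, the worker (and all its state) dies with it,
    // and the new leader must somehow rebuild that state.
    return new Promise(() => {});
  });
}

// Only attempt election where the Web Locks API actually exists.
if (typeof navigator !== 'undefined' && 'locks' in navigator) {
  runAsLeaderWhenElected();
}
```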
Yes, but AFAICT it's unusable and
I'd like to emphasize that service workers spawning dedicated workers is important even with shared workers. This issue covers many use cases that shared workers do not address, particularly those described in #1529 (comment), i.e., fetch handling that requires relatively long-running sync calls. Shared workers wouldn't help with fetches using data from SQLite or other OPFS sync access; they wouldn't help with transparent decoding/decompression of unsupported file formats, or with expensive RegExp processing for URL routing, either. If Chrome Android supported shared workers, that would help many other use cases that currently require complex tab leader election and locking. But shared workers don't address many important use cases in this thread: even with broad shared-worker support, if we want to enable the important fetch-related use cases, we still need dedicated workers spawned by service workers.
@asutherland: I largely agree with the ideas in #1529 (comment). I'm a little confused by what you mean by "enforcement mechanism". Do you mean how the last update check time can get updated?
I can see it being a developer footgun if a change in a JS resource didn't trigger an update in a service worker that used it as a dedicated worker… but it also probably would be a footgun if JS modules didn't use the same module cache as the service worker's statically imported modules.
And that would mean that changes to dedicated-worker resources wouldn't contribute to the service worker's update check, whether or not the dedicated worker is a module. Developers would just have to live with changing a comment in the service worker, or something similar, each time they also change one of its dedicated workers (or its dynamic imports).
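The OPFS point above is worth unpacking: the synchronous `FileSystemSyncAccessHandle` API that SQLite-on-WASM builds rely on is only exposed in dedicated workers, which is why a service worker cannot use it directly and would need to spawn a worker like the hedged sketch below. The file name `db.sqlite3` and the message shape are invented for illustration.

```javascript
// Hypothetical dedicated-worker script that reads a byte range from a file
// in the origin-private file system on request.
const DB_FILE = 'db.sqlite3';

// The wiring below only runs when loaded as a dedicated worker.
if (typeof DedicatedWorkerGlobalScope !== 'undefined' &&
    self instanceof DedicatedWorkerGlobalScope) {
  self.onmessage = async ({ data: { offset, length } }) => {
    const root = await navigator.storage.getDirectory(); // OPFS root
    const file = await root.getFileHandle(DB_FILE, { create: true });
    // createSyncAccessHandle() is only available in dedicated workers.
    const handle = await file.createSyncAccessHandle();
    try {
      const out = new Uint8Array(length);
      const read = handle.read(out, { at: offset }); // synchronous read
      self.postMessage({ bytes: out.subarray(0, read) });
    } finally {
      handle.close();
    }
  };
}
```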
I think we've discussed this before, but I cannot find the issue. It would be nice to allow service workers to create nested dedicated workers via
new Worker()
. This would allow the service worker to perform CPU intensive operations, like custom decoding, without potentially blocking the thread being used to process further FetchEvents. chromeos/static-site-scaffold-modules#40 (comment) is an example of where this would be useful.
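What the issue asks for might look like the sketch below. To be clear, this does not work in any browser today; `/decoder-worker.js`, the `.custom` extension, and the message shape are all invented for the example.

```javascript
// Proposed usage: the service worker itself spawns a dedicated worker and
// hands CPU-heavy decoding to it, keeping the SW thread free for further
// FetchEvents.

// Pure helper: decide which requests need custom decoding (testable anywhere).
function needsCustomDecode(url) {
  return new URL(url, 'https://example.com').pathname.endsWith('.custom');
}

// The wiring below only runs inside a real ServiceWorkerGlobalScope.
if (typeof ServiceWorkerGlobalScope !== 'undefined' &&
    self instanceof ServiceWorkerGlobalScope) {
  const decoder = new Worker('/decoder-worker.js'); // proposed; throws today

  self.addEventListener('fetch', (event) => {
    if (!needsCustomDecode(event.request.url)) return;
    event.respondWith((async () => {
      const raw = await (await fetch(event.request)).arrayBuffer();
      const decoded = await new Promise((resolve) => {
        const { port1, port2 } = new MessageChannel();
        port1.onmessage = (e) => resolve(e.data);
        decoder.postMessage({ raw, replyPort: port2 }, [raw, port2]);
      });
      return new Response(decoded);
    })());
  });
}
```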