V8 fatal errors #2902
I do believe this is a Node.js error. We did encounter nodejs/node#38418 before, for which we have this workaround: #2778. Your stack trace looks different, though. Is it triggered when you interrupt the watcher, or does it happen on its own?
Some of the errors are from terminating the watcher and some are not. I just got
That does hint at this being a Node.js issue, unfortunately.
I spoke to a friend who works on V8 & Node, and they said this looks like a bug on the V8 side.
They also provided the following diff for Node, which may fix the issue when Node is compiled from source (I haven't tested it yet):

```diff
diff --git a/deps/v8/src/objects/backing-store.h b/deps/v8/src/objects/backing-store.h
index 5ba95a2ba8..6b3b15f2bf 100644
--- a/deps/v8/src/objects/backing-store.h
+++ b/deps/v8/src/objects/backing-store.h
@@ -216,10 +216,11 @@ class V8_EXPORT_PRIVATE BackingStore : public BackingStoreBase {
   bool holds_shared_ptr_to_allocator_ : 1;
   bool free_on_destruct_ : 1;
   bool has_guard_regions_ : 1;
-  bool globally_registered_ : 1;
   bool custom_deleter_ : 1;
   bool empty_deleter_ : 1;
+  std::atomic<bool> globally_registered_;
+

   // Accessors for type-specific data.
   v8::ArrayBuffer::Allocator* get_v8_api_array_buffer_allocator();
   SharedWasmMemoryData* get_shared_wasm_memory_data();
diff --git a/src/node_file.cc b/src/node_file.cc
index c7c73669a1..97eaf9863c 100644
--- a/src/node_file.cc
+++ b/src/node_file.cc
@@ -295,7 +295,7 @@ void FileHandle::CloseReq::Resolve() {
   InternalCallbackScope callback_scope(this);
   Local<Promise> promise = promise_.Get(isolate);
   Local<Promise::Resolver> resolver = promise.As<Promise::Resolver>();
-  resolver->Resolve(env()->context(), Undefined(isolate)).Check();
+  USE(resolver->Resolve(env()->context(), Undefined(isolate)));
 }

 void FileHandle::CloseReq::Reject(Local<Value> reason) {
@@ -305,7 +305,7 @@ void FileHandle::CloseReq::Reject(Local<Value> reason) {
   InternalCallbackScope callback_scope(this);
   Local<Promise> promise = promise_.Get(isolate);
   Local<Promise::Resolver> resolver = promise.As<Promise::Resolver>();
-  resolver->Reject(env()->context(), reason).Check();
+  USE(resolver->Reject(env()->context(), reason));
 }

 FileHandle* FileHandle::CloseReq::file_handle() {
```
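This isn't a confirmed reproduction, but based on the discussion above, a rough stress sketch of the suspected race (worker threads being terminated mid file I/O, roughly what interrupting the watcher does) could look like the following. Everything here is hypothetical; save as e.g. `repro.mjs` and run with plain `node repro.mjs`:

```js
// repro.mjs - hypothetical stress test for the suspected race between
// FileHandle cleanup and worker-thread termination. Not a confirmed repro.
import { Worker, isMainThread } from 'node:worker_threads';
import { open } from 'node:fs/promises';

if (isMainThread) {
  for (let i = 0; i < 16; i++) {
    const worker = new Worker(new URL(import.meta.url));
    // Terminate shortly after startup, hopefully mid-I/O, to mimic
    // interrupting the watcher with ctrl + c.
    setTimeout(() => worker.terminate(), 50);
  }
} else {
  // Keep FileHandles opening and closing so termination can land between
  // the close request and its promise resolution (see the diff above).
  for (;;) {
    const handle = await open(new URL(import.meta.url), 'r');
    await handle.close();
  }
}
```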
We're also seeing this with AVA 4.2.0 on Node 16.15.0. I believe nodejs/node#32463 might actually be the underlying issue: the stack traces from the original report look more similar to it than to the original worker-thread issue.
I'm also hitting this on AVA 4.3.0 with Node 16.15.0. I don't know that AVA uses any native modules of its own, though, so I suspect posting this here is barking up the wrong tree. But I'll document it here anyway for posterity's sake, and since the Node.js issue mentioned in @schmod's comment is already closed. It happens intermittently, and seems to happen more often the more test files I include in my set. If I find any more identifiable things that increase the failure rate, I'll post them here. Switching worker threads off in my config seems to avoid it. My stack trace:
@jdharvey-ibm it does look like a Node.js worker thread issue 😞
@novemberborn No worries! I'm making do with that setting off in my config. Just curious: will the child process functionality continue to be supported going forward? I see that worker threads are the current default. I ask because I'm making "the big switch" in our repo away from Jest and to AVA, but for the time being I'll be dependent on the non-worker-thread flow because of this V8 issue. I just want to make sure I'm not headed down a dead-end road. I assume worker threads are the default because they give a performance boost or something, but hopefully the child-process-based approach will still be supported 😁
@jdharvey-ibm there are a variety of scenarios that don't work quite well using worker threads, so yes, I think we'll have child process support for a long time still. Certain advanced features (like shared workers) won't be available in child processes, since they're a much more natural fit for worker threads.
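For reference, the setting being discussed is AVA 4's `workerThreads` option. A minimal config that opts back into child processes looks like this (the `--no-worker-threads` CLI flag has the same effect):

```js
// ava.config.js
// Run test files in child processes instead of worker threads,
// working around the V8 fatal errors described in this issue.
export default {
  workerThreads: false,
};
```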
Facing the same issue on my Windows machine with Node 16.16.0. Fortunately, everything is fine in our pipeline and with Node 18. A new ticket has been opened for this issue: nodejs/node#43617
This has been fixed in Node 16.17.0 🎉
So far so good on my side as well; really happy about that. I removed the `workerThreads: false` workaround from my config.
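If you want to double-check that your environment has picked up the fix, a quick guard like the following could help. This is a hypothetical helper, not part of AVA, and it only checks the 16.x line:

```js
// check-node.mjs - warn if the running Node.js predates the 16.17.0 fix.
// Hypothetical helper, not part of AVA; only the 16.x line is checked.
const [major, minor] = process.versions.node.split('.').map(Number);

if (major === 16 && minor < 17) {
  console.warn(
    `Node.js ${process.versions.node} may still hit the V8 fatal errors; ` +
      'upgrade to 16.17.0 or later.'
  );
}
```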
Sorry in advance for the low quality of this report; I'm a bit busy currently and don't have the time to make a more complete one. I'll try to come back with an actual reproduction + better diagnostics.
Versions
- AVA version: 4.0.0-rc.1
- OS: Darwin Kernel Version 21.1.0: Wed Oct 13 17:33:23 PDT 2021; root:xnu-8019.41.5~1/RELEASE_X86_64
Description
These started happening recently, I think from the latest release of Node v16? I upgraded a day or so ago. The changelog for that version doesn't have anything in it that seems like it'd be causing issues, though; maybe the Node upgrade was totally unrelated.
I frequently get these errors while in `--watch` mode, although they also occur in regular test execution (usually from `ctrl + c`, I think). I remember the stack traces sometimes pointing to `worker_threads`; might be unrelated though.
Reproduction
I still need to make a proper reproduction.
I encountered these errors while working on this repo: https://github.com/jonahsnider/aoc-2021.
If you really want to try reproducing this right now:
1. `yarn`
2. `yarn test --watch`
3. `ctrl + c`-ing.

Errors
1. Not sure how I ran this one.
2. From `ctrl + c` in `--watch`.
3. Not sure how I ran this one. Didn't `ctrl + c` from what I can tell.
4. Not using `--watch`; did not press `ctrl + c`.