V8 fatal errors #2902

Closed
jonahsnider opened this issue Dec 4, 2021 · 13 comments

@jonahsnider

jonahsnider commented Dec 4, 2021

Sorry in advance for the low quality of this report; I'm a bit busy at the moment and don't have time to write a more complete one. I'll try to come back with an actual reproduction and better diagnostics.

Versions

AVA version: 4.0.0-rc.1

{
  node: '16.13.1',
  v8: '9.4.146.24-node.14',
  uv: '1.42.0',
  zlib: '1.2.11',
  brotli: '1.0.9',
  ares: '1.18.1',
  modules: '93',
  nghttp2: '1.45.1',
  napi: '8',
  llhttp: '6.0.4',
  openssl: '1.1.1l+quic',
  cldr: '39.0',
  icu: '69.1',
  tz: '2021a',
  unicode: '13.0',
  ngtcp2: '0.1.0-DEV',
  nghttp3: '0.1.0-DEV'
}

Darwin Kernel Version 21.1.0: Wed Oct 13 17:33:23 PDT 2021; root:xnu-8019.41.5~1/RELEASE_X86_64

Description

These started happening recently, I think with the latest release of Node v16; I upgraded a day or so ago.
Changelog for this version.
Nothing in there seems like it would be causing issues, though, so maybe the Node upgrade was totally unrelated.

I frequently get these errors while in --watch mode, although they also occur in regular test execution (usually from ctrl + c I think).

I remember the stack traces sometimes pointing to worker_threads, though that might be unrelated.

Reproduction

I still need to make a proper reproduction.

I encountered these errors while working on this repo: https://github.com/jonahsnider/aoc-2021.

If you really want to try reproducing this right now:

  1. Clone that repo
  2. Run yarn
  3. Run yarn test --watch
  4. Save a few files to trigger retests. Maybe try ctrl + cing.
  5. You should be able to get a fatal error within a few minutes of testing.

Errors

1

Not sure how I triggered this one.

#
# Fatal error in , line 0
# Check failed: result.second.
#
#
#
#FailureMessage Object: 0x7000066e2490
 1: 0x101677492 node::NodePlatform::GetStackTracePrinter()::$_3::__invoke() [/Users/jonah/.fnm/node-versions/v16.13.1/installation/bin/node]
 2: 0x102641b03 V8_Fatal(char const*, ...) [/Users/jonah/.fnm/node-versions/v16.13.1/installation/bin/node]
 3: 0x101a6234e v8::internal::GlobalBackingStoreRegistry::Register(std::__1::shared_ptr<v8::internal::BackingStore>) [/Users/jonah/.fnm/node-versions/v16.13.1/installation/bin/node]
 4: 0x10179af76 v8::ArrayBuffer::GetBackingStore() [/Users/jonah/.fnm/node-versions/v16.13.1/installation/bin/node]
 5: 0x1015c6a55 node::ArrayBufferViewContents<char, 64ul>::Read(v8::Local<v8::ArrayBufferView>) [/Users/jonah/.fnm/node-versions/v16.13.1/installation/bin/node]
 6: 0x1015e756c void node::Buffer::(anonymous namespace)::StringSlice<(node::encoding)3>(v8::FunctionCallbackInfo<v8::Value> const&) [/Users/jonah/.fnm/node-versions/v16.13.1/installation/bin/node]
 7: 0x1017f1239 v8::internal::FunctionCallbackArguments::Call(v8::internal::CallHandlerInfo) [/Users/jonah/.fnm/node-versions/v16.13.1/installation/bin/node]
 8: 0x1017f0d06 v8::internal::MaybeHandle<v8::internal::Object> v8::internal::(anonymous namespace)::HandleApiCallHelper<false>(v8::internal::Isolate*, v8::internal::Handle<v8::internal::HeapObject>, v8::internal::Handle<v8::internal::HeapObject>, v8::internal::Handle<v8::internal::FunctionTemplateInfo>, v8::internal::Handle<v8::internal::Object>, v8::internal::BuiltinArguments) [/Users/jonah/.fnm/node-versions/v16.13.1/installation/bin/node]
 9: 0x1017f047f v8::internal::Builtin_HandleApiCall(int, unsigned long*, v8::internal::Isolate*) [/Users/jonah/.fnm/node-versions/v16.13.1/installation/bin/node]
10: 0x102061399 Builtins_CEntry_Return1_DontSaveFPRegs_ArgvOnStack_BuiltinExit [/Users/jonah/.fnm/node-versions/v16.13.1/installation/bin/node]
11: 0x39a11f34b 

2

From pressing ctrl + c in --watch mode.

FATAL ERROR: v8::FromJust Maybe value is Nothing.
 1: 0x10b081815 node::Abort() (.cold.1) [/Users/jonah/.fnm/node-versions/v16.13.1/installation/bin/node]
 2: 0x109d80aa9 node::Abort() [/Users/jonah/.fnm/node-versions/v16.13.1/installation/bin/node]
 3: 0x109d80c1f node::OnFatalError(char const*, char const*) [/Users/jonah/.fnm/node-versions/v16.13.1/installation/bin/node]
 4: 0x109f03600 v8::V8::FromJustIsNothing() [/Users/jonah/.fnm/node-versions/v16.13.1/installation/bin/node]
 5: 0x109d84054 node::fs::FileHandle::CloseReq::Resolve() [/Users/jonah/.fnm/node-versions/v16.13.1/installation/bin/node]
 6: 0x109d99438 node::fs::FileHandle::ClosePromise()::$_0::__invoke(uv_fs_s*) [/Users/jonah/.fnm/node-versions/v16.13.1/installation/bin/node]
 7: 0x10a748958 uv__work_done [/Users/jonah/.fnm/node-versions/v16.13.1/installation/bin/node]
 8: 0x10a74dadb uv__async_io [/Users/jonah/.fnm/node-versions/v16.13.1/installation/bin/node]
 9: 0x10a76184c uv__io_poll [/Users/jonah/.fnm/node-versions/v16.13.1/installation/bin/node]
10: 0x10a74e061 uv_run [/Users/jonah/.fnm/node-versions/v16.13.1/installation/bin/node]
11: 0x109cb50af node::SpinEventLoop(node::Environment*) [/Users/jonah/.fnm/node-versions/v16.13.1/installation/bin/node]
12: 0x109e27d9e node::worker::Worker::Run() [/Users/jonah/.fnm/node-versions/v16.13.1/installation/bin/node]
13: 0x109e2b792 node::worker::Worker::StartThread(v8::FunctionCallbackInfo<v8::Value> const&)::$_3::__invoke(void*) [/Users/jonah/.fnm/node-versions/v16.13.1/installation/bin/node]
14: 0x7ff807cd7514 _pthread_start [/usr/lib/system/libsystem_pthread.dylib]
15: 0x7ff807cd302f thread_start [/usr/lib/system/libsystem_pthread.dylib]

3

Not sure how I triggered this one.
I didn't press ctrl + c, from what I can tell.

#
# Fatal error in , line 0
# Check failed: result.second.
#
#
#
#FailureMessage Object: 0x7000066e2490
 1: 0x101677492 node::NodePlatform::GetStackTracePrinter()::$_3::__invoke() [/Users/jonah/.fnm/node-versions/v16.13.1/installation/bin/node]
 2: 0x102641b03 V8_Fatal(char const*, ...) [/Users/jonah/.fnm/node-versions/v16.13.1/installation/bin/node]
 3: 0x101a6234e v8::internal::GlobalBackingStoreRegistry::Register(std::__1::shared_ptr<v8::internal::BackingStore>) [/Users/jonah/.fnm/node-versions/v16.13.1/installation/bin/node]
 4: 0x10179af76 v8::ArrayBuffer::GetBackingStore() [/Users/jonah/.fnm/node-versions/v16.13.1/installation/bin/node]
 5: 0x1015c6a55 node::ArrayBufferViewContents<char, 64ul>::Read(v8::Local<v8::ArrayBufferView>) [/Users/jonah/.fnm/node-versions/v16.13.1/installation/bin/node]
 6: 0x1015e756c void node::Buffer::(anonymous namespace)::StringSlice<(node::encoding)3>(v8::FunctionCallbackInfo<v8::Value> const&) [/Users/jonah/.fnm/node-versions/v16.13.1/installation/bin/node]
 7: 0x1017f1239 v8::internal::FunctionCallbackArguments::Call(v8::internal::CallHandlerInfo) [/Users/jonah/.fnm/node-versions/v16.13.1/installation/bin/node]
 8: 0x1017f0d06 v8::internal::MaybeHandle<v8::internal::Object> v8::internal::(anonymous namespace)::HandleApiCallHelper<false>(v8::internal::Isolate*, v8::internal::Handle<v8::internal::HeapObject>, v8::internal::Handle<v8::internal::HeapObject>, v8::internal::Handle<v8::internal::FunctionTemplateInfo>, v8::internal::Handle<v8::internal::Object>, v8::internal::BuiltinArguments) [/Users/jonah/.fnm/node-versions/v16.13.1/installation/bin/node]
 9: 0x1017f047f v8::internal::Builtin_HandleApiCall(int, unsigned long*, v8::internal::Isolate*) [/Users/jonah/.fnm/node-versions/v16.13.1/installation/bin/node]
10: 0x102061399 Builtins_CEntry_Return1_DontSaveFPRegs_ArgvOnStack_BuiltinExit [/Users/jonah/.fnm/node-versions/v16.13.1/installation/bin/node]
11: 0x39a11f34b

4

Not using --watch; did not press ctrl + c.

#
# Fatal error in , line 0
# Check failed: result.second.
#
#
#
#FailureMessage Object: 0x7ff7b3c3e140
 1: 0x10c3f5492 node::NodePlatform::GetStackTracePrinter()::$_3::__invoke() [/Users/jonah/.fnm/node-versions/v16.13.1/installation/bin/node]
 2: 0x10d3bfb03 V8_Fatal(char const*, ...) [/Users/jonah/.fnm/node-versions/v16.13.1/installation/bin/node]
 3: 0x10c7e034e v8::internal::GlobalBackingStoreRegistry::Register(std::__1::shared_ptr<v8::internal::BackingStore>) [/Users/jonah/.fnm/node-versions/v16.13.1/installation/bin/node]
 4: 0x10c518f76 v8::ArrayBuffer::GetBackingStore() [/Users/jonah/.fnm/node-versions/v16.13.1/installation/bin/node]
 5: 0x10c35f71b node::Buffer::Data(v8::Local<v8::Value>) [/Users/jonah/.fnm/node-versions/v16.13.1/installation/bin/node]
 6: 0x10c405314 node::serdes::DeserializerContext::DeserializerContext(node::Environment*, v8::Local<v8::Object>, v8::Local<v8::Value>) [/Users/jonah/.fnm/node-versions/v16.13.1/installation/bin/node]
 7: 0x10c56f239 v8::internal::FunctionCallbackArguments::Call(v8::internal::CallHandlerInfo) [/Users/jonah/.fnm/node-versions/v16.13.1/installation/bin/node]
 8: 0x10c56ea0d v8::internal::MaybeHandle<v8::internal::Object> v8::internal::(anonymous namespace)::HandleApiCallHelper<true>(v8::internal::Isolate*, v8::internal::Handle<v8::internal::HeapObject>, v8::internal::Handle<v8::internal::HeapObject>, v8::internal::Handle<v8::internal::FunctionTemplateInfo>, v8::internal::Handle<v8::internal::Object>, v8::internal::BuiltinArguments) [/Users/jonah/.fnm/node-versions/v16.13.1/installation/bin/node]
 9: 0x10c56e45b v8::internal::Builtin_HandleApiCall(int, unsigned long*, v8::internal::Isolate*) [/Users/jonah/.fnm/node-versions/v16.13.1/installation/bin/node]
10: 0x10cddf399 Builtins_CEntry_Return1_DontSaveFPRegs_ArgvOnStack_BuiltinExit [/Users/jonah/.fnm/node-versions/v16.13.1/installation/bin/node]
11: 0x10cd6f552 Builtins_JSBuiltinsConstructStub [/Users/jonah/.fnm/node-versions/v16.13.1/installation/bin/node]
@novemberborn
Member

I do believe this is a Node.js error. We did encounter nodejs/node#38418 before, for which we have this workaround: #2778

Your stack trace looks different though. Is it triggered when you interrupt the watcher or does it happen on its own?

@jonahsnider
Author

jonahsnider commented Dec 5, 2021

Some of the errors are from terminating the watcher and some are not. I just got #4 yesterday and that was just from running ava and not touching my keyboard at all.

@novemberborn
Copy link
Member

That does hint at this being a Node.js issue unfortunately.

@jonahsnider
Copy link
Author

I spoke to a friend who works on V8 & Node and they said this looks like it's a bug on the V8 side:

Looks to me like you caught some bugs. You should run [Node] with --trace-backing-store.
I think it's trying to insert two buffers with the same address, like a data race somehow.

I think the two bugs are:

  1. CloseReq::{Resolve,Reject} assert success which could be false if the thread is being terminated
  2. BackingStoreGlobalRegistry::Register does not have an atomic fence around checking globally_registered_

They also provided the following diff for Node which, when Node is compiled from source, may fix the issue (I haven't tested it yet):

diff --git a/deps/v8/src/objects/backing-store.h b/deps/v8/src/objects/backing-store.h
index 5ba95a2ba8..6b3b15f2bf 100644
--- a/deps/v8/src/objects/backing-store.h
+++ b/deps/v8/src/objects/backing-store.h
@@ -216,10 +216,11 @@ class V8_EXPORT_PRIVATE BackingStore : public BackingStoreBase {
   bool holds_shared_ptr_to_allocator_ : 1;
   bool free_on_destruct_ : 1;
   bool has_guard_regions_ : 1;
-  bool globally_registered_ : 1;
   bool custom_deleter_ : 1;
   bool empty_deleter_ : 1;
 
+  std::atomic<bool> globally_registered_;
+
   // Accessors for type-specific data.
   v8::ArrayBuffer::Allocator* get_v8_api_array_buffer_allocator();
   SharedWasmMemoryData* get_shared_wasm_memory_data();
diff --git a/src/node_file.cc b/src/node_file.cc
index c7c73669a1..97eaf9863c 100644
--- a/src/node_file.cc
+++ b/src/node_file.cc
@@ -295,7 +295,7 @@ void FileHandle::CloseReq::Resolve() {
   InternalCallbackScope callback_scope(this);
   Local<Promise> promise = promise_.Get(isolate);
   Local<Promise::Resolver> resolver = promise.As<Promise::Resolver>();
-  resolver->Resolve(env()->context(), Undefined(isolate)).Check();
+  USE(resolver->Resolve(env()->context(), Undefined(isolate)));
 }
 
 void FileHandle::CloseReq::Reject(Local<Value> reason) {
@@ -305,7 +305,7 @@ void FileHandle::CloseReq::Reject(Local<Value> reason) {
   InternalCallbackScope callback_scope(this);
   Local<Promise> promise = promise_.Get(isolate);
   Local<Promise::Resolver> resolver = promise.As<Promise::Resolver>();
-  resolver->Reject(env()->context(), reason).Check();
+  USE(resolver->Reject(env()->context(), reason));
 }
 
 FileHandle* FileHandle::CloseReq::file_handle() {
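
To make the second bug more concrete: the problem is a check-then-set on a shared flag that is not a single atomic operation. Here is a rough JavaScript analogy (illustrative only, not V8's actual code; the helper name tryRegister is made up). Atomics.compareExchange performs the check and the write in one atomic step on shared memory, which is what moving globally_registered_ into std::atomic in the diff is after:

```javascript
// Analogy for the globally_registered_ race, in JavaScript terms.
// A plain `if (!flag) { flag = true; register(); }` is a check followed
// by a separate write; two threads can both pass the check before either
// writes. Atomics.compareExchange does check-and-set as one atomic step.
const flag = new Int32Array(new SharedArrayBuffer(4)); // flag[0] starts at 0

function tryRegister() {
  // compareExchange(array, index, expected, replacement) returns the old
  // value; only the caller that atomically flips 0 -> 1 gets back 0.
  return Atomics.compareExchange(flag, 0, 0, 1) === 0;
}

console.log(tryRegister()); // true  (this caller "registers")
console.log(tryRegister()); // false (already registered)
```

With a plain boolean, two worker threads could both observe the flag as unset and both proceed to register, which would match the "trying to insert two buffers with the same address" symptom quoted above.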

@schmod

schmod commented May 26, 2022

We're also seeing this with Ava 4.2.0 on Node 16.15.0

I believe this might actually be the underlying issue, since the stack traces in the original report here look more similar to it than to the original worker-thread issue: nodejs/node#32463

@jharvey10

I'm also hitting this on AVA 4.3.0 with Node 16.15.0. I don't know that AVA uses any native modules of its own, though, so I suspect posting this here is barking up the wrong tree. But I'll document it anyway for posterity's sake, and since the Node.js issue mentioned in @schmod's comment is already closed.

It happens intermittently and seems to happen more often the more test files I include in my set. If I find any more identifiable things that increase the failure rate, I'll post them here.

Switching workerThreads to false seems to avoid the issue for me across multiple back-to-back test runs.
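
For anyone who wants to apply the same workaround: in AVA 4 this can be set via the workerThreads option in AVA's config (or the --no-worker-threads CLI flag). A minimal sketch of an ava.config.js, assuming the config lives in the project root:

```javascript
// ava.config.js
// Run test files in child processes instead of worker threads,
// sidestepping the V8 crashes discussed in this issue (at the cost
// of some per-file startup overhead).
export default {
  workerThreads: false,
};
```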

My stack trace:

#
# Fatal error in , line 0
# Check failed: result.second.
#
#
#
#FailureMessage Object: 0x700014673b60
 1: 0x107aa5ef2 node::NodePlatform::GetStackTracePrinter()::$_3::__invoke() [/Users/jdharvey/.nvm/versions/node/v16.15.0/bin/node]
 2: 0x108a7bf13 V8_Fatal(char const*, ...) [/Users/jdharvey/.nvm/versions/node/v16.15.0/bin/node]
 3: 0x107e948fe v8::internal::GlobalBackingStoreRegistry::Register(std::__1::shared_ptr<v8::internal::BackingStore>) [/Users/jdharvey/.nvm/versions/node/v16.15.0/bin/node]
 4: 0x107bcb0e6 v8::ArrayBuffer::GetBackingStore() [/Users/jdharvey/.nvm/versions/node/v16.15.0/bin/node]
 5: 0x107a10fcb node::Buffer::Data(v8::Local<v8::Value>) [/Users/jdharvey/.nvm/versions/node/v16.15.0/bin/node]
 6: 0x107a3f9db node::fs::Read(v8::FunctionCallbackInfo<v8::Value> const&) [/Users/jdharvey/.nvm/versions/node/v16.15.0/bin/node]
 7: 0x107c213b9 v8::internal::FunctionCallbackArguments::Call(v8::internal::CallHandlerInfo) [/Users/jdharvey/.nvm/versions/node/v16.15.0/bin/node]
 8: 0x107c20e86 v8::internal::MaybeHandle<v8::internal::Object> v8::internal::(anonymous namespace)::HandleApiCallHelper<false>(v8::internal::Isolate*, v8::internal::Handle<v8::internal::HeapObject>, v8::internal::Handle<v8::internal::HeapObject>, v8::internal::Handle<v8::internal::FunctionTemplateInfo>, v8::internal::Handle<v8::internal::Object>, v8::internal::BuiltinArguments) [/Users/jdharvey/.nvm/versions/node/v16.15.0/bin/node]
 9: 0x107c205ff v8::internal::Builtin_HandleApiCall(int, unsigned long*, v8::internal::Isolate*) [/Users/jdharvey/.nvm/versions/node/v16.15.0/bin/node]
10: 0x108493af9 Builtins_CEntry_Return1_DontSaveFPRegs_ArgvOnStack_BuiltinExit [/Users/jdharvey/.nvm/versions/node/v16.15.0/bin/node]
zsh: trace trap  npm -w packages/api run testa

@novemberborn
Member

@jdharvey-ibm it does look like a Node.js worker thread issue 😞

@jharvey10

@novemberborn No worries! I'm making do with that setting turned off in my config. Just curious: will the child-process functionality continue to be supported going forward? I see that worker threads are the current default. I ask because I'm making "the big switch" in our repo from Jest to AVA, but for the time being I'll depend on the non-worker-thread flow because of this V8 issue. I just want to make sure I'm not headed down a dead-end road. I assume worker threads are the default because they give a performance boost or something, but hopefully the child-process-based approach will still be supported 😁

@novemberborn
Member

@jdharvey-ibm there's a variety of scenarios that don't work quite as well with worker threads, so yes, I think we'll have child process support for a long time still.

Certain advanced features (like shared workers) won't be available in child processes since they're a much more natural fit for worker threads.

@mil7

mil7 commented Jul 8, 2022

Facing the same issue on my Windows machine with Node 16.16.0. Fortunately, in our pipeline (or with Node 18) everything is fine.

A new ticket has been opened for this issue: nodejs/node#43617
The most helpful piece of information so far is that downgrading to an earlier Node 16 release helps as well.

@schmod

schmod commented Sep 27, 2022

This has been fixed in Node 16.17.0 🎉

@albertossilva

This has been fixed in Node 16.17.0 🎉

So far so good on my side as well. Really happy about that; I removed --serial and I'm saving a few minutes per run.

@avajs avajs locked as resolved and limited conversation to collaborators Aug 3, 2024