
use atomic load/store on curContext_ #187


Closed
szegedi wants to merge 1 commit into main from szegedi/atomic-context

Conversation

@szegedi commented Dec 11, 2024

What does this PR do?:
Instead of a pair of ContextPtr objects (each a std::shared_ptr<v8::Global<v8::Value>>) and an atomic pointer that switches between them, we use a single ContextPtr and rely on the std::atomic_{load|store}_explicit overloads for std::shared_ptr to load from and store into it atomically (see the sketch below).

Motivation:
While working on #186 I realized this can be done and that it simplifies the code. It also makes evolving the code toward that PR simpler.
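
For illustration, here is a minimal sketch of the single-ContextPtr idea. It is not the actual profiler code: a placeholder Context type stands in for v8::Global<v8::Value> so the snippet compiles on its own, and the class layout and method names around curContext_ are only illustrative.

```cpp
#include <atomic>   // std::memory_order
#include <memory>   // std::shared_ptr and its atomic free-function overloads
#include <utility>  // std::move

struct Context {};  // stand-in for v8::Global<v8::Value>
using ContextPtr = std::shared_ptr<Context>;

class WallProfiler {
  ContextPtr curContext_;  // a single shared_ptr, accessed atomically

 public:
  // Writer side: publish a new context.
  void SetContext(ContextPtr ctx) {
    std::atomic_store_explicit(&curContext_, std::move(ctx),
                               std::memory_order_release);
  }

  // Reader side: take a stable reference to whatever is currently published.
  ContextPtr GetContext() const {
    return std::atomic_load_explicit(&curContext_,
                                     std::memory_order_acquire);
  }
};
```

With this, the second ContextPtr slot and the atomic pointer that flipped between the two slots are no longer needed.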

@szegedi requested a review from nsavoire as a code owner on December 11, 2024 at 21:13

github-actions bot commented Dec 11, 2024

Overall package size

Self size: 7.45 MB
Deduped: 7.82 MB
No deduping: 7.82 MB

Dependency sizes

| name | version | self size | total size |
|------|---------|-----------|------------|
| source-map | 0.7.4 | 226 kB | 226 kB |
| pprof-format | 2.1.0 | 111.69 kB | 111.69 kB |
| p-limit | 3.1.0 | 7.75 kB | 13.78 kB |
| delay | 5.0.0 | 11.17 kB | 11.17 kB |
| node-gyp-build | 3.9.0 | 8.81 kB | 8.81 kB |

🤖 This report was automatically generated by heaviest-objects-in-the-universe

@szegedi added the semver-minor (Usually minor non-breaking improvements) label on Dec 11, 2024
@szegedi force-pushed the szegedi/atomic-context branch from 51fa650 to 455718a on December 11, 2024 at 21:16
pr-commenter bot commented Dec 11, 2024

Benchmarks

Benchmark execution time: 2024-12-11 21:22:36

Comparing candidate commit 455718a in PR branch szegedi/atomic-context with baseline commit 784f6f6 in branch main.

Found 2 performance improvements and 0 performance regressions! Performance is unchanged for 92 metrics; 26 metrics were unstable.

scenario:profiler-idle-no-wall-profiler-18

  • 🟩 cpu_user_time [-4.783ms; -1.356ms] or [-9.493%; -2.691%]

scenario:profiler-idle-with-wall-profiler-18

  • 🟩 cpu_user_time [-7.680ms; -2.461ms] or [-10.755%; -3.446%]

@nsavoire

I see several issues with this approach:

  • The std::atomic_load overloads for std::shared_ptr are deprecated in C++20 and removed in C++26.
  • Even ignoring the point above, an "atomic" load of an unmodified std::shared_ptr is a lie: the std::atomic_load overloads for std::shared_ptr are implemented with mutexes (cf. here, and the implementation here and here).
    As evidence, std::atomic_is_lock_free(shared_ptr<T>*) returns false.
  • C++20 introduced a specialization of std::atomic for std::shared_ptr. This specialization changes the implementation of std::shared_ptr to make it friendlier to concurrent access. It does not use mutexes, but it still isn't lock-free (std::atomic<std::shared_ptr<T>>::is_lock_free/is_always_lock_free returns false) and it uses CAS loops. (See the sketch after this list.)
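
The lock-freedom claims above are easy to check with a small standalone program (not project code; it assumes a C++17-or-later toolchain, and the second check additionally needs a standard library that ships the C++20 std::atomic<std::shared_ptr> specialization):

```cpp
#include <atomic>
#include <cstdio>
#include <memory>

int main() {
  auto p = std::make_shared<int>(42);

  // Free-function API used by this PR (deprecated in C++20, removed in C++26):
  std::printf("atomic_is_lock_free(&p) = %d\n",
              static_cast<int>(std::atomic_is_lock_free(&p)));

#ifdef __cpp_lib_atomic_shared_ptr
  // C++20 std::atomic<std::shared_ptr<T>> specialization:
  std::atomic<std::shared_ptr<int>> ap{p};
  std::printf("atomic<shared_ptr>::is_lock_free() = %d\n",
              static_cast<int>(ap.is_lock_free()));
#endif
  return 0;
}
```

On common implementations both queries print 0 (not lock-free), consistent with the points above.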

It seems like implementing a lock-free shared_ptr is really hard (in particular without access to atomic operations wider than 64 bits)!

I remember these videos on the subject being really interesting (even if I've already forgotten almost everything in them 😞):
https://www.youtube.com/watch?v=lNPZV9Iqo3U
https://www.youtube.com/watch?v=gTpubZ8N0no

Link to code in godbolt: https://godbolt.org/z/EM1GenTr9

@szegedi (Author) commented Dec 13, 2024

Thanks for looking into it; that's super valuable information. I forgot about the mutex. I knew about the deprecation, but figured we could replace those calls with newer C++ idioms later. I'll discard this for now and reintroduce the previous approach into the R&D experiment too.
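
For reference, a minimal sketch of what the "previous approach" mentioned above looks like, as described in the PR text: a pair of ContextPtr slots plus an atomic raw pointer that switches between them. This is not the actual profiler code: names other than curContext_/ContextPtr are illustrative, a placeholder Context type stands in for v8::Global<v8::Value>, and the real code's lifetime handling (when a slot may safely be overwritten) is elided.

```cpp
#include <atomic>
#include <memory>
#include <utility>

struct Context {};  // stand-in for v8::Global<v8::Value>
using ContextPtr = std::shared_ptr<Context>;

class WallProfiler {
  ContextPtr contexts_[2];                              // the pair of slots
  std::atomic<ContextPtr*> curContext_{&contexts_[0]};  // switches between them

 public:
  void SetContext(ContextPtr ctx) {
    // Write into the slot that is not currently published, then flip the pointer.
    ContextPtr* cur = curContext_.load(std::memory_order_relaxed);
    ContextPtr* other = (cur == &contexts_[0]) ? &contexts_[1] : &contexts_[0];
    *other = std::move(ctx);
    curContext_.store(other, std::memory_order_release);
  }

  Context* GetRawContext() {
    // Readers only perform a plain atomic pointer load; no shared_ptr
    // reference-count traffic and no mutex on this path.
    return curContext_.load(std::memory_order_acquire)->get();
  }
};
```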

@szegedi closed this on Dec 13, 2024