
std.mutex: implement blocking mutexes on Linux #1463


Closed · @shawnl wants to merge 2 commits from the mutex branch

Conversation

@shawnl (Contributor) commented Sep 3, 2018

See #1455.

Eventually I would like to see something like WTF::Lock/WTF::ParkingLot,
but that is a bunch of work.

@shawnl force-pushed the mutex branch 2 times, most recently from 41ab32c to 8905f96 on September 3, 2018 at 03:59
@andrewrk added this to the 0.4.0 milestone Sep 3, 2018
@andrewrk added the "work in progress" label Sep 3, 2018
@andrewrk (Member) left a comment:
This is an important component, so I am compelled to have high standards for acceptance. The logic here is nontrivial and difficult to reason about (because it deals with concurrency and atomic operations). So we need these things in order to accept it:

  • Tests.
  • Comments explaining in detail how the implementation works, so that one can verify the code matches the intended behavior.
  • Documentation comments explaining the API, and what guarantees the user can or cannot get from it.


    pub const Held = struct {
        mutex: *Mutex,

        pub fn release(self: Held) void {
            assert(@atomicRmw(u8, &self.mutex.lock, builtin.AtomicRmwOp.Xchg, 0, AtomicOrder.SeqCst) == 1);
            if (@atomicRmw(u32, &self.mutex.lock, AtomicRmwOp.Sub, 1, AtomicOrder.Release) != 1) {
                self.mutex.lock = 0;
@andrewrk (Member) commented on this line:
You can't have a naked write racing with an atomic write (the cmpxchg on line 42).

@andrewrk (Member) commented on this line:
Why doesn't the @atomicRmw do an xchg and set the value to 0? You can still check if the previous value was 1.
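A minimal sketch of that suggestion, written against the same 2018-era builtins the diff uses (with the usual builtin.AtomicOrder/AtomicRmwOp aliases). The u32 lock field and the 0/1/2 state encoding (0 = unlocked, 1 = locked, 2 = locked with waiters) are assumptions inferred from the snippets in this review, not the PR's actual code:

    // Hypothetical release(): a single atomic exchange, so no naked write can
    // race with the acquire path's cmpxchg, and the previous value is still
    // available to check.
    pub fn release(self: Held) void {
        const prev = @atomicRmw(u32, &self.mutex.lock, AtomicRmwOp.Xchg, 0, AtomicOrder.Release);
        assert(prev != 0); // releasing an unlocked mutex is a bug
        if (prev == 2) {
            // State 2 means at least one waiter may be sleeping;
            // a futex wake on &self.mutex.lock would go here.
        }
    }

Checking the value returned by the exchange preserves the original assert while eliminating the separate non-atomic store.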

    while (@atomicRmw(u8, &self.lock, builtin.AtomicRmwOp.Xchg, 1, AtomicOrder.SeqCst) != 0) {}
    var c: u32 = undefined;
    // This need not be strong because of the loop that follows.
    // TODO implement mutex3 from https://www.akkadia.org/drepper/futex.pdf in x86 assembly.
@andrewrk (Member) commented on this line:
Is that really better? Can we have an issue to discuss it rather than this TODO comment?

            // spin-lock
        }
    }
    if (@cmpxchgWeak(u32, &self.lock, 0, 2, AtomicOrder.Acquire, AtomicOrder.Monotonic)) |value2| {
@andrewrk (Member) commented on this line:
Why does this jump straight to the "locked with waiters" state? Shouldn't it just go back and try the original cmpxchg again?
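For reference, here is a rough sketch of the "mutex3" shape from the Drepper paper cited in the TODO above, which is one way to address this question: once the fast-path 0 -> 1 cmpxchg fails, a waiter only ever moves the word to 2 (so release() knows a wakeup is needed) and never retries the 0 -> 1 transition. Everything below (field names, the acquire/Held API, the placeholder for the futex wait) is an assumption for illustration, not the code under review:

    // Hypothetical acquire() following Drepper's mutex3, with the futex
    // wait left as a placeholder comment.
    pub fn acquire(self: *Mutex) Held {
        // Fast path: uncontended 0 -> 1 (locked, no waiters).
        if (@cmpxchgWeak(u32, &self.lock, 0, 1, AtomicOrder.Acquire, AtomicOrder.Monotonic)) |current| {
            var c = current;
            // Contended: force the state to 2 so release() knows to wake someone.
            if (c != 2) {
                c = @atomicRmw(u32, &self.lock, AtomicRmwOp.Xchg, 2, AtomicOrder.Acquire);
            }
            // Keep waiting until the exchange observes 0, i.e. we took the lock.
            while (c != 0) {
                // A futex wait on &self.lock with expected value 2 would go here.
                c = @atomicRmw(u32, &self.lock, AtomicRmwOp.Xchg, 2, AtomicOrder.Acquire);
            }
        }
        return Held{ .mutex = self };
    }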

@andrewrk removed the "work in progress" label Oct 3, 2018
@andrewrk added the "standard library" label Oct 3, 2018
@andrewrk (Member) commented Oct 3, 2018

This is actually pretty well tested by the std.atomic.Queue tests. So I'm ok with using that as coverage.
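If direct tests are wanted later, a minimal uncontended smoke test might look like the following; the init/acquire names are assumptions about the API, not something taken from this PR:

    // Hypothetical single-threaded smoke test (assumed API: Mutex.init(),
    // acquire() returning Held, Held.release()).
    test "Mutex: uncontended acquire and release" {
        var mutex = Mutex.init();
        const held = mutex.acquire();
        held.release();
    }

Contended behavior would still be exercised indirectly through the std.atomic.Queue tests mentioned above.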

@andrewrk closed this in 66cb75d on Oct 3, 2018
@shawnl deleted the mutex branch on March 29, 2019