RwLock needs a task-fair locking policy #705


Open
parasyte opened this issue Feb 16, 2020 · 15 comments

@parasyte

Long story short, I've audited at least three different async RwLock implementations 😁 and they all have the same problem: read-heavy workloads will starve writers.

In the case of async-std, the starvation happens here:

// Increment the number of active reads.
match self.state.compare_exchange_weak(
    state,
    state + ONE_READ,
    Ordering::SeqCst,
    Ordering::SeqCst,
) {
by incrementing the reader count without regard for the number of writers waiting to acquire the lock.
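
A minimal sketch of the write-preferring alternative being suggested, using an assumed state layout (ONE_READ and the WRITER_WAITING bit are illustrative names, not async-std's actual constants): readers back off whenever a writer is queued, instead of incrementing the count unconditionally.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Hypothetical state layout: the low bits count active readers, and the
// top bit is set while a writer is waiting for (or holds) the lock.
const ONE_READ: usize = 1;
const WRITER_WAITING: usize = 1 << (usize::BITS - 1);

// Write-preferring read acquire: refuse to take a read lock while any
// writer is queued, rather than barging past it.
fn try_read(state: &AtomicUsize) -> bool {
    loop {
        let s = state.load(Ordering::SeqCst);
        if s & WRITER_WAITING != 0 {
            return false; // let the queued writer go first
        }
        // compare_exchange_weak may fail spuriously, hence the retry loop.
        if state
            .compare_exchange_weak(s, s + ONE_READ, Ordering::SeqCst, Ordering::SeqCst)
            .is_ok()
        {
            return true;
        }
    }
}
```

In a real lock, `try_read` returning `false` would enqueue the reader's waker rather than simply failing; the point here is only the early check on the writer bit.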

See also:

@skade
Collaborator

skade commented Feb 16, 2020

@parasyte That looks like a good catch! Would you feel like proposing a patch?

@parasyte
Author

I've been comparing against parking_lot::RwLock. That implementation is admittedly very complex. And in all honesty, I'm not an expert in task scheduling.

Fairness is mostly achieved by scheduling tasks in the order that they attempt to acquire the lock. I can imagine this working in async-std as a potentially-interleaved FIFO queue of wakers. I'm not really sure where or how to start on that...
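
The potentially-interleaved FIFO queue of wakers described above might look something like this sketch (`Waiter`, `FairQueue`, and the batched reader wakeup are illustrative names, not async-std internals): waiters are stored in arrival order, and on unlock either the one writer at the head is woken, or the maximal run of consecutive readers up to the next writer is woken as a batch.

```rust
use std::collections::VecDeque;
use std::task::Waker;

// Hypothetical waiter queue: readers and writers park in arrival order,
// so a reader arriving after a writer cannot jump ahead of it.
enum Waiter {
    Reader(Waker),
    Writer(Waker),
}

struct FairQueue {
    waiters: VecDeque<Waiter>,
}

impl FairQueue {
    // On unlock, wake either the single writer at the head of the queue,
    // or every consecutive reader up to (but not past) the next writer.
    fn wake_next(&mut self) {
        match self.waiters.pop_front() {
            Some(Waiter::Writer(w)) => w.wake(),
            Some(Waiter::Reader(w)) => {
                w.wake();
                while matches!(self.waiters.front(), Some(Waiter::Reader(_))) {
                    if let Some(Waiter::Reader(w)) = self.waiters.pop_front() {
                        w.wake();
                    }
                }
            }
            None => {}
        }
    }
}
```

Waking readers in batches preserves their concurrency, while stopping at the first queued writer is what keeps writers from starving.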

@skade
Collaborator

skade commented Feb 17, 2020

@parasyte Okay, thanks for the report! I'm assigning this to someone else :).

@skade skade assigned ghost Feb 17, 2020
@ghost

ghost commented Feb 26, 2020

I think it's generally a good idea to consider priority policies in a RwLock. According to the Wikipedia page, there are at least three of them: read-preferring, write-preferring and unspecified. Only a write-preferring RwLock is needed to avoid the writer starvation problem mentioned in this issue. Are we sure that we want a fair lock that also avoids starving the readers?

@parasyte
Author

parasyte commented Feb 26, 2020

Write-preferring locks do not starve readers, at least not in the sense that read-preferring locks can leave writers unable to acquire the lock at all in the presence of a steady stream of readers. In a write-preferring lock, a steady stream of writers cannot prevent readers from making progress; it can only limit reader concurrency.

For example, a worst-case scenario would be a perfect interleaving of 1:1 readers and writers in sequence, causing the lock to act almost exactly like a Mutex. Meanwhile the ideal case will have a large number of readers and small number of writers.

@ghost

ghost commented Feb 26, 2020

The worst-case scenario depends on the lock's policy. If the lock is taken by a writer while a reader's request comes in, the request needs to be queued to ensure that the reader gets a chance to read when the writer finishes. I believe this is what you're referring to in your comments above.

Another policy is simply to reject the reader's request. When the writer finishes, the lock is again open for anyone to grab. If the writer is really busy, it can re-acquire the lock before any reader does and effectively starve the readers.

I think both policies may have their own use cases. For example, in scenarios where the writer is much more important than the reader, the latter policy (which is really unfair to the readers) will be the desired behavior.

@D1plo1d

D1plo1d commented Apr 21, 2020

Is a write-preferring RwLock still being considered for async-std?

I am using RwLock to allow infrequent updates to application-wide configuration; in my use case a write-preferring RwLock would let me guarantee that config updates are applied.

For reference on a similar design here is Java's ReentrantReadWriteLock (see "fair mode" specifically): https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/locks/ReentrantReadWriteLock.html
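
As a rough illustration of the guarantee being asked for here, the following is a minimal blocking (std-only, non-async) write-preferring lock: readers defer to any queued writer, so an infrequent config update can always get in. All names are hypothetical; this is a sketch of the policy, not async-std's implementation.

```rust
use std::sync::{Condvar, Mutex};

// Write-preferring reader-writer lock built on Mutex + Condvar.
struct WritePreferring {
    state: Mutex<State>,
    cond: Condvar,
}

#[derive(Default)]
struct State {
    readers: usize,
    writer_active: bool,
    writers_waiting: usize,
}

impl WritePreferring {
    fn new() -> Self {
        Self { state: Mutex::new(State::default()), cond: Condvar::new() }
    }

    fn read_lock(&self) {
        let mut s = self.state.lock().unwrap();
        // Readers yield not only to an active writer, but to *waiting*
        // writers too -- this is what prevents writer starvation.
        while s.writer_active || s.writers_waiting > 0 {
            s = self.cond.wait(s).unwrap();
        }
        s.readers += 1;
    }

    fn read_unlock(&self) {
        let mut s = self.state.lock().unwrap();
        s.readers -= 1;
        if s.readers == 0 {
            self.cond.notify_all();
        }
    }

    fn write_lock(&self) {
        let mut s = self.state.lock().unwrap();
        s.writers_waiting += 1;
        while s.writer_active || s.readers > 0 {
            s = self.cond.wait(s).unwrap();
        }
        s.writers_waiting -= 1;
        s.writer_active = true;
    }

    fn write_unlock(&self) {
        let mut s = self.state.lock().unwrap();
        s.writer_active = false;
        self.cond.notify_all();
    }
}
```

The worst case described earlier in the thread applies: under a steady stream of writers this degenerates toward Mutex-like behavior, but no update can be starved out.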

@D1plo1d D1plo1d unassigned ghost Apr 21, 2020
@kydos

kydos commented May 26, 2020

Hello everyone, we are suffering some starvation problems due to this issue on our protocol routing stack. We have workarounds, but it would be great to know if there is a plan to tackle this important issue in a patch release of version 1.6.

Thanks very much for the great work on async-std.

A+

@ghost

ghost commented Aug 19, 2020

Published a reader-writer lock with a fair locking strategy: https://github.com/stjepang/async-rwlock

@yoshuawuyts
Contributor

During my lunch break I attempted to write a quick patch to replace async-std's internal rwlock impl with async-rwlock. Unfortunately I ran into a compat issue between the two APIs: https://github.com/stjepang/async-rwlock/issues/1 — if this could be resolved it'd be really neat; everyone using async-std could get a better rwlock!

@ghost

ghost commented Aug 29, 2020

I have published a new version of async-mutex and async-rwlock that supports T: ?Sized.

EDIT: The crate is async-lock:

@imuni4fun

@stjepang as a quick follow-up, is that fair version you referenced integrated into any public crates? This repo is public, the referenced repo 404s for me, and this issue is open so just wondering. Would love to see a fair/write-preferring option. Thanks!

@Fishrock123
Member

fwiw async-rwlock is now at https://github.com/smol-rs/async-lock

@imuni4fun

> fwiw async-rwlock is now at https://github.com/smol-rs/async-lock

sweet, thanks!
