RwLock needs a task-fair locking policy #705
Comments
@parasyte That looks like a good catch! Would you feel like proposing a patch?
I've been comparing against other implementations. Fairness is mostly achieved by scheduling tasks in the order that they attempt to acquire the lock. I can imagine this working in async-std as well.
@parasyte Okay, thanks for the report! I'm assigning this to someone else :).
I think it's generally a good idea to consider priority policies in an RwLock. According to the Wikipedia page, there are at least three of them: read-preferring, write-preferring, and unspecified. Only a write-preferring RwLock is needed to avoid the writer-starvation problem mentioned in this issue. Are we sure that we want a fair lock that also avoids starving the readers?
Write-preferring locks do not starve readers, at least not in the same sense that a read-preferring lock can leave writers unable to acquire the lock at all in the presence of a steady stream of readers. In a write-preferring lock, a steady stream of writers cannot prevent readers from making progress; it can only limit the readers' concurrency. For example, a worst-case scenario would be a perfect 1:1 interleaving of readers and writers in sequence, causing the lock to act almost exactly like a Mutex.
The worst-case scenario depends on the lock's policy. If the lock is taken by a writer while a reader's request comes in, the request needs to be queued to ensure that the reader gets a chance to read when the writer finishes. I believe this is what you're referring to in your comments above. Another policy is simply to reject the reader's request. When the writer finishes, the lock is again open for anyone to grab. If the writer is really busy, it can re-acquire the lock before any reader does and effectively starve the readers. I think both policies may have their own use cases. For example, in scenarios where the writer is much more important than the reader, the latter policy (which is really unfair to the readers) will be the desired behavior.
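For illustration, here is a minimal sketch contrasting the read-preferring and write-preferring policies discussed above. This is not async-std's actual implementation; `State` is a hypothetical snapshot of a lock's internal bookkeeping, reduced to the fields needed to show the difference.

```rust
// Hypothetical snapshot of a reader-writer lock's state (not async-std's internals).
struct State {
    writer_active: bool,
    writers_waiting: usize,
}

// Read-preferring: a new reader may enter whenever no writer holds the lock.
// A steady stream of readers keeps the reader count above zero forever,
// so a waiting writer never gets its turn.
fn reader_may_enter_read_preferring(s: &State) -> bool {
    !s.writer_active
}

// Write-preferring: a new reader must also wait while any writer is queued,
// so a writer that starts waiting is guaranteed to acquire the lock once the
// current readers drain.
fn reader_may_enter_write_preferring(s: &State) -> bool {
    !s.writer_active && s.writers_waiting == 0
}
```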
Is a write-preferring RwLock still being considered for async-std? I am using RwLock to allow infrequent updates to application-wide configuration - in my use case a write-preferring RwLock would let me guarantee that config updates are applied. For reference on a similar design, here is Java's ReentrantReadWriteLock (see "fair mode" specifically): https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/locks/ReentrantReadWriteLock.html
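A minimal sketch of that configuration use case, assuming a hypothetical `Config` type (in practice the lock would be shared via `Arc` or a static). With a write-preferring (fair) policy, `apply_update` is guaranteed to make progress even while readers arrive continuously.

```rust
use async_std::sync::RwLock;

#[derive(Clone, Debug, Default)]
struct Config {
    log_level: String,
}

async fn read_log_level(config: &RwLock<Config>) -> String {
    // Many concurrent readers can hold the lock at once.
    config.read().await.log_level.clone()
}

async fn apply_update(config: &RwLock<Config>, new: Config) {
    // The infrequent writer; under a read-preferring policy this await could
    // be starved indefinitely by a steady stream of readers.
    *config.write().await = new;
}
```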
Hello everyone, we are suffering some starvation problems due to this issue in our protocol routing stack. We have workarounds, but it would be great to know if there is a plan to tackle this important issue in a patch release of version 1.6. Thanks very much for the great work on async-std. A+
Published a reader-writer lock with a fair locking strategy: https://github.com/stjepang/async-rwlock
During my lunch break I attempted to write a quick patch to replace async-std's internal rwlock impl with the async-rwlock crate linked above.
I have published a new version of the crate. EDIT: The crate is now async-lock.
@stjepang as a quick follow-up, is the fair version you referenced integrated into any public crates? This repo is public, the referenced repo 404s for me, and this issue is still open, so I'm just wondering. Would love to see a fair/write-preferring option. Thanks!
Long story short, I've audited at least three different async RwLock implementations 😁 and they all have the same problem: read-heavy workloads will starve writers.

In the case of async-std, the starvation happens in async-std/src/sync/rwlock.rs, lines 195 to 201 at 125fa5b.

See also: the RwLock discussion in rust-lang/futures-rs#2082 (comment)
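A hypothetical reproduction of that read-heavy pattern (not code from the issue): several tasks re-acquire read guards in overlapping intervals, so under a read-preferring policy the writer at the bottom may wait indefinitely.

```rust
use std::sync::Arc;
use std::time::Duration;

use async_std::sync::RwLock;
use async_std::task;

fn main() {
    task::block_on(async {
        let lock = Arc::new(RwLock::new(0u32));

        // A steady stream of readers: each holds a read guard for a while and
        // immediately re-acquires it, so the reader count rarely drops to zero.
        for _ in 0..8 {
            let lock = Arc::clone(&lock);
            task::spawn(async move {
                for _ in 0..1_000 {
                    let _guard = lock.read().await;
                    task::sleep(Duration::from_millis(10)).await;
                }
            });
        }

        // With a task-fair / write-preferring policy this resolves promptly;
        // with a read-preferring policy it can be starved for a long time.
        let mut value = lock.write().await;
        *value += 1;
        println!("writer finally acquired the lock: {}", *value);
    });
}
```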