WIP: Add Rust implementation for DABA #24
base: master
Conversation
rust/src/daba/mod.rs
{
    // ith oldest value in FIFO order stored at vi = vals[i]
    vals: VecDeque<Value>,
    aggs: VecDeque<Value>,
We implemented the vals and aggs using a single underlying queue. We created an internal struct that wrapped both a value and a partial aggregate, and then made the queue contain that struct. I think that will probably have better performance, both because of locality and by just doing less total work.
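For illustration, here is a minimal sketch of what such a combined queue could look like; this is not the reviewers' C++ code, and the names Entry, val, and agg are invented for the sketch:

use std::collections::VecDeque;

// Hypothetical combined entry: the value and its partial aggregate live side
// by side, so a single VecDeque keeps both together in memory.
struct Entry<Value> {
    val: Value, // the ith oldest value in FIFO order
    agg: Value, // the partial aggregate stored for that position
}

struct Daba<Value> {
    q: VecDeque<Entry<Value>>, // one queue instead of parallel vals/aggs
    // the index fields (l, r, a, b) would follow as in the PR
}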
Good point, that is something I should definitely fix
I just loaded up our 2017 paper, and we presented it with parallel queues. :) So I understand why you implemented it that way. Our soon-to-be-submitted journal article presents it in a way closer to our C++ implementation.
I still do not think I fully get this algorithm, but thankfully it was pretty easy to implement 😅
Looking forward to seeing the paper
fn pop(&mut self) {
    if self.vals.pop_front().is_some() {
        self.aggs.pop_front();
        self.l -= 1;
I'm not sure decrementing is the right thing here - it may be, but it should be carefully considered.
You've implemented l, r, a, and b as indices, which is something we're just now grappling with: we implemented them as iterators, but our presentation in the 2017 paper was kinda loose about whether they are iterators/pointers or indices. When they're indices, that means f==0 and e==vals.len() are always true, and we don't need to explicitly represent them. I think you've correctly done that. But I'm unsure about this decrementing - you're following our published algorithms for fixup(), which did not assume decrementing on eviction. It's also possible there's a subtlety I'm missing with the semantics of VecDeque.
In Rust it's possible to overload the indexing operator. The indexing for VecDeque is an alias for get, which indexes from the front (including the offset). I decrement all indices when removing the front element to shift them left. I believe it should work but I might be mistaken. If the VecDeque was replaced with a regular Vec, whose indexing does not include an offset, then I would not decrement.
Yes, now I see what you mean. Calling pop_front() on a VecDeque effectively shifts the entire deque to the left, so if you had an index for location i, accessing q[i] before and after the pop will yield different values. You're decrementing i so that it continues to point at the same value in the deque.
I was wondering if it would be easier to just use an iterator, and I don't think it is. From what I can tell, there's no easy way to "decrement" a standard Rust iterator as is required in the shrink case. So, in effect, you have to do your own iterator management.
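A tiny standalone example (not code from the PR) of the index shift discussed above and the compensating decrement:

use std::collections::VecDeque;

fn main() {
    let mut q: VecDeque<i32> = VecDeque::from(vec![10, 20, 30]);
    let mut i = 1; // index of the value 20
    assert_eq!(q[i], 20);

    q.pop_front();        // the deque is now [20, 30]
    assert_eq!(q[i], 30); // the same index now names a different value
    i -= 1;               // decrement so the index keeps tracking the value 20
    assert_eq!(q[i], 20);
}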
I think what we want is a linked list of arrays with "cursors".
Yes, the concept of a cursor is essentially what C++-style iterators are, and how we ended up presenting the algorithm. The chunked array queue is a linked list; it just has many elements in each node.
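To make that shape concrete, here is a rough sketch; the chunk capacity and all names are invented for illustration and do not reflect the authors' C++ layout:

// A chunked array queue: a linked list whose nodes each carry many elements,
// with a cursor naming a position as (chunk, offset) rather than a global index.
const CHUNK_CAPACITY: usize = 64; // arbitrary choice for this sketch

struct Chunk<T> {
    items: Vec<T>,               // holds up to CHUNK_CAPACITY elements
    next: Option<Box<Chunk<T>>>, // link to the next node in the list
}

struct Cursor<T> {
    chunk: *mut Chunk<T>, // which chunk the cursor currently points into
    offset: usize,        // position of the element within that chunk
}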
By the way, I assume that when you're confident the implementation is correct you'll remove the "WIP"?
The implementation is getting closer to working, but there are still some small problems that I need to fix. The unsafe operations are mostly needed for maintaining multiple cursors simultaneously, each of which can mutate the queue. In safe Rust there can only be one mutable reference (&mut) to a value at a time.
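For context, a contrived illustration (not code from the PR) of that restriction:

use std::collections::VecDeque;

fn main() {
    let mut q: VecDeque<i32> = VecDeque::from(vec![1, 2, 3, 4]);

    // Rejected by the borrow checker: two simultaneous mutable borrows of q
    // ("cannot borrow `q` as mutable more than once at a time").
    // let a = &mut q[0];
    // let b = &mut q[3];
    // *a += *b;

    // Safe code has to sequence the accesses instead; keeping several
    // long-lived mutable cursors into one queue is where raw pointers and
    // unsafe start to look attractive.
    let b = q[3];
    q[0] += b;
    println!("{:?}", q); // [5, 2, 3, 4]
}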
Sure, will do 👍
If it is ok, I will initially create two separate implementations, one for VecDeque and one for the chunked array queue.
This PR builds on #23 and adds a Rust implementation for DABA. It does not, however, yet add an implementation for the chunked array queue; instead it uses Rust's std::collections::VecDeque. I will give a shot at implementing the chunked array queue soon. It will be interesting to see the performance difference.