feat(request): make request ID generation on client configurable #1535
base: master
Conversation
Hey hey,
Thanks for your PR. It looks generally good, but I had in mind to add the "custom generator" to the `IdKind` enum to avoid adding extra APIs for this. After this change one can configure the `id_format` with `ClientBuilder::id_format` and set a custom generator with `ClientBuilder::with_request_id_generator`, which means `with_request_id_generator` overrides the configured `id_format`.

Thus, I would really like to unify them and perhaps replace these APIs with `set_id_generator` or something similar. I reckon that we may need to refactor things and remove the `next_id` APIs in favor of a generator impl similar to the one you added.

So, WDYT?
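A rough sketch of what a unified `IdKind` with a `Custom` variant could look like. All type shapes and names here are assumptions for illustration, not jsonrpsee's actual API:

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;

// Simplified stand-in; jsonrpsee's `Id<'static>` is richer than this.
#[derive(Debug, Clone, PartialEq)]
enum Id {
    Number(u64),
    Str(String),
}

// Hypothetical unified `IdKind` where the custom generator is just
// another variant instead of a separate builder API.
enum IdKind {
    Number,
    String,
    Custom(Arc<dyn Fn() -> Id + Send + Sync>),
}

struct RequestIdManager {
    kind: IdKind,
    current: AtomicU64,
}

impl RequestIdManager {
    // One entry point covers both the built-in formats and a custom generator.
    fn next_request_id(&self) -> Id {
        match &self.kind {
            IdKind::Number => Id::Number(self.current.fetch_add(1, Ordering::SeqCst)),
            IdKind::String => Id::Str(self.current.fetch_add(1, Ordering::SeqCst).to_string()),
            IdKind::Custom(generate) => generate(),
        }
    }
}

fn main() {
    let numbers = RequestIdManager { kind: IdKind::Number, current: AtomicU64::new(0) };
    assert_eq!(numbers.next_request_id(), Id::Number(0));
    assert_eq!(numbers.next_request_id(), Id::Number(1));

    let custom = RequestIdManager {
        kind: IdKind::Custom(Arc::new(|| Id::Str("req-abc".into()))),
        current: AtomicU64::new(0),
    };
    assert_eq!(custom.next_request_id(), Id::Str("req-abc".into()));
}
```

With this shape a single `set_id_generator`-style setter would suffice, since the default numeric/string formats and a user-supplied closure all live behind the same enum.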
Thanks for the direction!

All good, just ping us if you are blocked or need some extra help :)
(force-pushed 481c077 to 51ad234)
client/http-client/src/client.rs (outdated)
```diff
@@ -414,12 +414,11 @@ where
 			None => None,
 		};
 		let batch = batch.build()?;
-		let id = self.id_manager.next_request_id();
-		let id_range = generate_batch_id_range(id, batch.len() as u64)?;
+		let id_range = self.id_manager.generate_batch_id_range(batch.len());
```
Hey again,

So it doesn't make sense to have this `id_range` anymore, because we use it to check whether all requests in a batch were replied to. Now the actual ID is fetched from the `id_manager`, so it's not necessarily true that the `id_range` and `self.id_manager.next_request_id()` contain the same IDs. Additionally, because it's possible to use a custom ID generator, we can't rely on these ID ranges anymore anyway.

I reckon that we need to remove `generate_batch_id_range` completely and just push the generated IDs to a `Vec`, such as:
```rust
// This will contain the request IDs in the order they appear in the batch.
let mut ids = Vec::new();
for (method, params) in batch.into_iter() {
	let id = self.id_manager.next_request_id();
	ids.push(id.clone());
	batch_request.push(RequestSer {
		jsonrpc: TwoPointZero,
		id,
		method: method.into(),
		params: params.map(StdCow::Owned),
	});
}
// Use `ids` to check whether the response has replied to every id in the batch.
```
The reason why we need this is that a server may respond to each call in the batch in arbitrary order:

> The Response objects being returned from a batch call MAY be returned in any order within the Array. The Client SHOULD match contexts between the set of Request objects and the resulting set of Response objects based on the id member within each Object.
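Given that, matching an out-of-order batch response back to request order could be sketched like this (using a simplified stand-in for jsonrpsee's `Id<'static>`; a linear scan is used because, as noted below, duplicate IDs across batches rule out a plain map):

```rust
// Simplified stand-in for jsonrpsee's `Id<'static>`.
#[derive(Debug, Clone, PartialEq)]
enum Id {
    Number(u64),
    Str(String),
}

/// Reorder an out-of-order batch response to follow the request order.
/// Returns `None` if any request id is missing from the response.
/// With duplicate ids inside one batch matching is inherently ambiguous;
/// this sketch simply takes the first response carrying the id.
fn reorder<T: Clone>(request_ids: &[Id], responses: &[(Id, T)]) -> Option<Vec<T>> {
    request_ids
        .iter()
        .map(|id| responses.iter().find(|(rid, _)| rid == id).map(|(_, r)| r.clone()))
        .collect()
}

fn main() {
    let ids = vec![Id::Number(0), Id::Number(1), Id::Str("x".into())];
    let responses = vec![
        (Id::Str("x".into()), "third"),
        (Id::Number(1), "second"),
        (Id::Number(0), "first"),
    ];
    assert_eq!(reorder(&ids, &responses), Some(vec!["first", "second", "third"]));
    // A missing id means the batch was not fully answered.
    assert_eq!(reorder(&ids, &responses[..2]), None);
}
```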
The actual logic that matches a batch response to the batch request must also be changed, because it currently assumes every ID is generated from an integer and can be parsed back into one, which is used to index into the response.
One extra test with an unordered batch response would be great to make sure everything works as intended.
Thanks for the feedback, makes sense. Accordingly, the usage of the `Range` type in `BatchMessage` and in related functions should also be changed, right? It should probably be replaced with something like `Vec<Id<'static>>`.
yes, correct
FYI the refactor is in progress but I am facing some issues with the lifetimes. I need to either use `Vec<Id<'a>>`, which means adding a lifetime parameter across a big part of the client, or use `Vec<Id<'static>>`, which is not always optimal.
Just do the naive implementation for now with `Vec<Id<'static>>`; the default id is still a number and cloning it should be "cheap".
Updated and test added
```rust
for (key, _) in self.batches.iter() {
	if key.len() == batch.len() && batch.iter().all(|id| key.contains(id)) {
		matched_key = Some(key.clone());
		break;
	}
}

if let Some(key) = matched_key {
	if let Some((_key, state)) = self.batches.remove_entry(&key) {
		return Some(state);
	}
}

None
```
Most likely this could be improved if a different data structure is used to store batches on `RequestManager`.
client/http-client/src/tests.rs (outdated)
```rust
batch_request.insert("test_echo", rpc_params!["fourth"]).unwrap();
batch_request.insert("test_echo", rpc_params!["fifth"]).unwrap();
let server_response = r#"[{"jsonrpc":"2.0","result":"fifth","id":4}, {"jsonrpc":"2.0","result":"hello","id":0}, {"jsonrpc":"2.0","result":"here's your swag","id":2}, {"jsonrpc":"2.0","result":"fourth","id":3}, {"jsonrpc":"2.0","result":"goodbye","id":1}]"#.to_string();
let res = run_batch_request_with_response::<String>(batch_request, server_response)
```
Can you please change this to use a custom id, i.e., you need to set it via `Client::id_format`? 🙏
Done!
cool cool, I'll have a proper look next week, thanks for working on this

Sorry @PanGan21 I'll take a look tomorrow 🙏
```rust
	let err_obj = ErrorObject::borrowed(0, "", None);
	responses.push(Err(err_obj));
}
let mut responses_map: BTreeMap<Id<'static>, Result<_, ErrorObject>> =
```
We can't use a `BTreeMap` here because nothing prevents a custom ID generator from producing the same ID more than once (duplicate IDs).
Fixed
```rust
pub(crate) fn complete_pending_batch(&mut self, batch: Vec<Id<'static>>) -> Option<BatchState> {
	let mut matched_key = None;

	for (key, _) in self.batches.iter() {
```
yeah, this is inefficient/annoying compared to the old code 🤔
```rust
let mut matched_key = None;

for (key, _) in self.batches.iter() {
	if key.len() == batch.len() && batch.iter().all(|id| key.contains(id)) {
```
You could just use `Vec`'s `PartialEq`/`Eq` impl here.
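For reference, `Vec`'s derived equality compares element-wise in order, so as long as both sides push ids in request order, a single `==` can replace the `len()` + `contains()` dance (sketch with a simplified `Id`):

```rust
// Minimal stand-in for jsonrpsee's `Id`.
#[derive(Debug, PartialEq)]
enum Id {
    Number(u64),
    Str(String),
}

fn main() {
    let key = vec![Id::Number(0), Id::Str("a".into())];
    let batch = vec![Id::Number(0), Id::Str("a".into())];
    // Element-wise, in-order comparison.
    assert!(key == batch);
    // Unlike the contains()-based check, `Vec` equality is order-sensitive,
    // so this only works if insertion order is preserved on both sides.
    assert!(key != vec![Id::Str("a".into()), Id::Number(0)]);
}
```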
```rust
}

if let Some(key) = matched_key {
	if let Some((_key, state)) = self.batches.remove_entry(&key) {
```
It doesn't make sense to use a hashmap here anymore; it should most likely be `Vec<Vec<Id<'static>>>` or something. Then just use `Vec::retain` to remove the completed batch.
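A sketch of that Vec-based store follows; `position` + `swap_remove` is used rather than `retain` because the completed `BatchState` has to be handed back to the caller. All names and shapes here are placeholders, not the actual jsonrpsee types:

```rust
// Hypothetical, simplified shapes for illustration.
#[derive(Debug, PartialEq)]
struct BatchState(&'static str);

struct RequestManager {
    // Pending batches stored as a Vec instead of a map keyed by ids,
    // since duplicate ids make map keys unreliable.
    batches: Vec<(Vec<u64>, BatchState)>,
}

impl RequestManager {
    /// Find the batch whose ids match, remove it, and return its state.
    fn complete_pending_batch(&mut self, batch: &[u64]) -> Option<BatchState> {
        let idx = self.batches.iter().position(|(ids, _)| ids.as_slice() == batch)?;
        Some(self.batches.swap_remove(idx).1)
    }
}

fn main() {
    let mut manager = RequestManager {
        batches: vec![(vec![0, 1], BatchState("first")), (vec![2, 3], BatchState("second"))],
    };
    assert_eq!(manager.complete_pending_batch(&[2, 3]), Some(BatchState("second")));
    assert_eq!(manager.batches.len(), 1);
    assert_eq!(manager.complete_pending_batch(&[9]), None);
}
```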
Fixed. Not using retain but a similar approach. Let me know if that works for you
Thanks for your PR (sorry about the slow review though).

I'm a bit annoyed by how inefficient the implementation becomes with a custom ID generator: `Vec<Id>` all over the place, and linear searches to find elements in the batch response. It's not possible to use `BTreeMap`/`HashMap` because we can't assume that a custom request ID generator never produces duplicate IDs. What we could do is sort the `Vec<Id>` and then binary search through it.

However, I'm not convinced that this annoying implementation is worth the effort for such a niche feature.

//cc @jsdw what do you think? Or any smarter ideas?
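The sort-then-binary-search idea could be sketched like this (plain `u64` ids for brevity; it assumes ids within one batch are distinct, since otherwise any lookup is ambiguous anyway):

```rust
/// Pair each id with its original position in the batch, sorted by id.
fn sorted_index(request_ids: &[u64]) -> Vec<(u64, usize)> {
    let mut v: Vec<(u64, usize)> = request_ids
        .iter()
        .copied()
        .enumerate()
        .map(|(pos, id)| (id, pos))
        .collect();
    v.sort_by_key(|&(id, _)| id);
    v
}

/// O(log n) lookup of a response id's position in the original batch order,
/// replacing a linear scan per response.
fn position_of(sorted: &[(u64, usize)], id: u64) -> Option<usize> {
    sorted
        .binary_search_by_key(&id, |&(id, _)| id)
        .ok()
        .map(|i| sorted[i].1)
}

fn main() {
    let sorted = sorted_index(&[42, 7, 19]);
    assert_eq!(position_of(&sorted, 19), Some(2));
    assert_eq!(position_of(&sorted, 42), Some(0));
    assert_eq!(position_of(&sorted, 1), None);
}
```

This keeps the duplicate-id concern out of the data structure (it is just a sorted `Vec`, not a map), at the cost of an O(n log n) sort per batch.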
For some reason after rebasing with the master branch, the test …
The following PR adds a `Custom` variant on `IdKind`.

Related: #1524