Ability to chunk download from object store #274

Comments
I originally submitted this issue in the datafusion repo, which I think is the wrong repo. Quoted reply from @alamb:
> It should be relatively straightforward to achieve this using `buffer_ordered` from the `futures` crate, we may just need to document how to do this.
>
> Maybe it would make a good example.
I can write an example using `buffer_ordered`, but it's not obvious to me how to use the stream interface with it.
I was imagining that it would look something like making multiple calls to `ObjectStore::get_range`.
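For illustration, a minimal sketch of what such an example could look like (my sketch, not code from this thread; it assumes the `ObjectStore::head`/`get_range` APIs, and note that in the `futures` crate the order-preserving adapter is spelled `buffered`, with `buffer_unordered` as its out-of-order sibling):

```rust
use futures::stream::{self, StreamExt, TryStreamExt};
use object_store::{path::Path, ObjectStore};

/// Download `path` in fixed-size chunks, issuing up to `concurrency`
/// range requests in parallel while yielding the chunks in order.
async fn chunked_get(
    store: &dyn ObjectStore,
    path: &Path,
    chunk_size: usize,
    concurrency: usize,
) -> object_store::Result<Vec<u8>> {
    // One HEAD request up front to learn the object size.
    let size = store.head(path).await?.size;

    // Split [0, size) into chunk_size-sized ranges.
    let ranges: Vec<_> = (0..size)
        .step_by(chunk_size)
        .map(|start| start..(start + chunk_size).min(size))
        .collect();

    // Issue the range GETs with bounded concurrency; `buffered`
    // polls up to `concurrency` futures at once but yields their
    // results in the original order.
    let mut chunks = stream::iter(ranges)
        .map(|range| store.get_range(path, range))
        .buffered(concurrency);

    let mut out = Vec::with_capacity(size);
    while let Some(chunk) = chunks.try_next().await? {
        out.extend_from_slice(&chunk);
    }
    Ok(out)
}
```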
A related discussion:

I believe @crepererum is working on something like this, called "chunked downloading".
I do. We have code for that at InfluxData and I plan to upstream this in the following order:
…
FWIW my preference would be to build this into the store implementations, e.g. into `GetClient`, as opposed to adding further wrapper types. I'd very much like to move away from wrapping things at the `ObjectStore` interface.

Edit: Actually my real preference would be to build this into something akin to the buffered interfaces, as opposed to baking it into `ObjectStore` at all. This would allow for out-of-order chunking, avoid the issue of providing size and ETag information, and generally be far more flexible...
What do you mean by "buffered interfaces"? I mean, a more general implementation sounds great, but if we have one that is implemented as an `ObjectStore` wrapper, why not start there?
I am referring to things like `BufReader`.

My understanding from Marco's comment is that we would need to use the extension mechanism in order to get the size (and possibly ETag) through to the wrapper. Given this already implies a non-standard invocation of the `ObjectStore::get` API by the caller, I don't really see the advantage over using a separate utility helper akin to `BufReader` in order to achieve this. We avoid overloading the `ObjectStore` interface, can return data out of order, and have a cleaner, more focused API.

TBC I am not suggesting an initial cut needs to implement all of the above, but that we should adopt an approach to this issue that allows for this down the line. Tbh the utility approach should be significantly simpler than an `ObjectStore` wrapper.
The utility approach certainly looked nice with an alternate tokio runtime.
I also don't really follow the vague interface design that @tustvold proposes here. TBH I have an implementation that works. It's pragmatic, maybe not the most beautiful one, but it works.
As a very first cut, one could take your existing implementation that yields a stream and wrap it up as follows:
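A sketch of that shape (illustrative, not tustvold's actual code; `chunked_stream` is a hypothetical name, and the object size is assumed to be known up front):

```rust
use std::sync::Arc;

use bytes::Bytes;
use futures::stream::{self, BoxStream, StreamExt};
use object_store::{path::Path, ObjectStore, Result};

/// Hypothetical utility: stream `path` as fixed-size chunks with up
/// to `concurrency` range requests in flight, yielded in order.
fn chunked_stream(
    store: Arc<dyn ObjectStore>,
    path: Path,
    size: usize,
    chunk_size: usize,
    concurrency: usize,
) -> BoxStream<'static, Result<Bytes>> {
    let ranges: Vec<_> = (0..size)
        .step_by(chunk_size)
        .map(|start| start..(start + chunk_size).min(size))
        .collect();

    stream::iter(ranges)
        .map(move |range| {
            // Each in-flight request owns its own handle to the store.
            let store = Arc::clone(&store);
            let path = path.clone();
            async move { store.get_range(&path, range).await }
        })
        .buffered(concurrency)
        .boxed()
}
```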
One could envision extending that with additional functionality in future, for example:
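For example (again my illustration), adapting the chunk stream into a `futures::io::AsyncRead`, echoing the `BufReader` analogy above; this assumes `object_store::Error` converts into `std::io::Error`:

```rust
use futures::TryStreamExt;

// Adapt the ordered chunk stream into a `futures::io::AsyncRead`,
// assuming `store`, `path` and `size` are already in scope:
let reader = chunked_stream(store, path, size, 16 * 1024 * 1024, 4)
    .map_err(std::io::Error::from)
    .into_async_read();
```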
So we're back to the discussion of generic/exchangeable interfaces, and to the issue that the chunked downloader introduces rather large integration churn.
Isn't churn unavoidable, as you need some way to provide the size of the target object and its version information? No approach is going to be a drop-in replacement?
The version information is NOT required. You get it from the individual sub-GETs and just need to compare it. The size is required if you want to be optimal, but you can also send an initial HEAD request, and I would argue that for a 100MB file and 16MB chunks, that's still better than not chunking at all. So that extension is (from a user PoV) optional.
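A sketch of that comparison (illustrative; `get_chunk_consistent` is a hypothetical helper, and it assumes an `object_store` version where `GetOptions` carries a range, `GetResult` exposes the object's `ObjectMeta`, and an `Error::Precondition` variant exists):

```rust
use object_store::{path::Path, GetOptions, ObjectStore};

/// Hypothetical helper: fetch one chunk, remember the ETag reported
/// by the first sub-GET, and fail loudly if a later sub-GET reports
/// a different one (i.e. the object changed mid-download).
async fn get_chunk_consistent(
    store: &dyn ObjectStore,
    path: &Path,
    range: std::ops::Range<usize>,
    expected_etag: &mut Option<String>,
) -> object_store::Result<bytes::Bytes> {
    let opts = GetOptions {
        range: Some(range.into()),
        ..Default::default()
    };
    let result = store.get_opts(path, opts).await?;
    let etag = result.meta.e_tag.clone();

    match expected_etag {
        // First chunk: remember the version we started from.
        None => *expected_etag = etag,
        // Later chunks: the object must not have changed underneath us.
        Some(expected) => {
            if etag.as_deref() != Some(expected.as_str()) {
                return Err(object_store::Error::Precondition {
                    path: path.to_string(),
                    source: "object changed during chunked download".into(),
                });
            }
        }
    }
    result.bytes().await
}
```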
In which case, what did you think of my initial suggestion of adding this logic to `GetClient`? Also FWIW one could just fetch the first chunk instead of doing a separate HEAD request; so long as some of the requested range is satisfiable, it should be ok. I also suspect many users will already be issuing range requests, in which case we just chunk those if necessary.

Edit: actually, going back to the original description, the request is for files on the order of 100s of MB; an approach that returns in order runs the risk of pulling the entire file into memory... This feels problematic...
Can you request a range that is potentially larger than the file?
I think it depends on your concrete deployment/integration. Enabling chunking also increases cost, so I wouldn't do that by default anyway. Also, you can chunk even for very large files if you use a buffered-stream-like approach. That enables concurrency while limiting the maximum number of pre-fetched chunks.
I think that could work. I'm just a bit afraid that we have so many features fully baked into the built-in clients that it becomes impossible for people to write their own implementations/backends.
According to the HTTP spec, yes, but I have not tested it.
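To illustrate the first-chunk trick (my illustration, with the same hypothetical API assumptions as the sketches above): under RFC 7233, a range whose start lies within the object is satisfiable even if its end extends past EOF, so the first ranged GET can double as the size probe:

```rust
use object_store::{path::Path, GetOptions, ObjectStore};

/// Hypothetical size probe: fetch the first chunk with an ordinary
/// range request and read the full object size from the response
/// metadata (the Content-Range total), avoiding a separate HEAD.
async fn first_chunk_and_size(
    store: &dyn ObjectStore,
    path: &Path,
    chunk_size: usize,
) -> object_store::Result<(bytes::Bytes, usize)> {
    let opts = GetOptions {
        // Satisfiable as long as the object is non-empty, even when
        // the object is smaller than `chunk_size`.
        range: Some((0..chunk_size).into()),
        ..Default::default()
    };
    let result = store.get_opts(path, opts).await?;
    let size = result.meta.size; // total object size, not chunk length
    Ok((result.bytes().await?, size))
}
```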
Provided we encapsulate the functionality, we should be able to expose it much like we expose `coalesce_ranges`, without needing to expose `GetClient` itself.
Sounds like a fair deal to me. |
**Is your feature request related to a problem or challenge? Please describe what you are trying to do.**

When downloading large objects (>300 MB) using the `object_store` crate, I often hit timeouts with the default configuration (30-second connection timeout). Interestingly, when I increase the timeout, the download speed is actually lower (not sure if it's the same for everyone?).
**Describe the solution you'd like**

I am wondering if it makes sense to chunk a file into smaller ranges (say, 100MB each), download each range in parallel over a different connection, and reconcile them under the same interface.
**Describe alternatives you've considered**
Not sure if such a capability can be composed using the existing interfaces.
**Additional context**