Add ref store size threshold property to initiate purge of old streams #4610
Raised for @p-kimberley
For context, there are two issues here:
Firstly, I propose two additional settings be created, both of which will cause streams to be purged until the DB(s) are within limits:
Note: I take *store* to mean one LMDB env containing data for one feed. Reference data is a collection of stores. The store actually contains multiple DBs, so I will avoid your terminology to avoid confusion with how the code/LMDB is structured.

It will have to work slightly differently to what you have suggested, due to the way LMDB allocates disk but never frees it. The move to a store per feed has been good, but it actually makes things worse when it comes to freeing up data. Each feed store is an independent LMDB env, so loading a new stream of feed XYZ will either make the XYZ store grow to make room, or the XYZ store will remain unchanged because the stream can fit into space reclaimed in the store from previous purges of XYZ streams. All this assumes readsBlockWrites is on, to allow write txns to use reclaimed space. So if you do a big re-process on a feed, causing it to load a lot of streams, that will use up a lot of disk that can never be used by other feeds. For example, a store could grow to 1GB on disk but contain no data due to purges; only one feed can use this 1GB.

Your point 1 currently won't work, as purges won't free any space on disk. I'm going to look into making a compacted copy of the env and then swapping over. I think this will work to free space on disk, but I'm not sure how quick it is. It will also have to block all other writes to the store. Hopefully it could be a scheduled job, e.g. at a quiet time of day, or done at the end of the purge scheduled job.

I think point 2 (the high water mark % on the store) is OK. I just need to be sure I can determine from LMDB the % of the store that is free space. A load of a stream for feed XYZ will need to check the % free against the HWM and, if necessary, purge LRU streams from the XYZ store until it is below the HWM. This could also be scheduled.

I think we could also do with the means in Stroom to set limits on a per-feed basis, as you likely have some ref feeds that have much bigger streams than others.
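The purge-to-HWM step described above could be sketched roughly as follows. This is a hypothetical illustration, not Stroom's actual code: the class, record, and method names are all invented, and a real implementation would read stream sizes and last-access order from the store's metadata rather than an in-memory deque.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch: purge least-recently-used streams from one feed's
// store until the used space falls below a high-water-mark percentage of
// the store's on-disk size. Note that purging only reclaims pages inside
// the LMDB env; the file on disk does not shrink.
public class HwmPurgeSketch {

    record StreamEntry(long streamId, long sizeBytes) {}

    /** Returns the number of streams purged. {@code lruOrder} is oldest-first. */
    static int purgeToHighWaterMark(final Deque<StreamEntry> lruOrder,
                                    final long storeSizeBytes,
                                    final double highWaterMarkPercent) {
        long usedBytes = lruOrder.stream()
                .mapToLong(StreamEntry::sizeBytes)
                .sum();
        final long limit = (long) (storeSizeBytes * highWaterMarkPercent / 100);
        int purged = 0;
        while (usedBytes > limit && !lruOrder.isEmpty()) {
            // Drop the least-recently-used stream first.
            usedBytes -= lruOrder.removeFirst().sizeBytes();
            purged++;
        }
        return purged;
    }

    public static void main(final String[] args) {
        final Deque<StreamEntry> streams = new ArrayDeque<>();
        streams.add(new StreamEntry(1, 400));
        streams.add(new StreamEntry(2, 300));
        streams.add(new StreamEntry(3, 200));
        // 900 of 1000 bytes used; HWM 70% => limit 700 => stream 1 is purged.
        System.out.println(purgeToHighWaterMark(streams, 1000, 70)); // 1
        System.out.println(streams.peekFirst().streamId());          // 2
    }
}
```

The same check would run before each load for the feed (and optionally on a schedule), so a load only proceeds once the store is below the HWM.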
When a ref store reaches `stroom.pipeline.referenceData.lmdb.maxStoreSize`, loads will fail. It would be good if there were an additional threshold % property (e.g. 90%) such that, prior to a load, if the store size (as a % of `maxStoreSize`) is greater than this threshold, old streams are purged until the store is below the threshold.
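As a rough illustration of how the proposed property might sit alongside the existing one in Stroom's YAML config (`maxStoreSize` exists today; the `purgeThresholdPercent` name below is purely hypothetical, invented for this sketch):

```yaml
stroom:
  pipeline:
    referenceData:
      lmdb:
        # Existing property: hard cap on a ref store's size.
        maxStoreSize: "10G"
        # Hypothetical new property: before a load, if the store is above
        # this % of maxStoreSize, purge old streams until below it.
        purgeThresholdPercent: 90
```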