I doubt most systems will allow you to have ~10K files open at once (a typical default limit on open file descriptors is on the order of 1024), which more or less rules out just opening all the files and writing to them as needed.
As such, you'll probably need to create some sort of proxy-ish object that buffers the data for each file, and when a buffer exceeds some given size, opens the file, writes the data to disk, and closes it again.
I can see two fairly simple approaches to this. One would be to write most of the code yourself, using a `std::stringstream` as the buffer. The client streams to your object, which just passes everything through to the stringstream. Then you check whether the stringstream exceeds some length, and if so, you write its contents to disk and empty the stringstream.
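A minimal sketch of that first approach might look like this (the name `BufferedFile` and the 64 KiB threshold are just placeholders, not anything standard):

```cpp
#include <fstream>
#include <sstream>
#include <string>

class BufferedFile {
public:
    explicit BufferedFile(std::string path) : path_(std::move(path)) {}
    ~BufferedFile() { flush(); }               // don't lose the tail on destruction

    template <typename T>
    BufferedFile& operator<<(const T& value) {
        buffer_ << value;                      // just pass through to the stringstream
        if (buffer_.tellp() >= kFlushThreshold)
            flush();                           // buffer exceeded the limit: write it out
        return *this;
    }

    void flush() {
        if (buffer_.tellp() == 0)
            return;
        std::ofstream out(path_, std::ios::app);  // open, append, close
        out << buffer_.str();
        buffer_.str("");                          // empty the stringstream
        buffer_.clear();
    }

private:
    static constexpr std::streamoff kFlushThreshold = 64 * 1024;  // arbitrary size
    std::string path_;
    std::ostringstream buffer_;
};
```

Opening in append mode each time means the on-disk file just accumulates whatever the buffer held, and the destructor flushes any leftover data so short writes aren't silently dropped.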
The other approach would be to write your own file buffer object that implements `sync` to open the file, write the buffered data, and close the file again (where a normal file buffer would leave the file open the whole time).
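Here's a sketch of what that could look like as a `std::streambuf` subclass; the class name and the buffer size are invented for illustration, and `overflow` also has to be overridden so a full buffer triggers a flush:

```cpp
#include <cstddef>
#include <fstream>
#include <streambuf>
#include <string>

class ReopeningBuf : public std::streambuf {
public:
    explicit ReopeningBuf(std::string path) : path_(std::move(path)) {
        setp(buf_, buf_ + sizeof(buf_));       // set up the put area
    }
    ~ReopeningBuf() override { sync(); }       // flush whatever remains

protected:
    int_type overflow(int_type ch) override {  // put area is full: flush it first
        if (sync() == -1)
            return traits_type::eof();
        if (!traits_type::eq_int_type(ch, traits_type::eof()))
            sputc(traits_type::to_char_type(ch));
        return traits_type::not_eof(ch);
    }

    int sync() override {                      // open, write, close
        std::ptrdiff_t n = pptr() - pbase();
        if (n > 0) {
            std::ofstream out(path_, std::ios::app | std::ios::binary);
            if (!out.write(pbase(), n))
                return -1;
            pbump(-static_cast<int>(n));       // reset the put pointer
        }
        return 0;
    }

private:
    std::string path_;
    char buf_[64 * 1024];                      // in-memory buffer for this file
};
```

The nice part of this route is that clients can use a perfectly ordinary `std::ostream` on top of it (`ReopeningBuf rb("out.txt"); std::ostream os(&rb); os << "data";`), so the buffering is invisible to them.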
Then you'd store those in a `std::map` (or `std::unordered_map`) to let you do a lookup from the file name to the matching proxy object.
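Hypothetical usage, assuming the `BufferedFile` sketch above (the helper name `file_for` is made up):

```cpp
#include <memory>
#include <string>
#include <unordered_map>

std::unordered_map<std::string, std::unique_ptr<BufferedFile>> files;

BufferedFile& file_for(const std::string& name) {
    auto& slot = files[name];                  // default-constructs an empty slot
    if (!slot)
        slot = std::make_unique<BufferedFile>(name);  // lazily create the proxy
    return *slot;
}

// file_for("output_1234.txt") << "some data\n";
```

Holding the proxies by `unique_ptr` sidesteps the fact that a class with a stream member isn't copyable, and lazy creation means you only ever pay for the ~10K buffers that actually get written to.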