Testing has shown that the file backend is significantly faster for small concurrent accesses when there are multiple log files rather than just one; see the proof of concept in #40. There are several steps to this work, which will be an evolution/superset of the current backend rather than a new file backend:
- Make the target specification for the file backend refer to a subdirectory rather than a single file.
- Modify the log position tracking strategy: rather than extending the log and syncing to record the position on each write, track the next available log offset in memory and write it to the header on clean shutdown. If the offset is not available on startup, pessimistically assume that the next position is the end of the log.
- Add an integer to the region id to specify the log index.
- Add the ability to initialize multiple log files.
- Add a JSON parameter for the size increment by which to expand log files. The server will also use this for the initial allocation when the target is attached.
- Add the ability to extend log files once they reach a capacity threshold, using an asynchronous fallocate().
- Experiment with a signal handler as an example of how to shut down cleanly on Ctrl-C (not just via the shutdown RPC).