Home
After a short survey of the design and implementation of rqlite and actordb, TableDB with Raft replication will start from an NPL Raft implementation, which is (probably) the foundation of this project. Once this is finished, we can consider adding more features along the lines of actordb.
TableDB with Raft will be much like rqlite; the main difference is that TableDB replicates at the table (collection) level, while rqlite replicates the whole SQLite database (which may contain multiple tables), so the replication can sit at a higher layer.
A coarse roadmap:
- Implement Raft consensus in NPL, following the Raft paper and jraft:
  - Leader election
  - Log replication
  - Cluster membership changes
  - Log compaction
  - Client interaction
- Add a more abstract interface to TableDB to adapt it to the Raft consensus, with reference to rqlite.
- Test: verify the correctness and performance of the implementation, and fix issues.
In order to get a quick (within one month), full-featured, and correct NPL Raft implementation, it will be helpful to refer to an existing full-featured and correct implementation. After several days of digging into NPL and the various Raft implementations listed on the Raft consensus website, I chose jraft, a Java implementation, for several reasons:
- NPL lends itself to an OO coding style.
- jraft is full-featured, correct, and still maintained.
- jraft is straightforward and easy to understand, thanks to its clean Java OO style.
On the basis of the NPL Raft implementation, the TableDB Raft implementation will be much easier. But the implementation could differ considerably, as a comparison between rqlite and actordb shows:
- rqlite is simple: it uses the SQL statement itself as the Raft log entry.
- actordb is more complicated:
> Actors are replicated using the Raft distributed consensus protocol. Raft requires a write log to operate. Because our two engines are connected through the SQLite WAL module, Raft replication is a natural fit. Every write to the database is an append to WAL. For every append we send that data to the entire cluster to be replicated. Pages are simply inserted to WAL on all nodes. This means the leader executes the SQL, but the followers just append to WAL.
Because we don't have a SQLite WAL hook in the NPLRuntime, and we also want to keep TableDB's existing features, neither actordb's nor rqlite's implementation is directly feasible here. But we can borrow the consistency levels from rqlite, as sketched below.
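As a rough idea of what borrowing rqlite's none/weak/strong read consistency levels could look like, here is a minimal sketch; `RaftSqliteStore`, `readLocal`, `isLeader`, and `sendToLeader` are assumed names for illustration, not existing code:

```lua
-- Sketch only: rqlite-style read consistency levels.
-- readLocal/isLeader/sendToLeader are hypothetical helpers.
local RaftSqliteStore = {};
local ConsistencyLevel = { NONE = 0, WEAK = 1, STRONG = 2 };

function RaftSqliteStore:find(query, level)
    if level == ConsistencyLevel.NONE then
        -- serve from the local replica; the result may be stale
        return self:readLocal(query);
    elseif level == ConsistencyLevel.WEAK then
        -- only the node that currently believes it is leader serves the read
        if not self:isLeader() then
            return nil, "not leader";
        end
        return self:readLocal(query);
    else
        -- STRONG: route the read through the Raft log for linearizability
        return self:sendToLeader(query);
    end
end
```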
Even so, it is not hard: we utilize the msg in IORequest:Send, and the log entry looks like below:
```lua
-- One log entry records a single DB operation on a collection.
function RaftLogEntryValue:new(query_type, collection, query)
    local o = {
        query_type = query_type,
        collection = collection:ToData(),
        query = query,
    };
    setmetatable(o, self);
    self.__index = self;
    return o;
end
```
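Since commit (below) reconstructs the entry with `RaftLogEntryValue:fromBytes`, the (de)serialization pair might look roughly like this; using commonlib.serialize and NPL.LoadTableFromString is an assumption, and the real code may choose a different encoding:

```lua
-- Sketch only: serialize the entry table to a string for the Raft log.
function RaftLogEntryValue:toBytes()
    return commonlib.serialize(self);
end

-- Rebuild an entry object from the bytes stored in the log.
function RaftLogEntryValue:fromBytes(data)
    local o = NPL.LoadTableFromString(data) or {};
    setmetatable(o, self);
    self.__index = self;
    return o;
end
```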
and commit in the state machine looks like below:
```lua
--[[
 * Commit the log data at the given logIndex
 * @param logIndex the log index in the logStore
 * @param data the serialized RaftLogEntryValue
]]--
function RaftTableDB:commit(logIndex, data)
    -- data is logEntry.value
    local raftLogEntryValue = RaftLogEntryValue:fromBytes(data);
    NPL.load("(gl)script/ide/System/Database/IOThread.lua");
    local IOThread = commonlib.gettable("System.Database.IOThread");
    local collection = IOThread:GetSingleton():GetServerCollection(raftLogEntryValue.collection);
    NPL.load("(gl)script/ide/System/Database/IORequest.lua");
    local IORequest = commonlib.gettable("System.Database.IORequest");
    -- replay the client's query on a dedicated IOThread
    IORequest:Send(raftLogEntryValue.query_type, collection, raftLogEntryValue.query);
    self.commitIndex = logIndex;
end
```
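Routing the committed entry through IORequest:Send on a dedicated IOThread means followers apply writes through the same code path that local clients use, which keeps TableDB's existing semantics intact.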
The client interface stays unchanged: we can provide a script/TableDB/RaftSqliteStore.lua which sends the log entry above to the Raft cluster in each interface method, and which could also take the consistency levels into account. A sketch of one such method follows.
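For instance, one method of RaftSqliteStore might look roughly like this; `updateOne` is just an illustrative example, and `sendEntry` is a hypothetical helper that forwards the entry to the Raft cluster (see the next sketch):

```lua
-- Sketch of a single RaftSqliteStore interface method.
function RaftSqliteStore:updateOne(query, update, callbackFunc)
    local entry = RaftLogEntryValue:new("updateOne", self.collection,
        { query = query, update = update });
    -- hypothetical helper: forward the entry to the Raft cluster
    self:sendEntry(entry, callbackFunc);
end
```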
The original interfaces take callbacks; how to support these in cluster mode is still an open question. One possible approach is sketched below.
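One possible (assumption-level, not decided) approach: keep a table of pending callbacks keyed by a request id, send the id along with the entry, and fire the callback when the cluster's reply carrying that id comes back. None of these names exist yet, and the leader address handling is purely illustrative:

```lua
-- Correlate asynchronous cluster replies with their client callbacks.
local pendingCallbacks = {};
local nextRequestId = 0;

function RaftSqliteStore:sendEntry(entry, callbackFunc)
    nextRequestId = nextRequestId + 1;
    pendingCallbacks[nextRequestId] = callbackFunc;
    -- hypothetical NPL activation of a handler file on the Raft leader
    NPL.activate(self.leaderAddress .. "script/TableDB/RaftClientHandler.lua",
        { requestId = nextRequestId, entry = entry:toBytes() });
end

-- reply handler: look up and fire the stored callback for this request
function RaftSqliteStore:onReply(msg)
    local cb = pendingCallbacks[msg.requestId];
    if cb then
        pendingCallbacks[msg.requestId] = nil;
        cb(msg.err, msg.data);
    end
end
```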
Like rqlite, we use SQLite's Online Backup API to take snapshots.