Throughput Test #7
Phase 1
To boost the throughput, see Plan for Phase 2.
If it is that slow, I think we need to find the real spot that slows the system down before moving on to other plans. How did you test the throughput? Is each record a transaction? We can send the next query without waiting for the previous one to return; what is the throughput then? That is closer to the real situation, where all requests are processed independently of each other in parallel.
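For contrast, here is a minimal sketch of the fully serialized pattern this question alludes to, where each insert waits for the previous reply before the next request is sent. The helper name insertSerially is illustrative and not part of the project; it only uses the insertOne callback API that appears in the test code below.

    -- Hypothetical sketch: one insert at a time, each waiting for the previous reply.
    -- insertSerially and its arguments are illustrative names, not project APIs.
    local function insertSerially(db, total, onDone)
        local i = 0;
        local function insertNext()
            i = i + 1;
            if(i > total) then
                if(onDone) then onDone(); end
                return;
            end
            db.insertNoIndex:insertOne(nil, {count = i, data = math.random()}, function(err, data)
                -- the next request is only sent after this reply arrives
                insertNext();
            end)
        end
        insertNext();
    end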
Yes, via the insertOne method; the code is below. It seems that much of the time is spent waiting for the reply between roundtrips (10 records each). I'll retest with a larger timeout and more concurrent jobs to see if it gets better.

    function TestInsertThroughputNoIndex()
        NPL.load("(gl)script/ide/System/Database/TableDatabase.lua");
        local TableDatabase = commonlib.gettable("System.Database.TableDatabase");
        -- this will start both db client and db server if not started yet.
        local db = TableDatabase:new():connect("temp/test_raft_database/");

        db.insertNoIndex:makeEmpty({});
        db.insertNoIndex:flush({});

        NPL.load("(gl)script/ide/Debugger/NPLProfiler.lua");
        local npl_profiler = commonlib.gettable("commonlib.npl_profiler");
        npl_profiler.perf_reset();
        npl_profiler.perf_begin("tableDB_BlockingAPILatency", true)

        local total_times = 10000; -- 10000 non-indexed insert operations
        local max_jobs = 10; -- concurrent jobs count

        NPL.load("(gl)script/ide/System/Concurrent/Parallel.lua");
        local Parallel = commonlib.gettable("System.Concurrent.Parallel");
        local p = Parallel:new():init()
        p:RunManyTimes(function(count)
            db.insertNoIndex:insertOne(nil, {count=count, data=math.random()}, function(err, data)
                if(err) then
                    -- echo({err, data});
                end
                p:Next();
            end)
        end, total_times, max_jobs):OnFinished(function(total)
            npl_profiler.perf_end("tableDB_BlockingAPILatency", true)
            log(commonlib.serialize(npl_profiler.perf_get(), true));
        end);
    end
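For the retest mentioned above, the only knobs that need to change in this script are the two locals; the values below reflect the 100-job run described in a later comment and are shown only as an illustration:

    local total_times = 10000; -- non-indexed insert operations
    local max_jobs = 100;      -- concurrent in-flight requests, up from 10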
OK. The round trip is also taking too much time; figure this out.
After I fixed the memory leak in RPC and raised the concurrent jobs to 100:

Roundtrip
The roundtrip is about 21.96 ms (10000 records / 219.625 s). The main cost is the redundant log messages and the log consistency check caused by replication. If we remove the term check in Raft, the roundtrip can be brought down to 7.61 ms (10000 records / 76.124 s).

Reason
As described above, the redundant log messages and the log consistency check carry a large performance penalty. They can lag the consistency of the node in the cluster, which makes the performance even worse as time goes on.

A little about the Raft implementation

Improvement
The log consistency check is essential and necessary, but ...

Throughput
Besides the above reason, because of the lag of the node state in the cluster, parallel send request( ...
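For reference, converting those figures to throughput (simple division of the numbers above, not a new measurement): 10000 records / 219.625 s ≈ 45.5 inserts per second with the full log consistency check, and 10000 records / 76.124 s ≈ 131.4 inserts per second with the term check removed, i.e. roughly a 2.9x difference.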
OK.