The TPS of Quorum #119
Operations are, in the majority of cases, I/O (disk)-bound, and in some cases CPU-bound. 137 sounds pretty low. Can you please share details about how you're benchmarking (are you sending private or public transactions, what are the contents of your transactions, etc.) and how fast the hard drive is (e.g. …)?
@bigstar119 please publish your benchmark.
Hi, thank you for the answer. I deployed 3 Quorum nodes using Raft consensus. Next Monday (2017-05-22) I will upload the detailed resources, including:
Has anyone run the same test as me? What were your results? How many transactions per second did you get in your test case? Can you share some details? Thank you.
@bigstar119 I'm running some tests currently and will share as soon as I get results.
@bigstar119 So you're aware, we added another API endpoint called sendTransactionAsync which does not return the transaction hash immediately (but can push it to a callback URL): https://github.com/jpmorganchase/quorum/releases/tag/v1.0.2 This should significantly increase performance for sequential sending of private transactions (i.e. per thread), since it does not wait for the recipient Constellation nodes to acknowledge receipt before returning. Barring CPU/thread starvation, it should increase your numbers significantly.
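For anyone wiring this up, a rough sketch of the call over raw JSON-RPC might look like the following. The endpoint, accounts, and Constellation key are placeholders, and the exact parameter shape, in particular the callbackUrl field, is an assumption based on the release-notes description, not confirmed API documentation:
// Hypothetical sketch: fire an asynchronous private transaction over raw JSON-RPC.
// Assumes a Quorum node on 127.0.0.1:22000 exposing eth_sendTransactionAsync.
const http = require('http')
const payload = JSON.stringify({
  jsonrpc: '2.0',
  id: 1,
  method: 'eth_sendTransactionAsync',
  params: [{
    from: '0xSENDER_ACCOUNT',                   // placeholder unlocked account
    to: '0xRECIPIENT_ACCOUNT',
    privateFor: ['CONSTELLATION_PUBLIC_KEY='],  // private transaction recipients
    callbackUrl: 'http://myhost:9000/txhash'    // assumed: node POSTs the tx hash here later
  }]
})
const req = http.request({
  host: '127.0.0.1',
  port: 22000,
  method: 'POST',
  headers: { 'Content-Type': 'application/json' }
}, (res) => {
  // the response returns before the recipient Constellation nodes acknowledge receipt
  res.on('data', (d) => console.log(d.toString()))
})
req.write(payload)
req.end()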
I just ran a test based on public transactions of wei: 7 nodes on 7 AWS spot instances. I set up a separate machine and sent transactions sequentially to node 1, which reuses the default account (unlock 0). I use the eth.sendTransaction call, which is fine; async only helps for private txs. My load test tries to push load evenly, scheduling sends through time (without coordinated omission correction, but that's fine; see the sketch below). Results: even 100 tx/s (from 1 account to new accounts) loads the cluster, with CPU spikes from 30% to 150-300% on each node at every committed block. txpool.status reports spikes in pending txs, and node 1 breaks after a while due to tx pool overload, hitting the pending tx limit of 4096 while a single core shows a long CPU spike. It can take more with a decreased blockmaker window, but not much.
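An open-loop sender of the kind described above might look roughly like this sketch (endpoint, account, and rate are illustrative, not the actual harness used):
// Illustrative open-loop load generator: send at a fixed rate regardless of
// response latency, i.e. without waiting for the previous call to finish.
const Web3 = require('web3')
const web3 = new Web3('http://127.0.0.1:22000') // assumed local node endpoint
const RATE = 100                                // target transactions per second
const FROM = '0xSENDER_ACCOUNT'                 // placeholder unlocked account
let sent = 0
const timer = setInterval(() => {
  // fire-and-forget so a slow response never delays the schedule
  web3.eth.sendTransaction({ from: FROM, to: web3.utils.randomHex(20), value: 100 })
    .catch((err) => console.error(err.message))
  if (++sent >= 10000) clearInterval(timer)
}, 1000 / RATE)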
pprof shows most of the time spent in applyTransaction, keccak hashing, and internal state updates.
Unfortunately, each vCPU on EC2 is fairly slow, and geth benefits much more from faster clock speeds than from more cores because of its sequential nature. (Constellation does parallelize its operations, so if you're sending asynchronous private transactions, you should benefit somewhat from having more cores.) It's definitely worth looking more at performance on many-slow-core setups, but it may ultimately be something that has to be addressed in geth itself.
@patrickmn Totally agree; I was just interested in the theoretical average performance cap. BTW, 60% of CPU goes to RLP encoding (rlp.EncodeToBytes) and Trie (Patricia Merkle Tree) updates.
@patrickmn In the release notes it states that eth_sendTransactionAsync is temporary. Once raw transaction signing is supported, do you realistically envisage getting rid of it? I'm thinking out loud about whether it would be worthwhile adding support for it in web3j-quorum.
@bigstar119 Yes, we do plan to support sendRawTransaction. The tricky part is that it's the identifier returned from Constellation that needs to be signed, not the original payload. Hopefully we will have an easy approach for this in the next release. @conor10 Yes, eventually it will be deprecated in favor of something that doesn't force you to feed the plaintext of your transaction to the geth node just to have it feed it to Constellation to get the identifier. Our plan is to make libraries available that call out to Constellation or geth respectively only when necessary, in the same vein as what's happening in upstream Ethereum.
This problem is about concurrency safety. The problem link: ethereum/go-ethereum#2950. By the way, in Parity, the Rust Ethereum client, this problem shouldn't happen: all RPC methods are safe to be called concurrently.
If you need to send concurrent requests with a single originating address, you can manage the wallet nonce manually. Largely, it's not a client issue.
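As a minimal sketch of what that looks like with web3 (account and endpoint are placeholders): fetch the account's transaction count once, then assign and increment the nonce locally for each concurrent send:
// Sketch: client-side nonce management, so concurrent sends from one account
// don't race the node's default nonce assignment.
const Web3 = require('web3')
const web3 = new Web3('http://127.0.0.1:22000') // assumed node endpoint
const FROM = '0xSENDER_ACCOUNT'                 // placeholder unlocked account

async function sendBatch(to, count) {
  // 'pending' includes txs already in the pool, so we start above them
  let nonce = await web3.eth.getTransactionCount(FROM, 'pending')
  const sends = []
  for (let i = 0; i < count; i++) {
    // every request carries an explicit, locally incremented nonce
    sends.push(web3.eth.sendTransaction({ from: FROM, to, value: 1, nonce: nonce++ }))
  }
  return Promise.all(sends)
}

sendBatch('0xRECIPIENT_ACCOUNT', 10).then(() => console.log('done'))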
There were some concurrent nonce handling issues in go-ethereum, but they were fixed a few releases ago. Please update Quorum accordingly if it has not pulled in those updates.
@karalabe Is ethereum/go-ethereum@ea11f7d the commit with that fix?
@sitano Based on your results, it does not seem reasonable to expect 100 TPS in production. I really think your results represent a good stress test of the capabilities of the system, using AWS spot instances with plenty of power. What do you think a better, more reasonable expectation is for TPS using Quorum as it is today? As a simple benchmark, I tested sending 1500 transactions each from 3 separate accounts to 3 nodes running on one VM instance (the 7nodes example deployment in Azure). Each transaction process was run on a separate thread, and the HTTP RPC calls were made in parallel with a manually incremented nonce passed into the send object for each account. I averaged ~12.5 tx/s for 4500 total transactions per benchmark. I've pasted the scripts I wrote this morning to run a crude benchmark below. I welcome any thoughts or improvements to the benchmarking methods. Many thanks.
// manager.js used to fork the processes and run on each thread
const Promise = require('bluebird')
const fork = require('child_process').fork
// one forked worker process per sending account
const workers = {
  0: fork('./sendTransaction.js'),
  1: fork('./sendTransaction.js'),
  2: fork('./sendTransaction.js')
}
const accounts = [{
  account: "REPLACE WITH THE SENDING ACCOUNT",
  rpc: "http://0.0.0.0:22000" // NOTE: actual IPs used were remote hosts
}, {
  account: "REPLACE WITH THE SENDING ACCOUNT",
  rpc: "http://0.0.0.0:22001"
}, {
  account: "REPLACE WITH THE SENDING ACCOUNT",
  rpc: "http://0.0.0.0:22003"
}]
// dispatch one account's workload to a worker; resolve when it reports back
function sendWork({ worker, fromAccount, rpc }) {
  return new Promise((resolve, reject) => {
    try {
      workers[worker].on('message', (msg) => {
        return resolve(true)
      })
      workers[worker].on('error', (error) => {
        console.log('error', error)
      })
      workers[worker].send({ fromAccount, rpc })
    } catch (error) {
      return reject(error)
    }
  })
}
console.time('sendTransaction-benchmark') // start the timer once, not per account
Promise.resolve(accounts).map((data, i) => {
  const { account, rpc } = data
  return sendWork({
    worker: i,
    fromAccount: account,
    rpc
  })
}).then(() => {
  console.timeEnd('sendTransaction-benchmark')
  console.log('finished')
  process.exit(0)
}).catch((error) => {
  console.log(error)
  process.exit(1)
})

// sendTransaction.js script used to process the transactions
const Web3 = require('web3')
const Promise = require('bluebird')
const NUM_TX_PER_ACCOUNT = 500
// Receiving Accounts
const RECEIVING_ACCOUNTS = [
  "0x8520de2650e91063882a2f45dD16CC80224BC451",
  "0x7BdD73C8A76B67B2E78dDE46BD18468341063Ed5",
  "0xAf2554cBd0f1E6ecEd368A206a8714bdE162327a",
]
// send `counter` transactions sequentially, incrementing the nonce each time
function benchSendTransaction({ web3, from, to, nonce, counter }) {
  return new Promise((resolve, reject) => {
    if (counter === 0) {
      return resolve(true)
    } else {
      // console.time(`eth.sendTransaction, nonce: ${nonce}`)
      web3.eth.sendTransaction({ from, to, value: 1e2, nonce })
        // .once('transactionHash', (txHash) => { console.log('txHash', txHash) })
        // .once('receipt', (receipt) => { console.log('receipt', receipt) })
        // .on('confirmation', (confirmation, receipt) => { console.log('confirmation, receipt', confirmation, receipt) })
        .on('error', (error) => { return reject(error) })
        .then((receipt) => {
          // console.timeEnd(`eth.sendTransaction, nonce: ${nonce}`)
          return resolve(benchSendTransaction({
            web3,
            from,
            to,
            nonce: nonce + 1,
            counter: counter - 1
          }))
        })
    }
  })
}
process.on('message', (msg) => {
  const { fromAccount, rpc } = msg
  const web3 = new Web3(rpc)
  let nonce
  Promise.resolve(web3.eth.getTransactionCount(fromAccount)).then((count) => {
    nonce = count
    console.time('eth.sendTransaction') // one timer per worker, started once
    return RECEIVING_ACCOUNTS
  }).map((toAccount, i) => {
    // give each receiving-account chain a disjoint nonce range so the three
    // parallel chains never reuse a nonce from the same sending account
    return benchSendTransaction({
      web3,
      from: fromAccount,
      to: toAccount,
      nonce: nonce + i * NUM_TX_PER_ACCOUNT,
      counter: NUM_TX_PER_ACCOUNT
    })
  }).then((finished) => {
    console.timeEnd('eth.sendTransaction')
    process.send({ finished: true })
  }).catch((error) => {
    console.log('error', error)
    process.send({ error: true })
  })
})
Hi @Ryanmtate! |
Thanks for the quick response @jpmsam! I can confirm: polling the txpool, the worst I've seen is 5 pending transactions so far. I'll look into using a load testing tool for better results.
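For reference, a simple txpool poller over JSON-RPC might look like the following sketch (endpoint and interval are illustrative; the node needs the txpool API module enabled):
// Sketch: poll geth's txpool_status and log pending/queued transaction counts.
const http = require('http')

function txpoolStatus() {
  const body = JSON.stringify({ jsonrpc: '2.0', id: 1, method: 'txpool_status', params: [] })
  const req = http.request({
    host: '127.0.0.1', // assumed node RPC endpoint
    port: 22000,
    method: 'POST',
    headers: { 'Content-Type': 'application/json' }
  }, (res) => {
    let data = ''
    res.on('data', (chunk) => { data += chunk })
    res.on('end', () => {
      const { result } = JSON.parse(data)
      // geth returns hex quantities, e.g. { pending: '0x10', queued: '0x0' }
      console.log('pending:', parseInt(result.pending, 16), 'queued:', parseInt(result.queued, 16))
    })
  })
  req.write(body)
  req.end()
}

setInterval(txpoolStatus, 1000) // sample once per second during a run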
So, how do you run a TPS benchmark? @bigstar119 Thanks!
Here is a great walkthrough for conducting your own TPS benchmark: https://hackernoon.com/quorum-stress-test-1-140-tps-792f39d0b43f |
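If you just want a rough observed-TPS number without a full harness, you can also read it off the chain itself by summing transaction counts over a block range and dividing by the timestamp span. A minimal sketch (block range and endpoint are placeholders):
// Sketch: estimate observed TPS from on-chain data.
const Web3 = require('web3')
const web3 = new Web3('http://127.0.0.1:22000') // assumed node endpoint

async function measureTps(firstBlock, lastBlock) {
  let txCount = 0
  for (let n = firstBlock; n <= lastBlock; n++) {
    txCount += await web3.eth.getBlockTransactionCount(n)
  }
  const start = await web3.eth.getBlock(firstBlock)
  const end = await web3.eth.getBlock(lastBlock)
  // NOTE: Quorum's Raft consensus records block timestamps in nanoseconds,
  // so scale the span accordingly on a Raft chain
  const seconds = end.timestamp - start.timestamp
  return txCount / seconds
}

measureTps(100, 200).then((tps) => console.log('observed TPS:', tps))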
Hi, I want to know how many transactions Quorum can process per second.
I built a 3-node test environment; the machine specs are:
CPU: 2 Intel(R) Xeon(R) CPU E5-2640 v2 @ 2.00GHz
Mem: 4GB
Disk: 80GB
In my test, the top TPS was 137. Is that result reasonable? And do you have any method to increase the TPS, e.g. changing some parameters?