reduce bulk size in database writer, before "Prepared statement contains too many placeholders" appears #733
The database writer works asynchronously. There is a buffer of 1000 elements waiting to be written to the DB. If the buffer is more than 50% full, you get the message "query log writer is too slow, write duration: 0 ms channel_len=502". If the buffer is full (1000 elements), new entries are dropped. We use SQL batch inserts with prepared statements, and I assume we hit some limitation.
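For illustration, a minimal Go sketch of the writer behavior described above (bounded channel, warning above 50% fill, dropping entries when the buffer is full). Names like `logEntry`, `asyncWriter`, and the log messages are hypothetical and not Blocky's actual code:

```go
package querylog

import "log"

// logEntry is a placeholder for one query log record.
type logEntry struct {
	Question string
	Answer   string
}

const bufferSize = 1000 // capacity of the write buffer

// asyncWriter drains entries from a bounded channel and writes them to the DB.
type asyncWriter struct {
	ch chan logEntry
}

func newAsyncWriter() *asyncWriter {
	w := &asyncWriter{ch: make(chan logEntry, bufferSize)}
	go w.loop()
	return w
}

// Enqueue adds an entry without blocking; if the buffer is full, the entry is dropped.
func (w *asyncWriter) Enqueue(e logEntry) {
	if len(w.ch) > bufferSize/2 {
		log.Printf("query log writer is too slow, channel_len=%d", len(w.ch))
	}
	select {
	case w.ch <- e:
	default:
		log.Println("buffer full, dropping query log entry")
	}
}

// loop is the background goroutine that performs the actual DB writes.
func (w *asyncWriter) loop() {
	for e := range w.ch {
		_ = e // write e to the database (batch insert in the real writer)
	}
}
```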
I think we can limit the bulk size to reduce the data load, but this will lead to more insert statements (see the sketch below). I'm not sure if it will solve the problem in your case; I think your database is too slow to handle this load.
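Limiting the bulk size essentially means chunking the batch before building the INSERT, so the number of `?` placeholders stays under MySQL's per-statement limit (65,535). A hedged sketch, not Blocky's implementation; the table name, columns, and `insertChunk` helper are illustrative assumptions:

```go
package querylog

import (
	"database/sql"
	"fmt"
	"strings"
)

// maxPlaceholders stays well below MySQL's limit of 65,535 placeholders per statement.
const maxPlaceholders = 60000
const columnsPerRow = 4 // e.g. request_ts, client, question, answer

// insertAll splits rows into chunks so each INSERT stays under the placeholder limit.
func insertAll(db *sql.DB, rows [][]interface{}) error {
	chunkSize := maxPlaceholders / columnsPerRow
	for start := 0; start < len(rows); start += chunkSize {
		end := start + chunkSize
		if end > len(rows) {
			end = len(rows)
		}
		if err := insertChunk(db, rows[start:end]); err != nil {
			return err
		}
	}
	return nil
}

// insertChunk builds one multi-row INSERT for the given chunk and executes it.
func insertChunk(db *sql.DB, rows [][]interface{}) error {
	placeholders := make([]string, 0, len(rows))
	args := make([]interface{}, 0, len(rows)*columnsPerRow)
	for _, r := range rows {
		placeholders = append(placeholders, "(?, ?, ?, ?)")
		args = append(args, r...)
	}
	query := fmt.Sprintf(
		"INSERT INTO log_entries (request_ts, client, question, answer) VALUES %s",
		strings.Join(placeholders, ","),
	)
	_, err := db.Exec(query, args...)
	return err
}
```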
@benchonaut Out of curiosity and for future stress tests: how many requests per second are handled by your instance?
Should psql be faster than MySQL?
400-800
Blocky supports only MySQL, Postgres, CSV, and console; no Redis. I can't say that Postgres is faster, but I use Postgres with Blocky and have no problems with this DB. My load is definitely lower than yours ;)
Logging to Redis is still on my to-do list but may take some time, as #375 and #632 are prioritized at the moment.
When there are many incoming DNS requests, there is no option to set a timeout for the logger (thus flooding stdout); I already switched from CSV to MySQL.
also "Prepared statement contains too many placeholders" appears under heavy load and dumps a very long single line to stdout
log: