
reduce bulk size in database writer, before "Prepared statement contains too many placeholders" appears #733

Closed
benchonaut opened this issue Nov 8, 2022 · 7 comments · Fixed by #762
Labels
🔨 enhancement New feature or request

@benchonaut

When many DNS requests come in, there is no option to set a timeout for the logger (thus flooding stdout); I already switched from csv to mysql.

Also, "Prepared statement contains too many placeholders" appears under heavy load and dumps a very long single line to stdout.

log:

2022-11-08 20:53:35]  WARN  query_logging_resolver: query log writer is too slow, write duration: 0 ms channel_len=502
2022-11-08 20:53:35]  WARN  query_logging_resolver: query log writer is too slow, write duration: 0 ms channel_len=501
2022-11-08 20:53:45] ERROR  server: error on processing request:resolution was not successful, used resolvers: 'upstream 'https://dns.digitale-gesellschaft.ch/dns-query'' and 'upstream 'tcp-tls:fdns1.dismail.de'' errors: [can't resolve request via upstream server https://185.95.218.43:443/dns-query: http return code should be 200, but received 502 can't resolve request via upstream server [2a02:c205:3001:4558::1]:853: dial tcp [2a02:c205:3001:4558::1]:853: connect: cannot assign requested address]

2022-11-08 20:53:48]  INFO /src/querylog/database_writer.go:136 Error 1390: Prepared statement contains too many placeholders
[361.971ms] [rows:0] INSERT INTO `log_entries` (`request_ts`,`client_ip`,`client_name`,`duration_ms`,`reason`,`response_type`,`question_type`,`question_name`,`effective_tldp`,`answer`,`response_code`) VALUES ('2022-11-08 20:53:29.486','127.0.0.1','127.0.0.1',17,'RESOLVED (tcp+udp:4.2.2.4)','RESOLVED','A','facebook.com','facebook.com','A (157.240.9.35)','NOERROR'),('2022-11-08 20:53:29.502','127.0.0.1','127.0.0.1',3,'RESOLVED (tcp+udp:1.1.1.1)','RESOLVED','A','youtu.be','youtu.be','A (142.250.179.142)','NOERROR'),   ... ##  more entries like this

@0xERR0R
Owner

0xERR0R commented Nov 9, 2022

The database writer works asynchronously. There is a buffer of 1,000 elements to be written to the db. If the buffer is more than 50% full, you get the message "query log writer is too slow, write duration: 0 ms channel_len=502". If this buffer is full (1,000 elements), new entries are dropped.

We use SQL batch inserts with prepared statements, and I assume we hit some limitation there.
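
To illustrate the mechanism, here is a minimal sketch with hypothetical names (not blocky's actual code): entries go into a bounded channel, a background goroutine drains them into batches, a warning is printed once the channel is more than half full, and new entries are dropped once it is full.

```go
package querylog

import (
	"log"
	"time"
)

// logEntry stands in for the real query-log entry type (sketch only).
type logEntry struct {
	ClientIP     string
	QuestionName string
}

// asyncWriter buffers entries in a channel so the resolver never blocks on the db.
type asyncWriter struct {
	ch chan *logEntry
}

func newAsyncWriter(bufSize int) *asyncWriter {
	w := &asyncWriter{ch: make(chan *logEntry, bufSize)}
	go w.loop()
	return w
}

// Write warns when the buffer is more than half full and drops the entry
// when the buffer is completely full, instead of blocking the caller.
func (w *asyncWriter) Write(e *logEntry) {
	if len(w.ch) > cap(w.ch)/2 {
		log.Printf("query log writer is too slow, channel_len=%d", len(w.ch))
	}
	select {
	case w.ch <- e:
	default:
		log.Println("query log buffer is full, dropping entry")
	}
}

// loop collects entries and periodically flushes them as one batch INSERT.
func (w *asyncWriter) loop() {
	batch := make([]*logEntry, 0, 128)
	ticker := time.NewTicker(time.Second)
	defer ticker.Stop()

	for {
		select {
		case e := <-w.ch:
			batch = append(batch, e)
		case <-ticker.C:
			if len(batch) > 0 {
				// flushToDB(batch) would issue the batch INSERT here (omitted).
				batch = batch[:0]
			}
		}
	}
}
```

With `newAsyncWriter(1000)` this matches the buffer size and the half-full warning described above.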

@0xERR0R
Owner

0xERR0R commented Nov 9, 2022

I think we can limit the bulk size to reduce the data load, but this will lead to more insert statements. I'm not sure if it will solve the problem in your case; I think your database is too slow to handle this load.
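
For context: MySQL raises error 1390 when a single prepared statement contains more than 65,535 placeholders; with the 11 columns in the INSERT shown above, that limit is reached at roughly 65,535 / 11 ≈ 5,957 rows per statement. A minimal sketch of the chunking idea, assuming a gorm-based writer and a hypothetical writeBatch helper:

```go
package querylog

import "gorm.io/gorm"

// logEntry mirrors a few of the columns from the INSERT in the error message (sketch only).
type logEntry struct {
	RequestTS    string
	ClientIP     string
	QuestionName string
	Answer       string
	// ... remaining columns omitted
}

// writeBatch splits the pending entries into chunks so a single INSERT never
// exceeds the placeholder limit (e.g. chunkSize = 1000 stays far below
// 65,535 / 11 ≈ 5,957 rows).
func writeBatch(db *gorm.DB, entries []logEntry, chunkSize int) error {
	for start := 0; start < len(entries); start += chunkSize {
		end := start + chunkSize
		if end > len(entries) {
			end = len(entries)
		}

		// gorm turns a slice into one multi-row INSERT per call.
		chunk := entries[start:end]
		if err := db.Create(&chunk).Error; err != nil {
			return err
		}
	}

	return nil
}
```

gorm also offers CreateInBatches, which performs the same splitting internally, e.g. `db.CreateInBatches(entries, 1000)`.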

@0xERR0R 0xERR0R added the 🔨 enhancement New feature or request label Nov 9, 2022
@0xERR0R 0xERR0R changed the title add log writer timeout or make it asynchronous , flush logs before "Prepared statement contains too many placeholders" appears reduce bulk size in database writer, before "Prepared statement contains too many placeholders" appears Nov 9, 2022
@kwitsch
Collaborator

kwitsch commented Nov 9, 2022

@benchonaut Out of curiosity and for future stress tests: how many requests per second are handled by your instance?

@benchonaut
Author

> I think we can limit the bulk size to reduce the data load, but this will lead to more insert statements. I'm not sure if it will solve the problem in your case; I think your database is too slow to handle this load.

Should psql be faster than mysql?
(Could it log to redis with auto-expiration?)

@benchonaut
Author

> @benchonaut Out of curiosity and for future stress tests: how many requests per second are handled by your instance?

400-800

@0xERR0R
Owner

0xERR0R commented Nov 9, 2022

> I think we can limit the bulk size to reduce the data load, but this will lead to more insert statements. I'm not sure if it will solve the problem in your case; I think your database is too slow to handle this load.

> Should psql be faster than mysql? (Could it log to redis with auto-expiration?)

Blocky supports only mysql, postgres, csv and console. No redis.

I can't say that postgres is faster, but I use postgres with blocky and I have no problems with this db. My load is definitely lower than yours ;)

@kwitsch
Collaborator

kwitsch commented Nov 9, 2022

> I think we can limit the bulk size to reduce the data load, but this will lead to more insert statements. I'm not sure if it will solve the problem in your case; I think your database is too slow to handle this load.

> Should psql be faster than mysql?
> (Could it log to redis with auto-expiration?)

Logging to redis is still on my to-do list, but it may take some time as #375 and #632 are prioritized at the moment.

@0xERR0R 0xERR0R added this to the 0.21 milestone Nov 16, 2022
@0xERR0R 0xERR0R self-assigned this Nov 23, 2022
kwitsch pushed a commit that referenced this issue Dec 2, 2022
* feat: reduce bulk size in database writer (#733)

* test: added tests with sqlmock to cover the database migration
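
For reference, a rough idea of what an sqlmock-based test of the writer can look like (a sketch using github.com/DATA-DOG/go-sqlmock with hypothetical column names, not the actual test from the commit):

```go
package querylog_test

import (
	"regexp"
	"testing"

	sqlmock "github.com/DATA-DOG/go-sqlmock"
)

func TestInsertLogEntry(t *testing.T) {
	db, mock, err := sqlmock.New()
	if err != nil {
		t.Fatalf("opening sqlmock: %v", err)
	}
	defer db.Close()

	// Expect a single-row INSERT; a chunked writer would register one
	// expectation per chunk instead.
	mock.ExpectExec(regexp.QuoteMeta("INSERT INTO `log_entries`")).
		WithArgs("127.0.0.1", "facebook.com").
		WillReturnResult(sqlmock.NewResult(0, 1))

	_, err = db.Exec(
		"INSERT INTO `log_entries` (`client_ip`,`question_name`) VALUES (?,?)",
		"127.0.0.1", "facebook.com",
	)
	if err != nil {
		t.Fatalf("exec: %v", err)
	}

	if err := mock.ExpectationsWereMet(); err != nil {
		t.Errorf("unmet expectations: %v", err)
	}
}
```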