Slow performance #35
Thanks for raising the issue @coder-mike. You're right, the performance is really bad under load. The library fires an HTTP request per log entry, which is not efficient. The recommended way to collect logs is via the Datadog agent, and I'd recommend going with it instead of this library. See https://docs.datadoghq.com/logs/log_collection/nodejs/?tab=winston30.
The agent would also be making network requests to submit the logs to the Datadog server using some protocol. If that protocol is more efficient, would it be difficult for this library to support that protocol? I don't really see the theoretical advantage of trying to speed up communication to a remote service by adding an extra middleman (the agent) which is essentially tasked with the same objective (communicating with the remote service). I ask because having an agent requires access to that layer of the VM, which loses platform independence and makes deployment more difficult in a FaaS or other managed environment. This is illustrated by the fact that the documentation you linked talks about modifying
@coder-mike I'm not sure how the Datadog agent works, so I'm not sure what protocol it uses or whether it is faster. But yes, installing another agent just to forward logs is not ideal. From the perspective of this library, I can think of two improvements:
I think these improvements could bring better performance.
I think batching would make a big difference, if the Datadog HTTP protocol supports that.
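To illustrate the idea being discussed (this is not this library's actual API, just a sketch): a transport-side buffer can collect entries and flush them as a single array per request, either when the buffer fills or after a timeout. The class name, option names, and the `send` callback here are all hypothetical.

```javascript
// Illustrative sketch of transport-side batching (hypothetical names, not
// this library's implementation). `send(entries)` stands in for whatever
// performs one HTTP request carrying an array of log entries.
class LogBatcher {
  constructor(send, { maxBatch = 1000, flushMs = 5000 } = {}) {
    this.send = send;
    this.maxBatch = maxBatch; // flush as soon as this many entries accumulate
    this.flushMs = flushMs;   // or after this many milliseconds, whichever is first
    this.buffer = [];
    this.timer = null;
  }

  add(entry) {
    this.buffer.push(entry);
    if (this.buffer.length >= this.maxBatch) {
      this.flush();
    } else if (!this.timer) {
      this.timer = setTimeout(() => this.flush(), this.flushMs);
    }
  }

  flush() {
    if (this.timer) {
      clearTimeout(this.timer);
      this.timer = null;
    }
    if (this.buffer.length === 0) return;
    const entries = this.buffer;
    this.buffer = [];
    this.send(entries); // one request for the whole batch
  }
}
```

With a 1000-entry batch size (Datadog's documented per-request array limit, mentioned below in this thread), 10,000 entries would need on the order of 10 requests instead of 10,000.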
I closed the ticket accidentally.
Here it says that logs can be sent in arrays of up to 1000 entries: https://docs.datadoghq.com/api/latest/logs/#send-logs. And according to the Winston docs, we can define batching options for the HTTP transport: https://github.com/winstonjs/winston/blob/master/docs/transports.md#http-transport. So in theory we can send logs in batches, but at least in my project, the Winston 3.3.3 typings don't have the batch config properties. EDIT: I've found the related PR; batching support should be included in the next Winston release: winstonjs/winston#1970
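For anyone trying this once the release lands, a configuration sketch might look like the following. This assumes the batch options added in winstonjs/winston#1970 (`batch`, `batchInterval`, `batchCount`); the option names and the intake host/path follow Datadog's own winston example, but verify both against your installed versions, and the `<YOUR_API_KEY>` placeholder must be filled in.

```javascript
// Hypothetical winston Http transport options, assuming the batch support
// from winstonjs/winston#1970 is available in your winston version.
const httpTransportOptions = {
  host: 'http-intake.logs.datadoghq.com',
  path: '/api/v2/logs?dd-api-key=<YOUR_API_KEY>&ddsource=nodejs',
  ssl: true,
  batch: true,         // buffer entries instead of one request per log entry
  batchInterval: 5000, // flush the buffer at least every 5 seconds
  batchCount: 1000,    // or once 1000 entries accumulate (Datadog's per-request limit)
};
// Would then be used as: new winston.transports.Http(httpTransportOptions)
```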
any updates on this? |
https://github.com/winstonjs/winston/blob/master/docs/transports.md#http-transport |
I wanted to test out this library with Datadog and evaluate its performance under load.
I basically copied the example code and then got it to spit out 10,000 log entries.
What I'm seeing in the DataDog UI is that these 10,000 log entries trickle through over the course of 8 minutes, which is a measly 21 entries per second. I would expect orders of magnitude more than this.
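As a back-of-the-envelope check of those numbers: 10,000 entries over 8 minutes is roughly 21 entries per second, i.e. about 48 ms per entry, which is consistent with each entry costing one serialized HTTP round trip.

```javascript
// Sanity-check the reported throughput: 10,000 entries over 8 minutes.
const entries = 10000;
const seconds = 8 * 60;               // 480 s
const perSecond = entries / seconds;  // ≈ 20.8 entries/s
const msPerEntry = 1000 / perSecond;  // = 48 ms per entry
console.log(`${perSecond.toFixed(1)} entries/s, ${msPerEntry.toFixed(0)} ms each`);
```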
I don't know if I'm just approaching this whole thing the wrong way, if there's another recommended way to get node.js log entries into Datadog, or if I'm supposed to be batching these somehow. Could someone please advise?