Memory leak? #450
I am running a plumber API in production and I am observing a slow increase in memory usage over time, about 2MB/day.

The API is running in a container on AWS Elastic Container Service using a serverless setup ("Fargate launch type" is what AWS calls it). The container gets terminated when memory usage reaches 100%, which can happen in a matter of weeks when running with a minimalistic spec.

Plumber is the only thing running in the container, so I assume the memory usage / leak is linked to plumber or, more generally, to R.

Is this something other people have observed? Any idea what could cause it? How can I troubleshoot it?

FYI, I am logging every request by writing to the console with the logging package (it all gets picked up by CloudWatch, the AWS logging manager). I receive on average 5 requests per minute. Could that be the reason?

I appreciate this is not really a bug report, but it felt like an appropriate place to discuss plumber's memory usage over an extended period of time. This would probably get closed on SO.

Thanks
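One way to troubleshoot a slow leak like this is to log memory usage on every request so the trend shows up in CloudWatch next to the existing logs. A minimal sketch, assuming the pryr package; this is not part of the original report:

#* Log method, path and current memory usage for every request
#* @filter log_memory
function(req, res) {
  cat(format(Sys.time()), req$REQUEST_METHOD, req$PATH_INFO,
      "mem_bytes:", as.numeric(pryr::mem_used()), "\n")
  plumber::forward()
}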
This could also be related to issues with R in general. We've identified some issues where asking whether something exists in an environment causes a small memory leak. 2MB over ~7200 requests isn't too bad. I'll ask some of my team to help get finer control over when/where the memory is leaking. (Adding the help wanted label so people see the issue.)
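As an illustration of the kind of leak being described here (my reading of the comment above, not code taken from plumber): looking up ever-new names in an environment interns a new symbol for each name, and R never garbage-collects symbols, so memory creeps up slowly.

e <- new.env()
invisible(gc())
before <- pryr::mem_used()
for (i in 1:100000) {
  # each fresh key is converted to a symbol that stays in R's symbol table
  exists(paste0("key_", i), envir = e)
}
invisible(gc())
after <- pryr::mem_used()
print(after - before)   # small but permanent growth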
If you are able to, run it inside […]. It could be […]. As far as I know, plumber is not tracking anything between requests. Given the number of environments in the R6 objects, which would require "exists in the environment" checks using symbols, that might be the underlying culprit. cc @wch
R6 shouldn't contribute to memory leaks by itself. R itself can leak memory if random symbols are being used; that's what the fastmap package is meant to solve. However, I don't see anywhere in plumber where random keys are used, so I don't think that plumber itself is the culprit either.

@antoine-sachet What does your code do? It's possible that it's your code (and not plumber) that is causing the leak.
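For reference, a minimal sketch of the fastmap alternative mentioned above: fastmap keeps its keys as strings rather than interned symbols, so arbitrary per-request keys do not pile up in R's symbol table. (Illustrative only; as noted, plumber itself does not appear to use random keys.)

library(fastmap)

m <- fastmap()
m$set("request-123", list(status = "done"))   # any string key, no symbol interned
m$has("request-123")                          # TRUE
m$get("request-123")
m$remove("request-123")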
FWIW, when I use the following plumber.R, I see no increase in memory usage.
#* Random numbers
#* @get /random
function(){
rnorm(10)
}
#* Memory usage by the R process
#* @get /mem
function(){
as.numeric(pryr::mem_used())
}

From the R console:

library(plumber)
r <- plumb("plumber.R")
r$run(port=8000)

From a terminal, first find out how much memory is being used:
Then fetch the endpoint:
Then make 5,000 requests to it:
So it looks like there's an increase in memory usage in the beginning, but not after it's been warmed up. Given this result, I suspect the issue is not with plumber itself, but with the code you're running from plumber. My session info (I'm using a build of R-devel):
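For anyone who wants to repeat this check without leaving R, a rough equivalent of the terminal steps above (an assumption about tooling, not the exact commands used) can be put together with the curl and jsonlite packages against the running API:

library(curl)
library(jsonlite)

# Read the /mem endpoint defined in plumber.R and convert to megabytes
mem_mb <- function() {
  resp <- curl_fetch_memory("http://127.0.0.1:8000/mem")
  fromJSON(rawToChar(resp$content)) / 1e6
}

before <- mem_mb()
for (i in 1:5000) {
  invisible(curl_fetch_memory("http://127.0.0.1:8000/random"))
}
after <- mem_mb()
cat("before:", before, "MB; after:", after, "MB\n")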
Thanks both. I will run […].

The dev and prod services are running the exact same code, but the prod service has been receiving 300-400 requests per hour whereas the dev service has been idle. The containers have 512MB of memory. That's an 11% increase over 10 days, around 5.6MB/day, or about 0.5KB/request. The API can do a lot of things, but fortunately only 2 endpoints have been called, and both are simple wrappers around an AWS Comprehend query.
I am hoping it is; that would be easier to fix :)
This may well be it, because the AWS queries require credentials which are looked up using […]. I'll have to look into it further. Do you have any reference or threads about this "small mem leak" issue?
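If the credential lookup does turn out to be involved, one low-effort mitigation is to resolve the credentials once when plumber.R is sourced and reuse them, rather than looking them up on every request. A sketch only; call_comprehend() and the exact environment variables are placeholders, not anything from this thread:

# Resolved once at startup, not on every request
aws_creds <- list(
  key    = Sys.getenv("AWS_ACCESS_KEY_ID"),
  secret = Sys.getenv("AWS_SECRET_ACCESS_KEY"),
  region = Sys.getenv("AWS_DEFAULT_REGION", "us-east-1")
)

#* Wrapper around the AWS Comprehend query, reusing the cached credentials
#* @post /comprehend
function(req) {
  call_comprehend(req$postBody, creds = aws_creds)   # call_comprehend() is a placeholder
}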
Hello. I am seeing a very similar issue with our API endpoint. We have deployed a couple of containers into AWS ECS with an EC2 back-end setup. We are using an application load balancer. Our memory increase is more drastic. We send in a small file to be processed. Initial memory usage is around 80MB. After about 1000 API calls, the container reaches 100% memory and is terminated. I can reproduce the same memory increase with the default plumber example. Every time I call localhost:8000/mean, I can see memory usage creep up.
@mhasanbulli I'm not able to reproduce the problem with the default example. It does increase memory at first, but I believe this is probably due to some internal caching in R, or at the system level. Here's what I did. First, I ran the Docker image with this command line:
Then, in another terminal, I ran:
When I visited http://127.0.0.1:8000/mean, the memory increased by about 300KB each time. However, after a certain point, it stopped increasing. I used ApacheBench to send 1000 requests to the endpoint, with the following:
Eventually, the memory topped out at 89.64MiB, as shown below:
After this point, the memory would still increase when I sent 1000 more requests to it, but it increased at a much slower rate (1000 requests might result in an increase of 0.02MiB, for example). It is entirely possible that your application is leaking memory somewhere, but it must be due to something that is not covered by the basic plumber example. It could be in your application code, or it could be in other parts of the plumber code, but we would need a minimal reproducible example in order to track it down.
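For anyone hitting this, a reprex along these lines is roughly what is being asked for: the smallest plumber.R that still shows the growth, plus a /mem endpoint so the trend can be measured. A sketch only, assuming req$postBody carries the raw body text (as in the plumber versions discussed here) and that pryr is installed:

# plumber-reprex.R

#* Accept a small payload and report its size, standing in for "a small file to be processed"
#* @post /process
function(req) {
  body <- req$postBody
  list(bytes = if (is.null(body)) 0 else sum(nchar(body)))
}

#* Memory used by the R process, so growth can be tracked across requests
#* @get /mem
function() {
  as.numeric(pryr::mem_used())
}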
Setting this issue to […]. The rewrite of how POST body values are parsed should help with files that are being POSTed: https://github.com/rstudio/plumber/pull/550/files#diff-f0d8c4442132a6ea9b7dee1b528f1797
Hi, […]
Locking old thread. If you have a reprex API that you can show has a memory leak, please open a new issue referencing this one. Thank you.
Closing this issue in favor of the solution provided by Dmitry in #496 (comment). It seems quite plausible that this kind of memory leak is due to an inefficient Linux malloc library; changing the library has had good success according to the comments.
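For readers following that pointer: switching the allocator is typically done by preloading an alternative malloc such as jemalloc into the container. The snippet below is only a sketch of that general technique; the base image, package name, and library path are assumptions and may not match the exact change discussed in #496.

FROM rstudio/plumber
# Install jemalloc and preload it so R uses it instead of glibc malloc
RUN apt-get update && apt-get install -y libjemalloc2 && rm -rf /var/lib/apt/lists/*
ENV LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.2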