Fixes Memory Leak #231
Conversation
apps/model/model.js (outdated)

```js
let img2;
tf.tidy(()=>{
```
`img2` is hanging after `tf.tidy()`.
apps/model/model.js (outdated)

```js
// normalized.mean().print(true); // Uncomment to check mean value.
// let min = img2.min();
// let max = img2.max();
// let normalized = img2.sub(min).div(max.sub(min));
} else {
  // Pixel Standardization: scale pixel values to have a zero mean and unit variance.

tf.tidy(()=> {
```
Where is `normalized` returned to?
To keep `tf.tidy()` from destroying the tensor, it needs to be returned. `normalized` is being used afterwards, so it shouldn't be destroyed.
The returned object has to be assigned.
I am using `tf.keep()` instead of `return`. Does this solve the issue?
It still needs to be assigned; please refer to the usage here: https://js.tensorflow.org/api/latest/#keep
The mobilenet example might help you with placing these calls. Please let me know if you are facing any issues.
@Insiyaa Tried fixing the issues. Let me know if they are OK.
You can enclose into
This doesn't work. As given in the documentation here
@Insiyaa As you mentioned earlier, now I have assigned all the variables returned by
Oh yes, that is just a warmup run and we don't really need to get the results, so removing that line would do just good.
apps/model/model.js (outdated)

```diff
@@ -471,10 +473,11 @@ function runPredict(key) {
   self.showProgress("Model loaded...");

   // Warmup the model before using real data.
   tf.tidy(()=>{
     const warmupResult = model.predict(tf.zeros([1, image_size, image_size, input_channels]));
     warmupResult.dataSync();
```
I realise line 478 isn't required to get the values; could you remove it?
That seems good, thank you! The same additions would go into segment.js. Also, could you squash the commits into one with a meaningful commit message :)
If you prefer not to mess with rebasing, we can squash when merging the PR itself.
So I am facing a problem. This line is returning a tensor which I am not able to dispose. The documentation says
In my view, all the required additions are done. You're right; there is probably some leak inside
I'm not super picky on formatting now, so this seems fine. Thanks!
Fixes #228
Before:
After:

All unused tensors inside `runPredict()` are being disposed. Performance for multiple predictions has improved a lot, and running multiple predictions doesn't overflow memory now.