
Track-allocation is underreporting memory usage #18595

Closed
mbeltagy opened this issue Sep 20, 2016 · 3 comments

Comments

@mbeltagy
Contributor

mbeltagy commented Sep 20, 2016

Here is a little snippet:

function inv_alr(y)
    x=Array{eltype(y)}(length(y)+1)
    xr=view(x,1:length(y))
    map!(exp,xr,xr)
    x_end=1.0-sum(x[1:end-1])
    x[end]=x_end
    map!(z->z/x_end,xr,xr)
    x
end    
y=rand(1000000)
x_dash=inv_alr(y);
Profile.clear_malloc_data()
@time x_dash=inv_alr(y);

If I run this with the --track-allocation=all option, I get the following output from the @time macro:

 0.090277 seconds (135 allocations: 15.267 MB)

The .mem file that is generated looks like this:

        - function inv_alr(y)
        -     x=Array{eltype(y)}(length(y)+1)
  8032336     xr=view(x,1:length(y))
        0     map!(exp,xr,xr)
        0     x_end=1.0-sum(x[1:end-1])
        0     x[end]=x_end
        0     map!(z->z/x_end,xr,xr)
        0     x
        - end    
        - y=rand(1000000)
        - x_dash=inv_alr(y);
        - Profile.clear_malloc_data()
        - @time x_dash=inv_alr(y);
        - 

So it is only detecting half of the memory being allocated and, more alarmingly, attributing it to the wrong line number. It completely missed the allocation associated with x[1:end-1] on the fifth line.

P.S. I am running this on "Version 0.5.1-pre+1" under Ubuntu 14.04.
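
As a rough cross-check (a sketch using the newer Vector{Float64}(undef, n) constructor syntax rather than the 0.5 form above), @allocated can measure the two allocations separately, and each comes out near 8 MB:

n = 1_000_000
a1 = @allocated Vector{Float64}(undef, n + 1)   # the Array{eltype(y)}(length(y)+1) allocation in inv_alr
x  = rand(n + 1)
a2 = @allocated x[1:end-1]                      # the copy made by the slice inside sum(x[1:end-1])
println((a1, a2))                               # both roughly 8_000_000 bytes (10^6 Float64 values)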

@yuyichao
Contributor

I think this is how --track-allocation is supposed to work. It doesn't count allocations multiple times, so the real allocation needs to be found in the functions you call (most likely map!(z->z/x_end,xr,xr), due to #15276). It's not impossible that the algorithm could be changed, though.

P.S. The allocation in your function is actually on the first line, and it's fixed on master (likely by #18520).
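
A minimal sketch of that attribution rule (an illustrative example, not the code from this report), run under julia --track-allocation=user: the bytes are charged to the line inside the callee where the allocation happens, so the call site in the caller reports zero in the .mem file.

using Profile                     # stdlib on current Julia; part of Base in 0.5

function alloc_inside(n)
    return zeros(n)               # ~8 MB for n = 10^6 is attributed to this line
end

function caller(n)
    v = alloc_inside(n)           # this call site shows 0 bytes in the .mem file
    return sum(v)
end

caller(10^6)                      # warm up so compilation isn't counted
Profile.clear_malloc_data()
caller(10^6)                      # only this run's allocations end up in the .mem file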

@mbeltagy
Contributor Author

@yuyichao I am not sure I understand. There are two distinct allocations happening, not one.
Here is the output I get using IProfile.jl

count  time(%)   time(s) bytes(%) bytes(k)
       1   0.43   0.000059  50.00      8000     # /tmp/juliaTest/voo.jl, line 4, x = Array{eltype(y)}(length(y) + 1)
       1   0.00   0.000000   0.00         0     # /tmp/juliaTest/voo.jl, line 5, xr = view(x,1:length(y))
       1  51.17   0.007082   0.00         0     # /tmp/juliaTest/voo.jl, line 6, map!(exp,xr,xr)
       1  16.07   0.002224  50.00      8000     # /tmp/juliaTest/voo.jl, line 7, x_end = 1.0 - sum(x[1:end - 1])
       1   0.00   0.000000   0.00         0     # /tmp/juliaTest/voo.jl, line 8, x[end] = x_end
       1  32.33   0.004475   0.00         0     # /tmp/juliaTest/voo.jl, line 9, map!((z->begin  # /tmp/juliaTest/voo.jl, line 9:
            z / x_end
        end),xr,xr)
       1   0.00   0.000000   0.00         0     # /tmp/juliaTest/voo.jl, line 10, x
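
Those two 8000 kB entries also account for the total that @time printed, assuming its MB figure is the usual 1024^2-byte unit:

2 * 8_000_000 / 1024^2    # = 15.2587890625, close to the 15.267 MB reported by @time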

@yuyichao
Contributor

Correction: inlining doesn't really matter here. In any case, what I mean is that it's meant to be like this, because the allocation happens in another function. It's arguably not too useful this way (especially since the allocation you do see is also attributed to the wrong line and should actually go to another function).

mbeltagy reopened this Jun 15, 2017
vtjnash added a commit that referenced this issue Jan 3, 2020
Don't include one-time costs (JIT compilation) so that warm-up isn't generally required.
And adjust codegen emission to charge call entry costs to the caller.

fixes #11753
fixes #19981
fixes #21871
fixes #34054
close #18595
vtjnash added a commit that referenced this issue Jan 15, 2020
Don't include one-time costs (JIT compilation) so that warm-up isn't generally required.
And adjust codegen emission to charge call entry costs to the caller.

fixes #11753
fixes #19981
fixes #21871
fixes #34054
close #18595
vtjnash added a commit that referenced this issue Jan 21, 2020
Don't include one-time costs (JIT compilation) so that warm-up isn't generally required.
And adjust codegen emission to charge call entry costs to the caller.

fixes #11753
fixes #19981
fixes #21871
fixes #34054
close #18595
KristofferC pushed a commit that referenced this issue Apr 11, 2020
Don't include one-time costs (JIT compilation) so that warm-up isn't generally required.
And adjust codegen emission to charge call entry costs to the caller.

fixes #11753
fixes #19981
fixes #21871
fixes #34054
close #18595