
Deprecate clamp!(A, lo, hi) for broadcast!(clamp, A, A, lo, hi)? #22236

Closed
TotalVerb opened this issue Jun 6, 2017 · 24 comments

@TotalVerb (Contributor) commented Jun 6, 2017

Should we deprecate clamp!(A, lo, hi) for broadcast!(clamp, A, A, lo, hi)? Note the latter can be written A .= clamp.(A, lo, hi).

Related: #22235 (comment)

Part of #20402
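For concreteness, a minimal sketch of the proposed equivalence (illustrative values, not from the issue): the mutating `clamp!` and the fused in-place broadcast spelling produce the same result.

```julia
# Both spellings clamp every element of the array into [lo, hi] in place.
A = [0.5, 1.5, 2.5, 3.5]
B = copy(A)

clamp!(A, 1.0, 3.0)       # mutates A in place
B .= clamp.(B, 1.0, 3.0)  # equivalent to broadcast!(clamp, B, B, 1.0, 3.0)

@assert A == B == [1.0, 1.5, 2.5, 3.0]
```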

@ararslan (Member) commented Jun 6, 2017

Should be clamp.(A, lo, hi), otherwise I agree.

@TotalVerb TotalVerb changed the title Deprecate clamp!(A, lo, hi) for broadcast!(clamp, A, lo, A, hi)? Deprecate clamp!(A, lo, hi) for broadcast!(clamp, A, A, lo, hi)? Jun 6, 2017
@TotalVerb (Contributor, Author)

My bad; the argument order for clamp is unintuitive to me, but I can understand why clamp(x, lo, hi) is read "clamp x between lo and hi".

@giordano (Contributor) commented Jun 10, 2017

clamp! is actually useful if the array isn't a named variable.

julia> @benchmark clamp!([1,2,3,4,5,6,7,8,9,10], 2, 7)
BenchmarkTools.Trial: 
  memory estimate:  160 bytes
  allocs estimate:  1
  --------------
  minimum time:     43.174 ns (0.00% GC)
  median time:      46.867 ns (0.00% GC)
  mean time:        50.633 ns (6.12% GC)
  maximum time:     550.135 ns (90.15% GC)
  --------------
  samples:          10000
  evals/sample:     987

julia> @benchmark clamp.([1,2,3,4,5,6,7,8,9,10], 2, 7)
BenchmarkTools.Trial: 
  memory estimate:  320 bytes
  allocs estimate:  2
  --------------
  minimum time:     73.643 ns (0.00% GC)
  median time:      79.063 ns (0.00% GC)
  mean time:        86.230 ns (7.28% GC)
  maximum time:     606.447 ns (85.14% GC)
  --------------
  samples:          10000
  evals/sample:     974

Also, for a named variable clamp! is much faster:

julia> a = randn(100000);

julia> @benchmark clamp.($a, 2, 7)
BenchmarkTools.Trial: 
  memory estimate:  781.33 KiB
  allocs estimate:  2
  --------------
  minimum time:     90.881 μs (0.00% GC)
  median time:      97.769 μs (0.00% GC)
  mean time:        113.206 μs (11.85% GC)
  maximum time:     678.371 μs (82.18% GC)
  --------------
  samples:          10000
  evals/sample:     1

julia> @benchmark clamp!($a, 2, 7)
BenchmarkTools.Trial: 
  memory estimate:  0 bytes
  allocs estimate:  0
  --------------
  minimum time:     36.166 μs (0.00% GC)
  median time:      37.643 μs (0.00% GC)
  mean time:        37.689 μs (0.00% GC)
  maximum time:     96.818 μs (0.00% GC)
  --------------
  samples:          10000
  evals/sample:     1

@martinholters (Member)

You should benchmark against the in-place foo(a) = a .= clamp.(a, 2, 7) when comparing to clamp!, no?

@giordano (Contributor)

Uhm, almost: you have to allocate the array first. Anyway, you're right; in that case clamp.() isn't much worse than clamp!().

julia> using BenchmarkTools

julia> function foo()
           a = randn(10000)
           a .= clamp.(a, -1, 1)
       end
foo (generic function with 1 method)

julia> bar() = clamp!(randn(10000), -1, 1)
bar (generic function with 1 method)

julia> @benchmark foo()
BenchmarkTools.Trial: 
  memory estimate:  78.20 KiB
  allocs estimate:  2
  --------------
  minimum time:     87.944 μs (0.00% GC)
  median time:      93.500 μs (0.00% GC)
  mean time:        96.606 μs (2.87% GC)
  maximum time:     943.194 μs (86.66% GC)
  --------------
  samples:          10000
  evals/sample:     1

julia> @benchmark bar()
BenchmarkTools.Trial: 
  memory estimate:  78.20 KiB
  allocs estimate:  2
  --------------
  minimum time:     84.859 μs (0.00% GC)
  median time:      87.922 μs (0.00% GC)
  mean time:        91.157 μs (3.08% GC)
  maximum time:     950.349 μs (86.95% GC)
  --------------
  samples:          10000
  evals/sample:     1

@ararslan (Member) commented Jun 12, 2017

I actually get that the broadcast version is faster, if the random number generation is removed from the functions:

julia> a = randn(10_000); b = randn(10_000);

julia> f(x) = (x .= clamp.(x, -1, 1)); g(x) = clamp!(x, -1, 1);

julia> @benchmark f($a)
BenchmarkTools.Trial:
  memory estimate:  0 bytes
  allocs estimate:  0
  --------------
  minimum time:     8.120 μs (0.00% GC)
  median time:      8.125 μs (0.00% GC)
  mean time:        8.572 μs (0.00% GC)
  maximum time:     30.056 μs (0.00% GC)
  --------------
  samples:          10000
  evals/sample:     3

julia> @benchmark g($b)
BenchmarkTools.Trial:
  memory estimate:  0 bytes
  allocs estimate:  0
  --------------
  minimum time:     9.607 μs (0.00% GC)
  median time:      9.630 μs (0.00% GC)
  mean time:        12.250 μs (0.00% GC)
  maximum time:     110.585 μs (0.00% GC)
  --------------
  samples:          10000
  evals/sample:     1

@giordano (Contributor)

Yes, but I was claiming that clamp!(x, lo, hi) is useful when you don't want/need to allocate x first, and instead you want to directly assign the result to a new array.

@Sacha0 (Member) commented Jun 12, 2017

Perhaps the following alternatives are sufficient? :)

julia> @benchmark [clamp(randn(), -1, 1) for _ in 1:1000]
BenchmarkTools.Trial:
  memory estimate:  7.94 KiB
  allocs estimate:  1
  --------------
  minimum time:     8.782 μs (0.00% GC)
  median time:      9.878 μs (0.00% GC)
  mean time:        11.005 μs (0.00% GC)
  maximum time:     779.968 μs (0.00% GC)
  --------------
  samples:          10000
  evals/sample:     1

julia> @benchmark collect(clamp(randn(), -1, 1) for _ in 1:1000)
BenchmarkTools.Trial:
  memory estimate:  7.94 KiB
  allocs estimate:  1
  --------------
  minimum time:     8.842 μs (0.00% GC)
  median time:      10.252 μs (0.00% GC)
  mean time:        11.660 μs (0.00% GC)
  maximum time:     675.692 μs (0.00% GC)
  --------------
  samples:          10000
  evals/sample:     1

julia> @benchmark map(_ -> clamp(randn(), -1, 1), 1:1000)
BenchmarkTools.Trial:
  memory estimate:  7.94 KiB
  allocs estimate:  1
  --------------
  minimum time:     8.834 μs (0.00% GC)
  median time:      9.902 μs (0.00% GC)
  mean time:        10.910 μs (0.00% GC)
  maximum time:     651.152 μs (0.00% GC)
  --------------
  samples:          10000
  evals/sample:     1

julia> @benchmark (_ -> clamp(randn(), -1, 1)).(1:1000)
BenchmarkTools.Trial:
  memory estimate:  7.94 KiB
  allocs estimate:  1
  --------------
  minimum time:     8.877 μs (0.00% GC)
  median time:      10.438 μs (0.00% GC)
  mean time:        12.043 μs (0.00% GC)
  maximum time:     800.743 μs (0.00% GC)
  --------------
  samples:          10000
  evals/sample:     1

julia> @benchmark clamp!(randn(1000), -1, 1)
BenchmarkTools.Trial:
  memory estimate:  7.94 KiB
  allocs estimate:  1
  --------------
  minimum time:     8.462 μs (0.00% GC)
  median time:      9.914 μs (0.00% GC)
  mean time:        11.508 μs (0.00% GC)
  maximum time:     662.747 μs (0.00% GC)
  --------------
  samples:          10000
  evals/sample:     1

@TotalVerb (Contributor, Author) commented Jun 13, 2017 via email

@StefanKarpinski (Member)

I have to confess that I don't see the urgency of removing clamp!. It seems occasionally useful and A .= clamp.(A, lo, hi) is a bit less obvious and requires a name for A.

@fredrikekre (Member)

Should we go ahead with this since we do the same for flipbits! (#27067)?

@StefanKarpinski (Member)

I'd really rather not. I find all the alternate spellings here far less convenient or obvious or both.

@mbauman (Member) commented May 14, 2018

IMO flipbits! is different because it was only defined for BitArray. We had apparently not found a need for it in a generic fashion.

clamp! works for all (mutable) AbstractArrays and describes a reasonable generic function. I'm in favor of keeping it as well.

@fredrikekre (Member)

Isn't it a bit weird to keep this one function when we have deprecated essentially all other auto-broadcasting functions?

@StefanKarpinski (Member)

What's the alternative spelling again? A .= clamp.(A, lo, hi)? I dunno, that just does not seem as convenient as clamp!(A, lo, hi). I actually generally feel like the in-place clamp! is more fundamental than clamp since clamping data in-place is much more often what you want to do, and clamp! only makes sense on a container. Perhaps that's where this is coming from. I am fine with doing away with vectorized clamp but keeping vectorized clamp!.
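As a side note on this point, the in-place operation can itself be expressed in terms of the broadcast spelling. A minimal sketch (myclamp! is a hypothetical name for illustration, not Base's actual definition):

```julia
# Hypothetical generic definition of an in-place clamp via fused broadcast;
# Base's clamp! may be implemented differently.
myclamp!(A::AbstractArray, lo, hi) = A .= clamp.(A, lo, hi)

x = [-2.0, 0.0, 2.0]
myclamp!(x, -1.0, 1.0)   # mutates x, no allocation beyond x itself
@assert x == [-1.0, 0.0, 1.0]
```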

@StefanKarpinski (Member)

Note that the same argument cannot be made in the other cases: flipbits! is clearly not more fundamental than ! or ~.

@fredrikekre (Member)

Fair enough, guess we can close this issue then.

I am fine with doing away with vectorized clamp but keeping vectorized clamp!.

clamp(::AbstractArray) was deprecated for v0.6, so that's where we are already.

@Sacha0 (Member) commented May 15, 2018

Isn't it a bit weird to keep this one function when we have deprecated essentially all other auto-broadcasting functions?

Regrettably vectorized float and complex remain as well. I still wholly favor nixing those. Best!

@StefanKarpinski (Member)

They remain because they are vector operations, not vectorized operations.

@Sacha0 (Member) commented May 15, 2018

They remain because they are vector operations, not vectorized operations.

Clarification from a slack conversation:

Stefan's argument is that: 1) one can view complex(A::AbstractArray) as an algebraic, rather than elementwise, operation (think real, imag, and zero for AbstractArrays); and 2) though the former does not hold for float(A::AbstractArray), float and complex are something of a pair, so if complex(A::AbstractArray) methods exist then float(A::AbstractArray) methods might as well.

IIRC, this argument differs from prior arguments, which went roughly as follows: float(A::AbstractArray) and complex(A::AbstractArray) may alias their argument, and that behavior is sometimes convenient (though perhaps hazardous in other cases). Though the non-aliasing replacement (float.(A)) is convenient, the sometimes-aliasing replacement is less so (presently convert(AbstractArray{float(eltype(A))}, A)).

(Also IIRC, auditing usage of float revealed that the float.(A) replacement was correct in the (vast?) majority of cases. So the degree to which the convert invocation's verbosity is practically problematic is not clear. Additionally, that concern should be weighed against the downside of float's sometimes-aliasing behavior, namely accidental aliasing and downstream mutation.)
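A brief illustration of the aliasing distinction being discussed (behavior as observed in current Julia; the exact aliasing of float(A) is an implementation detail):

```julia
# On an array that is already floating-point, float(A) can return A itself
# (aliasing), while the broadcast form float.(A) always allocates a copy.
A = [1.0, 2.0, 3.0]

@assert float(A) === A    # same object: mutating one mutates the "other"
@assert float.(A) !== A   # fresh array
@assert float.(A) == A    # ...with equal contents
```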

Best!

@KristofferC (Member)

Make float copy?

@Sacha0 (Member) commented May 15, 2018

Make float copy?

You mean float.(A)? 😄

@KristofferC (Member)

No, I mean make the array-valued float function make a copy.

@Sacha0 (Member) commented May 15, 2018

No, mean make the array-valued float function make a copy.

My response was a bit tongue-in-cheek :). More explicitly, I meant to imply that were you to make float(A::AbstractArray) strictly non-aliasing, you might as well just deprecate it to float.(A). Best!

9 participants