This repository has been archived by the owner on Dec 8, 2020. It is now read-only.

Don't overwrite Base.vect for numbers with a different implementation #5

Merged
merged 1 commit into from
Nov 2, 2018

Conversation

ararslan commented Nov 2, 2018

Fixes jump-dev#241

iamed2 (Member) commented Nov 2, 2018

Double-checked that this compiles out the `any` call?

iamed2 (Member) commented Nov 2, 2018

Should we do this with `vcat` as well?

Review comment on src/atoms/affine/stack.jl (outdated, resolved)
ararslan (Author) commented Nov 2, 2018

> Double-checked that this compiles out the `any` call?

It does not, unfortunately, and there is a performance impact on the Base method with this change.
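
For context, the guarded definition being discussed looks roughly like this (a sketch, not the exact diff; `AbstractExpr` and `vstack` are Convex's expression type and stacking function, and the numeric branch mirrors what Base's own `vect` does):

```julia
# Sketch of the guarded method: divert to Convex's stacking only when
# an expression is actually among the arguments.
function Base.vect(vs::Union{Number,AbstractExpr}...)
    if any(v -> v isa AbstractExpr, vs)
        return vstack(vs...)
    else
        # Numeric path mirroring Base's vect: promote, then collect.
        T = Base.promote_typeof(vs...)
        return copyto!(Vector{T}(undef, length(vs)), vs)
    end
end
```

The hope was that for all-numeric arguments the compiler would fold the `any` check away; the benchmarks below show it does not.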

After loading Convex:

julia> @benchmark Base.vect($x...)
BenchmarkTools.Trial: 
  memory estimate:  102.72 KiB
  allocs estimate:  375
  --------------
  minimum time:     36.807 μs (0.00% GC)
  median time:      38.912 μs (0.00% GC)
  mean time:        45.703 μs (12.67% GC)
  maximum time:     33.698 ms (99.77% GC)
  --------------
  samples:          10000
  evals/sample:     1

(Here `x` is a 74-element `Vector{Any}` consisting of `Int`s, `Float32`s, and `Complex{Int}`s.)

Without loading Convex, same `x`:

julia> @benchmark Base.vect($x...)
BenchmarkTools.Trial: 
  memory estimate:  55.84 KiB
  allocs estimate:  148
  --------------
  minimum time:     22.036 μs (0.00% GC)
  median time:      23.726 μs (0.00% GC)
  mean time:        28.634 μs (13.89% GC)
  maximum time:     25.832 ms (99.89% GC)
  --------------
  samples:          10000
  evals/sample:     1
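
To reproduce numbers like these, a mixed vector along the lines described above can be built as follows (a sketch; the exact element mix used in the benchmark is not recorded here, so this is an arbitrary stand-in):

```julia
using BenchmarkTools

# 74 elements cycling through Int, Float32, and Complex{Int}, stored
# as a Vector{Any} so the splatted call sees heterogeneous arguments.
x = Any[(i % 3 == 0 ? Int(i) :
         i % 3 == 1 ? Float32(i) :
         Complex(i, i)) for i in 1:74]

@benchmark Base.vect($x...)
```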

iamed2 (Member) commented Nov 2, 2018

Can we do it the old way then? I know it's technically type piracy...

ararslan (Author) commented Nov 2, 2018

Okay, I've reinstated the old code but corrected the implementation.

iamed2 (Member) commented Nov 2, 2018

Hmm, what if we changed it to use `invoke` unconditionally for `vect` and `vcat`?
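
That variant would look something like this (a sketch, not code from the PR). One subtlety: the pirated method needs a signature strictly narrower than Base's `vect(X...)`, otherwise it would replace Base's method outright and the `invoke` would recurse into itself:

```julia
# Sketch of the unconditional-invoke variant: always round-trip
# through Base's generic definition for the non-expression case.
function Base.vect(vs::Union{Number,AbstractExpr}...)
    any(v -> v isa AbstractExpr, vs) && return vstack(vs...)
    # Force dispatch to Base's generic vect(X...) for plain numbers.
    return invoke(Base.vect, Tuple{Vararg{Any}}, vs...)
end
```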

ararslan (Author) commented Nov 2, 2018

I checked that earlier and there is still a performance impact, unfortunately. It's smaller than with the `any` branch, but still slower than without loading Convex at all. With the current state of the PR, performance is unchanged after loading Convex, since the code does exactly the same thing as Base.

@iamed2 iamed2 merged commit 0018ab9 into master Nov 2, 2018
@ararslan ararslan deleted the aa/fix-vect branch November 2, 2018 22:33

3 participants