
Merge pull request #18736 from ninjin/nin/pairconv
RFC: Pair to Pair conversions
vchuravy authored Oct 1, 2016
2 parents b728145 + 97a46c9 commit 3398c7e
Showing 2 changed files with 8 additions and 0 deletions.
5 changes: 5 additions & 0 deletions base/operators.jl
@@ -964,6 +964,11 @@ reverse{A,B}(p::Pair{A,B}) = Pair{B,A}(p.second, p.first)
endof(p::Pair) = 2
length(p::Pair) = 2

convert{A,B}(::Type{Pair{A,B}}, x::Pair{A,B}) = x
function convert{A,B}(::Type{Pair{A,B}}, x::Pair)
convert(A, x[1]) => convert(B, x[2])
end

# some operators not defined yet
global //, >:, <|, hcat, hvcat, ⋅, ×, ∈, ∉, ∋, ∌, ⊆, ⊈, ⊊, ∩, ∪, √, ∜

3 changes: 3 additions & 0 deletions test/operators.jl
@@ -21,6 +21,9 @@ B = [true true false]
@test reverse(Pair(1,2)) == Pair(2,1)
@test reverse(Pair("13","24")) == Pair("24","13")
@test typeof(reverse(Pair{String,Int64}("a",1))) == Pair{Int64,String}
@test convert(Pair{Float64,Float64}, 17 => 4711) === (17.0 => 4711.0)
@test convert(Pair{Int,Float64}, 17 => 4711) === (17 => 4711.0)
@test convert(Pair{Float64,Int}, 17 => 4711) === (17.0 => 4711)

p = 1=>:foo
@test first(p) == 1
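
For illustration, a minimal usage sketch of the two new methods (hypothetical values; the first method is the identity case, the second converts each element to the corresponding target parameter):

convert(Pair{Int,Int}, 1 => 2)            # already a Pair{Int,Int}, returned as-is
convert(Pair{Float64,Float64}, 1 => 2)    # each half converted: 1.0 => 2.0
convert(Pair{Int,Float64}, 0x01 => 2)     # mixed targets: 1 => 2.0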

7 comments on commit 3398c7e

@nanosoldier (Collaborator)

Executing the daily benchmark build, I will reply here when finished:

@nanosoldier runbenchmarks(ALL, isdaily = true)

@nanosoldier (Collaborator)

Your benchmark job has completed - possible performance regressions were detected. A full report can be found here. cc @jrevels

@Sacha0 (Member) commented on 3398c7e Oct 1, 2016

Since #18304 (which significantly improved performance in many nullable tests, see #18304 (comment)) several nullable benchmarks have appeared quite noisy. Do those benchmarks need some sort of re-calibration? Or is some of the recent variation in those benchmarks significant? Best!

@jrevels (Member) commented on 3398c7e Oct 1, 2016

The only way to know for sure is to see if you can reproduce regressions locally. The Nullable benchmarks have definitely been really noisy recently, and some other benchmarks seem to have gotten weirdly noisier as well. Some of my recalibrations should land on Nanosoldier next week, which should help; according to my test runs, more thorough GC sweeps cut down on a lot of problems, but don't fix everything.

On the subject of why the Nullable benchmarks might be noisy: In my experience, any benchmark which performs a significant number of heap allocations is prone to variance when run in the context of our server. I don't really understand why (AFAICT, it's not because of GC, which would be the obvious candidate). The variance is unfortunately tough to reproduce; I can run the same benchmark script that Nanosoldier runs, in the same environment, on the same machine, etc. and never see a timing difference of more than 1% between identical Julia builds. However, if I trigger the process via the server's job submission system instead of running it manually, all of a sudden 20% changes in performance start popping up...
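
A minimal sketch of that local reproduction workflow, assuming BenchmarkTools.jl and its gcsample parameter; the benchmarked expression is only a hypothetical stand-in for the actual Nullable suite:

using BenchmarkTools

# Hypothetical stand-in for one of the noisy Nullable micro-benchmarks.
b = @benchmarkable get(Nullable(1.0), 0.0)
tune!(b)                      # pick the number of evaluations per sample

# gcsample = true forces a full GC before every sample, along the lines of the
# "more thorough GC sweeps" mentioned above; run on both builds and compare.
results = run(b, gcsample = true, seconds = 10)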

@Sacha0 (Member) commented on 3398c7e Oct 1, 2016

> I can run the same benchmark script that Nanosoldier runs, in the same environment, on the same machine, etc. and never see a timing difference of more than 1% between identical Julia builds. However, if I trigger the process via the server's job submission system instead of running it manually, all of a sudden 20% changes in performance start popping up...

Perhaps the job scheduler assigns its processes lower priority than those run from a shell?

@jrevels (Member) commented on 3398c7e Oct 2, 2016

> Perhaps the job scheduler assigns its processes lower priority than those run from a shell?

In both cases, I used cset to forcibly bound resources to the benchmarking process, and shield those resources from the scheduler. This document contains all the tricks that I've tried in order to mitigate noise, if you're interested.

@Sacha0 (Member) commented on 3398c7e Oct 2, 2016

That writeup is fantastic --- thanks for sharing!
