Enable precompilation #2456
Conversation
Looks good like this. Please do not restrict it to functions only from db-benchmark. Code should be as general as possible. Specializing it for particular cases would be a little bit like cheating (fyi @bkamins). As long as there is no "startup" task in db-benchmark you will be good, and such a task won't happen anytime soon :)
@nalimilan - please let me know if we want to add it to the 0.22 release goals. Thank you!
`DataFrames.precompile()` precompiles all methods that are used when running tests. This is useful to run benchmarks like H2Oai where the time of the first run should include the time needed to specialize on a particular operation on a particular dataset, but not the time needed for the more general compilation of common parts. It could also be useful for users who would like to generate a precompiled image. To avoid making the package slower to precompile or load, the precompilation code is only included when actually used.
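For example, in a benchmark script the intended usage would be roughly the following (a sketch based on the calls shown further down in this thread; `precompile(true)` requests the larger, test-derived set of statements):

using DataFrames

# Compile the generic grouping/combine machinery up front, so that the first
# timed call below only pays for specializing on this particular dataset.
DataFrames.precompile(true)

df = DataFrame(x=[1, 1, 2, 2], y=rand(4))
@time combine(groupby(df, :x), :y => sum)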
Force-pushed from 41ce3ed to 9ff939d.
I've updated the PR to include two levels of precompilation:
Current master:

julia> @time using DataFrames
[ Info: Precompiling DataFrames [a93c6f00-e57d-5684-b7b6-d8193f3e46c0]
7.056511 seconds (3.10 M allocations: 180.010 MiB, 1.52% gc time)
julia> df = DataFrame(x=[1, 1, 2, 2], y=rand(4));
julia> @time combine(groupby(df, :x), :y => sum);
5.426770 seconds (14.92 M allocations: 760.778 MiB, 7.97% gc time)
# New clean session
julia> @time using DataFrames
1.370335 seconds (1.80 M allocations: 114.414 MiB, 0.91% gc time)

This PR:

julia> @time using DataFrames
[ Info: Precompiling DataFrames [a93c6f00-e57d-5684-b7b6-d8193f3e46c0]
37.902554 seconds (3.56 M allocations: 218.251 MiB, 0.47% gc time)
julia> df = DataFrame(x=[1, 1, 2, 2], y=rand(4));
julia> @time combine(groupby(df, :x), :y => sum);
2.215117 seconds (2.82 M allocations: 143.991 MiB, 1.27% gc time)
# New clean session
julia> @time using DataFrames
2.200718 seconds (2.26 M allocations: 152.705 MiB, 1.47% gc time)
julia> DataFrames.precompile(true)
julia> df = DataFrame(x=[1, 1, 2, 2], y=rand(4));
julia> @time combine(groupby(df, :x), :y => sum);
0.591102 seconds (482.27 k allocations: 24.417 MiB)
I will test it before merging but it looks excellent.
The downside is that we would have to update these precompile statements each time we make a release - is this correct?
You need to update the
Results of my benchmarks (and in general I think it is really nice; the only strange thing is that

This PR:
fresh session:

master:
fresh session:
Well we don't have to, as
Ah right, I had run this with the old version waiting for the test fix. I've pushed a commit to fix it. I'm not sure why the constructor is slightly slower with precompilation. Maybe the method table is larger, or the GC has more work to do. I guess it's still worth it since taking 0.02s longer isn't really noticeable for users.
agreed
Another test with

First run:
Second run:
Third run:
The benefit of removing CategoricalArrays.jl precompilation is ~2 sec faster first precompile and 0.1 sec faster startup.
I am done adding the suggestions. GitHub messes up the display, but I hope it will be clear to you what I suggest removing.
Co-authored-by: Bogumił Kamiński <bkamins@sgh.waw.pl>
I've just realized that this should probably be tested, to avoid making coverage too low, and because it could break (e.g. when changing types). It takes about 100s in a session after running tests. I guess that's OK?
It is OK. We have ~15 minutes of testing time anyway. It takes 180 seconds on my machine to test it. Please add this test to a separate file (so that it can be easily disabled when developing). Thank you!
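For example, the separate file could be as small as the following sketch (the file name, module name, and testset name are only suggestions, not what ends up in the PR):

# test/precompile.jl - exercise the precompile statements so that breakage
# (e.g. after a method signature changes) is caught by CI and counted for coverage.
module TestPrecompile

using Test, DataFrames

@testset "precompile statements" begin
    DataFrames.precompile(true)  # should simply run without throwing
    @test true
end

end  # module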
I really want to add precompilation to PrettyTables.jl, but I might need some help, since I have never done something like this for a package (except for small tests).
@bkamins OK done.
@ronisbr I'm not too familiar either, I just took inspiration from FixedPointNumbers and SnoopCompile docs. You can also use the same sequence of calls that I put in a comment at the top of src/other/precompile.jl in this PR. FWIW, here's the list of statements I got for PrettyTables:

Base.precompile(Tuple{Core.kwftype(typeof(PrettyTables._parse_cell_text)),NamedTuple{(:autowrap, :cell_data_type, :cell_first_line_only, :column_width, :compact_printing, :has_color, :linebreaks, :renderer),Tuple{Bool,DataType,Bool,Int64,Bool,Bool,Bool,Val{:print}}},typeof(PrettyTables._parse_cell_text),Bool})
Base.precompile(Tuple{Core.kwftype(typeof(PrettyTables._parse_cell_text)),NamedTuple{(:autowrap, :cell_data_type, :cell_first_line_only, :column_width, :compact_printing, :has_color, :linebreaks, :renderer),Tuple{Bool,DataType,Bool,Int64,Bool,Bool,Bool,Val{:print}}},typeof(PrettyTables._parse_cell_text),Char})
Base.precompile(Tuple{Core.kwftype(typeof(PrettyTables._parse_cell_text)),NamedTuple{(:autowrap, :cell_data_type, :cell_first_line_only, :column_width, :compact_printing, :has_color, :linebreaks, :renderer),Tuple{Bool,DataType,Bool,Int64,Bool,Bool,Bool,Val{:print}}},typeof(PrettyTables._parse_cell_text),Float64})
Base.precompile(Tuple{Core.kwftype(typeof(PrettyTables._parse_cell_text)),NamedTuple{(:autowrap, :cell_data_type, :cell_first_line_only, :column_width, :compact_printing, :has_color, :linebreaks, :renderer),Tuple{Bool,DataType,Bool,Int64,Bool,Bool,Bool,Val{:print}}},typeof(PrettyTables._parse_cell_text),Int32})
Base.precompile(Tuple{Core.kwftype(typeof(PrettyTables._parse_cell_text)),NamedTuple{(:autowrap, :cell_data_type, :cell_first_line_only, :column_width, :compact_printing, :has_color, :linebreaks, :renderer),Tuple{Bool,DataType,Bool,Int64,Bool,Bool,Bool,Val{:print}}},typeof(PrettyTables._parse_cell_text),Int64})
Base.precompile(Tuple{Core.kwftype(typeof(PrettyTables._parse_cell_text)),NamedTuple{(:autowrap, :cell_data_type, :cell_first_line_only, :column_width, :compact_printing, :has_color, :linebreaks, :renderer),Tuple{Bool,DataType,Bool,Int64,Bool,Bool,Bool,Val{:print}}},typeof(PrettyTables._parse_cell_text),Irrational{:π}})
Base.precompile(Tuple{Core.kwftype(typeof(PrettyTables._parse_cell_text)),NamedTuple{(:autowrap, :cell_data_type, :cell_first_line_only, :column_width, :compact_printing, :has_color, :linebreaks, :renderer),Tuple{Bool,DataType,Bool,Int64,Bool,Bool,Bool,Val{:print}}},typeof(PrettyTables._parse_cell_text),Markdown.MD})
Base.precompile(Tuple{Core.kwftype(typeof(PrettyTables._parse_cell_text)),NamedTuple{(:autowrap, :cell_data_type, :cell_first_line_only, :column_width, :compact_printing, :has_color, :linebreaks, :renderer),Tuple{Bool,DataType,Bool,Int64,Bool,Bool,Bool,Val{:print}}},typeof(PrettyTables._parse_cell_text),String})
Base.precompile(Tuple{Core.kwftype(typeof(PrettyTables._parse_cell_text)),NamedTuple{(:autowrap, :cell_data_type, :cell_first_line_only, :column_width, :compact_printing, :has_color, :linebreaks, :renderer),Tuple{Bool,DataType,Bool,Int64,Bool,Bool,Bool,Val{:print}}},typeof(PrettyTables._parse_cell_text),Symbol})
Base.precompile(Tuple{Core.kwftype(typeof(PrettyTables._parse_cell_text)),NamedTuple{(:autowrap, :cell_data_type, :cell_first_line_only, :column_width, :compact_printing, :has_color, :linebreaks, :renderer),Tuple{Bool,DataType,Bool,Int64,Bool,Bool,Bool,Val{:print}}},typeof(PrettyTables._parse_cell_text),UndefInitializer})
Base.precompile(Tuple{Core.kwftype(typeof(PrettyTables._parse_cell_text)),NamedTuple{(:autowrap, :cell_data_type, :cell_first_line_only, :column_width, :compact_printing, :has_color, :linebreaks, :renderer),Tuple{Bool,DataType,Bool,Int64,Bool,Bool,Bool,Val{:print}}},typeof(PrettyTables._parse_cell_text),UnitRange{Int64}})
Base.precompile(Tuple{typeof(PrettyTables._fill_row_name_column!),Array{String,2},Array{Int64,2},Array{Array{String,1},2},Array{Array{Int64,1},2},Array{Int64,1},Array{String,1},Array{Int64,1},Array{Int64,1},Int64,Bool,Val{:print},String})
Base.precompile(Tuple{typeof(PrettyTables._print_table_data),IOContext{Base.GenericIOBuffer{Array{UInt8,1}}},PrettyTables.Display,PrettyTables.ColumnTable,Array{Array{String,1},2},Array{Array{Int64,1},2},Array{Int64,1},Array{Int64,1},Int64,Array{Int64,1},Array{Int64,1},Array{Union{NTuple{4,Int64}, Symbol},1},Array{Union{Int64, Symbol},1},Array{Symbol,1},NTuple{4,Char},Tuple{},Symbol,Int64,Tuple{PrettyTables.Highlighter},Bool,Bool,Bool,PrettyTables.TextFormat,Crayons.Crayon,Crayons.Crayon,Crayons.Crayon})
Base.precompile(Tuple{typeof(PrettyTables._print_table_header!),IOContext{Base.GenericIOBuffer{Array{UInt8,1}}},PrettyTables.Display,SubArray{String,1,Array{String,1},Tuple{StepRange{Int64,Int64}},true},Array{String,2},Array{Int64,2},Array{Int64,1},Array{Int64,1},Int64,Int64,Array{Int64,1},Array{Int64,1},Array{Symbol,1},Array{Symbol,1},Tuple{},Bool,Bool,PrettyTables.TextFormat,Crayons.Crayon,Array{Crayons.Crayon,1},Array{Crayons.Crayon,1},Crayons.Crayon,Crayons.Crayon})
Base.precompile(Tuple{typeof(copy),Base.Broadcast.Broadcasted{Base.Broadcast.DefaultArrayStyle{1},Tuple{Base.OneTo{Int64}},typeof(PrettyTables.compact_type_str),Tuple{Array{Any,1}}}})
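If you want to generate such a list yourself, the SnoopCompile workflow is roughly the following (a sketch from memory; the workload script name and output path are placeholders, and the exact API is described in the SnoopCompile docs):

using SnoopCompile

# Record method signatures whose type inference takes at least 10 ms while
# running a representative workload (here, an assumed script that prints a few tables).
inf_timings = @snoopi tmin=0.01 include("pretty_tables_workload.jl")

# Group the resulting signatures by the package that owns them ...
pc = SnoopCompile.parcel(inf_timings)

# ... and write one file of Base.precompile(...) statements per package.
SnoopCompile.write("/tmp/precompiles", pc)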
@nalimilan normally precompile statements are divided by Julia version and architecture in the packages I have looked at in the past. Why is this not required here?
I would say it would be desirable, but it is simply hard to do. We removed 32-bit specific precompilation statements and things that fail on Julia 1.0 manually. Indeed some things might fail on e.g. Julia 1.3, which we do not check, but if we get such a report I will quickly release a patch. Let us hope that for the 1.0 release we will have a more robust workflow for this.
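For illustration, the manual guards look roughly like this (a sketch only; the signatures shown are made up and are not taken from the generated file):

# Skip Int64-specific signatures on 32-bit Julia, where they would fail.
if Int === Int64
    Base.precompile(Tuple{typeof(sum), Vector{Int64}})
end

# Skip statements known to fail on old Julia versions.
if VERSION >= v"1.1"
    Base.precompile(Tuple{typeof(string), Int, Int})
end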
I'm not super familiar with precompilation, but if you look at the FixedPointNumbers example that's not the case. AFAICT the only issue with changing Julia versions is that the

BTW, I've just noticed that Plots.jl uses CompileBot, which appears to automate what I did here. I guess we could use that in the future (not necessarily via a GitHub Action as running locally works too).
@bkamins and @nalimilan one question. I added precompilation locally here to PrettyTables.jl and I could reduce the time to render the first table from 1.8s to 1.0s. However, the loading time (
I guess it is OK as it reduces total time to first plot 😄. The question is by how much
I think it is probably linear, like, it will increase by 0.22s here.
Not necessarily. In my experience there can be some differences, as packages might have the same dependencies, for instance.
Ok! Let me try.
Without precompilation of PrettyTables:
With precompilation of PrettyTables:
IMHO, unless we can improve the precompilation, I do not see a real gain here.
yeah - it is roughly the same
Yeah it doesn't seem great. What did you include in the precompilation phase?
I only used the default options of CompileBot. Maybe I should use SnoopCompile directly?
I've never used CompileBot. Did it generate many precompile statements?
Yes, there are a lot of statements! All those lines you pasted were also generated by CompileBot.
OK. So I guess there's nothing to do at this point then. (Precompilation only stores the lowered Julia code currently, so LLVM still has to compile it for the specific CPU, and depending on cases that may be the larger share of the work.)
Some timings on a clean Julia 1.5 session:
@jangorecki Does it sound OK to you to spend 277 seconds on precompilation before running the H2Oai benchmarks? We could precompile only the methods which are relevant for you if you prefer.
See #1502.