Adding OMEinsumContractionOrders.jl as a backend of TensorOperations.jl for finding the optimal contraction order #185
Conversation
PS: currently, the contents of this PR rely on the latest code in the ...
Hi Xuanzhao,
Thanks a lot.
Codecov Report — Attention: Patch coverage is ...
Additional details and impacted files:
@@ Coverage Diff @@
## master #185 +/- ##
==========================================
+ Coverage 83.06% 83.50% +0.43%
==========================================
Files 25 29 +4
Lines 2150 2546 +396
==========================================
+ Hits 1786 2126 +340
- Misses 364 420 +56
☔ View full report in Codecov by Sentry.
I have added some benchmark results; they are stored at https://github.com/ArrogantGao/OMEinsumContractionOrders_Benchmark. For some of the small graphs provided by Graph.jl, I tested the complexity obtained by the different optimizers, where the vertices of the graph are taken as tensors and the edges as indices, and the bond dimensions are set to 16; the results are shown below.
For a large tensor network, such as the one mentioned in #63, which corresponds to a C60 sphere, ...
Overall, I think this is a great addition to the package. Our CI complains that there is a formatting error somewhere; you could fix this by running import JuliaFormatter; format("TensorOperations/src/").
As promised, let me also add my "wishlist". Note that you should definitely not feel obliged to do all of this or do it by yourself; it is just for completeness that I am adding it.
As we discussed, it would make a lot of sense to have a way of using OMEinsumContractionOrders in the context of ncon, where the dynamic setting gives access to the sizes of the tensors automatically.
I would suggest hooking into this line:
function ncon(tensors, network, conjlist=fill(false, length(tensors));
              order=nothing, output=nothing, kwargs...)
    # ...
    tree = contractiontree(network, order)
    return ncon(tensors, network, conjlist, tree; kwargs...)
end

# no order given -> fall back to the existing ncontree heuristic
contractiontree(network, ::Nothing) = ncontree(network)
# an explicit order given as a tuple of labels -> build the tree from that order
contractiontree(network, order::NTuple{N,A}) where {N,A<:LabelType} = indexordertree(network, order)

function ncon(tensors, network, conjlist, tree; kwargs...)
    # copy previous implementation
end

# in the extension:
function contractiontree(network, eincodeoptimizer::T) where {T<:Eincodeoptimizer}
    # implementation here
end
function ncon(tensors, network, conjlist, tree::NestedEinsum; kwargs...)
    # check if binary contraction tree, otherwise error
    # check that no multi-indices exist
    # implementation based on the Einsum tree
end
(Keep in mind that I am not too familiar with the OMEinsum objects, so there might be mistakes in this pseudocode.)
This approach has the benefit that it is easily extensible, and would also allow "expert usage" in the sense that if you want to cache the optimized contraction order, you can directly dispatch to the four-argument ncon.
There are definitely some bumps and kinks with the traces etc., but this might make a good first implementation.
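As a rough pseudocode illustration of that expert usage (contractiontree and the four-argument ncon are the hypothetical functions sketched above):

```julia
# pseudocode, continuing the sketch above: build the tree once, reuse it many times
tree = contractiontree(network, optimizer)    # `optimizer` standing in for an OMEinsum-style optimizer
for tensors in tensor_batches                 # same network, different tensor data
    result = ncon(tensors, network, fill(false, length(tensors)), tree)
end
```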
try
    @assert TDV <: Number
catch
    throw(ArgumentError("The values of the optdata dictionary must be of type Number"))
end
This is a bit of a strange construction: try ... catch ... end typically means you would handle the error it throws, not rethrow a new error. You can add a message to the assertion error too:
- try
-     @assert TDV <: Number
- catch
-     throw(ArgumentError("The values of the optdata dictionary must be of type Number"))
- end
+ @assert TDV <: Number "The values of `optdata` must be `<:Number`"
Sorry for the mistake; now I simply use @assert instead of try ... catch ... end.
@@ -85,6 +86,12 @@ function tensorparser(tensorexpr, kwargs...)
            val in (:warn, :cache) ||
                throw(ArgumentError("Invalid use of `costcheck`, should be `costcheck=warn` or `costcheck=cache`"))
            parser.contractioncostcheck = val
        elseif name == :opt_algorithm
I think here you will have to be a little careful: in principle there is no order to the keyword arguments. If I am not mistaken, if the user first supplies opt=(a = 2, b = 2, ...) and only afterwards opt_algorithm=..., the algorithm will be ignored.
My best guess is that you probably want to extract an optimizer and optdict, and only after all kwargs have been parsed can you construct the contractiontreebuilder.
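A minimal standalone sketch of that two-pass idea (the function and keyword handling here are illustrative, not the actual parser code):

```julia
# Collect `opt` and `opt_algorithm` from all keyword arguments first; only after
# every keyword has been seen are the two combined, so their order does not matter.
function parse_opt_kwargs(kwargs)
    optdict, optimizer = nothing, nothing
    for (name, val) in pairs(kwargs)
        if name === :opt
            optdict = val
        elseif name === :opt_algorithm
            optimizer = val
        end
        # ... other keywords (costcheck, backend, ...) handled as before ...
    end
    return optdict, optimizer
end

parse_opt_kwargs((opt = (a = 2, b = 2), opt_algorithm = :GreedyMethod))  # -> ((a = 2, b = 2), :GreedyMethod)
parse_opt_kwargs((opt_algorithm = :GreedyMethod, opt = (a = 2, b = 2)))  # same result, order-independent
```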
Thank you very much for pointing that out; I did not notice that previously.
In the revised version, the contractiontreebuilder is constructed after all other kwargs have been parsed.
test/omeinsumcontractionordres.jl (outdated)
@tensor opt = (a => 5, b => 5, c => 5, d => 5, e => 5, f => 5, g => 5) opt_algorithm = GreedyMethod begin
    D2[a, b, c, d] := A[a, e, c, f] * B[g, d, e] * C[g, f, b]
end

@tensor opt = (a => 5, b => 5, c => 5, d => 5, e => 5, f => 5, g => 5) opt_algorithm = TreeSA begin
    D3[a, b, c, d] := A[a, e, c, f] * B[g, d, e] * C[g, f, b]
end

@tensor opt = (a => 5, b => 5, c => 5, d => 5, e => 5, f => 5, g => 5) opt_algorithm = KaHyParBipartite begin
    D4[a, b, c, d] := A[a, e, c, f] * B[g, d, e] * C[g, f, b]
end

@tensor opt = (a => 5, b => 5, c => 5, d => 5, e => 5, f => 5, g => 5) opt_algorithm = SABipartite begin
    D5[a, b, c, d] := A[a, e, c, f] * B[g, d, e] * C[g, f, b]
end

@tensor opt = (a => 5, b => 5, c => 5, d => 5, e => 5, f => 5, g => 5) opt_algorithm = ExactTreewidth begin
    D6[a, b, c, d] := A[a, e, c, f] * B[g, d, e] * C[g, f, b]
end

@tensor opt = (1 => 5, 2 => 5, 3 => 5, 4 => 5, 5 => 5, 6 => 5, 7 => 5) opt_algorithm = GreedyMethod begin
    D7[1, 2, 3, 4] := A[1, 5, 3, 6] * B[7, 4, 5] * C[7, 6, 2]
end
I think it could be nice if you could somehow check that the algorithms are indeed being used. Perhaps you can enable debug logging for this section, and check the logs for the debug message?
A debug message has been added; one can enable debug logging by setting ENV["JULIA_DEBUG"] = "TensorOperationsOMEinsumContractionOrdersExt".
The logging is enabled in the tests.
I meant to add that you can maybe verify that this worked using @test_logs? I am not sure about this however; there might be something weird with evaluation time because it is in a macro.
test/runtests.jl (outdated)
@testset "OMEinsumOptimizer extension" begin
    include("omeinsumcontractionordres.jl")
end
I think the compat with OMEinsum only allows julia >= v1.9, so you should probably also restrict the tests to that version
I changed the lowest version of Julia in CI to 1.9. Is there any reason that 1.8 needs to be supported?
Sorry, I did not explain properly, what I meant is:
if VERSION >= v"1.9"
    @testset "OMEinsumOptimizer extension" begin
        include("omeinsumcontractionordres.jl")
    end
end
I don't think there is any particular reason to support 1.8, but since it works anyway, and until 1.10 becomes the new LTS, I would just keep this supported.
Sure, I added this in the tests and changed the lowest supported version back to v1.8.
We managed to make OMEinsumContractionOrders.jl support Julia v1.8 in v0.9.2; I updated the compat so that CI for 1.8 should pass now.
Thank you very much for your detailed suggestions; I will revise the code according to the comments as soon as possible.
The following changes have been made:
I also formatted the code using JuliaFormatter.
For the ..., with ... and with ....
The former two functions first generate the .... In this way, I will not need to import the data structure from ....
Thanks for the changes, this looks great too!
I left some more comments about specific things in the meantime.
.github/workflows/ci.yml (outdated)
@@ -21,7 +21,7 @@ jobs:
      fail-fast: false
      matrix:
        version:
-          - '1.8' # lowest supported version
+          - '1.9' # lowest supported version
Suggested change:
-          - '1.9' # lowest supported version
+          - '1.8' # lowest supported version
src/implementation/ncon.jl (outdated)
@@ -1,5 +1,6 @@
"""
    ncon(tensorlist, indexlist, [conjlist, sym]; order = ..., output = ..., backend = ..., allocator = ...)
    ncon(tensorlist, indexlist, optimizer, conjlist; output=..., kwargs...)
Can we revert the argument order to:
- ncon(tensorlist, indexlist, optimizer, conjlist; output=..., kwargs...)
+ ncon(tensorlist, indexlist, conjlist, optimizer; output=..., kwargs...)
This may lead to wrong dispatch, since conjlist has a default value.
To avoid conflicts, I combined these two cases into
ncon(tensorlist, indexlist, [conjlist, sym]; order = ..., output = ..., optimizer = ..., backend = ..., allocator = ...)
where the optimizer is now a keyword argument, and we do not allow order and optimizer to be specified together.
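For illustration, a small sketch of the combined signature in use, assuming the keyword accepts one of the optimizer objects exported in this PR (e.g. GreedyMethodOptimizer()):

```julia
using TensorOperations, OMEinsumContractionOrders

A = randn(5, 5, 5); B = randn(5, 5, 5)

# explicit contraction order, as before
C1 = ncon([A, B], [[-1, 1, 2], [2, 1, -2]]; order=[1, 2])

# optimizer chosen via the new keyword instead (not allowed together with `order`)
C2 = ncon([A, B], [[-1, 1, 2], [2, 1, -2]]; optimizer=GreedyMethodOptimizer())
```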
src/implementation/ncon.jl (outdated)
optdata = Dict{Any,Number}()
for (i, ids) in enumerate(network)
    for (j, id) in enumerate(ids)
        optdata[id] = size(tensors[i], j)
Here, you probably want to use tensorstructure(tensors[i], j, conjlist[i]), as this is part of our interface. In particular, size might not always be defined for custom tensor types.
(See tensorstructure(A, iA, conjA) in TensorOperations.jl/src/interface.jl, line 188 at 2da3b04.)
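A sketch of how the loop from the diff above would look with that change (network, tensors, and conjlist are the variables already in scope in ncon):

```julia
optdata = Dict{Any,Number}()
for (i, ids) in enumerate(network)
    for (j, id) in enumerate(ids)
        # use the interface function instead of size, so custom tensor types work too
        optdata[id] = tensorstructure(tensors[i], j, conjlist[i])
    end
end
```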
Thank you very much for pointing that out; I had not noticed that the conjlist may change the size of the tensor.
src/implementation/ncon.jl (outdated)
    end
end

tree = optimaltree(network, optdata, TreeOptimizer{optimizer}(), false)[1]
Here, it seems a bit strange to me to keep the optimizer as a Symbol. Would it not make more sense to immediately pass the optimizer itself?
Previously I wanted to use that to avoid exporting the struct TreeOptimizer.
Now I export it together with the optimizers:
export TreeOptimizer, ExhaustiveSearchOptimizer, GreedyMethodOptimizer, KaHyParBipartiteOptimizer, TreeSAOptimizer, SABipartiteOptimizer, ExactTreewidthOptimizer
so that we can now directly pass the optimizer into the function.
Do you think it's a good idea?
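Schematically, the call site would change roughly as follows (assuming optimaltree accepts the exported objects directly; the exact names may differ):

```julia
# before: optimizer selected via a Symbol, wrapped in TreeOptimizer internally
tree = optimaltree(network, optdata, TreeOptimizer{:ExactTreewidth}(), false)[1]

# after: the exported optimizer object is passed directly
tree = optimaltree(network, optdata, ExactTreewidthOptimizer(), false)[1]
```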
test/omeinsumcontractionordres.jl (outdated)
using OMEinsumContractionOrders

# open the debug mode to check the optimization algorithms used
ENV["JULIA_DEBUG"] = "TensorOperationsOMEinsumContractionOrdersExt"
This does set it globally though, so if someone had this set to something else you would overwrite that here.
Perhaps you can make use of a with_logger clause to temporarily change the logger and enable debug messages?
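A possible sketch using the Logging standard library, so only the enclosed block sees the debug level:

```julia
using Logging

# temporarily enable debug-level messages for this block only,
# instead of setting ENV["JULIA_DEBUG"] globally
with_logger(ConsoleLogger(stderr, Logging.Debug)) do
    # run the @tensor / ncon test cases here; debug output from the
    # extension is shown while this block executes
end
```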
I tried @test_logs and with_logger; it seems they do not work correctly with the @tensor macro, possibly because of precompilation.
I removed the global setting and used @test_logs to test the ncon functions, which use the same optimaltree functions for contraction order optimization as @tensor.
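For reference, such a check could look roughly like this (the debug-message pattern is a placeholder, and the optimizer keyword follows the combined ncon signature described above):

```julia
using Test, Logging, TensorOperations, OMEinsumContractionOrders

A = randn(5, 5, 5); B = randn(5, 5, 5)

# the regex is a placeholder for whatever the extension's debug message contains
@test_logs (:debug, r"GreedyMethod") min_level=Logging.Debug match_mode=:any begin
    ncon([A, B], [[-1, 1, 2], [2, 1, -2]]; optimizer=GreedyMethodOptimizer())
end
```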
An extension OMEinsumContractionOrdersExt has been added, which provides OMEinsumContractionOrders.jl as a backend for optimizing the contraction order. We modified the @tensor macro and added a keyword opt_algorithm for selecting a specific optimizer. The default optimizer is NCon, which is the original one used in TensorOperations.jl. After the extension is loaded, the following optimizers are provided:
- GreedyMethod: greedily selects the pair of tensors with the smallest cost to contract at each step
- TreeSA: tree simulated annealing, based on local search and simulated annealing
- KaHyParBipartite and SABipartite: regard the tensor network as a hypergraph, partition the hypergraph into two parts, and then recursively partition each part
- ExactTreewidth: based on the exact treewidth solver TreeWidthSolver.jl, using the tree decomposition with minimal treewidth to construct a contraction order with optimal complexity

The former four optimizers are approximate and suitable for large networks, while the last one is an exact solver, only suitable for small networks.
Here is an example of use
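With both packages loaded, a usage sketch in the same style as the tests added in this PR (the tensor sizes are illustrative):

```julia
using TensorOperations, OMEinsumContractionOrders

A = randn(5, 5, 5, 5); B = randn(5, 5, 5); C = randn(5, 5, 5)

# pick the contraction order with the TreeSA optimizer from OMEinsumContractionOrders
@tensor opt = (a => 5, b => 5, c => 5, d => 5, e => 5, f => 5, g => 5) opt_algorithm = TreeSA begin
    D[a, b, c, d] := A[a, e, c, f] * B[g, d, e] * C[g, f, b]
end
```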