
Remove global variables #322

Merged
5 commits merged into jump-dev:master on Sep 10, 2019

Conversation

@ericphanson ericphanson commented Aug 31, 2019

Addresses half of #83 by removing one of the two global variables Convex uses. (Edit: second half addressed in a later commit in this PR; see #322 (comment)).

Currently (i.e. on master), id_to_variables is a global dictionary associating the id_hash field of Variables to the Variable object itself, forming a global registry of variables. This allows one nice feature of Convex: Variables don't have to be explicitly added to a problem, since they are accessible to all problems via the global registry. To populate the registry, each variable must add itself to id_to_variables upon creation.

The aim here is to replace this global registry with a local one for each problem. This has the advantage that when all the problems using a particular variable go out of scope, the variable itself can be GC'd. With the global registry, no variable can ever be GC'd, because it could be accessed via the registry. This causes applications using lots of problems to run out of memory (e.g. solving a new problem for every pixel in an image).

It turns out, however, that Convex already creates a sort of local registry for each problem to hold constraints and expressions: unique_conic_forms. Thus, all we need to do is also store variables in this local registry. Variables already have their conic form cached in unique_conic_forms; we just have to store the id_to_variables association as well at cache time. It took me several iterations to figure out this obvious solution! (Previous iterations involved traversing the problem tree separately to find and register variables, and were less performant.)

The only annoyance is that we need to define Variable before UniqueConicForms to have the type information available to define UniqueConicForms, but we need to define conic_form! for a Variable after we define UniqueConicForms. My solution was to make a separate file for the latter, put the rest of variable.jl first, then conic_form.jl, then this new file. This unfortunately causes a more complicated diff; the only change in this conic_form! method is the addition of the line

 add_to_id_to_variables!(unique_conic_forms, x)

which adds the variable x to the local registry now stored in unique_conic_forms.
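The shape of this change can be sketched as follows. This is a simplified stand-in, not the actual Convex.jl definitions: the field layout, the `Variable` constructor, and the omission of the conic-form caches are all assumptions for illustration; only the names `id_to_variables` and `add_to_id_to_variables!` come from the PR.

```julia
# Simplified sketch, not the real Convex.jl types: each problem owns its
# own registry, so a variable becomes collectable once every problem
# referencing it goes out of scope.
struct Variable
    id_hash::UInt64
    size::Tuple{Int,Int}
end

struct UniqueConicForms
    # ... caches for conic forms and constraints elided ...
    id_to_variables::Dict{UInt64,Variable}  # local, per-problem registry
end

UniqueConicForms() = UniqueConicForms(Dict{UInt64,Variable}())

# Called when a variable's conic form is cached (the one-line change above).
function add_to_id_to_variables!(forms::UniqueConicForms, x::Variable)
    forms.id_to_variables[x.id_hash] = x
    return forms
end
```

With this layout, dropping the last UniqueConicForms that references a variable also drops the last reference to the variable, so the GC can reclaim it; a module-level `const` Dict, by contrast, pins every variable forever.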

Note 1: conic_form! functions have been written with an optional second argument, as, e.g.

function conic_form!(x::Variable, unique_conic_forms::UniqueConicForms=UniqueConicForms())

The second argument should not be optional; it's actually crucial that we pass along the unique_conic_forms generated during Convex.conic_problem, because we take the constraints placed in that object as the constraints on the problem. So if the optional argument actually gets used, we won't be adding constraints to the problem during that conic_form!. The good news, however, is that the optional argument is never used; one can delete every instance of =UniqueConicForms() (besides the one in Convex.conic_problem), and all the tests pass. Thus, I think these optional arguments should be made mandatory. I'll do that in a separate PR, though, to not complicate the diff here.
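The hazard here is a general one, and can be demonstrated without any Convex.jl internals. The snippet below is a generic illustration (the `add_constraint!` helper is hypothetical, invented for this example): a defaulted accumulator argument silently builds a fresh collection on each defaulted call, so anything recorded there never reaches the caller.

```julia
# Generic Julia illustration (not Convex internals): with a defaulted
# accumulator, the defaulted call records into a throwaway vector.
add_constraint!(constraints::Vector{String} = String[]) =
    (push!(constraints, "x >= 0"); constraints)

problem_constraints = String[]
add_constraint!(problem_constraints)  # recorded where the caller can see it
add_constraint!()                     # recorded in a fresh vector, then lost
length(problem_constraints)           # 1, not 2
```

This is exactly why the defaulted `=UniqueConicForms()` is dangerous even though it happens to be unused today.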

Note 2: by removing the need for variables to register themselves in the global registry, we present a simpler interface for custom variables once we have #313. That PR thus will need to be updated, but it'll be better for it.

This is priority 2 of #320.

@ericphanson ericphanson changed the title Remove id to variables Remove global variable id_to_variables Aug 31, 2019
@ericphanson ericphanson requested review from ararslan and removed request for ararslan August 31, 2019 23:27
@ericphanson (Collaborator Author)

Hmm, the benchmarks didn't run on Travis. That means something's wrong with the .travis.yml I added in #321.

@ericphanson (Collaborator Author)

I asked for help with the benchmarks here: https://discourse.julialang.org/t/pkgbenchmark-jl-workflow/24472/4?u=ericphanson.

@tkf commented Sep 1, 2019

I guess you also put ' after .jl by accident in those two lines:

https://github.com/JuliaOpt/Convex.jl/blob/ee2dcd83c03da3556e6fe9a59ed881f023664115/.travis.yml#L20-L23

Also, you might need to instantiate the project before those commands, like this:

julia --project=benchmark -e 'using Pkg; Pkg.instantiate()'

@tkf commented Sep 1, 2019

Note also that --project=benchmark may not work if you have different Project.toml/Manifest.toml files in a PR and the master since PkgBenchmark would change those files between benchmarks.

@ericphanson (Collaborator Author)

@tkf thank you very much for the help! I made a PR with the changes you suggested: #323

Note also that --project=benchmark may not work if you have different Project.toml/Manifest.toml files in a PR and the master since PkgBenchmark would change those files between benchmarks.

I've tried to work around this possibility by only having PkgBenchmark in that environment, without a Manifest.toml checked in. Then in the actual benchmarks.jl file I put at the top

using Pkg
tempdir = mktempdir()                    # scratch environment, outside the repo
Pkg.activate(tempdir)                    # so nothing below dirties checked-in files
Pkg.develop(PackageSpec(path=joinpath(@__DIR__, "..")))  # the local Convex.jl checkout
Pkg.add(["BenchmarkTools", "SCS", "ECOS", "PkgBenchmark"])
Pkg.resolve()

My thought is then if the benchmarking script gets anything dirty, it should be that tmp directory. (So I just need runjudge.jl to not get anything dirty, which I think is true).

@tkf commented Sep 1, 2019

If you want to use the latest of everything I think that's a good way to go. I was implicitly assuming that Manifest.toml was checked in (which is my preferred way).

@ericphanson (Collaborator Author)

If you want to use the latest of everything I think that's a good way to go. I was implicitly assuming that Manifest.toml was checked in (which is my preferred way).

Ok, that makes sense. I think if we do run into compat problems, we can just specify versions / upgrade levels in the PackageSpecs. I am slightly against checking in a manifest for this particular repo, because there can be long periods of inactivity between PRs, and it would be nice to get benchmark data on the latest versions without having to do anything (i.e. update a manifest), or at least on the latest semver-compatible versions (which can be enforced by PackageSpecs).
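If compat problems do arise, pinning via a PackageSpec in benchmarks.jl might look like the sketch below. The version number is purely a placeholder assumption, not an actual pin used by the repo.

```julia
using Pkg
# Hedged sketch: v"0.6" is an illustrative placeholder, not a real pin.
# Constraining the version keeps the benchmark environment within a
# semver-compatible release line instead of taking the absolute latest.
spec = PackageSpec(name = "SCS", version = v"0.6")
# In benchmarks.jl one would then call Pkg.add(spec) instead of Pkg.add("SCS").
```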

@tkf commented Sep 1, 2019

I am slightly against checking in a manifest

FYI, I was very against checking in manifests but now I do this everywhere :) Of course, I'm not arguing you should or anything, but I'd say it's been quite useful for me (especially when I want to reliably test against non-released versions of upstream packages).

@ericphanson (Collaborator Author)

Restarting CI since #323 has been merged.

@ericphanson ericphanson closed this Sep 1, 2019
@ericphanson ericphanson reopened this Sep 1, 2019
@ericphanson (Collaborator Author)

Benchmark judgement (removing *constraints benchmarks which are likely to be spurious; see #323):

                                      ID  time ratio memory ratio
–––––––––––––––––––––––––––––––––––––––– ––––––––––– ––––––––––––
             ["ECOS", "affine_add_atom"]   1.01 (5%)  1.01 (1%) ❌
            ["ECOS", "affine_diag_atom"]   0.99 (5%)  1.01 (1%) ❌
             ["ECOS", "affine_dot_atom"]   0.98 (5%)  1.01 (1%) ❌
          ["ECOS", "affine_negate_atom"]   0.98 (5%)  1.01 (1%) ❌
              ["ECOS", "affine_tr_atom"]   0.99 (5%)  1.01 (1%) ❌
              ["SCS", "affine_add_atom"]   1.02 (5%)  1.01 (1%) ❌
             ["SCS", "affine_diag_atom"]   1.01 (5%)  1.01 (1%) ❌
              ["SCS", "affine_dot_atom"]   1.04 (5%)  1.01 (1%) ❌
            ["SCS", "affine_index_atom"] 1.06 (5%) ❌    1.01 (1%)
           ["SCS", "affine_negate_atom"]   1.01 (5%)  1.01 (1%) ❌
               ["SCS", "affine_tr_atom"]   1.01 (5%)  1.01 (1%) ❌
        ["SCS", "sdp_nuclear_norm_atom"] 0.92 (5%) ✅    1.00 (1%)
      ["formulation", "affine_add_atom"]   1.02 (5%)  1.02 (1%) ❌
     ["formulation", "affine_diag_atom"]   1.03 (5%)  1.02 (1%) ❌
      ["formulation", "affine_dot_atom"]   1.04 (5%)  1.02 (1%) ❌
    ["formulation", "affine_index_atom"]   1.01 (5%)  1.01 (1%) ❌
     ["formulation", "affine_mult_atom"]   1.03 (5%)  1.01 (1%) ❌
   ["formulation", "affine_negate_atom"]   0.99 (5%)  1.02 (1%) ❌
      ["formulation", "affine_sum_atom"]   1.00 (5%)  1.01 (1%) ❌
       ["formulation", "affine_tr_atom"]   0.99 (5%)  1.02 (1%) ❌
["formulation", "affine_transpose_atom"]   1.01 (5%)  1.01 (1%) ❌

@ericphanson (Collaborator Author)

See also #324 (comment) for context on benchmark reliability. They all look quite close to invariant to me; the ~1% extra memory use is possibly from having several local copies of id_to_variables that might contain the same variables, though I don't know whether that actually occurs during the benchmarks. I'll run the benchmarks locally too and post here later.

Seems OK to me given that this implementation is basically as minimal as possible in terms of cost: we make an extra dict for the local id_to_variables, then add each variable to it when we cache their conic form, and that's it. Don't really think we can do better, given we want a local registry of variables!

@ericphanson (Collaborator Author) commented Sep 1, 2019

I finally looked at the code for the other global variable, conic_constr_to_constr, and it's even easier to remove than id_to_variables, by following the same path. We just keep a local association from the conic constraint objects to the constraint objects by storing it in unique_conic_forms, instead of a global one. It's even simpler because we were always populating the global registry in places we already had access to unique_conic_forms (unlike at variable creation time which is what made the id_to_variables solution non-obvious at first to me).

So with the latest commit I pushed, this checks off the third priority on #320 and closes #83.

@ericphanson ericphanson changed the title Remove global variable id_to_variables Remove global variables Sep 1, 2019
@ericphanson (Collaborator Author) commented Sep 1, 2019

I ran benchmarks locally, and this is what I got:

  Results
  =========

  A ratio greater than 1.0 denotes a possible regression (marked with ❌),
  while a ratio less than 1.0 denotes a possible improvement (marked with ✅).
  Only significant results - results that indicate possible regressions or
  improvements - are shown below (thus, an empty table means that all
  benchmark results remained invariant between builds).

                                      ID  time ratio memory ratio
–––––––––––––––––––––––––––––––––––––––– ––––––––––– ––––––––––––
              ["ECOS", "GT_constraints"] 0.74 (5%) ✅    1.01 (1%)
              ["ECOS", "LT_constraints"] 0.75 (5%) ✅    1.01 (1%)
             ["ECOS", "affine_add_atom"]   1.01 (5%)  1.02 (1%) ❌
            ["ECOS", "affine_diag_atom"]   1.00 (5%)  1.02 (1%) ❌
             ["ECOS", "affine_dot_atom"]   1.01 (5%)  1.02 (1%) ❌
           ["ECOS", "affine_index_atom"]   1.01 (5%)  1.02 (1%) ❌
            ["ECOS", "affine_mult_atom"]   1.02 (5%)  1.02 (1%) ❌
          ["ECOS", "affine_negate_atom"]   1.02 (5%)  1.02 (1%) ❌
             ["ECOS", "affine_sum_atom"]   1.00 (5%)  1.02 (1%) ❌
              ["ECOS", "affine_tr_atom"]   1.01 (5%)  1.02 (1%) ❌
       ["ECOS", "affine_transpose_atom"]   1.01 (5%)  1.01 (1%) ❌
        ["ECOS", "equality_constraints"] 0.73 (5%) ✅    1.01 (1%)
               ["SCS", "GT_constraints"] 0.81 (5%) ✅    1.01 (1%)
               ["SCS", "LT_constraints"] 0.82 (5%) ✅    1.01 (1%)
              ["SCS", "SDP_constraints"] 0.83 (5%) ✅    1.00 (1%)
              ["SCS", "affine_add_atom"]   1.01 (5%)  1.02 (1%) ❌
             ["SCS", "affine_diag_atom"]   1.01 (5%)  1.02 (1%) ❌
              ["SCS", "affine_dot_atom"]   1.01 (5%)  1.02 (1%) ❌
            ["SCS", "affine_index_atom"]   1.03 (5%)  1.02 (1%) ❌
             ["SCS", "affine_mult_atom"]   1.01 (5%)  1.02 (1%) ❌
           ["SCS", "affine_negate_atom"]   1.01 (5%)  1.03 (1%) ❌
              ["SCS", "affine_sum_atom"]   1.01 (5%)  1.02 (1%) ❌
               ["SCS", "affine_tr_atom"]   1.03 (5%)  1.02 (1%) ❌
        ["SCS", "affine_transpose_atom"]   1.04 (5%)  1.02 (1%) ❌
         ["SCS", "equality_constraints"] 0.81 (5%) ✅    1.01 (1%)
      ["formulation", "affine_add_atom"]   1.03 (5%)  1.03 (1%) ❌
     ["formulation", "affine_diag_atom"]   1.01 (5%)  1.04 (1%) ❌
      ["formulation", "affine_dot_atom"]   1.02 (5%)  1.04 (1%) ❌
    ["formulation", "affine_index_atom"]   0.99 (5%)  1.03 (1%) ❌
     ["formulation", "affine_mult_atom"]   1.00 (5%)  1.03 (1%) ❌
   ["formulation", "affine_negate_atom"]   0.99 (5%)  1.04 (1%) ❌
      ["formulation", "affine_sum_atom"]   1.00 (5%)  1.02 (1%) ❌
       ["formulation", "affine_tr_atom"]   1.00 (5%)  1.04 (1%) ❌
["formulation", "affine_transpose_atom"]   1.01 (5%)  1.02 (1%) ❌
  ["formulation", "sdp_lambda_max_atom"]   1.00 (5%)  1.01 (1%) ❌
  ["formulation", "sdp_lambda_min_atom"]   1.00 (5%)  1.01 (1%) ❌

  Benchmark Group List
  ======================

  Here's a list of all the benchmark groups executed by this job:

    *    ["ECOS"]

    *    ["SCS"]

    *    ["formulation"]

  Julia versioninfo
  ===================

  Target
  ––––––––

  Julia Version 1.2.0
  Commit c6da87ff4b (2019-08-20 00:03 UTC)
  Platform Info:
    OS: macOS (x86_64-apple-darwin18.6.0)
    uname: Darwin 18.6.0 Darwin Kernel Version 18.6.0: Thu Apr 25 23:16:27 PDT 2019; root:xnu-4903.261.4~2/RELEASE_X86_64 x86_64 i386
    CPU: Intel(R) Core(TM) i5-5257U CPU @ 2.70GHz: 
                speed         user         nice          sys         idle          irq
         #1  2700 MHz     290591 s          0 s     166911 s     919441 s          0 s
         #2  2700 MHz     147156 s          0 s      62721 s    1167024 s          0 s
         #3  2700 MHz     309618 s          0 s     144636 s     922648 s          0 s
         #4  2700 MHz     135593 s          0 s      56068 s    1185240 s          0 s
         
    Memory: 8.0 GB (1924.359375 MB free)
    Uptime: 344259.0 sec
    Load Avg:  1.275390625  1.64404296875  2.4560546875
    WORD_SIZE: 64
    LIBM: libopenlibm
    LLVM: libLLVM-6.0.1 (ORCJIT, broadwell)

  Baseline
  ––––––––––

  Julia Version 1.2.0
  Commit c6da87ff4b (2019-08-20 00:03 UTC)
  Platform Info:
    OS: macOS (x86_64-apple-darwin18.6.0)
    uname: Darwin 18.6.0 Darwin Kernel Version 18.6.0: Thu Apr 25 23:16:27 PDT 2019; root:xnu-4903.261.4~2/RELEASE_X86_64 x86_64 i386
    CPU: Intel(R) Core(TM) i5-5257U CPU @ 2.70GHz: 
                speed         user         nice          sys         idle          irq
         #1  2700 MHz     292496 s          0 s     167002 s     921407 s          0 s
         #2  2700 MHz     147269 s          0 s      62744 s    1170851 s          0 s
         #3  2700 MHz     312058 s          0 s     144701 s     924106 s          0 s
         #4  2700 MHz     135662 s          0 s      56084 s    1189117 s          0 s
         
    Memory: 8.0 GB (1942.05078125 MB free)
    Uptime: 344655.0 sec
    Load Avg:  1.17529296875  1.3779296875  2.0244140625
    WORD_SIZE: 64
    LIBM: libopenlibm
    LLVM: libLLVM-6.0.1 (ORCJIT, broadwell)

I first closed all programs, exited Dropbox, and turned off the internet, to get as stable a benchmarking environment as possible on my MacBook. I'm going to try comparing master-to-master with the same methodology to get some reliability info (e.g. why are those *constraints tests so much faster? Is that due to code changes or just something weird in the benchmarking?). I'll use the same tuning file too.

@ericphanson (Collaborator Author)

Running master v master benchmarks locally, I get 100% reliability (i.e. all benchmarks deemed invariant):

Results
  =========

  A ratio greater than 1.0 denotes a possible regression (marked with ❌), while a ratio less than 1.0 denotes a possible improvement (marked with ✅). Only significant results -
  results that indicate possible regressions or improvements - are shown below (thus, an empty table means that all benchmark results remained invariant between builds).

                                        ID  time ratio memory ratio
  –––––––––––––––––––––––––––––––––––––––– ––––––––––– ––––––––––––
  (empty: all benchmarks invariant)

  Benchmark Group List
  ======================

  Here's a list of all the benchmark groups executed by this job:

    •    ["ECOS"]

    •    ["SCS"]

    •    ["formulation"]

  Julia versioninfo
  ===================

  Target
  ––––––––

  Julia Version 1.2.0
  Commit c6da87ff4b (2019-08-20 00:03 UTC)
  Platform Info:
    OS: macOS (x86_64-apple-darwin18.6.0)
    uname: Darwin 18.6.0 Darwin Kernel Version 18.6.0: Thu Apr 25 23:16:27 PDT 2019; root:xnu-4903.261.4~2/RELEASE_X86_64 x86_64 i386
    CPU: Intel(R) Core(TM) i5-5257U CPU @ 2.70GHz: 
                speed         user         nice          sys         idle          irq
         #1  2700 MHz     298044 s          0 s     167949 s     931952 s          0 s
         #2  2700 MHz     147979 s          0 s      62968 s    1186957 s          0 s
         #3  2700 MHz     319416 s          0 s     145396 s     933091 s          0 s
         #4  2700 MHz     136252 s          0 s      56276 s    1205375 s          0 s
         
    Memory: 8.0 GB (1511.25 MB free)
    Uptime: 347721.0 sec
    Load Avg:  1.30908203125  1.44287109375  1.5361328125
    WORD_SIZE: 64
    LIBM: libopenlibm
    LLVM: libLLVM-6.0.1 (ORCJIT, broadwell)

  Baseline
  ––––––––––

  Julia Version 1.2.0
  Commit c6da87ff4b (2019-08-20 00:03 UTC)
  Platform Info:
    OS: macOS (x86_64-apple-darwin18.6.0)
    uname: Darwin 18.6.0 Darwin Kernel Version 18.6.0: Thu Apr 25 23:16:27 PDT 2019; root:xnu-4903.261.4~2/RELEASE_X86_64 x86_64 i386
    CPU: Intel(R) Core(TM) i5-5257U CPU @ 2.70GHz: 
                speed         user         nice          sys         idle          irq
         #1  2700 MHz     299746 s          0 s     168027 s     934117 s          0 s
         #2  2700 MHz     148076 s          0 s      62990 s    1190782 s          0 s
         #3  2700 MHz     322054 s          0 s     145451 s     934344 s          0 s
         #4  2700 MHz     136308 s          0 s      56290 s    1209251 s          0 s
         
    Memory: 8.0 GB (1488.34765625 MB free)
    Uptime: 348115.0 sec
    Load Avg:  1.27490234375  1.35595703125  1.458984375
    WORD_SIZE: 64
    LIBM: libopenlibm
    LLVM: libLLVM-6.0.1 (ORCJIT, broadwell)

That's quite interesting, because it means the differences in #322 (comment) are likely meaningful, including the speed ups for large amounts of constraints.

@@ -83,12 +86,9 @@ include("utilities/show.jl")
include("utilities/iteration.jl")
include("utilities/broadcast.jl")

-#Temporary workaround for memory leak (https://github.com/JuliaOpt/Convex.jl/issues/83)
+# Deprecated workaround for memory leak (https://github.com/JuliaOpt/Convex.jl/issues/83)
 function clearmemory()
Contributor:
Given that GC.gc() should never be mandatory in user code for a library like Convex, the function could be removed altogether.

Collaborator Author:

Yeah, that's true. I thought it should at least be deprecated though, to give users time to remove it so they aren't hit with errors upon upgrading Convex if they have Convex.clearmemory() in their scripts. I wasn't sure about leaving in the GC.gc() or not; I agree they shouldn't need to be using it, but it was something that Convex.clearmemory() was doing, so I thought maybe I should point to it as a replacement in case somehow that's a key part of whatever they are doing.

Collaborator Author:

I changed the wording and removed the GC.gc() bit. I don't see any harm in leaving a deprecation message in for a month or two though, to let users comfortably remove the function without getting errors.
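A no-op deprecation shim along these lines would achieve that. This is a hedged sketch of the idea, not the code from the PR; the warning text is an assumption and may differ from the actual wording.

```julia
# Hedged sketch of a no-op deprecation (actual message in the PR may differ):
# the global registries are gone, so there is nothing left to clear.
function clearmemory()
    Base.depwarn("Convex.clearmemory is deprecated and no longer needed; " *
                 "Convex no longer keeps global state.", :clearmemory)
    return nothing
end
```

Callers with `Convex.clearmemory()` in existing scripts keep running, see a deprecation warning (under `--depwarn=yes`), and can delete the call at their leisure.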

@ericphanson (Collaborator Author)

I ran benchmarks for this branch against master again, locally, and got somewhat different results:

                                      ID  time ratio memory ratio
–––––––––––––––––––––––––––––––––––––––– ––––––––––– ––––––––––––
             ["ECOS", "affine_add_atom"]   0.99 (5%)  1.02 (1%) ❌
            ["ECOS", "affine_diag_atom"]   1.00 (5%)  1.02 (1%) ❌
             ["ECOS", "affine_dot_atom"]   1.01 (5%)  1.02 (1%) ❌
           ["ECOS", "affine_index_atom"]   1.02 (5%)  1.02 (1%) ❌
            ["ECOS", "affine_mult_atom"]   1.00 (5%)  1.02 (1%) ❌
          ["ECOS", "affine_negate_atom"]   1.01 (5%)  1.02 (1%) ❌
             ["ECOS", "affine_sum_atom"]   1.03 (5%)  1.02 (1%) ❌
              ["ECOS", "affine_tr_atom"]   1.00 (5%)  1.02 (1%) ❌
       ["ECOS", "affine_transpose_atom"]   1.02 (5%)  1.01 (1%) ❌
                ["SCS", "GT_constraint"] 1.10 (5%) ❌    1.00 (1%)
                ["SCS", "LT_constraint"] 0.91 (5%) ✅    1.00 (1%)
              ["SCS", "affine_add_atom"]   1.01 (5%)  1.02 (1%) ❌
             ["SCS", "affine_diag_atom"]   1.01 (5%)  1.02 (1%) ❌
              ["SCS", "affine_dot_atom"]   1.02 (5%)  1.02 (1%) ❌
            ["SCS", "affine_index_atom"]   1.02 (5%)  1.02 (1%) ❌
             ["SCS", "affine_mult_atom"]   1.02 (5%)  1.02 (1%) ❌
           ["SCS", "affine_negate_atom"]   1.00 (5%)  1.03 (1%) ❌
              ["SCS", "affine_sum_atom"]   1.02 (5%)  1.02 (1%) ❌
               ["SCS", "affine_tr_atom"]   1.01 (5%)  1.02 (1%) ❌
        ["SCS", "affine_transpose_atom"]   0.97 (5%)  1.02 (1%) ❌
       ["formulation", "GT_constraints"] 1.48 (5%) ❌    1.01 (1%)
       ["formulation", "LT_constraints"] 1.44 (5%) ❌    1.01 (1%)
      ["formulation", "affine_add_atom"]   1.01 (5%)  1.03 (1%) ❌
     ["formulation", "affine_diag_atom"]   1.04 (5%)  1.04 (1%) ❌
      ["formulation", "affine_dot_atom"]   1.01 (5%)  1.04 (1%) ❌
    ["formulation", "affine_index_atom"]   0.99 (5%)  1.03 (1%) ❌
     ["formulation", "affine_mult_atom"]   1.01 (5%)  1.03 (1%) ❌
   ["formulation", "affine_negate_atom"]   0.98 (5%)  1.04 (1%) ❌
      ["formulation", "affine_sum_atom"]   1.04 (5%)  1.02 (1%) ❌
       ["formulation", "affine_tr_atom"]   1.01 (5%)  1.04 (1%) ❌
["formulation", "affine_transpose_atom"]   1.03 (5%)  1.02 (1%) ❌
 ["formulation", "equality_constraints"] 1.49 (5%) ❌    1.01 (1%)
  ["formulation", "sdp_lambda_max_atom"]   0.99 (5%)  1.01 (1%) ❌
  ["formulation", "sdp_lambda_min_atom"]   1.02 (5%)  1.01 (1%) ❌

The only consistent thing is a bit more memory usage in all of these. Note, however, there are 66 benchmarks, so the other 32 were invariant. The *constraints benchmarks involve setting 1000 separate constraints on a vector variable of length 1000, so they involve a lot of allocations and seem quite inconsistent.

I think I'm ok with the small perf tradeoff in order to close #83. (Previous local attempts to solve this involved re-traversing the problem tree for 1.5-2x slowdowns, which is what led to benchmarking and finding the simple solution in this PR).

@ericphanson (Collaborator Author)

I'll merge this in 48 hours if I don't hear any objections before then.

@ericphanson ericphanson merged commit b98580c into jump-dev:master Sep 10, 2019
@ericphanson ericphanson deleted the remove-id-to-variables branch September 27, 2019 09:38
ericphanson added a commit to ericphanson/Convex.jl that referenced this pull request Oct 13, 2019
ericphanson added a commit to ericphanson/Convex.jl that referenced this pull request Oct 28, 2019
ericphanson added a commit to ericphanson/Convex.jl that referenced this pull request Nov 27, 2019
ericphanson added a commit to ericphanson/Convex.jl that referenced this pull request Dec 1, 2019