
WIP: Use FillArrays and LazyArrays to reduce memory footprint #258

Closed
ararslan wants to merge 2 commits from the aa/lazyarrays branch

Conversation

ararslan
Contributor

Currently sparse vectors and matrices are used in many places, but this can cause memory usage to skyrocket in certain circumstances. See for example issue #254.

This change uses LazyArrays and FillArrays to express matrix structures which don't need to be mutated or immediately materialized, leading to significant memory savings.

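To illustrate the idea (a sketch only, not code from this PR's diff): FillArrays types such as Fill, Ones, and Zeros store just a value and a size, and LazyArrays' Vcat defers concatenation, so structured blocks that would otherwise be allocated as sparse arrays can be kept in O(1) memory until they are actually needed.

using SparseArrays, FillArrays, LazyArrays

n = 10_000

# A constant vector stored as sparse keeps every nonzero value plus its index...
v_sparse = sparse(fill(2.0, n))

# ...while the FillArrays equivalent stores only the value and the length.
v_lazy = Fill(2.0, n)

# Concatenation can also stay lazy until the result is actually required:
block = Vcat(Fill(1.0, n), Zeros(n))   # no 2n-element array is allocated here
collect(block)                         # materializes to a plain Vector on demand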
@ararslan
Contributor Author

Note that this requires new tagged releases of both LazyArrays and FillArrays. Local testing was done using the master branches of both packages.
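For reference, a sketch of how the unreleased versions can be pulled in for local testing (this assumes the packages' registered GitHub repositories and is not part of the PR itself):

using Pkg
Pkg.add(PackageSpec(name="FillArrays", rev="master"))
Pkg.add(PackageSpec(name="LazyArrays", rev="master"))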

@iamed2
Contributor

iamed2 commented Dec 22, 2018

Any measurements?

@ararslan
Contributor Author

Right now this is just "throw laziness at the wall and see what sticks"; I'm mostly making sure that this approach doesn't full-on break anything. Doing long-running computations and monitoring top shows drastically decreased memory usage on this branch, but I don't have individual benchmarks at the moment. I'll do that more formally as I fine-tune this.

@ararslan
Contributor Author

ararslan commented Jan 2, 2019

Unfortunately, for the simplest problem:

using Convex

x = Variable(1)
p = minimize(2.0 * x, [x >= 2, x <= 4])

this PR provides only a modest reduction in memory use but roughly a 2x slowdown in execution time.

Master:

julia> @benchmark Convex.conic_problem($p)
BenchmarkTools.Trial: 
  memory estimate:  38.77 KiB
  allocs estimate:  703
  --------------
  minimum time:     34.333 μs (0.00% GC)
  median time:      42.053 μs (0.00% GC)
  mean time:        138.870 μs (68.14% GC)
  maximum time:     504.959 ms (99.98% GC)
  --------------
  samples:          10000
  evals/sample:     1

PR:

julia> @benchmark Convex.conic_problem($p)
BenchmarkTools.Trial: 
  memory estimate:  30.17 KiB
  allocs estimate:  613
  --------------
  minimum time:     66.901 μs (0.00% GC)
  median time:      76.468 μs (0.00% GC)
  mean time:        107.625 μs (25.10% GC)
  maximum time:     24.867 ms (99.13% GC)
  --------------
  samples:          10000
  evals/sample:     1

Pretty disappointing. I'll keep at this.

@ararslan
Contributor Author

ararslan commented Jan 3, 2019

Observation: LazyArrays has some facilities to act as a computation graph in some scenarios, e.g. with Mul representing a multiplication. There is a direct analog here with Convex's own MultiplyAtom. I wonder if there's a way to more fundamentally combine these two.
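A minimal sketch of that analogy (illustrative only; it assumes the Mul type and the materialize function that LazyArrays provided for lazy products around this time, and the API has since changed):

using LazyArrays, FillArrays

A = Ones(3, 3)        # lazy all-ones matrix from FillArrays, no dense storage
x = Fill(2.0, 3)      # lazy constant vector
m = Mul(A, x)         # unevaluated product, conceptually like Convex's MultiplyAtom
y = materialize(m)    # forces the computation, giving the same result as A * x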

ericphanson mentioned this pull request on May 8, 2019
@odow
Member

odow commented Feb 23, 2022

Closing as stale.

Moving more to MOI (MathOptInterface; e.g., #393) seems like a better way of dealing with the memory issues.

odow closed this on Feb 23, 2022
odow deleted the aa/lazyarrays branch on January 10, 2024