
Add semantic loop unrolling analysis #1370

Closed · wants to merge 4 commits
Conversation

sim642 (Member) commented on Feb 22, 2024

Frustrated with the syntactic loop unrolling (#563), this is a quick shot at a semantic one that I've wanted for a long time: just use path sensitivity to count and keep loop iterations separate.

The benefits are obvious:

  1. This is a lot simpler than "Loop Unrolling for the first exp.unrolling-factor Iterations" (#563).
  2. It automatically provides a clean fix to "Witness invariants for unrolled loops are incorrect" (#1225), etc.
  3. We avoid creating large ASTs, CFGs and constraint systems, especially if the loop has a finite bound and all subsequent iterations are dead.
  4. It allows the unrolling factor to depend on any semantic information that we want because it can be determined during the analysis.
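As an illustration of the intended precision gain (my own made-up C example, not from this PR): with a small, statically finite loop bound, keeping iterations on separate paths lets the analysis track an exact value through the loop instead of joining all iterations into one abstract state.

```c
/* Hypothetical example program: a loop with a finite bound.  A
 * path-sensitive analysis that counts iterations sees x = 2, 4, 6, 8 on
 * distinct paths, so the post-loop value is exact; joining every
 * iteration into a single abstract state would typically lose it to
 * widening. */
int sum_of_pairs(int n) {
    int x = 0;
    for (int i = 0; i < n; i++) {
        x += 2;
    }
    return x;
}
```

With such unrolling enabled, an analyzer could then prove a post-condition like `sum_of_pairs(4) == 8`; the function name and bound here are invented for illustration.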

TODO

  • Figure out interprocedural behavior (contexts, enter, combine).
  • Integrate with autotuning again.
  • Check sv-benchmarks results.

@sim642 sim642 added cleanup Refactoring, clean-up feature precision labels Feb 22, 2024
@sim642 sim642 added this to the SV-COMP 2025 milestone Feb 27, 2024
@sim642 sim642 added the sv-comp SV-COMP (analyses, results), witnesses label Feb 27, 2024
jerhard (Member) commented on Jul 30, 2024

@michael-schwarz brought up at today's Gobcon that, since this uses the path-sensitivity functor, it may lead to quadratic computational cost, as all iterations considered so far have to be re-considered in each iteration.

sim642 (Member, Author) commented on Jul 30, 2024

Syntactic unrolling avoids this by introducing separate nodes, and thus separate constraint unknowns, for each iteration. That lets the solver establish a sensible fine-grained dependency structure and avoid recomputing all iterations over and over.
Conceptually, it should be possible to create these unknowns for paths semantically during constraint system construction as well, which is a more general optimization we should perhaps investigate. A similar explosion in paths can happen with must-locksets (as @karoliineh observed in some benchmark).
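A back-of-the-envelope sketch of the cost concern (my own illustration; the function names and the cost model are assumptions, not from Goblint): if iteration k re-examines all k paths kept so far, the total work is quadratic, whereas one solver unknown per iteration keeps it linear.

```c
/* Hypothetical cost model.  With the path-sensitivity functor, the
 * abstract state at iteration k contains k separated paths, and all of
 * them are re-examined, so the total work is 1 + 2 + ... + n. */
long path_sensitive_cost(long n) {
    long total = 0;
    for (long k = 1; k <= n; k++)
        total += k;   /* iteration k touches all k paths kept so far */
    return total;     /* equals n * (n + 1) / 2, i.e. quadratic in n */
}

/* With a separate constraint unknown per iteration (as syntactic
 * unrolling gets for free from its distinct CFG nodes), the solver only
 * revisits the unknown that changed, so the work stays linear. */
long separate_unknowns_cost(long n) {
    return n;
}
```

For n = 100 iterations this is 5050 path re-examinations versus 100 unknown updates, which is the gap the alternative approach would aim to close.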

@sim642 sim642 removed this from the SV-COMP 2025 milestone Oct 22, 2024
@sim642
Copy link
Member Author

sim642 commented Nov 12, 2024

I'm closing this because we now have ideas for an alternative approach that should avoid the inefficiencies.

@sim642 sim642 closed this Nov 12, 2024