Commit fb345cf by kittobi1992, Jul 24, 2023 (parent 2686d47)

Changed file: mt-kahypar/partition/refinement/gains/README.md (1 addition, 1 deletion)

### Rollback

After all localized FM searches terminate, we concatenate the move sequences of all searches into one global move sequence and recompute the gain values in parallel, assuming that the moves are executed in exactly this order. After recomputing all gain values, the prefix with the highest accumulated gain is applied to the global partition. The gain recomputation algorithm iterates over all hyperedges in parallel. For each hyperedge, we iterate twice over its pins: the first loop precomputes auxiliary data, which we then use in the second loop to decide which moved nodes contained in the hyperedge increase or decrease the objective function. The implementations of the functions required for the parallel gain recomputation algorithm are highly specific to each objective function. We recommend reading one of our papers for a detailed explanation of this technique. Furthermore, you can find the implementation of the gain recomputation algorithm in ```partition/refinement/fm/global_rollback.cpp```. If you do not want to use the parallel gain recalculation algorithm, you can disable the feature by setting ```static constexpr bool supports_parallel_rollback = false;``` in your rollback class. The global rollback algorithm then uses an alternative parallelization that is slightly slower (but still acceptable).
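To make the prefix-selection step concrete, here is a minimal sequential sketch of how the best prefix could be chosen once all gains have been recomputed. The function name and the use of ```std::vector<int64_t>``` are illustrative assumptions, not the actual Mt-KaHyPar interface; the real implementation in ```partition/refinement/fm/global_rollback.cpp``` performs both the gain recomputation and the prefix search in parallel.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Sequential sketch (illustrative, not the Mt-KaHyPar API): given the
// recomputed gain of each move in the concatenated global move sequence,
// return the length of the prefix with the highest accumulated gain.
// Here a positive gain denotes an improvement of the objective function.
// All moves beyond the returned prefix are rolled back.
std::size_t bestPrefixLength(const std::vector<int64_t>& recomputed_gains) {
  int64_t accumulated_gain = 0;
  int64_t best_gain = 0;        // the empty prefix has gain zero
  std::size_t best_prefix = 0;
  for (std::size_t i = 0; i < recomputed_gains.size(); ++i) {
    accumulated_gain += recomputed_gains[i];
    if (accumulated_gain > best_gain) {
      best_gain = accumulated_gain;
      best_prefix = i + 1;      // prefix includes move i
    }
  }
  return best_prefix;
}
```

Keeping only the best prefix guarantees that the applied move sequence never worsens the objective function, even if individual moves within that prefix have negative gain.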

## Flow-Based Refinement
