diff --git a/mt-kahypar/partition/refinement/gains/README.md b/mt-kahypar/partition/refinement/gains/README.md
index ed27ef475..856358ebc 100644
--- a/mt-kahypar/partition/refinement/gains/README.md
+++ b/mt-kahypar/partition/refinement/gains/README.md
@@ -123,7 +123,7 @@ The localized FM searches apply node moves to a thread-local partition which are
 
 ### Rollback
 
-After all localized FM searches terminate, we concatenate the move sequences of all searches to a global move sequence and recompute the gain values in parallel assuming that the moves are executed exactly in this order. After recomputing all gain values, the prefix with the highest accumulated gain is applied to the global partition. The gain recomputation algorithm iterates over all hyperedges in parallel. For each hyperedge, we iterate two times over all pins. The first loop precomputes some auxiliary data, which we then use in the second loop to decide which moved node contained in the hyperedge increases or decreases the objective function. The implementations for all functions required to implement the parallel gain recomputation algorithm are highly individual for each objective function. We recommend to read one of our papers for a detailed explanation of this technique. Furthermore, you can find the implementation of the gain recomputation algorithm in ```partition/refinement/fm/global_rollback.cpp```. If you do not want to use the parallel gain recalculation algorithm, you can disable the feature by setting ```static constexpr bool supports_parallel_rollback = false;``` in your rollback class. The global rollback algorithm then uses an alternative parallelization which is slightly slower.
+After all localized FM searches terminate, we concatenate the move sequences of all searches into a global move sequence and recompute the gain values in parallel, assuming that the moves are executed exactly in this order. After recomputing all gain values, the prefix with the highest accumulated gain is applied to the global partition. The gain recomputation algorithm iterates over all hyperedges in parallel. For each hyperedge, we iterate twice over all pins. The first loop precomputes some auxiliary data, which we then use in the second loop to decide whether a moved node contained in the hyperedge increases or decreases the objective function. The implementations of the functions required for the parallel gain recomputation algorithm are highly specific to each objective function. We recommend reading one of our papers for a detailed explanation of this technique. Furthermore, you can find the implementation of the gain recomputation algorithm in ```partition/refinement/fm/global_rollback.cpp```. If you do not want to use the parallel gain recomputation algorithm, you can disable the feature by setting ```static constexpr bool supports_parallel_rollback = false;``` in your rollback class. The global rollback algorithm then uses an alternative parallelization which is slightly slower (but still acceptable).
 
 ## Flow-Based Refinement
 