The current version of ParallelIDAStar attempts to mimic the work done by IDA* while getting the speedup of parallelization. That is evident in the termination condition, which was explored a bit in PR #73.
For all layers except the last, the generated and expanded node counts will be the same, since both algorithms expand all of the nodes within the cost bound. In the last layer, however, they will likely expand fewer than all of them, because a solution is found before the layer is exhausted.
Looking at the way the number of nodes expanded/generated is tallied (link), all the nodes expanded in the last layer are added, even those expanded by threads working on subtrees that IDA* would never have reached.
Now, these are indeed generated and expanded nodes that should be accounted for, but it can create a difference in results when the goal is only to get a speedup without changing the behavior of IDA*. A sketch of the counting pattern follows.
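As a concrete illustration, here is a minimal C++ sketch of that counting pattern. It is not the actual ParallelIDAStar code; names such as `WorkUnit`, `nodesExpanded`, and `foundSolution` are assumptions made for illustration. The point is that summing per-thread counters after the final layer also includes expansions performed by threads that kept working after another thread had already found the solution, i.e. nodes serial IDA* would never have touched.

```cpp
// Minimal sketch (not the actual ParallelIDAStar source) of how summing
// per-thread counters after the final layer can over-count relative to
// serial IDA*. WorkUnit, nodesExpanded, and foundSolution are illustrative
// names, not the library's identifiers.
#include <atomic>
#include <cstdint>
#include <vector>

struct WorkUnit {
    uint64_t nodesExpanded = 0;   // nodes this thread expanded in the current layer
    uint64_t nodesGenerated = 0;  // successors this thread generated
};

std::atomic<bool> foundSolution{false};

// Each thread runs a cost-bounded DFS on its assigned subtrees. Once any
// thread sets foundSolution, the others stop at their next check, but the
// expansions they performed in the meantime remain in their counters.
void ThreadSearch(WorkUnit &w /*, subtree root, cost bound, ... */)
{
    while (!foundSolution.load())
    {
        // ... take next subtree, expand nodes, update w.nodesExpanded and
        // w.nodesGenerated, set foundSolution on success ...
        break; // placeholder so the sketch is self-contained
    }
}

// After the layer finishes, all per-thread counters are summed. In the final
// layer this total includes nodes that serial IDA*, which stops at the first
// solution, would never have expanded.
uint64_t TotalExpanded(const std::vector<WorkUnit> &work)
{
    uint64_t total = 0;
    for (const auto &w : work)
        total += w.nodesExpanded;
    return total;
}
```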