compute maximum bisimulation using Paige-Tarjan's Algorithm #1089
Conversation
I had some trouble running
I am going to read more about the algorithm before doing the proper review.
I left some comments about Rust/Python, but overall it is in good shape! The tox -edocs failure is common: the indentation is probably mismatched, but it should be easy to fix.
src/iterators.rs
Outdated
```rust
impl PyHash for NodeIndices {
    fn hash<H: Hasher>(&self, py: Python, state: &mut H) -> PyResult<()> {
        PyHash::hash(&self.nodes, py, state)?;
        Ok(())
    }
}
```
If this is not required, I'd rather not make this type hashable. We reserve the right to make it mutable one day, which would make it not hashable.
I implemented this for the RelationalCoarsestPartition return type, since custom_vec_iter_impl! requires this trait. If you would prefer, I can make a new iterator that will act as a container. Since RelationalCoarsestPartition is also a pretty specific type name, I could change it to something like NodePartition containing something such as NodePartitionBlock, or similar.
In this case I would implement a new type and add a macro custom_vec_iter_impl! calling the old custom_vec_iter! and adding the PyHash. The catch is that sometimes these methods can have unintended side effects on performance (e.g. #1096), so it is better to be conservative.
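A minimal, self-contained sketch of that layering, purely for illustration: both macros below are stand-ins with an assumed (TypeName, ElementType) signature, not the crate's actual macros, and the PyHash trait is re-declared here so the snippet compiles on its own. Only the pattern of "call the base macro, then add the hash impl for types that opt in" is meant to carry over.

```rust
use pyo3::prelude::*;
use std::hash::{Hash, Hasher};

// Re-declared here only so the sketch is self-contained; in the crate this
// trait lives in src/iterators.rs with the signature quoted above.
trait PyHash {
    fn hash<H: Hasher>(&self, py: Python, state: &mut H) -> PyResult<()>;
}

// Shim so the delegation below type-checks in isolation.
impl<T: Hash> PyHash for Vec<T> {
    fn hash<H: Hasher>(&self, _py: Python, state: &mut H) -> PyResult<()> {
        Hash::hash(self, state);
        Ok(())
    }
}

// Stand-in for the base custom_vec_iter!; the argument list is an assumption
// made for this sketch.
macro_rules! custom_vec_iter {
    ($name:ident, $elem:ty) => {
        pub struct $name {
            pub nodes: Vec<$elem>,
        }
    };
}

// Sketch of the suggested custom_vec_iter_impl!: generate the type with the
// base macro, then layer the PyHash impl on top, so only return types that
// opt in become hashable.
macro_rules! custom_vec_iter_impl {
    ($name:ident, $elem:ty) => {
        custom_vec_iter!($name, $elem);

        impl PyHash for $name {
            fn hash<H: Hasher>(&self, py: Python, state: &mut H) -> PyResult<()> {
                // Delegate to the inner Vec, mirroring the snippet quoted above.
                PyHash::hash(&self.nodes, py, state)
            }
        }
    };
}

// Example expansion: one hashable container and one plain one.
custom_vec_iter_impl!(RelationalCoarsestPartition, usize);
custom_vec_iter!(NodeIndices, usize);
```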
I am not sure I understood this, but the hashing from NodeIndices was removed.
Pull Request Test Coverage Report for Build 9621638477 (Details)
💛 - Coveralls
As a heads up, when reading "Three Partition Refinement Algorithms", I renamed a lot of the variables.
Adding some Rust comments this time
I think the algorithm looks good after skimming the original paper from 1987 and the BisPy implementation. I will also trust the tests and the code coverage essentially touching all lines.
Given how much data is shared, I also don't mind all the Rc and RefCell. In fact, it is quite impressive that this is correct and Rust's borrow checker approves. Quite an achievement for a first-time contribution!
I think after we address the hashing it should be good to merge. Note that #1096 will introduce a compile error with your change because of the new optimization, but that should be easy to fix.
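A rough sketch, with hypothetical names (Block, BlockRef, split; none taken from the PR), of why shared ownership with interior mutability shows up naturally in a Paige-Tarjan style refinement: partition blocks are held by several partition structures at once and are split in place against a splitter set. The real algorithm splits on "has an edge into the splitter"; plain set membership is used here only to keep the snippet self-contained.

```rust
use std::cell::RefCell;
use std::collections::HashSet;
use std::rc::Rc;

// Hypothetical types, not the PR's: a partition block shared between the
// coarse and fine partitions via Rc, mutated in place through RefCell.
type BlockRef = Rc<RefCell<Block>>;

#[derive(Debug)]
struct Block {
    nodes: Vec<usize>,
}

// Split `block` into the nodes inside `splitter` and those outside it,
// producing two new shared blocks.
fn split(block: &BlockRef, splitter: &HashSet<usize>) -> (BlockRef, BlockRef) {
    let (inside, outside): (Vec<usize>, Vec<usize>) = block
        .borrow()
        .nodes
        .iter()
        .copied()
        .partition(|n| splitter.contains(n));
    (
        Rc::new(RefCell::new(Block { nodes: inside })),
        Rc::new(RefCell::new(Block { nodes: outside })),
    )
}

fn main() {
    let block: BlockRef = Rc::new(RefCell::new(Block {
        nodes: vec![0, 1, 2, 3],
    }));
    // Both partitions can hold handles to the same block without copying it.
    let coarse = vec![Rc::clone(&block)];
    let fine = vec![Rc::clone(&block)];
    let splitter: HashSet<usize> = [1, 3].into_iter().collect();
    let (inside, outside) = split(&fine[0], &splitter);
    println!("{:?} {:?} (coarse still holds {:?})", inside, outside, coarse[0]);
}
```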
regarding #1075