Implement quaternion arithmetic from numpy-quaternion #281
Conversation
This PR is great! Smart to write it alongside the current implementation for benchmarking. I'm already convinced we should replace our own quaternion multiplication and vector rotation with numpy-quaternion. What I'd like to see in this PR is that change, and that we support the missing shapes in the table. This is necessary for pole figure plotting.
Their dependencies are SciPy, NumPy and Numba, which we already depend on, so this isn't a downside in my opinion.
Do you mean in the testing shape results shown above, or something else?
I think the behaviour should be defined. With MTEX:

```matlab
>> qs = {quaternion.rand(4, 5), quaternion.rand(6), quaternion.rand(4, 2, 5)};
>> vs = {vector3d.rand(6), vector3d.rand(4, 5), vector3d.rand(3, 6)};
>> for i=1:length(qs)
..     qs{i} * vs{i}
.. end

ans = vector3d (show methods, plot)
  size: 20 x 6
ans = vector3d (show methods, plot)
  size: 6 x 20
ans = vector3d (show methods, plot)
  size: 40 x 18
```

So here they flatten all instances. Will that be surprising to a user? Which output shape did the user expect? I think it is OK to do so.
The default NumPy behaviour is to flatten both arrays also:

```python
>>> import numpy as np
>>> arr1 = np.random.randn(4, 5)
>>> arr2 = np.random.randn(6, 3)
>>> np.outer(arr1, arr2).shape
(20, 18)
```

However, this is currently not the default behaviour in orix:

```python
>>> q1 = Rotation.random((4, 5))
>>> q2 = Rotation.random((6, 3))
>>> q1.outer(q2).shape
(4, 5, 6, 3)
```

Personally I like the behaviour in orix, as for each element of `q1` the result retains the shape of `q2`:

```python
>>> q1 = Rotation.random((4, 5))
>>> q2 = Rotation.random((6, 3))
>>> q12 = q1.outer(q2)
>>> q1n = quaternion.from_float_array(q1.data)
>>> q2n = quaternion.from_float_array(q2.data)
>>> q12n = np.outer(q1n, q2n).reshape(q1.shape + q2.shape)
>>> np.allclose(quaternion.as_float_array(q12n), q12.data)
True
```

I would favour keeping the behaviour in orix, as it means not changing the current behaviour (and so potentially breaking others' code). If the consensus is that people prefer the NumPy behaviour, then we could adopt it and document how to reshape the output from the flattened array.
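For completeness, going between the two conventions is only a reshape of the underlying data. A small sketch using the objects above (the flattened target shape here is my own choice for illustration, not an orix convention):

```python
import numpy as np
from orix.quaternion import Rotation

q1 = Rotation.random((4, 5))
q2 = Rotation.random((6, 3))
q12 = q1.outer(q2)  # orix "block" shape: (4, 5, 6, 3)

# Flattened, np.outer-style view of the same data: (20, 18, 4),
# where the trailing 4 holds the quaternion components.
flat = q12.data.reshape(np.prod(q1.shape), np.prod(q2.shape), 4)
print(q12.shape, flat.shape)
```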
I see you've given this more thought than I have, as I've only encountered the undesirable behaviour.
I will leave this PR as is for a couple of days for discussion, in case anyone would be interested in pulling this PR and checking the functionality.
A note on memory usage between the two implementations. Here is the testing script I wrote; run it with `memory_profiler`:

```python
from memory_profiler import profile
import numpy as np

from orix.quaternion import Quaternion, QuaternionNumpy, Rotation
from orix.vector import Vector3d

S1 = (2200, 1001)
S2 = S1[-1]
S3 = (27,)

# conjugation
@profile
def test_q_conj_orix():
    q = Quaternion(Rotation.random(S1))
    qc = q.conj
    qinv = ~q

@profile
def test_q_conj_numpy():
    q = QuaternionNumpy(Rotation.random(S1))
    qc = q.conj
    qinv = ~q

# straight multiplication
@profile
def test_q_multiplication_orix():
    q1 = Quaternion(Rotation.random(S1))
    q2 = Quaternion(Rotation.random(S2))
    q = q1 * q2
    assert q.shape == q1.shape

@profile
def test_q_multiplication_numpy():
    q1 = QuaternionNumpy(Rotation.random(S1))
    q2 = QuaternionNumpy(Rotation.random(S2))
    q = q1 * q2
    assert q.shape == q1.shape

@profile
def test_v_multiplication_orix():
    q1 = Quaternion(Rotation.random(S1))
    vdata = np.random.randn(q1.shape[-1], 3)
    v1 = Vector3d(vdata)
    v = q1 * v1
    assert v.shape == q1.shape

@profile
def test_v_multiplication_numpy():
    q1 = QuaternionNumpy(Rotation.random(S1))
    vdata = np.random.randn(q1.shape[-1], 3)
    v1 = Vector3d(vdata)
    v = q1 * v1
    assert v.shape == q1.shape

# outer product
@profile
def test_q_outer_orix():
    q1 = Quaternion(Rotation.random(S1))
    q2 = Quaternion(Rotation.random(S3))
    q = q1.outer(q2)
    assert q.shape == q1.shape + q2.shape

@profile
def test_q_outer_numpy():
    q1 = QuaternionNumpy(Rotation.random(S1))
    q2 = QuaternionNumpy(Rotation.random(S3))
    q = q1.outer(q2)
    assert q.shape == q1.shape + q2.shape

@profile
def test_v_outer_orix():
    q1 = Quaternion(Rotation.random(S1))
    vdata = np.random.randn(*S3, 3)
    v1 = Vector3d(vdata)
    v = q1.outer(v1)
    assert v.shape == q1.shape + v1.shape

@profile
def test_v_outer_numpy():
    q1 = QuaternionNumpy(Rotation.random(S1))
    vdata = np.random.randn(*S3, 3)
    v1 = Vector3d(vdata)
    v = q1.outer(v1)
    assert v.shape == q1.shape + v1.shape

if __name__ == "__main__":
    # test_q_conj_orix()
    # test_q_conj_numpy()
    test_q_multiplication_orix()
    # test_q_multiplication_numpy()
    # test_v_multiplication_orix()
    # test_v_multiplication_numpy()
    # test_q_outer_orix()
    # test_q_outer_numpy()
    # test_v_outer_orix()
    # test_v_outer_numpy()
```

NB: run the tests above one by one to avoid intermediate garbage collection noise.

Results

NB: results are in the same order as the tests above.
It appears that there is a small memory penalty for using `numpy-quaternion`.

Further testing...

```python
import quaternion  # numpy-quaternion; builds on the profiling script above (S1, Rotation, etc.)

@profile
def test_v_multiplication_further():
    q1 = QuaternionNumpy(Rotation.random(S1))
    vdata = np.random.randn(q1.shape[-1], 3)
    v1 = Vector3d(vdata)
    # from source code
    q1 = quaternion.from_float_array(q1.data)
    v_intermediate = quaternion.from_vector_part(v1.data)
    mult1 = q1 * v_intermediate
    q1_inv = ~q1
    mult2 = mult1 * q1_inv
    v = quaternion.as_vector_part(mult2)
```
Overall I think the memory footprint is outweighed by the performance (CPU) gains, and it would be worth including `numpy-quaternion`.
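As an aside, peak memory for any of the profiled functions can also be measured programmatically with memory_profiler's `memory_usage` helper. This is a sketch of my own, not part of the script above:

```python
from memory_profiler import memory_usage

# Sample the process every 0.1 s while the callable runs; values are in MiB.
samples = memory_usage((test_q_multiplication_orix, (), {}), interval=0.1)
print(f"peak memory: {max(samples):.1f} MiB")
```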
Thanks for doing profiling as well! So quaternion/vector multiplication with numpy-quaternion is two times as fast, but uses 1.3 times the memory (as seen in the profiling above).
A note on the failing test (orix/orix/tests/test_miller.py, lines 210 to 220 in ff6e541):

Line 216 of orix/orix/tests/test_miller.py (in ff6e541) says the test should have 450 unique vectors; however, after using the numpy-quaternion implementation this is not the case and we get 285 vectors.

On further testing this appears to possibly be related to rounding errors in the function spanning lines 645 to 689 in ff6e541 (NB: I think line 673 should be removed from this function).
Using the original implementation without
14 decimals sounds more than enough. I'll have a look at the results with rounding to see if they are as expected.
Yes, that's what I thought too. So then the question is whether we actually expect all 450 unique vectors, or whether some are the same within this margin.
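To make the rounding point concrete, here is a small illustrative sketch (plain NumPy, not orix code): two vectors that differ only by floating-point noise count as distinct until they are rounded to 14 decimals.

```python
import numpy as np

# Two vectors that differ only by floating-point noise.
v = np.array([
    [1.0, 0.0, 0.0],
    [1.0 + 1e-15, 0.0, 0.0],
])

print(np.unique(v, axis=0).shape[0])                # 2: treated as distinct
print(np.unique(np.round(v, 14), axis=0).shape[0])  # 1: collapse after rounding
```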
Tested the returned vectors with the following script:

```python
import os

from orix.crystal_map import Phase
from orix.vector import Miller

# NB: dir_test and savefig_kwargs are defined elsewhere.

ver = "stable"  # 0.8.1
# ver = "paddy"  # harripj:numpy-quaternion

diamond = Phase(space_group=227)

# Figure 1 (both hemispheres in all cases)
h = Miller.from_highest_indices(phase=diamond, uvw=[10, 10, 10])  # stable: 9260; paddy: 9260
fig = h.scatter(hemisphere="both", return_figure=True, figure_kwargs=dict(figsize=(20, 10)))
fig.tight_layout()
fig.savefig(os.path.join(dir_test, f"diamond_pf_{ver}.png"), **savefig_kwargs)

# Figure 2
h2 = h.unique(use_symmetry=True)  # stable: 450; paddy: 285
fig2 = h2.scatter(hemisphere="both", return_figure=True, figure_kwargs=dict(figsize=(20, 10)))
fig2.tight_layout()
fig2.savefig(os.path.join(dir_test, f"diamond_unique_pf_{ver}.png"), **savefig_kwargs)

# Figure 3
h3 = h[h <= diamond.point_group.fundamental_sector]  # stable: 285; paddy: 285
fig3 = h3.scatter(hemisphere="both", return_figure=True, figure_kwargs=dict(figsize=(20, 10)))
fig3.tight_layout()
fig3.savefig(os.path.join(dir_test, f"diamond_sector_pf_{ver}.png"), **savefig_kwargs)
```

With this PR, the number of Miller indices obtained by considering only unique indices under symmetry, 285, is equal to the number of indices within the fundamental sector, 285 (comparing rows 2 and 3). This is not the case for the current release, which returns 450 indices in the first instance and 285 in the second. If the fundamental sector of m-3m is defined correctly (which I believe it is), the numbers should be the same. Thus, I conclude that there is an error in the current release which this PR solves. What do you think, @harripj?
By changing this line (line 675 in ff6e541) you've fixed a bug here, and I'm very grateful :)
Yes please!
This makes sense to me; in fact, it would be great if
This test is very informative! So it looks like the correct test value should in fact be 285. On how we got there though, I am not sure that simply implementing
We could do that within
I'm OK with just changing the test and adding a note to the changelog. orix supports all 32 crystallographic point groups in most cases, and only the 11 Laue classes in others. It's on me that the test was wrong in the first place, but there are only so many tests one can write. What's most important to me is that we fix bugs and release patches quickly as we spot them (hint hint #243). Since this PR changes nothing in the API, I suggest releasing a patch release 0.8.2 after this is merged.
I didn't mean to say that you shouldn't open the issue, btw. Sorry if that's what it read like.
Don't worry, I didn't take it that way at all; in fact, I was writing #285 as the notifications were coming in! Best to redirect this conversation there as these are now two different issues.
Actually, I find the assumption that the user wants the vectors within the fundamental sector not strong enough to build into that method. If you want the direct or reciprocal lattice vectors within the fundamental sector in one call, we should make a convenience method for it.
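A minimal sketch of what such a convenience method could look like (the name and placement are my assumptions, not orix API; it only reuses the indexing already shown in the script above):

```python
from orix.vector import Miller

def in_fundamental_sector(m: Miller) -> Miller:
    """Return only the vectors of `m` lying in the fundamental sector of
    the phase's point group (hypothetical helper, not part of orix)."""
    sector = m.phase.point_group.fundamental_sector
    return m[m <= sector]
```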
@hakonanes I have just added the equations for quaternion-quaternion and quaternion-vector multiplication.
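For reference, the standard textbook forms of these two products (not necessarily the exact notation used in the PR) are, for quaternions $q_i = (a_i, b_i, c_i, d_i)$ and a vector $\mathbf{v}$ embedded as the pure quaternion $(0, \mathbf{v})$:

$$
q_1 q_2 =
\begin{pmatrix}
a_1 a_2 - b_1 b_2 - c_1 c_2 - d_1 d_2 \\
a_1 b_2 + b_1 a_2 + c_1 d_2 - d_1 c_2 \\
a_1 c_2 - b_1 d_2 + c_1 a_2 + d_1 b_2 \\
a_1 d_2 + b_1 c_2 - c_1 b_2 + d_1 a_2
\end{pmatrix},
\qquad
\mathbf{v}' = q \, (0, \mathbf{v}) \, q^{-1}.
$$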
Update: this is waiting on #285; after that, the tests should pass.
I'm happy with this, will let @hakonanes merge.
Description of the change
As discussed in #267 and #264, I thought it best to start to implement quaternion arithmetic from `numpy-quaternion` into `orix.quaternion.Quaternion`, given the performance advantage it has shown, amongst other benefits. The purpose of this PR is to lay some groundwork and discuss before making any concrete changes. To this end I have created a `QuaternionNumpy` class which inherits from `Quaternion` to help with testing.

Currently quaternion multiplication (`Quaternion` and `Vector3d`), conjugation, and outer products have been overloaded to use `numpy-quaternion` and give the same results. A possible extension would be to subclass `orix.quaternion.Quaternion` from `numpy-quaternion`, but this would complicate things further and I don't think this is the right direction to go in. Just by using the `numpy-quaternion` arithmetic we get a computational speed advantage rather easily. Memory profiling to be done later.

One (potential) downside is that `numpy-quaternion` will be a further dependency.

@din14970 I achieved the current `Quaternion * Vector3d` functionality using your logic as written in moble/quaternion#191. orix currently doesn't perform the outer product when calculating `Quaternion * Vector3d`, so this works well, and `quaternion.rotate_vectors` is implemented only in `Quaternion.outer`.
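As a rough illustration of the approach (a hypothetical sketch, not the PR's actual implementation), the overloading amounts to converting the `data` arrays to `numpy-quaternion` arrays, multiplying there, and converting back:

```python
import quaternion  # numpy-quaternion
from orix.quaternion import Quaternion

class QuaternionNumpySketch(Quaternion):
    """Hypothetical subclass delegating multiplication to numpy-quaternion."""

    def __mul__(self, other):
        if isinstance(other, Quaternion):
            q1 = quaternion.from_float_array(self.data)
            q2 = quaternion.from_float_array(other.data)
            # Multiply as numpy-quaternion arrays, then wrap the result again.
            return self.__class__(quaternion.as_float_array(q1 * q2))
        return super().__mul__(other)
```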
Progress of the PR
Minimal example of the bug fix or new feature
Some timings...
Vector operations

`Quaternion.outer(Vector3d)` seems to be on par for small calculations or quicker for larger ones (using `quaternion.rotate_vectors`):

Quaternion operations
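A minimal sketch of how such timings can be reproduced (class names as used elsewhere in this PR; the benchmark sizes and script are my assumption, not the author's):

```python
import timeit

setup = """
from orix.quaternion import Quaternion, QuaternionNumpy, Rotation
r1, r2 = Rotation.random((1000,)), Rotation.random((1000,))
q1, q2 = Quaternion(r1), Quaternion(r2)
qn1, qn2 = QuaternionNumpy(r1), QuaternionNumpy(r2)
"""

# Elementwise quaternion-quaternion multiplication: current orix vs this PR.
print("orix:            ", timeit.timeit("q1 * q2", setup=setup, number=100))
print("numpy-quaternion:", timeit.timeit("qn1 * qn2", setup=setup, number=100))
```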
For reviewers
- … `__init__.py`.
- … unreleased section in `CHANGELOG.rst`.