Add mpoly docs, functions for discriminant, resultant, term_content, and deflation, and run mod_mpoly doctests. #216
Conversation
We might want to have a test for this a bit like
I'm not quite sure I understand what you mean by this. Firstly, is there a simple way to adapt the current code so that, instead of using a hard-coded list of modules, it can walk the tree looking for doctests? Secondly, you seem to be saying that there is a plugin but it doesn't work. Or are you saying that we should make a new plugin or send a fix to the existing one?

I agree that it would be better if pytest could do this. Even if pytest is used, it would still be nice if there were a way to run all the doctests like:

pip install python-flint
python -m flint.test
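For what it's worth, here is a rough sketch of the kind of tree-walking runner being discussed. It uses only the standard library plus the __test__ dict that Cython puts in each compiled module (mentioned later in this thread); the function and its name are hypothetical and not part of flint.test:

import doctest
import importlib
import pkgutil

import flint

def run_all_doctests():
    # Walk the installed flint package instead of keeping a hard-coded module list.
    failed = attempted = 0
    finder = doctest.DocTestFinder()
    runner = doctest.DocTestRunner()
    for info in pkgutil.walk_packages(flint.__path__, prefix="flint."):
        if info.name.endswith("__main__"):
            continue  # don't import __main__ modules (that would run their scripts)
        module = importlib.import_module(info.name)
        # Cython only fills __test__ for modules whose docstrings contain >>> examples.
        if not getattr(module, "__test__", None):
            continue
        for test in finder.find(module):
            result = runner.run(test)
            failed += result.failed
            attempted += result.attempted
    return failed, attempted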
This is nice. I wonder about the comparison with fmpz_poly, where we have:

In [6]: from flint import *
In [7]: x = fmpz_poly([0, 1])
In [8]: p = x**4 + x**2 + 1
In [9]: p.deflation()
Out[9]: (x^2 + x + 1, 2)
In [10]: p2, n = p.deflation()
In [11]: p2(x**n)
Out[11]: x^4 + x^2 + 1

Using compose is more awkward for the mpoly case. Looking at this across other types it seems to be somewhat unfortunately designed in terms of which deflation-related methods each type has. So there is no method to get just the deflation exponent, for something like:

n1 = p1.get_deflation()
n2 = p2.get_deflation()
n = gcd(n1, n2)
p1p = p1.deflate(n)
p2p = p2.deflate(n)
I'm not sure I understand here. Note that there are many different functions in SymPy as well as two different implementations of multivariate polynomials, and the fact that any kind of polynomial can have multivariate polynomial coefficients... We have e.g.:

In [2]: Poly(2*x**2*y + 4*y).primitive()
Out[2]: (2, Poly(x**2*y + 2*y, x, y, domain='ZZ'))

In [3]: Poly(2*x**2*y + 4*y, x).primitive()
Out[3]: (2⋅y, Poly(x**2 + 2, x, domain='ZZ[y]'))

In the second case the coefficients would be python-flint types if PolyElement could wrap python-flint's mpoly types. In the first case the coefficients are just integers. Suppose that we had a way to get the coefficients of an fmpz_mpoly as an fmpz_vec; then we could do something like:

def content(p):
    return p.coeffs_vec().gcd()

def content_primitive(p):
    content = p.content()
    primitive_part = p / content  # scalar division by the content
    return content, primitive_part

Apart from making the leading coefficient positive that is exactly what is wanted here. With the primitive_part function from this PR it would currently need to be something like:

def content_primitive(p):
    primitive_part = p.primitive_part()
    content = p / primitive_part  # poly division
    content = content.leading_coefficient()
    if content < 0:
        content = -content
        primitive_part = -primitive_part
    return content, primitive_part

That's fine but needing a polynomial division is unfortunate. Note that SymPy would attach the sign to the polynomial rather than the content (not sure if that's actually a good idea but that is what it does):

In [5]: primitive(-x)
Out[5]: (1, -x)
For resultant and discriminant we have these in SymPy:

In [6]: Poly(x + y).resultant(Poly(x - y))
Out[6]: Poly(-2*y, y, domain='ZZ')

In this case we need to go from a ring with 2 generators to a ring with 1 generator. That is not what Flint's resultant etc. functions do, so we would need some way of converting the result to a context with fewer generators.
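Just to illustrate the kind of conversion that would be needed, here is a hedged sketch of dropping the eliminated generator after a resultant. It assumes the mpoly exposes its terms via monoms()/coeffs() and that the target context has a from_dict constructor; if those particular accessors don't exist, some equivalent would be needed:

from flint import fmpz_mpoly_ctx, Ordering

ctx_xy = fmpz_mpoly_ctx(2, Ordering.lex, ['x', 'y'])
ctx_y = fmpz_mpoly_ctx(1, Ordering.lex, ['y'])
x, y = ctx_xy.gens()

def drop_first_gen(p, new_ctx):
    # p is assumed not to involve the first generator, so its exponent is
    # always zero and can simply be stripped from each monomial.
    terms = {tuple(m)[1:]: c for m, c in zip(p.monoms(), p.coeffs())}
    return new_ctx.from_dict(terms)

r = (x + y).resultant(x - y, 'x')   # -2*y, but still in the (x, y) context
r_y = drop_first_gen(r, ctx_y)      # -2*y in a context with only y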
I think that the changes here look good and could be merged if you are done with this.
Actually I don't think that deflation is quite right here:

In [30]: f = x**3 * y + x * y**4 + x * y
In [31]: f
Out[31]: x^3*y + x*y^4 + x*y
In [32]: f_deflated, strides = f.deflation()
In [33]: f_deflated
Out[33]: x + y + 1
In [34]: strides
Out[34]: fmpz_vec(['2', '3'], 2)
In [36]: f_deflated.compose(x**2, y**3)
Out[36]: x^2 + y^3 + 1
In [37]: f_deflated.compose(x**2, y**3) * x*y
Out[37]: x^3*y + x*y^4 + x*y

The output is affected by the shifts but they aren't returned. Either this should not use the shifts or it needs to return them.
Sorry, I should have been clearer: there is a pytest plugin by the name of pytest-cython.

I've briefly looked over the code and it's a small single-file plugin (<200 LOC). From there I saw mention of the __test__ dictionary that Cython generates.

It seems like the plugin could pick up the "missing" doctests from the modules alone, but they are being skipped because the plugin expects an in-tree build.

To me this sounds like the existing plugin could be modified to support running doctests from the modules alone, like the existing flint.test runner does.

It certainly seems like it, thank you for that. I gave the pytest docs for it a skim but had to move on to other things. I might give patching pytest-cython a crack this weekend but it's low on my list of things to do.

I haven't experimented with this but I believe so.

That is nice just as a sanity check.
Yeah, I'm certainly realising the complexity of it. I'm first allowing a DMP_Flint to hold an fmpz_mpoly for the ZZ and QQ domains.
These are the two functions I was referring to, in SymPy
Is this not what the resultant method here does?

In [1]: from flint import *
In [5]: ctx = fmpz_mpoly_ctx.get_context(2, Ordering.lex, 'x')
In [6]: x, y = ctx.gens()
In [7]: (x + y).resultant(x - y, 'x0')
Out[7]: -2*x1
In [8]: (x + y).resultant(x - y, 'x1')
Out[8]: 2*x0

This seems to align with what flint computes with respect to the first generator. Not sure if that is a coincidence. I've been having a little difficulty deciphering the DMP docs and implementations.
I don't know if you've read this:

The DMP representation is a recursive representation of multivariate polynomials where a polynomial in, say, ZZ[x,y] is stored as a polynomial in x whose coefficients are polynomials in y (i.e. an element of ZZ[y][x]). What this means is that a DMP in n variables has coefficients that are themselves DMPs in n-1 variables, down to plain integer coefficients. The DMP recursive representation makes it natural that an operation like resultant with respect to the main variable returns a result with one fewer variable:

In [11]: p1 = Poly(x + y)
In [12]: p2 = Poly(x - y)
In [13]: p3 = p1.resultant(p2)
In [14]: p1
Out[14]: Poly(x + y, x, y, domain='ZZ')
In [15]: p2
Out[15]: Poly(x - y, x, y, domain='ZZ')
In [16]: p3
Out[16]: Poly(-2*y, y, domain='ZZ')

Notice that p3 only has y as a generator:

In [19]: p1.rep.to_list()
Out[19]: [[1], [1, 0]]
In [20]: p2.rep.to_list()
Out[20]: [[1], [-1, 0]]
In [21]: p3.rep.to_list()
Out[21]: [-2, 0]

This is different from what FLINT's resultant function does because while it returns a polynomial free of x, the result still lives in the context with both x and y. Also note that the DMP representation itself does not know any variable names:

In [22]: p1.rep
Out[22]: DMP_Python([[1], [1, 0]], ZZ)

The idea that the variables are "x" and "y" is maintained at a higher level in the Poly type. What this means is that we need a general way to compute the resultant of a polynomial wrt the main variable and project down to the representation with the main variable removed from the context. We don't care what the variable names are and we always know which one is the main variable.

Note that I am not saying that what you have as a resultant function here is wrong for python-flint but just that it needs to be adapted somehow in SymPy and possibly something extra needs to be provided in python-flint to make that possible.

What I'm not sure about is how to hold the contexts for this sort of thing in memory. You could imagine a stack of contexts where the context for 3 variables holds a reference to the context for 2 variables etc. (I mean that this stack would be in SymPy, not in python-flint, although it could potentially make sense in python-flint as well).
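To make the two representations concrete, here is a small pure-Python sketch (not SymPy or python-flint code) that converts the nested DMP lists printed above into the flat exponent-tuple dict that an mpoly context works with; the helper name is made up for illustration:

def dmp_to_dict(rep, nvars):
    """[[1], [1, 0]] with nvars=2  ->  {(1, 0): 1, (0, 1): 1}  (i.e. x + y)."""
    if nvars == 0:
        # at the bottom the "polynomial" is just a ground coefficient
        return {(): rep} if rep else {}
    out = {}
    deg = len(rep) - 1  # DMP lists are dense, highest degree first
    for i, coeff in enumerate(rep):
        for mono, c in dmp_to_dict(coeff, nvars - 1).items():
            if c:
                out[(deg - i,) + mono] = c
    return out

assert dmp_to_dict([[1], [1, 0]], 2) == {(1, 0): 1, (0, 1): 1}   # x + y
assert dmp_to_dict([-2, 0], 1) == {(1,): -2}                     # -2*y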
Almost but not quite:

In [26]: (4*x + 2*x*y).term_content()
Out[26]: 2*x

It is the gcd of the terms rather than of the coefficients and so it also includes the gcd of the monomials. SymPy would give:

In [31]: primitive(4*x + 2*x*y)
Out[31]: (2, x*y + 2*x)

In [32]: Poly(4*x + 2*x*y).primitive()
Out[32]: (2, Poly(x*y + 2*x, x, y, domain='ZZ'))

I suppose in python-flint it can be:

In [34]: (4*x + 2*x*y).term_content().leading_coefficient()
Out[34]: 2

It is not ideal to compute the gcd of the monomials when it is not needed though. Note that computing the gcd of many small integers is cheap, and internally FLINT computes the equivalent of the coefficient gcd anyway in order to produce term_content.

There are a few different natural notions of what "content" means when we are working with polynomials. For univariate polynomials there are a couple of natural choices, but usually when someone says "content of a polynomial" they mean the gcd of the coefficients. In a multivariate context we also need to define our interpretation of what is a coefficient, i.e. whether some generators are viewed as being part of the coefficients or part of the monomials for the purpose of defining the content.

There are several ways to adapt the related Flint functions so that they do what SymPy's equivalent functions would do, but none of them is quite optimal for just getting the normal idea of the "content", which is just the gcd of the integer coefficients.

What I'm not sure about is what names to give to methods that do these things. Probably there should be a method that is just called content. It makes sense to have both functions because it isn't possible to compute the primitive part without first computing the content, so you might as well have a function that returns both.

Note also that we should not necessarily always follow exactly Flint's conventions for names of functions and methods etc. Flint itself is not even fully consistent across all types, as in they may or may not have exactly the same methods which may or may not be defined to mean exactly the same things. Rather we should define methods that make sense consistently within python-flint from the perspective of someone using python-flint from Python. In some cases that means we need to piece together a few Flint functions to do something useful rather than just exposing precisely Flint's documented functions for each type. We should also consider that in general it is useful if all types have the same methods, e.g. if all polynomial types have primitive and content methods, or even if those can be consistent across both univariate and multivariate polynomials.

Note that in SymPy content and primitive are defined not just for polynomials with integer coefficients:

In [5]: primitive(2*x/3 + 2)
Out[5]: (2/3, x + 3)

In [6]: primitive(2*x + 3, modulus=5)
Out[6]: (1, 2⋅x - 2)

The definition is potentially ambiguous over a field but it can still be useful to define some notion of "factor out a scalar".
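Putting those pieces together, a hedged sketch of what content/primitive built on the functions in this PR might look like; the sign handling here is just one possible convention, and exact scalar division via / is assumed to work:

def content(p):
    # gcd of the integer coefficients, read off from the term content
    c = p.term_content().leading_coefficient()
    return -c if c < 0 else c

def primitive(p):
    c = content(p)
    return c, p / c   # exact scalar division by the content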
To summarise:
Note that this is also sort of unnecessary because you can use term_content to pull out the monomial factor first:

In [1]: from flint import *
In [2]: ctx = fmpz_mpoly_ctx(2, Ordering.lex, ['x', 'y'])
In [3]: x, y = ctx.gens()
In [4]: f = x**3 * y + x * y**4 + x * y
In [5]: f
Out[5]: x^3*y + x*y^4 + x*y
In [6]: tc = f.term_content()
In [7]: tc
Out[7]: x*y
In [8]: fd, s = (f / tc).deflation()
In [9]: fd
Out[9]: x + y + 1
In [10]: s
Out[10]: fmpz_vec(['2', '3'], 2)
In [11]: tc * fd.compose(x**2, y**3)
Out[11]: x^3*y + x*y^4 + x*y
In [12]: tc * fd.compose(x**2, y**3) == f
Out[12]: True

Using something like a shift_deflation method this could just be:

m, fd, s = f.shift_deflation()
assert m*fd.inflate(s) == f

Another point I realise now is that it would be fine for calling an mpoly to work like compose:

In [13]: fd.compose(x**2, y**3)
Out[13]: x^2 + y^3 + 1
In [14]: fd(x**2, y**3) # maybe this could work
---------------------------------------------------------------------------
TypeError

That does work for univariate polynomials already:

In [15]: x = fmpz_poly([0, 1])
In [16]: p = x**2 + 1
In [17]: p(p)
Out[17]: x^4 + 2*x^2 + 2
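For illustration only, a tiny sketch of that suggestion, reusing fd, x and y from the session above; call_mpoly is a hypothetical helper, not part of python-flint:

def call_mpoly(p, *args):
    # what p(*args) could do for mpolys: just forward to compose
    return p.compose(*args)

assert call_mpoly(fd, x**2, y**3) == fd.compose(x**2, y**3)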
Regarding this, what should these functions be? If I'm understanding correctly, to make the interface consistent with the univariate polynomials the deflation-related methods here would need to change?
That makes sense, thank you.
There should still be a context coercion function lying around somewhere. Here it is:

# TODO: Rethink context conversions, particularly the proposed methods in #132
def coerce_to_context(self, ctx):
    cdef:
        fmpz_mod_mpoly res
        slong *C
        slong i

    if not typecheck(ctx, fmpz_mod_mpoly_ctx):
        raise ValueError("provided context is not a fmpz_mod_mpoly_ctx")

    if self.ctx is ctx:
        return self

    # map each of self's generators to its index in the target context
    C = <slong *> libc.stdlib.malloc(self.ctx.val.minfo.nvars * sizeof(slong))
    if C is NULL:
        raise MemoryError("malloc returned a null pointer")

    res = create_fmpz_mod_mpoly(ctx)  # the result lives in the target context

    vars = {x: i for i, x in enumerate(ctx.py_names)}
    for i, var in enumerate(self.ctx.py_names):
        C[i] = <slong>vars[var]

    fmpz_mod_mpoly_compose_fmpz_mod_mpoly_gen(res.val, self.val, C, self.ctx.val, (<fmpz_mod_mpoly_ctx>ctx).val)

    libc.stdlib.free(C)
    return res

It works by composing the mpolys with the generators as the functions to convert between contexts. It's rather rough in this form but could be made to work. I'm not sure if Flint provides a better method than this.
I think context interop and coercion is something that is missing from python-flint (and Flint). A tree of contexts and their variables could work but it would have to be lazy or risk blowing up for lots of variables. Either way, to change domain we need a new context; I think creating one on the fly and coercing within SymPy is reasonable for now, just storing them in python-flint's context cache. It would have to be an explicit operation that's only used either when required or user-side to prevent slowdowns.
Thank you for all your comments here, I'll try to get to the rest tomorrow evening.
At least for DMP we don't need a tree. Remember that DMP does not care what the names of the variables are and only ever wants to compute the resultant wrt the main variable. So it would be a singly linked list of contexts for different numbers of variables, like the 100-variable context holds a reference to the 99-variable context and so on. I suppose that a wrapper class could hold this lazily, something like:

class Flint_mpoly_ctx:

    def __init__(self, ...):
        self.fctx = fmpz_mpoly_ctx(...)
        self.fctx_drop = None

    @property
    def drop_ctx(self):
        if self.fctx_drop is None:
            ...
        return self.fctx_drop

Alternatively we could also just …
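A filled-in version of that sketch, purely illustrative: the class name, the stored fields, and the choice to drop the first (main) variable are all assumptions rather than anything decided here:

from flint import fmpz_mpoly_ctx, Ordering

class FlintMPolyCtxChain:
    """Lazily linked contexts: each level knows the context obtained by
    dropping the main (first) variable, which is all DMP needs."""

    def __init__(self, names):
        self.names = list(names)
        self.fctx = fmpz_mpoly_ctx(len(self.names), Ordering.lex, self.names)
        self._drop = None

    @property
    def drop_ctx(self):
        # created on demand so a 100-variable chain is not built up front
        if self._drop is None and len(self.names) > 1:
            self._drop = FlintMPolyCtxChain(self.names[1:])
        return self._drop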
It does look like it. I think this is possibly something that the …
We already have some of these methods across the different types. I suggest the following:

class foo_poly:

    def inflate(self, n: int) -> foo_poly:
        """q s.t. q(x) = p(x^n)"""

    def deflate(self, n: int) -> foo_poly:
        """q s.t. p(x) = q(x^n)"""

    def deflation(self) -> (foo_poly, int):
        """(q, n) s.t. p(x) = q(x^n) for maximal n"""

    def deflation_monom(self) -> (foo_poly, int, foo_poly):
        """(q, n, m) s.t. p(x) = m * q(x^n) for maximal n and monomial m"""

    def deflation_index(self) -> (int, int):
        """(n, i) s.t. p(x) = x^i * q(x^n) for maximal n and i"""


class foo_mpoly:

    def inflate(self, N: list[int]) -> foo_mpoly:
        """q s.t. q(X) = p(X^N)"""

    def deflate(self, N: list[int]) -> foo_mpoly:
        """q s.t. p(X) = q(X^N)"""

    def deflation(self) -> (foo_mpoly, list[int]):
        """(q, N) s.t. p(X) = q(X^N) for maximal N"""

    def deflation_monom(self) -> (foo_mpoly, list[int], foo_mpoly):
        """(q, N, m) s.t. p(X) = m * q(X^N) for maximal N and monomial m"""

    def deflation_index(self) -> (list[int], list[int]):
        """(N, I) s.t. p(X) = X^I * q(X^N) for maximal N and I"""

Then the invariants for both cases are like:

assert p.inflate(n).deflate(n) == p

q, n = p.deflation()
assert q.inflate(n) == p

q, n, m = p.deflation_monom()
assert m * q.inflate(n) == p

n, i = p.deflation_index()
q = (p/x**i).deflate(n)
assert q.inflate(n) * x**i == p

The last case for mpolys uses a list of indices I rather than a single index i. This keeps consistency with the different types in python-flint. The equivalent methods in SymPy correspond to …
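As a concrete (hypothetical) example, with the proposed deflation_monom the earlier f = x^3*y + x*y^4 + x*y session would reduce to something like the following; the expected values are the ones shown in that session:

q, N, m = f.deflation_monom()   # proposed method, does not exist yet
assert q == x + y + 1
assert list(N) == [2, 3]
assert m == x*y
assert m * q.inflate(N) == f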
It wouldn't be too bad to require pytest unconditionally. We could also add it to the requirements as an extra. One thing though is that when I am debugging a crash (core dump etc.) that happens during the test suite, the output from pytest is particularly unhelpful. Typically I have to add …
Although we need to ensure that it works after install with:

pytest --pyargs flint

and also we should ensure that the currently documented approach still works:

python -m flint.test
I think that the dots are a good thing. You can also pass …
Previously they were identifiable in pytest's verbose mode, I've just fixed that.
These both work for me (when run under spin).
I left a few comments. The only critical ones are about adding hand-written definitions in … Otherwise this looks good. I think that the inflate etc. functions are doing the right thing now and it is good that they are consistent across types.
I believe all those should be resolved now. I've also added a skip directive to the …
def content(self):
    """
    Return the GCD of the coefficients of ``self``.

        >>> fmpz_poly([3, 6, 0]).content()
        3

    """
    cdef fmpz res = fmpz()
    _fmpz_vec_content(res.val, self.val.coeffs, self.val.length)
    return res
We may need to think a bit about the sign convention for primitive and content. Typically the primitive part is defined to be a canonical representative in some sense, which may mean that the content should absorb the sign, although there are different ways of defining this.
Okay, this looks good. Also the test runner changes are nice. Let's get this in. Thanks!
A small collection of changes I've just pushed. This PR:

- Adds mpoly docs; addresses "Interface for making an mpoly context" #195 (comment).
- Adds the mod_mpolys to the doctest run list. Previously they were not being run; I've fixed that up and fixed the doctests themselves. There doesn't appear to be any other module missing from the list. Ideally these would be discovered and run as part of pytest. There's a pytest plugin for Cython modules, however it relies on in-tree builds. Cython conveniently provides the __test__ dictionary in modules, mapping function names + line numbers to doc strings for doc strings that contain >>>. I think this is all that is required, so a pytest plugin could work with meson-python's out-of-tree builds.
- Adds discriminant, resultant, term_content, and deflation functions for the mpoly types.
- Adds a primitive_part function for fmpz_mpoly.

I've added those functions as I intended to use them with SymPy's DMP_Flint, however these are the univariate functions adapted for multivariate polynomial types, not the multivariate variants that SymPy seems to expect. Regardless, they're here now.