proposal: spec: add a mechanism to allow generic de- and re-structuring of complex values #45049

Open
kortschak opened this issue Mar 16, 2021 · 24 comments
Labels
generics (Issue is related to generics), LanguageChange (Suggested changes to the Go language), LanguageChangeReview (Discussed by language change review committee), Proposal
Milestone

Comments

@kortschak
Contributor

kortschak commented Mar 16, 2021

Background

The Type Parameters generics proposal has been accepted, so type-parametric generic functions will be part of the language in the not wildly distant future. Currently there is a gap in the proposal that has been raised on golang-nuts, here, here and here.

The issue in those discussions is that the type parameterisation approach does not currently allow any kind of correlated structured types unless the structured type is a composite type (structs, arrays, slices, and maps); the current approach covers the majority of structured types but misses types that have complex64 and complex128 as their underlying type. Under the current proposal it is not possible to write something like this:

type Complex interface {
	type complex64, complex128
}

func Abs[T Complex](v T) ? {
	return ?(math.Hypot(real(v), imag(v))) 
}

Another example, in the opposite direction, showing a use that would be useful in generic DSP code, is allowing the conversion of a real value to the same value represented as a complex number.

type Real interface {
	type float32, float64
}

func AsComplex[T Real](v T) ? {
	return ?(complex(v, 0))
}

where ? indicates some type specification for the correlated type: the float type corresponding to the complex T in the first example, and the complex type corresponding to the float T in the second.

The examples shown here are trivial (though relevant to a generic "math/cmplx" package), but more significant cases exist, for example the one given in the first golang-nuts link above: a generic eigendecomposition function which takes a real m×n matrix and returns a pair of complex matrices and a complex vector.

type Real interface {
	type float32, float64
}

func Eigen[T Real](m, n int, data []T) (left, values, right []?, ok bool) { ...

If complex values are represented as structs, these issues do not exist, for example, here. However, this shuts out any use of the language's complex types, and it requires that complex types be referred to by their real coefficients' types, rather than by their actual complex type.

Alternatively, the function could be implemented as follows

type Real interface {
	type float32, float64
}

func Eigen[T Real](m, n int, data []T) (leftRI, valuesRI, rightRI []T, ok bool) { ...

where the returned slices hold alternating real and imaginary parts, but the 70s called and they want their language back, and this does not address the issue with non-slice types.

Proposal

I propose that the complex and real keywords be adopted to allow obtaining the correlated type for real and complex values respectively. This would allow the examples above to be written as follows:

type Real interface {
	type float32, float64
}

type Complex interface {
	type complex64, complex128
}

func AsComplex[T Real](v T) complex(T) {
	return complex(T)(complex(v, 0))
}

func Abs[T Complex](v T) real(T) {
	return real(T)(math.Hypot(real(v), imag(v)))
}

func Eigen[T Real](m, n int, data []T) (left, values, right []complex(T), ok bool) { ...

The choice of complex is obvious, though real is perhaps less so; it can be read as the type of the real coefficients of the complex type, rather than as the real part. For this reason, imag(T) is not appropriate for obtaining the float correspondent of T. If float were still part of the language it could have been chosen.

Since the two parts of a complex value must have the same type, the complex type corresponding to a float T is specified as complex(T) rather than complex(T, T).

A potential alternative syntax instead of complex(T) and real(T) is complex[T] and real[T], though this would be ambiguous with indexing in the code body.
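
To make the intended use concrete, hypothetical call sites under the proposed syntax (not valid Go today; the complex(T) and real(T) type forms are exactly what is being proposed) would look like this:

// Hypothetical call sites, assuming the complex(T)/real(T) forms proposed above.
var z complex128 = 3 + 4i
m := Abs(z)         // m would have type real(complex128), i.e. float64
c := AsComplex(1.5) // c would have type complex(float64), i.e. complex128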

@gopherbot gopherbot added this to the Proposal milestone Mar 16, 2021
@ianlancetaylor ianlancetaylor added the LanguageChange (Suggested changes to the Go language) label Mar 16, 2021
@ianlancetaylor
Member

Note that complex and real are not keywords; they are predeclared identifiers. This proposal effectively extends complex and real to be ordinary (generic) functions when invoked on a value, but to be meta-functions from types to types when invoked on a type. We have many operators that can be used to convert one type to another type (e.g., the * operator converts T to pointer-to-T), but these would be the first functions to do so. I don't think that is a blocker for this proposal, I'm just pointing it out.
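
To illustrate that distinction (a sketch, not part of the comment above): in type context the existing operators already map types to types, and under this proposal complex and real would be the first functions to do so.

// Existing operators already map types to types in type context, e.g.:
type pf = *float64 // the * operator maps float64 to *float64

// Under the proposal (hypothetical syntax), complex and real applied to a
// type would do the same: real(complex128) would denote float64, and
// complex(float64) would denote complex128.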

CC @griesemer

@kortschak
Contributor Author

Thanks, @ianlancetaylor. Yes, that was something I was considering in terms of backwards compatibility. I was trying to see if there was anything that the syntax choice here would cause problems with, but was unable to find any (of course they may exist). Other non-function based syntax is obviously also possible.

@rogpeppe
Contributor

I quite like the view of complex types as generic types.

Suppose that we'd already had generics in the language and wanted to add support for complex values.
We could implement much of the current spec with something like this (defined in global scope):

// not currently in global scope but could be.
type realType interface {
	type float32, float64
}

// not currently in global scope but could be.
type complexType[T realType] struct {
	real, imag T
}

type complex64 = complexType[float32]
type complex128 = complexType[float64]

func imag[T realType](x complexType[T]) T {
	return x.imag
}

func real[T realType](x complexType[T]) T {
	return x.real
}

func complex[T realType](r, i T) complexType[T] {
	return complexType[T]{r, i}
}

Perhaps if we think of the existing complex types as a kind of sugar over the above, then it might make sense to allow complex[float32] and complex[float64] as synonyms for complex64 and complex128 respectively.

One issue with that is that this would make the type complex synonymous with the constructor function complex, which is awkward because then there would be an ambiguity between a type conversion complex(x) and a constructor expression complex(r, i). We could disambiguate by treating it as a type conversion when there's one argument and a constructor when there are two, but that feels a little bit dubious.

It also opens up a question: given that type lists allow any type whose underlying type is mentioned, this would also admit complex[floatType] where floatType is a defined type with underlying type float32 or float64.
Would that be OK?
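
For concreteness, a minimal sketch of that case, reusing the complexType and realType definitions above together with a made-up defined type:

// Hypothetical: Celsius has underlying type float64, so it satisfies realType
// and the type list would also admit this instantiation.
type Celsius float64
type complexCelsius = complexType[Celsius] // would this be OK?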

@Merovius
Contributor

If complex values are represented as structs, these issues do not exist, for example, here. However, this shuts out any use of the language's complex types

FWIW I always felt that the inclusion of complex numbers into the language was a bit strange. IMO, adding more strangeness on top to make them work with generics brings the sunk cost fallacy to mind. I think it might well be worthy of consideration to deprecate the builtin complex types in favor of a library implementation of them - which can actually be built with generics. The generic type could live in math, for example.

@atdiar

atdiar commented Mar 17, 2021

In my opinion, this example might show one of the limits of the current form of the proposal where constraints are derived from types.

The opposite should happen, i.e. types should be derived from constraints (with a way to extract the original constraints of a type so that the general user can constrain type parameters).

Then we could have functions that take constraint objects as arguments and return constraints (propositional functions). This is different from using types as constraints because not all constraints are necessarily types.
They could be propositions (as in "convertible to X"), propositions and not predicates. I think this ought to be restricted to zeroth-order logic (an application of propositional calculus).

I will reference here again the little warning from "A gentle introduction to semantic subtyping" on page 200, as this seems to apply here, if only remotely.
https://www.irif.fr/~gc/papers/icalp-ppdp05.pdf

Their issue is about subtyping relations, but I think the more general issue is about type constraint propagation, i.e. it arises whenever there is a relation between types: in the present case for instance, or notably when we deal with interfaces and composite types such as structs.

The constraint package would eventually just be the partial constraint store of the default constraint system induced into the type system.
Then we would have complex type constraints resulting from a propositional function applied to float type constraints, to be applied to a type parameter if need be.

Just a note.

@atdiar

atdiar commented Mar 17, 2021

I don't know what the downvote is for since I see no explanation, but said in simpler terms: types should be built from constraint sets.

An interface would be like a regular type with some of the structural constraints and the nominal constraint lifted, effectively allowing subtyping (but not parametricity here, because interfaces are themselves types).

This is going to be important for any other composite types, and especially if we want type lists to be regular interfaces.

Propositional functions would allow us to define precisely the constraints that form a complex type, depending on the constraints of the argument float type.

Funnily enough, this is the goal of semantic subtyping: allowing parametricity in the presence of structural subtyping.
If we define subtyping as partial constraint lifting, it's perhaps easier to understand.

@Merovius
Contributor

@atdiar I gave a thumbs-down because it sounds to me like you are suggesting a relatively fundamental change to the generics design - not just an amendment to address the OP's concerns. I felt that this is neither the time nor the place for that discussion. I also didn't want to drag the issue further off-topic, so I tried leaving it with an emoji reaction.

@atdiar

atdiar commented Mar 17, 2021

Yes, but the OP's concerns are to me part of a more general problem with the design. I think it fits here.

It explains, for instance, the idea of a complex higher-order function that takes a float type as input.

Thank you for the clarification.

@kortschak
Contributor Author

kortschak commented Mar 17, 2021

@Merovius Regarding the removal of built-in complex types, there are two parts that need to be addressed. The first is likely trivial: it would need a rewrite tool to convert the existing infix notation to function calls (we have an extensive collection of functions that work on complex values and a manual rewrite is a non-starter). The second is a judgement call; reading conventional infix notation is much easier for most people. We have other rings (quaternion, dual, hyperdual, dual quaternion and noncom dual complex) implemented, and using those systems is harder than using the complex types. If a rewrite tool for the first existed, I guess it could be co-opted to make a code generation tool to convert from infix to function-call forms though.

@Merovius
Contributor

Yeah, my comment was originally prompted by this sentence from your mail to golang-nuts:

Indeed, in the current generics proposal, the Gonum quaternion, dual, hyperdual and dual quaternion number types are easier to integrate into a generics system than the built in complex number types because they are based on structs.

I think the same readability concern also transfers to other types, like big.{Int,Float} or GL(n, R). Singling out ℂ seems based on the prior that they are already natively represented in the language - that's where the sunk cost fallacy comes in :)

Personally, it seems more elegant to me to address the readability issue separately, if we are worried about it, with a solution that can be applied to other library types as well. If Go had operator overloading, that would solve the readability issue of representing ℂ as structs while also helping quaternion, *big.Int and GL(n,R) code, and would not require us to bolt further exceptions for complex onto the generics design. If, on the other hand, we think the costs of operator overloading outweigh their benefits, the same calculus that leads us to that conclusion for *big.Int should also be applied to ℂ.

In any case, I just wanted to make the option explicit :) I'm content to just wait and see where the rest of the discussion is going :)

@rsc
Contributor

rsc commented Apr 7, 2021

I'll just comment that instead of

type complex64 = complexType[float32]

you could also use

type complex64 = typeof(complex(float32(0), float32(0)))

if we had typeof (and that could apply to much more than complex and float problems).
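
As a sketch of that parenthetical, such a hypothetical typeof would name the type of any expression, not just the complex/float pairs discussed in this issue:

// Hypothetical, assuming a typeof builtin as suggested above.
type kv = typeof(map[string]int{})             // map[string]int
type elapsed = typeof(time.Since(time.Time{})) // time.Duration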

@rsc rsc added the v2 (An incompatible library change) label Apr 7, 2021
@kortschak
Contributor Author

@rsc can I clarify the proposal status of this issue? It has been removed from Incoming, but not placed in Active. Was this intentional?

@griesemer
Contributor

griesemer commented Apr 7, 2021 via email

@kortschak
Contributor Author

Maybe how that fits in with https://github.com/golang/proposal#readme could be clarified? This otherwise feels kind of arbitrary.

@griesemer
Contributor

We have in the past liberally moved proposals between "regular" and "Go2" proposals, so I don't think this is arbitrary.

In the Go2 proposal reviews we tend to spend more time on language design issues, sometimes discussing just one or two proposals at length. It seems that this proposal fits better in that process for now. It might move back once we have decided on the best way forward.

So if anything, this relabelling should free up space for a more in-depth discussion.

@kortschak
Contributor Author

OK. Thanks for clarifying that. Perhaps some text in the proposal#readme could be added to note this progress path?

@ianlancetaylor ianlancetaylor added the generics (Issue is related to generics) label May 4, 2021
@sammy-hughes

sammy-hughes commented Sep 10, 2021

I think the same readability concern also transfers to other types, like big.{Int,Float} or GL(n, R). Singling out ℂ seems based on the prior that they are already natively represented in the language - that's where the sunk cost fallacy comes in :)

Is there interest in extending such a proposal to a broader syntax for destructuring uncommon types in general? Something like the following example:

type Person struct {
    Firstname, Lastname string
    NumberOfCats, NumberOfDogs int
}
type Name struct {
    FName, LName string
}
type Pets struct {
    Cats, Dogs int
}

func DestructuringExample(person Person) bool {
    name, pets :=  person.{Name, Pets}
    name.FName = "Ted"
    return person.Firstname == name.FName
}

func DestructuringExample2(person Person) bool {
    // Today's unsafe equivalent: reinterpret person's leading fields as a Name
    // and the following fields as a Pets; brittle and not statically checked.
    name := (*Name)(unsafe.Pointer(&person))
    pets := (*Pets)(unsafe.Add(unsafe.Pointer(&person), unsafe.Sizeof(Name{})))
    _ = pets
    name.FName = "Ted"
    return person.Firstname == name.FName
}

Specific concerns would be whether it would be an issue that this exposes otherwise unexported types, and that this now creates a strong coupling between the involved named types. Nonetheless, this type of destructuring can currently be done with raw pointers, though it is considerably more brittle and is not statically checkable.
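
For comparison, a layout-independent destructuring is already expressible today by copying fields; a minimal sketch using the same hypothetical types as above:

// Destructure by copying: more verbose than the proposed syntax, but it is
// statically checked and does not depend on field layout.
func DestructureByCopy(person Person) (Name, Pets) {
    return Name{FName: person.Firstname, LName: person.Lastname},
        Pets{Cats: person.NumberOfCats, Dogs: person.NumberOfDogs}
}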

P.S. I want this. Omigosh, I want this. If it's tangential to the topic of complex number expansion, regardless of whether this proposal is acceptable, I will absolutely write up a proposal.

@ianlancetaylor
Member

ianlancetaylor commented Apr 12, 2023

Rather than using complex as a general type function, we can restrict it to be only used as a constraint. The expression complex(T), where T is a type that must be a floating-point type, is the complex form of that floating-point type, either complex64 or complex128.

We don't need anything else, because we can rely on type inference.

type Float interface { ~float32 | ~float64 }

func ToComplex[F Float, C ~complex(F)](f F) C { return complex(f, 0) }

func RealPart[C ~complex(F), F Float](c C) F { return real(c) }

In these examples, the function's ordinary argument determines the first type argument, and the second type argument is inferred from the first.
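
For illustration, hypothetical call sites under this scheme (assuming the constraint-only complex(F) form above, which does not exist today):

// Hypothetical call sites relying on the inference described above.
c := ToComplex(float32(1.5))     // F = float32 from the argument; C inferred as complex64 from ~complex(F)
r := RealPart(complex64(2 + 3i)) // C = complex64 from the argument; F inferred as float32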

@ianlancetaylor
Member

A slightly different syntax would be to use complex[R] as the constraint. That would make it clearer that we are describing (instantiating) a type rather than calling a function. This also does not introduce new syntax for constraints. I guess this has some similarities to what @rogpeppe suggested above, although it's not clear that we would want to permit this syntax to be used for anything other than a constraint.

@griesemer
Contributor

griesemer commented Apr 12, 2023

To follow up on @ianlancetaylor's comment, another way of stating this is to say that complex types are (special, built-in) composite types of which there exist exactly two: complex[float32] and complex[float64], very much like the (special, built-in) composite map and channel types (of which there happen to be infinitely many).

And for historic reasons we also have the aliases

type complex64 = complex[float32]
type complex128 = complex[float64]

The reason these complex[T] types are not defined as structs is that they allow operations (such as the usual arithmetic operations) which are not available on structs.

The one anomaly we'd have is the fact that complex[T] is both a built-in type and a built-in function, but that may be the price to pay if we want to keep the name complex. (I suppose one could use something like cmplx[float32] and avoid this issue as well, which is perhaps ok since it would only appear in constraints, given that we also have the aliases.)

@kortschak
Contributor Author

@ianlancetaylor @griesemer variously s/float128/float32/g and s/float128/float64/g?

@ianlancetaylor I think probably this

func ToComplex[F Float, C ~complex(F)](f F) C { return complex(f) }

is

func ToComplex[F Float, C ~complex(F)](f F) C { return complex(f, 0) }

Is this correct? If so, I think we could live with this approach.

@ianlancetaylor
Member

Yes, your understanding is correct. Sorry for the typos. I think I've fixed them now.

@zephyrtronium
Contributor

If we were to have:

type complex64 = complex[float32]
type complex128 = complex[float64]

Then could we write these?

type MyFloat float64
type MyComplex complex[MyFloat]

type ConstraintComplex[T float32 | float64] complex[T]

type AnyComplex128[T ~float64] complex[T]

@griesemer
Contributor

@zephyrtronium Yes, I believe all these should be valid but I may be missing something.

@ianlancetaylor ianlancetaylor changed the title from "proposal: add a mechanism to allow generic de- and re-structuring of complex values" to "proposal: spec: add a mechanism to allow generic de- and re-structuring of complex values" Aug 6, 2024
@ianlancetaylor ianlancetaylor added the LanguageChangeReview (Discussed by language change review committee) label and removed the v2 (An incompatible library change) label Aug 6, 2024