"Regression" in typeinf when concatenate two tuple types #10880
Comments
Actually, I guess I can use a staged function:

```julia
stagedfunction cat_tt(a, b)
    Tuple{a.parameters[1].parameters..., b.parameters[1].parameters...}
end
```

Is there a better way to do this?
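(For reference: calling this with two Tuple types, e.g. `cat_tt(Tuple{Int}, Tuple{Float64})`, would be expected to return `Tuple{Int,Float64}`, since inside a staged function `a` and `b` are the argument *types*, i.e. `Type{Tuple{Int}}` and `Type{Tuple{Float64}}`.)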
Isn't the problem just that …? If `isa(Tuple{Int}, Type)`, wouldn't it work?
@blakejohnson No. My point was that typeinf cannot infer the result of concatenating tuple types anymore. (But since I noticed that …)
Yes, this was maybe the biggest advantage to using tuples of types as types: you got lots of operations on them for free. While working on #10380 I found I needed the following tuple-type operations: `tuple_type_head`, … I used staged functions, but it feels like kind of a hack. I think I can improve type inference of splatting …
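For concreteness, a minimal sketch of what two such helpers could look like, written as generated functions in the same spirit as the staged-function hack above (the names come from the list; the bodies are illustrative, not Base's actual definitions):

```julia
# Illustrative only; Base's internal tuple_type_head / tuple_type_tail may differ.
@generated function tuple_type_head(::Type{T}) where {T<:Tuple}
    return T.parameters[1]                  # first element type, baked in as a constant
end

@generated function tuple_type_tail(::Type{T}) where {T<:Tuple}
    rest = collect(T.parameters)[2:end]     # remaining element types
    return Tuple{rest...}
end

# tuple_type_head(Tuple{Int,Char})  # Int
# tuple_type_tail(Tuple{Int,Char})  # Tuple{Char}
```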
And since it is still possible to do what I want, I guess it's not a "regression" anymore, although it might still be helpful to teach typeinf about …
Also, how bad/useful would it be to make the …
I think I've seen this before on the issues list; the primary source of the issue is that type inference doesn't know that the …
There used to be a hack in inference that handled this, but now …
Would it be useful if a certain field of a type could be declared as constant (immutable) without making the whole type immutable? I found this old issue, but it seems to have been merged with the implementation of …
@yuyichao, we decided against that.
It wouldn't help here, since …
Hi, sorry to bump, but I'd really like to see some of the functionality talked about here and in many related threads. In particular, I think making the splatting of tuple-types into tuple-types type-stable would mean all the other problems can be solved by clever Julia programming (without generated functions). Is there any progress on this issue? My interest was sparked by trying to remove generated functions from this attempt at strongly-typed dataframes/tables using metaprogramming, along the lines of this attempt at a metaprogramming package. The goal of the latter is to manipulate meta-collections in a type-safe way where every function is a no-op. I'm surprised how far you can get with no generated code, but the big hang-up is in creating (and inspecting) Tuple types. Type-stable concatenation and splatting would be sufficient to implement the rest of the features (and to use Tuple as a meta-storage container, rather than hacks like I came up with), but @JeffBezanson's list above is even better. @mbauman has some code for concatenation, but it's only defined up to a certain fixed integer length, and I would really appreciate some general built-in functionality.
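To make the requested operation concrete, here is a minimal sketch (the helper name is hypothetical, not from this thread) of concatenating two Tuple types by splatting their parameters, which is exactly the splat that inference historically could not see through:

```julia
# Hypothetical helper: concatenate two Tuple *types* (not tuple values).
# The splat of the .parameters SimpleVectors is the part inference struggled with.
concat_tupletypes(A::Type{<:Tuple}, B::Type{<:Tuple}) =
    Tuple{A.parameters..., B.parameters...}

concat_tupletypes(Tuple{Int,Char}, Tuple{Float64})   # Tuple{Int,Char,Float64}
```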
This has been largely "solved" by allowing type functions (like …) to be marked as `:pure`.
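As a hedged illustration of that approach (the helper name is hypothetical, and `Base.@pure` is an internal annotation that must only be applied to genuinely pure functions):

```julia
# Marking the type computation as pure lets inference evaluate the call at
# compile time when the argument types are known constants in the caller.
Base.@pure function pure_concat(A::Type{<:Tuple}, B::Type{<:Tuple})
    Tuple{A.parameters..., B.parameters...}
end
```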
Hmm... the `:pure` tag didn't seem to work for me. E.g.: …
I have tried a couple of other functions, but the macro seems to have no effect. Perhaps this will resolve itself when more features and abilities are added to …
You're looking at the code for the unspecialized function. Unlike generated functions, it doesn't have to regenerate and compile the function for every combination of arguments in order to constant-fold.
OK great - thanks! I know this is a question for the forums, but how does this work exactly? More precisely, when will it always work? For instance, I see that the wrapping function can have non-trivial inputs: …
I also checked … Has it got something to do with the REPL living in …?
There are no special constraints; you just need to be careful that your function really is pure …
OK... sorry for being completely stupid, and it's likely the answer can be found elsewhere, but why is …? I'm trying to get to the heart of the problem of when exactly a function like … And, what exactly do you mean by …
Does this mean, by analogy, that a …?
Ah, this is interesting. I, too, had been thrown off by the function not getting specialized at the REPL - SubArray's trait computation should be pure. As I understand it, the pure optimization is only used while running inference on another function, because there the result can be precomputed and inlined. It just isn't specialized at global scope or with type-unstable inputs.
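A small illustration of that distinction, reusing the hypothetical `pure_concat` sketch from above:

```julia
# At global scope (e.g. the REPL) the call runs the generic method; the result
# is correct, but nothing is specialized or folded for you to inspect:
#   pure_concat(Tuple{Int,Char}, Tuple{Float64})

# Called from inside another function, the argument types are compile-time
# constants, so inference can fold the @pure call to a constant:
g() = pure_concat(Tuple{Int,Char}, Tuple{Float64})
# @code_warntype g()   # expected to report Type{Tuple{Int,Char,Float64}}
```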
In general, can we please all be careful before putting `@pure` on things. In any case, I agree it's a little better than a staged function, since at least we can fall back on not doing anything special and everything should still work.
Agreed; I'm only proposing it as an alternative to generated functions, which have those same issues plus more. Usually dispatch can achieve the same result (with better correctness with respect to #265). Explicitly memoized runtime checks are usually better, imo.
Was this fixed?
The list of functions above can all be implemented with …
Yes, I think it should be simple to handle this with the constant prop we have now.
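With the constant propagation being referred to, even the plain un-annotated sketch from earlier would be expected to infer when called with constant Tuple-type arguments (a hedged expectation, reusing the hypothetical `concat_tupletypes` from above):

```julia
# No @generated and no @pure: rely on constant propagation and the improved
# inference of splatting constant containers in recent Julia versions.
h() = concat_tupletypes(Tuple{Int,Char}, Tuple{Float64})
# @code_warntype h()   # expected to show the concrete result Tuple{Int,Char,Float64}
```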
OK, reopen then ...
fix #10880, better inference of splatting constant containers
I was under the impression that #10380 would make type inference easier for functions that use tuple types. However, when I wanted to get the type of the concatenation of two tuples, I couldn't get good type inference anymore. (It is also very likely that I'm just being stupid and don't know the correct way to do it in the new version.)
The code I am using can be simplified to
Before the tuple-type change, the output is
After the change
Maybe there are better ways to do this.