As far as I understand, Agda's (idealized) core type theory has only top-level definitions, rather than allowing an eliminator to be applied at any point in the program. This doesn't lead to a loss of expressivity, because you can elaborate local definitions and lambdas to top-level ones.
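For example, a sketch of the idea (the lifted name `f-h` is purely illustrative, Agda's actual generated names are module-qualified, and `bool` is the type defined below):

```agda
-- Source level: h is a local definition inside f.
f : bool -> bool
f b = h b
  where
  h : bool -> bool
  h tt = ff
  h ff = tt

-- After lifting, roughly what the core sees: h becomes a
-- top-level definition of its own.
f-h : bool -> bool
f-h tt = ff
f-h ff = tt

f' : bool -> bool
f' b = f-h b
```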
What is interesting about this approach is definitional equivalence. I have to check, but I think that two maps out of the booleans defined by structurally identical pattern matches are not definitionally equal. For instance:
```agda
open import Agda.Builtin.Equality

data bool : Set where
  tt ff : bool

test0 : bool -> bool
test0 tt = tt
test0 ff = ff

test1 : bool -> bool
test1 tt = tt
test1 ff = ff

-- the following doesn't work:
-- test3 : test0 ≡ test1
-- test3 = refl
```
To see that these are equal in some sense, you would use function extensionality and prove that they are equal pointwise by case analysis. This would work in Cubical Agda, for instance.
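Something like the following sketch should work in Cubical Agda (assuming the definitions above, with `funExt` from `Cubical.Foundations.Prelude` and the path type in place of the builtin equality):

```agda
{-# OPTIONS --cubical #-}
open import Cubical.Foundations.Prelude

-- Prove the equality pointwise by case analysis, then apply
-- function extensionality, which holds in Cubical Agda.
test0≡test1 : test0 ≡ test1
test0≡test1 = funExt λ { tt -> refl ; ff -> refl }
```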
At first glance, this seems pretty brutal, but in all my years of using Agda I've never actually found it to be a problem. The way one uses Agda in practice tends to avoid any trouble that would come from these functions not being definitionally equal.
Why am I saying this? Because there's another benefit: it seems you don't really need to normalize under case branches, or push definitional equivalence under case branches, because such a definition only ever becomes equal to anything but itself when one of its branches is selected and its closure is unleashed. (@mortberg does this sound right?)
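Concretely (a tiny sanity check, using the definitions above):

```agda
-- Applying test0 to a constructor selects a branch, so the
-- application computes and refl typechecks:
computes : test0 tt ≡ tt
computes = refl
```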
In cubical type theory, full normal forms become very, very large, and it can be hard to understand goals when everything has been reduced to a billion eliminators (or split expressions, as in cubicaltt). I still have to try this stuff out in Cubical Agda, but I'm guessing there is more of a chance for it to work nicely there, since user-defined top-level functions are rigid in the sense that their neutral applications don't unfold past a pattern match.
In order to make progress, you have to introduce a bit more pattern matching in your proofs, but this pattern matching is guided by the goals, rather than tried essentially at random.
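For example (again a sketch with the definitions above): the goal for a variable `x` is stuck, and the pattern matches below are exactly the ones the goal asks for:

```agda
-- test0 x and test1 x are both neutral for a variable x; matching
-- on x exposes the branches and lets both sides compute.
pointwise : (x : bool) -> test0 x ≡ test1 x
pointwise tt = refl
pointwise ff = refl
```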
Some kind of system in which we combine the benefits of this style of top-level definition with the benefits of a simple language based on eliminators would be really cool to try.
(One more remark: I don't think that careful controls on unfolding, opacity, etc. are the solution here. These would be useful regardless, but I don't think they resolve the issue in a systematic way.)
What do you think, @mortberg?