On the design of unsafe operations #1265
Replies: 1 comment 2 replies
---
Thanks for moving this discussion. Let me start by restoring a little more context:

> […]

And you just replied:

> The whole reason I started to ask about your definition is because, given that the compiler can't say, it doesn't make any sense to me as an answer about whether something is true in the type system, which was what I asked.
We do not.
Sure; I'm much more worried about long supposedly-safe functions that include a few unsafe operations. Those are not as likely to be carefully vetted by anyone doing API-first code review (the only right way to do it, of course 😉). And I'm not entirely attached to marking every unsafe statement, but I don't think it can be dismissed out of hand based on conclusions we've made about throwing.
I will assume that by “safe code” you mean “code that, regardless of its inputs (e.g. […])”. If this were a safe function, that would be a meaningful difference, but that doesn't fulfill a safety reviewer's obligation for this unsafe function. The reviewer must gather everything they know, including the stated preconditions of the function, and prove that the stated preconditions of all unsafe operations are satisfied, which means vetting both of these dereferences. The reason they have to do that, of course, is that code that uses […]
[…]? Neither one is enforceable by the language. If the first one were enforceable by the language, we'd have put it in the type system and uses of […]
[…]?? It is a […]
Sorry, but: ??? I don't know what “the marker of […]” means. Maybe you mean that you could have made this function safe if it weren't for the need to write […]
The original discussion I was trying to have was a different one; it was not as broad as “the design of unsafe operations,” and the obvious approach to bootstrapping you described (with an unused-in-the-type-system […]
---
This discussion is continuing from this answer and the comments on it. I'm opening a new thread to avoid "polluting" one with the other.
I used "handled" to relate `unsafe` to `throws`. You "handle" an unsafe region by vetting that it is indeed safe in places where the compiler can't say. Let's not get fixated on the definition of "handle", though; I only used it to establish the similarities I believe hold between the effects, and because it is a word typically associated with effect systems. What's important to my mental model is that `try` gives you license to use `throws(_:)` and `unsafe` gives you license to call an unsafe function. I don't think we disagree on that.

That depends on your definition of region. You enter a dynamic region when you start evaluating an expression, and that region typically covers more than the text of the expression. In particular, it can cover a call to an unsafe function. That is important because we must guarantee that no unsafe operation can run outside of a dynamic unsafe region.
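To illustrate the point about dynamic regions, here is a rough Hylo-like sketch (the function names and exact syntax are mine, not from the thread):

```
// Hypothetical helper, declared unsafe because it has a precondition
// the language cannot check.
fun read(_ p: Pointer<Int>) -> Int {
  // This dereference executes while the caller's `unsafe` expression is
  // still being evaluated, i.e. inside the caller's dynamic unsafe region.
  unsafe p[]
}

fun caller(p: Pointer<Int>) -> Int {
  // The static marker covers only the text of this expression, but the
  // dynamic region it opens covers the whole call to `read`, body included.
  unsafe read(p)
}
```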
I disagree, but I can understand the worry that people could write overly long unsafe functions. Then Hylo would devolve into C++ for the users who love clever pointer arithmetic. Writing long unsafe functions should be the smelliest of the smells, but some people have code anosmia. So for the sake of the argument, let's say we want a marker of some sort on unsafe statements, but let's not settle on `unsafe` just yet.

My problem is with functions like this:
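The code example that followed appears to have been dropped from this transcript. A rough Hylo-like reconstruction, consistent with the discussion of `x` and `y` below (names, initializer, and exact syntax are guesses), might be:

```
// Hypothetical reconstruction of the lost example.
fun f(x: Pointer<Int>) -> Int {
  // Documented precondition: `x` points to initialized memory.
  var local = 42
  let y = Pointer<Int>(to: local)  // hypothetical initializer
  // `unsafe y[]` can be vetted here: we know `local` is initialized.
  // `unsafe x[]` cannot: we must trust that the caller upheld the
  // precondition, so the function itself asks for an `unsafe` capability.
  return unsafe y[] + unsafe x[]
}
```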
There's an important difference between `unsafe y[]` and `unsafe x[]` in this function for the user who wrote it:

- `unsafe y[]` can be vetted. They can look at the pre/post-conditions of all the operations in scope and conclude that they wrote safe code.
- `unsafe x[]` can't be vetted. They don't know in what state the pointee of `x` is when we're called and can only trust that the caller satisfied their precondition. Because that isn't enforceable by the language, they asked for an unsafe capability to make sure the caller pays more attention.

The user wouldn't have had to ask for `unsafe` if `x` was a `let` parameter. So what I'm trying to illustrate here is that there are fundamentally two kinds of unsafe calls: the ones we can verify and the ones requiring our caller (on which we have no control in general) to have verified some precondition for us. From this observation, I claim that the marker of `unsafe x[]` is really the one we got as an argument, and that's why I don't want to write it.

Well, for the purpose of the discussion we do need to think about how we bootstrap the whole "requires `unsafe`" business. Most likely that will start in standard library functions wrapping some built-in operations. And then the point I discussed above applies. To run `base as* (remote sink Pointee)` I need to be sure that `base` points to initialized memory. As the author of `Pointer.pointee` I have no way to tell. So I documented my precondition and asked for an `unsafe` capability to make sure the caller pays extra attention.
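To make the bootstrapping story concrete, here is a rough Hylo-like sketch of how a standard library accessor could wrap the built-in cast. Everything except `base`, `pointee`, and the `as*` expression quoted above is my guess, not actual standard library code:

```
// Hypothetical sketch, not the real Pointer implementation.
type Pointer<Pointee> {
  var base: Builtin.ptr  // hypothetical built-in representation

  // Documented precondition: `base` points to initialized memory.
  // The author cannot verify this locally, so the accessor requires an
  // `unsafe` capability from its caller.
  fun pointee() -> Pointee {
    // Wraps the built-in reinterpretation discussed above.
    unsafe base as* (remote sink Pointee)
  }
}
```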