- Problem
- Background
- Proposal
- Rationale based on Carbon's goals
- Alternatives considered
- Require compile-time-proven initialization
- C and C++ uninitialized
- Allow passing unformed objects to parameters or returning them?
- Allow assigning an unformed object to another unformed object?
- Fully destructive move (Rust)
- Completely non-destructive move (C++)
- Named return variable in place of a return type
- Allow unformed members
How should Carbon handle uninitialized memory and variables? There are a number of goals we would like to satisfy in this space:
- Minimize the overhead (in optimized builds) of unnecessary initialization.
- Detect and diagnose bugs in code where variables are used without being initialized first at lower cost than a separate and unscalable build mode (MSan for C++).
- Avoid the ergonomic, readability, and at times correctness problems that arise when all variables must be initialized at the point of declaration, even when there is no meaningful value to use yet.
- Minimize the security risk posed by accidental access of uninitialized memory.
- Avoid unnecessary overhead for dynamic tracking of whether a variable is initialized.
- Avoid unnecessarily running destructors for variables when they haven't yet been initialized or used.
- Allow most types ergonomics and behaviors similar to those of a simple machine integer type.
- Offer similar behavior regardless of storage location (stack, heap, thread-local-storage, static / global storage).
- Leave a path to add more strict safety rules in the future, such as requiring explicit marking when the address of an uninitialized variable escapes.
Part of our goal is also to support code patterns such as:
```
// At block scope of some function...
var temporary: Int;
while (...) {
  // ... code not using `temporary` ...
  temporary = ...;
  // ... code using `temporary` ...
}
```
That is, we also don't want to force a syntactic distinction when assigning to an already initialized variable. That would impose fundamentally the same ergonomic (and potentially performance) overhead as forcing initialization as part of declaration. Throughout this document, the term "assignment" should be understood as the operation used in this construct.
These problems are also tightly related to the problems of moved-from semantics. These semantics might range from completely non-destructive (as in C++) to completely destructive (as in Rust). A compelling solution for the variable initialization challenges can also provide a compelling approach for moved-from semantics that represents a middle ground between the two extremes seen in C++ and Rust. For example, we propose that the following should hold equally for moved-from objects and objects without an explicit initializer:
- The object must not be used in any way other than assignment or destruction.
- Note that this refers to the object. Taking the address is fine, but the restrictions here on how the object is used apply equally to any uses through a pointer to the object.
- It must be possible to transition it to a valid, normal object through assignment.
- It should be allowed to destroy it.
- It should not be necessary to destroy it.
- Definitive assignment or definitive initialization (used by Swift, Java, and so on)
- Rust checked uninitialized data
- Elements of Programming
- P2025 - Guaranteed copy elision for named return objects
We propose two fundamental concepts:
- An unformed state for objects.
- Raw, uninitialized storage.
The first of these is a new concept and is discussed in detail below. However, uninitialized storage in Carbon should work in the same way as an uninitialized array of `unsigned char` or `std::byte` in C++. This is intended primarily to serve the needs of custom data structures such as C++'s `std::vector` or whatever Carbon ends up with as an `Optional(T)` type. With such raw storage, initialization and destruction of any objects within the storage must be managed manually and externally; the language won't provide any automatic mechanisms. The detailed syntax for working with raw storage in this manner is left to a subsequent proposal.
An unformed state is a state of an object. We classify both objects, based on whether they are in an unformed or a fully formed state, and types, based on whether they support unformed states at all or require objects to always be fully formed.
An unformed state for an object is one that satisfies the following properties:
- Assignment from a fully formed value must be correct using the type's normal assignment implementation.
- Destruction must be correct using the type's normal destruction implementation.
- Destruction must be optional. The behavior of the program must be equivalent whether or not the destructor is run for an unformed object, including not leaking resources.
Any operation on an unformed object other than destruction or assignment from a fully formed value is an error, including:

- Comparisons with other objects.
- Passing an unformed object to a function or returning it.
- Initializing or assigning with an unformed object, including all forms of self assignment or self swap:

  ```
  var unformed_x: Int;
  // Each line mentioning `unformed_x` below is an error:
  var a: Int = unformed_x;
  var b: Int = 42;
  b = unformed_x;
  var unformed_c: Int;
  unformed_c = unformed_x;
  unformed_x = unformed_x;
  // If we imagine a move operation similar to C++'s `std::move` spelled `~`
  // and a function `Swap` similar to C++'s `std::swap` but using pointers,
  // these too are errors:
  unformed_x = ~unformed_x;
  Swap(&unformed_x, &unformed_x);
  ```
These errors should be diagnosed as early as possible in a debug build and at least at runtime. In a performance build, it may lead to undefined behavior, but in a hardened build we have a strong mitigation that restores safety by defining the behavior using a maximally safe representation.
When a variable is declared without initialization, such as:

```
var x: Int;
```

the variable is implicitly initialized to its type's unformed state and represents an unformed object.
It is invalid to declare a variable without explicit initialization unless its type implements the interface specifying how to do that initialization. For types that do not support initializing the variable to an unformed state, the programmer must either explicitly initialize it to a valid and fully formed state for the type, or must change the type to something like an `Optional(T)` that introduces an unformed state (and the dynamic tracking necessary to implement it) alongside raw storage for the underlying object.
The unformed state can have the same representation as valid and fully formed states for the object (for example, an empty `Optional(T)`). While there is a semantic difference (any operation other than assignment and destruction is an error for an unformed object), there is no problem re-using a representation which is also used for fully formed and valid objects, provided it satisfies all three of the above constraints. The semantic restrictions remain even in this case. Using the hypothetical `Optional(T)` API, we could imagine:

```
var formed: Optional(T) = .Empty;
// This is of course fine.
if (formed.empty()) { ... }

var unformed: Optional(T);
// This would be an error even though `unformed` might have identical
// in-memory representation or bit-pattern to `formed`:
if (unformed.empty()) { ... }
```
It is allowed to take the address of an unformed object and escape it to another function. This function may even be responsible for assigning a fully formed object to that location.

```
fn AssignIntTo(x: Int, destination: Int*) {
  *destination = x;
}

fn Example() {
  var y: Int;
  // `y` is unformed here.
  AssignIntTo(42, &y);
  // `y` is fully formed and usable.
}
```
This in turn is part of why it is important that the destructor is allowed to be run on an unformed object. This gives us a conservatively correct way to implement code where it may be impossible to know whether the object has been initialized to a fully formed object, and similarly code where it is impossible to know whether the object has been moved from. Consider some `HashMap` container with allocated memory where the destructor must be run to release the memory:

```
fn MaybeInitializes(map: HashMap*);
fn MaybeMovesFrom(map: HashMap*);

fn Example() {
  var map: HashMap;
  MaybeInitializes(&map);
  if (...) {
    // Don't know if we need to destroy `map`. But conservatively we can run
    // the destructor unconditionally.
    return;
  }
  // Unconditionally assign a particular value to our map.
  map = ...;
  MaybeMovesFrom(&map);
  // `map` could be unformed (due to being moved from) or still fully formed.
  // Run the destructor unconditionally to be safe.
  return;
}
```
The intent is that most types have an unformed state, as it will let types "be like the `int`s" -- we design it to closely match how machine integers actually behave in modern optimizing compilers: when left "uninitialized" they can be safely assigned to or destroyed, and running the destructor is optional.
Types can opt into this semantic model and define an explicit operation to create an object in the unformed state by implementing an interface. Because this is defined semantically by the existence of an implementation of the relevant interface, and operationally by providing code to produce an unformed object, there is no specific bit pattern or representation mandated for the unformed state. The interface should provide a default implementation, recursing member-wise, for types where all members have an implementation.
It is reasonable to wonder whether types for which the default implementation is valid should implicitly implement the interface, or whether an explicit opt-in should be required. Initially, we suggest this should be explicit. A set of interfaces may grow to be implicitly implemented due to their prevalence, but it seems better to be conservative at first and approach that systematically.
Note that this state is similar to, but more restrictive than, the partially formed state from Elements of Programming (EoP), section 1.5.
Raw storage and padding bytes are treated like they always consist of unformed bytes. We propose not allowing sub-byte raw storage or padding.
Padding bytes have specific practical considerations. It is fine for operations that work across all bytes of raw storage, such as C-style routines like `memset` and `memcpy`, to write to padding bytes. But doing so has no observable effect, and even if they notionally set padding bytes to a known value, the padding bytes continue to behave as if they are unformed bytes.
Copying from padding bytes is significantly more difficult because it risks opening up the possibility of copying unformed objects more generally. That has significant downsides and we are proposing it is not allowed. Instead, there will need to be a low-level language facility that specifically handles copying storage with padding bytes. Such a facility would have to either exclude the padding bytes from the copy, or provide a strong guarantee that they are only ever directly copied into other padding bytes without any intermediate steps.
Some types will not provide an unformed state. Variables of such types must be initialized when declared:

```
struct NecessaryInit {
  var m: Mutex;
  // Potentially other members without a meaningful unformed state.
}

// This line would be an error:
var error_no_init: NecessaryInit;
```
For these types, the programmer must either explicitly initialize the variable to a valid and fully formed state for the type, or must change the type to something like an `Optional(T)` that introduces an unformed state (and the dynamic tracking necessary to implement it) alongside raw storage for the underlying object:

```
// This is fine, it contains the boolean necessary to track initialization.
var ok: Optional(NecessaryInit);
```
There are two optional aspects of unformed states that types may elect to explicitly specify (again by implementing interfaces, here by specializing the implementation rather than relying on the default fallback).
First, types may customize how an object in the unformed state is further initialized to most effectively harden it against erroneous usage. For example, the unformed state of an `Int` is trivial -- it can simply be left uninitialized. However, it can be hardened against bugs by setting it to zero. User-defined types have the option of specifying a specific routine to do similar hardening initialization. For all types, the entire storage, including any padding bytes, will at least be zeroed as part of hardening initialization. This routine simply allows a user-defined type to customize the hardening initialization of some parts or members of the object to values other than zero.
The second optional aspect is implementing a test for whether an object is unformed. This test cannot be run by user code -- doing so would be a bug due to referencing an unformed object. However, for types where the unformed state is intrinsically distinct from any other state, they can expose the ability to query this. That allows debug builds of Carbon to verify objects are fully formed prior to use without external tracking state. Types which do not implement this functionality will still get partial checking in debug builds through external state, but may lose dynamic enforcement across ABI boundaries where this extra state cannot be propagated. These are also the boundaries across which the hardening will be done to ensure bugs across those boundaries cannot become security vulnerabilities in the hardened build configuration.
This proposal also has implications for returning values from functions, especially as it relates to how Carbon can provide similar functionality to guaranteed return value optimization (RVO) and named return value optimization (NRVO) in more principled ways than C++ does. While these facilities should allow Carbon to not require tricks like NRVO and the dangers that come with them, the primary motivation is to increase the expressivity of "normal" function returns in order to address complex use cases in object construction.
Consider the following common initialization code:
```
fn CreateMyObject() -> MyType {
  return <return-expression>;
}

var x: MyType = CreateMyObject();
```

We propose that the `<return-expression>` in the `return` statement of `CreateMyObject` initializes the variable `x` here. There is no copy, etc. It is precisely the same as:

```
var x: MyType = <return-expression>;
```
We also propose this applies recursively. This matches how C++ has evolved with "guaranteed copy elision".
However, while most C++ implementations provide a "named return value" optimization that applies the performance benefits of this to returning a named variable, it has proven extremely difficult to automatically provide "guaranteed copy elision" when returning a named variable. The determining factors for whether it can be safely applied change significantly based on optimization levels and numerous other implementation details. The result has been repeated requests from users and proposals to provide an explicit syntax for marking a named variable as being returned and then both guaranteeing the copy elision takes place, and requiring that all returns are of the variable and compatible with that elision.
While these proposals have hit issues in C++ due to compatibility concerns and complexity in C++ itself, those won't apply to Carbon. We suggest that Carbon should provide a direct mechanism for this with a named return variable which obeys the same initialization rules as any other variable.
An important aspect of any approach to named return variables is that, much like with normal return values, there is no copy when initializing. This means that the address of `result` in the examples below of `CreateMyObject2` and `CreateMyObject3` is the same as the address of `y` during the evaluation of `y`'s initialization expression. Only one variable is created. It is first initialized to an unformed state on entry to `CreateMyObject2` and then assigned a fully formed value internally.
We propose to allow a specific variable within the function body to be declared as `returned` from the function. Assuming that `MyType` has an unformed state, the alternative to the above syntax would be:

```
fn CreateMyObject2() -> MyType {
  returned var result: MyType;
  // Any code we want here.
  result = CreateMyObject();
  // Must explicitly use the variable return syntax rather than returning an
  // expression.
  return var;
}
```
And we can initialize the return variable to remove the unformed requirement:
```
fn CreateMyObject3() -> MyType {
  // Any code we want here.
  returned var result: MyType = CreateMyObject();
  // Potentially more code...
  return var;
}
```
Minor syntax variations (not proposed):
- Just use `return;` rather than `return var;` for the actual control flow return.
  - The ceremony here is intended to distinguish from the case of a function that doesn't have a return value at all, but it could be removed.
- Require `return <name>;` where `<name>` is constrained to match the declared variable name.
  - One concern with this is that the `<name>` here is not an expression -- it isn't evaluated and must be just a name. Worse, there might not be a single name.
  - However, if there were good syntax, it might help with scoping.
- Maybe better syntax than `returned var` and `return var`; open to follow-up proposals to tweak the exact syntax of both sides of this.
We expect subsequent proposals to generalize this facility to non-`var` declarations where there is a reasonable and clear semantic. However, it should be noted that for `var`-style declarations in particular, where the address of the object can be taken, the entire declared object must be marked as returned -- it cannot be a component of the pattern alone.

We also propose a set of rules for the types in the pattern, and to what degree they must match the return type: whatever type would be the allocated type of the variable declaration must match the return type. For declarations that produce storage, such as `var`, the entire allocated object based on the pattern must match the return type. If this is generalized to non-allocating patterns, it would be fine to use any pattern that can match the return type.
The initializer expression must initialize the declared return type, and then any pattern match is done.
Control flow presents some challenging questions for this design. Consider code with the following structure:
```
fn ReturnEitherOr(...) -> MyType {
  // ...
  if (...) {
    returned var result1: MyType = ...;
    // ...
    return var;
  } else if (...) {
    returned var result2: MyType = ...;
    // ...
  } else {
    returned var result3: MyType = ...;
    // ...
  }
  // ...
  return var;
}
```

It seems like at least `result1` should be valid. What about `result2` and `result3`?
More complex control flow patterns grow the complexity this presents. For example:
```
fn ReturnVarWithControlFlow() -> Point {
  if (...) {
    // Is this allowed?
    returned var p1: Point = ...;
    if (...) {
      // This seems fine to return `p1`.
      return var;
    }
  }
  if (...) {
    // But this would have to be an error...
    return var;
  }
  // And what happens here?
  returned var p2: Point = ...;
  for (...) {
    // Or here?
    returned var p3: Point = ...;
    // Even though we might actually return...
    if (...) {
      return var;
    }
  }
  // Definitely no way to know what happens here between p1, p2, and p3.
  return var;
}
```
We propose a set of restrictions to give simple and understandable behavior which remains reasonably expressive.
- Once a `returned var` is in scope, another `returned var` cannot be declared.
- Any `return` with a `returned var` in scope must be `return var;` and returns the declared `returned var`.
- If control flow exits the scope of a `returned var` in any way other than a `return var;`, it ends the lifetime of the declared `returned var` exactly like it would end the lifetime of a `var` declaration.
- There must be a `returned var` declaration in scope when the function does a `return var;`.
One consequence of these rules is that code like the following is allowed:

```
fn MaybeReturnedVar() -> Point {
  {
    returned var p: Point = ...;
    if (...) {
      return var;
    }
  }
  return MakeMeAPoint();
}
```
Constructors of user defined aggregate types can present specific problems with initialization because they intrinsically operate on types prior to all of their invariants being established and when a valid value may not (yet) exist for subobjects such as members. These problems are discussed in detail in the blog post "Perils of Constructors". A primary goal in trying to solve these problems is being able to separate the phase of initializing members (or other kinds of subobjects) prior to the outer object existing and establishing any necessary invariants without creating holes in the type system soundness. An interconnected problem is correctly running destructors (in order to release any acquired resources) in the event of errors during construction.
The behavior of function returns provides interesting building blocks for writing code that addresses these problems. It provides a natural set of tools to build the two phases of a constructor: code that executes before a valid object exists and code that executes afterward. It even gives a clear demarcation between these phases. Adding `returned` variable declarations provides an important building block for solving these problems.
```
struct Point {
  var x: Int;
  var y: Int;
}

fn PointConstructorPreObjectPhase() -> Point {
  // Code that runs prior to an object existing.
  // Just assembles inputs needed to initialize the members.
  return (<x-expression>, <y-expression>);
}

fn ComplexTwoPhasePointConstructor() -> Point {
  // Code that runs prior to an object existing.
  var x_init: Int = ...;
  var y_init: Int = ...;
  returned var p: Point = (x_init, y_init);
  // `p` is now fully formed, and its destructor *must* run.
  // More code that uses `p` can go here.
  return var;
}
```
This proposal suggests disallowing unformed members -- objects must be completely formed or completely unformed. Allowing a mix of formed and unformed members in an object introduces significant complexity into the language without advantages, and the code patterns that result scale poorly to real-world examples. For details on what it would look like to allow these and how this goes wrong, see the alternatives considered section below.
Note that inheritance opens up a set of concerns closely related to, but importantly different from, those for members here; this proposal doesn't take any position one way or the other. If and when there is a proposal for how inheritance should work in Carbon, it should address this question.
This model is designed to give significant control to users of Carbon for maximizing performance around variable initialization, which matches the primary goal of Carbon for performance-critical software. It then tries to provide as many ergonomic and safety tools as possible to support the goals of easy to read and understand code and providing practical safety. Leaving space for future evolution toward stricter checking of potentially unsafe code aligns with the goal for language evolution.
A primary alternative to the approach of unformed objects for initialization is to simply require initializing all variables at compile time. This can take a few forms across the spectrum of compile time analysis complexity.
The simplest approach is to require initialization at the point of declaration or allocation for any variable or memory. An essential property for this approach to remain simple is that it has no escape hatch. Once such an escape hatch exists, all of the semantic questions of this proposal need to be answered.
Advantages:
- No need to specify or support the complexity of objects that have been declared but not fully initialized in the language.
- Simple, easy to teach, easy to implement.
Disadvantages:
- Will result in initialization with nonsensical values; using such a value remains a bug, and we won't have any realistic way to detect these bugs.
- Imposes a non-trivial performance cost in some cases and for some users without providing any effective mechanisms to address this overhead.
Use a simple rule such as requiring initialization before use along all branches.
Advantages:
- Still avoids much of the language level complexity of unformed objects.
- Very simple to implement.
- Reasonably teachable.
- Avoids trivially redundant initializations.
Disadvantages:
- Has significant expressivity limits past which initialization is forced.
- When this happens, the disadvantages of always requiring initialization repeat. While it is expected to happen less often, it still happens.
- Adding an unsafe escape hatch that avoids initialization even in this case returns to the same fundamental complexity and model as is proposed here.
- Would still need a semantic model for moved from objects which would introduce some amount of additional complexity.
We could do significantly more complete analysis of the dataflow of the program to prove that initialization occurs prior to use.
Advantages:
- Still avoids much of the language level complexity of unformed objects.
Disadvantages:
- Some implementation complexity to achieve strong dataflow analysis.
- Significant complexity to teach -- it can be difficult to understand or explain why the data flow analysis fails in edge cases.
- Can result in brittleness where changes to one part of code confuse the dataflow analysis needed by another part of code.
- There still exist rare cases where the analysis fails; a nonsensical initialization may then be required, and the disadvantages of that return (both for bug detection and performance).
  - The rate at which this occurs scales inversely with the complexity of the dataflow analysis, at the cost of teachability.
- Would still need a semantic model for moved from objects which would introduce some amount of additional complexity.
We could pursue the same approach as C and C++ here. However, this path has proven to have severe problems. It has inevitably resulted in prevalent bugs and security vulnerabilities due to programmer errors, and having uninitialized variables in the language makes teaching the debug build to catch these bugs extremely difficult and potentially impossible. We would essentially need developers to heavily use an MSan-like mode (either by integrating it with the debug build or having a separate one) in order to cope with the bugs caused by uninitialized memory. This is the only thing that has allowed coping with C and C++ semantics here, and even it is not very effective.
If MSan remains a separate tool that needs propagation and origin tracking, the cost and the barrier to entry will remain very high: MSan is currently the least used and the least portable of the sanitizers; it is widely used only in two of Google's ecosystems, on only one OS, where it is maintained with considerable effort.
If we do end up having uninitialized memory, we should aim to make MSan-like checking part of the normal debug build, which means no or very limited propagation. Any form of propagation will require optional origin tracking, which is doable but will mean extra CPU/RAM overhead for meaningful diagnostics.
Even without propagation, MSan-like checking requires bit-to-bit shadow that would incur a 2x RAM overhead in the debug build. Byte-to-bit shadow, which is < 12% RAM overhead, is prone to racy false positives and false negatives unless a) instrumentation uses expensive bit-wise atomics or b) the language defines data races on 8-byte granularity. Adding this restriction on data races would diverge from C++ and further complicate porting code and interoperability.
This proposal suggests not allowing code to pass unformed objects to other functions as arguments or to return them. They can still be escaped to another function by passing the address.
Advantages of allowing this:
- Would accept more code that can be written in C++ today and doesn't directly exhibit a bug. For example, unused arguments or return values in C++ could be left uninitialized, but in Carbon they would need to use an optional type instead.
Disadvantages of allowing this:
- Makes checking for errors in debug builds both more expensive and less helpful by potentially moving the error to a separate function rather than the one most likely containing the bug.
  - We have anecdotal evidence indicating that producing an error at the function boundary would be preferable to delaying the error. This evidence comes from MSan delaying any such errors, while UBSan eagerly errors at the function boundary for a few types (`bool`s and `enum`s). Particularly for `bool` parameters, the early errors from UBSan have essentially always been correctly diagnosed bugs. The early errors from UBSan are also significantly easier to understand.
  - Implementing interprocedural checking either requires a more expensive ABI or using more expensive shadow-memory checks which would otherwise be unnecessary.
- Equivalent functionality can be achieved safely using an optional type, or unsafely by passing the address. The only cases where passing the address would be less efficient than what C++ allows are likely to be amenable to compiler optimization to remove the overhead.
This proposal suggests not allowing assigning an unformed object to another unformed object.
- We should try not allowing this at first.
- This will be enforced the same way as any other use of an unformed object: at compile time when locally detectable, at runtime when sufficient runtime tracking is enabled.
- If we find we must allow this in some cases, we should start by allowing it only via specially named operations.
  - If either of these approaches can be made to work, they are significantly easier to check for bugs.
Advantages of allowing this:
- Avoids some surprise for users.
  - It seems like it should work for integers.
  - They may not know when the incoming object is actually unformed, resulting in an error far away from the root cause.
  - Only some surprise is avoided, given that we still can't assign unformed objects to fully formed objects -- doing so would open up the possibility of unformed subobjects.
- Makes some idiomatic patterns in C++ either invalid or inefficient. Consider the typical C++ implementation of a `swap` function:

  ```
  template <typename T>
  void swap(T& lhs, T& rhs) {
    T tmp = std::move(lhs);
    lhs = std::move(rhs);
    rhs = std::move(tmp);
  }
  ```

  If this function is called with `lhs` and `rhs` referring to the same object, then the first move into a temporary leaves the object in an unformed state, and the second assignment will assign that unformed object. Whatever Carbon code ends up replicating patterns like this will require one of two options:

  1. Check for `lhs` and `rhs` referring to the same object, adding a branch and its overhead to the code.
  2. Require that it is never called with `lhs` and `rhs` referring to the same object, pushing the cost onto the caller.

  We would largely expect (2) to be the dominant choice in Carbon due to performance concerns. However, if the ergonomic burden of (2) becomes significant, we may revisit this rule to carve out enough of a facility for self swap specifically. That might either be the narrow specially-named operation mentioned above, or attempt to special-case self assignment in some way.

  We could consider narrowly allowing only self-move-assignment or self-swap of unformed objects to lessen the impact of precluding the move-assignment and swap of unformed objects generally. However, that carries a surprising complexity cost for the language specification and creates very subtle rules. At a minimum, types with an unformed state would need to implement move assignment in a way that would be correct even when performing a self-move of an unformed object. On top of that, the simplest solution would be to allow any self-move-assignment of an unformed object. However, that would require any move assignment that might be a self-move to be implemented in a checked or sanitized build with a branch to detect that case and not error. To lower this overhead in checked builds, we would need to add a language-level swap operation of some form and exempt only it from checking. This would reduce the overhead, but would then make it impossible to even test the self-move-assignment that would still be required to work in non-checked builds.
- May be important to enable specific code patterns if we allow member access.
Disadvantages of allowing this:

- Will require the core propagation logic and origin tracking that is a source of large complexity and cost in MSan to detect bugs.
  - However, object granularity and the lack of arithmetic make this still somewhat less severe than what MSan deals with.
  - Only a concern for detection; hardening is easily accomplished regardless.
- Can only possibly support a combined debug build mode if restricted to a special/designated operation rather than arbitrary assignment.
  - Origin tracking will still be required and have significant cost.
### Fully destructive move (Rust)

Rust has truly destructive moves -- the object is gone after being moved from, and no destructor ever runs past that point. This is possible because the type system models when a variable declared in one function can be moved from by another function. It is incompatible with allowing move-from through a pointer to a non-local object.
Advantages:
- Precise, safe, and efficient lifetime management.
- Perfect detection of use-after-move bugs.
Disadvantages:
- Doesn't compose with pointers and C++ idioms which involve moving out of
non-local objects through a pointer.
- Would require redesigning systems to model the lifetime in the type system.
### Completely non-destructive move (C++)

C++ moves are never destructive. The result is that moves simply do not impact the lifetime of the moved-from object, only its state.
Advantages:
- Perfect match for C++ semantics and idioms, including interoperability and migrated code and systems.
Disadvantages:
- Pervasive user expectation that destroying a moved-from object is a no-op, but nothing in the language helps enforce this pattern.
- Allows types to provide meaningful semantics for use-after-move.
  - Rarely used, resulting in strong user expectations that this isn't allowed, alongside special cases where it is allowed.
  - Complicates generic code, which has to handle both patterns.
- Significantly reduces the effectiveness of use-after-move bug detection.
- Introduces difficulty in optimizing away no-op destruction after a move.
  - Some cases require complex rules around ABIs and calling conventions.
  - Other cases require expensive optimization techniques (inlining and memory analysis) that can fail in some circumstances.
### Named return variable in place of a return type

An alternative approach to declaring `returned` variables is to extend the syntax for the return type such that it can declare a named variable that holds the return value. For example, provided that `MyType` has an unformed state:
```
fn CreateMyObject2() -> var result: MyType {
  // `result` here is an unformed object,
  // so we can assign to it here:
  result = CreateMyObject();
  // Because there is a named return variable, we cannot
  // have an expression in the return statement.
  return;
}
```
We could also explicitly initialize the return variable, which eliminates the requirement that its type has an unformed state:
```
fn CreateMyObject3() -> var result: MyType = CreateMyObject2() {
  // ... any code we want ...
  return;
}

var y: MyType = CreateMyObject3();
```
Minor syntax variations:

- Possible to use different arrow symbols (such as `=>` instead of `->`) to differentiate different kinds of returns.
  - However, we don't recommend this given the subtlety of the visual difference.
- Potentially require `return <name>;` where `<name>` must exactly match the return variable's name to make it clear what is being returned.
  - One concern with this is that `<name>` can't really be an expression in general; it needs to just be a name. This may be a bit surprising.
  - Also complicated as there is no need to have a single name for the variable: it might be `var (result: MyType, error: Error)` for example, and it becomes unclear what to put in the `return`.
Advantages:

- The primary advantage of this approach is that it avoids any and all challenging lifetime questions or restrictions on where the `returned` variable can be declared or what control flow must follow.
- It is also slightly less expressive which, if sufficient, seems desirable.
Disadvantages:

- The syntax space of the return type is very crowded already, and this would add more complexity there rather than fitting in cleanly with existing declaration spaces within the body of the function.
- Likely to be an implementation detail and valid to use on a function with a normal return type in its forward declaration. This isn't obvious when placed in the declaration position, and it might well be unnecessarily added to forward declarations just because of its position.
- Removes the ability to both specify a specific return type and a pattern for binding parts of that object to names. Instead, the type must be inferred from the pattern.
  - This ended up feeling like a deep and fundamental problem where we would lose important expressivity to disambiguate the return type from patterns. For example, when the return type is some form of sum type or variant, the pattern space is both especially attractive to use and fundamentally fails to express enough information to capture the type.
### Allow unformed members

Regardless of the specific strategy, using the proposed `return` facilities for constructing aggregate objects still presents some ergonomic difficulties for large objects with many members, or members that are expensive or impossible to move if we don't have the `returned var` facility. To further ease initializing members, we could allow access to unformed members of an unformed object. Superficially, this remains safe -- we aren't allowing access to uninitialized storage, and the only access allowed is what is always allowed for unformed objects: assignment (and destruction). However, making this work with destructors adds some complexity to the rules around unformed objects.
Disallowing access to unformed members provides a simple rule for the semantics required of the destructor: it must be valid and correct to run on an unformed object or a fully formed object. Destructors always have to be valid for fully formed objects, so the only addition is the unformed state being a no-op.
However, for the same reasons it might be necessary to run the destructor on an unformed object, if we allow assigning to a member of an unformed object the destructor may also need to be run with this mixed state where some members are unformed and others are fully formed:
```
struct Point {
  var x: Int;
  var y: Int;
}

fn AssignToX(x: Int, p: Point*) {
  p->x = x;
}

fn Example() {
  var p: Point;
  if (...) {
    // Allowed to call the destructor of `p`, sees a fully unformed `Point`.
    return;
  }
  AssignToX(42, &p);
  if (...) {
    // If `AssignToX` is in another file, we don't know whether `p` is fully
    // unformed, a mixture of `x` fully formed and `y` unformed, or fully
    // formed. The last case can only be handled by running the destructor
    // of `p`, so that's what we have to do here, and it must handle all the
    // other cases as well.
    return;
  }
  p.y = 13;
  // ...
}
```
The added requirement for destructors of types that support an unformed state is that if they are called for an unformed object of that type, they must run the destructors for any members that might themselves be fully formed.
- For private members, the type itself controls which members might be assigned fully formed values and the destructor run. The type's destructor merely needs to agree with the constructors and/or factories of the type to destroy any members that might have been assigned values.
- For public members, the type's destructor must destroy all members as it cannot know which ones might have been assigned a value.
Because of these inherent restrictions around private members, types can ensure that any necessary relationships between their private data members hold in order for the destructor to work. See the detailed example below for how this might work in practice.
Despite these added requirements to the destructor, it isn't exempt from the rules governing unformed objects when accessing its members: any that are in fact unformed when the type's destructor runs are only allowed to be assigned to or destroyed.
Lastly, we need some syntax other than assignment for marking when the members have reached a state where the entire object is fully formed. This document uses a hypothetical `?reify?` statement that is intended to be placeholder syntax only. Just like an assignment to an unformed object, after this point the object is fully formed and follows those rules. Our example becomes:
```
fn EvenSimplerPointConstructor() -> var p: Point {
  // `p` is unformed and no members are referenced.
  // Destruction is completely optional.
  p.x = ...;
  // `p.x` may now be fully formed. Either `p` must be destroyed,
  // or `p.x` must be destroyed.
  p.y = ...;
  // Now either `p` must be destroyed, or both `p.x` and `p.y` must be
  // destroyed.
  ?reify? p;
  // Now `p` must be destroyed, and is a fully formed object.
  // ...
  return;
}
```
Before the `?reify?` operation, the compiler can either track the members which are assigned fully formed values and run their destructors, or it can elect to run the entire object's destructor if, for example, the entire object escapes and it becomes impossible to track which members have been assigned fully formed values. But once the `?reify?` operation has been performed on the object, its destructor must be run, as it is now itself a fully formed object.
Now that we have all the components of this tool, let's consider a container like `std::vector` using a hypothetical syntax for defining a type, its implementation of the necessary initialization, its destructor, and a factory function. This example specifically tries to illustrate how, for private members, the function doing construction and the destructor can work together to ensure that at any point of destruction, the necessary state and invariants hold.
```
struct MyContainer(T: Type) {
  private var size: Int;
  private var capacity: Int;
  private var data: Optional(T*);

  // ... public API ...

  impl NecessaryInit {
    // Not necessarily suggesting this specific API.
    // Just *some* API we pick for doing the necessary init.
    // This API assumes that all members with an unformed state
    // have their implementation called already; all we need to do is
    // adjust to *specific* values anything that is necessary for this
    // type.
    fn Init(this: Self*) {
      this->data = Optional(T*).Empty;
    }
  }

  fn Create(count: Int, value: T) -> Optional(MyContainer(T)) {
    var container: MyContainer(T);
    // Set the capacity first so it is available for deallocation.
    container.capacity = count;
    // And clear the size so we don't need a special case.
    container.size = 0;
    // Next set the data to the allocated buffer. If the allocation fails,
    // this routine should just return an empty optional.
    container.data = AllocateRawStorage(count, T);
    if (container.data == null) {
      // Might destroy `container` here.
      return .Empty;
    }
    // Some loop syntax over [0, count).
    for (i: Int = 0 ..< count) {
      if (not CopyThatMightFail(value, &container.data[i])) {
        // Might destroy `container` here.
        return .Empty;
      }
      // Increment size as we go to track how many objects
      // need to be destroyed if the next one fails.
      ++container.size;
    }
    ?reify? container;
    // Maybe we make this implicit... somehow make it move...
    return .Some(container);
  }

  // However we end up naming this...
  fn Destructor(this: Self*) {
    if (this->data.IsEmpty()) {
      // If the buffer isn't allocated, nothing to do.
      return;
    }
    // Destroy any successfully created object. The `Create`
    // function ensures that `size` is valid for use here.
    for (i: Int = 0 ..< this->size) {
      // However we spell in-place destroy...
      this->data[i].Destructor();
    }
    // Deallocate the buffer. Here as well, the `Create`
    // function ensures that `capacity` is available for a
    // non-null data pointer.
    Deallocate(this->data, this->capacity);
  }
}
```
This kind of type creates a close contract between construction and destruction to simplify both routines by ensuring that the necessary members are valid any time the destructor would need them to continue executing correctly.
While this provides a very flexible and convenient set of tools for construction, it comes at a fairly high cost. There is no way for the language to enforce any contract between construction and destruction. I don't see any way to provide compile-time checking that one or the other doesn't make a mistake. We can verify it dynamically, but at fairly high cost. It also adds non-trivial complexity. Given that this is merely an ergonomic improvement as opposed to an expressivity improvement, it isn't at all clear that we should pursue this until we have clear evidence that the ergonomic need is sufficiently large.
In fact, for this kind of realistic example, the `returned` variable approach appears to be significantly more ergonomic as well as simpler, undermining the purpose of pursuing this direction:
```
struct MyContainer(T: Type) {
  private var size: Int;
  private var capacity: Int;
  private var data: Optional(T*);

  // ... public API ...

  impl NecessaryInit {
    // Not necessarily suggesting this specific API.
    // Just *some* API we pick for doing the necessary init.
    // This API assumes that all members with an unformed state
    // have their implementation called already; all we need to do is
    // adjust to *specific* values anything that is necessary for this
    // type.
    fn Init(this: Self*) {
      this->data = Optional(T*).Empty;
    }
  }

  // Private helper factored out of the destructor.
  fn DestroyElements(data: T*, count: Int) {
    for (i: Int = 0 ..< count) {
      data[i].Destructor();
    }
  }

  fn Create(count: Int, value: T) -> Optional(MyContainer(T)) {
    // Try to allocate the storage.
    var data: UniquePtr(T) = AllocateRawStorage(count, T);
    if (data == null) {
      return .Empty;
    }
    // Some loop syntax over [0, count).
    for (i: Int = 0 ..< count) {
      if (not CopyThatMightFail(value, &data[i])) {
        // Here we have to destroy the copies that succeeded.
        // But we can easily share the logic with the destructor.
        DestroyElements(data, i);
        return .Empty;
      }
    }
    // Now we can just initialize the object and all invariants
    // hold for its destructor.
    returned var result: Optional(MyContainer(T)) =
        .Some((count, count, data.Release()));
    return result;
  }

  // However we end up naming this...
  fn Destructor(this: Self*) {
    if (this->data.IsEmpty()) {
      // If the buffer isn't allocated, nothing to do.
      return;
    }
    // The above check is the only thing needed, and it is only needed
    // to support an unformed state. Once we get here, everything holds.
    DestroyElements(this->data, this->size);
    Deallocate(this->data, this->capacity);
  }
}
```
With this approach, the necessary steps in the `Create` function are all local constraints, and changes to the destructor can't break their assumptions. There is no significant added complexity in practice due to needing to form the entire object at a single point. Normal techniques for factoring common logic are easily used to address what little added logic is needed.
Advantages:
- Allows incrementally assigning meaningful members to an object.
- Simplifies some initialization patterns for objects with large numbers of members.
Disadvantages:
- The simplifications don't scale up to more realistic scenarios. There, they are eclipsed by the added complexity forced onto the destructor of any such type.
- The design patterns that simplify the organization of the construction and destruction logic for these kinds of types are likely to remove the need for the feature, so it seems better to keep the language simpler.