Opt out change detection #6659
Conversation
Discussed the need for benchmarks with @cart on Discord:
Added a Breaking Change label due to this. I think this is the right course of action, but I know at least @TheRawMeatball takes advantage of this quirk.

Migration Guide: Components (and resources) with no data can no longer be manually set as changed. If you were taking advantage of this quirk, store internal data, or swap to events.
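For illustration, a hedged sketch of both migration options; the `SaveRequested` names and the event-based system are hypothetical examples, not code from this PR:

```rust
use bevy::prelude::*;

// Option 1: store internal data, so there is a real mutation to detect.
#[derive(Component)]
struct SaveRequested {
    generation: u64, // bump this instead of calling set_changed()
}

// Option 2: swap the dataless "signal" component for an event.
struct SaveRequestedEvent;

fn request_save(mut writer: EventWriter<SaveRequestedEvent>) {
    writer.send(SaveRequestedEvent);
}

fn main() {
    App::new()
        .add_event::<SaveRequestedEvent>()
        .add_system(request_save)
        .run();
}
```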
```rust
/// # Disabling Change Detection
///
/// By default, Bevy will track every mutative access made to a given component.
/// This may incur a performance cost on types that do not need it.
```
Suggested change:

```diff
- /// This may incur a performance cost on types that do not need it.
+ /// This may incur a performance cost (primarily in terms of memory usage) on types that do not need it.
```
Not sure I agree with this when it's currently blocking autovectorization, as seen in #6547. It's both a CPU and a memory cost in that case.
Code looks correct, and the docs are solid. In addition to my small comments, I feel pretty firmly that we should automatically opt all ZSTs out of change detection, with no override (or at least, no override with the derive macro). The macro should fail to compile (or panic at startup if that's not possible) if we're discussing a struct with no fields.
That behavior is unintuitive and a memory usage footgun: allowing it results in more complexity than it's worth, especially if `set_changed` is hard-coded to a no-op.
> I was thinking people could use …

Resources didn't have this implemented. I'll go ahead and add it.
Ah, this makes sense to me. Can we control added and changed separately then?
Disclaimer: I'm not very familiar with the ECS side of Bevy. Does it make sense that change detection is opt-out by default? I would've thought it would be opt-in, so that you don't pay for it unless you actually need it. I can see the worry about forgetting to opt in and it silently not working, but I dislike that all mutable queries pay a performance cost (although, how large is it in practice?). Thoughts?
I feel pretty strongly that this should be opt-out by default.
That said, I think that future work should include a schedule analyzer that checks for uses of added / changed filters and tells you what you could turn off.
Currently, it's 8 bytes per component in memory (about the same as a pointer). However, there are cases, like in the current rendering implementation, where we clear the entire World every frame. In these cases, change detection only adds extra unnecessary overhead, and for types that are smaller than 8 bytes (i.e. markers and other tiny components), the tracking metadata outweighs the data itself.
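For context, the 8 bytes come from the two ticks stored per component instance; a rough sketch (field layout assumed, not Bevy's exact definition):

```rust
// Roughly what Bevy stores alongside every component instance:
struct ComponentTicks {
    added: u32,   // tick at which the component was added
    changed: u32, // tick of the most recent mutable access
}

fn main() {
    // 8 bytes of metadata, regardless of how small the component itself is.
    assert_eq!(std::mem::size_of::<ComponentTicks>(), 8);
}
```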
I agree with most of what is said here, but this might be (very) wrong. In the general case, change detection is a fairly minimal overhead, but without #6547, change detection is a hard block on autovectorization for all forms of query iteration (see these results). This is a massive perf downgrade where it would otherwise be possible. Even with #6547, change detection may be a blocker between enabling it for …
We could make it opt-in and just spit out a warning when using change detection APIs (if we had custom lints).
Fair! We could also use schedule analysis, or a debug-only …
Just some minor nits, still working on a real review.
```rust
pub const COMPONENT: Symbol = Symbol("component");
pub const RESOURCE: Symbol = Symbol("resource");
pub const CHANGED_DETECTION: Symbol = Symbol("change_detection");
```
Suggested change:

```diff
- pub const CHANGED_DETECTION: Symbol = Symbol("change_detection");
+ pub const CHANGE_DETECTION: Symbol = Symbol("change_detection");
```
In `parse_component_attr(ast: &DeriveInput) -> Result<Attrs>`:

```rust
Meta(NameValue(m)) if m.path == CHANGED_DETECTION => {
```
Suggested change:

```diff
- Meta(NameValue(m)) if m.path == CHANGED_DETECTION => {
+ Meta(NameValue(m)) if m.path == CHANGE_DETECTION => {
```
```rust
use syn::{
    // ...
    NestedMeta::{Lit, Meta},
};
match meta {
    Meta(NameValue(m)) if m.path == CHANGED_DETECTION => {
```
Suggested change:

```diff
- Meta(NameValue(m)) if m.path == CHANGED_DETECTION => {
+ Meta(NameValue(m)) if m.path == CHANGE_DETECTION => {
```
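To make the rename concrete, here is a hedged sketch of how a `#[component(change_detection = ...)]` name-value pair might be parsed with syn 1.x (the helper function and its shape are my assumptions, not the PR's exact code):

```rust
use syn::{Lit, Meta, MetaNameValue, NestedMeta};

// Returns Some(flag) if `meta` is a `change_detection = <bool>` name-value pair.
fn parse_change_detection(meta: &NestedMeta) -> Option<bool> {
    if let NestedMeta::Meta(Meta::NameValue(MetaNameValue { path, lit: Lit::Bool(b), .. })) = meta {
        if path.is_ident("change_detection") {
            return Some(b.value);
        }
    }
    None
}
```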
```rust
/// [orphan rule]: https://doc.rust-lang.org/book/ch10-02-traits.html#implementing-a-trait-on-a-type
/// [newtype pattern]: https://doc.rust-lang.org/book/ch19-03-advanced-traits.html#using-the-newtype-pattern-to-implement-external-traits-on-external-types
pub trait Component: Send + Sync + 'static {
    type WriteFetch<'a>: ComponentMut<'a> + DetectChanges<Inner = Self> + DerefMut<Target = Self>;
```
I don't think `Fetch` is the right term to use here, since this type is the actual item being returned, not an intermediate value.
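To sketch the design being reviewed (only the associated-type bound comes from the diff; the stand-in types, `ParticleVelocity`, and both impls are illustrative assumptions):

```rust
// Minimal stand-ins to illustrate the shape (not Bevy's real definitions):
struct Mut<'a, T> {
    value: &'a mut T,
    changed: &'a mut u32,
}

trait Component: Send + Sync + 'static {
    type WriteFetch<'a>;
}

struct Transform;
impl Component for Transform {
    // Change detection on: writes go through Mut<T>, which records ticks.
    type WriteFetch<'a> = Mut<'a, Transform>;
}

struct ParticleVelocity(f32);
impl Component for ParticleVelocity {
    // Change detection off: a bare &mut T with no tick bookkeeping.
    type WriteFetch<'a> = &'a mut ParticleVelocity;
}
```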
```rust
impl_methods!(Mut<'a, T>, T,);
impl_debug!(Mut<'a, T>,);

impl<'a, T: ?Sized> DetectChanges for &'a mut T {
```
This impl seems like a bit of a footgun. If you have a variable of type `&mut Mut<T>`, then it will implicitly use this no-op impl any time you call `.is_changed()`.
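A self-contained sketch of the footgun, with simplified stand-ins for Bevy's types (the real traits have more members):

```rust
trait DetectChanges {
    fn is_changed(&self) -> bool;
}

struct Mut<T> {
    value: T,
    changed: bool,
}

impl<T> DetectChanges for Mut<T> {
    fn is_changed(&self) -> bool {
        self.changed
    }
}

// The blanket impl under discussion: no ticks to consult, so it just answers true.
impl<'a, T: ?Sized> DetectChanges for &'a mut T {
    fn is_changed(&self) -> bool {
        true
    }
}

fn check(m: &mut Mut<u32>) -> bool {
    // Method lookup matches the `&'a mut T` impl (with T = Mut<u32>) before
    // auto-derefing to Mut<u32>, so this silently calls the no-op impl.
    m.is_changed()
}

fn main() {
    let mut m = Mut { value: 0u32, changed: false };
    assert!(check(&mut m)); // true, even though `changed` is false
}
```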
@BoxyUwU maybe you want to look at this PR? Maybe you'd have valuable input?
Boxy, James, and I discussed this PR in this discord thread, where we explored some options for where to take this feature: https://discord.com/channels/691052431525675048/749335865876021248/1049094784096141363 then we went back on topic here: https://discord.com/channels/691052431525675048/749335865876021248/1049104605042184283
To summarize that discussion, the best option right now might be to use the same "soft constant" approach used by #6800, where we rely on heavy inlining to optimize out branches based on a supposedly runtime-stored value (enabled/disabled) sourced from a constant (a sketch follows).
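As I understand it, that looks roughly like the sketch below (names and shapes are my assumptions, not #6800's actual code): the flag lives in a runtime field, but because it is only ever initialized from a per-type constant and everything is inlined, the optimizer can fold the branch away after monomorphization:

```rust
struct Ticks<'a> {
    // Always initialized from something like T::CHANGE_DETECTION_ENABLED,
    // so after inlining this is effectively a constant.
    enabled: bool,
    changed: &'a mut u32,
}

impl<'a> Ticks<'a> {
    #[inline(always)]
    fn set_changed(&mut self, current_tick: u32) {
        if self.enabled {
            *self.changed = current_tick;
        }
    }
}
```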
Another option that I don't fully believe in but might be worth discussing now (not trying to derail): maybe we make change detection 100% runtime configuration? Pretty sure this has been proposed in the past. I might even have shot the idea down in Bevy's early days 😅
Some pros:
Some cons:
I think the biggest concern is performance, but if we can prove it's actually faster in the majority of cases, that would be pretty sweet!
I'm pretty sure this will break the vectorization gains made in #6547, since the compiler cannot prove at compile time whether or not it's going to branch. If possible, I would really like this to be a compile-time decision, as the branch could easily be more expensive than the write out to the tick buffer. What is described above (a singular enabled/disabled flag) is probably the least intrusive way to approach this. However, if it's not sourced from a constant but rather a runtime field as proposed, the compiler cannot optimize out the branch, and this will break many of the optimizations we've been trying to keep.
It seems like whether change detection is set to true/false is a check that can be done outside of a for loop, since it's not going to change from one iteration to the next.
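In sketch form (the data shapes are assumed; real query iteration is more involved), hoisting the flag gives the optimizer a tick-free inner loop to vectorize:

```rust
fn scale_all(values: &mut [f32], ticks: &mut [u32], enabled: bool, current_tick: u32) {
    if enabled {
        // Tick writes interleaved with the real work.
        for (v, t) in values.iter_mut().zip(ticks.iter_mut()) {
            *v *= 2.0;
            *t = current_tick;
        }
    } else {
        // Tight, tick-free loop that the compiler is free to autovectorize.
        for v in values.iter_mut() {
            *v *= 2.0;
        }
    }
}
```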
Maybe we could support both modes? A compile-time "change detection settings" const enum. Then for high-traffic types (guaranteed to use change detection) like Transform we could force it to "enabled" for maximum perf.
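A hypothetical sketch of that enum; the variant names are my guesses, extrapolated from the "enabled" wording here and the DetectedAtRuntime option mentioned in the next comment:

```rust
pub enum ChangeDetection {
    Enabled,           // always track; the branch compiles out as always taken
    Disabled,          // never track; the branch compiles out entirely
    DetectedAtRuntime, // consult a runtime flag, as discussed above
}

pub trait Component: Send + Sync + 'static {
    const CHANGE_DETECTION: ChangeDetection = ChangeDetection::Enabled;
}
```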
It seems worthwhile to impl both approaches and compare. And we could probably just start with (and merge) the compile-time variant, then tack on the DetectedAtRuntime option once we find the time to do the impl.
```rust
/// Changes stop being detected once they become this old.
pub const MAX_CHANGE_AGE: u32 = u32::MAX - (2 * CHECK_TICK_THRESHOLD - 1);

pub trait ComponentMut<'a>: DetectChanges {
```
Maybe a quick doc explaining that this trait is just there to add the `const ENABLED: bool` to either `&mut T` or `Mut<T>` could be useful.
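Something like this, perhaps (the wording is mine; the trait signature is from the diff and `ENABLED` is the const mentioned above):

```rust
/// Abstraction over the mutable item a query yields for a component:
/// either `Mut<T>` (change detection enabled) or `&mut T` (disabled).
/// Its main purpose is to expose `ENABLED` so generic code can tell
/// the two apart at compile time.
pub trait ComponentMut<'a>: DetectChanges {
    const ENABLED: bool;
}
```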
I've seen some people complain on Slack about the 'fake' implementation of `DetectChanges` for `&mut T`, where `is_changed` and `is_added` are set to `true`, because it can be misleading/confusing.
Ultimately I think the problem is that `ComponentMut` (whether we return `Mut<T>` or `&mut T`) should be decoupled from `DetectChanges`?
I tried updating your PR to make it work, like this: https://github.com/james7132/bevy/pull/5/files
(It looks like `WriteFetch` never makes use of the `DetectChanges` bound, because `WriteFetch` is only used for `ChangeTrackers<T>`, which has its own change_ticks.)
- Fixes bevyengine#7732
- Based extensively off bevyengine#6659
- Completely broken as I didn't change any of the ~100 or so assumptions that `&T == &T` while it now is `Ref<T>`
This doesn't look like it'll be ready for the 0.11 milestone <3
So, what is the status here? This PR missed 0.11 and now also 0.12. I think this work is valuable. Could we try to revisit this for 0.13?
Objective
Allow users to opt out of change detection at the type level. Change detection comes with a performance cost on every mutative query. Disabling change detection at the type level can act as a performance optimization for all queries it's involved in.
Solution
Add the constant `CHANGE_DETECTION_ENABLED` to `Component` and have it default to `true`. Extend the `Component` derive macro to allow easy configuration of this constant. Use this constant to return constant `false` results to change detection queries.

Disabling change detection will make `Mut::is_added`, `Mut::is_changed`, `Added<T>` queries, and `Changed<T>` all return `false`. Any query using these filters will return empty results.

ZSTs will always make `Mut::set_changed` a no-op, even if change detection is enabled.
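A sketch of what this looks like in use; the attribute syntax is inferred from the derive-macro changes in this PR and may not match the final form exactly, and `ParticleVelocity` is a hypothetical example type:

```rust
use bevy::prelude::*;

#[derive(Component)]
#[component(change_detection = false)]
struct ParticleVelocity(Vec3);

// Both filters always yield nothing for an opted-out type,
// so this system never sees any entities.
fn report(
    added: Query<Entity, Added<ParticleVelocity>>,
    changed: Query<Entity, Changed<ParticleVelocity>>,
) {
    assert_eq!(added.iter().count(), 0);
    assert_eq!(changed.iter().count(), 0);
}
```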
Follow-up