Low power default option [#366] #397
Conversation
```diff
@@ -19,9 +19,15 @@ pub struct WgpuRenderer {
 impl WgpuRenderer {
     pub async fn new() -> Self {
         let instance = wgpu::Instance::new(wgpu::BackendBit::PRIMARY);

+        #[cfg(feature = "low_power")]
+        let power_preference = wgpu::PowerPreference::Default;
```
Naming a feature `low_power` and then mapping it to `PowerPreference::Default` instead of `PowerPreference::LowPower` may be confusing to users (it certainly would be to me).

Suggestion:
- If you intend to force things into low power mode, then map the feature to `PowerPreference::LowPower`.
- If you want to switch to the more powerful GPU when main power is connected (which is what `PowerPreference::Default` does), then rename the feature so it more accurately describes what it does. Maybe `adaptive_power` or something like that?
Changed the option to "adaptive_power" and added the "low_power" feature as `PowerPreference::LowPower`.
> If you want to switch to the more powerful GPU when main power is connected (which is what PowerPreference::Default does)

I don't think this is true any longer. The `power` module was removed. I believe `PowerPreference::Default` now always prefers an integrated GPU over a discrete one when both are available. This behavior is odd, as nearly every other piece of software behaves the opposite way.
This seems functional and understandable to me. 👍
It might make more sense to put this into some builder or configuration for wgpu plugin, rather than in a feature, but I'll leave that up to the actual decision makers. 😉
The failing CI build can be fixed by running `cargo +nightly fmt --all` and then committing and pushing the formatting fix. Or, if you don't want to install nightly, you can manually make the changes that CI points out here.
I would definitely lean in the direction of a builder or config option for this one. Adding a feature makes it unconfigurable at runtime and increases the number of features you have to test in CI, etc. @cart also expressed the desire to limit the features that we add ( comment ):
|
I'll fix the formatting/CI - thanks @CleanCut for the guidance on how to do that.
@lee-orr

```rust
struct Example {
    pub runnable: bool,
    pub path: String,
    pub index: u32,
}

impl Example {
    pub fn new() -> Example {
        // You set your defaults here
        Example {
            runnable: false,
            path: String::new(),
            index: 0u32,
        }
    }

    // Basic setters: each consumes self, mutates a field, and returns
    // self so the calls can be chained.
    pub fn set_runnable(mut self, runnable: bool) -> Self {
        self.runnable = runnable;
        self
    }

    pub fn set_path(mut self, path: &str) -> Self {
        self.path = String::from(path);
        self
    }

    pub fn set_index(mut self, index: u32) -> Self {
        self.index = index;
        self
    }
}

fn main() {
    let example = Example::new()
        .set_runnable(true)
        .set_path("./hello_world")
        .set_index(1);
}
```

EDIT: There's also a lot of documentation on Rust; I suggest you read The Rust Book and Rust by Example for more clarity. And welcome to the Rust community!
@lee-orr Congratulations on your first Rust, then! 🎉 So there aren't any guides exactly on how to do it, but I know how we could do it, so I can try to walk you through it if you want. 🙂 It's going to be a little involved, but not too bad, so hold on. 😉

So the goal is for us to allow you to specify the power preference in a way very similar to how you can specify the max thread count in this example:

```rust
use bevy::prelude::*;
use bevy_app::DefaultTaskPoolOptions;

/// This example illustrates how to customize the thread pool used internally (e.g. to only use a
/// certain number of threads).
fn main() {
    App::build()
        .add_resource(DefaultTaskPoolOptions::with_num_threads(4))
        .add_default_plugins()
        .run();
}
```

So what we do here is include:

```rust
use bevy_app::DefaultTaskPoolOptions;
```

This struct contains the options that are used to configure the task pool. In our case we want a struct that can be used to configure the WGPU power settings, so we'll want to create our own struct in the WGPU plugin:

```rust
/// The options that the WGPU plugin accepts
#[derive(Clone)]
struct WgpuOptions {
    power_pref: WgpuPowerPreference,
}

/// Enum representing the possible power preferences
#[derive(Clone)]
enum WgpuPowerPreference {
    Default,
    HighPerformance,
    LowPower,
}
```

That covers the data we need for our options. Now we also want a convenient way to create the options, and we will want to use the builder pattern, like @MGlolenstine mentioned. So we'll add an `impl` block:

```rust
impl WgpuOptions {
    /// Create default WGPU options
    pub fn new() -> WgpuOptions {
        // Create default options
        WgpuOptions {
            power_pref: WgpuPowerPreference::Default,
        }
    }

    /// Set the power preference
    pub fn power_preference(mut self, preference: WgpuPowerPreference) -> WgpuOptions {
        self.power_pref = preference;
        self
    }
}
```

Now we can pass these in when we create our app like this:

```rust
App::build()
    .add_resource(
        WgpuOptions::new()
            .power_preference(WgpuPowerPreference::LowPower)
    )
    .add_default_plugins()
    .run();
```

OK, so now we have our options and the ability to pass them in when we create the app; we just need to take those into account when setting up WGPU. In the WGPU plugin we have the following code:

```rust
pub fn wgpu_render_system(resources: &mut Resources) -> impl FnMut(&mut World, &mut Resources) {
    let mut wgpu_renderer = pollster::block_on(WgpuRenderer::new());
    // other stuff we don't care about...
}
```

Remember when we created our app we used `add_resource` to add our `WgpuOptions`? Here we can get that resource back out:

```rust
let wgpu_options = resources.get::<WgpuOptions>().cloned().unwrap_or_else(|| WgpuOptions::new());
```

The `unwrap_or_else` falls back to the default options if the user never added a `WgpuOptions` resource. Now that we have obtained our options, we want to pass them into the `WgpuRenderer`:

```rust
pub fn wgpu_render_system(resources: &mut Resources) -> impl FnMut(&mut World, &mut Resources) {
    let wgpu_options = resources.get::<WgpuOptions>().cloned().unwrap_or_else(|| WgpuOptions::new());
    let mut wgpu_renderer = pollster::block_on(WgpuRenderer::new(wgpu_options));
    // other stuff we don't care about...
}
```

Now we need to go and edit the `WgpuRenderer` itself. Over in its `new` function we have:

```rust
pub async fn new() -> Self {
    // Do other stuff we don't care about..
    let adapter = instance
        .request_adapter(&wgpu::RequestAdapterOptions {
            power_preference: wgpu::PowerPreference::HighPerformance,
            compatible_surface: None,
        })
    // Do other stuff we don't care about..
}
```

We need to add an argument that takes the `WgpuOptions` and match on the preference:

```rust
pub async fn new(wgpu_options: WgpuOptions) -> Self {
    // Do other stuff we don't care about..
    let adapter = instance
        .request_adapter(&wgpu::RequestAdapterOptions {
            power_preference: match wgpu_options.power_pref {
                WgpuPowerPreference::Default => wgpu::PowerPreference::Default,
                WgpuPowerPreference::HighPerformance => wgpu::PowerPreference::HighPerformance,
                WgpuPowerPreference::LowPower => wgpu::PowerPreference::LowPower,
            },
            compatible_surface: None,
        })
    // Do other stuff we don't care about..
}
```

Wow! That turned out a little longer than I expected, and maybe I overestimated the value of putting essentially the whole PR here in guide format, but if you are adventurous enough to follow it, it will hopefully lead you to something that works. I may not have gotten everything perfect, and it touches quite a few aspects of the Rust language, which might be confusing if you have never written it before. If you need help on a point or something doesn't work, just ask. 😃
@zicklag & @MGlolenstine - Thanks for the detailed answers! I'll try to update the branch today.
Created an options object & builder for the resource, as well as an example of how to make it work.
Looks good to me. Good job!
Ah, there are a couple of clippy warnings that should be fixed. Then CI should be happy. The changes to be made are pretty much shown in the error messages:
https://github.com/bevyengine/bevy/runs/1053656852#step:6:246
Wow, this exceeded my expectations! Thanks, @zicklag, @lee-orr and @MGlolenstine!
I have a couple suggestions. The comment suggestions are just opinions, so you can take them or leave them. The code changes are necessary to get past clippy in CI.
Love it! Good ones @CleanCut.
This is definitely a useful addition. Thanks!
@zicklag @CleanCut @MGlolenstine : I really appreciate the focus on getting @lee-orr ramped up here.
I just have one comment (on the usage of builder pattern)
crates/bevy_wgpu/src/lib.rs (Outdated)

```rust
    LowPower,
}

impl WgpuOptions {
```
I think we should just cut this impl in favor of a direct Default impl. In general in Bevy I don't want to use the builder pattern over the Default pattern unless we have a specific reason to make an exception (ex: we need statefulness or special get/set logic).
I pushed some changes removing the builder pattern in favor of a default impl. I also removed the example, as I think this feature is pretty niche / the need for it will decrease over time.
Adding a `low_power` feature to bevy that changes from using the high-performance graphics card to the default one. This is meant to provide a workable solution for #366 until such time as the underlying issue is resolved, without requiring developers to clone the bevy repository and make changes to it.

I was unsure whether doing something like this would match the standards/expectations for bevy, but I figured it's worth creating a PR just in case.
The actual problem & solution used here were found by https://github.com/fopsdev & @MGlolenstine