wgpu::PowerPreference::HighPerformance works in wgpu-rs but not in bevy #402
I believe a forked version of wgpu is being used for Bevy at the moment. Perhaps there is an upstream fix that we don't have. This is speculation on my part. Tangentially related, there is an open PR to allow choosing which …
@CleanCut
Looks to me like a regular package of wgpu is used.
@fopsdev Thanks, that's good to know. I should have just looked at the deps myself instead of lazily relying on fuzzy memories. 😄
3-4 people a day were reporting an issue with Default being used; no offense intended, but I have only seen you two report that this way is an issue. I think it's good to have that merged in, as people could more easily select Default again, but I definitely don't think it should be the regular default again. Just something that needs to be fixed in wgpu.
@Dispersia I don't have an issue. I'm just helping out where I can.
I was referring to lee-or (from the previous issue) and fopsdev; sorry, I should have been clearer there :)
No worries. Just wanted to make sure I wasn't unintentionally amplifying reports of problems. |
Just had another case where somebody needed to switch to Default. |
Yeah, ideally this just works the "right way" transparently to users. People playing Bevy games shouldn't need to think about "what gpu to use". In general I just want it to use the "best" gpu first (with the most gpu/platform-compatible backend) and then fall back from there to lower power options (also picking the most gpu/platform-compatible backend). Sorry to ping you @kvark, but I'm assuming that behavior is already your goal and eventually we won't need to care? Should we try to get fancier with our initialization logic or just let wgpu do its thing?
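The fallback strategy described above can be sketched in plain Rust. This is only an illustration of the selection logic, not wgpu's actual API: `Adapter`, `DeviceType`, and `pick_adapter` are simplified stand-ins invented for this sketch.

```rust
// Sketch of "prefer the fastest compatible adapter, then step down to
// lower-power options". Types here are illustrative stand-ins, not wgpu's.

#[derive(Debug, Clone, PartialEq)]
enum DeviceType {
    DiscreteGpu,
    IntegratedGpu,
    Cpu,
}

#[derive(Debug, Clone)]
struct Adapter {
    name: String,
    device_type: DeviceType,
    compatible: bool, // e.g. can actually present to the window's surface
}

/// Pick the "best" compatible adapter: discrete GPU first, then the
/// integrated GPU, then a software (CPU) fallback.
fn pick_adapter(adapters: &[Adapter]) -> Option<&Adapter> {
    let priority = [
        DeviceType::DiscreteGpu,
        DeviceType::IntegratedGpu,
        DeviceType::Cpu,
    ];
    for wanted in &priority {
        if let Some(found) = adapters
            .iter()
            .find(|a| a.compatible && &a.device_type == wanted)
        {
            return Some(found);
        }
    }
    None
}

fn main() {
    // Mirrors the hardware reported in this issue: a discrete Radeon that
    // fails to initialize, plus a working integrated Intel GPU.
    let adapters = vec![
        Adapter {
            name: "Radeon 500".into(),
            device_type: DeviceType::DiscreteGpu,
            compatible: false,
        },
        Adapter {
            name: "Intel UHD 620".into(),
            device_type: DeviceType::IntegratedGpu,
            compatible: true,
        },
    ];
    let chosen = pick_adapter(&adapters).expect("no compatible adapter");
    println!("{}", chosen.name); // prints "Intel UHD 620"
}
```

The point of writing it this way is that a broken high-performance adapter degrades to a working lower-power one instead of failing outright, which is the "transparent to users" behavior wanted here.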
Please note that if there is no issue on …
I believe that should be up to Bevy to figure out, not …
What's happening? What's the call stack? What's the OS? Also, check if the versions of …
Noted. At this point it's unclear to me that there even is a wgpu-rs issue.
We currently use the … For cases where … At least for now, I don't want to get too fancy with using battery state to pick gpus. I'd prefer to just always use the fastest compatible gpu available.
Yeah we probably should have dug into that a bit further before pulling you in. Sorry about that! |
To clarify, the … My guess about what happened is: one of the gfx backends (maybe Vulkan?) saw the adapter for the "Radeon 500" but failed to initialize it properly. That's why I recommended checking the actual versions of the backends used. We fixed a few issues in this area as patch releases, which can be gotten as easily as …
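One way to follow that recommendation is to read the resolved `wgpu` and gfx-backend versions out of the project's `Cargo.lock`. A minimal sketch, assuming a Cargo-managed project; the sample lockfile below is fabricated for illustration (the version numbers are not real), and in a real project you would grep `Cargo.lock` itself:

```shell
# Create a tiny sample lockfile (contents are illustrative only).
cat > Cargo.lock.sample <<'EOF'
[[package]]
name = "wgpu"
version = "0.6.2"

[[package]]
name = "gfx-backend-vulkan"
version = "0.6.5"
EOF

# Show which versions were actually resolved. Run the same greps against
# your project's real Cargo.lock to see if a backend patch release applies.
grep -A 1 'name = "wgpu"' Cargo.lock.sample
grep -A 1 'name = "gfx-backend-vulkan"' Cargo.lock.sample
```

If the locked backend version is older than the latest patch release, updating the lockfile pulls in the adapter-initialization fixes mentioned above.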
OK, I've freshly cloned Bevy today (from master).
On my machine the samples don't run when the adapter is created using
wgpu::PowerPreference::HighPerformance
You will find that setting inside
crates\bevy_wgpu\src\wgpu_renderer.rs
It works when using this setting in the wgpu-rs samples (https://github.com/gfx-rs/wgpu-rs).
So, since Bevy is using wgpu, there must be something fishy with the device initialisation.
I did some prior work on:
#366 (comment)
to track down this issue, but it's a bit beyond me.
BTW, I have an AMD Radeon 500 series and an onboard Intel UHD 620.