Dim bright windows #247
Codecov Report

```diff
@@           Coverage Diff           @@
##             next     #247   +/-   ##
========================================
- Coverage   31.65%   31.31%   -0.35%
========================================
  Files          45       45
  Lines        8206     8315   +109
========================================
+ Hits         2598     2604     +6
- Misses       5608     5711   +103
```
Thanks for the PR! This is definitely a useful feature, but I am not so sure about the approach taken. I need to think this through first.
I don't quite understand why generating mipmaps has CPU overhead?
I believe this is mostly for two reasons:
But simply by commenting
What if we use a render step to downsample the texture, then average the pixels to get the brightness? |
I have not tried to implement this, but it sounds like it should be almost the same as the currently implemented sampling technique, but touching each pixel exactly once. I will try to do some testing (to see CPU/GPU load). If the GPU load is not significant, maybe this would be a better solution.
yes, i am a little bit worried about doing the downsampling once per vertex.
So this part is more complex than I anticipated if rendering is done in a single pass; I was unable to come up with a way to do that. Therefore I am planning to experiment a bit with multiple passes and to retest mipmap generation (as previously I tested it on the old glx backend).
@Jauler rendering is already done in multiple passes for blur, so you don't really need to worry about it.
Ok, so unfortunately I was unable to work on this for a few weeks, but I am back on track. I attempted again to implement mipmaps (as I believe this would be the best solution), except this time on the new glx backend. Unfortunately I hit a wall again with how the new glx backend manages its textures.

Now I am testing an implementation where I take the texture and recursively render it into 1/2 width and 1/2 height until it is small enough to be averaged on the CPU. Unfortunately this kind of rendering works well only if the texture width and height are powers of two; otherwise some pixels have more weight in the average than others. To work around that I decided to pad the texture up to power-of-two dimensions. After getting the brightness value, it is pretty simple to perform the actual dimming.

I have implemented most of the above, but CPU usage is still somewhat too high for my liking (~15% single-core overhead for brightness estimation alone on my machine). I believe this could be improved by creating and deleting fewer framebuffer objects and textures. There also still seems to be some variance in the resulting brightness depending on texture width or height; I believe this might be because I am not forcing exact averaging of four pixels. I am planning to fix these problems, but at the same time I would appreciate any feedback about these ideas.

Because the code is not ready, I pushed it to a separate branch on my fork instead of the one related to this PR.
@Jauler Thank you for putting so much effort into this!
Can we just render straight to 1 pixel? Does the recursive rendering have better performance? Also, you probably should still do the final averaging on the GPU anyway.
Well I enjoy this :D It is something I would like to have on my setup and I have never done any real graphics programming before, so I am learning a ton of new stuff :).
Well I was unable to find any good way to do this. Mostly this boils down to the fact that texture sampling (interpolation if
I probably could just pass the downscaled (to 1x1) texture as a uniform variable to the fragment shader of the actual rendering; then brightness values would not have to be read by the CPU at all. Cool, I had not thought of this before.
Ah, right, I see.

Another thing I am not sure about is the dimming formula:

```glsl
c.rgb = c.rgb * (1.0 - brightness * sensitivity);
```

First thing, you probably should multiply brightness by dim. In general, I am not sure what the dimming formula should be. Yours seems straightforward, but it's kind of hard to intuitively describe exactly what it does. It also doesn't preserve ordering, i.e. after dimming, a brighter window could become dimmer than a previously dimmer window. I don't really know if this could be a problem.

The other option is to "normalize" the brightness:

```glsl
if (brightness > target_brightness)
    c.rgb = c.rgb * (target_brightness / brightness);
```

What are your thoughts?
I believe you might have looked at the wrong branch. Currently I left the fragment shader unchanged, but because I had the brightness value on the CPU, I just used the dim variable. This is admittedly a somewhat hacky implementation (that
To me it looks like the desired behaviour would be to calculate brightness on windows with dimming (and other effects) applied, and as a last step apply the dimming resulting from the brightness and sensitivity setting. This way windows can only be dimmed and never made brighter, and it somewhat serves its purpose of trying not to blind you with bright windows. On the other hand, the inactive-dim feature is used mainly to highlight active/passive windows, and this would lower the difference. So I am not sure about this one...
Forgot to mention,
Here it seems logical to me that for a pure black window no dimming would be applied; a pure white window would be dimmed as much as the sensitivity setting is configured; and everything in between would be interpolated linearly. For example, if the sensitivity setting is set to 0.25, it means that for a pure white window it cuts "25%" of all pixel values (multiplying by 0.75). I realise that human perception is somewhat more logarithmic than linear, so it is also possible to do non-linear interpolation, but with a little bit of testing I was unable to see a big difference; I just had to set a different sensitivity value.
This has behavior some might consider counter-intuitive when the sensitivity is > 0.5. For example, when sensitivity is 0.6, a pure white window is reduced to 0.4, while a 90% white window is reduced to 0.414.
Sorry, clicked on the wrong button
It's not really linear with respect to the brightness of the window; it's actually quadratic.
Hey,
So I implemented this as some PoC code. It seems to be working fine, but some optimization to lower the number of generated textures and framebuffers would be nice; this is still pending. And at the same time I am testing the:
formula, and it actually seems to be more intuitive. It probably serves the purpose of "trying not to blind you with bright windows" better, as windows below some brightness threshold do not blind you and therefore there is no need to dim them 👍.
I was also thinking more about how bright-window dimming should interact with window opacity and the inactive-dim feature.

inactive-dim seems to be used to more easily distinguish between active and inactive windows. In my opinion there are two choices: either we estimate the brightness of windows without inactive-dim applied, and this way we keep the dimming differences between active/inactive windows somewhat similar; or we estimate the brightness with inactive-dim applied, which would reduce the difference between an active and an inactive bright window (bright-window dimming would dim less, because the window is already dimmed by inactive-dim). I am not sure which approach is better. NOTE: in the implementation it is probably simpler to always estimate brightness on windows without inactive-dim applied and, if needed, compensate the brightness by the dim value. But this is an implementation detail; I am not sure which behaviour is preferred...

Opacity seems simpler, because whatever is "below" the current transparent window should already have its bright-window dimming applied, and therefore we probably could just ignore alpha channels when estimating brightness.
Hmm, I feel that means we should apply the alpha channel when we estimate brightness?
ah, I see. This is indeed an interesting dilemma. I see your point. We probably should estimate brightness before applying inactive-dim. However, inactive-dim is not the only way dimming could be applied; it's kind of difficult to consider all cases.
But if we "look" at the colors of the background window during brightness estimation, then we kind of apply this dimming twice: once for the background window itself and once for whatever is "see-through" in the foreground window.
On the other hand, maybe we actually want this kind of behaviour: basically, if we have an almost transparent bright foreground window and a black one in the background, then we probably do not want dimming to be applied. If the situation is reversed (background is bright and foreground is dark), then using your suggested formula should still apply the correct amount of dimming.
Oh, and one more thing: it seems that this feature will not be able to work together with use-damage. For this I am just thinking to either issue an error during validation if dimming is enabled together with use-damage, or to disable use-damage automatically.
Good catch. I would prefer an explicit error over things working magically. |
Hey,

So I have implemented something less of a proof of concept and more of real code.

Transparency is a bit of a headache, because in order to get a transparent window's brightness, it is necessary to render everything that might be seen through it while estimating brightness. This adds quite a bit of complexity and does not seem to be super important, therefore I decided not to implement that.
Force-pushed from bb1e12d to caa6c30
I did a quick look through the commits, and it looks mostly good 👍
I will give this a more thorough review this weekend and hopefully merge it.
@Jauler I thought about the transparent window problem. Transparency comes in in a number of different ways; in this analysis I will just lump them together. So, let's assume the window has a uniform transparency; then the final brightness of this window is a blend of the window's own brightness and the brightness of whatever is behind it, weighted by the alpha value.

From that we can solve for the dim factor that brings the final brightness down to the target. Obviously we need to clamp the result, since we only ever want to dim, never brighten.

However, this analysis breaks down if the alpha value is not uniform across the entire window. Some of the most prominent examples are shaped windows, windows with transparent frames, and windows that are intrinsically transparent. I don't know what's the best thing to do about them.
Force-pushed from caa6c30 to ec76c5f
Due to squashing, I had to force-push. Also, I additionally squashed commit caa6c30 "clarify warning message about use-damage and max-brightness option compatability" into ec76c5f "parameterize max brightness limit", because that commit only changed messages introduced previously.
Currently it is implemented in this way: in the interpolating shader, opacity is always set to 1 and only the RGB channels are rendered.
Maybe transparent pixels should have less weight towards the average? For example, for shaped windows we would like transparent pixels not to affect the average color.
Everything looks good! I will merge this. Thanks for all your hard work on this!
@Jauler Turns out the CPU usage is still quite noticeable (AMDGPU), I will try and see if I can come up with a better way...
Hmm, unfortunately I have no computer with either AMD or NVIDIA cards... Maybe you would be able to profile the executable? (e.g. with
@Jauler looks like glTexImage2D is slow on mesa. We probably should reuse the allocated texture.
It still eats CPU after I reused the texture :/
Reusing the texture does help; I just need to make sure the textures are reused all the way through the lifetime of the gl_image. See f6a51be
This looks good. Maybe the glActiveTexture call is a bit out of place, but this is probably minor, unless somewhere a non-GL_TEXTURE0 unit is left active. I am not sure what the CPU situation is with AMD cards, but some tighter coupling here might allow further optimization.

On my 3rd gen i5 laptop, I see a ~1-2% CPU increase if I enable max-brightness clamping (in total htop reports ~7-8% for picom); on my desktop with an 8th gen i7, htop reports 0-2% regardless of whether max-brightness is enabled or not, so I thought that the above optimization is not necessary.
Yeah, this could be AMD (or Mesa/Gallium) specific. One of the pains you face when you write an OpenGL program that has to run on more than just your own machine :)
for reference of the general case: window filters #215 chjj#266
@dicktyr Note this feature is impossible to implement as a general filter without this PR, because the dimming requires information about how bright the window is, which a fragment shader cannot obtain.
autodetection is interesting, but configuring filters for specific windows remains useful in any case; I simply mean to remind that filtering is the key here
I have a question about this PR: I'd like to control the brightness threshold that determines whether a window should be dimmed or not. Is there any way to do that with the code as it is now?
Ok, actually it seems like the mean brightness is the problem here. I think one way to solve it is using the median window brightness instead of the mean. I think I'm going to try this out. Curious if anyone else has had a similar experience! @Jauler
That is definitely an interesting idea to try out. But in the current implementation the brightness value is calculated on the GPU itself, forming a sort of mipmap. I am not sure how to acquire a median value in such a context 🤔
I was just trying to think about it and couldn't come up with a good solution :/. Although to be fair I'm not used to GL coding at all, so there might be a function out there that would help that I don't know about.

Do you use this feature in your daily work currently @Jauler? I feel like I might just be holding it wrong. Currently it's not usable for me, and I'm continuing to use https://github.com/kovasap/dotfiles/blob/0183bff7d65d3f85b4eec3d6717abfc29a198d35/bin/run-compton.bash#L6 to apply blanket dimming to all windows that are not my terminal. I would love to use this PR though if I can get it to work!
Hey,
I have no idea if anybody is interested in this, so if you think it is useless for the general case, just close this PR 😉.
Anyway, I had this annoying problem that, when frequently switching between dark-themed windows and light-themed ones (especially when they take up most of the screen), there is no screen brightness that works well for both. Either the dark-themed ones are too dark to look at comfortably or the light ones are too bright.
So I implemented this feature hoping to reduce that gap a bit by dimming windows depending on how bright they are.
Now, initially I tried to implement this using mipmaps for brightness estimation, but there was significant CPU overhead on my computer (Intel integrated GPU from a 3rd gen processor; I also tested on 8th gen). What is more, in order for them to work as expected, textures had to be scaled/padded in some way to a power of two, which was somewhat inconvenient. So I decided to just sample some pixels from the texture in the vertex shader, get a brightness estimate, and pass it to the fragment shader, which does the dimming. At least in my tests this did not incur any significant CPU or GPU load (during testing I used a sample size of 40 in each dimension, so 1600 samples in total).
I also experimented a bit with different ways of estimating lightness, but I did not notice big differences on real windows, so I just left plain averaging for simplicity.
This works only on the newer glx backend.