
Dim bright windows #247

Merged
merged 3 commits into yshui:next
Nov 9, 2019

Conversation

@Jauler commented Sep 30, 2019

Hey,

I have no idea if anybody is interested in this, so if you think it is useless for the general case, just close this PR 😉.

Anyway, I had an annoying problem: when frequently switching between dark-themed windows and light-themed ones (especially when they take up most of the screen), there is no screen brightness that suits both. Either the dark-themed ones are too dark to look at comfortably, or the light-themed ones are too bright.

So I implemented this feature, hoping to reduce that gap a bit by dimming windows depending on how bright they are.

Initially I tried to implement this using mipmaps for brightness estimation, but that had significant CPU overhead on my computer (an Intel integrated GPU from a 3rd-gen processor; I also tested on an 8th-gen one). What is more, for mipmaps to work as expected the textures had to be scaled or padded to a power of two somehow, which was inconvenient. So instead I sample some pixels from the texture in the vertex shader, compute a brightness estimate, and pass it to the fragment shader, which does the dimming. At least in my tests this did not incur any significant CPU or GPU load (I used a sample size of 40 in each dimension, so 1600 samples in total).
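
To illustrate the idea, a minimal sketch of the vertex-shader sampling (identifiers and the 40x40 grid are illustrative, not necessarily the exact shader in this PR):

    // Estimate window brightness by sampling a fixed grid of texels
    // in the vertex shader and averaging them (GLSL 1.20 style).
    #version 120
    attribute vec4 position;
    uniform sampler2D tex;
    varying float est_brightness; // consumed by the fragment shader
    void main() {
        float sum = 0.0;
        for (int x = 0; x < 40; x++) {
            for (int y = 0; y < 40; y++) {
                vec2 uv = vec2((float(x) + 0.5) / 40.0,
                               (float(y) + 0.5) / 40.0);
                vec3 c = texture2DLod(tex, uv, 0.0).rgb; // vertex texture fetch
                sum += (c.r + c.g + c.b) / 3.0;          // plain average
            }
        }
        est_brightness = sum / 1600.0; // 40 * 40 samples
        gl_Position = position;
    }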

I also experimented a bit with different ways of estimating lightness, but I did not notice big differences on real windows, so I just kept a plain average for simplicity.

This works only on the newer glx backend.

@codecov bot commented Sep 30, 2019

Codecov Report

Merging #247 into next will decrease coverage by 0.34%.
The diff coverage is 1.69%.

Impacted file tree graph

@@            Coverage Diff             @@
##             next     #247      +/-   ##
==========================================
- Coverage   31.65%   31.31%   -0.35%     
==========================================
  Files          45       45              
  Lines        8206     8315     +109     
==========================================
+ Hits         2598     2604       +6     
- Misses       5608     5711     +103
Impacted Files Coverage Δ
src/config.h 23.52% <ø> (ø) ⬆️
src/config.c 42.19% <ø> (ø) ⬆️
src/utils.h 52.63% <ø> (ø) ⬆️
src/backend/gl/gl_common.h 0% <ø> (ø) ⬆️
src/utils.c 0% <0%> (ø) ⬆️
src/dbus.c 0% <0%> (ø) ⬆️
src/backend/xrender/xrender.c 0% <0%> (ø) ⬆️
src/backend/gl/glx.c 11.78% <0%> (-0.05%) ⬇️
src/config_libconfig.c 54.78% <0%> (-0.43%) ⬇️
src/backend/gl/gl_common.c 0% <0%> (ø) ⬆️
... and 7 more

@yshui (Owner) commented Oct 1, 2019

Thanks for the PR!

This is definitely a useful feature, but I am not so sure about the approach taken. I need to think this through first.

@yshui (Owner) commented Oct 1, 2019

Initially I tried to implement this using mipmaps for brightness estimation, but that had significant CPU overhead on my computer

I don't quite understand why generating mipmaps has CPU overhead?

@Jauler (Author) commented Oct 1, 2019

I don't quite understand why generating mipmaps has CPU overhead?

I believe this is mostly for two reasons:

  1. In this case mipmaps have to be generated for every frame, as the window content can change at any time.
  2. This is more of a guess, but maybe some drivers (especially on older hardware, like 3rd-gen Intel) do not offload everything to the GPU.

But simply by commenting out the glGenerateMipmap(...) call, CPU usage dropped from ~20% to 1-3%.
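
(In code, the per-frame pattern is essentially just the following sketch in C, with window_texture standing in for the bound window pixmap texture:

    /* every frame, after the window pixmap content changes: */
    glBindTexture(GL_TEXTURE_2D, window_texture);
    glGenerateMipmap(GL_TEXTURE_2D); /* rebuilds the entire mip chain */

so the whole mip chain is rebuilt on every update, and on some driver/hardware combinations part of that work seems to land on the CPU.)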

@yshui (Owner) commented Oct 1, 2019

What if we use a render step to downsample the texture, then average the pixels to get the brightness?

@Jauler (Author) commented Oct 1, 2019

I have not tried to implement this, but it sounds like it should be almost the same as the currently implemented sampling technique, just touching each pixel exactly once.

I will try to do some testing (to see the CPU/GPU load). If the GPU load is not significant, maybe this would be the better solution.

@yshui (Owner) commented Oct 1, 2019

touching each pixel exactly once.

Yes, I am a little bit worried about doing the downsampling once per vertex.

@Jauler (Author) commented Oct 4, 2019

touching each pixel exactly once.

So this part is more complex than I anticipated if the rendering is done in a single pass; I was unable to come up with a way to do that.

Therefore I am planning to experiment a bit with multiple passes, and to retest mipmap generation (previously I tested it on the old glx backend).

@yshui (Owner) commented Oct 4, 2019

@Jauler Rendering is already done in multiple passes for blur, so you don't really need to worry about it.

@Jauler (Author) commented Oct 23, 2019

Ok, so unfortunately I was unable to work on this for a few weeks, but I am back on track.

I attempted again to implement mipmaps (as I believe this would be the best solution), except this time on the new glx backend. Unfortunately I hit a wall again... The new glx backend uses glXBindTexImageEXT() to build the texture which is then rendered by compton. My understanding is that in order to generate mipmaps on such a texture, GLX_BIND_TO_MIPMAP_TEXTURE_EXT should be passed as an attribute set to GL_TRUE when calling glXChooseFBConfig(), and GLX_MIPMAP_TEXTURE_EXT when calling glXCreatePixmap(). Unfortunately, when testing on my machines with Intel graphics drivers, glXGetFBConfigAttribChecked() would return GL_FALSE no matter what attributes I passed. Looking at the sources, it seems this is simply not supported by i915 and the other Intel drivers. So I abandoned the mipmap idea... again...
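
For reference, the GLX_EXT_texture_from_pixmap attributes involved look roughly like this (a sketch of the attempt, not the actual code):

    /* the fbconfig must advertise bind-to-mipmap... */
    const int fbc_attrs[] = {
        GLX_BIND_TO_TEXTURE_RGBA_EXT, True,
        GLX_BIND_TO_MIPMAP_TEXTURE_EXT, True, /* Intel drivers report False */
        None,
    };
    /* ...and the GLX pixmap must be created with mipmaps requested */
    const int pixmap_attrs[] = {
        GLX_TEXTURE_TARGET_EXT, GLX_TEXTURE_2D_EXT,
        GLX_TEXTURE_FORMAT_EXT, GLX_TEXTURE_FORMAT_RGBA_EXT,
        GLX_MIPMAP_TEXTURE_EXT, True,
        None,
    };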

Now I am testing an implementation where I take the texture and recursively render it at 1/2 width and 1/2 height until it is small enough to be averaged on the CPU. Unfortunately this kind of rendering averages correctly only if the texture width and height are powers of two; otherwise some pixels carry more weight in the average than others. To work around that, I set GL_TEXTURE_WRAP_[ST] to GL_CLAMP_TO_BORDER with the border color set to pure black, and render the first step with dimensions set to the next closest power of two. Because I know this padded area is pure black, I can compensate the final average for it.
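
A rough sketch of that downscale loop (next_pow2(), fbo, tex_a/tex_b and draw_textured_quad() are assumed helpers, not actual branch code):

    /* Ping-pong between two scratch textures, halving each pass.
       GL_LINEAR min-filtering averages each 2x2 block of the source. */
    int w = next_pow2(win_width), h = next_pow2(win_height);
    GLuint src = window_texture; /* first pass reads the window texture */
    GLuint scratch[2] = {tex_a, tex_b};
    int cur = 0;
    while (w > 1 || h > 1) {
        w = w > 1 ? w / 2 : 1;
        h = h > 1 ? h / 2 : 1;
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, scratch[cur], 0);
        glViewport(0, 0, w, h);
        draw_textured_quad(src); /* halves the source in each dimension */
        src = scratch[cur];
        cur = 1 - cur;
    }
    /* src is now (near) 1x1; read it back and compensate for the
       black power-of-two padding added in the first pass */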

After getting the brightness value, it is pretty simple to perform the actual dimming.

I have implemented most of the above, but CPU usage is still somewhat too high for my liking (~15% single-core overhead for brightness estimation alone on my machine). I believe this could be improved by creating and deleting fewer framebuffer objects and textures. There also still seems to be some variance in the resulting brightness depending on texture width or height; I believe this is because I am not forcing exact averaging of four pixels. I plan to fix these problems, but in the meantime I would appreciate any feedback on these ideas.

Because the code is not ready, I pushed it to a separate branch on my fork instead of the branch tied to this PR.

@yshui (Owner) commented Oct 24, 2019

@Jauler Thank you for putting so much effort into this!

I recursively render it at 1/2 width and 1/2 height until it is small enough to be averaged on the CPU.

Can we just render straight to 1 pixel? Does the recursive rendering have better performance? Also, you probably should still do the final averaging on the GPU anyway.

@Jauler (Author) commented Oct 24, 2019

Thank you for putting so much effort into this!

Well, I enjoy this :D It is something I would like to have on my setup, and I have never done any real graphics programming before, so I am learning a ton of new stuff :).

Can we just render straight to 1 pixel? Does the recursive rendering have better performance?

Well, I was unable to find any good way to do this. Mostly it boils down to the fact that texture sampling (interpolation, if GL_LINEAR is set as GL_TEXTURE_MIN_FILTER) only looks at neighbouring pixels. So if we just do a normal render into a 1x1 framebuffer, only 4 pixels are averaged on the GPU. It is possible to write a custom fragment shader that averages all pixels, but because we are rendering into a 1x1 framebuffer, only one fragment shader invocation runs and all of the averaging happens in that single invocation. In other words, the averaging cannot be parallelized on the GPU.

Also, you probably should still do the final averaging on the GPU anyway.

I could probably just pass the downscaled (1x1) texture as a uniform to the fragment shader of the actual rendering; then the brightness value never has to be read back by the CPU. Cool, I had not thought of this before.

@yshui (Owner) commented Oct 24, 2019

@Jauler

only looks at neighbouring pixels.

Ah, right. I see.

Another thing I am not sure about is the dimming formula.

c.rgb = c.rgb * (1.0 - brightness * sensitivity);

First thing, you probably should multiply brightness by dim.

In general, I am not sure what the dimming formula should be. Yours is straightforward, but it's kind of hard to describe intuitively exactly what it does. It also doesn't preserve ordering, i.e. after dimming, a brighter window can become dimmer than a previously dimmer window. I don't really know if this could be a problem.

The other option is to "normalize" the brightness:

if (brightness > target_brightness)
    c.rgb = c.rgb * (target_brightness / brightness)
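
(For example, with target_brightness = 0.8, a window at brightness 0.9 is scaled by 0.8/0.9 ≈ 0.89, while anything at or below 0.8 is left untouched; ordering between windows is preserved.)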

What are your thoughts?

@Jauler (Author) commented Oct 24, 2019

I believe you might have looked at the wrong branch. Currently I left the fragment shader unchanged; because I had the brightness value on the CPU, I just reused the dim variable. This is admittedly a somewhat hacky implementation (the has_brightness flag and other similar stuff; also I am not sure whether I should use _gl_compose() for the downscaled rendering or implement something specific to downscaling), but it is mainly there to let me do some testing. My main focus is on figuring out a good method to calculate the brightness value.

It also doesn't preserve ordering, i.e. after dimming, a brighter window can become dimmer than a previously dimmer window. I don't really know if this could be a problem.

To me it looks like the desired behaviour would be to calculate brightness on windows with dimming (and other effects) already applied, and, as a last step, apply the dimming resulting from the brightness and the sensitivity setting. This way windows can only be dimmed, never made brighter, and it serves the purpose of trying not to blind you with bright windows.

On the other hand, the inactive-dim feature is mainly used to tell active and inactive windows apart, and this would lower that difference. So I am not sure about this one...

@Jauler (Author) commented Oct 24, 2019

Forgot to mention,

apply the dimming resulting from the brightness and the sensitivity setting

Here it seems logical to me that for a pure black window no dimming would be applied, a pure white window would be dimmed by the full configured sensitivity, and everything in between would be interpolated linearly.

For example, a sensitivity setting of 0.25 means that a pure white window has "25%" cut from all its pixel values (multiplying by 0.75).

I realise that human perception is more logarithmic than linear, so it is also possible to do a non-linear interpolation, but with a bit of testing I was unable to see a big difference; I just had to set a different sensitivity value.
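
(For instance, one could use pow(brightness, 1.0/2.2), an sRGB-like gamma curve, as the perceptual lightness before interpolating.)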

@yshui (Owner) commented Oct 24, 2019

@Jauler

For example, a sensitivity setting of 0.25 means that a pure white window has "25%" cut from all its pixel values (multiplying by 0.75).

This behaves in a way some might consider counter-intuitive when the sensitivity is > 0.5.

For example, when the sensitivity is 0.6, a pure white window is reduced to 0.4, while a 90% white window is reduced to 0.414.

@yshui closed this Oct 24, 2019
@yshui reopened this Oct 24, 2019
@yshui (Owner) commented Oct 24, 2019

Sorry, clicked on the wrong button.

@yshui (Owner) commented Oct 24, 2019

and everything in between would be interpolated linearly.

It's not really linear with respect to the brightness of the window; it's actually quadratic.
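
Spelled out: with window brightness b and sensitivity s, the formula from above gives

    dimmed(b) = b * (1 - s * b) = b - s * b^2

which is quadratic in b. Its derivative is 1 - 2*s*b, so the output stops being monotonic once b > 1/(2*s); with s = 0.6 the output peaks at b ≈ 0.83, which is exactly why a 90% white window (0.414) ends up brighter than a pure white one (0.4).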

@Jauler (Author) commented Oct 28, 2019

Hey,

I could probably just pass the downscaled (1x1) texture as a uniform to the fragment shader of the actual rendering; then the brightness value never has to be read back by the CPU.

So I implemented this as some PoC code. It seems to be working fine, but some optimization to reduce the number of generated textures and framebuffers would be nice; that is still pending.

And at the same time I am testing:

if (brightness > target_brightness)
    c.rgb = c.rgb * (target_brightness / brightness)

formula. It actually seems more intuitive, and it probably serves the purpose of "trying not to blind you with bright windows" better: windows below the brightness threshold do not blind you, so there is no need to dim them 👍.
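
For illustration, the compose fragment shader can then read the estimate straight from the 1x1 texture, with no CPU read-back (a hypothetical sketch, identifiers illustrative):

    // Read the downscaled 1x1 brightness texture and apply the
    // "normalize" formula (GLSL 1.20 style).
    #version 120
    varying vec2 texcoord;
    uniform sampler2D tex;            // the window texture
    uniform sampler2D brightness_tex; // 1x1 result of the downscale passes
    uniform float target_brightness;
    void main() {
        vec4 c = texture2D(tex, texcoord);
        vec3 avg = texture2D(brightness_tex, vec2(0.5)).rgb;
        float brightness = (avg.r + avg.g + avg.b) / 3.0;
        if (brightness > target_brightness)
            c.rgb *= target_brightness / brightness;
        gl_FragColor = c;
    }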

@Jauler (Author) commented Oct 28, 2019

I was also thinking more about how bright-window dimming should interact with window opacity and the inactive-dim feature.

inactive-dim seems to be used to more easily distinguish between active and inactive windows, so in my opinion there are two choices. Either we estimate the brightness of windows without inactive-dim applied, which keeps the dimming difference between active and inactive windows roughly the same; or we estimate the brightness with inactive-dim applied, which reduces the difference between an active and an inactive bright window (the bright-window dimming dims less, because the window is already dimmed by inactive-dim). I am not sure which approach is better.

NOTE: in the implementation it is probably simpler to always estimate brightness on windows without inactive-dim applied and, if needed, compensate the brightness by the dim value. But that is an implementation detail; I am not sure which behaviour is preferred...

Opacity seems simpler: whatever is "below" the current transparent window will already have its own bright-window dimming applied, so we probably can just ignore the alpha channel when estimating brightness.

@yshui (Owner) commented Oct 28, 2019

because whatever is "below" the current transparent window will already have its own bright-window dimming applied

Hmm, I feel that means we should apply the alpha channel when we estimate brightness?

inactive-dim seems to be used in order to more easily distinguish between active/inactive windows
[snip]

Ah, I see. This is indeed an interesting dilemma, and I see your point. We probably should estimate brightness before applying inactive-dim. However, inactive-dim is not the only way dimming can be applied; it's kind of difficult to consider all the cases.

@Jauler (Author) commented Oct 28, 2019

Hmm, I feel that means we should apply the alpha channel when we estimate brightness?

But if we "look" at the colors of background window during brightness estimation, then we kind of apply this dimming twice, once for background window itself and once for whatever is "see-through" in foreground window.

@Jauler (Author) commented Oct 28, 2019

On the other hand, maybe we actually want this kind of behaviour: if we have an almost-transparent bright foreground window and a black one in the background, then we probably do not want dimming to be applied.

If the situation is reversed (background bright, foreground dark), then your suggested formula should still apply the correct amount of dimming.

@Jauler (Author) commented Oct 28, 2019

Oh, and one more thing: it seems this feature will not work without --no-use-damage, because, if I understand correctly, only the damaged parts of a window are re-rendered. Unfortunately, a change in window brightness is not damage, yet the window still has to be fully re-rendered.

For this I am thinking of either issuing an error during option validation if dimming is enabled and --no-use-damage is not specified, or just enabling it implicitly.

@yshui (Owner) commented Oct 29, 2019

Unfortunately, a change in window brightness is not damage, yet the window still has to be fully re-rendered.

Good catch. I would prefer an explicit error over things working magically.

@Jauler (Author) commented Nov 7, 2019

Hey,

So I have implemented something that is less of a proof of concept and more of real code.

As for the use-damage validation, I looked at the validation code and implemented the check in a similar fashion to the other options.
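
Roughly, in the spirit of the existing option checks (a hypothetical sketch; field names assumed):

    /* brightness estimation re-renders whole windows, which damage
       tracking would skip, so the two options are incompatible */
    if (opt->max_brightness < 1.0 && opt->use_damage) {
        log_warn("--max-brightness requires --no-use-damage, "
                 "disabling damage tracking.");
        opt->use_damage = false;
    }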

Transparency is a bit of a headache: in order to get a transparent window's brightness, everything that might be visible through it has to be rendered while estimating brightness. This adds quite a bit of complexity and does not seem super important, so I decided not to implement it.

@yshui (Owner) left a comment

I did a quick look through the commits, and it looks mostly good 👍

I will give this a more thorough review this weekend and hopefully merge it.

@yshui (Owner) commented Nov 8, 2019

@Jauler I thought about the transparent window problem.

Transparency comes in a number of different ways; in this analysis I will just lump them together.

So, let's assume the window has uniform transparency; then the final brightness of this window will be:

brightness_of_windows_below_it * (1 - alpha) + brightness * alpha

(Note that brightness here is calculated ignoring the alpha value.)

We know that brightness_of_windows_below_it <= max_brightness, thus the above formula is less than or equal to:

max_brightness * (1 - alpha) + brightness * alpha.

Obviously we need to clamp brightness to max_brightness. Therefore, for transparent windows, we need to estimate the brightness ignoring the alpha (every kind of alpha, including the alpha of the window itself, inactive-opacity, etc.).

However, this analysis breaks down if the alpha value is not uniform across the entire window. Some of the most prominent examples are: shaped windows, windows with transparent frames, windows that are intrinsically transparent. I don't know what's the best thing to do about them.

@Jauler (Author) commented Nov 9, 2019

Due to squashing, I had to force-push.

Also, I squashed commit caa6c30 ("clarify warning message about use-damage and max-brightness option compatability") into ec76c5f ("parameterize max brightness limit"), because that commit only changed messages introduced previously.

@Jauler (Author) commented Nov 9, 2019

Obviously we need to clamp brightness to max_brightness. Therefore, for transparent windows, we need to estimate the brightness ignoring the alpha (every kind of alpha, including the alpha of the window itself, inactive-opacity, etc.).

Currently it is implemented that way: in the interpolating shader, opacity is always set to 1 and only the RGB channels are rendered.

@Jauler (Author) commented Nov 9, 2019

However, this analysis breaks down if the alpha value is not uniform across the entire window. Some of the most prominent examples are: shaped windows, windows with transparent frames, windows that are intrinsically transparent. I don't know what's the best thing to do about them.

Maybe transparent pixels should have less weight in the average? For shaped windows, for example, we would want fully transparent pixels not to affect the average color at all.

@yshui (Owner) commented Nov 9, 2019

Everything looks good! I will merge this.

Thanks for all your hard work on this!

@yshui merged commit c98e7fa into yshui:next Nov 9, 2019
@yshui (Owner) commented Nov 9, 2019

@Jauler Turns out the CPU usage is still quite noticeable (AMDGPU). I will try and see if I can come up with a better way...

@Jauler (Author) commented Nov 9, 2019

Hmm, unfortunately I have no computer with either AMD or NVIDIA cards...

Maybe you would be able to profile the executable? (e.g. with perf or something similar).

@yshui (Owner) commented Nov 10, 2019

@Jauler Looks like glTexImage2D is slow on mesa. We probably should reuse the allocated texture.
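
A minimal sketch of the reuse idea (img->brightness_tex and the sizes are assumed names; see the actual commits for the real change):

    /* Re-specifying storage with glTexImage2D every frame forces a
       reallocation; allocate once and render into it afterwards. */
    if (!img->brightness_tex) {
        glGenTextures(1, &img->brightness_tex);
        glBindTexture(GL_TEXTURE_2D, img->brightness_tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, width, height, 0,
                     GL_RGB, GL_UNSIGNED_BYTE, NULL); /* storage only */
    }
    /* later frames: just bind it as the FBO color attachment again */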

@yshui (Owner) commented Nov 10, 2019

It still eats CPU after I reused the texture :/

@yshui (Owner) commented Nov 10, 2019

Reusing the texture does help; I just needed to make sure the textures are reused all the way through the lifetime of the gl_image.

See f6a51be

@Jauler (Author) commented Nov 10, 2019

See f6a51be

This looks good. Maybe the glActiveTexture call is a bit out of place, but that is probably minor, unless a non-GL_TEXTURE0 unit is left active somewhere.

I am not sure what the CPU situation is with AMD cards, but if we allow some tighter coupling between _gl_compose() and gl_average_texture_brightness(), it would be possible to reuse some setup (like vertex attribute buffers and similar) between the two render passes.

On my 3rd-gen i5 laptop I see a ~1-2% CPU increase when max-brightness clamping is enabled (htop reports ~7-8% in total for picom); on my desktop with an 8th-gen i7, htop reports 0-2% regardless of whether max-brightness is enabled, so I thought the above optimization was not necessary.

@yshui (Owner) commented Nov 10, 2019

I thought the above optimization was not necessary.

Yeah, this could be AMD (or Mesa/Gallium) specific. One of the pains you face when you write an OpenGL program that runs on machines other than your own :)

@dicktyr commented Dec 1, 2019

for reference, the general case: window filters #215, chjj#266

@yshui (Owner) commented Dec 1, 2019

@dicktyr Note that this feature is impossible to implement as a general filter without this PR, because the dimming requires information about how bright the window as a whole is, which a fragment shader cannot obtain on its own.

@dicktyr commented Dec 1, 2019

autodetection is interesting but configuring filters for specific windows remains useful in any case
(you know: some software is just consistently worth filtering)

I simply mean to remind that filtering is the key here
i.e. not only brightness but also contrast, saturation, &c.

in any case
thanks for giving it attention :}

@kovasap commented Jan 7, 2022

I have a question about this PR: the max-brightness docs say that this feature will dim windows "above this threshold", where I assume "this" refers to the decimal value provided (default 1.0). However, in practice I don't see that to be the case. It seems instead that the max-brightness value determines how much the windows are dimmed.

I'd like to control the brightness threshold that determines whether a window gets dimmed. Is there any way to do that with the code as it is now?

@kovasap commented Jan 10, 2022

Ok, actually it seems the max-brightness parameter is probably working as intended. I was confused by the fact that windows that are mostly white but have some dark regions (like light-mode GitHub, with its dark grey top bar) seem to have their average brightness pushed down very low. The effect is strong enough that if I set max-brightness = 0.2 and just scroll up and down on GitHub, the window dims dramatically toward 0.2 brightness when scrolled past the dark grey top bar, and returns to full brightness when the top bar is visible.

I think one way to solve this would be to use the median window brightness instead of the mean. I'm going to try that out. Curious if anyone else has had a similar experience! @Jauler

@Jauler (Author) commented Jan 10, 2022

I think one way to solve this would be to use the median window brightness instead of the mean. I'm going to try that out. Curious if anyone else has had a similar experience! @Jauler

That is definitely an interesting idea to try out. But in the current implementation the brightness value is calculated on the GPU itself, by forming a sort of mipmap. I am not sure how to obtain a median value in such a context 🤔

@kovasap commented Jan 10, 2022

I was just trying to think about it and couldn't come up with a good solution :/. Although, to be fair, I'm not used to GL coding at all, so there might be a function out there that would help that I don't know about.

Do you use this feature in your daily work currently, @Jauler? I feel like I might just be holding it wrong. Currently it's not usable for me, and I'm continuing to use https://github.com/kovasap/dotfiles/blob/0183bff7d65d3f85b4eec3d6717abfc29a198d35/bin/run-compton.bash#L6 to apply blanket dimming to all windows that are not my terminal. I would love to use this PR if I can get it to work!
