Emission units #85
Agree we should be more specific about that in section 2.4 "Emission model" (which is supposed to be a discussion of how emission fits into the slab formalism, before we get to the specifics of the layer structure and parametrization). We just say:
The "thus" doesn't really follow, I guess, since there are different kinds of photometric quantities/units, not just luminance/nits. Also, we didn't really define what an emission distribution function is supposed to mean; presumably, mathematically, it gives the photometric luminance (in nits) at the surface point as a function of direction. Note we do give a specification later, but I think this needs rationalization in the earlier text.
We also say:

> The emission_luminance parameter controls the luminance the emissive layer would have when emission_color is set to (1, 1, 1) and in the absence of coat and fuzz. The emission_color acts as a multiplier, thus the resulting luminance may be less than the input parameter, or even zero if the color multiplier is set to (0, 0, 0).

I'm not sure I fully understand how this translates into the process to obtain the RGB emission radiance in the renderer's color space, i.e. what do we compute inside the renderer given:

- a surface point with emission_luminance and emission_color parameters
- the assumed color space of the colors in the spec
- the working color space of the renderer

Currently, e.g. in Arnold, we just take the given emission_luminance * emission_color as the RGB radiance in the working color space, but that can't be right, since it should differ depending on both the renderer working color space and the color space used in the spec (ACEScg by default).

I assume the luminance specifies the Y coordinate of the XYZ tristimulus. So perhaps we then construct XYZ (how exactly?), map that to the spec color space RGB and apply the tint color (which is also in the spec color space), then map that to the working color space RGB'?

@anderslanglands or @AdrienHerubel, do you know the details of how this should work?
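One possible reading of the "construct XYZ" step is sketched below, assuming the luminance sets the Y tristimulus and the chromaticity is that of a chosen white point. The function name and the D65 values are illustrative, not from the spec:

```python
# Sketch: construct an XYZ tristimulus from a luminance value and a white-point
# chromaticity, assuming the luminance gives the Y coordinate directly.
# (Hypothetical helper, not from the OpenPBR spec.)

def xyz_from_luminance(luminance_nits, white_xy):
    """Build an XYZ tristimulus with Y = luminance at the given (x, y) chromaticity."""
    x, y = white_xy
    X = (x / y) * luminance_nits
    Y = luminance_nits
    Z = ((1.0 - x - y) / y) * luminance_nits
    return (X, Y, Z)

# Example: 100 nits at the D65 white point, (x, y) ~ (0.3127, 0.3290).
XYZ = xyz_from_luminance(100.0, (0.3127, 0.3290))
```

The resulting XYZ could then be mapped to the spec color space RGB, tinted, and mapped to the working space, as described above.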
Leaving aside the colour space part of the question (see my reply on the metadata thread) for now, this is similar to the discussion we had on the UsdLux channel recently.

Basically the units for light transport in an RGB renderer don't really make any sense :) my preferred way to handle it is just to call "pure white" emission nits and multiply color tints on after that. All lights therefore are implicitly emitting the spectrum corresponding to the rendering colour space white point, normalised to 1 nit.

Under this interpretation then it really is just emission_luminance * emission_color, and a spectral renderer (using uplifting) would give the same result.
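A minimal sketch of this "multipliers on white" interpretation, assuming an RGB renderer in which (1, 1, 1) is the rendering-space white normalised to 1 nit (the function name is illustrative, not from the spec):

```python
# Sketch of the "multipliers on white" interpretation: pure white emission is
# the rendering-space white at 1 nit, and the tint scales it per channel.
# (Illustrative only -- names are not from the OpenPBR spec.)

def emission_rgb(emission_luminance, emission_color):
    """Per-channel emission in the rendering colour space."""
    return tuple(emission_luminance * c for c in emission_color)

rgb = emission_rgb(100.0, (1.0, 0.5, 0.2))  # -> approximately (100, 50, 20)
```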
Does this mean that the emission spectrum depends on whatever the renderer working color space is, then? That can be anything in principle, so it seems like that would make it (technically) undefined what the surface is emitting.
It's definitely defined, but it's defined to be dependent on the rendering space.
So if I understand the proposed process correctly:

That sounds reasonable. If the working RGB color spaces are different, I'm not even sure it's necessarily technically possible to ensure the photometric luminance is exactly the same (is it?). So maybe this is the best we can do. I do think it would be beneficial to explain this in detail in the spec, though.
@KelSolaar do you have any thoughts on this? My point was mostly that I am not entirely clear what the exact process should be to map the emission parameters to RGB emission in the renderer's working color space.
Yes, it's just (L,L,L) * RGB2 |
For the spec you could say something like "exitant radiance in an RGB renderer should be computed as …". I have a slightly wordier version in the doc comments of lightAPI.h here: PixarAnimationStudios/OpenUSD@f70949a
Thanks @anderslanglands! I will attempt to make a PR clarifying it, and point you to it for review.
Can we even talk about radiance with an RGB renderer? Everything has already been integrated by the colour matching functions and is happening in the Standard Human Observer space. All the values are weighted by its sensitivity to light, so it is very much the photometric domain and not the radiometric one.
I’ve been using “RGB luminance” as a shorthand, but I’ve not found a term I like either.
It has the benefit of at least pushing the terminology into the photometry realm. Maybe more emphasis should be placed on the difference between radiometry and photometry, explaining where and how the "conversion" happens.
In the UsdLux proposal I linked above I give a brief overview of that in the "Quantities and Units" section. @portsmouth you could crib that if you like? The other terms I've used you can see there: "integrated radiance" and "tristimulus weight" |
The rendering color space white-point will in general be at some chromaticity, with the Y tristimulus set by the normalisation. So mathematically, don't we require (according to your definition) the SPD for those specific chromaticity coordinates? It's not even clear that such an SPD is uniquely defined. Instead perhaps one can just say the emission is e.g. D65 (or in general the illuminant of the model color space) with the specified luminance. This then makes no reference to the "renderer working color space": it's defined entirely in terms of the model color space, so is unambiguous.
what's the "model color space"? |
Ah sorry, by "model" I meant just the color space assumed by the OpenPBR shading model (which can vary by asset). We say currently:
I suppose in practice, the colors may be driven by texture inputs which specify their own color space (potentially different per color).
Actually, the most obvious way to define it is just to compute the chromaticity of the illuminant, and the job is done (one can then transform to any desired RGB space).
I feel like you're overcomplicating this. Why not just define it in terms of a multiplication of the rendering colour space white? |
I think that doesn't really make sense, since then you're not defining a specific color of emission: you're tying the definition of the emission color of the object to whatever the renderer chose for its working color space, so the same model parameters would lead (technically) to different physical emission in different renderers. I guess you assume this isn't an issue in practice? If we want to use this model to share assets between different renderers, which can use whatever internal color space they like, I think it would be problematic. Also, I don't really understand what doing the RGB multiplication of the tint means spectrally in that case.

But anyway, what I just said above doesn't work, since we said we needed the luminance to match the parameter. So to be precise:

In this interpretation, the emission is unambiguous. Maybe this actually corresponds to what you propose if we just work in the model color space, which (if I understand correctly) is just to make the emission be emission_luminance * emission_color in that space.
There is no "physical emission" in an RGB renderer. It's all just multipliers on white. In a spectral renderer the emission spectrum could be user-defined, but in most cases would be a reference white, e.g. D65. In both cases the expectation should be that you transform your colors from the asset's color space into the rendering color space.
Yes, they're scale factors (same as diffuse colour). They should be positive, but there's no reason for them to be bounded otherwise.
Agreed, I should have said "perceptual emission", since we're dealing with photometry. I only mean that if, e.g., the OpenPBR model says the emission color is (1, 0.5, 0.2) in ACEScg and the emission luminance is 100 nits, this should produce the same perceptual color in renderer A as in renderer B. That won't work if the meaning of the emission color is defined to depend on each renderer's working color space choice; it needs to be unambiguously defined by the data "emission color is (1, 0.5, 0.2) in ACEScg, and emission luminance is 100 nits". I think it's as simple as saying that the RGB color in the model color space (which is not a free choice of the renderer, but specified by the model) is emission_luminance * emission_color.
OK, I think we're saying the same thing here ultimately. The discussion of SPDs above was confusing. Ultimately, as we said above, the process needs to be: compute emission_luminance * emission_color as the RGB emission in the model color space, then transform that to the renderer's working color space.
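A sketch of that two-step process, assuming the model color space is ACEScg (AP1, D60 white) and the working space is linear Rec.709/sRGB (D65 white). The matrices are the standard published ones; chromatic adaptation between the two white points is deliberately omitted for brevity, so this is illustrative rather than production-accurate:

```python
# Sketch: model-space emission -> working-space emission.
# Assumes model space = ACEScg (AP1) and working space = linear Rec.709.
# Chromatic adaptation (D60 -> D65) is omitted for brevity, so this is an
# illustration of the two-step process, not a production-accurate conversion.

def mat_vec(m, v):
    """Multiply a 3x3 matrix (list of rows) by a 3-vector."""
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

# ACEScg (AP1) -> CIE XYZ.
AP1_TO_XYZ = [
    [0.6624541811, 0.1340042065, 0.1561876870],
    [0.2722287168, 0.6740817658, 0.0536895174],
    [-0.0055746495, 0.0040607335, 1.0103391003],
]

# CIE XYZ -> linear Rec.709 / sRGB.
XYZ_TO_REC709 = [
    [3.2404542, -1.5371385, -0.4985314],
    [-0.9692660, 1.8760108, 0.0415560],
    [0.0556434, -0.2040259, 1.0572252],
]

def emission_in_working_space(emission_luminance, emission_color):
    """Step 1: L * color in the model space. Step 2: convert to the working space."""
    model_rgb = [emission_luminance * c for c in emission_color]
    xyz = mat_vec(AP1_TO_XYZ, model_rgb)
    return mat_vec(XYZ_TO_REC709, xyz)

working_rgb = emission_in_working_space(100.0, (1.0, 0.5, 0.2))
```

Note that for emission_color = (1, 1, 1) the Y tristimulus of the model-space emission is exactly emission_luminance, since the middle row of the AP1 matrix sums to 1.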
Yep, I think that makes sense. I think it deserves a couple of extra sentences in the spec to make it totally clear, so I will put together a small PR for that.
Just to note, we're assuming a color of (1, 1, 1) means light with the chromaticity of the white-point and luminance given by emission_luminance. So we probably want to state explicitly what the color corresponding to (1, 1, 1) is normalized to.
You're defining …?
I just mean, if the standard illuminant (of the model color space) is D65 say, then color (1, 1, 1) means light with the white-point color and luminance given by emission_luminance. It seems that the exact normalization of the illuminant is arbitrary though, so we can just specify that emission_luminance gives the luminance in nits. Technically I think you do have to state the normalization you're assuming for the illuminant, to be complete. (Alternatively, maybe saying "emission_luminance is in nits" is enough.)
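For completeness, the photometric normalization under discussion can be written out with standard photometry (this is textbook colorimetry, not spec text): if S(λ) is the emitted spectral radiance and ȳ the CIE luminosity function, the luminance in nits is

```latex
L_v \;=\; K_m \int S(\lambda)\,\bar{y}(\lambda)\,d\lambda,
\qquad K_m = 683\ \mathrm{lm/W},
```

so fixing L_v = emission_luminance pins down the overall scale of S(λ) once its shape (e.g. D65) is chosen.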
Yes, I think saying emission_luminance is in nits is enough, and more to the point is simple enough to avoid confusion. Anyone who's just using an RGB renderer doesn't need to think about it any further, and anyone who's writing a spectral renderer will know what that means for their implementation.
@anderslanglands Please check out the linked PR, to make sure it's looking like you would expect. 🙏 |
Following the discussion of #85.
In the emissive section it says "emissive properties are specified in photometric units", but then doesn't say what units they're actually specified in. Assuming this means nits, this should be specified.