
Allow 2dcontexts to use deeper color buffers #299

Closed
grorg opened this issue Oct 30, 2015 · 74 comments
Labels
addition/proposal (New features or enhancements) · topic: canvas

Comments

@grorg
Contributor

grorg commented Oct 30, 2015

It is now common to come across displays that support more than 8 bits per channel. HTML Image elements can use these wider-gamut displays by embedding a color profile in the resource. The CSS Working Group is adding a way to specify the profile used in a color definition, and ensuring that colors are not clipped to 0-255.

That leaves Canvas objects. We need at least a few things in order to use better colors in canvas:

  1. When calling getContext("2d", options) we need to specify in the options that we want a deeper backing store. This could be done in a few ways (a boolean indicating 10bpp, explicitly stating the depth, or some keywords).
  2. A way to ask an existing canvas object (or the context) how deep its backing store is.
  3. Alternative forms of getImageData and putImageData that are not restricted to 0-255 bytes. For example, we might need an ImageDataFloat that uses floats in the 0-1 range, and get/putImageDataFloat methods.

All existing methods that take images or ImageData would still keep the existing behaviour. That is, if you create a deep canvas and putImageData into it, that data is assumed to be in sRGB.
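
To make the proposal concrete, here is a rough sketch of how such an API might look. Every name below (the depth option, backingStoreDepth, getImageDataFloat/putImageDataFloat) is hypothetical, not part of any spec:

    // Hypothetical API sketch; none of these names are standardized.
    const canvas = document.createElement('canvas');
    const ctx = canvas.getContext('2d', { depth: 10 });  // 1. request a deeper backing store
    console.log(ctx.backingStoreDepth);                  // 2. query the actual depth
    const pixels = ctx.getImageDataFloat(0, 0, 8, 8);    // 3. floats in the 0-1 range
    ctx.putImageDataFloat(pixels, 0, 0);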

@zcorpan added the addition/proposal label Oct 30, 2015
@grorg
Contributor Author

grorg commented Nov 2, 2015

If we did make an ImageDataFloat, it would be nice to use 16-bit floating point rather than 32. For this type of data we don't need 4 bytes.

@annevk
Member

annevk commented Nov 3, 2015

We could perhaps overload ImageData rather than introduce a new object. Then, if the user agent supports the "10bpp" option, .data would return a different view.

@heycam
Contributor

heycam commented Nov 3, 2015

Am I right that a 16-bit floating point value has just enough precision for 10bpp images, but not higher depths? I wonder if instead we should just be using integers with a greater range. If there is concern that authors would forget to check for the specific channel depth, and only expect values in the range [0,1023] say, we could normalise the values to be always in the range [0,65535] or something.

Are there image formats where colours are really floating point values, where it would definitely make sense to use a floating point type to expose them?
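
As a tiny sketch of the normalization idea above, assuming a 10-bit store exposed through a fixed [0, 65535] range:

    // Scale a 10-bit sample into a fixed 16-bit range so callers never need
    // to know the real channel depth (illustrative only).
    const to16 = (v10) => Math.round(v10 * 65535 / 1023);
    console.log(to16(0), to16(512), to16(1023)); // 0 32800 65535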

@grorg
Contributor Author

grorg commented Nov 3, 2015

I'm (we're) still not sure which is better:

A. Expose 32-bit floats for each channel and allow the implementation to squash them into however much precision the backing store provides. Pros: same code "works" on all backing stores. Cons: performance since everything needs to be converted + copied, memory use in the ImageData, and colors would not match on different profiles.

B. Same as A but with 16 bits. Cons: same, except slightly less memory wastage. Also, no real system support for half-float types.

C. Expose 32 or 16-bit integers for each channel and a way to detect how many bits are available (and thus the maximum value you can put in the data, 2^8 for 8bpc, 2^10 for 10bpc). Pros: Allows exact precision. Cons: still needs a copy + conversion when coming from a 10bpc backing store.

D. As Cameron suggests, 32 or 16-bit integers that are effectively normalized into a 0-1 range. Pros and Cons are similar to A and B.

E. Depending on the backing store type, expose the buffer as directly as possible. For example, if you have RGB101010 then each pixel is a 32-bit unsigned integer. Pros: Copying is easy. No conversion needed. Cons: Requires ugly bit math to use the data.

F. Something else.

I kind-of think options A, B, or D are the nicest to use on the developer side. However, they will likely be very slow and are definitely wasteful. E is efficient but probably horrible to use.
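
For a sense of option E's bit math, here is a minimal sketch assuming one possible 10/10/10/2 packing (the real layout would depend on the backing store):

    // Unpack a 32-bit word assumed to hold R:10 G:10 B:10 X:2 (illustrative).
    function unpack1010102(word) {
      return {
        r: (word >>> 22) & 0x3ff,
        g: (word >>> 12) & 0x3ff,
        b: (word >>> 2) & 0x3ff,
      };
    }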

@zcorpan
Member

zcorpan commented Nov 4, 2015

Is it possible/practical to implement A/B/D on top of E in a JS lib?

What is the solution to specify wide gamut colors in CSS?

@grorg
Contributor Author

grorg commented Nov 4, 2015

Exposing A/B/C/D is all possible on top of E. And E has the benefit of being the most performant, so I'm tempted to suggest that.

We'd still need some extra information on the result, such as the layout. For example, a wide gamut opaque buffer might be 10-10-10-2, and a non-opaque buffer might be the same with a chunk of 8s appended for the alpha.

As for CSS colors, see the thread http://www.w3.org/mid/5B15F03C-533C-4CBD-B266-4A499E230C18@apple.com

@annevk
Member

annevk commented Nov 4, 2015

@grorg what is the backing store that OS X has?

JavaScript doesn't have 16-bit floating point arrays, so I think that's out. We don't want to introduce another TypedArray object unless TC39 is interested in that.

It sounds like 16-bit integers are a poor mapping for what is going on underneath?

E sounds tempting, if it's likely that another OS would have the same representation. Otherwise it seems like we might want to have at least one layer of abstraction.

@ojhunt

ojhunt commented Nov 4, 2015

So here's my opinion, as a former canvas implementer/spec-writer-type person and as an engineer on a JavaScript engine.

Don't bother with floats unless you're thinking in terms of dynamic range (a very important thing to think about these days). If we do believe floats are the way to go, don't mess around with halfs: CPUs don't support them, so they will be slow -- all operations will require half->double expansion, which does not have hardware support, followed by a subsequent double->half conversion on store, in addition to all the clamping you'd be expecting.

For integral-sized backing stores, efficiency means running in multiples of 8 bits. Also, I have encountered cameras that produce 12bpp RAW, so defining things in terms of 10bpp is just asking for many subsequent variants on the spec for 11-, 12-, 13-... bit.

My recommendation is to provide an option for a 16bpp backing store: we can trivially support it, there are already defined Uint16Array types, there's no increased memory cost vs. anything with 8 < bpp <= 16, and graphics operations can be performed efficiently internally, as CPUs have hardware to handle 16-bit integers directly. It's also somewhat future-proof vs. ever-increasing channel depth.

The only time you need to worry about 16bpp vs 10bpp (etc) is when extracting the canvas content as an actual image, where it may be desirable to set the destination channel depth if the image format supports higher channel depths.

@kornelski

I see it as two separate issues:

  1. Supporting gamut wider than sRGB
  2. Supporting precision higher than 8 bits

For 1 you technically don't need > 8bpp. Most image formats use a color profile and "stretch" the 8 bits of image data to the destination gamut. That's simple and convenient.

For the gamut case I'd suggest an API that takes a color profile from an image: createContext('2d', {useSameColorProfileAsThisImage: 'http://example.com/whatever-image-with-my-profile-embedded.jpeg'});. I suggest getting the profile out of an image to make it simple for authors to create such a file. Exporting a raw ICC profile is trickier. edit: actually that wouldn't fly, since getContext is synchronous and it'd be too weird to have the profile change dynamically

Pre-defined names like "adobe rgb 1998" might work too, although then it'd be impossible to support custom profiles from digital cameras.
As for 2:

  • For sanity, pixels must be multiples of byte size, so please don't do actual 10-bit per gun pixels.
  • 30-bit RGB pixels padded to 32bit are fine, but slightly inconvenient as they require bit-fiddling.
  • 4x 32-bit float pixels with intensities from 0...1.0 are convenient, especially when using SIMD. Make sure to define gamma and how out-of-range values are handled (I'd prefer clamped).

@ojhunt

ojhunt commented Nov 4, 2015

I consider both of those good reasons for float (I had previously considered the possibility of float per channel). Just note that it will be substantially slower than integer paths.

@grorg
Contributor Author

grorg commented Nov 5, 2015

@ojhunt said:

there's no increased memory cost vs. anything with 8 < bpp <= 16

That's not quite true. We'll probably expose a buffer that is 10bpp, which can fit into a 32-bit integer when opaque; i.e., there won't be any additional memory cost over 8bpp, and it will cost less than 16bpp.

@grorg
Contributor Author

grorg commented Nov 5, 2015

@pornel said:

Supporting gamut wider than sRGB

For 1 you technically don't need > 8bpp. Most image formats use a color profile and "stretch" the 8 bits of image data to the destination gamut. That's simple and convenient.

Yep. This is what we already do with our wider gamut hardware. We just can't create a canvas with a profile.

For the gamut case I'd suggest an API that takes a color profile from an image: createContext('2d', {useSameColorProfileAsThisImage: 'http://example.com/whatever-image-with-my-profile-embedded.jpeg'});. I suggest getting the profile out of an image to make it simple for authors to create such a file. Exporting a raw ICC profile is trickier. Pre-defined names like "adobe rgb 1998" might work too, although then it'd be impossible to support custom profiles from digital cameras.

Yes, this is what I was proposing, although I think we can start with just well-known names. On the CSS discussion (linked above) I suggested P3 and Rec.2020, although maybe just Rec.2020 is enough.

However, I think that you are going to want the larger backing store, which I'll reply to separately.

@grorg
Contributor Author

grorg commented Nov 5, 2015

@pornel said:

For sanity, pixels must be multiples of byte size, so please don't do actual 10-bit per gun pixels.

Whose sanity? The developer's? I sort-of agree, but there is a big advantage to exposing the same format as the backing store: the get and put operations don't require any conversion and will be fast. Yes, you'll have to do some ugly bitmasking and shifting, but the users of these APIs are probably already doing some nasty things for every pixel.

30-bit RGB pixels padded to 32bit are fine, but slightly inconvenient as they require bit-fiddling.

Yep.

4x 32-bit float pixels with intensities from 0...1.0 are convenient, especially when using SIMD. Make sure to define gamma and how out-of-range values are handled (I'd prefer clamped).

The issue here is that you're probably exploding the memory use by a factor of 3 or 4, and requiring a conversion for every channel of every pixel when reading and writing, and losing some precision in the process.

On the positive side, it means you can write code that will work with any depth of backing store, including the current RGBA8.

Another positive is that this form can be polyfilled from the direct-access method; i.e. if I gave you the raw 30-bit RGB + 8-bit alpha buffers, you could provide 32-bit floating-point access in JS.

Note about the lack of half float: while the CPU doesn't support it, we could use unsigned shorts and normalize values from 0-65535 into 0-1. There would still be some memory wastage (if the backing store is 10bpc) and conversion.
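
For illustration, a sketch of that unsigned-short approach, assuming an RGBA16 sample layout (names invented here):

    // Expose a Uint16 store as floats in the 0-1 range; this is the
    // conversion (and extra copy) the comment above is worried about.
    function toFloats(u16) { // u16: Uint16Array of RGBA samples
      const f = new Float32Array(u16.length);
      for (let i = 0; i < u16.length; i++) f[i] = u16[i] / 65535;
      return f;
    }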

@grorg
Contributor Author

grorg commented Nov 5, 2015

@grorg said:

We'll probably expose a buffer that is 10bpp

I meant 10bpc (per channel). That is, a 32-bit value containing 10 bits each of R, G and B.

@ojhunt

ojhunt commented Nov 5, 2015

You can fit 10bpc in a 32-bit value only if you drop alpha; given we're talking about canvas, that seems like an unreasonable tradeoff.

Furthermore (given we're talking canvas APIs), direct fiddling is via typed arrays, which do not have non-byte-multiple variants, and the bit fiddling required for 10bpc would make it unreasonably expensive -- even pure indexing becomes slow.

Sure, images can be 10bpc -- Safari already supports that (and more) -- but that's because image files are constructed to be as small as possible. The performance trade-offs for live (in memory) content are different than for files being stored.

@grorg
Contributor Author

grorg commented Nov 5, 2015

@ojhunt said:

You can fit 10bpc in a 32-bit value only if you drop alpha; given we're talking about canvas, that seems like an unreasonable tradeoff.

Yes, but nothing stops you from providing alpha in another byte. For example, given a canvas that is w * h, the data array could be (w * h * 5) bytes long, with the first (w * h * 4) bytes being used for (w * h) 32-bit integers that are 10/10/10/2 for RGB (the last 2 bits are unused). Then the next (w * h) bytes are 8-bit alpha.

That way you get wider color while only using 1 extra byte per pixel. And this is a backing store format that we're considering using internally.
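
A sketch of indexing that layout, with all names and bit positions illustrative rather than specified:

    // buffer: ArrayBuffer of w*h 32-bit RGB 10/10/10/2 words followed by w*h alpha bytes.
    function pixelAt(buffer, w, h, x, y) {
      const words = new Uint32Array(buffer, 0, w * h);
      const alpha = new Uint8Array(buffer, w * h * 4, w * h);
      const i = y * w + x;
      const word = words[i];
      return {
        r: (word >>> 22) & 0x3ff,
        g: (word >>> 12) & 0x3ff,
        b: (word >>> 2) & 0x3ff,
        a: alpha[i],
      };
    }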

However, I understand if people want to only expose a 64bpp (16/16/16/16) store, under the assumption that it is wide enough for practical use and doesn't require bit masking/shifting. On the downside, we'd still have to describe how 10-, 11-, and 12-bit values are converted into 16 bits (clamped, normalized?). Also, since there isn't a native half-float type, the values would be unsigned shorts, which I assume we will normalize to 0-1.

Maybe this is the easiest solution, even if it does potentially waste a bit of memory.

@grorg
Contributor Author

grorg commented Nov 5, 2015

@ojhunt and I talked in person and we've now agreed on a 64bpp (16×4) backing store. This would be an option to the initial getContext() call. This would make the R, G, B and A values Uint16s.

We could then keep get/putImageData as they are, but we'll need tweaks to ImageData. We'll need a Uint16Array member. Someone else here can tell me if this is better exposed as a new type of ImageData, or extra attributes on the existing one, or something else.
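
Hypothetical usage under that agreement; the option name and the data16 attribute are invented here for illustration, since the exact ImageData shape was still open:

    // Sketch: a 16bpc context whose ImageData carries a Uint16Array member.
    const ctx = canvas.getContext('2d', { depth: 16 });  // hypothetical option
    const img = ctx.getImageData(0, 0, 10, 10);
    const red = img.data16[0] / 65535;                   // normalize a 16-bit sample to 0-1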

@grorg
Contributor Author

grorg commented Nov 5, 2015

The constructor will still need a colorspace identifier to allow colors outside of sRGB. For this I suggest waiting to see what CSS decides on (hopefully some keywords).

@annevk
Member

annevk commented Nov 6, 2015

I was thinking we can just overload the data getter and make it return a Uint16Array when the ImageData's "mode" is 64bpp rather than 32bpp. And we should probably expose "mode" (or whatever we call it) too, etc.

@grorg
Contributor Author

grorg commented Nov 6, 2015

Sounds good.

@svgeesus

svgeesus commented Nov 7, 2015

Supporting gamut wider than sRGB
Supporting precision higher than 8 bits
For 1 you technically don't need > 8bpp.

That is true in the sense that current 8-bit stores technically don't need 8, either. They could use 6, for example (and some displays do exactly that, although with dithering to mask the effects). It just looks worse.
However, sRGB is at the point where 8 bits is normally sufficient to avoid posterisation. For wider gamut, depending on how wide, you need 10 or 12 bits to give the same resolution as you had inside the sRGB gamut.
Also, the color spacing is not even, due to the transfer function. The colors are bunched up in the lighter areas and wider apart in the darker ones. And if the gamma is implemented with an 8-bit lookup table, you end up with fewer than 256 unique values.
I agree with those asking for a 16-bit store and agree that it should be exposed as a JS number, i.e. a float, although Uint16Array would also work. Also agree that colorspace identifiers are needed; it is not enough to just say "wide gamut" or "extended precision".
On keywords, I suspect one for AdobeRGB would be popular with the design community, DCI P3 would be popular once people realise that is what the 5k iMacs use :) and the higher-end digital photo folks would love ProPhotoRGB (which is huge). Do wider-gamut video sources also need a keyword? I think they are using YCC with headroom and footroom in 16 bits, so would need conversion to RGB in any case.

@grorg
Contributor Author

grorg commented Nov 9, 2015

I agree with those asking for a 16bit store and agree that it should be exposed as a JS number i.e. float, although Uint16Array would also work.

Yeah, I think we've settled on a 16bpc store now. It should be noted, however, that this will cause a significant increase in memory use, especially if you getImageData on a very large canvas. At that point you'll have two instances of a large buffer (the actual backing store and the ImageData copy).

As for floating point vs Uint16, I don't have a strong opinion. We don't have a half-float type in JS, which is why we've basically settled on integers. And it also is consistent with the existing API, which uses 0-255. People know that they divide/multiply by 255 to normalize.

Also agree that colorspace identifiers are needed; it is not enough to just say "wide gamut" or "extended precision".

Definitely.

On keywords, I suspect one for AdobeRGB would be popular with the design community, DCI P3 would be popular once people realise that is what the 5k iMacs use :) and the higher-end digital photo folks would love ProPhotoRGB (which is huge). Do wider-gamut video sources also need a keyword? I think they are using YCC with headroom and footroom in 16 bits, so would need conversion to RGB in any case.

The new iMacs don't use DCI P3, but something very close. Regarding AdobeRGB - does it map well to a display workflow? DCI P3 only covers 94% of Adobe RGB. I fear that while it might be popular with designers, they are likely to run into the same trouble as today if they use it.

This is why I think we should keep the keywords to a small set that map to the displays we expect to see in the nearish future. That would be P3-ish and Rec. 2020-ish. We can always add more later, and I liked the suggestion that you can point to an HTMLImageElement to get a profile.

@Oletus

Oletus commented Feb 19, 2016

Note that for GPU-accelerating the canvas, using 16-bit floats is a lot more convenient than using 16-bit integers internally. There are some kinds of 16-bit integer texture formats in all modern graphics APIs, but there's no normalized filterable format in any version of GLES. 16-bit float formats, on the other hand, are widely supported and used. I think this makes specifying around 16-bit integers a no-go. Maybe the spec could be written so that both alternatives are allowed under the hood, so it could be implemented efficiently both on CPU and GPU, but one of the alternatives would probably end up being less accurate in that case.

I agree with grorg's view on color spaces, that having "P3-ish" and "Rec-2020-ish" tiers would make sense.

@MarkCallow

Using 10bpc is not future-proof and does not match the human visual system. Thankfully you seem to have settled on 16 bits. However, the comment that "you need 10 or 12 bits to give the same resolution as you had inside the sRGB gamut" is not correct. To smoothly shade from black to white with a linear encoding, such that a human will perceive no steps, needs about 14 bits/channel. See http://www.poynton.com/notes/colour_and_gamma/GammaFAQ.html#smoothly_shade.

16 bits/channel is what was used in the Pixar computer aeons ago.

Assuming 16 bits/channel and a linear encoding, perhaps it makes sense to define a color profile for the backing store or use the same profile as the display. Incoming images and color values can be converted to this profile. On output to the display the colors can be converted as necessary. On output via getImageData the colors can be converted to a requested profile. Having a linear encoding makes blending easier.

@junov
Member

junov commented Mar 4, 2016

The number of bits per channel required for a linear encoding that matches the precision of human vision depends a lot on the dynamic range of the display device being targeted. 16-bit float has similar precision characteristics to logarithmic encodings, while preserving the convenience of linear arithmetic. This is a big deal. Having high resolution in the near-black range is key to being scalable to future high dynamic range devices.
Let's do some math based on Poynton's notes (link from msc@'s post)... The smallest positive 16-bit float value (including denormals) is 5.96046 × 10^-8. To avoid banding, multiply that by 100 to respect the rule of a ratio of 1.01 with the next increment. This means 16-bit float linear encodings can scale well for future HDR devices with contrast ratios up to 167000:1. Applying the same calculation to 16-bit integers tells us we could handle contrast ratios up to 655:1 without banding in the blacks. That does not even meet the specs of many current high-end monitors and projectors. So 16-bit float is the more future-proof option, as long as we are talking about linear encodings.
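
The arithmetic above, spelled out as a sketch (constants per the 1.01-ratio rule used in the comment):

    // Smallest positive (denormal) 16-bit float, times 100 per the 1.01 rule:
    const minHalf = 5.96046e-8;
    console.log(1 / (minHalf * 100)); // ~167772, i.e. ~167000:1 contrast
    // Same rule applied to a 16-bit integer encoding (smallest step 1/65535):
    console.log(1 / (100 / 65535));   // ~655:1 contrast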

@grorg
Contributor Author

grorg commented Mar 6, 2016

I agree that we should use a 16bpc (float) backing store.

The issue is what we do about ImageBuffer and get/putImageData. We don't have a 16-bit float type in JS, nor a TypedArray specialisation. I guess we'll just have to use Float32 and live with the fact that using these functions will be slow (conversion), inaccurate (clipped), and wasteful (double the memory use).

@annevk
Member

annevk commented Mar 6, 2016

@bterlson @domenic we could just standardize Float16Array if it is necessary, no?

@kenrussell
Member

@annevk @grorg @bterlson @domenic I strongly feel Float32Array should be used for getImageData/putImageData on high-bit-depth backing store canvases, and that we should not try to standardize a Float16Array. A Float16Array built into the JavaScript VM will not be significantly faster than a pure JS implementation of the same, and is a lot more assembly code to be built into the virtual machine. Further, the dominant cost of reading back the contents of a canvas will be the readback from the GPU, not the conversion from Float16 -> Float32 and vice versa.

@cabanier

cabanier commented May 10, 2016

BT.2020 specifies either 10 or 12 bits per component. It is such a wide gamut that 8 bits per component is like sRGB with 5 bits - you will certainly get banding.

According to Sean, the primary use case is to minimize memory and maximize performance. Also, if the display is 8 bit and we don't lose information when going to screen, would we introduce extra banding?

@brianosman

brianosman commented May 10, 2016

It's not enough to say that the display is 8 bit - we need to know the gamut of the display. Ultimately, the answer is "it depends". If the display has a wide gamut (approaching Rec.2020), then 8 bit output is going to be fairly banded, but it's going to look no worse than naive output of typical applications rendering to 8 bit and not doing color management.

If the display is significantly narrower than Rec.2020, then the rendering intent applied when reducing the gamut will have a big impact on the output. Perceptual mapping will cause the final result to be fairly similar to having simply done the work in sRGB - which is about as high a quality as can be produced on that hardware. But applying either of the colorimetric intents will produce significantly more banding, as the number of 8-bit (Rec.2020) values that actually fit in the sRGB or similarly small gamut will be quite small.

@kornelski

kornelski commented May 10, 2016

@cabanier To squeeze large rec-2020 into 8 bits you would either have to drop high bits that enable the wide gamut or drop low bits that are needed for precise non-posterized colors. Both options seem unappealing to me, because it'll either be as limited as sRGB, or look worse than sRGB.

@cabanier

If the display is significantly narrower than Rec.2020, then the rendering intent applied when reducing the gamut will have a big impact on the output.

True. The intent is that you'd only use this mode if you're going to a wide-gamut display. Otherwise, an author should stay in the sRGB color space.

@cabanier

To squeeze large rec-2020 in 8-bits you would either have to drop high bits that enable the wide gamut or drop low bits that are needed for precise non-posterized colors.

I will ping Sean to get his thoughts on the matter.

@junov
Member

junov commented May 10, 2016

There are use cases where banding is not a concern (because there are no long gradients), where people may want to use wide gamuts.

@kornelski

kornelski commented May 10, 2016

Low precision also makes alpha blending worse. What are actual uses for low-quality, but very saturated color?

@cabanier

Low precision also makes alpha blending worse. Is low-quality, but very saturated color a common use case?

Can you elaborate why that is the case?

@kornelski

When blending semitransparent colors, the RGB channels are multiplied, added together, and then truncated back to the original precision. When you do that on a small numeric range, the numeric error from the computation is relatively larger. Transparent blending may need to change the background slightly, but with fewer bits of precision small-enough changes may not be possible, so colors will drift. Multiple blendings in an overlapping area (e.g. smoke particles in a game, multiple layers in an image editor) amplify that error.
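
A toy sketch of that drift: composite a 1%-opaque dark layer repeatedly, once with 8-bit rounding and once in floats (values and iteration count are illustrative):

    // 8-bit source-over on a single channel; the small change rounds away every time.
    const blend8 = (dst, src, a) => Math.round(src * a + dst * (1 - a));
    let q = 0, f = 0;
    for (let i = 0; i < 100; i++) {
      q = blend8(q, 10, 0.01);  // stays at 0 forever: round(0.1) === 0
      f = 10 * 0.01 + f * 0.99; // float reference climbs toward 10
    }
    console.log(q, f.toFixed(2)); // 0 "6.34"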

@cabanier

cabanier commented May 11, 2016

Yes, if you do lots of compositing, the quality will degrade. According to our color people, 10 bit is a minimum for animated content, so 8 bit shouldn't be used for that. For simple content it should be ok.
I pinged Sean on how strongly he feels about this, but he is out for another week.

@cabanier

@junov can we add the p3 color space to match the media query?
https://drafts.csswg.org/mediaqueries-4/#color-gamut

@brianosman

Yes, we were planning to suggest exactly the same thing, for different reasons. P3 (or Adobe, they're pretty close) is by far the most common gamut on new desktop monitors (at least sampling all of the ones in my office). It also provides a gamut where we can reasonably work in 8 bit. Although I previously argued that 8-bit 2020 was possible, I still don't think it's a good idea. I also had a hard time imagining that any application wanting that much range would be willing to do math at 8 bit precision. More likely, in my mind, is that apps will simply want "more than sRGB", and P3 or similar will fit the bill nicely.

The only real "problem" is that Adobe RGB and DCI P3 mismatch in both directions. If you had to pick arbitrarily, the media query argument suggests P3; it just means that the common case of users with an Adobe RGB display will be subject to slight gamut mapping when viewing P3 canvases.

@cabanier

cabanier commented May 11, 2016

@brianosman I agree. I didn't know the gamut of 2020 was that much bigger than P3/Adobe RGB.

@cabanier

@brianosman are you suggesting that you would like to have both p3 and Adobe RGB?

@brianosman

Not really, although I could certainly see someone making that argument. (It's not likely that any particular application is going to prefer one or the other, but if we keep the "optimal" logic, then perhaps a photo editing app would want the better match for the attached display.) I think that one or the other is sufficient. We were already going to require the browser to handle mapping rec2020 to much smaller-gamut monitors - the mismatch between P3 and Adobe should be far less of a problem.

@kornelski

kornelski commented May 11, 2016

P3 may be confusing, because Apple Display P3 has different gamma than DCI-P3.

I actually like the idea of not having a wide choice of color profiles. To me, color profiles are analogous to character encodings. In the '90s we had applications trying to offer all the different encodings and convert between them, and it was a mess. Eventually we settled on ASCII + one or two Unicode encodings.

I see sRGB as the ASCII of imaging, and I think it'd be great if there was just one wide "Unicode" equivalent for pixels. Linear Rec.2020 may be it. So as an application developer, instead of having to support a number of encodings and juggling barely enough bits of precision when converting between them, I strongly prefer to just hardcode and support only one that's big enough.

@brianosman

P3 may be confusing, because Apple Display P3 has different gamma than DCI-P3.

Do you have a source for this? I'm not turning anything up (but that might just be due to lots of user/media confusion?)

@kornelski

@brianosman

Apple's P3 gamut target appears to be a little bit different than the one you get if you generate a profile using DCI's specified coordinates. AnandTech revealed that Apple's P3 profile uses 2.2 gamma, whereas DCI's spec calls for a gamma of 2.6.

From: http://www.astramael.com/ (it's a great description of wide gamut and Apple's P3)

@junov
Member

junov commented May 12, 2016

The idea behind linear-rec-2020 was to limit the number of color spaces that need to be natively supported by implementations, by providing a one-size-fits-all space that meets or exceeds the gamuts and precisions required for: a) targeting any current or near-future consumer device; b) HDR and wide gamut image processing.

In terms of implementation practicality, supporting spaces with different primaries is not a big deal (it's just a tweak to the conversion matrix). What would be a drag is having to support a multitude of bit-depths, and gamma curves that don't have built-in HW support.

Apple P3 displays don't just have wider gamuts, they also have a higher dynamic range, which is why they have 10-bits per component. So if we want to save on space with respect to 16bpc rec-2020, without losing the precision or gamut of an Apple P3 display, we'd have to do something in the middle like a 10bpc or 12bpc mode, which is obviously impractical to implement (even more impractical for WebGL). I am not sure it is reasonable to bake into the spec a mode that would be so device-specific.

I have an alternate suggestion: make provisions in the spec for vendor-specific color spaces that can be selected via {colorspace: "optimal"}. That way, implementors are free to go the extra mile to optimize for a specific device, and we won't have to revisit the spec every time a new, awesomer class of devices hits the market. For the purposes of interoperability, it would simply be required that any vendor-specific color space that is selectable via "optimal" must obey the rules that compositing, filtering and interpolation be done in linear space. In such spaces, getImageData and putImageData may require a format conversion in order to map component values to native JS types. Similar issue with toBlob/toDataURL. If we retain this idea, we'll have to figure out the specific format conversion rules for those cases. We'll cross that bridge when we get there... To avoid fingerprinting, the vendor-specific spaces should be limited in number and should not be directly mapped to the device's output profile.

So, does that general idea sound reasonable? Or does the prospect of explicitly allowing non-standard spaces sound scary?
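
Hypothetical usage of that suggestion; the option value comes from the comment above, while the colorspace query attribute is invented here for illustration:

    // The UA picks a device-appropriate space; code then inspects what it got.
    const ctx = canvas.getContext('2d', { colorspace: 'optimal' });
    console.log(ctx.colorspace); // e.g. "srgb", "rec-2020", or a vendor-specific space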

About AdobeRGB: this color space is practical because 8-bit AdobeRGB is pretty close to the display profiles of a lot of current devices, so offering it in 8-bit format makes sense. The gamma curve is a pure gamma function that does not exactly match the sRGB curve. Perhaps it would make sense to not put AdobeRGB in the spec, and let browsers implement more optimal alternatives as vendor-specific color spaces, such as a franken-color space that uses the AdobeRGB primaries with the sRGB transfer curves (which we get for free on GPUs). If we do put AdobeRGB in the spec, I would expect that implementors would cut corners by using the sRGB curves on it, which is probably not a big deal. Thoughts on that?

@cabanier

Make provisions in the spec for vendor-specific color spaces that can be selected via {colorspace: "optimal"}

Are you suggesting that we set up a separate page that lists a limited number of spaces?

must obey the rules that compositing, filtering and interpolation be done in linear space

Even though linear is the "ideal" way of doing compositing, few applications and rendering engines do so. Is this also something that all UAs can implement? (i.e. smfr mentioned that they don't have control over this, but maybe I misheard)
Also, how much extra processing does this add?

If we do put AdobeRGB in the spec, I would expect that implementors would cut corners by using the sRGB curves on it, which is probably not a big deal. Thoughts on that?

The "Adobe" part of the name is an issue. It's ok to create something with a different name but the exact same values.

@junov
Member

junov commented May 13, 2016

Make provisions in the spec for vendor-specific color spaces that can be selected via {colorspace: "optimal"}

Are you suggesting that we set up a separate page that lists a limited number of spaces?

All I am suggesting is that we should keep the number of standard color spaces to a minimum. Just what we need to cover all fundamental use cases. Then browser vendors could add their own non-standard color spaces (which, let's face it, was going to happen anyway), some of which may become standard in the future. Regarding the "optimal" option, all I am suggesting is that the spec give some guidance on the behavior of non-standard color spaces that can be selected by the UA when the user asks for "optimal", to guarantee some degree of standardization in the behavior. That said, browser vendors could also implement whacky modes that break all the rules, as long as "optimal" never ends up selecting that mode.

Even though linear is the "ideal" way of doing compositing, few applications and rendering engines do so. Is this also something that all UAs can implement? (i.e. smfr mentioned that they don't have control over this, but maybe I misheard) Also, how much extra processing does this add?

Good point. Browsers have been mostly doing things the "wrong" way since forever, and everything was fine. Or was it? One of the objectives of this proposal is to break away from our old ways to offer better standardized behavior so that apps that do care about this sort of detail can lean on a reliable standard.
That said, there are good arguments for going both ways on this issue. This feature should probably reflect that. Perhaps there could be two variants of "optimal", a strict version and a permissive version. The strict version would only be allowed to select a color space setting that does linear space compositing, etc. I'd just call the modes "optimal" and "optimal-strict". WDYT?

The "Adobe" part of the name is an issue. It's ok to create something with a different name but the exact same values.

LOL. So many Adobe branded technologies have become de facto standards that we sometimes forget it's a trademark. It's a compliment, really. Thanks for pointing it out.

That said, we'll need to do a review of issues with IP that is referenced by the proposal (e.g. rec-2020) before making it a standard.

@junov
Member

junov commented May 13, 2016

There are people who helped formulate this feature proposal who are unable to participate in the discussion in this venue. For that reason I intend to move this discussion to a thread on W3C's WICG. I will capture the feedback gathered from this thread in an updated proposal that I will use to start a new WICG discussion later today. As soon as that is set up I will point you all to the new thread. Don't worry, the thread will be open to all (not just W3C members), as long as you agree to the terms.

@domenic
Member

domenic commented May 13, 2016

Yeah, it seems reasonable to incubate such a feature in a venue like the WICG. Looking forward to its graduation into the HTML Standard in the future :). We can continue using this issue to track that graduation.

@junov
Member

junov commented May 13, 2016

Update: my W3C account is in a broken state. Not sure when I'll be able to start the new thread.

@junov
Member

junov commented May 14, 2016

W3C WICG thread started on Discourse, with an updated proposal that integrates recent ideas, issues and objections that were raised in this thread and in the Khronos thread and conference call.

Please continue the discussion here:
https://discourse.wicg.io/t/canvas-color-spaces-wide-gamut-and-high-bit-depth-rendering/1505

@RenaKunisaki

If you're not sticking to just sRGB, why stick to the arbitrary 0-255 range, instead of 0-1? I think that would be much more intuitive.

@fserb
Contributor

fserb commented Nov 9, 2018

This is a summary of the current implementation surface on Chrome to address this issue: #4167

@annevk
Member

annevk commented May 18, 2023

  1. display-p3 support has been added.
  2. Float16Array + HTML Canvas #8708 covers a wider backing buffer.

I think with that this can be considered resolved/a duplicate.
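
For reference, a sketch of the colorSpace option that eventually shipped in the HTML spec:

    const ctx = canvas.getContext('2d', { colorSpace: 'display-p3' });
    const img = ctx.getImageData(0, 0, 1, 1);
    console.log(img.colorSpace); // "display-p3"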

@annevk closed this as not planned May 18, 2023