Incomplete Specification for the Processing Method #1

Closed
TJCoding opened this issue Apr 18, 2020 · 4 comments

Comments

TJCoding (Contributor) commented Apr 18, 2020

The processing method described in the paper “Color Transfer in Correlated Color Space” [Ref 1] is ill-defined. It implies that when the covariance matrices are decomposed using the SVD algorithm, the eigenvalues along the diagonal of each scaling matrix can be identified by the letters R, G and B respectively. This notation is inappropriate because the associated eigenvector matrix defines new axes that are no longer aligned with the original red, green and blue axes.

By applying the same notation to both the source image matrices and the target image matrices, the paper assumes a correspondence between the two sets of matrices, but this assumption is not justified. The rotations defined by the source image eigenvectors define one set of axes and those defined by the target image eigenvectors define another set of axes.

By convention, the eigenvalues are presented in descending order and the eigenvectors are ordered accordingly. This often leads to the situation where there is a coincidental correspondence between the ordered axes for the source image and the ordered axes for the target image but this correspondence does not occur for all instances of source image and target image.
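
For illustration, the decomposition in question can be written out as below. This is only a minimal sketch rather than the repository code, and the variable names 'src' and 'tgt' are assumptions.

```matlab
% Minimal sketch of the covariance decomposition discussed above.
% 'src' and 'tgt' are assumed to be H-by-W-by-3 RGB images.
src_pixels = reshape(double(src), [], 3);
tgt_pixels = reshape(double(tgt), [], 3);

% svd() applied to each 3x3 covariance matrix returns the eigenvalues in
% descending order along the diagonal of S, with the corresponding
% eigenvectors as the columns of U.
[U_s, S_s, ~] = svd(cov(src_pixels));
[U_t, S_t, ~] = svd(cov(tgt_pixels));

% Nothing guarantees that column k of U_s and column k of U_t point in
% comparable directions, even though both sets are sorted by descending
% eigenvalue; that is the correspondence the paper implicitly assumes.
```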

The screenshot below shows one instance where the correspondence breaks down for the current MATLAB implementation of the Xiao method.

[Image: 'Original' output]

Here, in the output image, the grass is coloured purple and the mountain is coloured green, when it should be the other way around. It is interesting to note that the C++ implementation at [Ref 2] does give an image which appears correct. For certain other image pairs, both implementations give images that are wrong, whereas both implementations give a correct output for the original images used in [Ref 1].

In order to ensure that the output image is correct, it is necessary to achieve the best alignment between the axes derived from the source image and those derived from the target image. This can be achieved by permuting the ordering of one of the sets of axes (say the target axes) and finding the permutation for which the sum of the vector dot products between corresponding axes is greatest. (In practice, two axes may be aligned but point in opposite directions, so it is the sum of the absolute values of the dot products that needs to be maximised.)
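
As an illustration of the idea (this is not the proposed 'MatchColumns' function itself, whose details may differ), the permutation search could look something like the sketch below, where U_s and U_t are the 3x3 source and target eigenvector matrices.

```matlab
% Illustrative sketch only: choose the ordering of the target axes that
% best aligns them with the source axes, judged by the sum of the
% absolute dot products between corresponding columns.
function [U_t_matched, order] = MatchColumnsSketch(U_s, U_t)
  orderings = perms(1:3);                 % all six orderings of three axes
  best_sum  = -Inf;
  for k = 1:size(orderings, 1)
    P = U_t(:, orderings(k, :));
    % Dot product of each source axis with the corresponding permuted
    % target axis; the absolute value allows for axes that are aligned
    % but point in opposite directions.
    s = sum(abs(sum(U_s .* P, 1)));
    if s > best_sum
      best_sum    = s;
      U_t_matched = P;
      order       = orderings(k, :);
    end
  end
end
```

The same permutation would also have to be applied to the corresponding eigenvalues, so that each scaling factor remains attached to its own axis.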

New code will be proposed which uses a function ‘MatchColumns’ to achieve the best alignment between axes as described above. With this code the output image is as shown below.

[Image: 'Ruggedised' output]

It has been established that the revised code offers correct processing over a wider range of images than the original code. It is not known whether it achieves an optimal output for all possible input image pairs. Given that it took 14 years to identify the shortcomings of the original method, it might take a while to determine the shortfall, if any, of the revised method.

The images above can be reproduced with the new code by processing images 2.jpg and 1.jpg, with ‘ruggedisation’ in the function ‘cf_Xiao06.m’ set to ‘false’ and to ‘true’ respectively.

Ref 1
Xiao, Xuezhong, and Lizhuang Ma. "Color Transfer in Correlated Color Space." In Proceedings of the 2006 ACM International Conference on Virtual Reality Continuum and Its Applications, pp. 305-309. ACM, 2006.

Ref 2
https://github.com/ZZPot/Xiao-transfer

Addendum:

A few more words are warranted here about the characteristics of the processing method.

At first sight the Xiao method appears to be necessarily better than the standard Reinhard method. The Reinhard method matches the mean and standard deviation values in the L-alpha-beta domain. The Xiao method produces an output where the mean values, the standard deviations and the cross-covariance values of the colour channels in the output image are exactly matched to those of the target colour image.
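
The exactness of the Xiao match is straightforward to verify numerically. The check below is illustrative only; 'out' and 'tgt' are assumed names for the output and target images, and the match holds up to any clipping or quantisation applied when the output is written.

```matlab
% The per-channel means and the full 3x3 covariance matrix of the output
% (which contains the variances and the cross covariances) should match
% those of the target image, apart from clipping and rounding effects.
out_pixels = reshape(double(out), [], 3);
tgt_pixels = reshape(double(tgt), [], 3);
disp(mean(out_pixels) - mean(tgt_pixels));   % approximately the zero vector
disp(cov(out_pixels)  - cov(tgt_pixels));    % approximately the zero matrix
```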

However, the situation is not as simple as it seems. In the first output image shown above, the green channel has a mean value, a standard deviation value and cross-covariance values (with the other channels) matching those of the green channel in the target colour image. The green channel has the right characteristics but it is predominantly associated with the wrong part of the picture. The mountains are shaded green but the green shading should be applied to the grassy plain. It is no consolation that the green channel has the requisite statistical characteristics if the green colouration is applied to the wrong part of the image.

The ruggedisation processing can reduce any gross error, but there is still a residual mismatch error in the second output image shown above. Effectively, the source rotation matrix defines three new orthogonal axes A, B, C which lie at some orientation to the RGB axes. Likewise, the target rotation matrix defines three new orthogonal axes A’, B’, C’ which lie at some orientation to the RGB axes. Normally the orientations of the two sets of derived axes will not coincide, but the processing method implicitly assumes that they do.

Pixels in the source image are initially represented by their RGB co-ordinates and are then represented as points relative to the ABC axes. Scaling is applied and the points occupy new positions in the ABC space. The modified points are then mapped back to RGB co-ordinates. This latter remapping is performed on the basis that a point which has co-ordinates a, b, c relative to the ABC axes also has co-ordinates a, b, c relative to the A’B’C’ axes. This processing leads to an outcome where the mean values, the standard deviations and the cross-covariance values of the colour channels in the output image are exactly matched to those of the target colour image; however, this is typically achieved at the expense of a shift in colour. Pixels which were pure red or pure green or pure blue in the source image are now a shade of red or a shade of green or a shade of blue in the output image. Conversely, pixels which appear as pure red or pure green or pure blue in the output image were originally a shade of red or a shade of green or a shade of blue in the input image. The extent of the colour shift depends upon the extent of misalignment between the ABC axes and the A’B’C’ axes.
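
The transform chain just described can be written out as a short sketch. The variable names are assumptions for illustration and are not taken from the repository.

```matlab
% Illustrative sketch of the mapping described above.
src_pixels = reshape(double(src), [], 3);      % source pixels, N-by-3
tgt_pixels = reshape(double(tgt), [], 3);      % target pixels, M-by-3
m_s = mean(src_pixels);   m_t = mean(tgt_pixels);
[U_s, S_s, ~] = svd(cov(src_pixels));          % source axes ABC and variances
[U_t, S_t, ~] = svd(cov(tgt_pixels));          % target axes A'B'C' and variances

scale = sqrt(diag(S_t)) ./ sqrt(diag(S_s));    % per-axis ratio of standard deviations

% Implicit expansion is used below; use bsxfun on older MATLAB releases.
centred = src_pixels - m_s;                    % remove the source mean
in_abc  = centred * U_s;                       % coordinates relative to ABC
scaled  = in_abc .* scale';                    % scale along each derived axis
out_rgb = scaled * U_t' + m_t;                 % read the same coordinates off
                                               % against A'B'C' and restore the
                                               % target mean
```

The last line is where the implicit assumption appears: the coordinates obtained on the ABC axes are read off unchanged against the A’B’C’ axes, which is only strictly valid when the two sets of axes coincide.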

A colour shift need not be a problem per se. If the source image has green grass and the target image has grass which is a different shade of green, then it would be expected that the colour of the grass would have undergone a shift in the final image. This expectation would apply whether the Xiao or the Reinhard method was used for processing. The difference for the Xiao method is that a colour match may be achieved in part by moving the position of colours within the picture, whether this is a gross shift or a lesser shift (according to whether ruggedisation is applied or not).

The images below compare the output images from three methods of processing. They are respectively ‘Enhanced Reinhard’ processing [‘alpha-beta’ variant, in Ref 3], ‘Standard Reinhard’ processing [Ref 4] and ‘Ruggedised Xiao’ processing as described here.

[Image: 'scotland_composite' comparison of the three methods]

It can be seen that the third image has different colour characteristics from the others. The green at the base of the cottage looks more fluorescent, and the grass in front has a slight orange tint which doesn’t seem to be present in the palette of the target image. It could be argued that the first two images better match the look and feel of the colour target image. Based on these images it is clear that the Xiao method can produce images which have distinct colour differences from images produced by other methods. Further study would be required to determine definitively whether the difference is a positive or a negative.

Ref 3
https://github.com/TJCoding/Enhanced-Image-Colour-Transfer

Ref 4
https://github.com/hangong/reinhard_color_transfer

hangong (Owner) commented Apr 19, 2020

The code hosted in that repository is intended to provide an implementation consistent with the paper, so if there are loopholes in the original design, I won’t fix them. However, you are encouraged to submit the amended MATLAB code as your patch. I’m happy to include it in a separate directory but will make it clear that this is not the original design.

It would be better if you could ensure the code is compatible with Octave, as not everyone has a MATLAB licence.

hangong pushed a commit that referenced this issue Apr 22, 2020
Code added in m_ruggedisation_update to address Issue #1 "Incomplete Specification for the Processing Method"

Original folder 'm' (dated 14/09/2015) added to overwrite the previous update.
hangong added a commit that referenced this issue Apr 22, 2020
hangong (Owner) commented Apr 27, 2020

Many thanks, Terry. The workaround solution works great. Shall I close this issue then?

A comment from @TJCoding was marked as off-topic.

hangong (Owner) commented Apr 27, 2020

Thanks. The bottom line is that users should have a solid reason to use it. Unless it is for academic publication purposes, professional user comparison experiments are not necessary.

hangong closed this as completed Apr 27, 2020