Gif and Quantization Improvements #637
Conversation
Codecov Report
@@ Coverage Diff @@
## master #637 +/- ##
==========================================
+ Coverage 89.16% 89.16% +<.01%
==========================================
Files 890 890
Lines 37889 37946 +57
Branches 2652 2661 +9
==========================================
+ Hits 33782 33835 +53
- Misses 3299 3302 +3
- Partials 808 809 +1
Continue to review full report at Codecov.
Ok. Good to review once tests have passed.
private const float ToleranceThresholdForPaletteEncoder = 0.2f / 100;
// This is bull. Failing online for no good reason.
// The images are an exact match. Maybe the submodule isn't updating?
private const float ToleranceThresholdForPaletteEncoder = 0.2273F;
Super-strange. I hope it's not my algorithm that sucks.
Maybe we could print the "last modified" timestamp of submodule files, or something like that.
It’s weird. The exact same difference before and after the update.
@JimBobSquarePants I think I figured it out!
Actually, it fails for me locally when I change the tolerance back to the original value. This is my actual output image for palette size 80:
It differs from both the current and the past version of the same image in the ReferenceOutput repository. Can you confirm that this means the actual output of the execution on my PC differs from yours? That could only be explained by hardware (CPU) differences. I suspect a method like WuFrameQuantizer<T>.Volume() as the source.
If that is the case, this is exactly the reason why we have a tolerance in our comparer :) I think you have overshot the tolerance, however; based on my local tests, 1.5F/100 should be good enough. It's not a percentage value in this class; you actually defined 22.73%! (I usually postfix my stuff with ***Percent when I mean percentage.)
Wow, yeah! It's definitely different from the output on my machine. That explains that, then.
I need to get my head around the tolerance values. At the moment, writing 1.5F/100 (is that 1% over 100 pixels?) isn't immediately intuitive, and we should write a helper that makes it clearer.
@JimBobSquarePants tolerance == 0.01 means 1%, which can be interpreted as either of the following:
- An RGB image should be white, but 1% of the pixels are black (the rest are white)
- All the pixels in the image differ by 1% from the expected values
I tried to explain this here, but I think I wasn't clear enough.
ImageComparer.TolerantPercentage() is a helper that expects a tolerance value scaled to 0-100. The value is scaled back to the 0-1.0 interval internally. It's useful because tolerance values are typically very small.
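The scaling described above can be sketched as follows. This is a minimal illustration, not the test suite's actual implementation; the helper name TolerantPercentage() appears in the thread, but this signature is an assumption.

```csharp
// Sketch of the percentage-to-tolerance scaling discussed above.
public static class ToleranceExample
{
    // Converts a human-friendly percentage (0-100) to the internal 0-1.0 tolerance.
    public static float TolerantPercentage(float percent) => percent / 100F;

    public static void Main()
    {
        // Writing 1.5F / 100 inline and calling TolerantPercentage(1.5F)
        // express the same tolerance: 0.015, i.e. a 1.5% allowed difference.
        float raw = 1.5F / 100;
        float scaled = TolerantPercentage(1.5F);
        System.Console.WriteLine(raw == scaled); // True
    }
}
```

The point of the helper is readability: a test can state the intended percentage directly instead of encoding it as an inline division.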
That's much clearer, thanks!
Hey @iamcarbon, check this: 6x speedup! With the new global palette mode we're down to 56 seconds with 11.3 MB output. With the local palette mode it's 1 minute with 13 MB output. Here are the two images to compare. Local, of course, is flawless; global suffers slightly since the image reuses the palette from the first frame.
@JimBobSquarePants Heck yes! This is a gigantic speedup!!!!
public enum GifColorTableMode
{
    /// <summary>
    /// A single color table is calculated from the first frame and reused for subsequent frames.
We should probably allow the user to select the baseline frame.
I did think about this, but if someone creates a custom PaletteQuantizer implementation, it doesn't matter what frame is used. Plus, it would vary from image to image.
The implementation looks good. We should probably consider allowing selection of the frame used for palette quantization in GifColorTableMode.Global mode, to support users with unusual GIFs.
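As a sketch of how the new mode might be selected by a user, assuming the GifEncoder exposes the enum via a ColorTableMode property (an assumption based on this PR's diff, not a confirmed public API):

```csharp
using SixLabors.ImageSharp;
using SixLabors.ImageSharp.Formats.Gif;

public static class GifEncodeExample
{
    public static void Encode(Image image, System.IO.Stream output)
    {
        // Assumed property name: ColorTableMode.
        var encoder = new GifEncoder
        {
            // Global: one palette is computed from the first frame
            // and reused for every subsequent frame (faster, smaller output).
            ColorTableMode = GifColorTableMode.Global
        };
        image.Save(output, encoder);
    }
}
```

The trade-off discussed in this thread applies: Global is roughly 6x faster and produces smaller files, while Local (a palette per frame) gives the most faithful output.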
int index = this.GetTransparentIndex(quantized);
bool useGlobalTable = this.colorTableMode.Equals(GifColorTableMode.Global);

// Write the LSD.
this.WriteLogicalScreenDescriptor(image, stream, index);
Do you swear you never coded in Java? 😄
I've read a lot of Java source 😄
@@ -74,9 +74,21 @@ public void Dither<TPixel>(ImageFrame<TPixel> image, TPixel source, TPixel trans
{
    image[x, y] = transformed;

    // Equal? Break out as there's nothing to pass.
There are still many optimization opportunities left here, but we need to change (or extend?) the IErrorDiffuser API for that.
Not related to this PR, but [MethodImpl(MethodImplOptions.AggressiveInlining)] is unnecessary on Dither<TPixel>(). The compiler won't be able to inline it.
I thought there were improvements to what can be inlined that would allow this. I could be wrong though. 🤷‍♂️
If you invoke a method through an interface or as a virtual call on a base class, it can't be inlined, because the target code is not known at JIT time. (The virtual invocation is responsible for dispatching the method at runtime.)
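A minimal illustration of this point, using hypothetical types (not the actual IErrorDiffuser API): the call site below only knows the interface, so the JIT must emit an indirect virtual call, and AggressiveInlining on the implementation cannot change that.

```csharp
public interface IErrorDiffuserLike
{
    int Dither(int value);
}

public sealed class FloydSteinbergLike : IErrorDiffuserLike
{
    // Even with [MethodImpl(MethodImplOptions.AggressiveInlining)] here,
    // callers that only see IErrorDiffuserLike cannot have this body inlined.
    public int Dither(int value) => value / 2;
}

public static class DispatchExample
{
    // 'diffuser' may be any implementation, so the concrete Dither body
    // is unknown at JIT time and must be dispatched at runtime.
    public static int Apply(IErrorDiffuserLike diffuser, int value)
        => diffuser.Dither(value);

    public static void Main()
        => System.Console.WriteLine(Apply(new FloydSteinbergLike(), 10)); // 5
}
```

This is why the thread suggests the API itself would need to change (or be extended) before further inlining-based optimizations are possible.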
Prerequisites
Description
These changes do a few things (all related), fixing #630 and #632.