
Gif and Quantization Improvements #637

Merged
merged 17 commits into from
Jun 28, 2018

Conversation

JimBobSquarePants
Member

@JimBobSquarePants JimBobSquarePants commented Jun 26, 2018

Prerequisites

  • I have written a descriptive pull-request title
  • I have verified that there are no overlapping pull-requests open
  • I have verified that my code matches the existing coding patterns and practices as demonstrated in the repository. These follow strict StyleCop rules 👮.
  • I have provided test coverage for my change (where applicable)

Description

These changes do a few related things, fixing #630 and #632:

  • Adds palette options to gif encoding: Global (default) vs. Local. This means smaller animated gifs.
  • Improves the performance and memory usage of the quantizers (~3x faster).
  • Improves the performance and quality of the dithering algorithms.
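The global-vs-local trade-off can be illustrated with a toy model. This is a hedged Python sketch, not the ImageSharp implementation: `build_palette`, `nearest`, and `quantize` are hypothetical helpers, and the "quantizer" here is simple color popularity rather than Wu quantization.

```python
from collections import Counter

def build_palette(frame, max_colors=4):
    """Derive a palette from a frame by color popularity (toy quantizer)."""
    counts = Counter(frame)
    return [color for color, _ in counts.most_common(max_colors)]

def nearest(color, palette):
    """Map a color to its nearest palette entry (squared Euclidean distance in RGB)."""
    return min(palette, key=lambda p: sum((a - b) ** 2 for a, b in zip(color, p)))

def quantize(frame, palette):
    """Replace every pixel with its nearest palette color."""
    return [nearest(c, palette) for c in frame]

frames = [
    [(255, 0, 0), (0, 255, 0), (0, 0, 255)],  # frame 1
    [(250, 5, 5), (0, 0, 0), (0, 0, 255)],    # frame 2 introduces black
]

# Global mode: one palette derived from the first frame, reused for every frame.
# Smaller output (one color table), but later frames may lose colors.
global_palette = build_palette(frames[0])
global_frames = [quantize(f, global_palette) for f in frames]

# Local mode: a fresh palette per frame. Better fidelity, larger output.
local_frames = [quantize(f, build_palette(f)) for f in frames]
```

In this sketch the black pixel introduced in frame 2 survives local quantization but is lost in global mode, because the global table was built from frame 1 alone.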

@codecov

codecov bot commented Jun 27, 2018

Codecov Report

Merging #637 into master will increase coverage by <.01%.
The diff coverage is 92.68%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master     #637      +/-   ##
==========================================
+ Coverage   89.16%   89.16%   +<.01%     
==========================================
  Files         890      890              
  Lines       37889    37946      +57     
  Branches     2652     2661       +9     
==========================================
+ Hits        33782    33835      +53     
- Misses       3299     3302       +3     
- Partials      808      809       +1
Impacted Files | Coverage Δ
--- | ---
...ts/ImageSharp.Tests/Formats/Png/PngEncoderTests.cs | 98.44% <ø> (ø) ⬆️
...sts/Processing/Processors/Dithering/DitherTests.cs | 100% <ø> (ø) ⬆️
...ssing/Quantization/Processors/QuantizeProcessor.cs | 0% <0%> (-100%) ⬇️
.../Processing/Quantization/QuantizedFrame{TPixel}.cs | 87.5% <100%> (+12.5%) ⬆️
...Dithering/Processors/PaletteDitherProcessorBase.cs | 100% <100%> (ø) ⬆️
...sts/ImageSharp.Tests/Formats/GeneralFormatTests.cs | 97.43% <100%> (-0.04%) ⬇️
src/ImageSharp/PixelFormats/Rgba64.cs | 97.26% <100%> (ø) ⬆️
src/ImageSharp/Common/Constants.cs | 100% <100%> (ø) ⬆️
...tion/FrameQuantizers/FrameQuantizerBase{TPixel}.cs | 97.91% <100%> (+0.77%) ⬆️
...zation/FrameQuantizers/WuFrameQuantizer{TPixel}.cs | 94.38% <100%> (-0.29%) ⬇️
... and 18 more

Continue to review full report at Codecov.

Legend:
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 6949cde...ad8c2e2. Read the comment docs.

@JimBobSquarePants changed the title from “WIP Gif and Quantization Improvements” to “Gif and Quantization Improvements” on Jun 27, 2018
@JimBobSquarePants
Member Author

Ok. Good to review once tests have passed.

private const float ToleranceThresholdForPaletteEncoder = 0.2f / 100;
// This is bull. Failing online for no good reason.
// The images are an exact match. Maybe the submodule isn't updating?
private const float ToleranceThresholdForPaletteEncoder = 0.2273F;
Member

Super-strange. I hope it's not my algorithm that sucks.

Maybe we could print the "last modified" timestamp of submodule files, or something like that.

Member Author

It’s weird. The exact same difference before and after the update.

Member

@JimBobSquarePants I think I figured it out!

Actually, it fails for me locally when I change the tolerance back to the original value. This is my actual output image for palette size 80:
[image: palettecolortype_wuquantizer_palette-8bpp__palettesize-80]

It differs from both the current and the past version of the same image in the ReferenceOutput repository. Can you confirm this? It would mean that the actual output of the execution on my PC differs from yours, which could only be explained by hardware (CPU) differences. I suspect a method like WuFrameQuantizer<T>.Volume() as the source.

If that is the case, this is exactly the reason why we have a tolerance in our comparer :) I think you have overshot the tolerance, however; based on my local tests, 1.5F/100 should be good enough. It's not a percentage value in this class: you actually defined a 22.73% tolerance! (I usually postfix my stuff with ***Percent when I mean a percentage.)

Member Author

Wow, yeah! It's definitely different from the output on my machine. That explains it, then.

I need to get my head around the tolerance values. At the moment, writing 1.5F/100 (is that 1% over 100 pixels?) isn't immediately intuitive, and we should write a helper that makes it clearer.

Member
@antonfirsov antonfirsov Jun 28, 2018

@JimBobSquarePants tolerance == 0.01 means 1%, which can be interpreted as the following:

  • Either: An RGB image should be white, but 1% of white pixels are black (the rest is white)
  • Or: all the pixels in the image have a 1% difference compared to the expected values

I tried to explain this here but I think I wasn't clear enough.

ImageComparer.TolerantPercentage() is a helper that expects a tolerance value scaled to 0–100; the value is scaled back to the 0–1.0 interval internally. It's useful because tolerance values are typically very small.
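The two interpretations above can be made concrete with a small sketch. This is hedged Python, not the actual ImageComparer code: `mean_normalized_difference` and `tolerant_percentage` are hypothetical stand-ins, but under this metric both scenarios land exactly on a tolerance of 0.01.

```python
def mean_normalized_difference(actual, expected):
    """Mean per-channel absolute difference, normalized to 0..1 for 8-bit channels."""
    total = sum(abs(a - e) for pa, pe in zip(actual, expected) for a, e in zip(pa, pe))
    return total / (3 * len(actual) * 255)

WHITE, BLACK = (255, 255, 255), (0, 0, 0)
expected = [WHITE] * 100

# Interpretation 1: 1% of the pixels are black, the rest are white.
one_black = [BLACK] + [WHITE] * 99
assert mean_normalized_difference(one_black, expected) == 0.01

# Interpretation 2: every pixel is off by 1% per channel (255 * 0.01 = 2.55).
slightly_off = [(255 - 2.55,) * 3] * 100
assert abs(mean_normalized_difference(slightly_off, expected) - 0.01) < 1e-9

# A TolerantPercentage-style helper just rescales 0-100 input to 0-1.0.
def tolerant_percentage(percent):
    return percent / 100
```

This also shows why 0.2273F is far too loose: as a raw tolerance it means a 22.73% difference is accepted, whereas 1.5F/100 means 1.5%.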

Member Author

That's much clearer, thanks!

@JimBobSquarePants
Member Author

Hey @iamcarbon, check this out: a 6x speedup!

With the new global palette mode we're down to 56 seconds with 11.3Mb output.

[image: global color palette performance]

With the local palette mode it's 1 minute with 13Mb output.

[image: local color palette performance]

Here are the two images to compare. Local, of course, is flawless; global suffers slightly since the image reuses the palette from the first frame.

gifs.zip

@iamcarbon
Contributor

@JimBobSquarePants Heck yes! This is a gigantic speedup!!!!

public enum GifColorTableMode
{
/// <summary>
/// A single color table is calculated from the first frame and reused for subsequent frames.
Member

We should probably allow the user to select the baseline frame.

Member Author

I did think about this, but if someone creates a custom PaletteQuantizer implementation then it doesn't matter which frame is used. Plus, it would vary from image to image.

Member
@antonfirsov antonfirsov left a comment

The implementation looks good. We should probably consider allowing the selection of the palette frame to be used for quantization in GifColorTableMode.Global mode to support users with strange gifs.

this.WriteLogicalScreenDescriptor(image, stream, index);
// Write the LSD.
int index = this.GetTransparentIndex(quantized);
bool useGlobalTable = this.colorTableMode.Equals(GifColorTableMode.Global);
Member

Do you swear you never coded in Java? 😄

Member Author

I've read a lot of Java source 😄

@@ -74,9 +74,21 @@ public void Dither<TPixel>(ImageFrame<TPixel> image, TPixel source, TPixel trans
{
image[x, y] = transformed;

// Equal? Break out as there's nothing to pass.
Member

There are still many optimization opportunities left here, but we need to change (extend?) the IErrorDiffuser API for that.

Not related to this PR but [MethodImpl(MethodImplOptions.AggressiveInlining)] is unnecessary on Dither<TPixel>(). The compiler won't be able to inline it.

Member Author

I thought there were improvements to what can be inlined that would allow this. I could be wrong though. 🤷‍♂️

Member

If you invoke a method through an interface or via a virtual call on a base class, it can't be inlined, because the code is not known at JIT time. (The virtual invocation is responsible for dispatching the method at runtime.)

3 participants