meshopt_encodeVertexBuffer not making very compressible data. #776
10 comments · 16 replies
-
Actually, upon further testing, it seems meshopt_encodeVertexBuffer performs poorly for uint16 vertex coordinates as well:
meshopt vert compressed_size: 110177 B
standard vertex compressed_size: 95158 B
-
It's not really designed for floating-point data without the use of filters (meshopt_encodeFilterExp, for example); for poorly compressible data like this, zstd might find more opportunities to reuse individual float values for some meshes, but it certainly depends. For integer data, you should also make sure that the vertex order is correct per the documentation. One good way to see what a reasonable compressed size would be is to use gltfpack (with […]). Either way, I don't think it's a bug, so I'm going to convert this to a discussion.
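For illustration, a minimal sketch of the intended float pipeline: filter the positions, then run the vertex codec. The 15-bit precision and the separate position stream are assumptions for the example, and the sixth mode argument to meshopt_encodeFilterExp exists in recent meshoptimizer versions (older releases omit it):

```cpp
#include <cstddef>
#include <vector>

#include "meshoptimizer.h"

// Sketch: filter float3 positions with a shared-exponent transform, then
// encode. The filter turns raw IEEE floats into fixed-point-like values
// whose byte-wise deltas the vertex codec can actually compress.
std::vector<unsigned char> encodePositions(const float* positions, size_t vertex_count)
{
    const size_t stride = 12; // 3 floats per vertex

    // 15 mantissa bits is an illustrative precision choice.
    std::vector<float> filtered(vertex_count * 3);
    meshopt_encodeFilterExp(filtered.data(), vertex_count, stride, 15, positions, meshopt_EncodeExpSharedVector);

    std::vector<unsigned char> out(meshopt_encodeVertexBufferBound(vertex_count, stride));
    out.resize(meshopt_encodeVertexBuffer(out.data(), out.size(), filtered.data(), vertex_count, stride));
    return out;
}
```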
-
And yeah, if you can share the file I could double-check what the behavior is on some built-in code. FWIW, I'm planning to look into improving compression at some point, but that requires expanding the existing compression scheme to have a little more freedom to encode data in certain cases, and it's not clear if significant wins are possible without sacrificing decoding speed. Which is to say: within the current encoding scheme, the compression rates are what they are, because the encoder has relatively little latitude and relies on external processing code to remove noise and optimize order.
-
I'll try out your float filters. In the meantime, some more data:
vertex_data.size(): 225296 B (raw size)
unfiltered_compressed_size: 101825 B (zstd applied directly to the original vertex data)
So meshopt_encodeVertexBuffer is actually making the data significantly less compressible, relative to not using it.
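For reproducibility, the measurement is essentially the harness below (a sketch; vertex_size = 24 matches the 24-byte format described in this thread, the zstd level is whichever one is under test, and error handling is omitted):

```cpp
#include <cstdio>
#include <vector>

#include <zstd.h>

#include "meshoptimizer.h"

// Compress a buffer with zstd and return the compressed size.
static size_t zstd_size(const void* data, size_t size, int level)
{
    std::vector<unsigned char> out(ZSTD_compressBound(size));
    return ZSTD_compress(out.data(), out.size(), data, size, level);
}

// Compare zstd(raw) against zstd(meshopt-encoded) for one vertex stream.
void compare(const unsigned char* vertex_data, size_t vertex_count, size_t vertex_size, int level)
{
    std::vector<unsigned char> enc(meshopt_encodeVertexBufferBound(vertex_count, vertex_size));
    enc.resize(meshopt_encodeVertexBuffer(enc.data(), enc.size(), vertex_data, vertex_count, vertex_size));

    printf("raw+zstd: %zu B, meshopt+zstd: %zu B\n",
        zstd_size(vertex_data, vertex_count * vertex_size, level),
        zstd_size(enc.data(), enc.size(), level));
}
```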
-
No offense hopefully, but I think there's something a bit wrong going on here. Regardless of vertex ordering or float filtering, I don't think an encoding scheme should negatively affect the achievable compression ratio this much.
-
Will supply data so you can test with it shortly.
-
Here you go, this is the raw uncompressed and unfiltered vertex data. Vertex format is as listed in the original post.
-
What I'm seeing by just running this data through a simple test binary is that zstd does result in a better compression ratio at level 19, but is a little worse at 9 or the default 6. The same goes for gzip. lz4 sees significant wins. All of this is more or less aligned with my expectations so far (except the zstd -19 results!). Three notes on this (to address the "is something wrong?" comment):
The reason why I say "this is not a bug" is because of pt 2: I'm not aware of any decisions that the encoder could be making dramatically better right now. This is not to say that there are no improvements possible here! But that's a project to design the v1 format and implement an encoder for it, not a project to fix some sort of issue in the encoder.
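To make the "little latitude" point concrete: the vertex codec is built around per-byte deltas between consecutive vertices, conceptually like the sketch below. This is a simplification for intuition only, not the actual wire format, which also splits the data into blocks and bit-packs the deltas:

```cpp
#include <cstddef>
#include <vector>

// Conceptual model: each byte of a vertex is delta-encoded against the
// same byte of the previous vertex. Ordered, quantized attributes produce
// many zero/small deltas; raw floats and bit-packed fields do not.
std::vector<unsigned char> byte_deltas(const unsigned char* vertices, size_t vertex_count, size_t vertex_size)
{
    std::vector<unsigned char> deltas(vertex_count * vertex_size);

    for (size_t i = 0; i < vertex_count; ++i)
        for (size_t k = 0; k < vertex_size; ++k)
        {
            unsigned char prev = (i > 0) ? vertices[(i - 1) * vertex_size + k] : 0;
            deltas[i * vertex_size + k] = (unsigned char)(vertices[i * vertex_size + k] - prev);
        }

    return deltas;
}
```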
-
Here are some results from gltfpack; this is not quite apples to apples because the normal & texture coordinate storage is different. But just noting that, within the type of data the encoder expects right now, it's doing pretty well.
One thing I will note as a likely significant problem here is the 10-bit encoded normal in your data. It is spread across byte boundaries, which means it's basically incompressible beyond the LZ references zstd is doing, and deltas probably hurt it even more. I'd assume this is a major source of inefficiency here, and the ability to at least disable delta compression on this stream would help.
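For comparison, a sketch of the byte-aligned route gltfpack takes for normals: unpack to floats, then let meshopt_encodeFilterOct store each octahedral component on its own byte so the per-byte delta stream stays coherent. The 10:10:10:2 unpacking below is a hypothetical placeholder; adjust it to the actual packing used by the asset:

```cpp
#include <cstddef>
#include <vector>

#include "meshoptimizer.h"

// Hypothetical decode for a 10:10:10:2 unsigned-normalized layout;
// replace with whatever packing the data actually uses.
static void unpack_normal10(unsigned int p, float out[4])
{
    out[0] = ((p >> 0) & 1023) / 1023.f * 2.f - 1.f;
    out[1] = ((p >> 10) & 1023) / 1023.f * 2.f - 1.f;
    out[2] = ((p >> 20) & 1023) / 1023.f * 2.f - 1.f;
    out[3] = 0.f;
}

// Re-encode normals so each component lands on a byte boundary;
// a 4-byte stride with 8-bit components matches gltfpack's default
// normal storage.
std::vector<unsigned char> filter_normals(const unsigned int* packed, size_t vertex_count)
{
    std::vector<float> unpacked(vertex_count * 4);
    for (size_t i = 0; i < vertex_count; ++i)
        unpack_normal10(packed[i], &unpacked[i * 4]);

    std::vector<unsigned char> filtered(vertex_count * 4);
    meshopt_encodeFilterOct(filtered.data(), vertex_count, 4, 8, unpacked.data());
    return filtered; // feed this stream to meshopt_encodeVertexBuffer
}
```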
-
Ok got some good results using filtering:
-
Hi,
Am testing out meshopt_encodeVertexBuffer, hoping to get some good compression wins. However, it's not working very well in some cases. In fact, if my code and measurements are correct, it's performing much worse than my very simple vertex attribute de-interleaving filter in the case where vertex position coordinates are float32s. For uint16 coordinates it works much better, though.
For the float32 case:
meshopt vert compressed_size: 150207 B
standard vertex compressed_size: 130799 B
So my simple attribute de-interleaving results in a much smaller compressed vertex size. The compression method used is zstd at compression level 20 (similar result differences at compression level 9 etc.).
You can see my de-interleaving and compression code here: https://github.com/glaretechnologies/glare-core/blob/5b34793a7a53bf8b58b2f503a18a62d49e197988/graphics/BatchedMesh.cpp#L858
Vertex format is:
```cpp
float vert_pos[3];
uint32 packed_normal;
half uv[2];
uint32 mat_index;
```
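For context, the de-interleaving I'm comparing against is essentially the following (a simplified sketch of the linked BatchedMesh.cpp code; the struct mirrors the layout above, with half UVs kept as raw 16-bit values, and the names are illustrative):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Mirrors the 24-byte vertex layout above.
struct Vertex
{
    float vert_pos[3];
    uint32_t packed_normal;
    uint16_t uv[2];
    uint32_t mat_index;
};

// Write each attribute as its own contiguous stream, so zstd sees long
// runs of similar bytes instead of interleaved heterogeneous data.
std::vector<unsigned char> deinterleave(const std::vector<Vertex>& verts)
{
    std::vector<unsigned char> out;
    out.reserve(verts.size() * sizeof(Vertex));

    auto append = [&](const void* p, size_t n) {
        const unsigned char* b = static_cast<const unsigned char*>(p);
        out.insert(out.end(), b, b + n);
    };

    for (const Vertex& v : verts) append(v.vert_pos, 12);
    for (const Vertex& v : verts) append(&v.packed_normal, 4);
    for (const Vertex& v : verts) append(v.uv, 4);
    for (const Vertex& v : verts) append(&v.mat_index, 4);

    return out;
}
```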
This is a little disappointing; I was hoping for some wins even with float vertex positions.
meshopt_encodeIndexBuffer, on the other hand, works fantastically.
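(For reference, my index path is just the straightforward encode; a sketch, assuming triangle lists:)

```cpp
#include <cstddef>
#include <vector>

#include "meshoptimizer.h"

// Encode a triangle-list index buffer; index_count must be a multiple of 3.
std::vector<unsigned char> encode_indices(const std::vector<unsigned int>& indices, size_t vertex_count)
{
    std::vector<unsigned char> out(meshopt_encodeIndexBufferBound(indices.size(), vertex_count));
    out.resize(meshopt_encodeIndexBuffer(out.data(), out.size(), indices.data(), indices.size()));
    return out;
}
```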
I can supply you with the vertex data if you want to test.
cheers,
Nick