Use QVector<quint16> rather than QByteArray for TBC fields and RGB output #418
This is a respin of a patch I started last year...
The general idea is to use (named typedefs of) QVector rather than QByteArray for 16-bit TBC and RGB samples. This substantially reduces the number of places where the tools need to use reinterpret_cast, and generally simplifies the code working with fields and decoded frames - the only remaining unsafe casts in the video tools are when doing file IO. ld-chroma-decoder gets the most benefit from this, but most of the tools are affected to some degree.
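A rough sketch of the pattern (using `std::vector<uint16_t>` here so the snippet stands alone; the patch itself uses named typedefs of `QVector<quint16>`, and the function name below is illustrative, not from the tools): with a byte buffer, every 16-bit access needs a `reinterpret_cast`, whereas a typed vector gives direct, type-safe element access.

```cpp
#include <cstdint>
#include <vector>

// Stand-in for the patch's named typedef of QVector<quint16>.
using SampleVector = std::vector<uint16_t>;

// Before: 16-bit samples hidden inside a byte buffer, so callers must cast:
//   const uint16_t *data =
//       reinterpret_cast<const uint16_t *>(byteArray.data());

// After: fields are typed, so per-sample code needs no casts at all.
uint16_t maxSample(const SampleVector &field) {
    uint16_t maxVal = 0;
    for (uint16_t sample : field) {
        if (sample > maxVal) maxVal = sample;
    }
    return maxVal;
}
```

With this shape, the only place a cast survives is at the file-IO boundary, where the raw bytes enter or leave the program.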
I'm reasonably confident in the chroma-decoder/dropout-correct/process-vbi changes, but my testsuite doesn't do anything much with diffdod/discmap yet - please check those changes look OK!
See the commit messages for a couple of other minor improvements rolled into this.
One more thing I spotted but haven't changed: several functions in ld-process-vbi compute the zero-crossing point as `videoParameters.white16bIre - videoParameters.black16bIre`. Should that actually be the midpoint, `(videoParameters.white16bIre + videoParameters.black16bIre) / 2`?
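To make the question concrete, a minimal sketch (the function names are mine, not from ld-process-vbi, and the sample levels used below are made up): the two expressions generally give different values whenever the black level is nonzero, so the choice matters for where the slicer triggers.

```cpp
// Current code computes the span between the two levels:
int zeroCrossingDifference(int black16bIre, int white16bIre) {
    return white16bIre - black16bIre;
}

// The suggested alternative is the level halfway between black and white:
int zeroCrossingMidpoint(int black16bIre, int white16bIre) {
    return (white16bIre + black16bIre) / 2;
}
```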