NASA offers the images from the space probes Voyager 1 and 2 in a compressed raw format; however, their decompression program hasn't been updated since August 1989, so it doesn't come close to compiling. I modernised it, simplified it, fixed bugs, librarified it, made it re-entrant, made it output 32-bit float TIFF images and made it work on any number of files you drag-and-drop onto it. So you can select a whole folder's worth of Voyager's .imq files, drop them on cdcomp.exe and it will convert them all into .tif files in the same folder.
You can download whole volumes of those .imq images to decode. Either get the Windows binary from the releases or compile the program yourself. The only dependency is rouziclib; just put the whole rouziclib folder in your include path.
I structured the files voyager_decomp.h and voyager_decomp.c so that they can be used as a library, which means the code can be called from another program (like a viewer) if we so desire. With one rather important caveat: that code relies on many of rouziclib's features. It's possible to remove the need for the whole of rouziclib by copying over only the needed features, but I for one am not going to do that.
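For illustration, here's a minimal sketch of what calling the decoder from another program might look like. The function name and signature below are hypothetical stand-ins, not the actual API; check voyager_decomp.h for the real entry points:

```c
/* Minimal sketch of using the decoder as a library. The function name and
   signature below are hypothetical stand-ins -- see voyager_decomp.h for
   the real API. */
#include <stdlib.h>
#include "voyager_decomp.h"

int main(int argc, char *argv[])
{
	if (argc < 2)
		return 1;

	/* Decode one .imq file into a float buffer (hypothetical call) */
	float *image = voyager_decomp_decode_file(argv[1]);
	if (image == NULL)
		return 1;

	/* ...display or further process the 800x800 image here... */

	free(image);
	return 0;
}
```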
The raw images are 800x800 with 8 bits per pixel. However, those 8 bits are on a linear scale, not sRGB (gamma compressed) like 8-bit-per-channel images typically are. So to preserve the superior accuracy that these images have in the brights we need higher precision, which is why I chose to convert to 32-bit floating point TIFF. Here's a photo of Jupiter in March 1979 (c1631738.imq) taken through a violet filter, as converted by this program:
NASA claims that "Each picture element (pixel) in the two dimensional image array is represented as an 8-bit value, and the value—in the range 0 to 255—is proportional to the amount of light detected at that point", and I decoded the images accordingly. But is this really correct? From the way the pictures look washed out, with an overly bright sky (which should be much closer to black), it's tempting to think that NASA's description is wrong. Applying an arbitrary gamma correction (a gamma of 0.5, i.e. squaring every pixel value) seems to drive that point home:
If anyone knows anything about the topic I would like to hear about it (open an issue). In the meantime, it's probably sensible to apply a gamma when processing the images.
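To make the maths concrete, here's a minimal sketch of the per-pixel arithmetic: normalising an 8-bit linear sample to a float, then applying the gamma of 0.5 mentioned above. This is illustrative only, not the program's actual code:

```c
#include <stdint.h>

/* Convert an 8-bit linear sample to a normalised 32-bit float in [0, 1],
   as per NASA's "proportional to the amount of light" description */
float pixel_to_float(uint8_t v)
{
	return (float) v / 255.f;
}

/* Apply the gamma of 0.5 discussed above, which for normalised values
   amounts to squaring every pixel value */
float apply_gamma_half(float v)
{
	return v * v;
}
```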