Spectral JPEG XL utilizes a technique used with human-visible images, a math trick called a discrete cosine transform (DCT), to make these massive files smaller […] it then applies a weighting step, dividing higher-frequency spectral coefficients by the overall brightness (the DC component), allowing less important data to be compressed more aggressively.
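The weighting step the quote describes can be sketched in a few lines. This is only an illustration of the idea, not the format's actual code: the DCT implementation and the sample spectrum below are made up.

```python
import numpy as np

def dct_ii(x):
    # Orthonormal DCT-II of a 1-D signal, written out from the definition.
    n = len(x)
    k = np.arange(n)
    basis = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    scale = np.sqrt(np.where(k == 0, 1 / n, 2 / n))
    return scale * (basis @ x)

# One pixel's reflectance across eight hypothetical wavelength bands.
spectrum = np.array([0.8, 0.7, 0.65, 0.6, 0.5, 0.45, 0.4, 0.35])

coeffs = dct_ii(spectrum)
dc = coeffs[0]            # the DC component: overall brightness
weighted = coeffs.copy()
weighted[1:] /= dc        # divide higher-frequency (AC) coefficients by DC
```

After this weighting, the AC coefficients are expressed relative to the pixel's brightness, so a subsequent quantization step can discard more of them in bright regions without a proportionally visible error.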
This all sounds like standard JPEG compression. Is it just JPEG with extra channels?
Yeah, though it also compresses better, and JPEG XL can be configured to compress losslessly, which I imagine would also work here
Lossless JPEG would be amazing.
In my experience, as you increase the quality level of a JPEG, the compression ratio drops significantly, much more than with some other formats, notably PNG. I’d be curious to see comparisons with PNG and GIF. I wouldn’t be surprised if the new JPEG compresses better at some resolutions but not all, or only with some kinds of images.
Last I checked, JPEG XL takes a lot of time and resources to encode (create) an image, if you actually want it to be far more optimized than JPEG.
What, pickle.dump your enormous Numpy array not good enough for you anymore? Not even fancy zlib.compress(pickle.dumps(enormousNumpyArray)) will satisfy you? Are you a scientist or a spectral data photographer?

I guess part of the reason is to have a standardized method for multi- and hyperspectral images, especially for storing things like metadata. Simply storing a numpy array may not be ideal if you don’t keep metadata on what is being stored and in what order (i.e. axis order, which channel corresponds to each frequency band, etc.). Plus it seems like they extend lossy compression to this modality, which could be useful in some circumstances (though for scientific use you’d probably want lossless).
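For what it’s worth, the naive approach the joke describes does round-trip losslessly; it just carries no standard semantics. A quick stdlib-plus-NumPy sketch (the array contents are invented):

```python
import pickle
import zlib

import numpy as np

# A fake 8-band "spectral cube": band x height x width.
cube = np.linspace(0.0, 1.0, 8 * 16 * 16, dtype=np.float32).reshape(8, 16, 16)

blob = zlib.compress(pickle.dumps(cube))
restored = pickle.loads(zlib.decompress(blob))

assert np.array_equal(cube, restored)  # lossless round trip
```

The round trip is exact, but nothing in the blob says which axis is which or what wavelength each band covers, which is the gap a standardized container is meant to fill.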
If compression isn’t the concern, certainly other formats could work to store metadata in a standardized way. FITS, the image format used in astronomy, comes to mind.
Saving arbitrary metadata is the exact use case for the pickle module; you just put it together with your numpy array into a tuple. The JPEG format has support for storing metadata, but it’s an afterthought, like .mp3 tags: half of all applications don’t support it.

I can imagine multichannel JPEG being used in photo editing software, so you can effortlessly create false-color plots of your infrared data, maybe even apply a beauty filter to your Eagle Nebula microwave scans.
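The tuple trick, sketched concretely. Note that the field names here are invented on the spot, which is exactly the problem: every lab would invent its own.

```python
import pickle

import numpy as np

# Ad-hoc metadata: nothing enforces these key names, units, or axis labels.
metadata = {
    "axis_order": ("band", "y", "x"),
    "band_centers_nm": [450, 550, 650, 850],
}
cube = np.zeros((4, 8, 8), dtype=np.float32)

blob = pickle.dumps((cube, metadata))
restored_cube, restored_meta = pickle.loads(blob)
```

It works, but a reader of the file has to already know this particular tuple layout and dictionary schema.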
I agree that pickle works well for storing arbitrary metadata, but my main gripe is that it isn’t like there’s an exact standard for how the metadata should be formatted. For FITS, for example, there are keywords for metadata such as the row order, CFA matrices, etc. that all FITS processing and displaying programs need to follow to properly read the image. So to make working with multi-spectral data easier, it’d definitely be helpful to have a standard set of keywords and encoding format.
It would be interesting to see whether photo editing software picks up multichannel JPEG. As of right now there are very few sources of multi-spectral imagery for consumers, so I’m not sure what the target use case would be. The closest thing I can think of is narrowband imaging in astrophotography, but normally you process those in dedicated astronomy software (e.g. Siril, PixInsight), though you can also re-combine different wavelengths in traditional image editors.
I’ll also add that HDF5 and Zarr are good options to store arrays in Python if standardized metadata isn’t a big deal. Both of them have the benefit of user-specified chunk sizes, so they work well for tasks like ML where you may have random accesses.
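A minimal h5py sketch of the chunking point (the dataset name, chunk shape, and attribute are arbitrary choices, not anything the format mandates):

```python
import h5py
import numpy as np

rng = np.random.default_rng(0)
cube = rng.random((16, 64, 64), dtype=np.float32)  # band, y, x

with h5py.File("cube.h5", "w") as f:
    # One chunk per band: reading a single wavelength touches one chunk.
    dset = f.create_dataset(
        "cube", data=cube, chunks=(1, 64, 64), compression="gzip"
    )
    dset.attrs["axis_order"] = "band,y,x"

with h5py.File("cube.h5", "r") as f:
    band = f["cube"][3]  # random access to one band
```

With chunks aligned to the access pattern, a random read only decompresses the chunks it overlaps, which is what makes this layout pleasant for ML-style shuffled access.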