cross-posted from: https://programming.dev/post/37278389
Optical blur is an inherent property of any lens system and is challenging to model in modern cameras because of their complex optical elements. To tackle this challenge, we introduce a high‑dimensional neural representation of blur—the lens blur field—and a practical method for acquisition.
The lens blur field is a multilayer perceptron (MLP) designed to (1) accurately capture variations of the lens 2‑D point spread function over image‑plane location, focus setting, and optionally depth; and (2) represent these variations parametrically as a single, sensor‑specific function. The representation models the combined effects of defocus, diffraction, and aberration, and accounts for sensor features such as pixel color filters and pixel‑specific micro‑lenses.
We provide a first‑of‑its‑kind dataset of 5‑D blur fields—for smartphone cameras, camera bodies equipped with a variety of lenses, etc. Finally, we show that acquired 5‑D blur fields are expressive and accurate enough to reveal, for the first time, differences in optical behavior of smartphone devices of the same make and model.
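To make the "blur field as an MLP" idea concrete, here is a minimal sketch in NumPy. It assumes the MLP maps a 5‑D coordinate (image‑plane position x, y, point‑spread‑function offset u, v, and focus setting f) to a scalar PSF intensity; the layer sizes and the random (unfit) weights are illustrative, not the paper's actual architecture, and a real blur field would be fit to calibration captures of point sources.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(in_dim=5, hidden=64, out_dim=1):
    # Two hidden layers; weights are random placeholders here. A real
    # lens blur field would be trained on calibration images.
    dims = [in_dim, hidden, hidden, out_dim]
    return [(rng.normal(0.0, np.sqrt(2.0 / d_in), (d_in, d_out)), np.zeros(d_out))
            for d_in, d_out in zip(dims[:-1], dims[1:])]

def blur_field(params, coords):
    """Evaluate the MLP at 5-D coordinates (x, y, u, v, focus)."""
    h = coords
    for W, b in params[:-1]:
        h = np.maximum(h @ W + b, 0.0)  # ReLU hidden layers
    W, b = params[-1]
    return h @ W + b                    # scalar PSF intensity per query

params = init_mlp()
# Query the PSF at image-plane (0.1, -0.2), PSF offset (0, 0), focus 0.5
psf_val = blur_field(params, np.array([[0.1, -0.2, 0.0, 0.0, 0.5]]))
print(psf_val.shape)  # (1, 1)
```

Because the whole field is one continuous function of these coordinates, the PSF can be queried at any sub-pixel location and focus setting, which is what lets a single sensor-specific network replace a grid of measured kernels.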
…for every photo
That’s a challenge, but even switching to a “generic” lens should help reduce identifiability.
Why would a generic lens be any better? These distortions are part of the lens design and manufacturing. Arguably, a lower-quality lens would be easier to identify.
But not identifiable to a specific type of phone.
I didn’t see anything suggesting that these aberrations indicate anything about the type of phone. They’re unique to each lens…
Ah, I see. Do non-phone-specific generic lenses even exist? They all seem pretty specialized to me.
I know that a lot of the cheap Android handsets, which we mostly encounter as prepaid, have interchangeable camera bits.