cross-posted from: https://programming.dev/post/37278389

Optical blur is an inherent property of any lens system and is challenging to model in modern cameras because of their complex optical elements. To tackle this challenge, we introduce a high‑dimensional neural representation of blur—the lens blur field—and a practical method for acquisition.

The lens blur field is a multilayer perceptron (MLP) designed to (1) accurately capture variations of the lens 2‑D point spread function over image‑plane location, focus setting, and optionally depth; and (2) represent these variations parametrically as a single, sensor‑specific function. The representation models the combined effects of defocus, diffraction, and aberration, and accounts for sensor features such as pixel color filters and pixel‑specific micro‑lenses.
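The core object can be sketched in a few lines. This is an untrained toy, not the authors’ architecture: the layer sizes, the softmax normalization, and the 7×7 kernel size are my assumptions; the only idea taken from the abstract is an MLP mapping (x, y, focus, depth) to a point spread function.

```python
import numpy as np

def init_mlp(rng, sizes):
    # Small random fully connected network: list of (weights, biases).
    return [(rng.standard_normal((a, b)) * 0.1, np.zeros(b))
            for a, b in zip(sizes[:-1], sizes[1:])]

def blur_field(params, coords):
    # coords: (N, 4) array of (x, y, focus, depth), each roughly in [0, 1].
    h = coords
    for W, b in params[:-1]:
        h = np.tanh(h @ W + b)
    W, b = params[-1]
    logits = h @ W + b                       # (N, K*K) flattened kernels
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)  # softmax: each row is a PSF

K = 7                                        # assumed kernel size
rng = np.random.default_rng(0)
params = init_mlp(rng, [4, 64, 64, K * K])
psfs = blur_field(params, rng.random((3, 4)))  # 3 query points -> 3 PSFs
```

Fitting such a network to measured PSFs is the acquisition step the paper describes; the softmax guarantees each predicted kernel is non-negative and sums to 1, as a PSF should (reshape a row to `(K, K)` to visualize it).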

We provide a first‑of‑its‑kind dataset of 5‑D blur fields—for smartphone cameras, camera bodies equipped with a variety of lenses, etc. Finally, we show that acquired 5‑D blur fields are expressive and accurate enough to reveal, for the first time, differences in optical behavior of smartphone devices of the same make and model.

  • Seefra 1@lemmy.zip · 1 day ago

    It’s old news that you should never use the same camera for two images that need separate identities.

    The same applies to radio transmitters, and probably to every analogue stage: microphone, preamp, ADC.

    Anything that doesn’t work purely in the digital domain is most likely traceable, and I wouldn’t be surprised if proprietary software like Adobe’s started embedding hidden fingerprints in its files to “enforce copyright” or “better collaborate with law enforcement”.

    I tend to complain that ROMs like GrapheneOS don’t allow spoofing the IMEI, which should be basic functionality on every privacy-focused phone. Yet if you need real privacy, the electronic “fingerprint” of the radio itself is probably enough to track you if they really want to.

    There’s also a technique (ENF analysis) where someone’s recording can be tied to a time and place just from the oscillations of the utility power frequency picked up in it.
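The core of that trick is easy to sketch: mains hum leaks into recordings, and its exact frequency wanders around the 50/60 Hz nominal in a pattern the grid logs over time. A minimal tracker with made-up parameters (real ENF forensics uses long recordings and reference databases of grid frequency, not a toy peak-picker):

```python
import numpy as np

def enf_track(signal, sr, nominal=50.0, win_s=2.0):
    # Per-window FFT peak near the nominal mains frequency.
    n = int(sr * win_s)
    freqs = np.fft.rfftfreq(n, 1 / sr)
    band = (freqs > nominal - 1) & (freqs < nominal + 1)
    track = []
    for start in range(0, len(signal) - n + 1, n):
        win = signal[start:start + n] * np.hanning(n)
        spec = np.abs(np.fft.rfft(win))
        track.append(freqs[band][np.argmax(spec[band])])
    return np.array(track)

# Synthetic "recording": hum drifted slightly above nominal, to 50.2 Hz.
sr = 1000
t = np.arange(4 * sr) / sr
track = enf_track(np.sin(2 * np.pi * 50.2 * t), sr)
```

Matching a track like this against the grid operator’s frequency log is what lets investigators date and place a recording.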

    • Pup Biru@aussie.zone · 15 hours ago

      > The same applies to radio transmitters, and probably to every analogue stage: microphone, preamp, ADC.

      exactly why when you buy any halfway decent mic there’s the option to buy a matched set: they come off the production line together so that their imperfections are as close to each other as possible and they sound as nearly identical as they can

    • Mgineer@lemmy.ml · 16 hours ago

      > Anything that doesn’t work purely in the digital domain is most likely traceable

      at this point I believe digital is easier to trace, as every device that has ever connected to the Internet (or to a device that has) has probably been bugged

    • irmadlad@lemmy.world · 1 day ago

      > It’s old news that you should never use the same camera for two images that need separate identities.

      Sanitize the metadata and EXIF data?
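In practice that’s `exiftool -all= photo.jpg`, or any tool that rewrites the file without its metadata segments. The idea is simple enough to sketch with the stdlib; this toy drops APP1–APP15 and comment segments from a JPEG and ignores corner cases (restart markers, thumbnails embedded inside EXIF, multi-scan files), so treat it as an illustration, not a sanitizer:

```python
import struct

def strip_jpeg_metadata(data: bytes) -> bytes:
    # Walk JPEG marker segments; drop APP1..APP15 (EXIF, XMP) and COM.
    assert data[:2] == b"\xff\xd8", "not a JPEG"
    out = bytearray(b"\xff\xd8")
    i = 2
    while i < len(data):
        if data[i] != 0xFF:
            out += data[i:]          # unexpected byte: copy the rest as-is
            break
        marker = data[i + 1]
        if marker == 0xDA:           # start-of-scan: image data follows
            out += data[i:]
            break
        (length,) = struct.unpack(">H", data[i + 2:i + 4])
        seg = data[i:i + 2 + length]
        if not (0xE1 <= marker <= 0xEF or marker == 0xFE):
            out += seg               # keep everything except APPn/COM
        i += 2 + length
    return bytes(out)
```

Note the thread’s whole point, though: this only removes the labels, not the optical fingerprint baked into the pixels.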

      • Seefra 1@lemmy.zip · 1 day ago

        That’s probably enough to stop your online mates from doxxing you, but a powerful enough adversary can trace the subtle, unique fingerprints a camera lens introduces into a picture and compare them with images from other sources like social media.

        There are many steps that can introduce patterns: the way the lens blurs, as explained in the article, sensor readout noise patterns, a speck of dust, scratches. I bet chromatic aberrations also differ between copies of the same lens.

  • CookieOfFortune@lemmy.world · 1 day ago

    That means we could get better image processing by applying this analysis in reverse. It would also reduce this type of fingerprinting.

    • Eagle0110@lemmy.world · 14 hours ago

      Uhhh I feel like reversing this wouldn’t be much easier than trying to reverse hash functions lol

      • CookieOfFortune@lemmy.world · 14 hours ago

        We already apply distortion correction based on lens profiles; probably all cell phone cameras do this. The profiles just aren’t calibrated for individual phones.
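If the per-device blur were measured (which is exactly what the paper’s blur fields provide), undoing it is classic deconvolution rather than hash reversal. A sketch, not the paper’s method: Wiener-style regularized inversion, with a toy box-filter PSF standing in for a measured kernel.

```python
import numpy as np

def wiener_deblur(blurred, psf, snr=1000.0):
    # Invert a known blur in the frequency domain; the 1/snr term keeps
    # near-zero frequencies of the PSF from blowing up.
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * G))

rng = np.random.default_rng(1)
img = rng.random((8, 8))
psf = np.ones((3, 3)) / 9.0   # toy 3x3 box blur (circular convolution)
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf, s=img.shape)))
restored = wiener_deblur(blurred, psf, snr=1e6)
```

The catch for real images is that the PSF varies across the frame and with focus, which is precisely why the paper models it as a field rather than a single kernel.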

  • icelimit@lemmy.ml · 1 day ago

    Does this blur persist past digital (automatic) post-processing? Or is it otherwise still uniquely traceable?

    • Pup Biru@aussie.zone · 15 hours ago

      i’d guess that the digital processing is a well-known transformation, so you can account for it

      after all, modern post-processing on a phone is afaik done on the raw sensor data, so it uses a lot more data than is actually stored in the JPEG: it probably leaves more information available than if it weren’t done (more shadow detail rather than crushed blacks, etc.)

    • interdimensionalmeme@lemmy.ml · 2 days ago

      Have a coordinated volunteer project where people print and photograph a special calibration pattern designed to map the blur and other aberrations of their particular lens. With hundreds of thousands of samples, we train a micro-distortion ML model that subtly shifts and distorts the pixels just enough to make positive lens identification impossible. Then have something auto-apply this filter (and discard the originals) to every picture before it even has a chance of being uploaded to the cloud.
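The anonymizing filter itself doesn’t need to be exotic. A toy version of the micro-distortion idea, with an invented smooth sine-based displacement field instead of a learned one (every parameter here is illustrative):

```python
import numpy as np

def micro_distort(img, rng, max_shift=0.5, waves=4):
    # Build a smooth sub-pixel displacement field from a few random
    # low-frequency sine components, then resample bilinearly.
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w].astype(float)
    dx = np.zeros((h, w)); dy = np.zeros((h, w))
    for _ in range(waves):
        fy, fx = rng.uniform(0.5, 2.0, 2)
        py, px = rng.uniform(0, 2 * np.pi, 2)
        dx += np.sin(2 * np.pi * fx * x / w + px)
        dy += np.sin(2 * np.pi * fy * y / h + py)
    dx *= max_shift / waves; dy *= max_shift / waves
    xs = np.clip(x + dx, 0, w - 1); ys = np.clip(y + dy, 0, h - 1)
    x0 = xs.astype(int); y0 = ys.astype(int)
    x1 = np.minimum(x0 + 1, w - 1); y1 = np.minimum(y0 + 1, h - 1)
    ax = xs - x0; ay = ys - y0
    return ((1 - ax) * (1 - ay) * img[y0, x0] + ax * (1 - ay) * img[y0, x1]
            + (1 - ax) * ay * img[y1, x0] + ax * ay * img[y1, x1])

rng = np.random.default_rng(2)
img = rng.random((16, 16))
out = micro_distort(img, rng)
```

Sub-pixel shifts like these leave the image visually unchanged but scramble exactly the fine-grained PSF structure a fingerprinting model would key on; a real system would presumably learn the field adversarially against a lens-identification model, as the comment suggests.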

  • communism@lemmy.ml · 1 day ago

    Is it possible to use some kind of random noise algorithm to modify the image so that devices can’t be uniquely identified like this anymore? Or would that not work?

    • interdimensionalmeme@lemmy.ml · 1 day ago

      The noise would have to be strong enough to obscure the lens’s aberrations, and that would be an obnoxious amount of noise. Instead, I think a better solution is to add micro-distortions strategically to make identification ambiguous/inconclusive.

      • Pup Biru@aussie.zone · 15 hours ago

        perhaps simply putting something like cling wrap over the lens and moving it for each photo would be enough: it adds scratches and roughness that change slightly each time you move it

    • Ross_audio@lemmy.world · 1 day ago

      Just my guess. I could be wrong:

      As lens blur is mathematically fairly simple and spread across the whole image, it’s likely already replicated consistently by AI, in a similar way to real photos.

      It’s easier for generative AI to spot, “understand”, and replicate a mathematical pattern than to get the number of fingers on a hand or limbs on a body right.

      • howrar@lemmy.ca · 12 hours ago

        It also helps that the current generation of image generation models essentially work by “deblurring” some random noise. Having a blur in the resulting image just means the model has to do less work in a sense.

      • webghost0101@sopuli.xyz · 1 day ago

        Also a guess: isn’t a hand, or any biological form, also the result of a mathematical pattern?

        I do see how AI could replicate “a” blur, but what it might not be able to do (yet) is replicate the unique blur of a specific device.

        So maybe you couldn’t prove something is AI, but the physical lens could serve as proof that it is not.

        • aashd123@feddit.nl · 17 hours ago

          You wouldn’t share your physical lens for high-risk work (i.e. where you are anonymous), and since there’s no way to know whether a specific “blur” was produced by a physical lens or by AI, this won’t help prove that something is AI.

        • Ross_audio@lemmy.world · 1 day ago

          Hands appear differently in different positions all over the frame, so I maintain that the hand pattern is less consistent and harder to replicate than lens blur.

          But you’re right: since the blur is a fingerprint, you can match it to a lens and prove a photo is real that way.

          It could be a useful tactic, since most AI detection so far is about finding and proving AI fakes.

  • Endymion_Mallorn@kbin.melroy.org · 1 day ago

    So you’re saying: always ‘scratch’ your lens and get a repair shop to replace it with a generic lens. And if possible, get the CCD changed to a compatible one as well.