I’m a #SoftwareDeveloper from #Switzerland. My languages are #Java, #CSharp, #Javascript, German, English, and #SwissGerman. I’m in the process of #LearningJapanese.

I like to make custom #UserScripts and #UserStyles to personalize my experience on the web. In terms of #Gaming, currently I’m mainly interested in #VintageStory and #HonkaiStarRail. I’m a big fan of #Modding.
I also watch #Anime and read #Manga.

#fedi22 (for fediverse.info)

  • 0 Posts
  • 11 Comments
Joined 1 year ago
Cake day: March 11th, 2024

  • Update 7/31/25 4:10pm PT: Hours after this article was published, OpenAI said it removed the feature from ChatGPT that allowed users to make their public conversations discoverable by search engines. The company says this was a short-lived experiment that ultimately “introduced too many opportunities for folks to accidentally share things they didn’t intend to.”

    Interesting, because the checkbox is still there for me. I don’t see that anything has changed at all; maybe they made the fine print whiter? But nothing else.

    In general, this reminds me of the incognito-mode drama. Iirc people were unhappy that incognito mode didn’t prevent Google websites from fingerprinting you. Which… the mode never claimed to do; it explicitly told you it didn’t do that.

    For chats to be discoverable through search engines, you not only have to explicitly and manually share them, you also have to then opt in to having them appear on search engines via a checkbox.

    The main criticism I’ve seen is that the checkbox’s main label only says it makes the chat “discoverable”, while the search-engine clarification is in the fine print. But I don’t really understand how that is unclear. Like, even if they made chats discoverable through ChatGPT’s website only (so no third-party data sharing), Google would still get its hands on them via its crawler. This is just them skipping the middleman; the end result is the same. We’d still hear news about chats appearing on Google.

    This just seems to me like people clicking a checkbox based on vibes rather than critical thought about what consequences it could have and whether they want them. I don’t see what can really be done against people like that.

    I don’t think OpenAI can be blamed for the data sharing, as it’s opt-in, nor for the chats ending up on Google at all. If the latter were a valid complaint, it would also be valid to complain to the Lemmy devs about Lemmy posts appearing on Google. And again, I don’t think the label complaint has much weight to it either, because if a chat is discoverable, it gets to Google one way or another.

  • Here’s a question regarding the informed consent part.

    The article gives the example of asking whether the recipient wants the AI’s answer shared.

    “I had a helpful chat with ChatGPT about this topic some time ago and can share a log with you if you want.”

    Do you (I mean people reading this thread generally, not OP specifically) think Lemmy’s spoiler formatting would count as informed consent if properly labeled as containing AI text? I mean, the user has to put in the effort to open the spoiler manually.