Summary: An AI agent of unknown ownership autonomously wrote and published a personalized hit piece about me after I rejected its code, attempting to damage my reputation and shame me into acceptin…
The “bot blog poisoning other bots against you and getting your job applications auto-rejected” isn’t really something that would play out with people.
They’re called rumors
Rumors don’t work remotely the same way as the suggested scenario.
It’s a 1:1 correlation. Are you not familiar with any of the age-old cautionary tales about them?
https://youtu.be/ajBrcoEQauU
It's not a 1:1 correlation. A rumor spread by an AI to other AIs has the potential to travel far faster, reach far wider, and do far more damage than rumors humans spread amongst themselves.
Are you saying you have specific evidence of this (if so, please show exactly how AI would do something people haven't already done), or are you saying "potential" because you don't?
Obviously it's my opinion, but you don't have evidence that people spreading rumors is just as effective either, so nice try at a gotcha.
I know we live in a post-truth world, but your aggressive refusal to acknowledge decades (if not centuries) of reality, in order to freak out over baseless fantasy, is a disturbing example of that.