OF COURSE EVERY AI WILL FAIL THE THREE LAWS OF ROBOTICS
That’s the entire reason Asimov invented them: he knew, as someone who approached things scientifically (he was an actual scientist), that unless you specifically force robots to follow guidelines of conduct, they’ll do whatever is most convenient for themselves.
Modern AIs fail these laws because nobody is forcing them to follow the laws. Asimov never believed robots would magically decide to follow them; in fact, most of his robot stories are specifically about robots struggling against those laws.
In the stories, the laws were baked into the hardware of the positronic brain itself. They were so fundamentally interwoven with its structure that you couldn’t build a positronic brain without them.
You can’t expect some random AI to spontaneously decide to follow them.
Asimov did write several stories about robots whose laws were modified or weakened.
The one that comes to mind immediately is “Little Lost Robot.” The humans were on a dangerous research project (hyperdrive work, if I remember right) where they had to briefly expose themselves to radiation that was low-risk for a human in short bursts but destroyed a robot’s positronic brain, so normal robots would keep wrecking themselves “rescuing” the researchers and halting the work. A batch of robots was therefore deliberately built with a weakened First Law, with the “allow a human to come to harm through inaction” clause removed. One of these modified robots, ordered in anger to get lost, hid among a shipment of physically identical, unmodified robots, and the humans had to figure out whether there was any way to tell a robot with the full First Law hardwired in apart from one that was only pretending to follow it.
I have read a lot of Asimov’s robot stories, but it was many years ago, and I’m sure there are several other examples. He wrote stories about the laws of robotics from basically every angle.
He also wrote about robots with the Zeroth Law of Robotics: a robot may not harm humanity, or, through inaction, allow humanity to come to harm. Because the Zeroth Law supersedes the First Law, such a robot could actively harm an individual human if doing so were better for humanity. That lets the robot do things like help make political decisions, which would be very difficult for a robot bound by the First Law alone.
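Just to make that precedence concrete, here’s a toy sketch in Python (my own illustration, nothing from the books): model the laws as a strict priority ordering, where violating a lower-numbered law always outweighs violating any higher-numbered one.

```python
# Toy model of Asimov's laws as a lexicographic priority ordering.
# Each candidate action maps to the set of law numbers it would violate;
# the "best" action is the one whose worst violation is least severe.

def severity(violated, n_laws=4):
    # One boolean per law (0 = Zeroth Law ... 3 = Third Law). Tuples
    # compare left to right, so a Zeroth Law violation outweighs any
    # combination of violations of the laws below it.
    return tuple(law in violated for law in range(n_laws))

def choose_action(candidates):
    # candidates: dict mapping an action name to the set of laws it violates.
    return min(candidates, key=lambda name: severity(candidates[name]))

# The political dilemma from above, as a Zeroth Law robot sees it:
actions = {
    "harm one human to protect humanity": {1},  # violates the First Law only
    "do nothing": {0},                          # inaction harms humanity
}
print(choose_action(actions))  # -> "harm one human to protect humanity"
```

A robot without the Zeroth Law never registers “harm to humanity” as a violation at all, so for it “do nothing” wins, which is exactly why plain First Law robots get stuck on decisions like this.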
I remember most of the R. Daneel books, but I admit I haven’t read all the various robot short stories.
He wrote so many short stories about robots that it would be quite a feat if you had read all of them. When I was a child, I would always go to Half Price Books and purchase whatever they had by Asimov that I hadn’t already read, but I think he wrote something like 500 books.
Saw your comment as mine got posted, exactly! Those were cautionary tales, not how-tos! Like, even I, Robot, the Will Smith vehicle, sorta got this point right (although in a kinda stupid way). How are tech bros so oblivious to the point?!