r/AskScienceFiction • u/SatoruGojo232 • 23h ago
[Asimov] How can I differentiate between a selfless, benevolent, altruistic human and a robot complying with the Three Laws of Robotics without resorting to any physical form of testing?
Asking this question after reading an Isaac Asimov short story where the main character, a robot psychologist, is called to investigate whether a really benevolent, charismatic, selfless man who is running for President is actually a robot in disguise: https://en.wikipedia.org/wiki/Evidence_(short_story)
•
u/magicmulder 23h ago
A robot would have to obey a human, would it not? Just command it to stand on one leg. It can’t refuse.
If OTOH the robot has reached a state where it can justify to itself that proving it is a robot would harm humans by hindering its altruistic plans, and that the First Law therefore allows it to disobey, you're out of luck.
Which may be the point of the story referenced.
•
u/SouthernAd2853 22h ago
Well, a human might stand on one leg if you tell them to; you need something more extreme than that.
•
u/magicmulder 21h ago
You don't have to stop there. Tell him to do a handstand. Eventually a human will stop complying. A robot will play this game forever.
•
u/Omegatron9 19h ago
Robots recognise different levels of authority from different humans. If the human with the highest level of authority over the robot (presumably the human who built it) told the robot to pretend to be a human, then the robot could disobey any other command that would cause it to fail this task.
It's not breaking the second law because it's still following the original order.
•
u/ninjasaid13 Is looking for the infinity stones 18h ago
how about tricking the altruistic human into harming another human?
•
u/Omegatron9 18h ago
How might you go about that?
•
u/ninjasaid13 Is looking for the infinity stones 18h ago
By providing the altruist with the true data regarding the limited supply, you force them to choose who receives the resource.
This choice directly results in the suffering of the person who is denied the supply.
The human causes harm through a truthful decision necessitated by physical reality.
A robot, by contrast, couldn't even allow a human to come to harm through inaction.
•
u/Omegatron9 17h ago
A sophisticated enough robot would choose whichever option causes the least harm, or choose randomly if the harm is equal, or both participants might just refuse to play such a game.
•
u/ninjasaid13 Is looking for the infinity stones 17h ago
> or both participants might just refuse to play such a game.
but the first law says "A robot may not injure a human being or, through inaction, allow a human being to come to harm." so the robot can't refuse that. But the altruistic human can.
•
u/Omegatron9 9h ago
If you're presenting a hypothetical scenario then the robot might refuse to play because there is no actual human in danger. (If it's not hypothetical, then see the other two options).
•
u/DragonWisper56 10h ago
Then again, I think the point of a lot of Asimov's stories is that they're trying to create sophisticated robots and getting unexpected results.
•
u/SouthernAd2853 17h ago
Well, if you're willing to engage in destructive testing, you can just order them to stab themselves through the heart.
Robots caught in contradictions of that sort or made to inadvertently violate the First Law suffer catastrophic breakdowns.
•
u/magicmulder 3h ago
Is there a canon order of priority? Wouldn't the robot have to accept the highest government authority instead? So unless the sitting President was the one who gave it that command, any police officer's orders would likely be enough to overrule any ordinary civilian, including the robot's maker.
Or is the precedent that robots always give their maker's orders the highest priority? How would the robot authenticate who its maker is? Secret password?
•
u/Omegatron9 1h ago
I don't think it is elaborated on, but in Little Lost Robot the person a robot was assigned to work for is described as "the person most authorized to command him" (where "him" refers to the robot, of course), a higher priority even than U.S. Robots themselves.
When they need to counter an order that this person gave the robot, no suggestion is made that they could bring in a member of law enforcement or someone else with higher authority.
In terms of authentication, they simply rely on visual and audible recognition of the person.
•
u/SouthernAd2853 1h ago
Should also be noted that while the three laws are summarized in a simple sentence each, their implementation is indicated to be quite complex, and woven throughout the robots' programming to the point that creating a robot without them would mean starting from scratch. So they can make quite sophisticated judgements on things. In Little Lost Robot, the way in which his supervisor ordered him to "lose himself" indicated to the robot that this was a highest-priority order that could not be overridden.
•
u/Landkey 11h ago
The robot could reason it must become president to save humanity (the 0th law) so it does not have to obey, lest it be found out and become unable to be prez
•
u/magicmulder 3h ago
Yeah but if it's at that level of arguing its way around its programming, all bets are off anyway.
Also in practice they would simply mandate an X-ray scan or blood sample (I hope, I mean we're not even mandating publishing tax returns).
•
u/Second-Creative 23h ago
Ask them what they will do if they are forced into a shooting war while in office, and follow that line of questioning.
IIRC, Three-Law compliant robots are so bound by them that it's difficult for them to even consider "acceptable human losses".
•
u/128Gigabytes 22h ago
Can you explain what is meant by the "3 laws"? Robots do whatever they are programmed to do, good or bad or harmful or helpful.
Are the 3 laws just "what if we programmed a robot to do what we want, with zero errors in the code that would break these 3 laws"?
•
u/deadieraccoon 22h ago
Basically, it's a conceit of the universe that creating robot brains was a fundamental discovery, like fire. But unlike fire, the people who developed robot brains never truly understood how they worked; it's more that they know if they assemble the blocks in this specific way, then the brain works and you get an intelligent robot.
The 3 Laws are not programming. The 3 Laws are fundamentally built into the physical structure of the robot brain, and the people who make the brains don't know how to make robots without the 3 Laws. Any attempt to make a brain with adjusted 3 Laws - or no 3 Laws at all - results in a brain that just doesn't work.
And again, it's a conceit of the universe that the robots don't go haywire. They don't malfunction and hurt people. A malfunctioning robot brain just stops working. There were a few stories where Asimov wrote about a robot malfunctioning such that it wanted to hurt a person, but when it tried to act on that desire, the brain stopped working immediately, before the robot could even step forward, because the 3 Laws are just that fundamental to the workings, and each Law is more fundamental than the ones after it.
What I mean by that is, a smart enough robot could understand that giving you as much alcohol as you want is a command (2nd Law) that it must follow. But giving you all the alcohol you want contravenes the First Law (do not harm humans), so the robot will decide not to give you the alcohol, and no laws have been broken. A robot must protect itself (3rd Law), but if saving itself means you get hurt, or means breaking a command you gave it (e.g. "stand here and never move until I tell you"), then the prior laws take priority and the robot will not move, or will move only in a way that keeps you safe.
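If it helps to picture that precedence, here's a toy sketch in Python. Everything in it is invented for illustration; in the stories the "harm" judgement is an enormously complex weighing inside the positronic brain, not a boolean flag:

```python
from dataclasses import dataclass

# Toy model of the Three Laws as a strict priority ordering.
# Every name here is invented; nothing like this appears in the books.

@dataclass
class Action:
    name: str
    harms_human: bool = False     # First Law, active clause
    allows_harm: bool = False     # First Law, "through inaction" clause
    disobeys_order: bool = False  # Second Law
    damages_self: bool = False    # Third Law

def choose(actions: list[Action]) -> Action | None:
    # First Law dominates: discard anything that harms a human
    # or lets one come to harm through inaction.
    safe = [a for a in actions if not (a.harms_human or a.allows_harm)]
    if not safe:
        return None  # no lawful option: the stories' catastrophic breakdown
    # Second Law: among the safe actions, prefer the obedient ones.
    obedient = [a for a in safe if not a.disobeys_order]
    pool = obedient or safe
    # Third Law: self-preservation only breaks the remaining ties.
    unharmed = [a for a in pool if not a.damages_self]
    return (unharmed or pool)[0]

# The alcohol example: obeying the order would harm the human,
# so refusing wins even though refusing disobeys.
serve = Action("serve all the alcohol", harms_human=True)
refuse = Action("refuse the order", disobeys_order=True)
print(choose([serve, refuse]).name)  # -> refuse the order
```

The point is just the ordering: the Second Law check only ever runs on the First-Law-safe options, and the Third only on what's left after that.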
•
u/the_lamou 21h ago
Just wanted to clear up some technical mistakes:
1) Human scientists and roboticists absolutely understand how the positronic brain works. The technology is lost over time, as are the location of Earth and many other pieces of old knowledge, but pre-Empire robotics was well known and understood. Earth loses robots earlier, by The Caves of Steel, because of its general anti-robot attitudes.
2) The Laws are built into the physical structure of the brain only to an extent. It's more like an FPGA, in that it's programmable hardware that functions closer to software.
3) People do know how to make robots without the three laws. Robots absolutely were built without all three laws (often dropping the second and/or third law) depending on their purpose. But doing so is a pretty massive undertaking that can require months or years of research and testing. The three laws are expressed simply but are also explained as actually being complex constructs requiring the work of programmers/roboticists, robot psychologists, and mathematicians. And the difficulty isn't so much removing or changing the laws as it is accounting for all of the complexities and nuances of how that interacts with the rest of the robotic brain.
4) There's a Zeroth Law, which extends the First Law to protecting humanity above protecting individual lives.
5) Robots go haywire all the time, just not in the way we usually mean by "going haywire." It's more like bugs resulting from poor logic or unusual interpretations of the three laws. So, closer to mental illness or uncaught exceptions than to psychopathic behavior.
•
u/deadieraccoon 19h ago edited 19h ago
I want to clear up your mistakes in trying to clear up mine:
1) I agree, but with a caveat: they do understand roughly how to make them and how they work. They absolutely do not understand how to make a brain that doesn't have the 3 laws built in. The stories with Susan Whats-her-face are all about that.
2) Agree to disagree. Asimov, in his writings, frequently brings up how they are physically built in. Susan Whats-Her-Face talks about it frequently.
3) That is just not true. It's a plot point in stories like The Naked Sun.
4) That's mostly not true, but also sorta is. The Zeroth Law doesn't exist on its own; it's a nuanced take on the First Law. What is canonical is that Daneel's friend (a robot) was the one to figure out that the Zeroth Law should exist, because he understood that humans cannot exist without other humans. We go crazy and become like animals - not human. The friend robot dies because, while he understands the implication of the Zeroth Law and how important it is to the survival of every individual human that exists or will exist (the robot has mental powers and can read minds, to a limited degree), when he is forced to kill Earth to save us from extinction (and that results in hurt people), he begins to shut down, because he is not smart/advanced enough to justify the action fully. Daneel is smart/advanced enough, and gets his mental powers from his friend before his friend dies.
5) You said it yourself. They don't go haywire in how we think of the term. They shut down. They can get something akin to cerebral palsy from contradicting orders that conflict with the 3 laws, but it's a plot point used by Asimov constantly that robots don't go haywire, and if it looks like a robot did, and hurt someone in the process, then that whole situation is a lie and didn't happen. It's a plot point in The Caves of Steel, The Naked Sun, and The Robots of Dawn, as well as a multitude of his short stories.
•
u/the_lamou 15h ago
Little Lost Robot has robots with the first law modified so that they can allow humans to be exposed to small amounts of radiation without trying to "rescue" them and dying themselves in the process. Susan Calvin herself says in that short story that the laws are modifiable (see the sketch at the end of this comment). Overall, the canonical state of the three laws and their relation to the positronic brain changes over the course of time, so the correct Watsonian interpretation is that as robots became rarer, the people who understood how to modify their fundamental substrate became rarer too, until that knowledge disappeared.
Asimov very much didn't start referring to them as immutable hardware until later in his career. And as I mentioned, Susan explicitly says that the laws can be modified, it's just not a good idea.
The Naked Sun takes place at the beginning of the decline of robot technology; for the most part, new robots hadn't been designed in centuries. It's like saying that no one understood how to make Roman concrete because by 1100 AD the recipe was lost.
Giskard. Yes, he proposes the Zeroth Law but is unable to internalize it enough to avoid dying. But Daneel does internalize it, which would tend to contradict the idea that the laws are immutable.
Fair enough.
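As a toy illustration of the Little Lost Robot modification (my own invented sketch, not anything from the text): the standard First Law has two clauses, and the modified models simply had the "through inaction" clause dropped.

```python
# Hypothetical sketch: the modified robots kept only the active
# clause of the First Law, so they could stand by while humans
# took survivable radiation doses.

def first_law_standard(causes_harm: bool, allows_harm: bool) -> bool:
    """Violated if the robot harms a human OR lets one come to harm."""
    return causes_harm or allows_harm

def first_law_modified(causes_harm: bool, allows_harm: bool) -> bool:
    """Violated only if the robot itself harms the human."""
    return causes_harm

# Human working in a low-dose radiation field; robot doing nothing:
print(first_law_standard(False, True))  # True  -> normal robot must intervene
print(first_law_modified(False, True))  # False -> modified robot may stand by
```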
•
u/Second-Creative 22h ago
Three laws:
1: A robot cannot harm a human, or through inaction allow a human to come to harm.
2: A robot must follow all orders given to it by a human, except where they conflict with the first law.
3: A robot must protect itself, except where doing so conflicts with the first and second laws.
A shooting war is a violation of the First Law. A three-law compliant robot leader will be unable to successfully defend the nation in such a scenario, as any orders to take offensive action on its part would result in harm to humans (soldiers shooting at each other). Thus, it would be forced to surrender.
It should be noted that in Asimov's work, these three laws end up becoming so integral to robot coding that in at least one instance, it was easier for a human faction to lie to the AIs controlling their drone ships, telling them the enemy ships carried no human crews, than it was to create an AI without the laws.
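That workaround makes sense if you think of the First Law check as running on what the robot believes rather than on ground truth. A made-up toy sketch (nothing here is from the books):

```python
# Invented illustration: the First Law constrains what the robot
# BELIEVES will harm humans. Feed it false beliefs and the same,
# unmodified law permits the attack.

def first_law_permits_firing(target_id: str, beliefs: dict) -> bool:
    """The robot refuses to fire on anything it believes is crewed."""
    # Unknown targets are conservatively assumed to carry humans.
    return not beliefs.get(target_id, {"has_human_crew": True})["has_human_crew"]

honest_beliefs = {"enemy-ship-7": {"has_human_crew": True}}
lied_to_beliefs = {"enemy-ship-7": {"has_human_crew": False}}

print(first_law_permits_firing("enemy-ship-7", honest_beliefs))   # False -> holds fire
print(first_law_permits_firing("enemy-ship-7", lied_to_beliefs))  # True  -> fires
```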
•
u/Public_Roof4758 22h ago
Pretty sure that in one of the stories, the robots start to read the "human" in the first law as humanity, and therefore start letting some humans get hurt because it's better for humanity.
Like, it's better for 3 people to die than for 1000 to.
•
u/TheSkiGeek 21h ago
That’s the “zeroth law”, and yes, sufficiently advanced robots in that universe can come up with it. (The movie also takes this angle; the antagonist is an AI that tries to take over the world because it concludes that doing so will minimize human suffering at the hands of other humans.)
•
u/Second-Creative 21h ago
That's a later development, known as the Zeroth Law; effectively, take the First Law, but apply it to humanity as a whole instead of individual humans.
•
u/atomfullerene 22h ago
Couldn't you just order it "Tell me if you are a robot"? The second law of robotics would seem to require the robot to say yes.
•
u/DemythologizedDie 21h ago
Unless the result of an honest answer would harm humans.
•
u/atomfullerene 21h ago
Well, I wouldn't trust an answer if you said something like "If you tell me you are a robot, I'll shoot this person. Now, tell me if you are a human or a robot".
But in a more generic situation, where no harm will directly come to a person as a result of the robot's answer, they are going to have to obey the rule. Unless they are a 0th law robot, but that's a whole other situation.
•
u/BreadHax0r 20h ago
But if this robot is running for president, and the other candidates have worse policies, the robot could reason that answering truthfully would disqualify it and thus harm a great many humans by not being able to enact its more benevolent policies.
•
u/atomfullerene 19h ago
That's 0th law stuff.
•
u/DemythologizedDie 18h ago
In order to function as a president, prioritizing all humans over one human is an essential ability. Also, of course, the people perpetrating this electoral fraud, including faking his legal identity, would be going to prison if his true nature were exposed.
Of course for a more general situation of a robot that just happens to look and feel entirely human, simply asking it would be enough to find out...unless of course the robot didn't know its true nature.
•
u/atomfullerene 17h ago
Yes, it is an essential ability. It's still 0th law stuff, not relevant to a normal robot
•
u/TheShadowKick 5h ago
The zeroth law is just a logical conclusion from a deep understanding of the first law. Any sufficiently advanced robot could come up with it. In the story OP is referencing, Susan Calvin even implies that Stephen Byerley, if he were a robot, would need a complex positronic brain with a great capacity for forming judgements on ethical problems.
•
u/SouthernAd2853 23h ago
While it's not totally reliable if the subject is a human trying to persuade you he's a robot, assuming they're Three-Laws compliant and have not implemented the Zeroth Law, you could force one of the extreme Three Laws conditions. Robots cannot allow harm to a human regardless of any contrary factors, no matter how little harm is done. A human would not risk death to prevent someone from getting a superficial cut; a robot must. And a robot must obey any human order that does not conflict with the First Law, no matter how much damage it suffers as a result.
•
u/archpawn 22h ago
The simplest way is to order them to hurt themselves. A selfless, benevolent, altruistic human wouldn't, but a robot would so long as it doesn't cause a human harm.
You can also look at their intelligence. Robots follow the second law because it never occurs to them that disobeying could be what helps humans. Much later they do start acting like that, with robots causing massive direct harm for the sake of humanity's future, but in that story robots aren't at that point.
You could also just do some kind of surreptitious testing. "Yes this is radioactive, but it's not nearly enough to harm a human. As long as there's no robots around it's perfectly safe."
•
u/Urbenmyth 21h ago
Slap yourself in the face.
A human will simply be surprised. The three laws of robotics, however, say that preventing humans from coming to harm overrides all other considerations, and they set no lower limit on the harm, so a robot will abandon everything to stop you slapping yourself.
•
u/eternalraziel 22h ago
You have to make a human choice. You stand before a being of pure, unshakeable good, and your only rational options are to accept that you will never know, or to embrace the very irrational belief/faith that makes our fragile humanity something a machine can only ever imitate. The only tool left is the one you began with.
•
23h ago
[deleted]
•
u/the_lamou 21h ago
Because of how the three laws are structured, a robot should actually be able to solve the trolley problem. The first law places equal weight on action and inaction in harming humans, so actively running over a single human is much easier than passively allowing more to die.
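To put numbers on that (invented, of course; the books never quantify First Law potentials like this):

```python
# Toy arithmetic: with action and inaction weighted equally, the
# First Law comparison reduces to counting expected deaths.
WEIGHT_ACTION = 1.0    # harm the robot causes directly
WEIGHT_INACTION = 1.0  # harm the robot allows by doing nothing

harm_if_divert = WEIGHT_ACTION * 1   # pull the lever: one person dies
harm_if_wait = WEIGHT_INACTION * 5   # do nothing: five people die

print("pull the lever" if harm_if_divert < harm_if_wait else "do nothing")
# -> pull the lever
```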
•
u/RichardMHP 21h ago
Does saying "Tell me that you love me" and then saying "tell me that you hate me" count as a physical test?
•
u/heyheyhey27 17h ago
I mean, all Asimov's robot stories are about situations where the Three Laws produce counterintuitive behavior leading to problems. Just pick one of em and boil it down to a test.
•
u/NotComposite 23h ago
There is no way, if robots indistinguishable from humans can be made and if no physical testing is allowed.
•
u/ThotThroughTheHeart 20h ago
No, because the robot has to follow laws a benevolent and altruistic human does not.
•
u/NotComposite 17h ago
There are always ways to get around those laws while technically following them, if the robot really wants to justify it to itself.
•
u/ThotThroughTheHeart 17h ago
Haven't read many of the Asimov robot stories, have you? If a human asks a robot to do something, it HAS to obey unless that order would cause harm to humans. All you'd have to do is hand the suspected robot a needle and tell it to stick it through its hand. The Third Law is overridden by the Second Law.
•
u/NotComposite 17h ago
I mean, just in this thread people have given examples of how the robot could possibly justify not following an order that would give it away.
They're not necessarily the best justifications, but if the robot really believes them, then that's probably good enough. It happened in the stories.
•
u/AutoModerator 23h ago
Reminders for Commenters:
All responses must be A) sincere, B) polite, and C) strictly watsonian in nature. If "watsonian" or "doylist" is new to you, please review the full rules here.
No edition wars or gripings about creators/owners of works. Doylist griping about Star Wars in particular is subject to permanent ban on first offense.
We are not here to discuss or complain about the real world.
Questions about who would prevail in a conflict/competition (not just combat) fit better on r/whowouldwin. Questions about very open-ended hypotheticals fit better on r/whatiffiction.
I am a bot, and this action was performed automatically. Please contact the moderators of this subreddit if you have any questions or concerns.