How long do you think it will be before somebody wants to marry their personal robot? A while, of course; non-experimental personal robots don’t exist yet, and it’s likely to be a good long while before personal robots acquire enough personality to attract marriage proposals. But then people get awfully attached to pets. What about a cuddly robot with the inscrutable personality of, say, a cat? Should we be thinking about laws regarding robot marriage? Probably not, yet. More plausibly, should we be thinking about the legal situation of robots that injure people? Especially if they do it by choice. Can you sue a robot?
You may have heard about Asimov’s Laws of Robotics:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Forget ‘em. It will be a long, long time before robot intelligence is sufficient to deal with these ‘laws.’ I’m not even sure human intelligence can handle them. No, long before we get to such a subtle matrix of laws, we’ll get to a fuzzy area in legal responsibility.
Traditional product law, in simple terms, holds the manufacturer liable for injuries caused by its products. There are limiting circumstances, which can be quite complicated and vary from country to country (and industry to industry). Robots will presumably fall under some variant of normal product law: in most cases you would sue the manufacturer. At least you would, until it can be shown that the robot made its own decision.
There is a vast gray area between the strict command programming of a robot and a robot with the freedom to ‘think for itself.’ The gray area is the terrain of a robot programmed with a set of options for a given situation, and also programmed with the ability to choose among them. The command robot doesn’t do anything that isn’t explicitly programmed – turn left, stop, feed the dog. A robot with the ability to ‘think for itself’ is already in the science fiction zone where Asimov’s Laws of Robotics are feasible. The gray area, which we are already entering with experimental robots, is where the robot can sense a situation, weigh multiple behavioral alternatives, and choose one of them.
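The distinction between the command robot and the gray-area robot can be sketched in a few lines of code. This is purely illustrative – the function names, the option structure, and the scoring rule are all invented for the example, not drawn from any real robot:

```python
def command_robot(command):
    """The command robot: executes only explicitly programmed commands."""
    actions = {"left": "turn left", "stop": "stop", "feed": "feed the dog"}
    # no matching command means no behavior at all
    return actions.get(command, "do nothing")

def gray_area_robot(situation, options):
    """The gray-area robot: senses a situation, weighs the alternatives
    it was programmed with, and chooses one of them. It decides, but
    only among options a human supplied."""
    def score(option):
        # count how many sensed features each programmed option suits
        return sum(1 for feature in situation if feature in option["suits"])
    return max(options, key=score)["action"]

# Hypothetical usage: the robot makes the choice, but every alternative
# (and the scoring rule) came from its maker.
options = [
    {"action": "slow down", "suits": {"pedestrian", "rain"}},
    {"action": "continue",  "suits": {"clear_path"}},
]
print(gray_area_robot({"pedestrian", "rain"}, options))  # -> slow down
```

The legal gray area lives exactly in that `max(...)` call: the manufacturer wrote the options and the scoring rule, yet the selection in any particular situation was made by the machine.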
For semi-intelligent robots, as for humans – only sooner – life at some point becomes too complex for good judgment. Sooner or later a robot that can make its own choices will make a mistake, and a human will be injured. Can the manufacturer be sued? Surely the manufacturer cannot be held responsible for every situation a robot may find itself in – especially when the robot has the ability to make choices? Can the owner of the robot be sued? Perhaps, if the owner created some sort of misguided or intentionally damaging situation.
Can the robot be sued? No. Not now, and possibly never. Not only would the robot need its own identity; it would need its own agency – the ability to respond legally and be responsible for its actions and penalties. After all, we don’t sue pets, or children. That’s where legal responsibility for robots really lies – in the same territory as laws governing the actions of children, except that in this case, unlike with parents, the ‘maker’ and the ‘owner’ are almost always different parties. That means laws regarding semi-intelligent robots will be more difficult to adjudicate than typical product law, and more complicated than cases involving parental responsibility.
As is often the case, technology seems to open fertile new fields for the legal profession to till.