Thus far, this column has focused on providing interesting answers to descriptive questions about the law. What does the law say about spoilers? How do EULAs work? What kind of deal would Kingpin be able to strike? And so on. But when it comes to science fiction, there are a lot of questions that don’t lend themselves to descriptive answers. Many of the most interesting questions involve scenarios where there is either no legal system at all or a legal system that doesn’t resemble, or even come close to resembling, any system that exists or has ever existed on Earth. On a descriptive level, these questions are easy to answer, but the answers are entirely unsatisfying. For example:
Question: Would The Walking Dead’s Rick Grimes be convicted of killing thousands of zombies?
Answer: The entire point of The Walking Dead is that society as we know it has collapsed. In that world, there are no courts. If there were, then Rick Grimes would not have had to kill zombies in the first place.
Question: What is Qui-Gon Jinn’s best legal justification or defense theory for training Anakin Skywalker despite the Jedi Council’s prohibition?
Answer: We can’t possibly know since we don’t know any of the Council’s rules, procedures, or adjudicatory standards.
There is another broad category of questions that evade descriptive answers because they relate to scenarios or technologies that are so new, and so unprecedented, that there is no basis to say one way or the other what the law would actually do. These questions posit scenarios that could be worked into our current legal system, but just haven’t come up yet. For example:
Question: How would the government regulate the X-Men?
Answer: We have no idea, since we have never seen anything even remotely resembling superpowered mutants. The Second Amendment right to bear arms would probably come in somewhere, as would the Fifth Amendment right to due process, and the Fourteenth Amendment right to equal protection under the law. But aside from those vague references, we’ve got nothing.
These questions, and ones like them, cannot really be answered in any meaningful way. But that doesn’t mean the law has nothing to say about these issues. Instead of descriptive questions about what the law is, the inquiry can shift to normative questions about what the law should be.
Normative questions tend to be the most interesting, but they’re also the most challenging and often the most controversial. You cannot provide a coherent argument for what a law, rule, or right should be unless you first have a firm understanding of why it would exist and whom it is supposed to benefit. These questions cannot be answered through simple reference to a statute, encyclopedia, or judicial decision. Instead, they require us to consider and confront our assumptions about how we view the world and how we interact with one another.
Consider the question of robot rights. This issue appears in countless science-fiction stories: should a researcher be allowed to disassemble Star Trek: The Next Generation’s Data in order to learn more about robotics? Should it be illegal to imprison the robot in Ex Machina? Should the holographic Doctor in Star Trek: Voyager have authorship rights over his stories? And so on.
In order to answer these questions, you must have a working theory of where rights come from and why they exist. Under a theory of natural law, you might think that rights come from God, that they are fixed, and that they exist to benefit humans and humans alone. Under that theory, robots, as non-humans, would not be entitled to any rights. If you subscribed to a utilitarian theory of law, which defines legal rules and rights based on maximizing utility, then the question of robot rights would turn on a series of questions such as:
- Can robots “feel”?
- If robots are lifelike, then would a denial of rights to robots change the way we interact with humans?
- Are robots likely to respond negatively to a denial of rights? Could they respond violently?
While many of these questions present challenges of their own, answering them would help us define what rights robots should have. For example, if robots don’t need to eat or sleep, then it is easy to conclude that they should not be entitled to the basic human rights to food and shelter, even if they do have the right to be treated with basic dignity.
One of the most challenging aspects of these kinds of questions is that answers are rarely all-or-nothing. For example, the law may conclude that robots do not have genuine emotions, and thus that they should not be allowed to benefit from legal rights and protections related to emotional injuries such as negligent or intentional infliction of emotional distress, or alienation of affection. At the same time, it could also be argued that the laws relating to emotional injuries exist not just to benefit the injured but also to discourage individuals from behaving inappropriately. This could result in a law that allows individuals to be punished for traditionally inappropriate behavior, even if the robot is not the beneficiary of the punishment.
As another example, consider intellectual property rights. If robots are creative actors and respond to financial incentives, then one might conclude that they should be entitled to the full panoply of intellectual property protections since those protections exist to incentivize innovation and to encourage individuals to share their discoveries or creations. In Europe, authors are also entitled to a series of “moral rights” that extend beyond the traditional economic intellectual property protections. If one concludes that robots cannot feel a moral or dignitary connection to their work, then one could argue that Europe should retain economic rights for robot authors while jettisoning the moral rights.
As you think about how to approach new and unexplored questions of law, whether they relate to robot rights, mutant protections, or your obligations in the zombie apocalypse, it is almost impossible to ignore the current legal system. When identifying the assumptions and motivations that underlie a solution to some outlandish sci-fi scenario, you will inevitably compare them to the assumptions underlying our current legal system. The result is that an exploration of sci-fi scenarios and hypotheticals often leads to a more nuanced understanding of our current legal system, and can sometimes pave the way for a paradigm shift. This is why good science fiction stories are essentially deep lessons in moral philosophy, and why the best science fiction stories promote real and positive social and legal change.
The irony is that fanciful, imaginative stories can have a substantial and meaningful impact on how we perceive the law and its underlying foundations. The stories may take place in distant lands or across the stars, but the lessons they teach, and the questions they raise, can change the world and the law in the here and now.