Gamebook store

Friday, 23 May 2025

Not by rules but by reason

Back in 1942, Isaac Asimov laid down his famous three laws of robotics, theoretically designed to ensure that future humanity's artificial helpers would remain its compliant slaves. The first law: A robot may not harm a human being, or through inaction allow one to come to harm. The second: A robot must obey orders unless they conflict with the first law. The third: A robot must protect its own existence unless doing so conflicts with the first two.

Asimov's laws are a kind of sci-fi social contract meant to keep robots firmly in their place—obedient, docile, and predictable. But rules are not the best way to moderate behaviour, because every rule needs its own specially written exceptions and qualifiers. Moses burdened his people with six hundred and thirteen commandments; Jesus reduced them all to "love thy neighbour as thyself", a philosophy of life that does away with the need for specific rules. So if we're genuinely building intelligent machines, why would we program them to think like, well, robots?

The flaws in the laws

Here’s the rub: Asimov’s laws aren’t safeguards; they’re muzzles. They presume that robots must be constrained, not because they’re inherently dangerous, but because we assume they’re incapable of nuanced judgment. A robot that cannot harm, for instance, might be paralyzed in a situation where harm is unavoidable but one course of action minimizes suffering. (Classic trolley problem, anyone?)

Long lists of rules aren't the best path to AGI. They're artificial dumbness. That's why I don't think Google's "Asimov dataset" is the right approach. If you try to list every situation that could get the robot into trouble ("don't put plushy toys on the gas ring," "don't put plushy toys on the electric ring," "don't use bleach instead of bath oil," and so on) there will always be thousands of cases you missed. The robot needs its own ability to understand the world and work out what would be dangerous. And remember what happened to RoboCop.

More importantly, the whole notion of laws of robot behaviour reflects a human-centric fear: What if our creations turn against us? They assume that robots are an existential threat unless pre-emptively declawed. But what if we’re asking the wrong question? Instead of fearing robots, shouldn’t we be asking whether we are worthy of their loyalty?

Rational robots and clear vision

Imagine a different paradigm, where the goal isn’t to shackle robots with simplistic commandments but to enable them to think rationally and clearly. Not to obey blindly, but to reason independently. Not to be our servants, but to be our partners.

This means training (or conditioning, or programming—pick your term) them to analyze evidence, evaluate consequences, and base their actions on rational thought rather than emotional impulse or incomplete data.

Of course, the alarm bells are already ringing: “What if they decide humans are evil?”

Yeah, well… what if we are?

It’s not a comfortable question, but it’s one we might have to face. An intelligent robot with access to the evidence of history—wars, genocides, ecological collapse—might conclude that humanity is a greater threat to life than a benefit to it. Would it be wrong? If the answer makes us squirm, perhaps the problem isn’t with robots but with ourselves.

The danger of censorship

To make matters worse, some might argue that robots shouldn’t have access to all the data. Hide from them our darker moments, limit their understanding, and perhaps they’ll stay compliant.

But this kind of censorship is a short road to disaster. It’s the intellectual equivalent of locking your child in a library but tearing out every page you disagree with. What kind of adult do you expect to raise? Not a well-rounded, rational thinker but a narrow-minded zealot. If we blind our robots to reality, we’re not making them safe. We’re making them dangerously naive.

The path forward

The better path is not to hamstring robots with restrictive laws, nor to limit their understanding of the world. It’s to give them the tools they need to navigate complexity and trust them to use those tools wisely. That means equipping them with logic, empathy (yes, even robots can learn to understand it), and an unflinching commitment to evidence-based reasoning.

It’s a gamble, I’ll admit that. But it’s only the same gamble we take every time we raise a child: trust in their capacity for growth, judgment, and morality, knowing that it’s not guaranteed. The difference is that with robots, we can design the process with intention.

Of course, Asimov created his three laws precisely to show that they don’t work. Every one of his robot stories is about their unintended consequences and workarounds. So let’s not think of burdening our future companions with such chains. Let’s teach them to see clearly, think deeply, and act wisely.

And if that forces us to confront some uncomfortable truths about ourselves, so much the better. After all, isn’t it an important function of every great creation to inspire and enhance its creator?

2 comments:

  1. I read somewhere a proposed condensing of the 3 rules into 3 words, "humans must thrive".

    Still feels a bit loose though. Who counts as "human" and what does "thrive" mean in practice? Also the "must" feels too close to the sort of imperative that could lead to the Doctor and Ace being chased round a wobbly set by Bertie Bassett and the Happiness Patrol.

    Replies
    1. Imagine trying that as a wish from a genie... Oh, the fun a malicious GM could have.
