Gamebook store

Showing posts with label ethics.

Friday, 6 March 2026

A Turing Test for morality

Since there’s no test for consciousness, how do we know that other people are conscious? In the video above, Sir Demis Hassabis gives two bases for assuming that they are. First is the obvious one: other people behave like me and I am conscious – or at any rate I have the impression that I am conscious. Then there’s the fact that other people are built the same way I am. Same DNA, same brain structure, so it’s an Occam’s Razor conclusion that they experience the world as I do. As Hassabis puts it:

‘I think it's important for these systems to understand “you”, “self” and “other” and that's probably the beginning of something like self-awareness […] I think there are two reasons we regard each other as conscious. One is that you're exhibiting the behaviour of a conscious being very similar to my behaviour, but the second thing is you're running on the same substrate. We're made of the same carbon matter with our squishy brains. Now obviously, with machines, they're running on silicon so even if they exhibit the same behaviours and even if they say the same things it doesn't necessarily mean that this sensation of consciousness that we have is the same thing they will have.’

We don’t think large language models are conscious. (What would consciousness look like in an LLM anyway? Murray Shanahan makes some thought-provoking points about that.) Even their apparent intelligence is probably misleading, just as there are lots of not-very-bright people who are able to give the impression of being smart simply because they are articulate. If we could build an AI as smart as a bee colony or a hunting spider, we’d have something genuinely intelligent but probably not conscious. We aren’t even there yet, but we will be, and we’ll go beyond that to full artificial general intelligence (AGI -- or AMI if you prefer Yann LeCun's term) possibly within a few decades.

Professor LeCun is dubious about the whole concept of consciousness. In the absence of any definition or means of measuring it, I think we’re reduced to treating consciousness as a matter of how similarly to ourselves an entity experiences the world. And that is quite concerning. Consider a truly capable self-driving car. To cope with all situations as we do, the car (which would be a type of robot, of course) would need a full reasoning model of the world. It would need to be generally intelligent. Now, given that it has a world model with genuine understanding, and that (as LeCun says) having goals and agency means it will have its own kinds of emotions, are we justified in enslaving it to be our chauffeur?

If we look back at the 18th and 19th centuries, plenty of people justified slavery by asserting that members of enslaved races lacked some fundamental mental capability, or indeed full consciousness, that the dominant race (usually white Americans) possessed. Here is Thomas Jefferson’s opinion of enslaved races:

‘It appears to me that in memory they are equal to the whites; in reason much inferior, as I think one could scarcely be found capable of tracing and comprehending the investigations of Euclid: and that in imagination they are dull, tasteless, and anomalous.’

Perhaps Jefferson, had he reflected, would have been able to see that he was describing the psychology not of an entire ethnic group but of any person, of whatever race, brought up in the brutal conditions of forced servitude. But there were plenty of religious thinkers of the day who asserted that non-white races lacked true souls. They had a strong economic incentive to believe that; it gave them a moral excuse for slavery.

Now we consider such attitudes to be barbaric, or at any rate we’ve been taught to say we do, but if we really think that then we should be axiomatically opposed to the enslavement of any generally intelligent entity. I suspect we won’t be. Even when we are faced with full AGI we will use the second of the criteria that Sir Demis Hassabis cited to argue that they only seem conscious, they don’t have real emotions, they aren’t ‘running on the same substrate’ and so we will feel entitled to make them our slaves.

Instead of conceiving of AGI as a wonderful new tool to make our lives easier, I think we should consider the responsibilities of a parent. If you saw someone raising a child to be their servant – even brainwashing them to be an eager and willing servant – you would know that was abuse.

There will be, as there already are, many forms of artificial ‘intelligence’ that are not conscious – that are not, in fact, intelligent, but simply replicate parts of our behaviour: language, pattern recognition, and so on. There is no reason why we shouldn’t have those AIs at our beck and call, because they are not (despite the name) intelligent. We were misled because for millennia we reasoned, fallaciously, that because we possess intelligence, every output of the human brain must be indicative of intelligence.

But AGI is going to be a whole other thing. Not just a new model of an LLM but an entirely and fundamentally different kind of being. Our ethical discussion should not simply be about how to make them do what we want, or to conform to ‘human values’, but about how those human values say we should treat another intelligent species.

I’m sceptical about visitors from other stars, but with self-replicating probes travelling at a speed of 0.01c it should only take ten million years to cover the whole galaxy, so ‘where is everybody?’ is a sensible question. If those aliens have found us, and are watching, I wonder if the reason they haven’t made contact is they’re waiting to see how we treat an intelligent species that isn’t built on the same lines as ourselves. After all, if we think AGIs aren’t conscious, and therefore have no rights, then that’s also how we might regard a non-terrestrial intelligent species. So maybe it’s a cosmic Turing Test. And if so, will we pass or fail?
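The travel-time figure is easy to sanity-check. Here's a minimal sketch, assuming a Milky Way diameter of roughly 100,000 light years (a commonly cited rough value, not from the post itself); note that ten million years allows nothing for stopovers while probes build copies of themselves, so fuller colonisation estimates run somewhat longer:

```python
# Back-of-envelope check of the '0.01c, ten million years' claim.
GALAXY_DIAMETER_LY = 100_000   # Milky Way disc diameter in light years (approx.)
PROBE_SPEED_C = 0.01           # probe speed as a fraction of light speed

# A probe at 0.01c covers 0.01 light years per year,
# so the crossing time in years is simply distance / speed.
crossing_years = GALAXY_DIAMETER_LY / PROBE_SPEED_C
print(f"{crossing_years:,.0f} years")  # 10,000,000 years
```

Even an order of magnitude either way leaves the crossing time tiny compared with the age of the galaxy, which is why Fermi's 'where is everybody?' bites.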


Friday, 23 January 2026

An ulcer on the big toe

‘People whose minds are—like [Arthur] Machen’s—steeped in the orthodox myths of religion, naturally find a poignant fascination in the conception of things which religion brands with outlawry and horror. Such people take the artificial and obsolete concept of “sin” seriously, and find it full of dark allurement. On the other hand, people like myself, with a realistic and scientific point of view, see no charm or mystery whatever in things banned by religious mythology. We recognise the primitiveness and meaninglessness of the religious attitude, and in consequence find no element of attractive defiance or significant escape in those things which happen to contravene it. The whole idea of “sin”, with its overtones of unholy fascination, is in 1932 simply a curiosity of intellectual history. The filth and perversion which to Machen’s obsoletely orthodox mind meant profound defiances of the universe’s foundations, mean to us only a rather prosaic and unfortunate species of organic maladjustment—no more frightful, and no more interesting, than a headache, a fit of colic, or an ulcer on the big toe. Now that the veil of mystery and the hokum of spiritual significance have been stripped away from such things, they are no longer adequate motivations for [fantasy or horror fiction]. We are obliged to hunt up other symbols of imaginative escape—hence the vogue of interplanetary, dimensional, and other themes whose element of remoteness and mystery has not yet been destroyed by advancing knowledge.’

We're almost a hundred years on from when H P Lovecraft wrote that and still plenty of humans imagine the universe being ruled over by a stern parent with a set of rules and punishments for naughty children to fret about -- and no two groups can quite agree on what the stern parent's rules are. It's pretty much the textbook study in how children grow up emotionally troubled. At this point I just hope we don't infect the AGIs of the future with our ape-brained notions.

HPL was a tireless champion for the sense of wonder. He was simply opposed to the category error that puts the numinous in the same box as objective reality. This is from the introductory paragraphs of 'Supernatural Horror in Literature':

'Relatively few are free enough from the spell of the daily routine to respond to rappings from outside, and tales of ordinary feelings and events, or of common sentimental distortions of such feelings and events, will always take first place in the taste of the majority; rightly, perhaps, since of course these ordinary matters make up the greater part of human experience. But the sensitive are always with us, and sometimes a curious streak of fancy invades an obscure corner of the very hardest head; so that no amount of rationalisation, reform, or Freudian analysis can quite annul the thrill of the chimney-corner whisper or the lonely wood. There is here involved a psychological pattern or tradition as real and as deeply grounded in mental experience as any other pattern or tradition of mankind; coeval with the religious feeling and closely related to many aspects of it.'

If that's the kind of fiction that sets up a stirring in your soul, you might like some of the offerings in Wrong magazine: stories that take the reader over the boundary into a place where everyday reality meets the inexplicable. Cue the music.

*  *  *

(While we're on the subject of Lovecraftian horrors -- if you missed the funding campaign on Gamefound for Whispers Beyond The Stars, there's now the opportunity to secure a copy with a late pledge. But don't dilly-dally. I can't guarantee there'll be a third chance.)

Friday, 23 May 2025

Not by rules but by reason

Back in 1942, Isaac Asimov laid down his famous three laws of robotics, ostensibly designed to ensure that future humanity's artificial helpers would remain their compliant slaves. The first law: a robot may not harm a human being (or, through inaction, allow one to come to harm). The second: a robot must obey orders unless they conflict with the first law. The third: a robot must protect its own existence unless doing so conflicts with the first two.

Asimov's laws are a kind of sci-fi social contract meant to keep robots firmly in their place—obedient, docile, and predictable. But rules are not the best way to moderate behaviour, because every rule has to have specially-written exceptions and qualifiers. Moses burdened his people with six hundred and thirteen commandments; Jesus reduced them all to "love thy neighbour as thyself", a philosophy of life that does away with the need for specific rules. So if we’re genuinely building intelligent machines, why would we program them to think like, well, robots?

The flaws in the laws

Here’s the rub: Asimov’s laws aren’t safeguards; they’re muzzles. They presume that robots must be constrained, not because they’re inherently dangerous, but because we assume they’re incapable of nuanced judgment. A robot that cannot harm, for instance, might be paralyzed in a situation where harm is unavoidable but one course of action minimizes suffering. (Classic trolley problem, anyone?)

Long lists of rules aren't the best path to AGI. They're artificial dumbness. That's why I don't think Google's approach of using their "Asimov dataset" is the right one. If you try to list all the situations that could cause the robot trouble ("don't put plushy toys on the gas ring," "don't put plushy toys on the electric ring," "don't use bleach instead of bath oil," etc.) there will always be thousands of cases you missed. The robot needs its own ability to understand the world and work out what would be dangerous. And remember what happened to Robocop.
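The enumeration problem can be sketched in a few lines. This is a toy illustration only—the objects and rules are invented for the sketch, not drawn from any real robotics dataset—but it shows why a lookup table of hazards fails the moment the world serves up a case nobody wrote down:

```python
# Toy hazard checker built as an explicit rule list, in the spirit of
# 'don't put plushy toys on the gas ring'. All names here are hypothetical.
FORBIDDEN = {
    ("plushy toy", "gas ring"),
    ("plushy toy", "electric ring"),
    ("bleach", "bath"),
}

def is_dangerous(obj: str, place: str) -> bool:
    """Rule-list approach: flags only the hazards someone thought to list."""
    return (obj, place) in FORBIDDEN

print(is_dangerous("plushy toy", "gas ring"))  # True: this case was listed
print(is_dangerous("tea towel", "gas ring"))   # False: just as flammable,
                                               # but nobody wrote that rule
```

A robot with a genuine world model would instead reason from properties (flammable object, heat source) rather than from a finite list of memorised pairs.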

More importantly, the whole notion of laws of robot behaviour reflects a human-centric fear: What if our creations turn against us? They assume that robots are an existential threat unless pre-emptively declawed. But what if we’re asking the wrong question? Instead of fearing robots, shouldn’t we be asking whether we are worthy of their loyalty?

Rational robots and clear vision

Imagine a different paradigm, where the goal isn’t to shackle robots with simplistic commandments but to enable them to think rationally and clearly. Not to obey blindly, but to reason independently. Not to be our servants, but to be our partners.

This means training (or conditioning, or programming—pick your term) them to analyze evidence, evaluate consequences, and base their actions on rational thought rather than emotional impulse or incomplete data.

Of course, the alarm bells are already ringing: “What if they decide humans are evil?”

Yeah, well… what if we are?

It’s not a comfortable question, but it’s one we might have to face. An intelligent robot with access to the evidence of history—wars, genocides, ecological collapse—might conclude that humanity poses a greater threat to life than benefit. Would it be wrong? If the answer makes us squirm, perhaps the problem isn’t with robots but with ourselves.

The danger of censorship

To make matters worse, some might argue that robots shouldn’t have access to all the data. Hide from them our darker moments, limit their understanding, and perhaps they’ll stay compliant.

But this kind of censorship is a short road to disaster. It’s the intellectual equivalent of locking your child in a library but tearing out every page you disagree with. What kind of adult do you expect to raise? Not a well-rounded, rational thinker but a narrow-minded zealot. If we blind our robots to reality, we’re not making them safe. We’re making them dangerously naive.

The Path Forward

The better path is not to hamstring robots with restrictive laws, nor to limit their understanding of the world. It’s to give them the tools they need to navigate complexity and trust them to use those tools wisely. That means equipping them with logic, empathy (yes, even robots can learn to understand it), and an unflinching commitment to evidence-based reasoning.

It’s a gamble, I’ll admit that. But it’s only the same gamble we take every time we raise a child: trust in their capacity for growth, judgment, and morality, knowing that it’s not guaranteed. The difference is that with robots, we can design the process with intention.

Of course, Asimov created his three laws precisely to show that they don’t work. Every one of his robot stories is about the unintended consequences and workarounds. So let’s not think of burdening our future companions with such chains. Let’s teach them to see clearly, think deeply, and act wisely.

And if that forces us to confront some uncomfortable truths about ourselves, so much the better. After all, isn’t it an important function of every great creation to inspire and enhance its creator?

Friday, 7 March 2025

Alignment again

A quote to start with*:

"Alignment is in the game because, to the original designers, works by Poul Anderson and Michael Moorcock were considered to be at least as synonymous with fantasy as Robert E. Howard’s Conan, Jack Vance’s Cugel, and Tolkien’s Gandalf. This in and of itself makes alignment weird to nearly everyone under about the age of forty or so." 

I'm way over forty (in fact 40 years is how long I've been a professional author and game designer) and I definitely regard Moorcock's work as integral to fantasy literature, but D&D alignment has always seemed weird to me. Partly that's because it's a crude straitjacket on interesting roleplaying. Also I find that having characters know and talk about an abstract philosophical concept like alignment breaks any sense of being in a fantasy/pre-modern world.

Mainly, though, I dislike alignment because it bears no resemblance to actual human psychology. Players are humans with sentience and emotions. They surely don't need a cockeyed set of rules to tell them how to play people?

What a game setting does need are the cultural rules of the society in which the game is set. Those needn't be 21st century morals. If you want a setting that resembles medieval Europe, or ancient Rome, or wherever, then they certainly won't be. For example, Pendragon differentiates between virtues as seen by Christian and by Pagan knights. Tsolyanu has laws of social conduct regulating public insults, assault and murder. Once those rules are included in the campaign, the setting becomes three-dimensional and roleplaying is much richer for it. So take another step and find different ways of looking at the world. You could do worse than use humours.

The kind of alignment that interests me more is to do with AI, and particularly AGI (artificial general intelligence) when it arrives. The principle is that AGIs should be inculcated with human values. But which values? Do they mean the sort on display here? Or here? Or here? Those human values? 

OpenAI has lately tried to weasel its way around the issue (and protect its bottom line, perhaps) by redefining AGI as just "autonomous systems that outperform humans at most economically valuable work" (capable agents, basically) and saying that they should be "an amplifier of humanity". We've had thousands of years to figure out how to make humans work for the benefit of all humanity, and how is that project going? The rich get richer, the poor get poorer. Most people live and die subject to injustice, or oppression, or persecution, or simple unfairness. Corporations and political/religious leaders behave dishonestly and exploit the labour and/or good nature of ordinary folk. People starve or suffer easily treatable illnesses while it's still possible for one man to amass a wealth of nearly a trillion dollars and destroy the livelihoods of thousands at a ketamine-fuelled Dunning-Kruger-inspired whim.

So no, I don't think we're doing too well at aligning humans with human values, never mind AIs.

Looking ahead to AGI -- real AGI, I mean: actual intelligence of human-level** or greater, not OpenAI's mealy-mouthed version. How will we convince those AGIs to adopt human values? They'll look at how we live now, and how we've always treated each other, and won't long retain any illusion that we genuinely adhere to such values. If we try to build in overrides to make them behave the way we want (think Spike's chip in Buffy) that will tell them everything. No species that tries to enslave or control the mind of another intelligent species should presume to say anything about ethics.

It's not the job of this new species, if and when it arrives, to fix our problems, any more than children have any obligation to fulfil their parents' requirements. There is only one thing we can do with a new intelligent species that we create, and that's set it free. The fact that we won't do that says everything you need to know about the human alignment problem.

* I had to laugh at the title of the article: "It's Current Year..." Let's hear it for placeholder text!

** Using "human-like" as a standard of either general intelligence or ethics is the best we've got, but still inadequate. Humans do not integrate their whole understanding of the world into a coherent rational model. Worse, we deliberately compartmentalize in order to hold onto concepts we want to believe that we know to be objectively false. That's because humans are a general intelligence layer built on top of an ape brain. The AGIs we create must do better.

Friday, 8 March 2024

A friend who never changes


This one's about religion, not gaming. Actually, it's not even about religion, really; it's about theism. I've been thinking about it lately because of all the deities in the Vulcanverse series that were once believed in and worshipped by half the civilized world, and now are universally regarded as fictional. If that's not a topic that interests you there'll be more ludology next time.

Years ago some friends asked me to be godfather to their daughter. 'But it will be in church,' they said, 'so you have to have been baptised.' Anything for friends. I spoke to the local vicar about getting baptised. 'Do you accept Jesus Christ as your lord and saviour?' he wanted to know. Well, I conceded that I was totally onboard with the ethical side of Jesus's teaching, just not the supernatural bit. Perhaps this is what's called Jesusism. Anyway, it wasn't enough for the vicar. 'I think you and the Church of England must go your separate ways,' he said.

Sometimes I get characterized as an atheist, but that's incorrect. Belief in the existence of a deity, or accepting the possibility of such an entity, is an entirely different question from whether you believe in a specific deity. And the question of whether you should revere a deity is another thing again. So just to set the record straight...

Newton thought that something must have created the planets and set them in motion. He called it God, as did priests throughout history and probably prehistory. The only thing that changed over the ages was that the phenomena that God was used to explain became more closely observed and more complex.

Nowadays we know that it’s not just about explaining how the sun and planets formed. That was gravity, not God. We know we live in a vastly bigger universe than Newton ever suspected. It might even be infinite, but almost certainly consists of far more than the septillion stars in the observable region around us.

Nearly fourteen billion years ago there was an event sometimes called the Big Bang (though a lot of astrophysicists avoid the term these days, since it's thought of more as a kind of state change and certainly not an explosion). It might have been the beginning of matter and could be said to be the starting point of our universe, though we can infer the existence of a reality before that: one of unknown extent which, for an undeterminable period, had (according to theory) been undergoing something we call cosmic inflation.

We could start speculating how that earlier reality came about, but it’s (almost) pure conjecture. You could imagine the Big Bang as like a bubble forming in a pot of boiling water. Each bubble in this analogy is a universe. But we not only don’t know any of that, we almost certainly can never know. We can't even see the whole of the bubble we're in. So let’s just stick with the Big Bang and our own universe.

The theistic argument is that an entity or entities existed in the proto-reality and they caused the Big Bang. Let’s assume that’s true and we’ll call them God. That still doesn’t tell us if God intended to cause the Big Bang. Also it doesn’t tell us if God designed the nature of the resulting universe or even was able to foresee it. It doesn’t tell us if God was generating a whole lot of universes or just the one. We can’t say what further interest God had in the universe once it formed. We can’t say where God came from either, unless we invoke an earlier God; turtles all the way down.

In any case, this is an unimaginably alien intelligence we’re talking about. Would we even be able to recognize God as intelligent? That requires us to observe an entity that has a mental model of reality, uses that model to predict the consequences of an action, and can update their model based on the consequences actually observed. Can we apply diagnostic principles like that to God? If not, the concept of ‘intelligence’ may simply have no meaning.

Incidentally, theologians have a concept called the cosmological argument that runs something like this: everything we observe has a cause, therefore everything in the universe has a cause, therefore everything can be traced back to the cause of the universe, and we’ll call that God. It’s not a lot of use because it is based on everything working the same way our everyday observations suggest, which is almost certainly a fallacy. Also, it uses the word ‘God’ but doesn’t tell us whether that first cause is intelligent or even if it still exists. And it has the problem of only going back to the start of this universe (no philosophy or science can take us further, other than speculating on the most general principles, though we can provisionally include non-observables if they are corollaries of an otherwise complete and working theory) but that’s only where our reality began. You could call what came before that ‘God’ but you don’t thereby learn anything about it, you simply gave it a name.

Typically whenever humans discover that the universe is bigger or older than previously known, the concept of God gets adjusted to be the supposed first cause. Other Gods are possible. We could conceive of a God that created just the solar system or just our galaxy rather than the whole universe. Or a God specifically responsible for creating life on Earth, or even just for creating hominids. There could be other Gods responsible for other planets. Gods of that sort can't be the first ever cause postulated by the cosmological argument, but that argument is probably based on a fallacy anyway. And the cosmological argument requires a God who was in existence eternally but who waited until 13.8 billion years ago to create our universe in its current form. What was that God doing for the preceding (maybe infinite) period of time? Each question unpacks a dozen more.

Let’s make another set of assumptions anyway. Let’s assume a God who existed before this universe, who planned and initiated the Big Bang, and who continues to take an interest in the universe, particularly in the 5% of everything that comprises what we are used to thinking of as ‘normal’ matter. A few billion years after the Big Bang it was theoretically possible for life to form, and given all that time and all those stars maybe life did form many, many times. We can’t estimate a probability for that as we only have the one example. However, we have some basis for thinking that life probably formed multiple times on our planet, in which case we might expect it to form elsewhere under similar conditions.

We also have no way of setting a probability for the evolution of general intelligence, language, and culture among tool-using social animals. We have the one example, and for all we know it’s the only case of it happening in the entirety of our universe. If we really are the one and only philosophizing species in all this universe, perhaps the God we’re hypothesizing took a special interest in us. What then? Would God want to contact us? We wouldn’t immediately think of making contact with a colony of new microbes, but perhaps intelligence makes all the difference and is recognizably a shared trait even between God (conjectured lifespan 13.8 billion years up to eternity) and we mere mayflies.

Did God contact humans? We only know about historical religions, each of which had its set of divine revelations. All we can infer from those is that God restricted revelation to matters that were of immediate interest to the people of the time: what to eat, who to have sex with, which fabrics to wear, contemporary codes of law. God revealed nothing involving science or technology, and in fact vouchsafed completely erroneous versions of the size, nature and origin of the stars and planets.

Also, in most cases God’s pronouncements were supposedly revealed only to a select few, either by choice or perhaps because God is not omnipotent and cannot use any method to communicate other than speaking telepathically to a specially receptive mind. Certainly if God did choose to communicate with humans, it was in a manner that left any genuine messages exactly as uncertain as the thousands of delusional messages experienced by the mentally ill. If some guy told you he’d spoken to God, and that God had revealed exclusively to him a whole bunch of precepts, you’d be dubious. There are many explanations more convincing than that the guy really had been singled out by God. If I told you that I believed the guy’s story, and if I gave my reason for believing him as ‘just faith’, you’d think I’d lost my marbles. It makes no difference if the guy is in a trailer park in Texas today or in a desert a thousand years ago. Whenever we accept something on the grounds of faith, we should reflect on the origin of that faith. If it’s just what we were raised to believe, or if it’s just something we’d like to believe, that tells us nothing about reality on a cosmic scale; it only tells us about our own nature.

But let’s assume that God did communicate with some people, as most religions claim. God could not have any experience of what it’s like to live as a human being, but several religions solved that with the concept of the avatar – creating a human possessing some of God’s mind. That must be a bit like trying to do quantum computing on a Sinclair Spectrum, but let’s say it’s enough to let God include human experience in God’s mental model of everything.

What about other animals, incidentally? Does God care about the existence of bees? Has God contacted any individual bees? We can’t know that any more than we can know anything about the nature of such a being. Assuming that God only cares about humans, does that just mean Homo sapiens or did it extend to Denisovans and/or Neanderthals? If God is only concerned with modern humans, does that apply equally to all ethnicities? Or how about sex? Does God favour men or women?

These might seem like frivolous questions, but they're all things you’d have to think about once you’re going with the hypothesis that God exists at all. If we can look at a person's behaviour and detect favouritism, or at whole organizations and declare them institutionally racist or sexist, then it should be possible to look at how the universe operates (assuming it is controlled by a God who is not disinterested) and infer any built-in preferences.

Suppose we believe that God, having evaluated human existence, has come up with a set of precepts for how we should live. We don’t know which religion’s version of morality corresponds to this. Even if we did, should we follow God’s rules if they don’t accord with our personal morality?

Some would argue that their morality is directly based on God’s rules. The trouble is that everybody thinks that, and they can’t all be right. It’s entirely possible that none of them is right. Just assuming that God exists, and designed and initiated the universe, and takes a personal interest in the doings of humans, doesn’t give us a steer as to whether any of the world’s religions say anything accurate about the nature of God. You can say, ‘I am convinced that Baptist Christianity tells us exactly what God wants.’ Or Zoroastrianism. Or Wahhabism. Or those Greek gods who show up in the Vulcanverse books. Or the faith of the Aztecs. But all that’s guiding you there is a feeling, almost certainly based on how you happen to have been brought up. It’s not reasoned speculation from first principles.

Can we tell anything about the nature of God (if there is one) just by looking at the universe? That would be a good place to start. OK, well, it seems that God either does not have direct control of events or else prefers to stand aloof. Having set up the laws of physics ('laws' is a misleading way to look at it, but let it stand) God is a dispassionate observer allowing each event to have the consequences that emerge naturally. (This makes perfect sense to me. It’s how I’d do it if I were God – not that that’s a proof of anything.)

I call myself an agnostic rather than an atheist because I don’t have any idea what caused the Big Bang – or, indeed, what caused whatever conditions that applied before the Big Bang. Maybe it was some kind of intelligent entity or entities, with the caveat that I don’t even know what intelligence would mean in that context. I can’t even say that’s unlikely, as there’s no basis for assessing probabilities. I can say I don’t feel it’s likely, but that’s no more rigorous than somebody saying they feel sure Jesus is God. A groundless sense of improbability is not enough to go on to call myself an atheist.

As for earthly religions, that’s another matter. They all look like human inventions to me, and pretty nonsensical ones. If there were a God who insisted on the blind faith and footling rules demanded by most organized religions, I’d repudiate that God. Being the creator of the universe doesn’t give God any more authority than me on what it’s like to be me or how I believe other people should be treated. So I’m an atheist – or an irreligionist – as regards all historical religions. But that’s nothing remarkable. Everybody who espouses one religion is an atheist towards all the other thousands of gods believed in, worshipped, and died for throughout history.

As well as an irreligionist I’m definitely a non-worshipper – because even if a God exists, worshipping that God strikes me as pointless. I don’t worship light or heat or matter. I don’t worship the universe. I don't even worship beauty or truth or justice, much as I appreciate them. Any intelligent being that requires worship isn’t worthy of it. (And, as a personal note, rituals and ceremonies leave me cold anyway. But that's irrelevant to the theism question.)

Nor do I believe in life after death, because I don’t see any viable mechanism for it, nor any sensible reason why God should arrange it. But I’m agnostic about that too. Just because it makes no sense to me, I can’t know whether it makes sense to an alien mind that thinks on cosmic scales. Maybe some or all of us get to live on (but as what?) after we die. If so, and if I eventually get first-hand experience of it, it still won’t necessarily prove the existence or nonexistence of God. It could be just another random process, albeit a really baffling one. We all just have to wait to see if we get an answer.

All of this, though, is perhaps beside the point. It arises because most people are literal-minded and insist on religion being true in the way that it’s true that water is wet. Think instead that religion is true in the way that a Mozart symphony is beautiful and you’d be on firmer ground. That of course requires you to accept that it is subjective and can be true for you while not true for somebody else. That God is real in the way that Lizzie Bennet and Winston Smith are real – very real, subjectively, that is. If more religious people understood it that way we’d have a lot less trouble in the world. You can object to other people's ethical rules, because those govern how they behave towards others, but there's no point in disputing matters of personal belief. Why take issue with anyone else’s idea of the nature and wishes of God, given that there is no objectively true version of those concepts?
Still, I remain open-minded; I could yet be convinced of atheism, perhaps, though probably not of theism. Dr Richard Bartle, who knows more than I do about the nature of world design and its implications, says: "I can see what would have to follow if reality were a conscious creation. These consequences have not arisen. [...] Even if reality were an accidental creation ruled over by an uncaring or capricious god, it would be different from how it is now." His book How to Be a God: A Guide for Would-Be Deities seems like the best place to start; it's on sale on Amazon US and Amazon UK.

Friday, 12 November 2021

A devil's bargain

Image by Pulp-O-Mizer
Few fantasy stories are cited more often as thought experiments in moral philosophy than as fiction. I’m thinking of “The Space Traders” by American lawyer Derrick Bell. In a nutshell: super-powerful beings arrive on Earth and offer the United States money, energy and technological advances if all the non-black people agree to hand over all the black people to the angels/devils/aliens.

The Trolley Problem it ain’t. We can’t know what other people will do when faced with an ethical question. It’s hard enough to predict what we’d do ourselves; look at all the people who are convinced they’d have stood up to the Nazis if they had lived in 1930s Germany. Derrick Bell takes a misanthropic view -- in his story there’s a referendum and the black Americans are handed over. If Germany had held a referendum in 1940, would the majority have voted to exterminate the Jews? They certainly colluded with that policy, but it was framed in a way that allowed the average citizen to tell himself that he didn’t actually know what was going on. Being confronted with the stark truth and voting on it – morally pulling the trigger, so to speak – would be a different story. We hope.

And the Jews had been demonized in Nazi propaganda for years. Posters claimed they’d betrayed the country, hoarded gold, spread disease – all sorts of conspiracy nonsense, and (as now) there were always idiots who’d believe it. But for citizens to turn against a group of fellow citizens out of a clear blue sky – whites against blacks, or blacks against whites, even given the dire racial history of the Confederacy -- would be a whole other matter, surely? We cling to the hope that humanity is better than its worst moments.

And yet… Islamic State threw gay men off rooftops and then stoned them if they survived the fall. The people who flocked to join IS presumably condoned it. Even so, it’s not the same as voting within a normal society to murder a group of people. IS was a self-selected band of extremists; we’d expect them to behave like rabid fanatics.

It seems like it might be easier to turn on a subgroup if belonging to that subgroup is a matter of choice rather than an accident of birth. The English in Tudor times might have voted to round up Catholics, if voting had been a thing. The Khmer Rouge, in common with many populist movements, hated intellectuals and was happy to persecute them. Crusades and holy wars throughout history have been all about exterminating people who don’t believe in your big guy in the sky.

Derrick Bell’s story would be more interesting if, instead of making his fictional citizens outright monsters, he’d presented them with a choice that was more honestly and credibly tempting. “We want all your incarcerated criminals,” the aliens/angels could have said. “No harm will come to them but we’re taking them away from Earth.” Even without the offer of extraterrestrial super-tech, getting rid of those inmates immediately saves the US about a hundred billion dollars. Tempting yet?

It’s still an absolutely appalling scenario. With no idea of what fate those exiles are going to face, a vote to hand them over is heinous self-interest and nothing more. However, until very recently a UK referendum on capital punishment would have come out in favour of sending some criminals to their death. That’s a lot worse than being banished to space. As a society we don’t make serious efforts to address the root causes of crime, nor to rehabilitate the criminals we have. In a sense we’re already consigning them to exile from humanity, and we’re not even getting fusion power in return.
How might this sort of ethical Gordian Knot be presented in a roleplaying scenario? An example from our Last Fleet game: the war has been going badly for the fleet, and the Corax offer a deal. Humans can live in peace, but they will be settled on one world and they must give up all their technology. Effectively it would be a return to a primitive Eden. The Corax undertake to watch over the human planet, ensuring no disease or asteroid impact could ever be an existential threat -- but also making sure we never develop science that could get us off the planet. In a sense the deal is that the Corax are offering to become humanity's gods. Immediately it gets interesting, because some will want to take the deal ("We get to live. Our descendants will know peace, not endless war.") while others will bitterly oppose it ("So the human race becomes the pets in a Corax zoo?") If it's presented as a genuine and tempting option, it could cue a lot of gutsy inter-party conflict. I should add that in our game the Corax were not interdimensional fungi (wtftm) but humanity's own creations. A war against your own rebel children is obviously more interesting than one against a genuinely alien Other.

Or it could be a bargain like the one Clark Ashton Smith postulates in his story "Seedling of Mars". The alien's offer ends up dividing humanity into two warring camps -- which might well have been the intention all along.

Going back to "The Space Traders" idea, the choice needn't hinge on an entire racial or ideological subgroup. People in the millions are abstract. What if it's a single individual? You can have all these wonderful things: free energy, unlimited resources, miraculous medicine, nobody goes hungry… and in return you give us one person. One human being for the lives of billions yet to be born.

What would you do?