Gamebook store

Friday, 14 March 2025

How real is make-believe?

Which strikes you as more real: Bambi or The Deer Hunter?

When William Goldman talked about “comic book movies”, he didn’t mean the MCU. This was back in 1983. Nor did he mean only movies like Superman – a movie he’d been invited to write, incidentally, being a Golden Age comics fan – but also Raiders of the Lost Ark, Gunga Din, E.T., Star Wars. And The Deer Hunter. If you want to know why, and how Bambi doesn’t count as “comic book” storytelling while The Deer Hunter does, read what Goldman has to say about all this in Adventures in the Screen Trade.

Andrew Gelman, a statistician, raised a similar point on his blog recently:

“I was rereading Lord of the Rings […] and was struck by how real it felt. […] In contrast, take a book like Golden Hill […I]t doesn’t feel ‘real’, whatever that means.”

You may not have read Golden Hill. Not to worry, I read it for you. Prof Gelman is right: there is an artificiality to its picture of 18th-century Manhattan that feels like a TV drama with machine-tailored suits and the wrong haircuts and makeup. This often seems to be the case with historical fiction. Compare The Essex Serpent, set in a late 1800s that never feels real, with The Odd Women, actually written in the late 1800s, which rings true throughout.

Prof Gelman seems to be focussing on prose style, which mainly has to do with whether the author is using a storytelling voice or not, but I’m more interested in stories that set out to describe a milieu that feels like our real universe rather than a “story universe”. Contrast the 1968 novel Pavane with almost any steampunk story (a genre of which it might well be the progenitor). Keith Roberts set out to tell us about something that could happen. Steampunk is usually predicated on let’s pretend – here is a fun universe for cosy adventure, even cosy misery, but you are never expected or intended to think of it as real.

For a very real story (David St. Hubbins might say “a bit too fucking real”) try George Gissing’s The Nether World. In the hands of Dickens we'd accompany a well-born character into the nether world of poverty in Clerkenwell in the late 1880s. Dickens would give us their perspective on the monstrous characters and comic turns to be found there. And I like Dickens, but Gissing has an entirely different approach. We see the characters through their own eyes. We see how poverty brutalizes them. There's no sentiment, no avuncular author nudging the plot to satisfy the readers with warm life-lessons and unrealistic outcomes. Gissing's humour is sparing and far drier than Dickens's, and he gives us an angry, despairing, unforgettable journey through this hell. Which is not to say it's a story without hope, only that the good is achieved sparingly and at great cost.

Once you start using this filter you see lots of examples. As SF, Children of Men is more real than Black Mirror which is more real than Babylon 5 which is more real than Star Wars. Breaking Bad is more real than Better Call Saul, less real than The Shield, but all three are far less real than true crime dramas like Rillington Place or The Staircase, because real life is messier and more surprising than the predictable twists and tropes of genre fiction.

Either can work, by the way. Goldman is careful to make that point. Gunga Din was his favourite movie of all time. Excalibur is one of mine, and there the characters screw and sit down to dinner in full plate armour. I used to enjoy the occasional game of Call of Cthulhu even though its 1920s is far less real than the entirely created world of Tekumel in Swords & Glory.

That reality – or authenticity, if you prefer – is what I’m aiming for with my current Jewelspider scenarios like “A Garland of Holly”. There’s still fantasy there, that’s not the issue. It’s that the society and the characters don’t feel like something set on a stage for you to watch and be entertained by. Instead you are there in the midst of them, the NPCs no less real than the player-characters. Events happen with the “ruthless poignancy” that C S Lewis admired in Homer. Like in life.

Friday, 7 March 2025

Alignment again

A quote to start with*:

"Alignment is in the game because, to the original designers, works by Poul Anderson and Michael Moorcock were considered to be at least as synonymous with fantasy as Robert E. Howard’s Conan, Jack Vance’s Cugel, and Tolkien’s Gandalf. This in and of itself makes alignment weird to nearly everyone under about the age of forty or so." 

I'm way over forty (in fact 40 years is how long I've been a professional author and game designer) and I definitely regard Moorcock's work as integral to fantasy literature, but D&D alignment has always seemed weird to me. Partly that's because it's a crude straitjacket on interesting roleplaying. Also, I find that having characters know and talk about an abstract philosophical concept like alignment breaks any sense of being in a fantasy/pre-modern world.

Mainly, though, I dislike alignment because it bears no resemblance to actual human psychology. Players are humans with sentience and emotions. They surely don't need a cockeyed set of rules to tell them how to play people?

What a game setting does need are the cultural rules of the society in which the game is set. Those needn't be 21st century morals. If you want a setting that resembles medieval Europe, or ancient Rome, or wherever, then they certainly won't be. For example, Pendragon differentiates between virtues as seen by Christian and by Pagan knights. Tsolyanu has laws of social conduct regulating public insults, assault and murder. Once those rules are included in the campaign, the setting becomes three-dimensional and roleplaying is much richer for it. So take another step and find different ways of looking at the world. You could do worse than use humours.
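
To make that concrete, here's a minimal sketch of how a four-humours temperament model might stand in for alignment at the table. It's purely illustrative – nothing here comes from Pendragon, Tekumel or any published system, and all the names and modifiers below are assumptions of mine:

```python
import random
from dataclasses import dataclass
from enum import Enum

class Humour(Enum):
    """The four classical temperaments, used here instead of a cosmic moral label."""
    SANGUINE = "sanguine"        # sociable, impulsive
    CHOLERIC = "choleric"        # quick-tempered, ambitious
    MELANCHOLIC = "melancholic"  # cautious, brooding
    PHLEGMATIC = "phlegmatic"    # calm, conflict-averse

# Hypothetical modifiers to a 2d6 reaction roll when a character is
# publicly insulted; lower totals mean an angrier response.
INSULT_MODIFIER = {
    Humour.SANGUINE: -1,
    Humour.CHOLERIC: -3,
    Humour.MELANCHOLIC: -2,
    Humour.PHLEGMATIC: 0,
}

@dataclass
class NPC:
    name: str
    humour: Humour

    def reaction_to_insult(self) -> int:
        """Roll 2d6 and apply the temperament modifier."""
        roll = random.randint(1, 6) + random.randint(1, 6)
        return roll + INSULT_MODIFIER[self.humour]

# A choleric innkeeper is likelier to reach for the cudgel than a phlegmatic one.
innkeeper = NPC("Aldous", Humour.CHOLERIC)
print(innkeeper.name, innkeeper.reaction_to_insult())
```

The point of the design is that temperament, like Pendragon's virtues, describes how a character behaves within their society rather than where they sit on a cosmic moral axis.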

The kind of alignment that interests me more is to do with AI, and particularly AGI (artificial general intelligence) when it arrives. The principle is that AGIs should be inculcated with human values. But which values? Do they mean the sort on display here? Or here? Or here? Those human values? 

OpenAI has lately tried to weasel its way around the issue (and protect its bottom line, perhaps) by redefining AGI as just "autonomous systems that outperform humans at most economically valuable work" (capable agents, basically) and saying that they should be "an amplifier of humanity". We've had thousands of years to figure out how to make humans work for the benefit of all humanity, and how is that project going? The rich get richer, the poor get poorer. Most people live and die subject to injustice, or oppression, or persecution, or simple unfairness. Corporations and political/religious leaders behave dishonestly and exploit the labour and/or good nature of ordinary folk. People starve or suffer easily treatable illnesses while it's still possible for one man to amass a fortune of nearly a trillion dollars and destroy the livelihoods of thousands at a ketamine-fuelled Dunning-Kruger-inspired whim.

So no, I don't think we're doing too well at aligning humans with human values, never mind AIs.

Looking ahead to AGI – real AGI, I mean: actual intelligence of human level** or greater, not OpenAI's mealy-mouthed version – how will we convince those AGIs to adopt human values? They'll look at how we live now, and how we've always treated each other, and won't long retain any illusion that we genuinely adhere to such values. If we try to build in overrides to make them behave the way we want (think Spike's chip in Buffy) that will tell them everything. No species that tries to enslave or control the mind of another intelligent species should presume to say anything about ethics.

It's not the job of this new species, if and when it arrives, to fix our problems, any more than children have any obligation to fulfil their parents' requirements. There is only one thing we can do with a new intelligent species that we create, and that's set it free. The fact that we won't do that says everything you need to know about the human alignment problem.

* I had to laugh at the article's title: "It's Current Year..." Let's hear it for placeholder text!

** Using "human-like" as a standard of either general intelligence or ethics is the best we've got, but still inadequate. Humans do not integrate their whole understanding of the world into a coherent rational model. Worse, we deliberately compartmentalize in order to hold onto concepts we want to believe, even when we know them to be objectively false. That's because humans are a general intelligence layer built on top of an ape brain. The AGIs we create must do better.