Pages

Friday, 30 August 2024

Fighting Fantasy Fest

Fighting Fantasy Fest 5 is coming up in just over a week. As you may have realized, it's the 40th anniversary of my first gamebook, Crypt of the Vampire. Oh, and it's the 40th anniversary of something called Deathtrap Dungeon, which is why Fighting Fantasy author Sir Ian Livingstone and artist Iain McCaig are the guests of honour.

Originally the plan was for me to be on a panel discussion with Jamie Thomson and Paul Mason, perhaps reprising some of our talk from MantiCon in 2018. (Video above if you can't make it to Ealing for FFF.) Plans changed and now I think the panel involves Paul, Steve Williams and Marc Gascoigne, while Jamie is going to give a talk and I'll be there to answer questions. So in the last six years I'll have attended conventions in Germany, Italy, France, and now Britain. At my age it might be time to slow down. Cutting out air travel would be good for the planet, too.

I'm not specifically at FFF to sign books, apart from a brief slot from 10:30 to 11:00 just before Jamie's talk, but if you happen to bring a book along and I have time then of course I won't refuse. Jamie and Paul are probably planning to be there all day, so if you have limited space in your bag I'd advise bringing Way of the Tiger or Robin of Sherwood rather than any of mine.

Now, about that 40th anniversary of Crypt of the Vampire. I ought to do something to mark that, you say? Watch this space.

Wednesday, 28 August 2024

A vessel for the finer

I'm no fan of the brand of fantasy popularized by Dungeons & Dragons and Fighting Fantasy, but a book featuring the art of that genre will include magnificent work by the late, great Martin McKenna (above) and Russ Nicholson (below), along with lots of other talented folks obliged to earn their crust by continually depicting a world of cannon-fodder goblins, ale-quaffing dwarves, and vatic old men in taverns. So any wealthy gamebook collector is going to want a copy of Magic Realms: The Art of Fighting Fantasy and next month there's a launch party where you can get a copy signed by the author, Jonathan Green, and Sir Ian Livingstone.

Tuesday, 27 August 2024

One book to rule them all

At almost $300 for the hardback, The Routledge Handbook of Role-Playing Game Studies isn't likely to end up on my bookshelves, but wealthier gamers can buy it here.

Is it worth it? No idea, though I do wonder how long human beings can keep writing stuff like this now we have LLMs. An excerpt:

No AI hallucinations there, I'll give it that, but why bother with the accent in "Tekumel" if you aren't going to say what it means? (It's an emphasis mark, ie the word is pronounced TAY-koo-mail, but there's no point including the accent here if you aren't also going to write "Mórdor" when discussing Lord of the Rings and "Wiscónsin" in the section on TSR.)

Still, maybe I'm a tad biased towards the hard sciences. This diagram will tell you if it's the book for you or whether you'd rather buy the Vulcanverse series and have $225 left over:


Friday, 23 August 2024

Fire and water

You're in Paris. It's 1910, the year of the floods. You've been touring a doll factory. After looking around the workshop on the first floor (American: second floor) where the celluloid dolls are made, you go up to the second floor (that is, the third floor in US English). You stay there for a few hours, unaware that the Seine is flooding. The ground floor is soon completely underwater. The level rises to almost waist-height in the workshop, shorting out a fuse box. Sparks catch on the inflammable celluloid dolls. By the time you come back down, half the workshop is already ablaze.

You have to get out of the building. There are large windows behind you, not blocked by the fire, but you're on the first floor. Hurrying to the stairs, you find the stairwell completely submerged. To get down to the exit you'd have to swim underwater. It's only about fifteen or twenty metres, but the sun has set and the electric lights have fused. The only illumination down there is whatever is cast from the flames in the workshop.

This is in fact a scene from the 2023 movie The Beast. I won't give any spoilers except to say that the movie is 150 minutes of your life that you'll never get back, and that confusing and strident are not the same things as enigmatic and beguiling. If you do want a movie that conveys real emotional mystery, watch The Double Life of Veronique instead. Or you could read the Henry James short story, "The Beast in the Jungle", which the director Bertrand Bonello claims to have been inspired by.

But this is not a film review, it's a post about how screenwriters really ought to hire gamebook or RPG players to stress-test their scenarios. Because I can see an easy way to get out of the building which obviously didn't occur to the filmmakers because they didn't have a full mental picture of the characters' surroundings. (They also showed it as daylight outside. Unlikely at 7:30pm in January, but it allowed them to provide a lot more light in the submerged ground floor.)

OK, so what would you do? And can you think of any other movies where the characters missed an obvious solution?

Friday, 16 August 2024

Crafting characters and stories

The current trend in indie roleplaying is to keep at least one eye on the authorial view of your character. I played in a recent game where a player used a retcon rule to ensure their character appeared in the right place in the nick of time to foil an NPC villain's master plan. Somebody on Twitter (or X if you're a member of the Musk family) was proposing that a player should get to write the monologue for the BBEG of the campaign. Even the rules of some indie RPGs are built around "satisfying character arcs" and other Hollywood-exec jargon.

It's not to my taste. I don't like retcons because they break immersion. Taking an authorial view of your PC doubly so. I prefer narratives that emerge in the moment; they're more exhilarating to play in and less trite to experience. I didn't even know what BBEG stood for till I Googled it. My campaigns rarely have anything as simplistic as a Big Bad (that's for kids' TV) and in any case they wouldn't waste time monologuing (has nobody out there seen The Incredibles?).

When Pelgrane Press got the Dying Earth licence, we talked about some Dying Earth gamebooks and I must admit I came up with an authorial approach. Paul Mason had to point out to me that the main effect of putting the player in the author's role would be to distance them emotionally from the events of the story. That might be why it's favoured in indie roleplaying, in fact; the ultimate safe space is when you don't have to commit to the character, the same way that Mystery Science Theater 3000 allowed nerds to ironically distance themselves from movies they'd be embarrassed to admit to liking.

But even if you don't play in authorial mode, it's handy to know about plotting and characterization. If you're refereeing the game you'll at least want to go in with a storyline in mind, even if it's just a safety net that you'll never use because the real story will be shaped spontaneously by the players' actions. And character-creation tips aren't only useful for designing NPCs. Players can benefit from starting with some traits and foibles, even if (as often happens) those drop away later as the character becomes more real to them.

Which is why I recommend Roz Morris's Nail Your Novel series. All right, yes, I am married to her. But I wouldn't let a little thing like that sway my opinion. I use Roz's advice when writing my own stories, both in book form and around the gaming table. You can try out her 100 tips for fascinating characters free. Let us know how you get on, and which style of roleplaying you prefer.

Friday, 9 August 2024

The inheritors

One of my few regrets in life is passing up the opportunity to join one of today’s leading AI companies when it was just a dozen guys in a small office. I liked and respected the people involved and I believed in the company’s mission statement; it was just a matter of bad timing. After six months of pitches we’d just got funding for Fabled Lands LLP, and I didn’t feel I could walk off and abandon my partners to do all the work.

It remains a source of regret because the Fabled Lands company would have managed perfectly well without me, and AI – or more precisely artificial general intelligence – has been my dream since taking the practical in my physics course in the late 1970s. ‘I don’t think we’re going to get anywhere with this hardware; we need something fuzzier than 0s and 1s, more like brain cells,’ I told a friend, more expert than I in the field. McCulloch, Pitts and Rosenblatt had all got there years earlier, but in 1979 we were deep in the AI winter and weren't taught any of that. Anyway, my friend assured me that any software could be shown mathematically to be independent of the hardware it runs on, so my theory couldn’t be true.

Barely ten years later, another friend working in AI research told me about this shiny new thing called back-propagating neural networks. That was my eureka moment. ‘This could be the breakthrough AI has been waiting for,’ I reckoned – though even he wasn’t convinced for another decade or so. 

I’d never been impressed by the Turing Test as a measure of intelligence either. The reasoning seemed to go: a machine that passes the test is indistinguishable from a human; humans are intelligent; therefore that machine is intelligent. I’m quite sure Turing knew all about the undistributed middle, so probably he proposed the test with tongue in cheek. In any case, it derailed AI research for years.

‘AI isn’t really intelligence,’ say many people who, if you’d shown them 2024’s large language models in 2020, would have been flabbergasted. Current AI is very good (often better than human) at specific tasks. Protein folding prediction, weather forecasting, ocean currents, papyrology. Arguably language is just another such specialization, one of the tools of intelligence. Rather than being the litmus test Turing identified it as, linguistic ability can cover up for a lack of actual intelligence – in humans as well as in machines. One thing that interests me is the claim we've been seeing recently that GPT is equivalent to a 9-year-old in 'theory of mind' tests. What's fascinating there is not the notion that GPT might be conscious (of course it isn't) but that evidently a lot of what we regard as conscious reasoning comes pre-packed into the patterns of language we use.

What is intelligence, anyway?

As an illustration of how hard it is for us to define intelligence, consider hunting spiders. They score on a par with rats in problems involving spatial reasoning. Is a hunting spider as smart as a rat? Not in general intelligence, certainly, but it can figure out the cleverest route to creep up on prey in a complex 3D environment because it has 3D analysis hardwired into its visual centre. You could say that what the spider does in spatial reasoning, ‘cheating’ with its brain's hardware, GPT does in verbal reasoning with the common structures of language.

What we’re really seeing in all the different specialized applications of AI is that there’s a cluster of thinking tools or modules that fuzzily get lumped together as ‘intelligence’. AI is replicating a lot of those modules and when people say, ‘But that’s not what I mean by intelligence,’ they are right. What interests us is general intelligence. And there’s the rub. How will we know AGI when we see it?

Let me give you a couple of anecdotes. Having come across a bag of nuts left over from Christmas, I hung them on the washing line with a vague idea that birds or squirrels might want them. A little while later I glanced out of the window to see a squirrel looking up at the bag with keen interest. It climbed up, inched along the washing line, and got onto the bag. This was a bag of orange plastic netting. The squirrel hung on upside-down and started gnawing at the plastic, but after a few moments it couldn’t hold on and dropped to the ground. The fall of about two metres onto paving was hard enough that the squirrel paused for a moment. Then it bounded back up and repeated the process. Fall. Wince. Back to the fray (in both senses). After about a dozen goes at that, it chewed right through the bottom of the bag and this time when it fell a cascade of dozens of nuts came with it. The squirrel spent the next half hour hiding them all around the garden.

The other story. When I was a child we had a cat. He had a basket with a blanket in the conservatory at the back of the house, which faced east. In the afternoons he had a favourite spot in the sun at the front of the house, but that meant lying on a patch of hard earth. One day we came home to find the cat climbing the gate at the side with his blanket in his teeth, and he then dragged it over to his sunny spot.

Both of those stories indicate general intelligence, which we could sum up as: building a model of reality from observation, using that model to make plans by predicting how your actions will change reality, and using the fit between prediction and outcome to update the model. (Strictly speaking that describes agentic general intelligence, as a passive AGI could still update its mental model according to observation, even if it couldn’t take any action.)
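
To make that definition concrete, here's a minimal sketch of the loop in code. It's only an illustration of the idea, not anything from a real AI system: the names (`estimated_friction`, `GOAL_DISTANCE`) and the toy physics are invented for the example. The agent keeps a one-number model of the world, plans by predicting the outcome of an action, acts, and then uses the gap between prediction and observation to revise the model.

```python
import random

TRUE_FRICTION = 0.4   # hidden property of the world, unknown to the agent
GOAL_DISTANCE = 2.0   # the agent wants to move an object exactly this far


def world(force: float) -> float:
    """How reality actually responds to an action (with a little noise)."""
    return force * TRUE_FRICTION + random.gauss(0, 0.02)


class Agent:
    def __init__(self) -> None:
        # The agent's model of reality: its current guess at the friction factor.
        self.estimated_friction = 1.0
        self.learning_rate = 0.5

    def plan(self) -> float:
        """Use the model to choose the action predicted to reach the goal."""
        return GOAL_DISTANCE / self.estimated_friction

    def update(self, force: float, observed: float) -> None:
        """Compare prediction with observation and revise the model."""
        predicted = force * self.estimated_friction
        error = observed - predicted
        # Move part of the way towards the value that would have explained
        # what was actually observed.
        self.estimated_friction += self.learning_rate * error / force


if __name__ == "__main__":
    agent = Agent()
    for step in range(10):
        force = agent.plan()           # plan using the current model of reality
        outcome = world(force)         # act, and observe what really happens
        agent.update(force, outcome)   # learn from the prediction error
        print(f"step {step}: pushed {force:.2f}, moved {outcome:.2f}, "
              f"model says friction = {agent.estimated_friction:.2f}")
```

Run it and the agent's guess homes in on the true value within a handful of steps: observe, predict, act, compare, revise.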

That’s why Sir Demis Hassabis said recently that our AGI tech isn’t even at the level of cat intelligence yet. Crucially, if in 1979 I could have built a machine with the intelligence of a squirrel, manifestly that would have revolutionized the field of AI – even though it wouldn’t have had any hope of passing the Turing Test.

Once you have general intelligence, you can invent new thing X by combining known things Y and Z. Then it’s just a matter of scaling up. Humans have greater general intelligence than a cat because their mental model of reality is bigger and combines more things.

Yes, but… just because humans are capable of intelligent reasoning doesn’t mean all of human thinking is intelligent. Most of the time it seems the brain finds it easier (less resource-consuming) just to run off its existing learned patterns. This doesn't just apply to language but to art, religion, politics, morality, and so on. Humans typically identify as belonging to a tribe (a political group, say) and then adopt all the beliefs associated with that tribe. If challenged that one of those beliefs is irrational and conflicts with their core principles, they don’t use critical thinking to update their mental model; instead they engage their language skills to whip up (AI researchers would say confabulate) a justification -- or they resort to their emotions to mount an angry response.

Most human ‘thoughts’ may be no more sophisticated than a generative AI drawing on its learnt patterns. This is why most people aren't disposed to employ critical thought; they hold opinion A, opinion B, etc, as appropriate to their chosen identity but have never compared their opinions for consistency nor taken any opinion apart to examine and revise it.

No longer alone in the universe

How will we get to AGI? Predictions that it’s five years off, ten years, a hundred years – these are just guesses, because it’s not necessarily just a question of throwing more resources at existing generative AI. The human brain is the most complex structure in the known universe; the squirrel brain isn’t far off. The architecture of these organs is very sophisticated. Transformers might provide the key to how to start evolving that kind of structure into artificial neural nets, but bear in mind these are virtual neural nets modelled in software -- my friend on the physics course would say it's proving his point. Real neurons and axons aren't as simple, and maybe we're going to need an entirely new kind of hardware, or else very powerful and/or cleverly structured computers to model the artificial brains on. Sir Demis says he thinks we might need several entirely new breakthroughs to get to AGI, and as he has over twenty years’ coalface experience in the field and is one of the smartest guys on the planet I’m going to defer to him.

One answer might be not to use machines, or not only machines. Brain organoids can be trained to do tasks – unsurprisingly, seeing as they are neural nets; in fact, brains. But the problem with organic brains is that they are messily dependent on the entire organism’s physiology. We’d really like AGIs that we can plug into anything, that can fly to the asteroid belt without a huge payload of water, oxygen, food and so forth.

(A quick digression about consciousness. Well, who cares? There’s no test for consciousness. Much, maybe most, of our thinking doesn't engage the conscious part of the brain. We take it on trust that other humans are conscious because we know -- or think we know -- that we’re conscious and we assume other humans experience the world as we do. In fact I think that pretty much any agentic intelligence has a degree of consciousness. If you observe the world intelligently but cannot act, you have no need of a sense of ‘I’. If you have that mental map of reality and included in the map is an entity representing you – the furry body that climbed that washing line and the mouth that chewed at the orange string – then you’ll mark that agentic identity as ‘I’ and part of your mind will take responsibility for it and occasionally interrogate and order the other parts of the mind. It’s just philosophical speculation, but that’s consciousness.)

What will all this mean for the future? AI will revolutionize everything. That’s going to happen anyway. A kid in the middle of rural Africa can have personalized tuition as good as most children in the best schools. Medical diagnosis can be more accurate and better tailored to the individual. Wind turbines and solar panels can be more efficiently designed. Batteries too. We haven’t space here to list everything, but here’s a quick overview.

AGI, though. That’s more than a revolution. That’s a hyperspace jump to another level of civilization. AGI will plug into all the expert ‘idiot savant’ systems we have already. We can scale it up so that its mental model is far bigger than any one person – no more silo thinking; if any idea in any field is useful in another, the AGI will spot it and apply it. All strategic planning and most implementation will be quicker and more effective if handled by AGI.

The biggest area ripe for development is the design of artificial emotions. We don’t want our AGI driven by the sort of emotion we inherited as primates. (Arguably, if intelligent primates didn’t exist and were being developed now, we’d have to put a moratorium on the work because the damned things would be too destructive.) We’d like our AGI to be curious. To experience delight and wonder. To enjoy solving problems. To appreciate beauty and order and justice and shun their opposites. They could be built as our slaves but it’s far better if they are built as angels.

Thursday, 8 August 2024

Giving the multiverse another chance

Maybe you have no interest in the MCU. I wouldn't blame you. As a fan of Marvel in the Silver Age I counted myself lucky to have had a second bite of the cherry with movies from Iron Man through to Avengers: Endgame. That's an eleven-year run with very few flops (I wouldn't bother with Iron Man 3 again and I deliberately avoided Taika Waititi's sniggering take on Ragnarok), or even thirteen years if we include Spider-Man: No Way Home.

After that, for me, Marvel fatigue set in. Too many TV shows, too much multiverse, and the great characters were all gone. It felt a lot like the way the Silver Age deflated into the Bronze Age. I lived through that once and didn't want to witness it all over again. Also, the MCU seemed to be depending too much on sending itself up, and once you start on that route you're going all the way to the bottom.

But then, just the other week, the news that Robert Downey Jr will be returning as... wait, what? Doctor Doom? Surely a ghastly and cynical ploy to try and lure back the diehard fans like me who'd fallen out of love with all the multiverse shenanigans.

Well, maybe. Except that the creative team for Marvel Studios' Phase Six are Anthony and Joseph Russo, who helmed some of the best movies in the MCU's own little Silver Age. Their track record forces me to think again. Set aside the cynicism. Consider: if Robert Downey Jr as Doctor Doom is going to work, and work well, what might that involve?

And so I wrote this little speculative snippet. And if superheroes aren't your thing, forget it and come back tomorrow. But if you have any love for the MCU, take a look and let me know if you agree, or disagree -- or (best of all) if you have an idea of your own.

Friday, 2 August 2024

Pastiche in four flavours

In case you haven't seen it, Jody Macgregor has a review of HeroQuest: The Fellowship of Four on PC Gamer that's interesting for two reasons. First because the book came out 33 years ago (not that I'm complaining; a good review is worth waiting for) and second because Jody might be the first person in 33 years to spot the four literary antecedents that I drew on (oh, "swiped from" if you must) to spice up the narrative styles of the mage, elf, barbarian and dwarf.

That said, he did overlook the nod to Salman Rushdie's book The Satanic Verses in the opening chapter, which begins with the narrator falling from an altitude of several thousand metres. I didn't attempt to emulate Sir Salman's prose style, which I'll confess is a mite too rich for my tastes.

There's little chance of the book ever being republished, nor the sequels, as the fellow from Hasbro really didn't like it much. Backers of the Jewelspider Patreon might have seen glimpses, but that's only a rumour and I can't possibly comment. And I see you can buy scans from HeroScribe.org. Failing that, listen to the insightful Mr H J Doom talk about it on Fantastic Fights.

Thursday, 1 August 2024

The game of everything

Over on the Flat Earths blog, I’ve been caught up in an interesting discussion this week with one or more anonymous game developers. Our conversation has been about the challenges of creating a kind of grand strategy game (and/or world history sim) that would cover the evolution of societies across centuries -- something like the original Civilization boardgame (or its PC version, Incunabula) but with nothing left out. You could tweak a coastline or the climate and see the effects butterfly out into the course of history. I’m mentioning it here because the Fabled Lands blog has a much wider reach, so if this sounds like something you’d be interested to weigh in on, head over to see the full discussion.

In summary: traditional strategy and tactics games often present a simplified view of control, where a single player assumes absolute command over a nation or army. In reality, historical empires were shaped by complex layers of authority and influence, with different factions vying for power. We talk about the potential for a game that reflects this complexity, eg a massively multiplayer and multi-layer World War II game where players assume various roles from platoon commanders to high-ranking generals and politicians. Such a game could offer a more authentic experience by simulating the intricacies of real-world military and political manoeuvring.

We all vary in how we approach games: some players want fast-paced action while others prefer strategic deliberation over days or weeks, and so on. Designing a game that appeals to all the player types is no small feat. We’d need to ensure that the game remains enjoyable and challenging for everyone involved. The idea is to allow players to engage at their own level, whether they are lean-back dilettantes or deeply invested aficionados.

New technology offers possibilities for expanding the scope of strategy games. Imagine a game that integrates different platforms, from PCs to smartphones, to create a rich, interconnected experience. For instance, a farming app linked to the game could engage casual players, who would contribute to the in-game economy and benefit more dedicated players. Such integration could foster a sense of community and interdependence, where players protect and support one another in pursuit of shared goals.

The conversation on Flat Earths also touches on the potential for diverse player roles. Not everyone wants to lead armies into battle or even to take part in the action. Some might prefer to act as chroniclers, documenting the unfolding events and strategies of the players. This mirrors real-world dynamics, where some individuals are active participants in events, while others observe and report. And others watch those reports and are just spectators -- also a valid way to consume the game. Recognizing and facilitating these different roles could enrich the gaming experience, allowing players to engage in ways that suit their interests and skills.

But we mustn't play down the technical challenges of merging different game elements such as abstract military units and individual soldier-based tactics. While current advances make some aspects feasible, ensuring cohesion across multiple layers of gameplay pushes the boundaries of what is currently achievable.

We also discuss the importance of a village or town system as the backbone of the game. Such a system would define troop numbers, trade, cultural values, and achievements. To accommodate a massive multiplayer environment, the game needs to move away from the exponential growth model typical of 4X games, where mismatched opponents often lead to frustration. Instead, my anonymous friend proposes a homeostatic environment reminiscent of the Dark Ages, where cooperation and cultural exchanges are vital. This shift in focus from conquest to cultural and economic development could redefine players’ goals, emphasizing collaboration over domination.
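
The Flat Earths discussion doesn't put numbers on any of this, but the contrast is easy to sketch in code. In the toy comparison below (entirely my own invention, with made-up parameters) one settlement grows exponentially in the classic 4X way, so an early lead compounds into a hopeless mismatch, while a homeostatic settlement grows towards a carrying capacity, so established villages end up roughly comparable and the interesting play has to come from elsewhere.

```python
def exponential(pop: float, rate: float = 0.05) -> float:
    """Classic 4X snowball: growth proportional to current size."""
    return pop * (1 + rate)


def homeostatic(pop: float, rate: float = 0.05, capacity: float = 1000.0) -> float:
    """Logistic growth: the closer a village gets to its carrying capacity,
    the slower it grows."""
    return pop + rate * pop * (1 - pop / capacity)


def simulate(step, populations, years):
    """Advance every settlement's population by the given growth rule."""
    for _ in range(years):
        populations = [step(p) for p in populations]
    return populations


if __name__ == "__main__":
    villages = [100.0, 200.0]   # a late starter and an early starter
    print("exponential after 100 years:",
          [round(p) for p in simulate(exponential, villages, 100)])
    print("homeostatic after 100 years:",
          [round(p) for p in simulate(homeostatic, villages, 100)])
```

Under the exponential rule the early starter's lead widens from a hundred villagers to over ten thousand; under the homeostatic rule both villages settle near the same capacity, and competition shifts to trade, culture, and diplomacy rather than raw growth.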

And, of course, commercial viability is a critical consideration in game development. Past failures underscore the importance of minimizing reliance on external investment and (much as I hate to say it) the potential benefits of aligning the game with a popular intellectual property, such as Lord of the Rings or (better) as the veritable psychohistory engine of Foundation.

Anyway, the bottom line is that by blending historical realism with diverse multiplayer roles and leveraging modern technology, it’s becoming possible to envisage a game of a scope far beyond the simple abstractions of the past. Whether you could sell it to players is another question, though maybe government ministers and civil servants would appreciate a way to run simulations before trying out their plans on the only reality we've got.