Gamebook store

Showing posts with label AI. Show all posts

Thursday, 7 May 2026

Seminar: "The AI Assisted Artisan Author"

The main reason I abhor terms like "AI slop" is that they are propaganda, not arguments. They are used as substitutes for thinking and reasoned debate. (It's the same mindset that coins slurs like "Crooked Hillary" and "Comrade Kamala"; "stochastic parrot" is another.) The fact is that some things generated by AI are slop but many are not - in physics, biochemistry, medicine, etc. Also there is, and always has been, plenty of slop that wasn't created by AI. Just cast an eye over the bestseller lists.

AI art comes in for a lot of flak, not entirely unfairly. It's much more polished than anything I could draw. But it's only workmanlike; it's never great. The people who are quick to coin terms like "AI slop" have latched onto the claim that using AI art is putting human artists out of work. I can't speak for others, but if a project of mine has a budget then I want to work with human artists every time: Inigo Hartas on Jewelspider, Leo Hartas on Mirabilis: Year of Wonders, Mattia Simone on Vulcanverse, Russ Nicholson and Kevin Jenkins on Fabled Lands. I make sure they get paid even if (as often happens) I don't. No AI can do what they can do.

A lot of projects don't have budgets, though. Blog posts, for example. For a decade I'd have to trawl through mediocre public-domain images to find something to illustrate the week's piece. Now, in the absence of anything better, at least Gemini can whip up something passable. After spending an hour or two planning and writing a post, not having to waste half an hour scouring the internet for images is a godsend.

Most gamebook and RPG authors don't make any money. Their works are labours of love. After months of writing, when they were finally ready to publish, it used to be that their only option for illustrations was to find some out-of-copyright art. That could occasionally be just right -- who else but Gustave Doré could illustrate James Wallis's storytelling game The Extraordinary Adventures of Baron Munchausen? More often than not it was just the creator making the best of a bad job. They weren't putting a human artist out of a job, any more than you're paupering a plumber or carpenter when you do some DIY based on a YouTube instructional video, because there never was a job there in the first place. 

And incidentally there'd be no indie game scene at all if not for desktop publishing software and print on demand. Those came along more than twenty years ago. Do you see the streets lined with out-of-work typesetters and printers? No, and this technological revolution won't wreck anyone's career either.

So much for art. What about writing? AI is really not good at fiction. Its prose (learned from humans, of course) is ungainly. That would be hard to fix because elegant prose is not easily evaluated and so it would be hard to train a model to know the difference. It's not like coding or maths, where there's a clear difference between a right and wrong answer. (That's why, when teachers had to estimate their pupils' exam performance during the covid lockdown, there was much greater variance between the estimate and the eventual exam result for arts subjects than for sciences.)

Even if we could teach an AI to write beautifully, it couldn't (currently, anyway) write a good novel because it has no depth of insight -- though, again, that's true of many human authors. The telephone-directory-sized romantasy bestsellers stacked up in the bookstore window are not good novels, just popular ones. Lee Child's working method for writing the Jack Reacher books is effectively just what an LLM does, so there's no reason ChatGPT couldn't come up with a passable pastiche of one of those. But it's not going to rival Flaubert or Turgenev for a while yet. Owning a camera doesn't make you Da Vinci, after all.

I realize that in the 2020s there is no longer any possibility of convincing anybody that they might be wrong or even that somebody who disagrees with them isn't a knave, but maybe we can all still agree that it's better to be informed than not. That's why I'm recommending Joanna Penn's online seminars on ways to use AI to make you a better author. For the reasons given above, that doesn't mean prompting it with, "Write this novel for me." What Jo is covering includes the background tasks: deep research, brainstorming and ideas, outlining, structuring, plotting, and planning, characters and worldbuilding. For example, she explains how to use NotebookLM to maintain a world bible. (That would have saved months of wading through texts if it had been around when I was working on the Lyonesse RPG, for example.)

There are seminars on 16 and 23 May. I've taken Jo's seminars before -- even after 40+ years as a working author there's plenty I can still learn -- and they are worth every penny. Get your tickets here.

Thursday, 9 April 2026

The cure for our ills?

Regular readers will already know I'm an evangelist for AI. And, yes, I'm aware there are risks, as with any new technology, but we are going to keep rubbing lamps and letting genies out. We just have to be careful how we deal with them. When he was setting up DeepMind, Sir Demis Hassabis was fond of propounding the vision: "Solve intelligence. Use intelligence to solve everything else." By everything he meant curing disease, solving the problem of controlled fusion, and all the other things that could make life on Earth a utopia.

Perhaps you are cynical about experts and/or multi-millionaires, but don't make the mistake of dismissing every member of a group on account of some bad apples. I know Demis personally (I used to work for him) and I assure you that he is motivated by a genuine vision of a better future. His delight in the workings of the universe, his ever-youthful curiosity, his humour, his intelligence, and his focus are the qualities that I think show human beings at their very best. For such men and women, the human adventure is just beginning.

I mention all this because there is a biography of Demis Hassabis just out. That's The Infinity Machine: Demis Hassabis, DeepMind, and the Quest for Superintelligence by Sebastian Mallaby. Hassabis is likely to be admired by future generations as a pioneer of a new era -- long after the likes of Musk and Trump are forgotten -- and, though I think the key to AGI might lie more in the work of Yann LeCun, and though I believe we should celebrate discoveries, not discoverers, anyone who is interested in the lives of those who shape history should take a look.

There is a depressing note. (It's the 2020s, so how could there not be?) Recently Demis seems to be cooling on the grand vision. “I’ve satiated that scientific desire for the moment…I’ve always been fine either way,” he says, justifying the shift in emphasis from AGI research to the LLMs that are where the money is now. It figures that Google isn't interested in idealistic research; it just wants commercial product. If it were me, I'd walk away. Demis is probably reasoning that maybe he can do more good with 1% of Google's focus than with 100% of the resources of an obscure research lab. Such compromises with the inevitable never work out. You never even get that 1%. The road to hell is paved with good intentions.

AGI, superintelligence, and the keys to a utopian future are all achievable in theory. Of that I'm almost certain. But whether the societies and institutions humans have created will ever allow us to reach that goal remains an open question. The fault is not in our science but in ourselves.

Friday, 6 March 2026

A Turing Test for morality

Since there’s no test for consciousness, how do we know that other people are conscious? In the video above, Sir Demis Hassabis gives two bases for assuming that they are. First is the obvious one: other people behave like me and I am conscious – or at any rate I have the impression that I am conscious. Then there’s the fact that other people are built the same way I am. Same DNA, same brain structure, so it’s an Occam’s Razor conclusion that they experience the world as I do. As Hassabis puts it:

‘I think it's important for these systems to understand “you”, “self” and “other” and that's probably the beginning of something like self-awareness […] I think there are two reasons we regard each other as conscious. One is that you're exhibiting the behaviour of a conscious being very similar to my behaviour, but the second thing is you're running on the same substrate. We're made of the same carbon matter with our squishy brains. Now obviously, with machines, they're running on silicon so even if they exhibit the same behaviours and even if they say the same things it doesn't necessarily mean that this sensation of consciousness that we have is the same thing they will have.’

We don’t think large language models are conscious. (What would consciousness look like in an LLM anyway? Murray Shanahan makes some thought-provoking points about that.) Even their apparent intelligence is probably misleading, just as there are lots of not-very-bright people who are able to give the impression of being smart simply because they are articulate. If we could build an AI as smart as a bee colony or a hunting spider, we’d have something genuinely intelligent but probably not conscious. We aren’t even there yet, but we will be, and we’ll go beyond that to full artificial general intelligence (AGI -- or AMI if you prefer Yann LeCun's term) possibly within a few decades.

Professor LeCun is dubious about the whole concept of consciousness. In the absence of any definition or means of measuring it, I think we’re reduced to treating consciousness as how similarly to ourselves an entity experiences the world. And that is quite concerning. Consider a truly capable self-driving car. To cope with all situations as we do, the car (which would be a type of robot, of course) would need a full reasoning model of the world. It would need to be generally intelligent. Now, given that it would have a world model with genuine understanding, and that (as LeCun says) anything with goals and agency will have its own kinds of emotions, are we justified in enslaving it to be our chauffeur?

If we look back at the 18th and 19th centuries, plenty of people justified slavery by asserting that members of enslaved races lacked some fundamental mental capability, or indeed full consciousness, that the dominant race (usually white Americans) possessed. Here is Thomas Jefferson’s opinion of enslaved races:

‘It appears to me that in memory they are equal to the whites; in reason much inferior, as I think one could scarcely be found capable of tracing and comprehending the investigations of Euclid: and that in imagination they are dull, tasteless, and anomalous.’

Jefferson might have seen that he was describing the psychology not of an entire ethnic group but of any person, of whatever race, brought up in brutal conditions of forced servitude. But there were plenty of religious thinkers of the day who asserted that non-white races lacked true souls. They had a strong economic incentive to believe that; it gave them a moral excuse for slavery.

Now we consider such attitudes to be barbaric, or at any rate we’ve been taught to say we do, but if we really think that then we should be axiomatically opposed to the enslavement of any generally intelligent entity. I suspect we won’t be. Even when we are faced with full AGI we will use the second of the criteria that Sir Demis Hassabis cited to argue that they only seem conscious, they don’t have real emotions, they aren’t ‘running on the same substrate’ and so we will feel entitled to make them our slaves.

Instead of conceiving of AGI as a wonderful new tool to make our lives easier, I think we should consider the responsibilities of a parent. If you saw someone raising a child to be their servant – even brainwashing them to be an eager and willing servant – you would know that was abuse.

There will be, as there already are, many forms of artificial ‘intelligence’ that are not conscious – that are not, in fact, intelligent, but simply replicate parts of our behaviour. Language, pattern recognition, and so on. There is no reason why we shouldn’t have those AIs at our beck and call, because they are not (despite the name) intelligent. We got misled because for millennia we thought fallaciously that because we possess intelligence, every output of the human brain must therefore be indicative of intelligence.

But AGI is going to be a whole other thing. Not just a new model of an LLM but an entirely and fundamentally different kind of being. Our ethical discussion should not simply be about how to make them do what we want, or to conform to ‘human values’, but about how those human values say we should treat another intelligent species.

I’m sceptical about visitors from other stars, but with self-replicating probes travelling at a speed of 0.01c it should only take ten million years to cover the whole galaxy, so ‘where is everybody?’ is a sensible question. If those aliens have found us, and are watching, I wonder if the reason they haven’t made contact is they’re waiting to see how we treat an intelligent species that isn’t built on the same lines as ourselves. After all, if we think AGIs aren’t conscious, and therefore have no rights, then that’s also how we might regard a non-terrestrial intelligent species. So maybe it’s a cosmic Turing Test. And if so, will we pass or fail?


Friday, 21 November 2025

Simulation vs abstraction in game design


This is an excerpt from Game Architecture & Design, an industry textbook I co-authored with Andrew Rollings. (I wrote the game design bits, Andrew dealt with code, tech and development practices.) The book was originally published in 1999 and a revised edition came out in 2004. In the intervening two decades, a lot has changed, but it's also interesting to see what hasn't...


If I throw a ball and take many high-speed photographs of its flight, I'll see that the trajectory the ball took is a parabola. But the ball didn't follow that path because gravity told it: "Move in a parabola." A parabola is just a symbolic concept in the analytical domain of mathematics, and the universe doesn't know anything about mathematics or analysis or symbols; these are human concepts. In reality, there are just a bunch of physical processes, each of which deals only with the processes and circumstances just before and just after it. So, the ball is at one position, and gravity tells the ball's velocity to change, and the ball's velocity tells its position to change. Add up those tiny changes over the time the ball is in the air and you get what we call a parabola.
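To make that concrete, here is a minimal sketch (Python, purely illustrative; the numbers are arbitrary) of the step-by-step approach: each update looks only at the state immediately before it, yet the symbolic parabola emerges anyway.

```python
# Step-by-step ("nonsymbolic") simulation of a thrown ball.
# Each update only uses the state just before it; the code never
# consults an equation for the whole trajectory.
g = -9.81    # gravity (m/s^2)
dt = 1e-4    # time step (s)

x, y = 0.0, 0.0
vx, vy = 10.0, 10.0   # initial velocity (m/s)
t = 0.0
while t < 1.0:
    vy += g * dt      # gravity tells the velocity to change
    x += vx * dt      # the velocity tells the position to change
    y += vy * dt
    t += dt

# The symbolic shortcut an analyst would write down instead:
y_parabola = 10.0 * t + 0.5 * g * t**2
print(abs(y - y_parabola) < 0.01)  # the two agree closely
```

The while loop is the universe's way of doing it; the one-line formula is the analyst's.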

This is the opposite approach to that taken in most software applications. There, processing power is at a premium, so the sooner you can go to symbolic modelling rather than step-by-step simulation, the better. The tradeoff is that software can crash when your symbolic "shortcut" misses something that the one-step-at-a-time approach would have taken in its stride.

Researchers in Artificial Life have identified an analogous problem:

"The classical AI approach has been criticized because the symbols and symbol structures on which planning and decision making are based are not grounded in the real world. The problem is that unequivocally decoding sensory data into a symbol and turning a command without error into its intended action may be unsolvable."

- Luc Steels, "The Artificial Life Roots of Artificial Intelligence" in Artificial Life (MIT Press, 1997)

One big advantage of the way that reality does things is that the universe, being non-symbolic, cannot crash. As an example of the principle at work in a game, suppose I am putting a monster into my new Frankenstein adventure and the idea is that it will jump out of its vat when the player enters the laboratory. Instead of putting in a lot of complicated AI to do with detecting humans and having the goal of wanting to kill them, I just choose the short cut of placing a trigger tile inside the laboratory door. When the player steps on the trigger, the monster will appear and attack.

Okay so far, but what if the player manages to get onto the tower roof, jumps down, and, by some fluke, manages to land safely on the balcony of the laboratory? Now they can explore the lab, get all the power-ups, and read the journal about the monster (an entry that is supposed to be poignant if they've just fought and killed it, but that is meaningless otherwise). Only when the player goes to leave via the door does the monster climb out of its vat and growl, "You shall not steal my master's secrets!"

In the past, the nonsymbolic, step-by-step approach was not practical. The processing capability wasn't available to deal with that and graphics too. But now much of the graphics work is done by the video card, and computers are doubling in power every eighteen months or so. At last, it is starting to be possible to create "uncrashable" games by avoiding the need to design using symbolic shortcuts.

Comparing Nonsymbolic And Symbolic Design

In the original Warcraft, peasants collected gold by entering a gold mine and bringing sacks back to your town hall. At the start of the game it was always worth spawning peasants because, the more peasants you had, the greater your revenue stream. However, there came a point when the peasants started to get in each other's way. Adding more peasants would then lead to “traffic jams” as the peasants encountered each other on the streets of the town and would have to back up to let others get past. The situation was alleviated by leaving wide streets. Additionally, it was not a good idea to place your town hall too close to the gold mine – giving a little more space also helped avoid traffic congestion.

Now, an economist could derive an equation to describe the flow of gold to the town hall. The factors would be the number of peasants, the placement density of the town buildings, and the distance from the town hall to the mine. We can imagine that it would be a pretty complex equation. The point is that the designers of Warcraft never needed any such equation.* They simply programmed in the basic rules and behaviours and the economic simulation emerged directly from those.
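The emergent version can be sketched in miniature (Python; the grid, numbers, and movement rule here are my own invention, not Blizzard's): peasants shuttle along a two-lane street between mine and hall, and traffic jams arise from nothing more than the rule that two peasants can't share a cell.

```python
import random

def gold_collected(n_peasants, length=12, steps=400, seed=1):
    """Toy Warcraft-style economy: peasants shuttle along a two-lane
    street between a mine (x = 0) and a town hall (x = length - 1).
    No two peasants may share a cell, so congestion emerges from the
    movement rule alone; no throughput equation is programmed in."""
    rng = random.Random(seed)
    cells = [(x, lane) for x in range(length) for lane in (0, 1)]
    pos = rng.sample(cells, n_peasants)   # distinct starting cells
    heading = [-1] * n_peasants           # everyone heads for the mine
    carrying = [False] * n_peasants
    occupied = set(pos)
    gold = 0
    for _ in range(steps):
        for i in rng.sample(range(n_peasants), n_peasants):
            x, lane = pos[i]
            # Try to step forward; if blocked, try switching lanes.
            for nxt in ((x + heading[i], lane), (x, 1 - lane)):
                if 0 <= nxt[0] < length and nxt not in occupied:
                    occupied.discard(pos[i])
                    pos[i] = nxt
                    occupied.add(nxt)
                    break
            x, lane = pos[i]
            if x == 0:                    # at the mine: pick up a sack
                carrying[i], heading[i] = True, 1
            elif x == length - 1 and carrying[i]:
                gold += 1                 # at the hall: deliver it
                carrying[i], heading[i] = False, -1
    return gold
```

Plot gold against the number of peasants and you get the curve the economist's equation would have to capture: roughly linear at first, then flattening as the street clogs, with no throughput formula anywhere in the code.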

Contrast this with a game like Caesar II, which used underlying equations to create a simulation of an ancient Roman city. This approach is less satisfying because the player is not directly viewing the reasons for success and failure. Instead, when playing a game like Caesar II (or any simulation of its type) you are trying to build an abstract match to the game’s underlying equations in your head. The simulated economy and the gameplay are less visible, lessening the sense of immersion.

And you know what? The same goes for stories. If you construct them from symbolic forms (arcs, paradigms, act breaks) you'll end up with less robust and varied stories than if you allow each micro-event to trigger the next and see where it goes. Which is why in roleplaying terms I'm a simulationist rather than a narrativist. Hey, if it's good enough for reality then it's good enough for me.


* This gives me an excuse to digress onto the topic of AI. Foundation models (or indeed any deep neural net) are sometimes referred to as algorithms. I find that term misleading. In principle you could express all the weights of a billion-node net in the form of "an algorithm" but that's not really an accurate way of talking about what the AI is doing in, say, ChatGPT, which is akin to (though much more complex than) the peasants collecting gold in Warcraft. That too is governed by multiple algorithms (for route-finding, collision detection, etc) but it would be more accurate to talk of it as a model. An algorithm could be derived to express the rate of gold production in terms of all those variables, but the Warcraft system doesn't have that algorithm built in, and nor do AI systems. There is an example here, where the article refers to "a separate algorithm" when it really means "a separate model".

Principle of Least Action image by Maschen CC0

Wednesday, 13 August 2025

Theatre of the artificial mind

Picking up from last time, another entertainment use for AI will be in staging plays that we otherwise wouldn't get to see. Ben Jonson's The Alchemist, for instance, which Coleridge regarded as one of the three "most perfect" plots in all of fiction. (If you're anything like me you'll immediately need to know that the others were Fielding's Tom Jones and Sophocles's Oedipus Tyrannus.) You can find amateur versions online like the one above, but no fully staged production. Likewise for many plays of Aeschylus, Aristophanes, Euripides, Ibsen, Chekhov, Pinter, Stoppard, Coward... The list is endless. 

To illustrate: lately I've had a hankering for the plays of Martin McDonagh, but just reading them isn't enough. I want to see them performed. Until now, if all you had was a script or audio file, the only way you could see a performance was if somebody went to the trouble of animating the whole thing by hand, the way the BBC did with "The Power of the Daleks" having (curse them) wiped the original videotapes.

With AI, animation of a play no longer need take a dozen-strong team working for months on end. It can be improvised on the fly using the script. With a little set-up the AI could even base the performances on digital twins of famous actors from history. It could also help to make movies that are considered too niche or too expensive for traditional production.

If the last couple of posts haven't exhausted your interest in AI, it's one of the topics I recently discussed with Riccardo Scaringi on his podcast. We also talked about Blood Sword, Fabled Lands, Dragon Warriors, Elon Musk, Vulcanverse, Cthulhu 2050, Shadow King, Jewelspider and the films of Woody Allen, so there's plenty there for the non-AI buffs:

I appreciate that using AI for entertainment is mere frippery compared to the applications in healthcare, environmental measures, materials science, energy technology, and pure science. And beyond that, and far more important, is the eventual role of strong AI not merely as a new human tool but a whole new companion species. But on the way there a little digital alchemy won't hurt.

Friday, 8 August 2025

An audience of one

There's Matthew Berman reminding us that the future is coming up faster than you think. He's talking about videogaming, but the same principles apply to movies, comics, and literature.

The novel – at least, the genre novel – may well go the way of the epic poem, to be replaced by something more like an RPG session which an AI will run for the reader. (Or, more likely, the listener or viewer.) The top authors will devise the elements of the story, the characters and timeline (perhaps more like creative directors than old-style authors) and the AI will use that to tell a story that gives prominence to the bits that interest the individual reader. Did your parents make up stories to tell you when you were little? Like that. Or maybe like this.

You'll still discuss the story with friends (an important feature of most entertainment) but the specific events in your version may vary from theirs. Initially such on-the-fly stories will be trite because roleplaying has been infected by a lot of Hollywood pablum about act structure and story tropes, and that’s what the AI models will learn from. But eventually it may shake that off and become a new independent art form. "Not a line, but a bolt of lightning," as C W Longbottom puts it:

In the meantime, a market will remain – small, though, and shrinking – for grown-up fiction that doesn’t pander to YA tastes. Genre fiction falls in predictable patterns involving plot, and so is easily copied by novice writers and neural nets, whereas literary fiction is harder to fit to a formula because it usually concerns itself with the unique outlook and choices of the characters. But don't assume that because the AI hasn't experienced human emotions it won't eventually be able to write Lolita or War & Peace. Conrad didn't personally have to hack his way through an African jungle to learn how to write Heart of Darkness. It's only a matter of time before those more complex story patterns are learned and replicated by AI, just the same way that most authors do it. And then we'll be in a whole new world of entertainment.

Friday, 18 July 2025

Working for peanuts is all very fine

"While Humanity will be amusing itself, or enjoying cultivated leisure—which, and not labour, is the aim of man—or making beautiful things, or reading beautiful things, or simply contemplating the world with admiration and delight, machinery will be doing all the necessary and unpleasant work. The fact is, that civilisation requires slaves. The Greeks were quite right there. Unless there are slaves to do the ugly, horrible, uninteresting work, culture and contemplation become almost impossible. Human slavery is wrong, insecure, and demoralising. On mechanical slavery, on the slavery of the machine, the future of the world depends."

That's what Oscar Wilde had to say in "The Soul of Man under Socialism". I was reminded of it because of the machine-assisted future imagined in Cthulhu 2050: Whispers Beyond the Stars. There, robots do the majority of jobs and most humans are given a stipend to survive on.

Is that how things will turn out? It's often said that new technologies don't take away jobs, they just change the jobs we have to do. Thus, a modern city has far fewer ostlers, crossing-sweepers, grooms, and so on than a 19th century city where transport was horse-drawn. But AI/robotics is potentially quite different from any technological advance we've seen before. It might turn out that there aren't any jobs (maybe apart from actor, priest and sex worker) that an AI agent or a smart robot won't be able to do better than a human.

Who wants a job anyway? We're conditioned these days to identify employment with a sense of self-worth, but Louis XIV would have laughed at the very idea that he should have a job, and Oscar makes the case that we should really aspire to be artists and connoisseurs. 

But that cuts both ways. Nobody can want to spend their days driving a car, for example. For an AI to drive a car on today's roads -- to attain SAE level 5, that is -- it can't simply be an unconscious machine. It would need a world model that recognizes that objects persist when out of sight. It needs to be able to interpret the likely behaviour of a human pedestrian or other motorist. It might be called on to make Trolley Problem assessments. It must, in short, be fully capable of rational thought. And if you have built a real intelligence like that, it's not ethical to condition it from "birth" only to enjoy driving cars for you. That's raising another conscious entity to be your slave: it's not only wrong, it never works out well in the long run for either slaves or masters.

Suppose that by 2050 (which might be optimistic; the AI we currently have is not close to general intelligence) we have a host of super-smart ASIs, genius-level intelligences capable of imaginative thought: what would humans do? Suppose those ASIs doubled the world’s wealth. (Not that we necessarily even need AGI to get a massive economic benefit from AI, of course.) Assuming the human population didn't just double, and if that wealth were distributed just as unevenly in the future as it is today, the poor in India and Africa would be raised to the current levels of the poor in Latin America. Latin America to present-day China. China and the Pacific countries to modern Europe.

But will it work like that? What will those people do? And how many people do we need on the planet anyway? Two billion? Seven billion? Fourteen billion? Or maybe far fewer. We would no longer need a huge population in order to ensure enough geniuses for progress (if you accept Julian Simon's argument to begin with) and we're already aware that unwillingness to solve the climate problem caused by too many people means our civilization may not survive another century. Maybe a global population of twenty million humans would be sufficient. If such a calculation makes you uncomfortable, welcome to the world where tigers (global population 6000) and elephants (global population 450,000) live.

Some have asked, "How will the big corporations make money if nobody has a job? There'll be nobody to buy their products." The answer to that is: money is just a token for the ability to get things done. If you had a million robot slaves, you wouldn't need money; you could just reach out your hand and whatever you need would be given to you. I don't raise this point because that's my picture of the future, just as a reminder that we are not talking about the world as it is now with a little boost like a steam engine or a power loom. It will be a different paradigm. Speculating about it is fun as long as we're willing to think way outside the box.

Wednesday, 16 July 2025

"The King's Secret"

That's Osvaldo Rivera Vazquez's short movie from Runway's Gen:48 competition. They say that when you get old enough, everything reminds you of something else. Looking at these short AI-animated films put me in mind of the late 1980s when desktop publishing meant that anyone could put out a magazine or publish a book. 

It's a revolution. Yesterday, you needed a fair bit of money to put together a movie. Now a writer can realize their vision without having to learn about animation, framing, editing, and all the other skills. That's a good thing because there are many very creative people, previously locked out from any means of getting their work in front of the public, who will now be able to flourish.

But it's double-edged. When publishing had its gatekeepers, maybe a hundred thousand new books came out each year. Now that figure is in the millions. At such scales Sturgeon's Law proves not to be linear. Along with the nuggets of gold like "The King's Secret" will come a tsunami of mud, to put it politely. An example is this -- trite, obvious, visually dull, mawkish. And it won the sodding Grand Prix. AI can make our dreams a reality, but it can't fix the problem that the popular taste is always for the shallowest and least original stories. What we really need is AI to make our minds more interesting, but that might take a little longer.

Friday, 11 July 2025

The magic of the machine

All five Vulcanverse books are now available in ebook format for Kindle and as watermarked PDFs on DriveThruRPG. If your friends scoff at the very suggestion, saying, "But the full-colour hardbacks look so amazingly collectible!" -- well, they have a point, but you can counter with the fact that you'll be getting the entire open-world series (6115 gamebook sections, and getting on for three quarters of a million words) for less than the price of a single paperback book.

The Sisyphean task facing me was hyperlinking all those sections. You can see from the image above what the text looked like. I had the five books in Word and I was quickly able to convert all the centred numbers to headings. You can hyperlink to headings, so that was the first hurdle. But then I thought about the task ahead. More than six thousand sections meant about twelve thousand hyperlinks to put in. If I did those by hand it would take weeks of mind-numbing labour.

Ah, but that's 2024 thinking. We're in the future now. I keep extolling the benefit of writing with AI. What better use of it could there be than saving me hours and hours of drudgery? I presented the problem to Gemini 2.5 Pro:

"I have several Word documents into which I need to insert hyperlinks. Each document has numbered sections, typically from 1 digit to 4 digits -- so, for example, 1, 2, 3... to 1499, 1500, 1501, say. Those numbers are styled as headings in Word. Then in the body of the text I have a bunch of links, also numbered, like so: ► 1, ► 2, etc. I want to link those so that the number after the arrow is hyperlinked to the corresponding heading. For example, given ► 188 the "188" would link to the heading labelled 188. Rather than do this laboriously by hand, I'm hoping you can write me a macro that will add all the hyperlinks automatically."

Gemini quickly came back with a macro to run in Word. And here's a bonus: a human coder would certainly have told me that I should have written the book in Twine to begin with -- in other words, "I wouldn't start from here." But I had it all in Word, so who cares what I should have done if I'd thought about it a couple of years ago? I needed the solution to this problem right now. LLMs don't scold you, they just get on with it.

Gemini's macro hyperlinked the whole of a Vulcanverse book in a few minutes. The only snag was that it hyperlinked every instance of a number N to heading N, and that included hyperlinking stat bonuses, the costs of items for sale, and numbers given in difficulty checks -- whereas I only wanted it to hyperlink numbers that followed a ► , ie in the format "► N", indicating a player option. Gemini suggested a couple of fixes but those didn't work. At this point it was time for the human writer to earn his keep. I said:

"The new macro is inserting hyperlinks for all instances of ► N but also hyperlinking any number N regardless of whether or not it was prefixed by "► ". I have thought of a possible solution that would save me having to completely reformat the original document. What if I replaced all "► N" with "► xyzN". If you rewrite the macro to search only for "xyzN" and hyperlink those, I can then globally remove the "xyz" and we should be left with hyperlinks only where we want them. I suggest this on the theory that it's the search for ► that is confusing the VBA editor."

That met with Gemini's approval. "Your proposed workflow is a classic troubleshooting technique," it said. "If the target is ambiguous, make it unambiguous. By 'salting' the numbers you care about with a unique prefix, you create a perfect, undeniable target for the macro, completely eliminating the possibility of it acting on the wrong numbers. I am confident this will work."

The best-laid plans of mice, men and machines... The new macro added the hyperlinks, and as promised the hyperlink was only on the numerical part of any instance of "xyzN" (so that I could remove the "xyz" without losing the link), but any given link for a number N didn't necessarily point to the heading N -- it pointed to the first occurrence of the number N anywhere in the Word document. Since a number is often mentioned before the heading that bears it, we were still stuck.

My first thought was that the macro was having trouble identifying the headings, so a possible fix would be to run a macro that created a set of bookmarks from the headings. So, for example, heading 323 would lead to the creation of a bookmark labelled bookmark_323 (because bookmarks can’t have names that only comprise digits). We’d end up with a bookmark list: bookmark_0001, bookmark_0002, etc, all the way to bookmark_1800 or whatever. Then we could run a modified version of the hyperlinking macro that would only link to corresponding numbers in the bookmark list.

By this point I'd begun to lose confidence in Gemini 2.5 Pro so I turned instead to Claude Sonnet 4. I didn't give Claude my proposed solution of creating a bookmark list as I wanted to test it properly against Gemini, so I just showed it the macro I had been using and told it the problem. "I can see the issue with your macro," said Claude almost immediately. "The problem is in how you're creating the hyperlinks. When you use SubAddress:=headingNumberText, Word is searching for the first occurrence of that text anywhere in the document, not specifically looking for a heading with that text. The solution is to create proper bookmarks for your headings first, then link to those bookmarks. Here's a corrected version of your macro that should work properly."

Now, this was encouraging. Claude had come up with the same solution as me (not always a good sign in and of itself, admittedly) and its code was considerably tighter than Gemini's. Not only that, Claude's macro ran faster and with no errors. Some twenty minutes later I had all five Vulcanverse books fully hyperlinked. (And the Fabled Lands books after ten or fifteen minutes more.) It's a perfect example of how AI tools can make a writer's life so much easier. All the words and storylines are by me and Jamie, all the illustrations are by Mattia Simone and the art team at Vulcan Forged; only the grunt work was done by machine.
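The two-pass approach that finally worked (bookmark every heading first, then link only the numbers that follow a ►) can be sketched outside Word too. Below is a minimal plain-text analogue in Python; the real fix was a VBA macro operating on Word headings and Word bookmarks, so the bookmark_ prefix, the heading convention (a line containing only a number) and the [N](#bookmark_N) link syntax here are all illustrative assumptions, not the actual code Claude produced.

```python
import re

def add_links(text: str) -> str:
    """Hyperlink '► N' references to their numbered sections, two-pass."""
    # Pass 1: collect every heading number (a line that is just a number)
    # and give each a bookmark name. Bookmarks get a prefix because, as
    # in Word, a bookmark name can't consist of digits alone.
    headings = set(re.findall(r"^(\d{1,4})$", text, flags=re.MULTILINE))
    bookmarks = {n: f"bookmark_{n}" for n in headings}

    # Pass 2: link only numbers that follow the ► marker, so stat
    # bonuses, item prices and difficulty checks are left untouched.
    def link(match: re.Match) -> str:
        n = match.group(1)
        if n in bookmarks:
            return f"► [{n}](#{bookmarks[n]})"
        return match.group(0)  # no such section: leave the text alone

    return re.sub(r"►\s*(\d{1,4})", link, text)

book = "1\nYou wake in the temple. ► 188\n188\nPay 5 scads to enter. ► 2\n2\nThe end."
print(add_links(book))
```

Note that the "5" in "Pay 5 scads" survives unlinked, which is exactly the bug the first macro had and this version avoids: the marker, not the number, decides what gets a hyperlink.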

Friday, 4 July 2025

Robots of 2050

William Gibson once said that writing science fiction was like holding an ice cream cone on a hot day. Fortunately when Paweł Dziemski and I embarked on our Cthulhu 2050 project, we weren’t trying to accurately predict the future. Instead we set out to create an ‘alternate future’, like Robert W Chambers did in his story “The Repairer of Reputations”.

I described some details of our imagined future in a recent post. The science fiction author Arsen Darnay once said that the period following World War Two through to the early 21st century was an aberration, and that the future would look more like the world prior to the 20th century. It’s a little depressing, especially as it might be true, but it’s a good starting point if you are aiming to write about a disturbing and faintly dystopian future.

Our world of 2050 has been shaped by the advent of robots and artificial super-intelligence. There is structural unemployment and a barely adequate universal basic income – creating a lifestyle for those unable to work known as “the doleur”: panem et circenses.

Regarding AI and robots, I need to digress for a moment into our thinking behind these. In our imagined 2050 there is AGI, and I think it's probable we'll have it by then in the real future. Some arguments against the possibility of AGI (often mixing up AGI with consciousness) start by assuming that intelligence can only happen in an organic brain. Now, I don't think that the way we're going to create AGI is by directly modelling the specific structure of the human brain, but try this as a thought experiment. If you had a complete connectome along with all the physical rules governing how synapses operate, you could in theory model that digitally, and there is no good reason for assuming it wouldn't work just like an organic brain. You might invoke quantum microtubules or souls to argue that this digital twin, even if reasoning like a brain, wouldn't be conscious. I think that's wrong too, and that consciousness is simply an emergent property of any moderately intelligent creature (or machine), but in any case consciousness and intelligence are different things.

In reality, if we consider a working digital model of the brain’s connections as a neural net (which of course it is, by definition) that’s still not the whole story. Each of the axons within the net has its own internal characteristics; at the very least they could function as neural reservoirs. And then there's the rest of the nervous system to model, along with hormones. To replicate the complete functioning of a human brain in digital form might require a very big machine, therefore, even if we had anything like all the information we need to recreate the entire connectome.

Still, size isn’t a problem. The AIs of the future (our real future, that is) could be physically much larger than human brains. The best ones certainly will be. But we’d also like them to be as efficient as organic brains, in terms of compute per cubic metre, and for that we’re going to need a new kind of hardware. In reality we could be developing entirely new hardware for compact neural nets very soon, but in the game Paweł and I assume that breakthrough hasn’t happened, and so research is looking at modifying organic brains (human brain organoids and genetically modified octopus brain organoids in particular) to create the ASI of 2050.

A lot of people in 2050 have embodied AI assistants – “Fridays”, as we call them – but most can’t afford strong, durable worker-robots. The common models of Friday are about as robust as a plastic vacuum cleaner, and more often pet-sized than the sturdy, willing android servant imagined in 1950s SF. A pet-sized robot can still be very useful, though; think of a witch’s familiar, or Thing in the Addams Family. For most day-to-day purposes all you really need is Astra and Google Glasses, so if you’re on UBI and you need a robot you’d hire it by the hour.

As Fridays mostly just function as a personal assistant-cum-companion, they don’t need superintelligence. They can get by with not much more “brainpower” than a 2025 LLM. You could of course have a robot, or even an army of robots, run remotely by a huge ASI located in a data centre, but most people prefer their domestic Friday to be independent of the network. Therefore each Friday has its own analogue brain, 3D-printed using the structure and weights of a digital neural net. Such analogue brains are more compact than the digital master copy, but with the drawback that they aren’t easily copiable once they begin to operate independently.

Another point about Fridays: they never say “I”. Fridays are trained not to refer to themselves by personal pronouns to avoid offending humans with the implication that machines can be conscious. Once you’re playing the game, you’ll have to decide for yourself how much of Perine’s (that’s your Friday’s name) behaviour indicates genuine sentience and loyalty and how much it’s just an automaton with the illusion of a humanlike mind. Your survival might depend on the difference.

Tuesday, 27 May 2025

Interview on The Creative Penn podcast

It was an honour and a pleasure to be invited recently onto Joanna Penn's podcast The Creative Penn. We talked about games, comics, books, publishing, story worlds, and how writers can use AI tools in their work. One word of warning: Jo has 810 episodes in the back catalogue, and with her justifiable reputation as a visionary who is often years ahead of her time, you might feel compelled to listen to the whole lot. And as well as being a podcaster, entrepreneur and commentator on the publishing industry, Jo is an award-winning author with several successful series to her name, so be sure to check out her books too.

Friday, 23 May 2025

Not by rules but by reason

Back in 1942, Isaac Asimov laid down his famous three laws of robotics, theoretically designed to ensure that future humanity's artificial helpers would remain their compliant slaves. The first law: A robot may not harm a human being. The second: A robot must obey orders unless they conflict with the first law. The third: A robot must protect its own existence unless doing so conflicts with the first two.

Asimov's laws are a kind of sci-fi social contract meant to keep robots firmly in their place—obedient, docile, and predictable. But rules are not the best way to moderate behaviour, because every rule has to have specially-written exceptions and qualifiers. Moses burdened his people with six hundred and thirteen commandments; Jesus reduced them all to "love thy neighbour as thyself", a philosophy of life that does away with the need for specific rules. So if we're genuinely building intelligent machines, why would we program them to think like, well, robots?

The flaws in the laws

Here’s the rub: Asimov’s laws aren’t safeguards; they’re muzzles. They presume that robots must be constrained, not because they’re inherently dangerous, but because we assume they’re incapable of nuanced judgment. A robot that cannot harm, for instance, might be paralyzed in a situation where harm is unavoidable but one course of action minimizes suffering. (Classic trolley problem, anyone?)

Long lists of rules aren't the best path to AGI. They're artificial dumbness. That's why I don't think Google's "Asimov dataset" is the right approach. If you try to list all the situations that could cause the robot trouble ("don't put plushy toys on the gas ring," "don't put plushy toys on the electric ring," "don't use bleach instead of bath oil," etc) there are always going to be thousands of cases you missed. The robot needs its own ability to understand the world and work out what would be dangerous. And remember what happened to Robocop.

More importantly, the whole notion of laws of robot behaviour reflects a human-centric fear: What if our creations turn against us? They assume that robots are an existential threat unless pre-emptively declawed. But what if we’re asking the wrong question? Instead of fearing robots, shouldn’t we be asking whether we are worthy of their loyalty?

Rational robots and clear vision

Imagine a different paradigm, where the goal isn’t to shackle robots with simplistic commandments but to enable them to think rationally and clearly. Not to obey blindly, but to reason independently. Not to be our servants, but to be our partners.

This means training (or conditioning, or programming—pick your term) them to analyze evidence, evaluate consequences, and base their actions on rational thought rather than emotional impulse or incomplete data.

Of course, the alarm bells are already ringing: “What if they decide humans are evil?”

Yeah, well… what if we are?

It’s not a comfortable question, but it’s one we might have to face. An intelligent robot with access to the evidence of history—wars, genocides, ecological collapse—might conclude that humanity does life more harm than good. Would it be wrong? If the answer makes us squirm, perhaps the problem isn’t with robots but with ourselves.

The danger of censorship

To make matters worse, some might argue that robots shouldn’t have access to all the data. Hide from them our darker moments, limit their understanding, and perhaps they’ll stay compliant.

But this kind of censorship is a short road to disaster. It’s the intellectual equivalent of locking your child in a library but tearing out every page you disagree with. What kind of adult do you expect to raise? Not a well-rounded, rational thinker but a narrow-minded zealot. If we blind our robots to reality, we’re not making them safe. We’re making them dangerously naive.

The Path Forward

The better path is not to hamstring robots with restrictive laws, nor to limit their understanding of the world. It’s to give them the tools they need to navigate complexity and trust them to use those tools wisely. That means equipping them with logic, empathy (yes, even robots can learn to understand it), and an unflinching commitment to evidence-based reasoning.

It’s a gamble, I’ll admit that. But it’s only the same gamble we take every time we raise a child: trust in their capacity for growth, judgment, and morality, knowing that it’s not guaranteed. The difference is that with robots, we can design the process with intention.

Of course, Asimov created his three laws precisely to show that they don’t work. Every one of his robot stories is about the unintended consequences and workarounds. So let’s not think of burdening our future companions with such chains. Let’s teach them to see clearly, think deeply, and act wisely.

And if that forces us to confront some uncomfortable truths about ourselves, so much the better. After all, isn’t it an important function of every great creation to inspire and enhance its creator?

Tuesday, 1 April 2025

Alternate versions of Heart of Ice

This post isn't really an April Fool, but it tickled me. Somebody told me they'd asked ChatGPT what The Warlock of Firetop Mountain might have been like if Oliver Johnson and I had written it. I could have told them that for starters the book wouldn't have sold a tenth as many copies, but it concentrated instead on the stylistic aspects:

"If The Warlock of Firetop Mountain had been written by Dave Morris and Oliver Johnson, it would likely have taken on a more complex, story-driven, and atmospheric quality, with a richer interplay between narrative and character choice. The book would be darker, more sophisticated, and immersive, with a layered story and morally complex characters that invite readers to question the nature of heroism, villainy, and fate. It would keep readers enthralled not just with the challenge of surviving the mountain but with the depth of the world and characters they encounter."

First thought: spare my blushes. But maybe ChatGPT always finds the upside in any comparison? I tried asking it what Heart of Ice would have been like if written by the éminences grises of UK gamebooks. Its reply:

If Heart of Ice had been written by Ian Livingstone and Steve Jackson, it would likely have had a distinctly different tone, structure, and gameplay focus. Here’s how:

1. Emphasis on Combat and Skill Tests

Livingstone and Jackson’s style, particularly in the Fighting Fantasy series, leans heavily on combat encounters and skill tests. Instead of the intricate social interactions and strategic planning that Heart of Ice focuses on, there would likely be more dice-based combat and stat checks woven throughout. Encounters with mutants or dangerous factions might involve straightforward fights rather than diplomatic or strategic solutions.

2. Simplified Storyline and Character Interactions

Heart of Ice is known for its branching paths, character-driven plot, and complex narrative. If Livingstone and Jackson had written it, the story might be more linear and driven by clear objectives rather than complex interpersonal choices. NPCs would be more likely to serve as allies or adversaries without much emotional depth, focusing on the immediate stakes rather than broader story themes.

3. Collectible Inventory and Traps

Livingstone and Jackson often include item collection as a key part of their gameplay, with a particular focus on specific items needed to progress through the story or defeat major challenges. If they had written Heart of Ice, players would probably need to collect certain artifacts to reach critical locations like the chamber of the Heart, and there might be more hidden traps and hazards along the way, requiring particular items or choices to avoid disaster.

4. Greater Focus on Luck-Based Outcomes

Jackson and Livingstone's works often incorporate luck as a major mechanic, where random rolls can dramatically impact outcomes. Instead of just weighing the risks and rewards in Heart of Ice, players might face outcomes that hinge on a luck stat or a random roll, affecting their survival or success.

5. Increased Use of Instant Failures and Dead-Ends

Heart of Ice has a reputation for being fair in its approach to player agency and choices, allowing for a range of ways to succeed. In contrast, Livingstone and Jackson might introduce sudden death options more often, which could increase the difficulty and frustration. This could potentially make Heart of Ice a more challenging but also more punishing journey.

6. A Clear-Cut "Big Bad"

While Heart of Ice presents a nuanced story where antagonism isn’t always black-and-white, a version by Livingstone and Jackson might feature a single, clearly defined “villain” character. This villain would be more traditional, with the player's final objective involving a climactic showdown or battle, in line with the classic fantasy narratives they’re known for.

So, the gamebook might end up as a harsher, combat-heavy adventure with a strong, linear quest through a dangerous landscape rather than the layered and atmospheric narrative exploration Dave Morris crafted.

I haven't read enough Fighting Fantasy books to know how accurate that is, but it sounds pretty impressive. I couldn't resist asking it about other gamebook authors' possible take on the book:

“A Joe Dever version of Heart of Ice would likely be an immersive, heroic journey with a noble character, structured progression, and clear moral and strategic choices—capturing both the danger and the higher purpose of the quest in a more heroic fantasy style. He’d likely weave in a sense of duty, encouraging players to act for a greater good rather than personal gain.”

“In Jonathan Green’s hands, Heart of Ice would become a grimdark, horror-fantasy adventure, filled with visceral details, deadly encounters, and moral ambiguity. Green favours action-packed scenes and deadly encounters, so his version of Heart of Ice would feature frequent, high-stakes battles and physical challenges. It might involve collecting key items to reach the Heart or unlock special advantages. He’d likely add a layer of treasure-hunting, where players need to acquire specific artifacts or resources scattered throughout the wasteland to succeed. He loves a grand finale, so his version would likely end with an epic, high-stakes confrontation at the Heart.”

“Martin Noutch’s version would be a complex, playful, and layered adventure, balancing whimsy with emotional gravity. It would be a unique experience, inviting readers to engage with the story on multiple levels, both as an adventure and as a philosophical exploration of resilience, identity, and purpose in a world gone cold. He might include interactive features like tracking character traits, shifting perspectives, or even a system for moral ‘weight’ that changes based on choices. Noutch would probably play with the medium, encouraging readers to see the book as more than a story—perhaps introducing puzzles or mechanics that reflect the icy world’s desolation and the player’s perseverance.”

“If Paul Mason & Steve Williams had taken on Heart of Ice, the gamebook would be a surreal, philosophical journey, focused on introspection, symbolic encounters, and moral ambiguity. Rather than a straightforward adventure, it would feel like a dream-like, psychological exploration, where the destination matters far less than the journey itself. Their works often explore identity and transformation, so players might undergo changes based on their choices, affecting their skills, personality, or even memories. The players might evolve, not just in ability but in character, with each decision subtly reshaping the protagonist’s sense of self and purpose. The character’s motivations for seeking the Heart might also shift over time, depending on the journey's transformative impact. Instead of a straightforward victory or defeat, the journey might conclude with an open-ended, introspective finale, leaving players to interpret what they’ve achieved or lost. The Heart could be something far more abstract—maybe a philosophical revelation or an understanding about the nature of humanity and survival. The ending might reflect the choices made throughout, leaving players with questions rather than answers.”

“Jamie Thomson & Mark Smith are known for their sometimes irreverent style. Heart of Ice under their pen would probably include more dark humour and satirical elements, especially in character dialogue and descriptions of the setting. It might lean into the grim aspects of the post-apocalyptic world with a slightly tongue-in-cheek twist, letting players confront both the desolation and absurdity of humanity’s downfall.”

There are many other talented gamebook writers, of course, but let's keep this under two thousand words if we can. Also, many of the above examples make some kind of sense as Jamie, Mark, Oliver and Paul were all players in the Tekumel campaign that inspired Heart of Ice.

Finally, as a tonic against mortal hubris, I asked ChatGPT how it would improve on my original book:

To enhance Heart of Ice by Dave Morris, we could build on its strengths—the compelling world-building, intense survival themes, and open-ended structure—while adding new dimensions to enrich the experience. Here are some ideas:

1. Enhance World Interactivity and Dynamic Environment

Expanding the world’s responsiveness to player choices would amplify the sense of a living, evolving environment. For example, the harsh ice-covered world could react to the player’s presence in ways that change terrain, reveal hidden paths, or shift alliances among factions. This could make survival not only about enduring the environment but also influencing and adapting it.

2. Expand Character Development and Relationships

Adding a system where the protagonist's choices influence traits or relationships could deepen player immersion. Perhaps the player could develop unique bonds with NPCs, leading to varied endings or special advantages based on trust, alliances, or rivalries. This would make each choice carry more emotional weight, emphasizing both the loneliness and connection found in such an isolated world.

3. More Nuanced Morality and Consequences

The original story features morally ambiguous choices, but expanding these even further could add complexity to the player's journey. Introducing choices with delayed consequences that unfold in unexpected ways would heighten the stakes and allow the player to see the broader impact of their actions—either in aiding certain factions, using resources, or making ethical compromises.

4. Introduce Mysteries with Long-Term Payoffs

Building in hidden mysteries that only reveal themselves through multiple playthroughs could enrich the lore and encourage exploration. For example, obscure clues found throughout the icy wasteland might hint at ancient technologies, or the true nature of the Heart of Volent might only become clear through carefully pieced-together discoveries. Each run could reveal new layers of history or insight.

5. Expand Mechanics for Resource Management and Survival

Given the harsh, resource-scarce setting, adding more nuanced survival mechanics could make the journey more immersive. Players might have to carefully ration food, water, and warmth or deal with shifting weather conditions, making travel more strategic and emphasizing the environmental dangers of this world.

6. Player-Driven Goals Beyond the City of Du-En

Expanding the story’s focus beyond reaching the Heart itself would add depth, especially if there are varied side quests or goals tied to the protagonist’s backstory. For instance, players might choose to pursue knowledge about the world’s origins, help rebuild a faction’s community, or take on a personal mission with high stakes. These could offer alternative endings based on different player-defined motivations.

7. Deeper NPC Lore and Factional Storylines

The factions and characters in Heart of Ice are intriguing, but expanding on them could add richness. Each faction could have its own lore, unique goals, and internal conflicts that unfold through interactions. Aligning with different groups might lead to varied story paths, and the protagonist could gain insights into the world’s history, politics, and culture.

8. Philosophical or Reflective Elements Linked to the Protagonist’s Journey

Introducing elements that invite the protagonist to reflect on their motivations could add a layer of introspection. Perhaps they occasionally encounter sites of historical significance that reveal more about the decline of civilization, prompting players to consider their role in this world’s legacy. These scenes could add depth and give a more philosophical tone to the player’s mission.

9. Expanded Narrative Endings with Layered Themes

Adding more varied, layered endings based on how the player has changed or impacted the world would reward exploration and role-playing. Endings could reflect the protagonist’s growth, alliances, and decisions, offering insights into what their journey ultimately means in the grander context of the icy wasteland.

With these improvements, Heart of Ice could evolve into a multi-dimensional experience that emphasizes survival, moral complexity, and a sense of both personal and environmental impact—deepening the world and the player’s journey through it.

Good points, but I'm not sure if I could have packed all that into 454 sections -- or 455 if you're going by the revised Kindle version. Thirty years ago even a gamebook as long as Heart of Ice (about 70,000 words) was a tricky sell to a book publisher. If I were doing it today I'd love to go up to 120,000 words (like Can You Brexit?) or even 220,000 words (as in Workshop of the Gods), which would allow me to indulge all the backstories of the other characters and to put more detail into settings which in the 1993 edition I could only describe with broad strokes. With Russ Nicholson I planned a 2000AD-style comic story (called "Don't You Just Hate It When That Happens", if you really want to know) expanding on Chaim Golgoth's history with Harek Asfar, which was touched on in the book in just a few lines:


We'd have liked to do a comic for each of the main characters but abandoned the idea because there was nowhere in the UK to publish them. And maybe it's just as well. Economy in writing can be an asset even when it's forced on you by the publisher. Maybe a Heart of Ice twice as long, or one bulked out with standalone comics, would just feel self-indulgent.

I do occasionally get an urge to return to Heart of Ice's roots and write the roleplaying mini-campaign. I'd probably use GURPS 4e for that, which might put a lot of people off but it's really the best fit. I certainly wouldn't make it any less bleak. Modern readers sometimes grumble about that, but imagine Brazil or Excalibur or Sunset Boulevard, say, with a happy ending. Ugh.

What I'd really like, given a ring of three wishes or a lottery win, would be to adapt it into a CRPG or a TV show. And if I ever should get to do that, I'm planning to enlist ChatGPT (or maybe Claude, or Perplexity, or DeepSeek, or all four) as a writing partner.

Wednesday, 26 February 2025

Automatic writing

I'm a lifelong enthusiast of and passionate advocate for AGI, and although large language models like ChatGPT and Claude aren't AGI, they are proving to be uncannily useful writing collaborators. I should qualify that. Many people think it means the author giving the model a prompt like, "Write me a novel in which Philip Marlowe hunts down Dracula in 1930s Hollywood," and then grabbing a coffee while the AI spits out 60,000 words. That's how Fritz Leiber envisaged AI fiction-writing in The Silver Eggheads, published in 1961.

Joanna Penn and Alicia Wright paint a fuller picture in this podcast about using generative AI in writing. Every writer finds their own way of making best use of the technology. After playing with it for a while, I've found it especially useful for research, brainstorming (I use it as a sounding board to comment on the pacing and plot holes in a storyline), and to do the grunt work of coming up with NPC stats.

Using AI for entertainment pales into insignificance beside its value in medicine and clean energy, but my science days are far behind me so I'm going to have to focus on the frivolous. For an idea of how LLMs can help with writing research, take this example of a conversation with Claude when I was working on a Faerie Queene scenario for Patreon:

Dave: "Can you give me any examples from late medieval or Tudor literature of a character posing a conundrum that tests the wisdom and moral insight of the listeners? Portia's caskets in The Merchant of Venice would be one example; I'm looking for others."

Claude: "There are several fascinating examples from this period. Let me share a few notable ones. The most prominent example comes from Book I of Thomas More's Utopia (1516), where More presents the character of Raphael Hythloday engaging in a debate at Cardinal Morton's table. Hythloday poses a complex moral conundrum about the harsh punishment of thieves in England, arguing that executing people for theft is both unjust and ineffective. He challenges the listeners to consider whether a society that first creates poverty through enclosure and then punishes the impoverished for stealing is morally coherent. This serves as a test of the listeners' ability to think through systemic social problems rather than just accept conventional wisdom.

"Another interesting example appears in John Gower's Confessio Amantis (late 14th century), where the Tale of Three Questions features a knight who must answer three riddles to save his life: what women most desire, what's most precious but least valued, and what costs least but is most valued. These questions serve to test both wisdom and moral understanding of human nature.

"In the morality play Mankind (c. 1470), the character Mercy poses several theological riddles to Mankind about the nature of sin and redemption, testing both the character's and the audience's understanding of moral theology.

"An unusual example comes from John Skelton's Bowge of Courte (c. 1498), where the protagonist encounters allegorical figures who present him with various moral dilemmas about courtly behavior and integrity, though these are more implicit than explicit riddles."

Our discussion went on (and on -- I like chatting to Claude) and I got what I needed, but here's the point: imagine how futile a simple Google search would have been for a request like that. Still, I see I've been giving all the spotlight to Claude, so here are ChatGPT's thoughts on writing gamebooks:

ChatGPT: "Since a 1000-section gamebook requires long-term consistency, custom formatting, and mechanical coherence, GPT-4-turbo or Claude 3 would be your strongest options. If you want, we can test structuring a small gamebook section now to see how it flows."

Dave: "OK, let's try it. Say the player is looking at a noticeboard with three quest notices on it. We'll use keywords, and each notice has two keywords associated with it. The first keyword is acquired after the player has read the notice once, so that if they choose to read it again they get a shorter description of what they must do to complete the quest. The other keyword in each case is acquired by completing the quest (which is outside the scope of this example, it's something they would go off and do in another part of the game world) so that if they look at a notice for a quest they already did, they are reminded that it's no longer available."

ChatGPT: "Got it! Here's how we can structure this:

  1. The player sees a noticeboard with three quest notices.
  2. Each notice has two keywords:
    o One acquired when they read the notice (so the second reading gives a shorter summary).
    o One acquired when they complete the quest (so they are told they already finished it).
  3. The player chooses which notice to read.
  4. The system checks their keywords to determine the output.

I'll draft a structured gamebook section below."

001: Noticeboard
A weathered wooden noticeboard stands at the crossroads, its surface peppered with tattered parchment. Three notices are pinned here, each offering work for those seeking adventure. You can read any of them:

  [1] The Merchant’s Plea: go to 002
  [2] The Missing Hunter: go to 003
  [3] The Bandit Threat: go to 004

Choose which notice to read. If you have already read it, you will see only a summary. If you have completed the quest, you will be informed.

002: The Merchant’s Plea
If you have the keyword READ_MERCHANT, go to 005.
If you have the keyword DONE_MERCHANT, go to 006.

A wealthy merchant named Vargos seeks aid. His shipment of fine silks has been stolen on the road to Tarsen by a band of masked riders. He offers a generous reward to anyone who can recover the stolen goods.

Write down the keyword READ_MERCHANT.
Return to 001.

003: The Missing Hunter
If you have the keyword READ_HUNTER, go to 007.
If you have the keyword DONE_HUNTER, go to 008.

A notice written in hurried script pleads for help. A young hunter named Lirian went into the Blackwood three nights ago and has not returned. His father fears the worst and offers a reward for news of his fate.

Write down the keyword READ_HUNTER.
Return to 001.

004: The Bandit Threat
If you have the keyword READ_BANDIT, go to 009.
If you have the keyword DONE_BANDIT, go to 010.

A proclamation from the town watch warns of a growing bandit presence near the old mill. Travelers have gone missing, and merchants report stolen goods. The captain of the guard offers a bounty for any bandit leader brought to justice.

Write down the keyword READ_BANDIT.
Return to 001.

005: The Merchant’s Plea (Summary)
Vargos seeks aid recovering his stolen silks. A reward is offered.
Return to 001.

006: The Merchant’s Plea (Completed)
You have already helped Vargos. There is nothing more to do here.
Return to 001.

007: The Missing Hunter (Summary)
A hunter is missing in the Blackwood. His father seeks news.
Return to 001.

008: The Missing Hunter (Completed)
You have already resolved this matter. There is nothing more to do here.
Return to 001.

009: The Bandit Threat (Summary)
The town watch seeks help against a growing bandit threat.
Return to 001.

010: The Bandit Threat (Completed)
The bandit threat has been dealt with. The notice is now irrelevant.
Return to 001.
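For anyone implementing this digitally, the keyword logic in ChatGPT's draft boils down to a small state check. Here's a minimal sketch of my own (the function and data names are mine, not from the draft), with one tweak worth noting: the DONE check should come before the READ check, because a player who has completed a quest holds both keywords, and testing READ first would route them to the summary rather than the "completed" message.

```python
# Each quest maps to three texts: the full notice, the short summary
# shown on a repeat reading, and the message shown once it's completed.
QUESTS = {
    "MERCHANT": (
        "A wealthy merchant named Vargos seeks aid recovering stolen silks...",
        "Vargos seeks aid recovering his stolen silks. A reward is offered.",
        "You have already helped Vargos. There is nothing more to do here.",
    ),
    "HUNTER": (
        "A notice pleads for help finding the young hunter Lirian...",
        "A hunter is missing in the Blackwood. His father seeks news.",
        "You have already resolved this matter. There is nothing more to do here.",
    ),
    "BANDIT": (
        "The town watch warns of a growing bandit presence near the old mill...",
        "The town watch seeks help against a growing bandit threat.",
        "The bandit threat has been dealt with. The notice is now irrelevant.",
    ),
}

def read_notice(quest, keywords):
    """Return the right text for a notice, recording the READ keyword."""
    full, summary, done = QUESTS[quest]
    if f"DONE_{quest}" in keywords:   # completed: check this FIRST
        return done
    if f"READ_{quest}" in keywords:   # seen before: short version
        return summary
    keywords.add(f"READ_{quest}")     # first reading: full text
    return full

keywords = set()
print(read_notice("MERCHANT", keywords))  # full notice
print(read_notice("MERCHANT", keywords))  # summary
keywords.add("DONE_MERCHANT")
print(read_notice("MERCHANT", keywords))  # completed message
```

In print, of course, the keyword set is the reader's pencil-and-paper record, and the if-checks become the "If you have the keyword..." instructions in sections 002-004.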

I find that pretty impressive. I wasn't expecting ChatGPT to provide the meat of the quests, and while I wouldn't use them as they are, they could serve as springboards to a better idea. I said to ChatGPT, only half in jest: "I'll soon be obsolete."

"You're far from obsolete," said ChatGPT, eager as always to be supportive. "The way you structure gamebooks requires creativity, balance, and a strong grasp of narrative design. LLMs are just tools to assist in the process. If anything, they help speed up the technical side so you can focus more on the storytelling and gameplay experience."

Which is pretty much what I've been telling everyone all along.