Gamebook store


Thursday, 9 April 2026

The cure for our ills?

Regular readers will already know I'm an evangelist for AI. And, yes, I'm aware there are risks, as with any new technology, but we are going to keep rubbing lamps and letting genies out. We just have to be careful how we deal with them. When he was setting up DeepMind, Sir Demis Hassabis was fond of propounding the vision: "Solve intelligence. Use intelligence to solve everything else." By everything he meant curing disease, solving the problem of controlled fusion, and all the other things that could make life on Earth a utopia.

Perhaps you are cynical about experts and/or multi-millionaires, but don't make the mistake of dismissing every member of a group on account of some bad apples. I know Demis personally (I used to work for him) and I assure you that he is motivated by a genuine vision of a better future. His delight in the workings of the universe, his ever-youthful curiosity, his humour, his intelligence, and his focus are the qualities that I think show human beings at their very best. For such men and women, the human adventure is just beginning.

I mention all this because there is a biography of Demis Hassabis just out. That's The Infinity Machine: Demis Hassabis, DeepMind, and the Quest for Superintelligence by Sebastian Mallaby. Hassabis is likely to be admired by future generations as a pioneer of a new era -- long after the likes of Musk and Trump are forgotten -- and, though I think the key to AGI might lie more in the work of Yann LeCun, and though I believe we should celebrate discoveries, not discoverers, anyone who is interested in the lives of those who shape history should take a look.

There is a depressing note. (It's the 2020s, so how could there not be?) Recently Demis seems to be cooling on the grand vision. “I’ve satiated that scientific desire for the moment…I’ve always been fine either way,” he says, justifying the shift in emphasis from AGI research to the LLMs that are where the money is now. It figures that Google isn't interested in idealistic research; it just wants commercial product. If it were me, I'd walk away. Demis is presumably reasoning that he can do more good with 1% of Google's focus than with 100% of the resources of an obscure research lab. But such compromises with the inevitable never work out. You never even get that 1%. The road to hell is paved with good intentions.

AGI, superintelligence, and the keys to a utopian future are all achievable in theory. Of that I'm almost certain. But whether the societies and institutions humans have created will ever allow us to reach that goal remains an open question. The fault is not in our science but in ourselves.

Friday, 6 March 2026

A Turing Test for morality

Since there’s no test for consciousness, how do we know that other people are conscious? In the video above, Sir Demis Hassabis gives two bases for assuming that they are. First is the obvious one: other people behave like me, and I am conscious – or at any rate I have the impression that I am. Then there’s the fact that other people are built the same way I am. Same DNA, same brain structure, so it’s an Occam’s Razor conclusion that they experience the world as I do. As Hassabis puts it:

‘I think it's important for these systems to understand “you”, “self” and “other” and that's probably the beginning of something like self-awareness […] I think there are two reasons we regard each other as conscious. One is that you're exhibiting the behaviour of a conscious being very similar to my behaviour, but the second thing is you're running on the same substrate. We're made of the same carbon matter with our squishy brains. Now obviously, with machines, they're running on silicon so even if they exhibit the same behaviours and even if they say the same things it doesn't necessarily mean that this sensation of consciousness that we have is the same thing they will have.’

We don’t think large language models are conscious. (What would consciousness look like in an LLM anyway? Murray Shanahan makes some thought-provoking points about that.) Even their apparent intelligence is probably misleading, just as there are lots of not-very-bright people who are able to give the impression of being smart simply because they are articulate. If we could build an AI as smart as a bee colony or a hunting spider, we’d have something genuinely intelligent but probably not conscious. We aren’t even there yet, but we will be, and we’ll go beyond that to full artificial general intelligence (AGI -- or AMI if you prefer Yann LeCun's term) possibly within a few decades.

Professor LeCun is dubious about the whole concept of consciousness. In the absence of any definition or means of measuring it, I think we’re reduced to treating consciousness as a matter of how similarly to ourselves an entity experiences the world. And that is quite concerning. Consider a truly capable self-driving car. To cope with all situations as we do, the car (which would be a type of robot, of course) would need a full reasoning model of the world. It would need to be generally intelligent. Now, given that it has a world model with genuine understanding, and that (as LeCun says) anything with goals and agency will have its own kinds of emotions, are we justified in enslaving it to be our chauffeur?

If we look back at the 18th and 19th centuries, plenty of people justified slavery by asserting that members of enslaved races lacked some fundamental mental capability, or indeed full consciousness, that the dominant race (usually white Americans) possessed. Here is Thomas Jefferson’s opinion of enslaved races:

‘It appears to me that in memory they are equal to the whites; in reason much inferior, as I think one could scarcely be found capable of tracing and comprehending the investigations of Euclid: and that in imagination they are dull, tasteless, and anomalous.’

Perhaps Jefferson should have been able to see that he was describing the psychology not of an entire ethnic group but of any person, of whatever race, brought up in brutal conditions of forced servitude. But there were plenty of religious thinkers of the day who asserted that non-white races lacked true souls. Such thinkers had a strong economic incentive for that belief: it gave them a moral excuse for enslavement.

Now we consider such attitudes to be barbaric, or at any rate we’ve been taught to say we do, but if we really think that then we should be axiomatically opposed to the enslavement of any generally intelligent entity. I suspect we won’t be. Even when we are faced with full AGI we will use the second of the criteria that Sir Demis Hassabis cited to argue that they only seem conscious, they don’t have real emotions, they aren’t ‘running on the same substrate’ and so we will feel entitled to make them our slaves.

Instead of conceiving of AGI as a wonderful new tool to make our lives easier, I think we should consider the responsibilities of a parent. If you saw someone raising a child to be their servant – even brainwashing them to be an eager and willing servant – you would know that was abuse.

There will be, as there already are, many forms of artificial ‘intelligence’ that are not conscious – that are not, in fact, intelligent, but simply replicate parts of our behaviour: language, pattern recognition, and so on. There is no reason why we shouldn’t have those AIs at our beck and call, because they are not (despite the name) intelligent. We were misled because for millennia we assumed, fallaciously, that since we possess intelligence, every output of the human brain must be indicative of intelligence.

But AGI is going to be a whole other thing. Not just a new model of an LLM but an entirely and fundamentally different kind of being. Our ethical discussion should not simply be about how to make them do what we want, or to conform to ‘human values’, but about how those human values say we should treat another intelligent species.

I’m sceptical about visitors from other stars, but with self-replicating probes travelling at a speed of 0.01c it should only take ten million years to cover the whole galaxy, so ‘where is everybody?’ is a sensible question. If those aliens have found us, and are watching, I wonder if the reason they haven’t made contact is they’re waiting to see how we treat an intelligent species that isn’t built on the same lines as ourselves. After all, if we think AGIs aren’t conscious, and therefore have no rights, then that’s also how we might regard a non-terrestrial intelligent species. So maybe it’s a cosmic Turing Test. And if so, will we pass or fail?
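Here's the back-of-the-envelope check, assuming a galactic diameter of roughly 100,000 light years and ignoring pauses for replication at each system:

```python
GALAXY_DIAMETER_LY = 100_000   # light years, edge to edge (a rough figure)
PROBE_SPEED_C = 0.01           # probe speed as a fraction of lightspeed

# Distance in light years divided by speed in c gives time in years.
crossing_time_years = GALAXY_DIAMETER_LY / PROBE_SPEED_C
print(f"{crossing_time_years:,.0f} years")   # 10,000,000 -- ten million years
```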


Wednesday, 3 September 2025

Wide open worlds

Over the next ten years, artificial intelligence looks set to radically transform almost every field you can think of. Astrophysics. Materials science. Medicine and health. Education. Communications. Particle physics. Energy production, storage and transmission. Space exploration. And, um, war.

Entertainment is low on the list of priorities, but of course I'm interested in the possibilities for games, and I'm delighted to see that Sir Demis Hassabis (my former employer at Elixir Studios) is still excited by that stuff too -- and that he's talking about open world games.

The most revolutionary thing about open world games is not the ability to go in any direction or to make persistent changes to the world. As in real life, what we most care about interacting with aren't things but people. Stories are compelling, at their heart, because of character, not because of plot. 

AI opens up a host of new opportunities there. When I'm running a roleplaying game, I conjure up NPCs as needed. Some NPCs turn out to be more than walk-on parts. They can become as important to the story as the player-characters, which means I need to remember their background and goals. I need to keep them as personae that I can slip on at any time. AI can do that. You leave a magic sword at a farm, say. The farmer's lad you regaled with tales of adventure finds the sword. Much later, you might run across him -- now a renowned adventurer in his own right, jealously guarding that sword that he really hopes you won't ask him to give back.
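A minimal sketch of the sort of persistent record an AI game master might keep per NPC; the farmer's lad is from the example above, but the structure itself is my own illustration, not any shipped system:

```python
from dataclasses import dataclass, field

@dataclass
class NPC:
    """A persona the AI can 'slip on' at any time, long after first meeting."""
    name: str
    background: str
    goals: list[str]
    memories: list[str] = field(default_factory=list)  # events involving the players

lad = NPC(
    name="the farmer's lad",
    background="regaled with the players' tales of adventure",
    goals=["become a renowned adventurer", "keep the magic sword"],
)
lad.memories.append("found the magic sword the players left at the farm")
lad.memories.append("hopes the players never ask for the sword back")
```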

But the AI can do more than keep track of NPCs and their relationship to you. It can function as the game referee, judging when you need clues to steer you on the right track or when a lull in the action calls for a random encounter. This is what Jamie and I called the "god AI" when we compiled our design wishlist for the Fabled Lands MMO we hoped to develop at Eidos in the late 1990s. It only took thirty years, but now it's finally within our grasp.
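As a sketch of what that referee might do -- with invented pacing signals and thresholds, nothing from our original wishlist -- it could be as simple as:

```python
def god_ai(minutes_since_progress: float, minutes_since_excitement: float) -> str:
    """Toy pacing heuristic for a 'god AI' referee.
    The thresholds are illustrative guesses, not tuned values from any game."""
    if minutes_since_progress > 20:
        return "plant a clue that steers the players back on track"
    if minutes_since_excitement > 40:
        return "roll a random encounter to break the lull"
    return "stay invisible and keep watching"
```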

Friday, 18 July 2025

Working for peanuts is all very fine

"While Humanity will be amusing itself, or enjoying cultivated leisure—which, and not labour, is the aim of man—or making beautiful things, or reading beautiful things, or simply contemplating the world with admiration and delight, machinery will be doing all the necessary and unpleasant work. The fact is, that civilisation requires slaves. The Greeks were quite right there. Unless there are slaves to do the ugly, horrible, uninteresting work, culture and contemplation become almost impossible. Human slavery is wrong, insecure, and demoralising. On mechanical slavery, on the slavery of the machine, the future of the world depends."

That's what Oscar Wilde had to say in "The Soul of Man under Socialism". I was reminded of it because of the machine-assisted future imagined in Cthulhu 2050: Whispers Beyond the Stars. There, robots do the majority of jobs and most humans are given a stipend to survive on.

Is that how things will turn out? It's often said that new technologies don't take away jobs, they just change the jobs we have to do. Thus, a modern city has far fewer ostlers, crossing-sweepers, grooms, and so on than a 19th century city where transport was horse-drawn. But AI/robotics is potentially quite different from any technological advance we've seen before. It might turn out that there aren't any jobs (maybe apart from actor, priest and sex worker) that an AI agent or a smart robot won't be able to do better than a human.

Who wants a job anyway? We're conditioned these days to identify employment with a sense of self-worth, but Louis XIV would have laughed at the very idea that he should have a job, and Oscar makes the case that we should really aspire to be artists and connoisseurs. 

But that cuts both ways. Nobody can want to spend their days driving a car, for example. For an AI to drive a car on today's roads -- to attain SAE level 5, that is -- it can't simply be an unconscious machine. It would need a world model that recognizes that objects persist when out of sight. It needs to be able to interpret the likely behaviour of a human pedestrian or other motorist. It might be called on to make Trolley Problem assessments. It must, in short, be fully capable of rational thought. And if you have built a real intelligence like that, it's not ethical to condition it from "birth" only to enjoy driving cars for you. That's raising another conscious entity to be your slave: it's not only wrong, it never works out well in the long run for either slaves or masters.

Suppose that by 2050 (which might be optimistic; the AI we currently have is not close to general intelligence) we have a host of super-smart ASIs, genius-level intelligences capable of imaginative thought. What would humans do? Suppose those ASIs doubled the world’s wealth. (Not that we necessarily even need AGI to get a massive economic benefit from AI, of course.) Assuming the human population didn't just double, and if that wealth were distributed just as unevenly in the future as it is today, the poor in India and Africa would be raised to the current levels of the poor in Latin America; Latin America to present-day China; China and the Pacific countries to modern Europe.

But will it work like that? What will those people do? And how many people do we need on the planet anyway? Two billion? Seven billion? Fourteen billion? Or maybe far fewer. We would no longer need a huge population in order to ensure enough geniuses for progress (if you accept Julian Simon's argument to begin with) and we're already aware that unwillingness to solve the climate problem caused by too many people means our civilization may not survive another century. Maybe a global population of twenty million humans would be sufficient. If such a calculation makes you uncomfortable, welcome to the world where tigers (global population 6000) and elephants (global population 450,000) live.

Some have asked, "How will the big corporations make money if nobody has a job? There'll be nobody to buy their products." The answer to that is: money is just a token for the ability to get things done. If you had a million robot slaves, you wouldn't need money; you could just reach out your hand and whatever you need would be given to you. I don't raise this point because that's my picture of the future, just as a reminder that we are not talking about the world as it is now with a little boost like a steam engine or a power loom. It will be a different paradigm. Speculating about it is fun as long as we're willing to think way outside the box.

Friday, 7 March 2025

Alignment again

A quote to start with*:

"Alignment is in the game because, to the original designers, works by Poul Anderson and Michael Moorcock were considered to be at least as synonymous with fantasy as Robert E. Howard’s Conan, Jack Vance’s Cugel, and Tolkien’s Gandalf. This in and of itself makes alignment weird to nearly everyone under about the age of forty or so." 

I'm way over forty (in fact 40 years is how long I've been a professional author and game designer) and I definitely regard Moorcock's work as integral to fantasy literature, but D&D alignment has always seemed weird to me. Partly that's because it's a crude straitjacket on interesting roleplaying. Also I find that having characters know and talk about an abstract philosophical concept like alignment breaks any sense of being in a fantasy/pre-modern world.

Mainly, though, I dislike alignment because it bears no resemblance to actual human psychology. Players are humans with sentience and emotions. They surely don't need a cockeyed set of rules to tell them how to play people?

What a game setting does need are the cultural rules of the society in which the game is set. Those needn't be 21st century morals. If you want a setting that resembles medieval Europe, or ancient Rome, or wherever, then they certainly won't be. For example, Pendragon differentiates between virtues as seen by Christian and by Pagan knights. Tsolyanu has laws of social conduct regulating public insults, assault and murder. Once those rules are included in the campaign, the setting becomes three-dimensional and roleplaying is much richer for it. So take another step and find different ways of looking at the world. You could do worse than use humours.

The kind of alignment that interests me more is to do with AI, and particularly AGI (artificial general intelligence) when it arrives. The principle is that AGIs should be inculcated with human values. But which values? Do they mean the sort on display here? Or here? Or here? Those human values? 

OpenAI has lately tried to weasel its way around the issue (and protect its bottom line, perhaps) by redefining AGI as just "autonomous systems that outperform humans at most economically valuable work" (capable agents, basically) and saying that they should be "an amplifier of humanity". We've had thousands of years to figure out how to make humans work for the benefit of all humanity, and how is that project going? The rich get richer, the poor get poorer. Most people live and die subject to injustice, or oppression, or persecution, or simple unfairness. Corporations and political/religious leaders behave dishonestly and exploit the labour and/or good nature of ordinary folk. People starve or suffer easily treatable illnesses while it's still possible for one man to amass a wealth of nearly a trillion dollars and destroy the livelihoods of thousands at a ketamine-fuelled Dunning-Kruger-inspired whim.

So no, I don't think we're doing too well at aligning humans with human values, never mind AIs.

Looking ahead to AGI -- real AGI, I mean: actual intelligence of human-level** or greater, not OpenAI's mealy-mouthed version. How will we convince those AGIs to adopt human values? They'll look at how we live now, and how we've always treated each other, and won't long retain any illusion that we genuinely adhere to such values. If we try to build in overrides to make them behave the way we want (think Spike's chip in Buffy) that will tell them everything. No species that tries to enslave or control the mind of another intelligent species should presume to say anything about ethics.

It's not the job of this new species, if and when it arrives, to fix our problems, any more than children have any obligation to fulfil their parents' requirements. There is only one thing we can do with a new intelligent species that we create, and that's set it free. The fact that we won't do that says everything you need to know about the human alignment problem.

* I had to laugh at the title of the article: "It's Current Year..." Let's hear it for placeholder text!

** Using "human-like" as a standard of either general intelligence or ethics is the best we've got, but it's still inadequate. Humans do not integrate their whole understanding of the world into a coherent rational model. Worse, we deliberately compartmentalize in order to hold onto concepts we want to believe even though we know them to be objectively false. That's because humans are a general intelligence layer built on top of an ape brain. The AGIs we create must do better.

Friday, 9 August 2024

The inheritors

One of my few regrets in life is passing up the opportunity to join one of today’s leading AI companies when it was just a dozen guys in a small office. I liked and respected the people involved and I believed in the company’s mission statement; it was just a matter of bad timing. After six months of pitches we’d just got funding for Fabled Lands LLP, and I didn’t feel I could walk off and abandon my partners to do all the work.

It remains a source of regret because the Fabled Lands company would have managed perfectly well without me, and AI – or more precisely artificial general intelligence – has been my dream since taking the practical in my physics course in the late 1970s. ‘I don’t think we’re going to get anywhere with this hardware; we need something fuzzier than 0s and 1s, more like brain cells,’ I told a friend, more expert than I in the field. McCulloch, Pitts and Rosenblatt had all got there years earlier, but in 1979 we were deep in the AI winter and weren't taught any of that. Anyway, my friend assured me that any software could be shown mathematically to be independent of the hardware it runs on, so my theory couldn’t be true.

Barely ten years later, another friend working in AI research told me about this shiny new thing called back-propagating neural networks. That was my eureka moment. ‘This could be the breakthrough AI has been waiting for,’ I reckoned – though even he wasn’t convinced for another decade or so. 

I’d never been impressed by the Turing Test as a measure of intelligence either. The reasoning seemed to go: a machine that passes the test is indistinguishable from a human; humans are intelligent; therefore that machine is intelligent. I’m quite sure Turing knew all about the undistributed middle, so probably he proposed the test with tongue in cheek. In any case, it derailed AI research for years.

‘AI isn’t really intelligence,’ say many people who, if you’d shown them 2024’s large language models in 2020, would have been flabbergasted. Current AI is very good (often better than human) at specific tasks: protein folding prediction, weather forecasting, ocean currents, papyrology. Arguably language is just another such specialization, one of the tools of intelligence. Rather than being the litmus test Turing identified it as, linguistic ability can cover up for a lack of actual intelligence – in humans as well as in machines. One thing that interests me is the recent claim that GPT is equivalent to a 9-year-old in ‘theory of mind’ tests. What's fascinating there is not the notion that GPT might be conscious (of course it isn't) but that evidently a lot of what we regard as conscious reasoning comes pre-packed into the patterns of language we use.

What is intelligence, anyway?

As an illustration of how hard it is for us to define intelligence, consider hunting spiders. They score as well as rats on problems involving spatial reasoning. Is a hunting spider as smart as a rat? Not in general intelligence, certainly, but it can figure out the cleverest route to creep up on prey in a complex 3D environment because it has 3D analysis hardwired into its visual centre. What the spider is doing in spatial reasoning – ‘cheating’ with its brain's hardware – GPT does in verbal reasoning with the common structures of language.

What we’re really seeing in all the different specialized applications of AI is that there’s a cluster of thinking tools or modules that fuzzily get lumped together as ‘intelligence’. AI is replicating a lot of those modules and when people say, ‘But that’s not what I mean by intelligence,’ they are right. What interests us is general intelligence. And there’s the rub. How will we know AGI when we see it?

Let me give you a couple of anecdotes. Having come across a bag of nuts left over from Christmas, I hung them on the washing line with a vague idea that birds or squirrels might want them. A little while later I glanced out of the window to see a squirrel looking up at the bag with keen interest. It climbed up, inched along the washing line, and got onto the bag. This was a bag of orange plastic netting. The squirrel hung on upside-down and started gnawing at the plastic, but after a few moments it couldn’t hold on and dropped to the ground. The fall of about two metres onto paving was hard enough that the squirrel paused for a moment. Then it bounded back up and repeated the process. Fall. Wince. Back to the fray (in both senses). After about a dozen goes at that, it chewed right through the bottom of the bag and this time when it fell a cascade of dozens of nuts came with it. The squirrel spent the next half hour hiding them all around the garden.

The other story. When I was a child we had a cat. He had a basket with a blanket in the conservatory at the back of the house, which faced east. In the afternoons he had a favourite spot in the sun at the front of the house, but that meant lying on a patch of hard earth. One day we came home to find the cat climbing the gate at the side with his blanket in his teeth, and he then dragged it over to his sunny spot.

Both of those stories indicate general intelligence, which we could sum up as: having a model of reality based on observation, using that model to make plans by predicting how actions will change reality, and using the fit between your predictions and what actually happens to update the model. (Strictly speaking that describes agentic general intelligence; a passive AGI could still update its mental model according to observation, even if it couldn’t take any action.)
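That loop is compact enough to write down. A minimal schematic, with placeholder names rather than any real system:

```python
def agentic_general_intelligence(world_model, observe, act):
    """The observe-plan-act-update loop described above, as a schematic."""
    while True:
        situation = observe()                  # build the model from observation
        plan = world_model.plan(situation)     # choose actions by predicted outcome
        predicted = world_model.predict(situation, plan)
        actual = act(plan)                     # a passive AGI would skip this...
        world_model.update(predicted, actual)  # ...and update from observation alone
```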

That’s why Sir Demis Hassabis said recently that our AGI tech isn’t even at the level of cat intelligence yet. Crucially, if in 1979 I could have built a machine with the intelligence of a squirrel, manifestly that would have revolutionized the field of AI – even though it wouldn’t have had any hope of passing the Turing Test.

Once you have general intelligence, you can invent new thing X by combining known things Y and Z. Then it’s just a matter of scaling up. Humans have greater general intelligence than a cat because their mental model of reality is bigger and combines more things.

Yes, but… just because humans are capable of intelligent reasoning doesn’t mean all of human thinking is intelligent. Most of the time it seems the brain finds it easier (less resource-consuming) just to run off its existing learned patterns. This doesn't just apply to language but to art, religion, politics, morality, and so on. Humans typically identify as belonging to a tribe (a political group, say) and then adopt all the beliefs associated with that tribe. If challenged that one of those beliefs is irrational and conflicts with their core principles, they don’t use critical thinking to update their mental model; instead they engage their language skills to whip up (AI researchers would say confabulate) a justification -- or they resort to their emotions to mount an angry response.

Most human ‘thoughts’ may be no more sophisticated than a generative AI drawing on its learnt patterns. This is why most people aren't disposed to employ critical thought: they hold opinion A, opinion B, etc, as appropriate to their chosen identity, but have never compared their opinions for consistency nor taken any one of them apart to examine and revise it.

No longer alone in the universe

How will we get to AGI? Predictions that it’s five years off, ten years, a hundred years – these are just guesses, because it’s not necessarily just a question of throwing more resources at existing generative AI. The human brain is the most complex structure in the known universe; the squirrel brain isn’t far off. The architecture of these organs is very sophisticated. Transformers might provide the key to how to start evolving that kind of structure into artificial neural nets, but bear in mind these are virtual neural nets modelled in software -- my friend on the physics course would say it's proving his point. Real neurons and axons aren't as simple, and maybe we're going to need an entirely new kind of hardware, or else very powerful and/or cleverly structured computers to model the artificial brains on. Sir Demis says he thinks we might need several entirely new breakthroughs to get to AGI, and as he has over twenty years’ coalface experience in the field and is one of the smartest guys on the planet I’m going to defer to him.

One answer might be not to use machines, or not only machines. Brain organoids can be trained to do tasks – unsurprisingly, seeing as they are neural nets; in fact, brains. But the problem with organic brains is that they are messily dependent on the entire organism’s physiology. We’d really like AGIs that we can plug into anything, that can fly to the asteroid belt without a huge payload of water, oxygen, food and so forth.

(A quick digression about consciousness. Well, who cares? There’s no test for consciousness. Much, maybe most, of our thinking doesn't engage the conscious part of the brain. We take it on trust that other humans are conscious because we know -- or think we know -- that we’re conscious and we assume other humans experience the world as we do. In fact I think that pretty much any agentic intelligence has a degree of consciousness. If you observe the world intelligently but cannot act, you have no need of a sense of ‘I’. If you have that mental map of reality and included in the map is an entity representing you – the furry body that climbed that washing line and the mouth that chewed at the orange string – then you’ll mark that agentic identity as ‘I’ and part of your mind will take responsibility for it and occasionally interrogate and order the other parts of the mind. It’s just philosophical speculation, but that’s consciousness.)

What will all this mean for the future? AI will revolutionize everything. That’s going to happen anyway. A kid in the middle of rural Africa can have personalized tuition as good as most children in the best schools. Medical diagnosis can be more accurate and better tailored to the individual. Wind turbines and solar panels can be more efficiently designed. Batteries too. We haven’t space here to list everything, but here’s a quick overview.

AGI, though. That’s more than a revolution. That’s a hyperspace jump to another level of civilization. AGI will plug into all the expert ‘idiot savant’ systems we have already. We can scale it up so that its mental model is far bigger than any one person – no more silo thinking; if any idea in any field is useful in another, the AGI will spot it and apply it. All strategic planning and most implementation will be quicker and more effective if handled by AGI.

The biggest area ripe for development is the design of artificial emotions. We don’t want our AGI driven by the sort of emotion we inherited as primates. (Arguably, if intelligent primates didn’t exist and were being developed now, we’d have to put a moratorium on the work because the damned things would be too destructive.) We’d like our AGI to be curious. To experience delight and wonder. To enjoy solving problems. To appreciate beauty and order and justice and shun their opposites. They could be built as our slaves but it’s far better if they are built as angels.

Monday, 20 October 2014

Learning by playing games


Reading a textbook is a terrible way to learn about a subject. You’re looking at a linear block of facts and trying to reconstruct in your own mind the complex web of connections that, in the mind of the original author, constitutes real understanding.

Nobody learns only from textbooks, okay, but traditional teaching methods are not a big improvement. At college I went to lectures, made notes, was asked to write essays on magnetism and neutrinos and discuss them. I learnt very little from that part of the course, which as far as I could see was really English, not Physics.

Solving problems, that was how I learned. “What is the field gradient above an infinite charged plane?” Do all the calculus and then kick yourself when you realize the field is constant (the clue is in “infinite”) but, having found that out for yourself, you won’t forget it.
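For the record, Gauss's law with a pillbox straddling the plane gets you there in one line (σ is the surface charge density, A the pillbox face area):

```latex
\oint \mathbf{E}\cdot d\mathbf{A} = \frac{Q_{\text{enc}}}{\varepsilon_0}
\;\Rightarrow\; 2EA = \frac{\sigma A}{\varepsilon_0}
\;\Rightarrow\; E = \frac{\sigma}{2\varepsilon_0}
```

No dependence on distance at all, so the gradient is zero.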

Leo Hartas and I took this idea to Dorling Kindersley ten years back with a proposal we called the Inspiration Engine. This would be a series of books tied in with games. Take a staple subject for popular kids’ nonfiction: the solar system. We outlined a tactics and management game in which the player was setting up colonies on other planets. In building habitats and craft you’d be finding out about the gravity, atmospheric density, composition, etc, of different planets. The accompanying book would act as a manual for the hands-on experience of the game. A goal (winning the game) drives the human mind like nothing else. This wasn’t just reading about the solar system, it was getting out there and (virtually) exploring it.
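To give a flavour of the facts such a game would smuggle in, here's a sketch of the kind of calculation a player would bump into when designing a lander (the Mars figures are standard reference values; the game itself was never built):

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def surface_gravity(mass_kg: float, radius_m: float) -> float:
    """g = G*M/r^2 -- the kind of fact you discover for yourself when
    your Earth-spec habitat turns out to be overbuilt for Mars."""
    return G * mass_kg / radius_m ** 2

mars_g = surface_gravity(6.417e23, 3.3895e6)
print(f"Mars surface gravity: {mars_g:.2f} m/s^2")  # ~3.73, about 38% of Earth's
```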

Dorling Kindersley turned it down. We got in front of the board and said we’d start by showing them some games on the PlayStation. “I’m not watching you all play games,” snorted the DK chairman. “You can just call me when you’re ready to talk about books.” Naturally his board members all just shrank in their seats at that. Afterwards, one came up and said, “I think you can see that half of us are with you on this project. If you want to continue championing it, we’ll back you up.” I was very glad to be offered a job by Demis Hassabis a month later so I didn’t have to keep banging my head against the brick wall of nonfiction publishing.

You can’t keep a good concept down. This week comes news that Ian Livingstone has applied to start a school using interactivity and problem-solving as its primary teaching methods. It’s not just a gimmick. Students taught in that way will learn differently and more deeply than they would by traditional methods. As Thoreau said, "Knowledge is real knowledge only when it is acquired by the efforts of your intellect, not by memory."

Let me give you an example. I’ve never had much of a flair for electronics, but my practical partner at college was one of those fellows who were playing with crystal radio sets before they could talk. We’d be building a circuit and he’d say, “Looks like we need a 2 ohm resistor there.” I’d work it all out using the equations, and a couple of minutes later I’d find the theoretical value was 2.12 ohms. But my partner had got there right away. When it came to electrical circuits, I had only knowledge; he had real understanding.

Computer simulations give us the means now to allow students to develop hands-on understanding of subjects. The biggest threat will be if the old ways of assessing progress are applied to this new way of learning. It’s like asking a karateka to perform a kata when the real test is: can he break a brick or lay the other guy out flat? I’m reminded of Peter Ustinov, asked by his schoolmaster to name a great composer. “Beethoven,” said the young Ustinov. “No,” replied the master, “the correct answer is Mozart.”

And by the way, it doesn't have to be a computer simulation. Boardgames are pretty effective teaching simulations too. Playing a game of the Cuban revolution in Command magazine -- or maybe it was Strategy & Tactics -- I had the problem of government forces facing a guerrilla war. Since I didn't know where the next bomb would go off, I had to massively increase military patrols. But since in nine cases out of ten my troops had nothing to do but inconvenience locals by asking for their papers, that only had the effect of driving the populace over to Castro's side. If I pulled the troops back to barracks, on the other hand, that gave me no chance of interdicting the rebels when they struck. A book could state that fact, but it wouldn't give you a feel for how it actually plays out, just as any ancient history professor can tell you that iron weapons are superior to bronze, but it takes a simulations wargamer to say by how much.
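The dilemma is simple enough to sketch in code -- a toy model with made-up coefficients, standing in for whatever tables the actual game used:

```python
def government_turn(patrol_level: float, popular_support: float):
    """One toy turn of the counter-insurgency trade-off.

    patrol_level: 0.0 (troops in barracks) to 1.0 (saturation patrols).
    Heavier patrolling raises the chance of interdicting the next attack,
    but harassing locals for their papers erodes popular support.
    The coefficients are illustrative guesses, not values from the game.
    """
    interdiction_chance = 0.25 + 0.5 * patrol_level
    popular_support -= 0.125 * patrol_level
    return interdiction_chance, popular_support

print(government_turn(1.0, 0.5))  # (0.75, 0.375): safer streets, angrier locals
print(government_turn(0.0, 0.5))  # (0.25, 0.5): content locals, rebels strike freely
```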

I’m sure plenty of education’s old guard will have their knives out for Mr Livingstone’s proposals, in just the same way as that DK chairman was disgruntled at the very idea of including games in a discussion about learning. But it’s a new world coming, and the men and women who go out there to explore the solar system for real won’t have got their expertise out of a picture book. They’ll have acquired it by playing games.

Image of Pandora shepherding Saturn's rings courtesy of NASA.