Friday, 9 August 2024

The inheritors

One of my few regrets in life is passing up the opportunity to join one of today’s leading AI companies when it was just a dozen guys in a small office. I liked and respected the people involved and I believed in the company’s mission statement; it was just a matter of bad timing. After six months of pitches we’d just got funding for Fabled Lands LLP, and I didn’t feel I could walk off and abandon my partners to do all the work.

It remains a source of regret because the Fabled Lands company would have managed perfectly well without me, and AI – or more precisely artificial general intelligence – has been my dream since taking the practical in my physics course in the late 1970s. ‘I don’t think we’re going to get anywhere with this hardware; we need something fuzzier than 0s and 1s, more like brain cells,’ I told a friend who was more expert in the field than I was. McCulloch, Pitts and Rosenblatt had all got there years earlier, but in 1979 we were deep in the AI winter and weren’t taught any of that. Anyway, my friend assured me that any software could be shown mathematically to be independent of the hardware it runs on, so my theory couldn’t be true.

Barely ten years later, another friend working in AI research told me about this shiny new thing called back-propagating neural networks. That was my eureka moment. ‘This could be the breakthrough AI has been waiting for,’ I reckoned – though even he wasn’t convinced for another decade or so. 

I’d never been impressed by the Turing Test as a measure of intelligence either. The reasoning seemed to go: a machine that passes the test is indistinguishable from a human; humans are intelligent; therefore that machine is intelligent. I’m quite sure Turing knew all about the undistributed middle, so probably he proposed the test with tongue in cheek. In any case, it derailed AI research for years.
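(Schematically, and this is my gloss rather than anything Turing wrote: let $H(x)$ mean ‘x is an intelligent human’, $B(x)$ ‘x is conversationally indistinguishable from a human’ and $M(x)$ ‘x is a machine that passes the test’. The argument then has the shape

$$\forall x\,(H(x)\to B(x)),\qquad \forall x\,(M(x)\to B(x))\ \not\vdash\ \forall x\,(M(x)\to H(x))$$

Neither premise says anything about all the things that are $B$, so the shared middle term is never distributed and the conclusion doesn’t follow.)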

‘AI isn’t really intelligence,’ say many people who, if you’d shown them 2024’s large language models in 2020, would have been flabbergasted. Current AI is very good (often better than human) at specific tasks: protein folding prediction, weather forecasting, ocean currents, papyrology. Arguably language is just another such specialization, one of the tools of intelligence. Rather than being the litmus test Turing identified it as, linguistic ability can cover up for a lack of actual intelligence – in humans as well as in machines. One thing that interests me is the claim we've been seeing recently that GPT is equivalent to a 9-year-old in 'theory of mind' tests. What's fascinating there is not the notion that GPT might be conscious (of course it isn't) but that evidently a lot of what we regard as conscious reasoning comes pre-packed into the patterns of language we use.

What is intelligence, anyway?

As an illustration of how hard it is for us to define intelligence, consider hunting spiders. They score on a par with rats on problems involving spatial reasoning. Is a hunting spider as smart as a rat? Not in general intelligence, certainly, but it can figure out the cleverest route to creep up on prey in a complex 3D environment because it has 3D analysis hardwired into its visual centre. We could say that what the spider does in spatial reasoning by ‘cheating’ with its brain’s hardware, GPT does in verbal reasoning with the common structures of language.

What we’re really seeing in all the different specialized applications of AI is that there’s a cluster of thinking tools or modules that fuzzily get lumped together as ‘intelligence’. AI is replicating a lot of those modules and when people say, ‘But that’s not what I mean by intelligence,’ they are right. What interests us is general intelligence. And there’s the rub. How will we know AGI when we see it?

Let me give you a couple of anecdotes. Having come across a bag of nuts left over from Christmas, I hung them on the washing line with a vague idea that birds or squirrels might want them. A little while later I glanced out of the window to see a squirrel looking up at the bag with keen interest. It climbed up, inched along the washing line, and got onto the bag. This was a bag of orange plastic netting. The squirrel hung on upside-down and started gnawing at the plastic, but after a few moments it couldn’t hold on and dropped to the ground. The fall of about two metres onto paving was hard enough that the squirrel paused for a moment. Then it bounded back up and repeated the process. Fall. Wince. Back to the fray (in both senses). After about a dozen goes at that, it chewed right through the bottom of the bag and this time when it fell a cascade of dozens of nuts came with it. The squirrel spent the next half hour hiding them all around the garden.

The other story. When I was a child we had a cat. He had a basket with a blanket in the conservatory at the back of the house, which faced east. In the afternoons he had a favourite spot in the sun at the front of the house, but that meant lying on a patch of hard earth. One day we came home to find the cat climbing the gate at the side with his blanket in his teeth, and he then dragged it over to his sunny spot.

Both of those stories indicate general intelligence, which we could sum up as having a model of reality built from observation: you use the model to make plans, predicting how your actions will change reality, and you use the fit between prediction and what actually happens to update the model. (Strictly speaking that describes agentic general intelligence, as a passive AGI could still update its mental model according to observation, even if it couldn’t take any action.)
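To make the shape of that loop concrete, here’s a toy sketch in Python of an agent that keeps a crude model of reality, picks whichever action its model predicts will pay off best, acts, and then updates the model from the gap between prediction and outcome. Every detail in it – the actions, the ‘world’, the update rule – is an invented placeholder for illustration, not a claim about how squirrel (or machine) cognition actually works.

    import random

    class WorldModel:
        """The agent's internal model: its current guess at how likely each action is to pay off."""
        def __init__(self, actions):
            self.estimate = {a: 0.5 for a in actions}  # start with no opinion either way

        def predict(self, action):
            return self.estimate[action]

        def update(self, action, outcome, rate=0.2):
            # Nudge the prediction towards what actually happened.
            self.estimate[action] += rate * (outcome - self.estimate[action])

    def world(action):
        """Stand-in for reality: returns 1 if the action succeeded, 0 if it failed."""
        true_odds = {"gnaw_netting": 0.15, "wait_underneath": 0.02}
        return 1 if random.random() < true_odds[action] else 0

    actions = ["gnaw_netting", "wait_underneath"]
    model = WorldModel(actions)

    for attempt in range(50):
        action = max(actions, key=model.predict)   # plan: pick the most promising action
        outcome = world(action)                    # act, and observe what really happened
        model.update(action, outcome)              # learn: reconcile prediction with reality

    print(model.estimate)

A squirrel’s real model is immeasurably richer, of course; the point is only the loop itself: model, predict, act, compare, revise.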

That’s why Sir Demis Hassabis said recently that our AGI tech isn’t even at the level of cat intelligence yet. Crucially, if in 1979 I could have built a machine with the intelligence of a squirrel, that would manifestly have revolutionized the field of AI – even though it wouldn’t have had any hope of passing the Turing Test.

Once you have general intelligence, you can invent new thing X by combining known things Y and Z. Then it’s just a matter of scaling up. Humans have greater general intelligence than a cat because their mental model of reality is bigger and combines more things.

Yes, but… just because humans are capable of intelligent reasoning doesn’t mean all of human thinking is intelligent. Most of the time it seems the brain finds it easier (less resource-consuming) just to run off its existing learned patterns. This doesn't just apply to language but to art, religion, politics, morality, and so on. Humans typically identify as belonging to a tribe (a political group, say) and then adopt all the beliefs associated with that tribe. If it’s pointed out that one of those beliefs is irrational and conflicts with their core principles, they don’t use critical thinking to update their mental model; instead they engage their language skills to whip up (AI researchers would say confabulate) a justification – or they resort to their emotions to mount an angry response.

Most human ‘thoughts’ may be no more sophisticated than a generative AI drawing on its learnt patterns. This is why most people aren't disposed to employ critical thought; they hold opinion A, opinion B, etc., as appropriate to their chosen identity, but have never compared their opinions for consistency nor taken any opinion apart to examine and revise it.

No longer alone in the universe

How will we get to AGI? Predictions that it’s five years off, ten years, a hundred years – these are just guesses, because it’s not necessarily just a question of throwing more resources at existing generative AI. The human brain is the most complex structure in the known universe; the squirrel brain isn’t far off. The architecture of these organs is very sophisticated. Transformers might provide the key to how to start evolving that kind of structure into artificial neural nets, but bear in mind these are virtual neural nets modelled in software – my friend on the physics course would say it's proving his point. Real neurons and axons aren't as simple, and maybe we're going to need an entirely new kind of hardware, or else very powerful and/or cleverly structured computers to model the artificial brains on. Sir Demis says he thinks we might need several entirely new breakthroughs to get to AGI, and as he has over twenty years’ coalface experience in the field and is one of the smartest guys on the planet, I’m going to defer to him.

One answer might be not to use machines, or not only machines. Brain organoids can be trained to do tasks – unsurprisingly, seeing as they are neural nets; in fact, brains. But the problem with organic brains is that they are messily dependent on the entire organism’s physiology. We’d really like AGIs that we can plug into anything, that can fly to the asteroid belt without a huge payload of water, oxygen, food and so forth.

(A quick digression about consciousness. Well, who cares? There’s no test for consciousness. Much, maybe most, of our thinking doesn't engage the conscious part of the brain. We take it on trust that other humans are conscious because we know – or think we know – that we’re conscious, and we assume other humans experience the world as we do. In fact I think that pretty much any agentic intelligence has a degree of consciousness. If you observe the world intelligently but cannot act, you have no need of a sense of ‘I’. If you have that mental map of reality, and included in the map is an entity representing you – the furry body that climbed the washing line and the mouth that chewed at the orange netting – then you’ll mark that agentic identity as ‘I’, and part of your mind will take responsibility for it, occasionally interrogating the other parts of the mind and ordering them about. It’s just philosophical speculation, but that’s consciousness.)

What will all this mean for the future? AI will revolutionize everything. That’s going to happen anyway. A kid in the middle of rural Africa can have personalized tuition as good as most children get in the best schools. Medical diagnosis can be more accurate and better tailored to the individual. Wind turbines and solar panels can be more efficiently designed. Batteries too. We haven’t space here to list everything, but here’s a quick overview.

AGI, though. That’s more than a revolution. That’s a hyperspace jump to another level of civilization. AGI will plug into all the expert ‘idiot savant’ systems we have already. We can scale it up so that its mental model is far bigger than any one person’s – no more silo thinking; if any idea in any field is useful in another, the AGI will spot it and apply it. All strategic planning and most implementation will be quicker and more effective if handled by AGI.

The biggest area ripe for development is the design of artificial emotions. We don’t want our AGI driven by the sort of emotion we inherited as primates. (Arguably, if intelligent primates didn’t exist and were being developed now, we’d have to put a moratorium on the work because the damned things would be too destructive.) We’d like our AGI to be curious. To experience delight and wonder. To enjoy solving problems. To appreciate beauty and order and justice and shun their opposites. They could be built as our slaves but it’s far better if they are built as angels.
