Why is progress in semantic technologies so slow?
A shiver ran down the collective spine of humanity in 1997 when Deep Blue bested the then world chess champion, Garry Kasparov. Chess grandmasters, as we all know, are super-smart: if a computer could beat Kasparov, what hope for the rest of us? Surely, the machines were taking over.
Almost two decades later, they palpably haven’t. At least not yet.
Artificial Intelligence (AI) has taken some giant strides over those years, giving us Siri, sat nav, driverless cars, high-frequency trading and Google’s Knowledge Graph among other wonders. But developments in the field seem puzzlingly patchy. Why is machine translation still so bad (and only painfully slowly getting better)? And, more to the point so far as this blog is concerned, why have semantic technologies that could aid scholarship been so slow to develop? We have seen only incremental gains in this field, so what is holding this particular branch of AI back?
A book by philosopher Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (apparently very popular with Silicon Valley CEOs) provides answers. But, a trigger warning in advance: some of the answers are liable to make you wish you’d never asked the question.
Why daleks can’t climb stairs
As Bostrom tells it, machines beating humans at chess isn’t really that big a deal.
Perhaps because we’ve always viewed chess as the province of superbrains, the ‘game of kings’ – the game that a Bond mastervillain might play with one hand while stroking his white Persian with the other – we falsely assumed that proficiency in chess must also require a high level of general intelligence. In fact, as Bostrom says, ‘it turned out to be possible to build a perfectly fine chess engine around a special-purpose algorithm’.
Not only is chess not the most difficult challenge in AI, it isn’t even the hardest game: the best humans still beat machines at full-ring Texas hold ‘em poker, for instance.
A lot seems counter-intuitive about the AI problems that have proved really hard. While machines make short work of tasks we consider difficult – like mental maths – they struggle with things that humans find so easy we take them for granted. A basic motor function like walking up a flight of stairs on two legs has proved a hugely difficult problem for robots (and daleks). Computer scientist Donald Knuth puts this neatly: ‘AI has by now succeeded in doing essentially everything that requires “thinking” but has failed to do most of what people and animals do “without thinking” – that, somehow, is much harder!’
Into this category of incredibly sophisticated things we do so instinctively that we take them for granted falls language. Natural language processing (NLP) poses some of the toughest problems for AI to solve. In fact, human-level natural language processing, as it is called, is considered an AI-complete problem.
This means that when we have solved the problem of NLP we will have also solved the ‘central artificial intelligence’ problem – and achieved what Nick Bostrom calls human-level machine intelligence (or HLMI).
So here is where the really bad news for semantic technologies – and potentially for the human race – comes in. But before I tell you that, I first have to explain a couple of terms.
HLMI is defined as a level of AI that ‘can carry out most human professions at least as well as a typical human’ – at which point, farewell to your job. Superintelligence is the point beyond that where machines surpass human brains in general intelligence. At which point … well, suffice it to say that Superintelligence is a book that could make you sleep a little less easily at night.
The race to be replaced
If human-level natural language processing is an AI-complete problem, it follows that the future of semantic technologies must be closely tied to the future of AI in general. So if we are asking when semantic technologies will become mature enough to be of general use for scholarship (rather than just in domains with a tightly controlled vocabulary, like chemistry) – if we are asking when something like the semantic web might really arrive – what we are really asking is: when will HLMI arrive? Because solving the problem of language, if I interpret this correctly, seems to involve solving the bigger problem of general intelligence.
Expert opinions about when the HLMI milestone might be reached vary wildly. A recent survey of expert communities discussed in the book returned answers ranging from 2020 to 2093 (some think it will never happen; others disagree with the definition). Bostrom plumps for mid-century. He also feels that when that happens, superintelligence will follow ‘fairly soon thereafter’.
In other words (to put things crudely) when something like the semantic web finally arrives, it might not necessarily be time to pop the bubbly. For those of a more pessimistic disposition it will instead be the signal to load up on guns and head for the hills – because by then, the machines really will be taking over.
… Or, how I learned to stop worrying and love the singularity
Bostrom’s book, calm and clearly argued, is usefully devoid of the apocalyptic and boosterish strains that creep into other works on this subject (and seem somehow to have crept into this posting, too). However, as he thinks through and describes the ways in which machines might take over, and the strategies we should deploy to stop that happening, the calm, reasonable tone becomes, for me at least, all the more chilling.
I found myself remembering the last reel of Stanley Kubrick’s film Dr Strangelove, with Peter Sellers’ wheelchair-bound scientist laying out strategies for surviving nuclear Armageddon: underground bunkers and the like. International cooperation at an early stage will be important, according to Bostrom, as we try to stop the machines going renegade and talking their way out of whatever box we have confined them in. International cooperation? Well, good luck with that. And when I reached a chapter headed ‘Will the best in human nature stand up’, and found myself mentally adding a question mark … I finally gave up and reached for the whisky bottle.