A shiver ran down the collective spine of humanity in 1997 when Deep Blue bested the then world chess champion, Garry Kasparov. Chess grandmasters, as we all know, are super-smart: if a computer could beat Kasparov, what hope for the rest of us? Surely, the machines were taking over.
Almost two decades later, they palpably haven’t. At least not yet.
Artificial Intelligence (AI) has taken some giant strides over those years, giving us Siri, sat nav, driverless cars, high-frequency trading and Google’s Knowledge Graph among other wonders. But developments in the field seem puzzlingly patchy. Why is machine translation still so bad (and only painfully slowly getting better)? And, more to the point so far as this blog is concerned, why have semantic technologies that could aid scholarship been so slow to develop? We have seen only incremental gains in this field, so what is holding this particular branch of AI back?
A book by the philosopher Nick Bostrom, Superintelligence, suggests an answer: natural language processing (NLP) is what AI researchers call an ‘AI-complete’ problem.
This means that when we have solved the problem of NLP, we will also have solved the ‘central artificial intelligence’ problem – and achieved what Bostrom calls human-level machine intelligence (HLMI).
So here is where the really bad news for semantic technologies – and potentially for the human race – comes in. But before I tell you that, I first have to explain a couple of terms.
HLMI is defined as a level of AI that ‘can carry out most human professions at least as well as a typical human’ – at which point, farewell to your job. Superintelligence is the point beyond that, where machines surpass human brains in general intelligence. At which point … well, suffice it to say that Superintelligence is a book that could make you sleep a little less easily at night.
The race to be replaced
If human-level natural language processing is an AI-complete problem, it follows that the future of semantic technologies must be closely tied to the future of AI in general. So when we ask when semantic technologies will become mature enough to be of general use for scholarship (rather than just in domains with a tightly controlled vocabulary, like chemistry) – when we ask when something like the semantic web might really arrive – what we are really asking is: when will HLMI arrive? Because solving the problem of language, if I interpret this correctly, involves solving the bigger problem of general intelligence.
Expert opinions about when the HLMI milestone might be reached vary wildly. A recent survey of expert communities discussed in the book returned answers ranging from 2020 to 2093 (some think it will never happen; others dispute the definition). Bostrom plumps for mid-century. He also believes that once that happens, superintelligence will follow ‘fairly soon thereafter’.
In other words (to put things crudely), when something like the semantic web finally arrives, it might not be time to pop the bubbly. For those of a more pessimistic disposition it will instead be the signal to load up on guns and head for the hills – because by then, the machines really will be taking over.
… Or, how I learned to stop worrying and love the singularity
Bostrom’s book, calm and clearly argued, is usefully devoid of the apocalyptic and boosterish strains that creep into other works on this subject (and seem somehow to have crept into this post, too). However, as he thinks through the ways in which machines might take over, and the strategies we should deploy to stop that happening, the calm, reasonable tone becomes, for me at least, all the more chilling.
I found myself remembering the last reel of Stanley Kubrick’s film Dr Strangelove, with Peter Sellers’ wheelchair-bound scientist laying out strategies for surviving nuclear Armageddon: underground bunkers and the like. International cooperation at an early stage will be essential, according to Bostrom, as we try to stop the machines going renegade and talking their way out of whatever box we have confined them in. International cooperation? Well, good luck with that. And when I reached a chapter headed ‘Will the best in human nature stand up’, and found myself mentally adding a question mark … I finally gave up and reached for the whisky bottle.