The rise of AI… and EI

Vision, February 2016

What happens if the machines we spend billions of dollars creating and refining actually develop sentient qualities? What kind of world would we live in? Microsoft co-founder Paul Allen hit the headlines earlier this year for, as The Washington Post puts it, his plan to “build an artificial brain from scratch”. The reality is slightly more prosaic, and does not involve cyborgs walking around Seattle. But Oren Etzioni, Chief Executive of the Allen Institute for Artificial Intelligence, can understand the intrigue.

“Because of films like The Terminator, AI gets a bad rap,” he says. “I understand that. But as a scientist it’s my job to look at the next 25 years and, in my view, it’s more likely that an asteroid will destroy the Earth than AI will somehow turn evil. If you open the aperture and ask me if it’s possible for AI to be dangerous to humans in a million years… well, that’s a speculative, conceptual possibility.”

Etzioni’s belief for now, then, is that AI is a tool for empowerment rather than something that will make the human race obsolete. One of the first products of the Allen Institute’s initial investigations into AI – all funded philanthropically by Allen – is the recently launched Semantic Scholar. Once search parameters have been set, the system trawls through millions of academic papers to find connections.

“Ultimately it’s about helping scientists and doctors cope with the information overload,” says Etzioni. “Nobody can possibly keep up. So our work is to leverage AI to help them find that critical paper. It can help with everything from prescribing the right drug to finding cures for cancer or solutions to climate change. Scientists need help – and AI can provide it.”

Still, to the wider world, there does not appear to be much difference between this Allen Institute project and a Google search. Indeed, there is even a program, Google Scholar, which Etzioni admits revolutionised the research field – Semantic Scholar is named in homage to it. But, he explains, Google Scholar is basically a “keyword search engine where you hope for the best and get millions of results. It’s like trying to find a needle in a haystack”.

“What we have with Semantic Scholar is that in two or three clicks we can be down to the three relevant papers. To do that requires machine learning techniques and natural language processing. It requires AI,” says Etzioni.
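Etzioni’s contrast between keyword matching and semantic search can be caricatured in a few lines. The sketch below is emphatically not Semantic Scholar’s method – the corpus, paper names and TF-IDF weighting are all invented for illustration – but it shows the blind spot he describes: an exact-term scorer misses a relevant paper that uses different words.

```python
import math
from collections import Counter

# Toy stand-in corpus: three invented "abstracts". This is NOT
# Semantic Scholar's pipeline, just a sketch of keyword-style
# (TF-IDF) ranking and its limitation.
docs = {
    "paper_a": "deep learning for tumour detection in radiology scans",
    "paper_b": "radiology workflow and hospital scheduling software",
    "paper_c": "neural networks detect tumours in medical images",
}

def tf_idf_vectors(corpus):
    """One TF-IDF weight per term per document."""
    tokens = {d: text.split() for d, text in corpus.items()}
    n = len(corpus)
    df = Counter(t for toks in tokens.values() for t in set(toks))
    return {
        d: {t: (c / len(toks)) * math.log(n / df[t])
            for t, c in Counter(toks).items()}
        for d, toks in tokens.items()
    }

def score(query, vec):
    """Sum the document's weights for the exact query terms only."""
    return sum(vec.get(term, 0.0) for term in query.split())

vectors = tf_idf_vectors(docs)
query = "tumour detection radiology"
ranking = sorted(docs, key=lambda d: score(query, vectors[d]), reverse=True)
print(ranking)  # paper_c is clearly relevant but scores zero
```

The failure is the point: paper_c is about detecting tumours, yet shares no exact term with the query, so keyword scoring buries it – the “needle in a haystack” problem that the machine learning and natural language processing behind Semantic Scholar are meant to solve.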

Of course, this kind of targeted search uses a lot of processing and programming to act in the way a human might. It’s a similar idea to asking an app to find you the quickest way home based on all the environmental considerations, rather than just all the ways home. In future – and it is the most eye-catching example of attainable AI – the computer will actually drive you home on that route. Does that make the computer ‘intelligent’ in the traditional sense, able to acquire and apply knowledge and skills? Defining intelligence in any field is incredibly difficult, says Etzioni.
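The “quickest way home” idea is a classic weighted shortest-path search: edge weights are travel times that already fold in conditions, so the quickest route can differ from the shortest. Below is a minimal Dijkstra-style sketch; real navigation systems use vastly richer models and live data, and every place name and travel time here is made up.

```python
import heapq

# Invented toy road graph: edge weights are minutes of travel time,
# with conditions (e.g. congestion) already baked in, so "quickest"
# need not mean "shortest".
graph = {
    "office": [("highway", 10), ("backstreets", 5)],
    "highway": [("home", 30)],        # congested this evening
    "backstreets": [("home", 12)],
    "home": [],
}

def quickest_route(graph, start, goal):
    """Return (total_minutes, route) minimising travel time (Dijkstra)."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, minutes in graph[node]:
            if nxt not in seen:
                heapq.heappush(queue, (cost + minutes, nxt, path + [nxt]))
    return float("inf"), []

print(quickest_route(graph, "office", "home"))
```

With the congested highway weighted at 30 minutes, the search prefers the backstreets despite the longer hop count – exactly the “quickest, not just any, way home” behaviour the paragraph describes.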

“What we actually work on is augmented intelligence rather than artificial intelligence, because the computer of today is literal-minded and limited. It is ‘weak AI’ – technology that is effectively providing tools for humanity.”

The idea that AI can be a positive force is gaining traction, mirrored in the UAE’s AI & Robotics Award for Good initiative. The award aims to “encourage research and applications of innovative solutions in artificial intelligence and robotics to meet existing challenges in the categories of health, education and social services”, with a US$1m prize for the winners of the international and national competitions.

HH Sheikh Hamdan bin Mohammed bin Rashid Al Maktoum, the Crown Prince of Dubai, introducing the UAE’s AI and Robotics Award for Good initiative

“Humanity is on a journey. From the discovery of fire to the industrial revolution, we are on a constant voyage of discovery. Robotics and artificial intelligence are the next step,” said His Highness Sheikh Mohammed bin Rashid Al Maktoum, Vice President and Prime Minister of the UAE, and Ruler of Dubai, at the launch earlier this year. It will be fascinating to see which companies and innovations make the shortlist for 2016; in the meantime, the continuing work at the Allen Institute for AI is very much consistent with the award’s aims.

Not that Etzioni looks at the world of AI with completely rose-tinted spectacles. He understands that with greater reliance on AI and robotics, the implications for the job market are immense. As our automated tools become more efficient, fewer humans are required in any process. So when The Washington Post heralds the Allen Institute’s creation of an artificial brain that can pass a high school science test, it’s bound to be a cause for concern. Current Allen Institute projects Aristo and Euclid are indeed solving maths, geometry and science problems at multiple grade levels right now, but with these advances comes the argument that students should be using their own intelligence, not just firing up an AI program and copying the results.

“We’re absolutely not trying to replace that important process of learning,” says Etzioni. “Really, what we’re doing is more of a proving ground for the machine. Currently it lacks the ability to understand natural language and exhibit basic common sense. So a question like ‘why does the marble drop to the bottom of a glass of water: is it due to gravity, or magnetism, or friction?’ is easy for most people to answer. But paradoxically it’s beyond the capability of most machines today. And yet a machine can play grandmaster chess, which is hard for most people.

“So we’re using these tests to develop that understanding. It’s a huge breakthrough to get a computer to even an average level in a relatively limited field of geometry.”

The connection between intelligence and learning is, of course, key. And it’s here where AI is at its most interesting. Can computers ‘learn’ to solve ill-defined questions without the outside influence of programmers? Etzioni refers to the advances in speech recognition technology that exhibit some of those behaviours. It is a narrow task with a well-defined outcome. A search engine ‘remembers’ your favourite sites and suggests similar things you might like. But writing a beautiful poem out of nowhere, for example, is an ill-defined task. So the computer struggles.

“And that’s because computers execute instructions and optimise functions. But if they don’t know what they’re optimising, they’re of little use. There are no nuances with computers.”

This is why the Allen Institute is so interesting. Alongside the AI work, there is also the Allen Institute for Brain Science, a medical research centre with the sole aim of understanding how the brain works. So while one part of the Seattle organisation breaks down the natural brain, another, in a way, is building one up.

“Paul Allen first focused on understanding intelligence from a neurological, biological perspective back in 2003,” says Etzioni, “and in 2013 he asked us to take a different approach through computing. He says that he doesn’t know which discipline will best give us that understanding and, to be honest, it’s not a race – the two perspectives might help each other.

“After all, in the end, we’re both tackling one of the most fundamental issues of all science: the nature of thought and intelligence. How we imbue computers with even a modicum of that is absolutely fascinating work.”

EI: the next stage of AI

It is 20 years since Daniel Goleman popularised the idea of emotional intelligence – the ability to understand and express emotions in a way that effectively communicates meaning and intent.

In November, Daniel McDuff at Massachusetts Institute of Technology’s Media Lab unveiled a basic webcam that could detect a range of facial movements, then translate them into common emotional states, such as happiness, contempt and disgust, which a computer could ‘understand’.

The technology has widespread applications: the BBC has used it to measure audience reaction to a comedy show, and clinicians could use it to track a patient’s response to treatment. A car could even soon detect whether its driver was tired.

Affective computing has moved ahead in voice technology too, perhaps because the nuances of speech are more obvious: when we are angry or joyful we tend to speak faster and louder. Put simply, this technology is less about Siri telling you where the nice restaurants are, and more about understanding from your voice how important it is that you eat now.
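The faster-and-louder observation can be sketched directly. This is only a toy heuristic with invented baselines and thresholds – real affective-computing systems train models on labelled speech – but it captures the intuition in the passage: large upward shifts in speaking rate and volume suggest heightened emotion.

```python
# Toy heuristic only: baselines (150 wpm, 60 dB) and the 0.3
# thresholds are invented for illustration, not drawn from any
# real affective-computing system.
def arousal_level(words_per_minute, loudness_db,
                  baseline_wpm=150, baseline_db=60):
    """Crude arousal label from shifts relative to a speaker's baseline."""
    rate_shift = (words_per_minute - baseline_wpm) / baseline_wpm
    volume_shift = (loudness_db - baseline_db) / baseline_db
    score = rate_shift + volume_shift
    if score > 0.3:
        return "heightened"   # angry or joyful speech: faster and louder
    if score < -0.3:
        return "subdued"
    return "neutral"

print(arousal_level(200, 75))  # fast and loud relative to baseline
```

A real system would, of course, separate anger from joy using many more cues; this sketch only flags that *something* emotionally urgent is going on – the difference between listing restaurants and noticing that you need to eat now.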

We are, as Oren Etzioni from the Allen Institute for AI says, a long way from computers that can actually feel emotion as sentient beings themselves. But understanding it in others, and displaying some form of basic emotional intelligence? That day has already arrived.

