It is "sheer fantasy" to think AI will ever match an intelligent human
Maggie Boden, one of the world’s most eminent experts in artificial intelligence, is, by her own admission, “completely computer phobic”.
“I can’t cope with the damn things,” she says. “I have a Mac on my desk and if anything goes wrong it’s an absolute nightmare.”
But then hardware devices, or “gizmos” as she prefers to call them, have never really been behind her 50-year fascination with AI.
For Maggie, Research Professor of Cognitive Science at the University of Sussex, the challenge has been to see whether the multifarious workings of the brain and mind can be understood by applying the principles of computer programming – essentially, imagining the mind and brain as a machine.
Now aged 81, she shows no sign of running out of steam. Earlier this year she was appointed to the advisory board of an All Party Parliamentary Group on AI, while her most recent publication, AI: Its Nature and Future (2016), has been described in Nature as a “masterclass of a book”.
Her many accolades and achievements include an OBE and three honorary doctorates. She has also been vice-president of the British Academy, chair of the Council of The Royal Institution, and profiled on BBC Radio 4’s The Life Scientific.
With her rapier wit, fearsome intellect and penchant for vibrant clothes and statement jewellery, she could, if it’s not inappropriate in the context of AI, be described as “larger than life”. And certainly her reputation in her field, where for many years she was a lone woman, remains indomitable.
But her insatiable curiosity – particularly about how the brain, a material thing, can also be responsible for intangibilities such as creativity, emotion and personality – is what makes her such engaging company.
Putting theory to the test
At her home in Brighton she reflects on how theories first conceived several decades ago are now being put to the test through advanced computer technology. Big data storage and artificial neural networks that behave in ways somewhat similar to the brain are helping scientists take those small steps to identify what goes on in the body’s most complex and mysterious organ.
“The current advances are technical rather than theoretical,” she points out. “It’s only recently that we have had enough computer power to exploit theoretical ideas that originated in the 1980s. And many of the problems facing AI back in the 1950s and 1960s still haven’t been solved.”
For example, she wouldn’t be surprised if we never have an AI system that’s more intelligent across the board than a human. “I know a lot of my AI colleagues think I’m nuts to say that. But the idea that we could ever have a time when a machine could converse in a rich and subtle way about matters of human interest with an intelligent and educated human is, to my mind, sheer fantasy.”
Nevertheless, the quest continues. And Maggie’s role, as she sees it, is to examine AI from a philosophical perspective.
She herself has been intrigued by big ideas since childhood – choosing to skip games afternoons at the City of London School For Girls in order to sit on the floor of bookshops in the Charing Cross Road to read Bertrand Russell.
Initially, she wanted to be a psychiatrist. “I was fascinated by how the body worked, how the human mind worked and what was going on when it went wrong in various ways.”
But, after undergraduate degrees in both medicine and philosophy at Cambridge University and three years teaching philosophy at the University of Birmingham, she went to Harvard University to study psychology, and realised that she was in a position to bring together these academic disciplines in a unique way.
In 1965 she arrived at Sussex as a lecturer in philosophy and in the early 1970s she helped to found the world’s first centre for cognitive science (including AI), now part of the School of Informatics and Engineering.
“I wanted to do things that didn’t exist,” she says. “I wanted to do the philosophy of biology, the philosophy of psychology. Even before arriving at Sussex, I had been publishing articles about how using the idea of a computer program could help to address the mind-brain problem.”
She had started developing these ideas in her first book, Purposive Explanation in Psychology (1972). (The publishers’ advance copy arrived three days after the birth of her second baby. “Both were deep purple,” she remembers.)
While others were building computers, Maggie was addressing the social implications of AI. Her groundbreaking book, Artificial Intelligence and Natural Man (1977), the world’s first overview of AI, didn’t contain a single line of computer code, yet it became a set text at Harvard, MIT and the Open University.
AI beyond imagination
Although she recognised that computers would play an important social role in the future, she says it was inconceivable to her that they would become things you would carry around in your pocket.
She recalls her colleague at Sussex, Aaron Sloman, creating a game on their office computer back in the early 1970s that she had encouraged her children, Ruskin and Jehane, to play. “I didn’t want them to see computers as something alien,” she says, aware of the irony that parents now moan about not being able to get their children off personal devices.
A single mother for most of her career, she attributes her success in what was – and arguably still is – a male-dominated field largely to not caring that she was often the only woman in the room.
“I rather liked it,” she admits. “I like men. Getting discriminated against is another matter. But being the only woman in the room doesn’t bother me – it’s irrelevant intellectually.”
She sometimes dealt with prejudices – frequently being mistaken for a girlfriend or a secretary – by “being very naughty” and playing along with the assumptions until delivering the killer line that she was an academic – and an eminent one at that.
Another withering response that she taught herself (after practising in front of a mirror for hours) was to raise only one eyebrow, although she still kicks herself for not employing it after giving her first lecture to the Aristotelian Society at the age of 24 and receiving a sexist put-down.
“My lecture was about explanation,” she recalls, “and one of the many things I said was that in the general case, an explanation doesn’t have to be true. When I finished, the chairman, with a long white beard stained with nicotine, didn’t even say thank you. He simply said, ‘Just like a woman. Truth isn’t important’.”
While she continues her research (her main focus is the computational processes of the mind that underlie creativity), she admits to being anxious about the direction AI is taking.
“I don’t envy any of my grandchildren,” she says. “I have four, the eldest is 16 and the youngest is 10. I think there are huge problems ahead, one of which is AI and its effect on employment.
“People say we shouldn’t worry about jobs. Of course there will be new sorts of jobs. I mean, whoever heard of a data scientist five years ago? But those jobs will be fewer than the number of jobs displaced, and they will only be open to those with a relatively high level of intelligence and education. It’s already true, but there is going to be a more extreme version of that.”
She is particularly alarmed at the prospect of carers being replaced by robots.
“If it’s a robot that plays you music or can entertain you, I have no problem. But if it’s supposed to be a companion that can have emotionally sensible conversations with you, then I think it’s a terrible betrayal of someone’s human dignity.
“People might say it’s better for someone to be entertained or engaged – they might otherwise be on their own. I think that’s terrible too, but I don’t think it’s worse. We should value human-to-human interactions, which are so important. Certainly AI has started to make some people think more carefully about what it is to be human.”