Are we quickly approaching a point when machines will be smarter than humans? Will machines enhance our capabilities or take over?
At a seminar arranged by IVA’s Electrical Engineering and Information Technology divisions, these questions were explored from several different perspectives.
Anders Sandberg researches the future at the University of Oxford.
“Eventually AI will be able to do everything that humans do. At the moment AI complements human capabilities,” he said.
“Machines are already better at games than the top human players. They learn by competing against themselves. And they can take a recording of a human voice, imitate it, and then put together their own sentences.”
“AI machines can write their own text if they get a heading to build on. But do they understand what they’re writing?”
AI in physical form – as a robot – is not necessarily the whole story. AI can also be a service in the cloud.
“Even if no one discovers anything new in AI/machine learning, the existing algorithms are already changing the world,” said Sandberg.
Danica Kragic, a robotics researcher at the Royal Institute of Technology (KTH), predicts that AI and AR (augmented reality) will make it unnecessary to be physically present at our workplaces, even though we will still be there virtually.
She also pointed out that as robots develop, they are becoming more interactive with humans.
Staffan Truvé is CTO at Recorded Future.
“Sensors and AI are the basis for predictions about the future. And now the whole of humankind is a sensor,” he said.
He expects AI to affect people in much the same way that the breakthrough of the automobile in the last century affected horses.
“But in Sweden there are more horses now than when cars had their breakthrough. And they’re better,” he said.
How fast the world will be transformed by AI, machine learning, AR and other new technology is unclear. There are some factors slowing the process, and fundamental decisions need to be made before it runs riot. This was the opinion of Virginia Dignum, a professor of computer science at Umeå University.
“In fifty years’ time we will probably still be in about the same place. It’s not just the technology that controls what happens. It is also about what we want the technology and commercial enterprises to be able to do. But AI can help us to make smarter decisions,” she said.
She made the point that if new technology is to do real good, the choices we make must, among many other things, eliminate the risk of bias, discrimination and loss of human control.
“Responsible AI needs to be ethical, law-abiding and reliable. It also needs to know that it is in fact artificial and that humans are the ones who are in control.”
AI technology also needs to be controlled by values.
“But which values? And whose? Who will participate in the design process and who decides in the end? It’s more complex than it seems. All this needs to be in place before AI can be widely used,” said Virginia Dignum.
The fast development of AI, machine learning and new algorithms places greater demands on education.
Jonas Ivarsson is a professor of education at the University of Gothenburg.
“People need factual knowledge in order to be good at something. But the education system is not adapted to the kind of moving targets that AI creates,” he said.
What type of knowledge humans will need in an AI world is to some extent a matter of opinion.
“In the past, a person who wanted to drive a taxi in London had to learn every single street by heart. That’s how taxi drivers found their way around.”
Then along came Uber offering the same service but with the help of technology and with no need for the person behind the wheel to know all the streets.
“How much human knowledge is needed? Surgeons need a thorough knowledge of anatomy. No one would agree to be operated on by someone trying to find their way around using an app,” said Jonas Ivarsson.