Few topics have divided the tech world as sharply in recent years as artificial intelligence (AI). Not even some of the brightest minds, many of them regular speakers at TED conferences, can agree on whether AI will empower us as human beings or kill us, softly but surely.
Forward-thinking luminaries such as Elon Musk have been among those calling for caution, warning that AI could ultimately overpower us and even trigger an AI apocalypse.
So, is AI all doom and gloom or an opportunity to hold on ever more tightly to human values and ethics?
These talks don’t give a definitive answer as to whether AI is ultimately good or bad, but they will give you a good idea of the major factors at play and help you form your own opinion. Feel free to post comments below!
Major risks at stake
At the 2016 TED Summit, neuroscientist and philosopher Sam Harris warned, “If intelligence is just a matter of information processing, and we continue to improve our machines, we will produce some form of superintelligence. And we have no idea how long it will take us to do that safely.”
Harris added, “The moment we admit that some appropriate computational system is the basis of what intelligence is, and we admit that we will improve this system continuously, and we admit that the horizon of cognition very likely far exceeds what we currently know, then we have to admit that we are in the process of building some sort of god. Now would be a good time to make sure it’s a god we could live with.”
He stressed the importance of keeping superintelligent AI in check. The problem is that if AI goes wrong, even on the first try, the consequences could be apocalyptic.
“When you’re talking about super-intelligent AI that can make changes to itself, we only have one chance to get the initial conditions right. And even then, we will need to absorb the economic and political consequences of getting them right,” he added.
Check out the TED Talk here: Can we build AI without losing control over it? - Sam Harris
For technologist and TEDxBrussels 2014 speaker Jeremy Howard, the revolution that superintelligent AI will bring will have a stronger impact than the Industrial Revolution had on 19th-century society.
Howard explained that when engines started taking over jobs in the 19th century, there was drastic social disruption which eventually settled down after a while. On the other hand, he continued, “The machine learning revolution will be different from industrial revolution because it never settles down. The better computers get at intellectual activities, the more they can build better computers with intellectual capabilities, so this will be a change that the world has never seen before.”
The difference, he says, lies in the fact that through deep learning, AI capability grows exponentially, unlike the growth of engine capability before it.
“It’s already impacting us”, Howard warned. “In the last 25 years, as capital productivity increased, labour productivity has been flat and even a bit down. So, I want us to start having the discussion now. So, we have to start thinking how are we going to adjust our economic and social structures to be aware of this new reality.”
The wonderful and terrifying implications of computers that can learn – Jeremy Howard
Is AI Worth It?
There’s a reason why, despite the many risks associated with AI, scientists are still pursuing its development. Many would argue that AI’s tangible benefits are closer to realization than the prophesied doomsday scenario, and carry far more weight.
One of the biggest developments was proposed by inventor and futurist Ray Kurzweil during the TED 2014 conference in Vancouver.
He said that nanobots will allow us to connect our human neocortex to a synthetic neocortex in the cloud, thus providing an extension of our brain. “Now today, you have a computer in your phone, but if you need 10,000 computers for a few seconds to do a complex search, you can access that for a second or two in the cloud. Our thinking then would be a hybrid of biological and non-biological thinking.”
Check out the TED Talk here: Get ready for hybrid thinking – Ray Kurzweil
“Non-biological thinking will be subject to the law of accelerating returns – it will grow exponentially. And remember what happened the last time we expanded our neocortex? That was 2 million years ago when we became humanoids and developed these large foreheads. The frontal cortex is not really qualitatively different – it’s a quantitative expansion of the neocortex. But that additional quantity of thinking was the enabling factor for us to take a qualitative leap and advance language, art, science and technology.”
Kurzweil explained that no other species has done that. “So, in the next few decades, we’re going to do it again, we’re going to expand our neocortex, only this time, we won’t be limited by a fixed architecture of enclosure, it will be expanded without limit. That again will be the enabling factor for a qualitative leap in culture and technology.”
Siri creator and TED2017 speaker Tom Gruber echoed the sentiments of Kurzweil when he said, “I think the purpose of AI is to empower humans with machine intelligence. And as machines get smarter, we get smarter. I call this, humanistic AI - artificial intelligence designed to meet human needs by collaborating and helping people.”
“We can choose to make AI automate and compete with us, or we can use AI to augment and collaborate with us - to overcome our cognitive limitations and to help us do what we want to do, only better. And that is why every time a machine gets smarter, we get smarter.”
Fei-Fei Li, who is developing technology that will allow computers to see and understand images, also agrees with the benefits brought by AI. “When machines can see, doctors and nurses will have extra pairs of tireless eyes to help them diagnose and take care of patients. Cars will run smarter and safer on the road. Robots, not just humans, will help us brave the disaster zones to save the trapped and wounded. We would discover new species, better materials and explore unseen frontiers with the help of machines.”
She added, “First, we teach them to see. Then, they help us to see better. We will not only use the machines for their intelligence. We will also collaborate with them in ways that we cannot even imagine.”
Solving the AI fears may lie in ethics
The argument is that almost any benefit brought by AI can be turned against humans. This is why many experts believe ethics should be at the centre of discussions on AI. How we think about ethics in the AI context is the key to answering today’s biggest AI fears.
During the 2016 TED Summit, techno-sociologist Zeynep Tufekci cited the example of predictive AI technology now being used to make hiring decisions in companies and to rate the likelihood that criminal offenders will re-offend. However, despite being meant to be impartial, these algorithms still seem to have been imbued with human biases.
She said, “Artificial intelligence doesn’t give us a get out of ethics free card. We need to cultivate algorithm suspicion, scrutiny and investigation. We need to make sure we have algorithmic accountability and auditing and meaningful transparency. Bringing math and computation to messy, value-laden human affairs, doesn’t bring objectivity, rather the complexity of human affairs invades the algorithms.”
Tufekci agreed that we should use computation to help us make better decisions. “We have to own up to our moral responsibility and use algorithms within that framework, and not as a means to abdicate and outsource our responsibilities. Machine intelligence is here. That means we should hold on ever tighter to human values and ethics.”
Check out the TED Talk here: Machine intelligence makes human morals more important – Zeynep Tufekci
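Tufekci’s call for “algorithmic accountability and auditing” can be made concrete. The sketch below (in Python, using entirely made-up records and field names, not any real system’s data) shows one standard audit check: comparing a risk model’s false positive rate across demographic groups. A large gap between groups is exactly the kind of hidden bias an auditor would flag for investigation.

```python
def false_positive_rate(records, group):
    """Share of people in `group` who did NOT re-offend
    but were still flagged as high risk by the model."""
    negatives = [r for r in records if r["group"] == group and not r["reoffended"]]
    if not negatives:
        return 0.0
    flagged = [r for r in negatives if r["predicted_high_risk"]]
    return len(flagged) / len(negatives)

# Hypothetical audit log: model predictions vs. actual outcomes.
records = [
    {"group": "A", "predicted_high_risk": True,  "reoffended": False},
    {"group": "A", "predicted_high_risk": True,  "reoffended": False},
    {"group": "A", "predicted_high_risk": False, "reoffended": False},
    {"group": "B", "predicted_high_risk": False, "reoffended": False},
    {"group": "B", "predicted_high_risk": True,  "reoffended": False},
    {"group": "B", "predicted_high_risk": False, "reoffended": False},
]

fpr_a = false_positive_rate(records, "A")
fpr_b = false_positive_rate(records, "B")
print(f"FPR group A: {fpr_a:.2f}, group B: {fpr_b:.2f}")
```

Real audits use far richer metrics, but even this simple comparison is something a closed, unauditable system makes impossible, which is Tufekci’s point.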
Another way to address the fundamental issues of AI was proposed by Nick Bostrom, a speaker at the TED2015 conference. He said the key is to figure out how to create superintelligent AI such that even if, or when, it takes an unplanned action, it is still safe, because it is fundamentally on our side and shares our values.
“We would create an AI that uses its intelligence to learn what we value. And its motivation system is constructed in such a way that it is motivated to pursue our values or perform actions that it predicts we would have approved of,” Bostrom said.
He adds, “The values that the AI has need to match ours not just in the familiar context, like where we can easily check how the AI behaves but also all novel contexts that the AI may encounter in the near future.”
As you’ve seen, AI experts are divided between those who see AI as an extension of us, and thus an improvement for humans, and those who believe AI will destroy us (or at least harm our society in the long run). There are also those who think ethics plays a key role in preventing the latter from happening.
Whether you’re for, against or undecided about AI, one thing is for sure: we as a society will have to think seriously about how we want AI to develop as it embeds itself more and more into our lives.
1 comment on "Will AI Kill Us Or Empower Us? TED Talk Experts Weigh In"
Blockchain and Artificial Intelligence
I came across a really insightful article by Jeremy Epstein on the potential of blockchain to help us keep #AI under control and retain ownership of our data. Below is a summary of Jeremy's article, including some early pioneering work on blockchain-based AI.
There are three essential layers to AI: the data repository; the algorithm/machine learning engine; the AI interface.
If you are going to trust your decision-making to a centralised AI source, you need to have 100 percent confidence in:
- The integrity and security of the data (are the inputs accurate and reliable, and can they be manipulated or stolen?)
- The machine learning algorithms that inform the AI (are they prone to excessive error or bias, and can they be inspected?)
- The AI’s interface (does it reliably represent the output of the AI and effectively capture new data?)
In a centralised, closed model of AI, you are asked to implicitly trust in each layer without knowing what is going on behind the curtains.
The results can be alarming, even totally unacceptable in a democratic society. Reports by the New York Times and Wired cover the use of COMPAS, a proprietary machine-learning system used by courts in many parts of the U.S., which in practice recommends longer prison sentences for Black defendants than for white defendants, all other data points being equal.
The AI makes racially-biased decisions, but no one can inspect it, and the company that makes it will not explain it. It’s closed, it is hidden, and models like these are in the hands of big, powerful companies with no incentive to share them or reveal how they work.
Blockchain + AI
Over time, more and more data will flow into blockchains. This is expected to reduce the big data advantage that the FANGs, BATs, and Fortune 1000 have over the little people.
Several projects have now been set up to reward people through cryptographic tokens for making their data available through a decentralised marketplace. The hope is that it could lead to ever-more accurate AI models and the ability to create valuable conversational user interfaces, all with the trust and transparency that blockchains offer.
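The trust-and-transparency claim rests on a simple primitive: once a cryptographic fingerprint of a dataset is committed to a blockchain, anyone can later verify that the data they were handed was never altered. This toy Python sketch (not the API of any real project; the function and variable names are illustrative) shows the idea using an ordinary SHA-256 hash:

```python
import hashlib
import json

def fingerprint(dataset):
    """Deterministic SHA-256 fingerprint of a dataset."""
    blob = json.dumps(dataset, sort_keys=True).encode("utf-8")
    return hashlib.sha256(blob).hexdigest()

# A tiny hypothetical training set offered on a data marketplace.
dataset = [{"features": [0.1, 0.2], "label": 1},
           {"features": [0.3, 0.4], "label": 0}]

committed = fingerprint(dataset)   # imagine this hash recorded on-chain

# Later, a consumer re-checks the data against the on-chain commitment.
assert fingerprint(dataset) == committed     # untampered copy: matches

tampered = [dict(r) for r in dataset]
tampered[0]["label"] = 0                     # someone quietly flips a label
assert fingerprint(tampered) != committed    # tampering is detected
print("integrity check passed")
```

Real projects add incentives, access control and provenance on top, but this detect-any-change property is the layer of trust the blockchain contributes.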
Blockchain-based AI projects are still in the early stage of development, but let's take a look at some of the pioneering ones to get an idea of where they might take us:
1. Ocean Protocol: decentralised data exchange protocol and network that incentivizes the publishing of data for use in the training of artificial intelligence models.
2. SingularityNET: aims to be the first blockchain-based AI-as-a-service (AIaaS) marketplace.
3. SEED: a project aimed at giving us the confidence that we can actually trust the bots in our lives. It's an open-source, decentralised network where any and all bot interactions can be managed, viewed, and verified.
More details and analysis from Jeremy Epstein, CEO of Never Stop Marketing and author of The CMO Primer for the Blockchain World, http://news.rubiksdigital.com/dcu15y.