Tuesday, April 11, 2017
Humanity IS Already Starting to LOSE CONTROL of AI
What sets humans apart from machines is the speed at which we can learn from our surroundings.
But scientists have successfully trained computers to use artificial intelligence to learn from experience – and one day they will be smarter than their creators.
Now scientists have admitted they are already baffled by the mechanical brains they have built, raising the prospect that we could lose control of them altogether.
Computers are already performing incredible feats – like driving cars and predicting diseases – but their makers say they aren’t entirely in control of their creations.
This could have catastrophic consequences for civilization, tech experts have warned.
Take the strange driverless car which appeared on the streets of New Jersey, US, last year.
It differed from Google, Tesla or Uber’s autonomous vehicles, which follow the rules set by tech developers to react to scenarios while on the road.
This car could make its own decisions by watching humans drive and learning from what it saw.
And its creators, researchers at chip-making company Nvidia (which supplies some of the biggest car makers with supercomputer chips), said they weren’t 100 percent sure how it did so, MIT Technology Review reported.
Its mysterious mind could be a sign of dark times to come, skeptics fear.
The car’s underlying technology, dubbed “deep learning”, is a powerful tool for solving problems.
It helps us tag our friends on Facebook and powers the assistants on our smartphones, such as Siri, Cortana and Google Assistant.
Deep learning has helped computers become better than people at recognizing objects.
The military is pouring millions into the technology so it can be used to steer ships, control drones and destroy targets.
And there’s hope it will be able to diagnose deadly diseases, make traders billionaires by reading the stock market and totally transform the world we live in.
But if we don’t make sure creators have a full understanding of how it works, we’re in deep trouble, scientists claim.
If they can’t figure out how the algorithms (the step-by-step instructions that tell computers how to perform the tasks we set them) work, they won’t be able to predict when they will fail.
Tommi Jaakkola, a professor at MIT who works on applications of machine learning, warns: “If you had a very small neural network [deep learning algorithm], you might be able to understand it.”
“But once it becomes very large, and it has thousands of units per layer and maybe hundreds of layers, then it becomes quite un-understandable.”
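Jaakkola’s point about scale can be illustrated with a little back-of-the-envelope arithmetic. The Python sketch below counts the weights and biases in a fully connected network; the layer sizes are purely illustrative assumptions, not taken from any real system:

```python
# Rough parameter counts for a fully connected ("dense") neural network.
# Each layer contributes (inputs * units) weights plus one bias per unit.
# All layer sizes here are illustrative assumptions.

def param_count(layer_sizes):
    """Total weights + biases for a dense network with the given layer widths."""
    return sum(a * b + b for a, b in zip(layer_sizes, layer_sizes[1:]))

# A toy network small enough to inspect by hand:
small = param_count([4, 10, 2])
print(small)  # 72 parameters

# "Thousands of units per layer and maybe hundreds of layers":
large = param_count([1000] + [2000] * 100 + [10])
print(large)  # roughly 400 million parameters
```

A 72-parameter toy can be examined by hand; a model with hundreds of millions of parameters cannot, which is why even its builders struggle to explain its individual decisions.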
That means a driverless car, like Nvidia’s, could plow headfirst into a tree and we would have no idea why it decided to do so.
Just imagine if artificial intelligence was given control of the stock market or military systems.
Another computer was also tasked with analyzing patient records to predict disease.
Joel Dudley, who led the project at New York’s Mount Sinai Hospital, said the machine was inexplicably good at recognizing schizophrenia – but no one knew why.
“We can build these models, but we don’t know how they work,” he said.
Several big technology firms have been asked to be more transparent about how they create and apply deep learning.
This includes Google, which said it would create an AI ethics board but has kept mysteriously quiet about its existence.
A top British astronomer recently warned that humans will be wiped out by robots that will take over the Earth in a matter of centuries.
He claims any aliens we encounter could be the remnants of human-like civilizations that have evolved into artificially intelligent machines.