Artificial intelligence is getting smarter by leaps and bounds — within this century, research suggests, a computer AI could be as “smart” as a human being. And then, says Nick Bostrom, it will overtake us: “Machine intelligence is the last invention that humanity will ever need to make.” A philosopher and technologist, Bostrom asks us to think hard about the world we’re building right now, driven by thinking machines. Will our smart machines help to preserve humanity and our values — or will they have values of their own?
Nick Bostrom is a Swedish philosopher and futurist known for suggesting that future advances in artificial-intelligence research may pose a supreme danger to humanity if the problem of control has not been solved before superintelligence is brought into being. Beyond scenarios of AI singleton takeover or the deliberate extermination of humanity, Bostrom cautions that even when given an innocuous task, a superintelligence might optimize for it ruthlessly and destroy humankind as a side effect. He says that although there are potentially great benefits from AI, solving the problem of control should be the absolute priority.
Let’s see what he said about technology and superintelligence: “Let’s ask ourselves, what is the cause of this current anomaly? Some people would say it’s technology. Now it’s true, technology has accumulated through human history, and right now, technology advances extremely rapidly — that is the proximate cause, that’s why we are currently so very productive. But I like to think back further to the ultimate cause.” He also talks about machine learning: “Today, the action is really around machine learning. So
rather than handcrafting knowledge representations and features, we create algorithms that learn, often from raw perceptual data. Basically, the same thing that the human infant does. The result is A.I. that is not limited to one domain — the same system can learn to translate between any pair of languages or learn to play any computer game on the Atari console. We need to think of intelligence as an optimization process, a process that steers the future into a particular set of configurations. A superintelligence is a really strong optimization process. It’s extremely good at using available means to achieve a state in which its goal is realized. This means that there is no necessary connection between being highly intelligent in this sense and having an objective that we humans would find worthwhile or meaningful.” To better understand, watch his TED talk, “What happens when our computers get smarter than we are?”
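Bostrom’s “optimization process” framing can be made concrete with a toy sketch. The code below is my own illustration, not anything from the talk: a simple hill-climbing loop that “steers” a numeric state toward an arbitrary goal configuration by keeping only the random changes that bring it closer. The point is that the loop pursues whatever goal it is given, worthwhile or not — the goal vector is entirely an assumption of the example.

```python
import random

def hill_climb(goal, steps=20_000, seed=0):
    """Toy 'optimization process': repeatedly propose small random
    changes to a state and keep whichever version lies closer to
    the goal configuration."""
    rng = random.Random(seed)
    state = [0.0] * len(goal)

    def distance(s):
        # Squared Euclidean distance from the goal configuration.
        return sum((a - b) ** 2 for a, b in zip(s, goal))

    for _ in range(steps):
        candidate = [x + rng.uniform(-0.1, 0.1) for x in state]
        if distance(candidate) < distance(state):
            state = candidate  # the process steers toward the goal
    return state

# The process is indifferent to what the goal means; it just optimizes.
result = hill_climb([1.0, -2.0, 3.0])
```

After enough steps, `result` ends up close to the goal vector, whatever that vector happens to encode — which is the gap Bostrom highlights between being a strong optimizer and having goals we would endorse.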