HOW DO ROBOTS LEARN AND ADAPT WITH AI?

December 8, 2017

From driverless cars to medical imaging, artificial intelligence is being heralded as the next industrial revolution. But how does AI relate to us in all our glorious, complex humanity? Raia Hadsell explored the ways in which we’ll interact with algorithmic intelligence in the near future.

She offers an insight into how she and her colleagues are developing robots with the capacity to learn. Their superhuman ability to play video games is just the start.

Raia is a senior research scientist on the Deep Learning team at DeepMind, with a particular focus on solving robotics and navigation using deep neural networks.

She talked about how robots learn from their experience and change their own behaviour. To explain the concept, she told a story about her first experience creating a robot to avoid obstacles outdoors: she was pleasantly amazed to find that the robot could manoeuvre around trees without her having to change a single line of code. The robot learned and adapted itself.

“Let me tell you about machines that can learn to play video games, solve mazes and run through an obstacle course,” Raia said.

The programmes she writes are based on a learning algorithm called reinforcement learning. She explains the role of the ‘agent’, the central part of the algorithm that learns and takes actions on its own. “We call it an agent as it has its own autonomy,” she said. The agent interacts with the environment and, based on its observations, takes actions and receives rewards. The idea of reinforcement learning is simple and powerful at the same time.
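That observe-act-reward cycle can be sketched in a few lines of code. The 5-state corridor environment and the tabular Q-learning agent below are hypothetical toy stand-ins chosen for brevity, not the deep networks DeepMind actually uses, but the loop structure is the same: the agent observes a state, takes an action, receives a reward, and updates its own behaviour.

```python
import random

N_STATES = 5           # states 0..4; state 4 is the rewarding goal
ACTIONS = [-1, +1]     # step left or step right

def step(state, action):
    """Environment: move the agent and hand back (observation, reward, done)."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward, reward > 0

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    random.seed(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            if random.random() < epsilon:          # explore occasionally...
                action = random.choice(ACTIONS)
            else:                                  # ...otherwise exploit, breaking ties randomly
                best = max(q[(state, a)] for a in ACTIONS)
                action = random.choice([a for a in ACTIONS if q[(state, a)] == best])
            next_state, reward, done = step(state, action)
            # Q-learning update: nudge the value estimate toward reward + discounted future value
            target = reward + gamma * max(q[(next_state, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (target - q[(state, action)])
            state = next_state
    return q

q = train()
# After training, the learned policy prefers moving right (toward the goal) in every state.
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)  # [1, 1, 1, 1]
```

Nothing here is specific to robots or video games; swapping in a richer environment and a neural network in place of the Q-table gives the family of methods the talk describes.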

The agent she is talking about is a mathematical model, and she feels the team has been very successful in using deep neural networks with many layers of neurons and millions of connections between those neurons. She mainly uses games at DeepMind because they let her readily evaluate how well the agent does by pitting it against a human player.
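To make "many layers of neurons and millions of connections" concrete, here is a miniature feed-forward network; the layer sizes are arbitrary illustrations (a flattened 84×84 "screen" feeding three layers down to 4 action scores), not the architecture from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Each consecutive pair of sizes defines one layer's weight matrix.
layer_sizes = [84 * 84, 256, 64, 4]    # flattened pixels -> hidden layers -> 4 actions
weights = [rng.normal(0, 0.01, (m, n)) for m, n in zip(layer_sizes, layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    """Propagate an observation through every layer; the final layer scores actions."""
    for w, b in zip(weights[:-1], biases[:-1]):
        x = relu(x @ w + b)
    return x @ weights[-1] + biases[-1]

obs = rng.random(84 * 84)              # a fake screen of pixel intensities
scores = forward(obs)                  # one score per possible action
n_connections = sum(w.size for w in weights)
print(scores.shape, n_connections)     # (4,) 1822976 -- already ~1.8M connections
```

Even this small example has nearly two million connections, which is why training such models requires the large-scale learning machinery described above rather than hand-set weights.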

She goes on to share a video demonstrating how agents can learn to play a game purely from the pixels on the screen, becoming expert after playing it over 500 times. She explains that the architecture involves training two neural networks, or two agents, at the same time on the same game: one breaks the problem into subproblems and passes them to the other, which takes the actions, controls the joystick and tries to maximise the reward in the game.
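The division of labour between the two agents can be sketched schematically. Both roles below are hypothetical stand-ins (simple hand-written rules instead of trained neural networks) meant only to show how a high-level "manager" decomposes the task into subgoals while a low-level "worker" controls the joystick; this is not DeepMind's actual architecture.

```python
def manager(state):
    """High-level agent: break the task into a subproblem (a target x-position)."""
    # Illustrative rule: head for whichever edge of the screen is farther away.
    return 0 if state["x"] > 5 else 10

def worker(state, subgoal):
    """Low-level agent: choose a joystick action that pursues the current subgoal."""
    if state["x"] < subgoal:
        return "RIGHT"
    if state["x"] > subgoal:
        return "LEFT"
    return "NOOP"

state = {"x": 3}
subgoal = manager(state)         # the manager decides: "reach x = 10"
action = worker(state, subgoal)  # the worker translates that into a joystick move
print(subgoal, action)           # 10 RIGHT
```

In the trained system both policies would be neural networks learned from reward, but the interface is the same: subgoals flow down, actions flow out.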

To better understand how robots will experience, learn and adapt with the help of algorithmic intelligence in the near future, please watch the full video:
