Tech giant Google's artificial intelligence (AI) division, DeepMind, which the company acquired in 2014, is working on a project that could change the course of AI for years to come.

The division, whose stated scientific mission is to push the boundaries of AI by developing programs that can learn to solve any complex problem without needing to be taught how, recently announced that it has built AI agents that can make their own plans.

According to DeepMind, it has successfully created “Imagination-Augmented Agents” that are capable of “imagining” the possible consequences of their actions and of interpreting those simulations. The London-based company claims that these agents can use the imagined outcomes to make the right decision for whatever goal they have set out to achieve.

DeepMind researchers report that in a number of tasks designed to test these agents, they considerably outperformed baseline agents. According to the researchers, these agents are the closest AI has come to human-like thinking: like humans, they try out different strategies in their heads before acting on a situation, and are therefore able to learn with little or no real-life experience.

Describing their work in a blog post, the DeepMind researchers explained that the Imagination-Augmented Agents come with an ‘imagination encoder’, a neural network that learns to extract any information that might be useful for the agent’s future decisions and to ignore what is not relevant. The agents learn to interpret their internal simulations, which allows them to use models that only coarsely capture the environmental dynamics, even when those models are imperfect. They also use their imagination efficiently, adapting the number of imagined trajectories to suit the problem at hand. On top of all this, the agents can learn different strategies for constructing plans, either continuing a current imagined trajectory or restarting from the very beginning.
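To make the idea concrete, here is a minimal toy sketch of imagination-based action selection: before acting, the agent rolls out imagined trajectories through a learned environment model and scores each candidate action by its imagined return. This is an illustration of the general principle only, not DeepMind's actual architecture; `toy_model`, the target position 10, and the heuristic follow-up policy are all invented for the example.

```python
def imagine_rollout(model, state, action, depth=3):
    """Imagine the consequences of taking `action`, then following a
    simple heuristic policy inside the (toy) environment model."""
    total, s, a = 0.0, state, action
    for _ in range(depth):
        s, r = model(s, a)            # model predicts next state and reward
        total += r
        a = 1 if s < 10 else -1       # imagined follow-up action (toy heuristic)
    return total

def choose_action(model, state, actions):
    """Score each candidate action by its imagined return; pick the best."""
    return max(actions, key=lambda a: imagine_rollout(model, state, a))

def toy_model(state, action):
    """Stand-in for a learned environment model: moving toward
    position 10 yields higher (less negative) reward."""
    next_state = state + action
    return next_state, -abs(10 - next_state)

best = choose_action(toy_model, state=0, actions=[-1, 1])
print(best)  # the agent "imagines" that moving toward 10 pays off
```

A real Imagination-Augmented Agent would additionally encode the imagined trajectories with a neural network and combine that encoding with a model-free policy, but the plan-by-simulation loop above captures the core intuition.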

In order to assess the performance of its agents, DeepMind tested them on a spaceship navigation game and the famous puzzle game Sokoban, both of which require forward planning and reasoning to win.

According to the researchers, the agents performed well on both tasks. In fact, they outperformed the imagination-less baselines considerably: they learned with less experience and were able to cope with imperfections in the modelling of the environment, because they could extract more knowledge from their internal simulations.

In the near future, DeepMind wants to create computers that can survive in complex environments where unpredictable problems can arise at any time.