Meta Researchers Investigate Internal Maps for Navigation

Meta researchers show that simple LSTM architectures can form internal maps solely from ego-motion in the task of goal-based navigation
Created on February 2 | Last edited on February 2
Meta researchers recently explored the ability of simple reinforcement learning agents to create internal representations (maps) of their environment.
A major motivation behind this research is that it has been shown that many biological organisms form internal maps of their environment, and this allows them to be more efficient in finding paths to achieve a specific task, such as scavenging for food. Until now, it was unclear whether ML agents learned these internal representations.
The research showed that model architectures like LSTMs, receiving inputs restricted solely to ego-motion (the change in position between successive steps), were able to build internal maps of their environment and quickly generalize to new environments in the task of "Point Goal Navigation," which tests an agent's ability to navigate to a specified offset relative to its starting position.
For example, the task could be "go 50 units left and 10 units down" within an environment that contains obstacles preventing direct paths. This work provides insight into how map-free neural network architectures can learn internal map representations from goal-driven navigation alone.
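To make the task setup concrete, here is a minimal sketch (not the paper's code; the goal offset and step deltas are made up for illustration) of the computation an agent must implicitly learn: integrating per-step ego-motion deltas, known as path integration, to track how far it still is from the goal offset.

```python
import numpy as np

# Illustrative sketch of PointGoal navigation inputs. The agent is given a
# target offset from its start (here: 50 units left, 10 units down) and at
# each step observes only its ego-motion -- the change in position since
# the previous step. Accumulating those deltas (path integration) tells it
# how far it still has to go; this is the computation an LSTM must learn
# implicitly from its recurrent state.

goal_offset = np.array([-50.0, -10.0])  # hypothetical target, relative to start

# Hypothetical ego-motion observations: per-step position deltas along a path.
ego_motion = np.array([
    [-10.0,  0.0],
    [-10.0, -5.0],
    [-30.0, -5.0],
])

position = np.zeros(2)                   # start at the origin
for delta in ego_motion:
    position += delta                    # accumulate ego-motion
    remaining = goal_offset - position   # offset still left to cover

print(remaining)  # [0. 0.] -- goal reached once the deltas sum to the offset
```

In the paper's setting the agent never receives `position` directly; the finding is that the LSTM's hidden state comes to encode a map-like representation from which such quantities can be decoded.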

What's next?

The work also notes key directions for future research, including how to imbue models with effective implicit or explicit priors (via model architectures or training objectives) that yield more sophisticated internal map representations.

The paper:

Tags: ML News