Localisation via Deep Imagination: learn the features not the map
by Spencer, Jaime, Mendez, Oscar, Bowden, Richard and Hadfield, Simon
Abstract:
How many times does a human have to drive through the same area to become familiar with it? To begin with, we might first build a mental model of our surroundings. Upon revisiting this area, we can use this model to extrapolate to new unseen locations and imagine their appearance.

Based on this, we propose an approach where an agent is capable of modelling new environments after a single visitation. To this end, we introduce “Deep Imagination”, a combination of classical Visual-based Monte Carlo Localisation and deep learning. By making use of a feature embedded 3D map, the system can “imagine” the view from any novel location. These “imagined” views are contrasted with the current observation in order to estimate the agent’s current location. In order to build the embedded map, we train a deep Siamese Fully Convolutional U-Net to perform dense feature extraction. By training these features to be generic, no additional training or fine-tuning is required to adapt to new environments.

Our results demonstrate the generality and transfer capability of our learnt dense features by training and evaluating on multiple datasets. Additionally, we include several visualizations of the feature representations and resulting 3D maps, as well as their application to localisation.
Reference:
Localisation via Deep Imagination: learn the features not the map (Spencer, Jaime, Mendez, Oscar, Bowden, Richard and Hadfield, Simon), In Proceedings of the European Conference on Computer Vision (ECCV), workshop on Vision-based Navigation for Autonomous Driving, Springer International Publishing, 2018. (Poster)
Bibtex Entry:
@InProceedings{Martin18,
  Author    = {Spencer, Jaime and Mendez, Oscar and Bowden, Richard and Hadfield, Simon},
  Title     = {Localisation via Deep Imagination: learn the features not the map},
  Booktitle = {Proceedings of the European Conference on Computer Vision (ECCV), workshop on Vision-based Navigation for Autonomous Driving},
  Publisher = {Springer International Publishing},
  Series    = {Lecture Notes in Computer Science},
  Year      = {2018},
  Month     = {09},
  Abstract  = {How many times does a human have to drive through the same area to become familiar with it? To begin with, we might first build a mental model of our surroundings. Upon revisiting this area, we can use this model to extrapolate to new unseen locations and imagine their appearance.

Based on this, we propose an approach where an agent is capable of modelling new environments after a single visitation. To this end, we introduce “Deep Imagination”, a combination of classical Visual-based Monte Carlo Localisation and deep learning. By making use of a feature embedded 3D map, the system can “imagine” the view from any novel location. These “imagined” views are contrasted with the current observation in order to estimate the agent’s current location. In order to build the embedded map, we train a deep Siamese Fully Convolutional U-Net to perform dense feature extraction. By training these features to be generic, no additional training or fine tuning is required to adapt to new environments.

Our results demonstrate the generality and transfer capability of our learnt dense features by training and evaluating on multiple datasets. Additionally, we include several visualizations of the feature representations and resulting 3D maps, as well as their application to localisation.},
  Comment   = {<a href="http://personalpages.surrey.ac.uk/s.hadfield/posters/Martin18.pdf">Poster</a>},
  Url       = {http://personalpages.surrey.ac.uk/s.hadfield/papers/Martin18.pdf},
}