In this study we present a novel method for position- and scale-invariant object representation based on a biologically inspired framework. Grid cells are neurons in the entorhinal cortex whose multiple firing locations form a periodic triangular array, tiling the surface of an animal's environment. We propose a model for simple object representation that maintains position and scale invariance, in which grid maps capture the fundamental structure and features of an object. The model provides a mechanism for identifying feature locations in a Cartesian plane and for encoding vectors between object features with grid cells. We show that key object features can be represented and located by grid maps through their specific spacing, orientation, and spatial phase. Using a multi-output Convolutional Neural Network, we achieved ~99.3% and ~99.1% accuracy in predicting the x and y values, respectively. The model further provides a mechanism for translating and scaling vectors encoded by grid cells. The translation and scale of an object is limited to the borders of the Cartesian plane, indicative of the role of boundary vector cells in the brain. We intend to continue this work and explore its applications in image processing for image recognition.
Kraiger, Keaton, "The Applications of Grid Cells in Computer Vision" (2019). Undergraduate Research & Mentoring Program. 35.