Learning Relative Depth from Occlusion Events

As we have described, the network develops Off channels that represent occluded objects. The activation of neurons in the Off channels is very likely to be correlated with the activation of other neurons elsewhere in the visual system, specifically neurons whose activation signals the presence of occluders. Simple Hebbian-type learning therefore lets such occlusion-indicator neurons gradually establish excitatory connections to the Off channel neurons, and vice versa.
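The Hebbian mechanism described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the population sizes, learning rate, and the modeling of both populations as activity vectors over a shared set of spatial positions are all assumptions for the sake of the example.

```python
import numpy as np

# Sketch (assumed setup): Off channel neurons and occlusion-indicator
# neurons are modeled as activity vectors over the same spatial positions.
# Repeated co-activation grows reciprocal excitatory weights (Hebbian rule).

n_positions = 8
eta = 0.1  # learning rate (assumed value)

# Reciprocal weight matrices, initially zero (no learned association).
w_occ_to_off = np.zeros((n_positions, n_positions))
w_off_to_occ = np.zeros((n_positions, n_positions))

for _ in range(100):
    # An object repeatedly disappears behind an occluder at position 3:
    # the Off channel and the occlusion indicator fire together there.
    off_activity = np.zeros(n_positions)
    occ_activity = np.zeros(n_positions)
    off_activity[3] = 1.0
    occ_activity[3] = 1.0

    # Hebbian update: a weight grows wherever pre- and postsynaptic
    # activity coincide (outer product of the two activity vectors).
    w_occ_to_off += eta * np.outer(off_activity, occ_activity)
    w_off_to_occ += eta * np.outer(occ_activity, off_activity)

# After learning, activating the occlusion indicator at position 3
# excites the Off channel at that same position, and vice versa.
probe = np.zeros(n_positions)
probe[3] = 1.0
print(w_occ_to_off @ probe)  # excitation concentrated at index 3
```

Because the update is driven purely by co-activation, the excitatory links emerge at exactly those positions where disappearances and occluders have co-occurred, which is what lets the learned weights later serve as predictions.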

After such reciprocal excitatory connections have been learned, the activation of occlusion-indicator neurons at a given spatial position tends to cause the network to favor the Off channel in its predictions -- i.e., to predict that a moving object will be invisible at that position. Thus, the network learns to use occlusion information to generate better predictions of the visibility/invisibility of objects.

Conversely, the activation of Off channel neurons sends excitation to the occlusion-indicator neurons: the disappearance of an object excites the representation of an occluder at that location. If the occluder's representation was not already active, the excitation from the Off channel may even be strong enough to activate it on its own. The disappearance of a moving visual object thus constitutes evidence for the presence of an inferred occluder.
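The inference step described above can be sketched with a single thresholded unit. The threshold and weight values are assumptions chosen so that learned Off-channel excitation alone suffices to activate the occluder representation, as the paragraph describes.

```python
# Sketch (assumed values): an occluder-representation unit fires when its
# total input crosses a threshold. After learning, excitation arriving
# from the Off channel can push it over threshold even with no direct
# bottom-up evidence for an occluder.

THRESHOLD = 0.5       # activation threshold (assumed)
W_OFF_TO_OCC = 0.8    # learned excitatory weight, assumed strong after training

def occluder_active(off_input: float, bottom_up_input: float = 0.0) -> bool:
    """The occluder representation activates if Off-channel excitation
    plus any prior bottom-up evidence exceeds the threshold."""
    total = W_OFF_TO_OCC * off_input + bottom_up_input
    return total >= THRESHOLD

# An object disappears (Off channel fires) where no occluder was seen:
# the disappearance alone is enough to infer an occluder.
print(occluder_active(off_input=1.0))   # True
# No disappearance and no direct evidence: the occluder stays inactive.
print(occluder_active(off_input=0.0))   # False
```

The design point is that the same reciprocal weights serve two roles: top-down, they bias visibility predictions; bottom-up, they turn unexplained disappearances into evidence for unseen occluders.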

We believe that our model of depth perception is consistent with Grossberg's [5] figure-ground detection network, and that our approach can lead to a self-organizing network with similar figure-ground detection behavior.
