Gray-scale images are represented by recurrent neural subnetworks which, together with a competition layer, create an associative memory. A single recurrent subnetwork Ni implements a stochastic nonlinear fractal operator Fi, constructed for the given image fi. We show that under realistic assumptions Fi has a unique attractor which is located in the vicinity of the original image. Therefore one subnetwork represents one original image. The associative recall is implemented in two stages. First, the competition layer finds the most invariant subnetwork for the given noisy input image g. Next, the selected recurrent subnetwork produces, in a few (5-10) global iterations, a high-quality approximation of the original image. The degree of invariance of the subnetwork Ni on the input g is measured by the norm ||g-Fi(g)||. We have experimentally verified that associative recall for images of natural scenes with pixel values in [0, 255] is successful even when Gaussian noise has a standard deviation σ as large as 500. Moreover, the norm computed on only 10% of pixels, chosen randomly from the images, still successfully recalls a close approximation of the original image. Compared to the Amari-Hopfield associative memory, our solution has no spurious states, is less sensitive to noise, and its network complexity is significantly lower. However, for each new stored image a new subnetwork must be added.
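The two-stage recall described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the `operators` list stands in for the trained recurrent subnetworks (each a callable image-to-image map Fi), and `sample_frac` mimics the experiment where the invariance norm is computed on only a random subset of pixels.

```python
import numpy as np

def associative_recall(g, operators, n_iters=10, sample_frac=1.0, rng=None):
    """Two-stage recall sketch.

    Stage 1: the "competition layer" picks the most invariant operator,
             i.e. the F_i minimizing ||g - F_i(g)|| (optionally on a
             random pixel subset, as in the 10%-of-pixels experiment).
    Stage 2: iterate the winning operator a few times so the image
             converges toward that operator's unique attractor.
    """
    rng = rng or np.random.default_rng(0)
    # Random pixel mask for the invariance norm (all pixels if sample_frac=1.0).
    mask = rng.random(g.shape) < sample_frac
    # Stage 1: argmin_i ||g - F_i(g)|| over the sampled pixels.
    scores = [np.linalg.norm((g - F(g))[mask]) for F in operators]
    best = int(np.argmin(scores))
    # Stage 2: a few (e.g. 5-10) global iterations of the winner.
    x = g
    for _ in range(n_iters):
        x = operators[best](x)
    return best, x
```

As a toy stand-in for the fractal operators, a contraction toward a stored image fi, such as `lambda x: 0.5 * x + 0.5 * fi`, has fi as its unique attractor, so iterating the selected operator pulls a noisy input back to the stored image.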
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Władysław SKARBEK, Andrzej CICHOCKI, "Image Associative Memory by Recurrent Neural Subnetworks" in IEICE TRANSACTIONS on Fundamentals,
vol. E79-A, no. 10, pp. 1638-1646, October 1996, doi: .
Abstract: Gray-scale images are represented by recurrent neural subnetworks which, together with a competition layer, create an associative memory. A single recurrent subnetwork Ni implements a stochastic nonlinear fractal operator Fi, constructed for the given image fi. We show that under realistic assumptions Fi has a unique attractor which is located in the vicinity of the original image. Therefore one subnetwork represents one original image. The associative recall is implemented in two stages. First, the competition layer finds the most invariant subnetwork for the given noisy input image g. Next, the selected recurrent subnetwork produces, in a few (5-10) global iterations, a high-quality approximation of the original image. The degree of invariance of the subnetwork Ni on the input g is measured by the norm ||g-Fi(g)||. We have experimentally verified that associative recall for images of natural scenes with pixel values in [0, 255] is successful even when Gaussian noise has a standard deviation σ as large as 500. Moreover, the norm computed on only 10% of pixels, chosen randomly from the images, still successfully recalls a close approximation of the original image. Compared to the Amari-Hopfield associative memory, our solution has no spurious states, is less sensitive to noise, and its network complexity is significantly lower. However, for each new stored image a new subnetwork must be added.
URL: https://globals.ieice.org/en_transactions/fundamentals/10.1587/e79-a_10_1638/_p
@ARTICLE{e79-a_10_1638,
author={Władysław SKARBEK and Andrzej CICHOCKI},
journal={IEICE TRANSACTIONS on Fundamentals},
title={Image Associative Memory by Recurrent Neural Subnetworks},
year={1996},
volume={E79-A},
number={10},
pages={1638-1646},
abstract={Gray-scale images are represented by recurrent neural subnetworks which, together with a competition layer, create an associative memory. A single recurrent subnetwork Ni implements a stochastic nonlinear fractal operator Fi, constructed for the given image fi. We show that under realistic assumptions Fi has a unique attractor which is located in the vicinity of the original image. Therefore one subnetwork represents one original image. The associative recall is implemented in two stages. First, the competition layer finds the most invariant subnetwork for the given noisy input image g. Next, the selected recurrent subnetwork produces, in a few (5-10) global iterations, a high-quality approximation of the original image. The degree of invariance of the subnetwork Ni on the input g is measured by a norm ||g-Fi(g)||. We have experimentally verified that associative recall for images of natural scenes with pixel values in [0, 255] is successful even when Gaussian noise has a standard deviation σ as large as 500. Moreover, the norm computed on only 10% of pixels, chosen randomly from the images, still successfully recalls a close approximation of the original image. Compared to the Amari-Hopfield associative memory, our solution has no spurious states, is less sensitive to noise, and its network complexity is significantly lower. However, for each new stored image a new subnetwork must be added.},
keywords={},
doi={},
ISSN={},
month={October},}
TY - JOUR
TI - Image Associative Memory by Recurrent Neural Subnetworks
T2 - IEICE TRANSACTIONS on Fundamentals
SP - 1638
EP - 1646
AU - Władysław SKARBEK
AU - Andrzej CICHOCKI
PY - 1996
DO -
JO - IEICE TRANSACTIONS on Fundamentals
SN -
VL - E79-A
IS - 10
JA - IEICE TRANSACTIONS on Fundamentals
Y1 - October 1996
AB - Gray-scale images are represented by recurrent neural subnetworks which, together with a competition layer, create an associative memory. A single recurrent subnetwork Ni implements a stochastic nonlinear fractal operator Fi, constructed for the given image fi. We show that under realistic assumptions Fi has a unique attractor which is located in the vicinity of the original image. Therefore one subnetwork represents one original image. The associative recall is implemented in two stages. First, the competition layer finds the most invariant subnetwork for the given noisy input image g. Next, the selected recurrent subnetwork produces, in a few (5-10) global iterations, a high-quality approximation of the original image. The degree of invariance of the subnetwork Ni on the input g is measured by a norm ||g-Fi(g)||. We have experimentally verified that associative recall for images of natural scenes with pixel values in [0, 255] is successful even when Gaussian noise has a standard deviation σ as large as 500. Moreover, the norm computed on only 10% of pixels, chosen randomly from the images, still successfully recalls a close approximation of the original image. Compared to the Amari-Hopfield associative memory, our solution has no spurious states, is less sensitive to noise, and its network complexity is significantly lower. However, for each new stored image a new subnetwork must be added.
ER -