This letter studies learning in binary neural networks and its relation to logical synthesis. The network uses the signum activation function and can approximate a desired Boolean function if its parameters are selected suitably. In a certain parameter subspace, the network is equivalent to the disjoint canonical form of the Boolean function. Outside that subspace, the network can have a simpler structure than the canonical form, where simplicity is measured by the number of hidden neurons. To realize effective parameter setting, we present a learning algorithm based on the genetic algorithm. The algorithm uses the teacher signals as the initial kernel and tolerates a certain level of learning error. Basic numerical experiments confirm the efficiency of the algorithm.
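To illustrate the class of networks the abstract describes, the following is a minimal sketch (not taken from the paper) of a two-layer network with signum activations on bipolar inputs. The hand-chosen weights, which are purely illustrative assumptions, realize XOR with two hidden neurons, one per true minterm, mirroring how such a network can reproduce a disjoint canonical form:

```python
import numpy as np

def sgn(u):
    # signum activation: +1 for u >= 0, -1 otherwise (a common convention)
    return np.where(u >= 0, 1, -1)

def bnn(x, W1, b1, w2, b2):
    # two-layer binary neural network: signum hidden layer, signum output
    h = sgn(W1 @ x + b1)
    return sgn(w2 @ h + b2)

# Illustrative parameters (chosen by hand, not from the paper) realizing
# XOR on bipolar inputs {-1, +1}: each hidden neuron detects one minterm.
W1 = np.array([[1.0, -1.0],    # fires for (x1, x2) = (+1, -1)
               [-1.0, 1.0]])   # fires for (x1, x2) = (-1, +1)
b1 = np.array([-1.0, -1.0])
w2 = np.array([1.0, 1.0])      # output neuron ORs the hidden outputs
b2 = 1.0

for x1 in (-1, 1):
    for x2 in (-1, 1):
        y = bnn(np.array([x1, x2], dtype=float), W1, b1, w2, b2)
        print(x1, x2, '->', int(y))
```

A genetic algorithm, as in the letter, would search over such weight matrices, seeded from the teacher signals and accepting a tolerated level of learning error.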
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Yuta NAKAYAMA, Ryo ITO, Toshimichi SAITO, "A Simple Class of Binary Neural Networks and Logical Synthesis" in IEICE TRANSACTIONS on Fundamentals, vol. E94-A, no. 9, pp. 1856-1859, September 2011, doi: 10.1587/transfun.E94.A.1856.
URL: https://globals.ieice.org/en_transactions/fundamentals/10.1587/transfun.E94.A.1856/_p
@ARTICLE{e94-a_9_1856,
author={Yuta NAKAYAMA and Ryo ITO and Toshimichi SAITO},
journal={IEICE TRANSACTIONS on Fundamentals},
title={A Simple Class of Binary Neural Networks and Logical Synthesis},
year={2011},
volume={E94-A},
number={9},
pages={1856--1859},
doi={10.1587/transfun.E94.A.1856},
ISSN={1745-1337},
month={September}
}
TY - JOUR
TI - A Simple Class of Binary Neural Networks and Logical Synthesis
T2 - IEICE TRANSACTIONS on Fundamentals
SP - 1856
EP - 1859
AU - Yuta NAKAYAMA
AU - Ryo ITO
AU - Toshimichi SAITO
PY - 2011
DO - 10.1587/transfun.E94.A.1856
JO - IEICE TRANSACTIONS on Fundamentals
SN - 1745-1337
VL - E94-A
IS - 9
JA - IEICE TRANSACTIONS on Fundamentals
Y1 - 2011/09//
ER -