Hiroki YAMAOKA Toshimichi SAITO
A digital map is a simple dynamical system that is related to various digital dynamical systems including cellular automata, dynamic binary neural networks, and digital spiking neurons. Depending on parameters and the initial condition, the map can exhibit various periodic orbits and the transient phenomena leading to them. In order to analyze the dynamics, we present two simple feature quantities. The first characterizes the plentifulness of the periodic phenomena, and the second characterizes the deviation of the transient phenomena. Using the two feature quantities, we construct the steady-versus-transient plot, which is useful for visualizing and characterizing various digital dynamical systems. As a first step, we demonstrate analysis results for an example digital map based on analog bifurcating neuron models.
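As an illustrative sketch only: a digital map is a deterministic function on a finite state set, so every trajectory eventually falls onto a periodic orbit. The two feature quantities below (the fraction of periodic points for plentifulness, the mean transient length for deviation) are hypothetical stand-ins for the paper's definitions, which are not given in this abstract.

```python
import random

# Hypothetical digital map on the finite state set {0, ..., N-1};
# any deterministic function on this set defines a digital dynamical system.
N = 64
random.seed(1)
f = [random.randrange(N) for _ in range(N)]  # a random digital map (illustrative)

def orbit_analysis(f):
    """Classify every state as periodic or transient, and measure
    the transient length (steps until a periodic point is reached)."""
    n = len(f)
    # A state is periodic iff it lies on a cycle of f.
    on_cycle = [False] * n
    for x0 in range(n):
        seen = {}
        x = x0
        while x not in seen:
            seen[x] = len(seen)
            x = f[x]
        # x is the first repeated state; everything visited from
        # its first occurrence onward lies on a cycle.
        for state, order in seen.items():
            if order >= seen[x]:
                on_cycle[state] = True
    # Transient length for each initial state.
    steps = []
    for x0 in range(n):
        x, k = x0, 0
        while not on_cycle[x]:
            x = f[x]
            k += 1
        steps.append(k)
    alpha = sum(on_cycle) / n   # fraction of periodic points ("plentifulness")
    beta = sum(steps) / n       # mean transient length ("deviation")
    return alpha, beta
```

Plotting `alpha` against `beta` over a parameter sweep would give a steady-versus-transient style scatter in the spirit of the abstract.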
Jungo MORIYASU Toshimichi SAITO
This paper studies a cascade system of dynamic binary neural networks. The system is characterized by the signum activation function, ternary connection parameters, and integer threshold parameters. As a fundamental learning problem, we consider the storage and stabilization of one desired binary periodic orbit that corresponds to control signals of switching circuits. For the storage, we present a simple method based on correlation learning. For the stabilization, we present a sparsification method based on the mutation operation of the genetic algorithm. Using the Gray-code-based return map, the storage and stability can be investigated. Numerical experiments confirm the effectiveness of the learning method.
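A minimal sketch of correlation-based storage of one binary periodic orbit in a network of the form x(t+1) = sgn(Wx(t) - T), with signals in {-1, +1}, ternary W, and integer T. The orbit (cyclic shifts of a seed pattern), the sign-based ternarization, and the zero thresholds are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def sgn(v):
    # signum activation: +1 for v >= 0, -1 otherwise
    return np.where(v >= 0, 1, -1)

# Desired binary periodic orbit: cyclic shifts of a seed pattern (period 6).
seed = np.array([1, 1, 1, -1, -1, -1])
orbit = np.array([np.roll(seed, t) for t in range(6)])  # orbit[t] = x(t)

# Correlation learning: accumulate outer products x(t+1) x(t)^T, then
# ternarize by taking the sign (an assumed ternarization rule).
C = sum(np.outer(orbit[(t + 1) % 6], orbit[t]) for t in range(6))
W = np.sign(C).astype(int)           # ternary connection parameters
T = np.zeros(len(seed), dtype=int)   # integer thresholds (zero here)

# Verify the orbit is stored: one network step maps x(t) to x(t+1).
ok = all((sgn(W @ orbit[t] - T) == orbit[(t + 1) % 6]).all() for t in range(6))
```

For this particular seed the correlation rule recovers the shift dynamics exactly; the abstract's sparsification step (zeroing connections via GA-style mutation) would then prune `W` while preserving and stabilizing the stored orbit.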
Yuta NAKAYAMA Ryo ITO Toshimichi SAITO
This letter studies learning of the binary neural network and its relation to logic synthesis. The network has the signum activation function and can approximate a desired Boolean function if the parameters are selected suitably. In a certain parameter subspace, the network is equivalent to the disjunctive canonical form of the Boolean function. Outside of the subspace, the network can have a simpler structure than the canonical form, where simplicity is measured by the number of hidden neurons. In order to realize effective parameter setting, we present a learning algorithm based on the genetic algorithm. The algorithm uses the teacher signals as the initial kernel and tolerates a level of learning error. Basic numerical experiments confirm the efficiency of the algorithm.
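A sketch of the canonical-form-like parameter setting, under illustrative assumptions: a three-layer network y = sgn(c·h - t0), h_j = sgn(w_j·x - t_j) with signals in {-1, +1}, where each hidden neuron fires for exactly one true input vector of the target Boolean function and the output neuron ORs the hidden layer. All parameter choices below are constructed for this example, not taken from the letter.

```python
import numpy as np

def sgn(v):
    # signum activation: +1 for v >= 0, -1 otherwise
    return np.where(v >= 0, 1, -1)

def bnn(x, W, T, c, t0):
    h = sgn(W @ x - T)      # hidden layer
    return sgn(c @ h - t0)  # output neuron

# XOR on {-1,+1}^2: true exactly for (+1,-1) and (-1,+1).
# One hidden neuron per true vector: w_j equals that vector and t_j = n-1 = 1,
# so h_j = +1 only when x = w_j (the dot product reaches its maximum n = 2).
W = np.array([[1, -1],
              [-1, 1]])
T = np.array([1, 1])
c = np.array([1, 1])
t0 = -1   # OR over 2 hidden neurons: output +1 iff at least one h_j = +1
```

With H hidden neurons, `t0 = 1 - H` implements OR; networks outside this one-neuron-per-true-vector subspace can realize the same function with fewer hidden neurons, which is the simplification the letter targets.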
Masanori SHIMADA Toshimichi SAITO
This paper presents a flexible learning algorithm for the binary neural network that can realize a desired Boolean function. The algorithm determines the hidden layer parameters using a genetic algorithm. It can reduce the number of hidden neurons and can suppress parameter dispersion. These advantages are verified by basic numerical experiments.
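A hypothetical sketch of GA-based selection of hidden-layer parameters (ternary weights, integer thresholds) for a small binary network with a fixed OR-like output neuron; fitness counts truth-table matches and elitism keeps the best individuals. The target function, operators, and all constants are illustrative assumptions, not the paper's algorithm.

```python
import random
import numpy as np

random.seed(0)
np.random.seed(0)

X = np.array([[a, b] for a in (-1, 1) for b in (-1, 1)])  # inputs in {-1,+1}^2
Y = np.array([1 if a != b else -1 for a, b in X])          # target: XOR
H = 2                                                      # hidden neurons

def sgn(v):
    return np.where(v >= 0, 1, -1)

def fitness(ind):
    W, T = ind
    h = sgn(X @ W.T - T)               # hidden layer for all inputs
    y = sgn(h.sum(axis=1) + (H - 1))   # fixed OR-like output neuron
    return int((y == Y).sum())         # truth-table matches (max 4)

def random_ind():
    return (np.random.randint(-1, 2, size=(H, 2)),  # ternary weights
            np.random.randint(-2, 3, size=H))       # integer thresholds

def mutate(ind):
    W, T = ind[0].copy(), ind[1].copy()
    if random.random() < 0.5:
        W[random.randrange(H), random.randrange(2)] = random.choice([-1, 0, 1])
    else:
        T[random.randrange(H)] = random.randrange(-2, 3)
    return (W, T)

pop = [random_ind() for _ in range(20)]
history = []
for gen in range(50):
    pop.sort(key=fitness, reverse=True)
    history.append(fitness(pop[0]))
    pop = pop[:5] + [mutate(random.choice(pop[:5])) for _ in range(15)]
```

Because the five elites survive each generation unchanged, the best fitness is non-decreasing; penalizing large or dispersed parameters in the fitness would steer the search toward the compact, low-dispersion solutions the paper aims for.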
Atsushi YAMAMOTO Toshimichi SAITO
This paper proposes a simple learning algorithm that can realize any Boolean function using a three-layer binary neural network. The algorithm has two flexible learning functions: 1) moving a "core" for the input separations, and 2) "don't care" settings for the separated inputs. The "don't care" inputs do not affect the subsequent separations. Performing numerical simulations on some typical examples, we have verified that our algorithm requires fewer hidden-layer neurons than conventional ones.