The most obvious architectural solution for high-speed fuzzy inference is to exploit the temporal and spatial parallelism inherent in a fuzzy inference execution. In practice, however, the active rules in each fuzzy inference execution are often only a small fraction of the total rules. In this paper, we present a new architecture that uses fewer hardware resources by discarding non-active rules in an early pipeline stage. Compared with previous work, implementation data show that the proposed architecture achieves very good results in terms of both inference speed and chip area.
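The core idea — that only rules with nonzero firing strength need to reach the later, costlier inference stages — can be illustrated with a minimal software analogy. This is a hypothetical Mamdani-style sketch, not the paper's hardware design: the rule base, membership functions, and defuzzification method here are illustrative assumptions.

```python
# Illustrative sketch (NOT the paper's architecture): in fuzzy inference,
# a rule is "active" only if its antecedent membership is nonzero for the
# current input. Filtering non-active rules out early mirrors the paper's
# idea of discarding them in an early pipeline stage, so the later
# (more expensive) aggregation/defuzzification stage handles fewer rules.

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Hypothetical rule base: (antecedent triangle params, consequent centroid).
RULES = [
    ((0.0, 2.0, 4.0), 1.0),
    ((2.0, 4.0, 6.0), 3.0),
    ((4.0, 6.0, 8.0), 5.0),
    ((6.0, 8.0, 10.0), 7.0),
]

def infer(x):
    # Early stage: compute firing strengths, keep only active rules.
    weighted = [(tri(x, *ant), out) for ant, out in RULES]
    active = [(w, out) for w, out in weighted if w > 0.0]
    # Late stage: weighted-average defuzzification over survivors only.
    num = sum(w * out for w, out in active)
    den = sum(w for w, _ in active)
    return num / den if den else 0.0
```

For an input of 3.0, only the first two rules fire (with equal strength), so the early filter halves the work passed to the defuzzification stage.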
Shih-Hsu HUANG, Jian-Yuan LAI, "A High Speed Fuzzy Inference Processor with Dynamic Analysis and Scheduling Capabilities" in IEICE TRANSACTIONS on Information,
vol. E88-D, no. 10, pp. 2410-2416, October 2005, doi: 10.1093/ietisy/e88-d.10.2410.
URL: https://globals.ieice.org/en_transactions/information/10.1093/ietisy/e88-d.10.2410/_p
@ARTICLE{e88-d_10_2410,
author={Shih-Hsu HUANG and Jian-Yuan LAI},
journal={IEICE TRANSACTIONS on Information},
title={A High Speed Fuzzy Inference Processor with Dynamic Analysis and Scheduling Capabilities},
year={2005},
volume={E88-D},
number={10},
pages={2410-2416},
abstract={The most obvious architectural solution for high-speed fuzzy inference is to exploit temporal parallelism and spatial parallelism inherited in a fuzzy inference execution. However, in fact, the active rules in each fuzzy inference execution are often only a small part of the total rules. In this paper, we present a new architecture that uses less hardware resources by discarding non-active rules in the earlier pipeline stage. Compared with previous work, implementation data show that the proposed architecture achieves very good results in terms of the inference speed and the chip area.},
doi={10.1093/ietisy/e88-d.10.2410},
month={October}
}
TY - JOUR
TI - A High Speed Fuzzy Inference Processor with Dynamic Analysis and Scheduling Capabilities
T2 - IEICE TRANSACTIONS on Information
SP - 2410
EP - 2416
AU - Shih-Hsu HUANG
AU - Jian-Yuan LAI
PY - 2005
DO - 10.1093/ietisy/e88-d.10.2410
JO - IEICE TRANSACTIONS on Information
VL - E88-D
IS - 10
JA - IEICE TRANSACTIONS on Information
Y1 - October 2005
AB - The most obvious architectural solution for high-speed fuzzy inference is to exploit temporal parallelism and spatial parallelism inherited in a fuzzy inference execution. However, in fact, the active rules in each fuzzy inference execution are often only a small part of the total rules. In this paper, we present a new architecture that uses less hardware resources by discarding non-active rules in the earlier pipeline stage. Compared with previous work, implementation data show that the proposed architecture achieves very good results in terms of the inference speed and the chip area.
ER -