This paper discusses the task of human action detection, which requires not only classifying the type of the action of interest but also locating the action in space and time within a video. The novelty of this paper lies in two significant aspects. One is a new graph-based representation of the search space in a video. The other is a novel sub-volume search method based on Minimum Cycle detection. The proposed method has low computational complexity while maintaining high action detection accuracy. It is evaluated on two challenging datasets captured against cluttered backgrounds, and it outperforms other state-of-the-art methods in most situations in terms of both Precision-Recall values and running speed.
Ping GUO, Zhenjiang MIAO, Xiao-Ping ZHANG, Zhe WANG, "A Fast Sub-Volume Search Method for Human Action Detection" in IEICE TRANSACTIONS on Information,
vol. E95-D, no. 1, pp. 285-288, January 2012, doi: 10.1587/transinf.E95.D.285.
Abstract: This paper discusses the task of human action detection, which requires not only classifying the type of the action of interest but also locating the action in space and time within a video. The novelty of this paper lies in two significant aspects. One is a new graph-based representation of the search space in a video. The other is a novel sub-volume search method based on Minimum Cycle detection. The proposed method has low computational complexity while maintaining high action detection accuracy. It is evaluated on two challenging datasets captured against cluttered backgrounds, and it outperforms other state-of-the-art methods in most situations in terms of both Precision-Recall values and running speed.
URL: https://globals.ieice.org/en_transactions/information/10.1587/transinf.E95.D.285/_p
@ARTICLE{e95-d_1_285,
author={Ping GUO and Zhenjiang MIAO and Xiao-Ping ZHANG and Zhe WANG},
journal={IEICE TRANSACTIONS on Information},
title={A Fast Sub-Volume Search Method for Human Action Detection},
year={2012},
volume={E95-D},
number={1},
pages={285-288},
abstract={This paper discusses the task of human action detection, which requires not only classifying the type of the action of interest but also locating the action in space and time within a video. The novelty of this paper lies in two significant aspects. One is a new graph-based representation of the search space in a video. The other is a novel sub-volume search method based on Minimum Cycle detection. The proposed method has low computational complexity while maintaining high action detection accuracy. It is evaluated on two challenging datasets captured against cluttered backgrounds, and it outperforms other state-of-the-art methods in most situations in terms of both Precision-Recall values and running speed.},
keywords={},
doi={10.1587/transinf.E95.D.285},
ISSN={1745-1361},
month={January},
}
TY - JOUR
TI - A Fast Sub-Volume Search Method for Human Action Detection
T2 - IEICE TRANSACTIONS on Information
SP - 285
EP - 288
AU - Ping GUO
AU - Zhenjiang MIAO
AU - Xiao-Ping ZHANG
AU - Zhe WANG
PY - 2012
DO - 10.1587/transinf.E95.D.285
JO - IEICE TRANSACTIONS on Information
SN - 1745-1361
VL - E95-D
IS - 1
JA - IEICE TRANSACTIONS on Information
Y1 - January 2012
AB - This paper discusses the task of human action detection, which requires not only classifying the type of the action of interest but also locating the action in space and time within a video. The novelty of this paper lies in two significant aspects. One is a new graph-based representation of the search space in a video. The other is a novel sub-volume search method based on Minimum Cycle detection. The proposed method has low computational complexity while maintaining high action detection accuracy. It is evaluated on two challenging datasets captured against cluttered backgrounds, and it outperforms other state-of-the-art methods in most situations in terms of both Precision-Recall values and running speed.
ER -