We present an effective system for integrating generative zero-shot classification modules into a YOLO-like dense detector to detect novel objects. Most two-stage novel object detection methods work by refining the classification output branch, an approach that cannot be applied to a dense detector. Our system uses two paths to inject knowledge of novel objects into a dense detector. The first injects class confidences for novel classes from a classifier trained on data synthesized by a dual-step generator; this generator learns a mapping function between two feature spaces, resulting in better classification performance. The second re-trains the detector head with feature maps synthesized at different intensity levels, which significantly increases the predicted objectness for novel objects, a major challenge for a dense detector. We also introduce a stop-and-reload mechanism during re-training that optimizes across head layers to better learn the synthesized features. Our method relaxes the constraint that the previous method places on the detector head architecture and markedly improves performance on the MSCOCO dataset.
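A minimal sketch of the two injection paths described above, assuming a PyTorch-style detector head; the class and function names, tensor shapes, and the head.loss interface are hypothetical illustrations, not the authors' implementation:

    # Hypothetical sketch: fusing novel-class scores from a feature-space
    # classifier into a YOLO-like dense detector head.
    import torch
    import torch.nn as nn

    class NovelClassInjector(nn.Module):
        """Path 1: classify dense features with a classifier trained on
        features synthesized by a (dual-step) generator, then append the
        resulting novel-class confidences to the base-class scores."""
        def __init__(self, feat_dim: int, num_base: int, num_novel: int):
            super().__init__()
            # classifier assumed to be trained on generator-synthesized features
            self.novel_classifier = nn.Linear(feat_dim, num_novel)
            self.num_base = num_base

        def forward(self, dense_feats: torch.Tensor, base_scores: torch.Tensor) -> torch.Tensor:
            # dense_feats: (N, feat_dim) features at each dense prediction location
            # base_scores: (N, num_base) class confidences from the original head
            novel_scores = self.novel_classifier(dense_feats).sigmoid()
            return torch.cat([base_scores, novel_scores], dim=-1)  # (N, num_base + num_novel)

    def retrain_head_on_synth(head, synth_loader, epochs: int = 5, reload_every: int = 2):
        """Path 2: re-train the detector head on synthesized feature maps and
        periodically stop and reload the best state ('stop-and-reload')."""
        opt = torch.optim.SGD(head.parameters(), lr=1e-3, momentum=0.9)
        best_state, best_loss = None, float("inf")
        for epoch in range(epochs):
            for feats, targets in synth_loader:   # feature maps synthesized at varied intensities
                loss = head.loss(feats, targets)  # objectness + classification loss (assumed API)
                opt.zero_grad()
                loss.backward()
                opt.step()
            if loss.item() < best_loss:           # track best state by last-batch loss
                best_loss = loss.item()
                best_state = {k: v.clone() for k, v in head.state_dict().items()}
            if (epoch + 1) % reload_every == 0 and best_state is not None:
                head.load_state_dict(best_state)  # stop-and-reload before continuing
        return head

In this sketch the stop-and-reload step simply restores the best-loss state dict between training phases; the mechanism in the paper may cycle over head layers differently.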
KuanChao CHU Satoshi YAMAZAKI Hideki NAKAYAMA
This work focuses on enhancing the training dataset with informative relational triplets for Scene Graph Generation (SGG). Due to the lack of effective supervision, current SGG models perform poorly on informative relational triplets that have inadequate training samples. We therefore propose two novel training dataset enhancement modules: Feature Space Triplet Augmentation (FSTA) and Soft Transfer. FSTA leverages a feature generator trained to generate representations of the objects in relational triplets, and its biased-prediction-based sampling efficiently augments artificial triplets while focusing on the challenging ones. In addition, we introduce Soft Transfer, which assigns soft predicate labels to general relational triplets to provide more effective supervision for informative predicate classes. Experimental results show that integrating FSTA and Soft Transfer achieves high levels of both Recall and mean Recall on the Visual Genome dataset, and the mean of Recall and mean Recall is the highest among all existing model-agnostic methods.
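A minimal sketch of the Soft Transfer idea, assuming soft labels are built per triplet from an annotated general predicate; the transfer map, weights, and function names below are illustrative assumptions, not the paper's actual formulation:

    # Hypothetical sketch of Soft Transfer: replace the one-hot label of a
    # general predicate with a soft distribution that also puts mass on
    # related informative predicates, so those classes get extra supervision.
    import torch

    def soft_transfer_label(predicate_idx: int,
                            num_predicates: int,
                            transfer_map: dict,
                            transfer_weight: float = 0.3) -> torch.Tensor:
        """Return a soft predicate label vector for one relational triplet."""
        label = torch.zeros(num_predicates)
        informative = transfer_map.get(predicate_idx, [])
        if informative:
            # keep most of the mass on the annotated general predicate,
            # spread the rest over its informative counterparts
            label[predicate_idx] = 1.0 - transfer_weight
            label[informative] = transfer_weight / len(informative)
        else:
            label[predicate_idx] = 1.0  # informative predicates keep hard labels
        return label

    # usage: a general predicate (index 0, e.g. "on") transfers mass to two
    # hypothetical informative predicates (indices 7 and 12)
    soft = soft_transfer_label(0, num_predicates=50, transfer_map={0: [7, 12]})
    # soft[0] == 0.7, soft[7] == soft[12] == 0.15; train with a soft cross-entropy loss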