Editorial Issue 33.1
2022 · Open Access
DOI: https://doi.org/10.1002/cav.2042
This issue contains five papers. In the first paper, Yuling Yan and Lijun Zhang, from Suzhou University, and Minye Chen, from Shanghai University of Science and Technology, all in China, propose a virtual training system for aircraft maintenance based on gesture-recognition interaction. A Leap Motion sensor is used to build a hybrid machine-learning gesture recognition model that delivers a natural human–computer interaction experience. In the recognition model, the initial weight matrix and the number of hidden-layer nodes of the back-propagation neural network (BPNN) are jointly optimized by a particle swarm optimization (PSO) algorithm with a self-adaptive inertia weight (a minimal sketch of this optimization appears after these summaries). The optimized model achieves a recognition rate of 81.26% on the dynamic gesture database constructed in the paper, higher than the other algorithms evaluated, and a preliminary usability evaluation in university classrooms shows that the teaching system provides a good interactive experience.

In the second paper, Junsong Zhang, Shaoqiang Zhu, Kunxiang Liu, and Xiaoyu Liu, from the National Engineering Research Center for E-Learning, in Wuhan, China, propose a novel adversarial architecture for sketch colorization that supports multiple modes: scribble-based, automatic, and exemplar-based colorization. The framework has two stages, an imitating stage and a shading stage (a minimal skeleton of the two stages is sketched below). In the imitating stage, to address the lack of texture in a sketch, the authors train a grayscale generation network to map the sparse input sketch to a grayscale image carrying texture, intensity, and boundary information. In the shading stage, the model accurately colorizes the objects in the grayscale image produced by the first stage and generates high-quality colorized images. Trained on the authors' database, the method generates vivid colorized images and outperforms previous methods as measured by the FID metric.

In the third paper, Assia Messaci, Nadia Zenati, and Mahmoud Belhocine, from CDTA, Algiers, Algeria, and Samir Otmane, from Université Evry, Université Paris-Saclay, France, propose Zoom-fwd, an efficient 3D interaction technique. The technique uses gesture recognition for 3D interaction tasks such as selection and manipulation, and it allows efficient interaction with distant and occluded objects while providing precise selection even in crowded environments. A user study was conducted to determine whether the proposed technique is well suited to these interaction tasks; the results show that Zoom-fwd provides effective interaction with distant and occluded objects.
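As a concrete illustration of the first paper's optimizer, here is a minimal Python sketch of PSO with a self-adaptive (here, linearly decaying) inertia weight tuning the initial weights of a tiny one-hidden-layer network. The fitness function, the decay rule, and the fixed hidden-layer size H are illustrative placeholders, not the authors' implementation, which also searches the number of hidden nodes.

```python
import numpy as np

def adaptive_inertia(t, max_iter, w_max=0.9, w_min=0.4):
    # One common self-adaptive scheme: inertia decays linearly from
    # w_max to w_min over the run (the paper's exact rule may differ).
    return w_max - (w_max - w_min) * t / max_iter

def pso(fitness, dim, n_particles=30, max_iter=100, c1=2.0, c2=2.0):
    pos = np.random.uniform(-1, 1, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_val = np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    for t in range(max_iter):
        w = adaptive_inertia(t, max_iter)
        r1, r2 = np.random.rand(2, n_particles, 1)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([fitness(p) for p in pos])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest

# Toy fitness: MSE of a one-hidden-layer network whose initial weights
# come from the particle. H is fixed here for simplicity; the paper
# searches the hidden-layer size jointly with the weights.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
y = (X.sum(axis=1) > 0).astype(float)
H = 6

def fitness(w):
    W1 = w[:4 * H].reshape(4, H)
    W2 = w[4 * H:]
    hidden = np.tanh(X @ W1)
    pred = 1.0 / (1.0 + np.exp(-(hidden @ W2)))
    return float(np.mean((pred - y) ** 2))

best = pso(fitness, dim=4 * H + H)
print("best fitness:", fitness(best))
```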
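For the second paper, the following is a minimal, hypothetical skeleton of the two-stage pipeline: an imitating-stage generator maps a sparse sketch to a textured grayscale map, and a shading-stage generator colorizes it. The layer sizes, the class names GrayscaleGenerator and ShadingGenerator, and the absence of adversarial training and color hints are all simplifications invented for illustration.

```python
import torch
import torch.nn as nn

class GrayscaleGenerator(nn.Module):
    """Imitating stage: sparse sketch (1 channel) -> textured grayscale map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
        )
    def forward(self, sketch):
        return self.net(sketch)

class ShadingGenerator(nn.Module):
    """Shading stage: grayscale map -> RGB colorized image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )
    def forward(self, gray):
        return self.net(gray)

sketch = torch.rand(1, 1, 64, 64)     # toy input sketch
gray = GrayscaleGenerator()(sketch)   # stage 1: add texture/intensity/boundaries
color = ShadingGenerator()(gray)      # stage 2: colorize the grayscale map
print(color.shape)                    # torch.Size([1, 3, 64, 64])
```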
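For the third paper, here is a generic, assumption-laden sketch of zoom-to-target selection: a pointing ray picks the nearest intersected object, and the viewpoint steps toward it. This illustrates the general idea only and is not the authors' actual Zoom-fwd technique; the sphere proxies and the step fraction are hypothetical.

```python
import numpy as np

def ray_pick(origin, direction, centers, radii):
    """Return (index, distance) of the closest sphere hit by the ray,
    or (None, inf) if nothing is hit."""
    direction = direction / np.linalg.norm(direction)
    best, best_t = None, np.inf
    for i, (c, r) in enumerate(zip(centers, radii)):
        oc = c - origin
        t = oc @ direction                       # closest approach along the ray
        if t < 0:
            continue                             # object is behind the viewer
        miss = np.linalg.norm(oc - t * direction)
        if miss <= r and t < best_t:
            best, best_t = i, t
    return best, best_t

def zoom_forward(camera, target, step=0.5):
    """Move the camera a fraction of the way toward the selected object."""
    return camera + step * (target - camera)

camera = np.zeros(3)
gaze = np.array([0.0, 0.0, 1.0])
centers = [np.array([0.1, 0.0, 5.0]), np.array([2.0, 0.0, 3.0])]
radii = [0.5, 0.5]
idx, dist = ray_pick(camera, gaze, centers, radii)
if idx is not None:
    camera = zoom_forward(camera, centers[idx])
    print(f"selected object {idx} at distance {dist:.2f}; camera -> {camera}")
```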
The fourth paper, by Wen Zhou, Wenying Jiang, Biao Jie, and Weixin Bian, from Anhui Normal University, in Wuhu, China, presents a multiagent evacuation framework for complex virtual fire scenarios, which can simulate the procedure of multiagent evacuation and approximate the goal of fire drills in a least-cost manner. Specifically, a multihierarchy agent group model is proposed; that is, the evacuation of multiple agents is separated into leader-follower and freedom modes (a toy version of these modes is sketched below). Additionally, several complex actions that individual humans perform in actual fire drills are fully considered, and a multiaction agent schema is presented to characterize their real effects. In addition, generative adversarial imitation learning (GAIL) is adopted to obtain the evacuation path of the leader agent by training over numerous learning epochs. Extensive experiments show that the proposed method is feasible and depicts the procedure of multiagent evacuation in complex fire emergency scenarios realistically and reasonably.

The last paper, by Jingjing Zhang, Jingsheng Lei, Shengying Yang, and Xinqi Yang, from Zhejiang University of Science and Technology, in Hangzhou, China, proposes SIL-Net, a network designed to discover semantic differences between two fine-grained categories via pairwise comparison. Specifically, SIL-Net first collects contrastive information by learning the mutual features of an input image pair and then compares them with the individual features to generate the corresponding semantic features. These features capture semantic differences through contextual comparison, which gives SIL-Net the ability to distinguish between two confusable images via pairwise interaction. After training, SIL-Net adaptively learns feature priorities under the supervision of a margin ranking loss (illustrated below) and converges quickly. SIL-Net performs well on two public vehicle benchmarks (Stanford Cars and CompCars), demonstrating its suitability for fine-grained vehicle recognition.
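For the fourth paper, here is a toy sketch of the leader-follower and freedom modes. All dynamics are hypothetical: in the paper the leader's route comes from a GAIL-trained policy rather than the straight-line rule used here, and real agents would also avoid obstacles and each other.

```python
import numpy as np

def step(leader, followers, free_agents, exit_pos, speed=0.1):
    """One evacuation tick: the leader heads for the exit (stand-in for a
    GAIL policy), followers track the leader, freedom-mode agents head
    for the exit independently."""
    def toward(p, target):
        d = target - p
        n = np.linalg.norm(d)
        return p if n < 1e-9 else p + speed * d / n
    leader = toward(leader, exit_pos)
    followers = np.array([toward(f, leader) for f in followers])
    free_agents = np.array([toward(a, exit_pos) for a in free_agents])
    return leader, followers, free_agents

rng = np.random.default_rng(1)
leader = np.array([0.0, 0.0])
followers = rng.uniform(-1, 1, (5, 2))   # leader-follower mode
free_agents = rng.uniform(-1, 1, (3, 2)) # freedom mode
exit_pos = np.array([10.0, 0.0])
for _ in range(100):
    leader, followers, free_agents = step(leader, followers, free_agents, exit_pos)
print("leader position after 100 ticks:", leader.round(2))
```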
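For the last paper, a minimal illustration of the margin ranking loss that supervises pairwise comparison, using PyTorch's built-in MarginRankingLoss on toy scores; SIL-Net's actual feature extraction and scoring are of course far richer, and the margin value here is arbitrary.

```python
import torch
import torch.nn as nn

# Toy scores for an image pair: the "correct" branch should outrank the
# confusable one by at least the margin.
scores_pos = torch.tensor([2.1, 0.3, 1.5])
scores_neg = torch.tensor([1.0, 0.8, 0.2])
target = torch.ones_like(scores_pos)  # +1: first input should rank higher

loss_fn = nn.MarginRankingLoss(margin=0.5)
loss = loss_fn(scores_pos, scores_neg, target)
print(loss)  # mean of max(0, -(pos - neg) + 0.5) over the batch
```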