Editorial: Issue 32.6
2021 · Open Access · DOI: https://doi.org/10.1002/cav.2037
This issue contains five papers.

In the first paper, Kun Qian, from King's College London, UK, Meili Wang, from Northwest Agriculture and Forestry University in Shaanxi, China, and Yaqing Cui, from Bournemouth University in Poole, UK, propose a simulation-ready model generation pipeline that converts a non-manifold polygonal surface mesh into a degeneracy-free surface while preserving the original model's surface parameterization attributes. Their pipeline has two stages. The first stage is a voxelization- and remeshing-based process that keeps the shape of the original 3D surface model while eliminating non-manifold geometry. The second stage is a cutting-based surface mesh parameterization transfer algorithm that transfers the original surface parameterization to the simulation-ready model. A detailed comparison with existing pipelines shows that the proposed pipeline preserves surface parameterization and is better suited to improving the efficiency of virtual surgery production.

In the second paper, Muhammad Usman and Petros Faloutsos, from York University in Toronto, Canada, Brandon Haworth, from the University of Victoria, Canada, and Mubbasir Kapadia, from Rutgers University, Piscataway, USA, propose a cross-browser, service-based simulation analytics platform for analyzing environment layouts with respect to occupancy and activity. The platform allows users to access simulation services by uploading 3D environment models in numerous common formats, devise targeted simulation scenarios, run simulations, and instantly generate crowd-based analytics for their designs. The authors conducted a case study to showcase the cross-domain applicability of their service-based platform, and a user study to evaluate its usability.

In the third paper, Junxuan Bai, Rong Dai, Ju Dai, and Junjun Pan, from Beihang University in Beijing and Peng Cheng Laboratory in Shenzhen, both in China, propose a hybrid feature for emotional classification in dance performances. The hybrid feature is composed of an explicit feature and a deep feature. The explicit feature is computed using Laban movement analysis, which considers the body, effort, shape, and space properties. The deep feature is the latent representation obtained from a 1D convolutional autoencoder. Finally, the authors present a feature fusion network that combines the two into a hybrid feature that is almost linearly separable. Extensive experiments demonstrate that the hybrid feature outperforms the separate features for emotional classification in dance performances.

The fourth paper, by Donya Ghafourzadeh, Sahel Fallahdoust, Cyrus Rahgoshay, André Beauchamp, Adeline Aubame, and Eric Paquette, from École de technologie supérieure, and Tiberiu Popa, from Concordia University, all in Montreal, Canada, presents an approach to constructing realistic 3D facial morphable models (3DMMs) that allow an intuitive facial attribute editing workflow. The authors create a 3DMM by combining local part-based 3DMMs for the eyes, nose, mouth, ears, and facial mask regions. Their local PCA-based approach uses a novel method to select the best eigenvectors from the local 3DMMs so that the combined 3DMM is expressive while still allowing accurate reconstruction. They provide different editing paradigms, all designed from an analysis of the data set.
Their part-based 3DMM is compact yet accurate and, compared with other 3DMM methods, provides a new trade-off between local and global control. The results show that the part-based approach has excellent generative properties and gives the user intuitive local control.

The last paper, by David Antonio Gomez Jauregui, from École Supérieure des Technologies Industrielles Avancées in Bidart, France, Tom Giraud, from the University of Augsburg, Germany, Brice Isableu, from Aix-Marseille University in Aix-en-Provence, France, and Jean-Claude Martin, from LIMSI-CNRS in Orsay, France, proposes a library of motion-captured movements that interviewers are most likely to display. The authors designed a fully automatic interactive virtual agent able to display these movements in response to the bodily movements of the user. Thirty-two participants presented themselves to this virtual agent during a simulated job interview. The authors investigate three hypotheses: (1) a comparison between the performance of participants with human interviewers and their performance with virtual interviewers, (2) a comparison between mirrored and random postural behaviors displayed by a female versus a male virtual interviewer, and (3) the correlation between the participants' performance and their personality traits.