Editorial, Issue 30.6
2019 · Open Access
DOI: https://doi.org/10.1002/cav.1918
This issue contains six papers. The first two are extended versions of invited papers from the SIGGRAPH Asia 2017 Workshop on Data-Driven Animation; the remaining four are regular papers.

In the first paper, Yuhang Huang, Yonghang Yu, and Takashi Kanai, from The University of Tokyo, Japan, propose a novel data-driven method that uses a machine learning scheme to formulate fracture simulation with the boundary element method (BEM) as a regression problem. With this method, the crack-opening displacement (COD) of every correlation node is predicted at the next frame. In their naive prediction, the authors design a feature vector that directly exploits stress intensities and toughness at the current frame, so that their method predicts the COD at the next frame more reliably. There is thus no need to solve the original linear BEM system to calculate displacements, which enables them to propagate crack fronts using the estimated stress intensities.

In the second paper, Jacky C. P. Chan, from Caritas Institute of Higher Education, Hong Kong; Hubert P. H. Shum, from Northumbria University, UK; He Wang, from the University of Leeds, UK; Li Yi, from Yilifilm, Shenzhen, China; Wei Wei, from Xi'an University of Technology, China; and Edmond S. L. Ho, from Northumbria University, UK, explore a common model that represents emotion for synthesizing both body motions and facial expressions. Unlike previous works that encode emotions as discrete motion style descriptors, they propose a continuous control indicator called emotion strength; by controlling it, their data-driven approach synthesizes motions with fine control over emotions. Rather than interpolating motion features to synthesize new motion as in existing work, their method explicitly learns a model mapping low-level motion features to the emotion strength.
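The core idea of learning such a mapping can be illustrated with a minimal, generic sketch. This is not the authors' model; the feature dimensions, data, and linear least-squares fit below are illustrative stand-ins for whatever regressor the paper actually uses.

```python
# Generic illustration (not the paper's model): learn a linear map from
# low-level motion features to a continuous emotion-strength value.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: each row is a low-level motion feature
# vector (e.g., joint velocities); targets are emotion strengths.
X = rng.random((100, 6))
true_w = np.array([0.3, -0.1, 0.5, 0.0, 0.2, -0.4])  # synthetic ground truth
y = X @ true_w + 0.01 * rng.standard_normal(100)      # noisy observations

# Fit weights mapping features -> emotion strength via least squares.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Predict the emotion strength of an unseen feature vector.
x_new = rng.random(6)
strength = float(x_new @ w)
```

Because the fit happens once, up front, evaluating the learned map at run time is a single matrix–vector product, which is the general reason such learned models are cheap to query.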
Because the motion synthesis model is learned in the training stage, the computation time required to synthesize motions at run time is very low.

In the third paper, Bailin Yang, Zhaoyi Jiang, and Jiantao Shangguan, from Zhejiang Gongshang University, Hangzhou, China; Frederick W. B. Li, from Durham University, UK; Chao Song, from Zhejiang Gongshang University, Hangzhou, China; and Yibo Guo and Mingliang Xu, from Zhengzhou University, China, present a novel framework for effective dynamic mesh sequence (DMS) compression and progressive streaming that eliminates spatial and temporal redundancy. To exploit temporal redundancy, they propose a temporal frame-clustering algorithm that organizes DMS frames by their motion trajectory changes and eliminates intracluster redundancy via PCA dimensionality reduction. To eliminate spatial redundancy, they propose an algorithm that transforms mesh vertex trajectory coordinates into a decorrelated trajectory space, generating a new, spatially nonredundant trajectory representation. Finally, they apply a spectral graph wavelet transform (SGWT) with CSPECK encoding to turn the resulting DMS into a multiresolution representation that supports progressive streaming.

In the fourth paper, Enrico Ronchi, David Mayorga, and Ruggiero Lovreglio, from Massey University, Palmerston North, New Zealand; Jonathan Wahlqvist, from Lund University, Sweden; and Daniel Nilsson, from Lund University, Sweden, and the University of Canterbury, Christchurch, New Zealand, compare the results of tunnel evacuation experiments investigating the design of flashing lights on emergency exit portals using two different VR methods (CAVE vs. mobile-powered HMD). The experiments repeated the same case study in a CAVE laboratory and with a low-cost mobile-powered HMD; the CAVE experiment involved 96 participants, whereas the HMD experiment involved 55.
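The intracluster PCA step described for the third paper can be sketched generically. This is a hedged illustration, not the authors' pipeline: the cluster size, vertex count, and low-rank synthetic data below are assumptions chosen only to show how PCA removes redundancy among similar frames.

```python
# Generic sketch: compress a cluster of similar mesh frames with PCA,
# keeping only k principal components shared across the cluster.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical cluster: 20 frames of a 50-vertex mesh (x, y, z),
# flattened to rows; frames vary along only 3 latent motion directions,
# i.e., the cluster is highly redundant.
latent = rng.random((20, 3))
basis = rng.random((3, 150))
frames = latent @ basis            # shape (20, 150), rank ~3

mean = frames.mean(axis=0)
centered = frames - mean

# PCA via SVD; keep k components.
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
k = 3
coeffs = U[:, :k] * S[:k]          # per-frame coefficients (20 x k)
components = Vt[:k]                # shared basis (k x 150)

# Reconstruction: k coefficients per frame replace 150 coordinates,
# with near-zero error because the cluster's rank is ~3.
recon = coeffs @ components + mean
err = np.max(np.abs(recon - frames))
```

The storage win comes from the same place as in any PCA codec: each frame is reduced from its full coordinate vector to a handful of coefficients against a basis stored once per cluster.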
An affordance-based questionnaire was used to interview participants immersed in a VR road tunnel emergency evacuation scenario and to rank different emergency portal designs.

In the fifth paper, Quan Jiang, from the Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, and the University of Chinese Academy of Sciences, Beijing, China; and Xiliang Chen, from the Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, and Taizhou University, China, employ the DualSPHysics multiphase flow model to simulate, with relative ease, the dynamics of landslide-generated waves at large scales. The simulation results are well visualized with the marching cubes algorithm. They find that when the landslide scale, water length, and water depth are fixed, the initial maximum wave height decreases as the water width increases. After the water is squeezed by the falling slider, one-way and two-way propagation of the surge is generated under 2D and 3D conditions, respectively. Under the true-3D condition, although the initial swell is smaller than under the 2D and quasi-2D conditions, a large wave height is generated during the climbing and undulation near the shore.

In the last paper, Na Liu, Xingce Wang, Shaolong Liu, Zhongke Wu, and Jiale He, from Beijing Normal University, China, and Peng Cheng, Chunyan Miao, and Nadia Magnenat-Thalmann, from Nanyang Technological University, Singapore, propose a framework for crowd formation via hierarchical planning, comprising cooperative-task, coordinated-behaviour, and action-control planning. In cooperative-task planning, they improve the grid potential field to achieve global path planning for a team. In coordinated-behaviour planning, they propose a time–space table to arrange behaviour scheduling for a movement.
In action-control planning, they combine the gaze-movement angle model with fuzzy logic control to drive agent actions. Their method is verified on crowds of various densities, from sparse to dense, using quantitative performance measures. The approach is independent of the simulation model and can be extended to other crowd simulation tasks.

We would like to thank Professor Taku Komura, from the University of Edinburgh, who handled the first two papers, selected from the SIGGRAPH Asia 2017 Workshop on Data-Driven Animation.
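Returning to the last paper's cooperative-task planning: the basic grid potential field idea (before the authors' improvements) can be sketched generically. The grid, obstacles, and goal below are illustrative assumptions, not the paper's setup. A breadth-first search from the goal assigns each free cell its step distance, and an agent reaches the goal by repeatedly moving to the neighbouring cell with the lowest potential.

```python
# Minimal generic sketch of a grid potential field: BFS from the goal
# labels every free cell with its step distance; an agent descends the
# field by always stepping to the lowest-potential neighbour.
from collections import deque

GRID = [            # '.' = free cell, '#' = obstacle (illustrative map)
    "....#",
    ".##.#",
    "....#",
    ".#...",
]
ROWS, COLS = len(GRID), len(GRID[0])
GOAL = (3, 4)
STEPS = ((1, 0), (-1, 0), (0, 1), (0, -1))

def potential_field(goal):
    """Breadth-first step distances from the goal over free cells."""
    dist = {goal: 0}
    queue = deque([goal])
    while queue:
        r, c = queue.popleft()
        for dr, dc in STEPS:
            nr, nc = r + dr, c + dc
            if (0 <= nr < ROWS and 0 <= nc < COLS
                    and GRID[nr][nc] == "." and (nr, nc) not in dist):
                dist[(nr, nc)] = dist[(r, c)] + 1
                queue.append((nr, nc))
    return dist

def descend(start, field):
    """Follow strictly decreasing potential from start to the goal."""
    path = [start]
    while field[path[-1]] > 0:
        r, c = path[-1]
        nbrs = [(r + dr, c + dc) for dr, dc in STEPS]
        path.append(min((n for n in nbrs if n in field), key=field.get))
    return path

field = potential_field(GOAL)
path = descend((0, 0), field)
```

Because every free cell's potential is its BFS distance, some neighbour always has a potential exactly one lower, so the descent is guaranteed to terminate at the goal with a shortest path; improvements such as the one in the last paper typically refine how the field handles teams and congestion.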