Computer Graphics and Visualization Lab

University of Hong Kong


The CGVU Lab, led by Prof. Taku Komura, is part of the Department of Computer Science at the University of Hong Kong. Our research focuses on physically-based animation and the application of machine learning techniques to animation synthesis.


Meet the Team

Principal Investigator

Taku Komura

Professor

Physical Simulation, Character Animation, 3D Modelling

Research Staff

Floyd M. Chitalu

Senior Researcher, since Nov. 2022.

Physical Simulation

Yinghao Huang

Postdoc, since Aug. 2023.

Human Pose Estimation, Human Motion Generation

Chen Peng

Postdoc, since Sep. 2023.

Physically-Based Animation, Fluid Simulation

Graduate Students

Linxu Fan

PhD, since Nov. 2019.

Physical Simulation

Zhiyang Dou

PhD, since Aug. 2020.
Co-supv. by Prof. Wenping Wang.

Character Animation, Geometric Computing

Dafei Qin

PhD, since Sep. 2020.

Facial Animation, Neural Rendering

Mingyi Shi

PhD, since Nov. 2020.

3D Human Motion, Generative AI

Jintao Lu

PhD, since Sep. 2021.

Human Scene Interaction, Motion Control

Huancheng Lin

MPhil, since Sep. 2022.

Physical Simulation

Guying Lin

MPhil, since Sep. 2022.
Co-supv. by Prof. Wenping Wang.

Neural Implicit Surface Representation

Kemeng Huang

PhD, since Sep. 2022.

Physical Simulation, High Performance Computing

Wenjia Wang

PhD, since Jan. 2023.

3D Reconstruction, Human Pose Estimation, Human Motion Generation

Zhouyingcheng Liao

PhD, since Jan. 2023.

Neural Cloth Simulation, Character Animation

Xiaohan Ye

PhD, since Sep. 2023.

Physical Simulation, Motion Control

Yuke Lou

MPhil, since Sep. 2023.

Motion Generation

Research Assistants

Leo Ho

Research Assistant, since Aug. 2023.

Digital Humans, Motion Synthesis

Xinyu Lu

Research Assistant, since Sep. 2023.

Physically-Based Animation, Simulation

Recent Publications

C·ASE: Learning Conditional Adversarial Skill Embeddings for Physics-based Characters

We present C·ASE, an efficient and effective framework that learns Conditional Adversarial Skill Embeddings for physics-based characters. C·ASE enables the physically simulated character to learn a diverse repertoire of skills while providing controllability in the form of direct manipulation of the skills to be performed. This is achieved by dividing the heterogeneous skill motions into distinct subsets containing homogeneous samples for training a low-level conditional model to learn the conditional behavior distribution. The skill-conditioned imitation learning naturally offers explicit control over the character's skills after training. The training procedure incorporates focal skill sampling, skeletal residual forces, and element-wise feature masking to balance diverse skills of varying complexities, mitigate dynamics mismatch to master agile motions, and capture more general behavior characteristics, respectively. Once trained, the conditional model can produce highly diverse and realistic skills, outperforming state-of-the-art models, and can be repurposed for various downstream tasks. In particular, the explicit skill control handle allows a high-level policy or a user to direct the character with desired skill specifications, which we demonstrate is advantageous for interactive character animation.
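
To illustrate the core idea of skill conditioning, here is a minimal, hypothetical sketch (not the released code) of a low-level policy that concatenates the character state with a learned skill embedding, so that the skill index serves as the explicit control handle for a user or high-level policy. All dimensions, names, and the network layout are assumptions for illustration only.

```python
# Hypothetical sketch of a skill-conditioned low-level policy (not the authors' code).
# A discrete skill id selects a learned embedding that is concatenated with the
# character state, letting a user or high-level policy choose the skill at run time.
import torch
import torch.nn as nn

class SkillConditionedPolicy(nn.Module):
    def __init__(self, state_dim, action_dim, num_skills, skill_dim=64, hidden=256):
        super().__init__()
        self.skill_embed = nn.Embedding(num_skills, skill_dim)  # learned skill codes
        self.net = nn.Sequential(
            nn.Linear(state_dim + skill_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, state, skill_id):
        z = self.skill_embed(skill_id)                                # (B, skill_dim)
        return torch.tanh(self.net(torch.cat([state, z], dim=-1)))   # actions in [-1, 1]

# Direct skill control: the caller picks which skill to execute.
policy = SkillConditionedPolicy(state_dim=197, action_dim=31, num_skills=10)
state = torch.randn(1, 197)                       # placeholder character observation
action = policy(state, torch.tensor([3]))         # execute skill #3
```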

Zolly: Zoom Focal Length Correctly for Perspective-Distorted Human Mesh Reconstruction

As it is hard to calibrate single-view RGB images in the wild, existing 3D human mesh reconstruction (3DHMR) methods either use a constant large focal length or estimate one based on the background environment context, which cannot tackle the problem of torso, limb, hand, or face distortion caused by perspective camera projection when the camera is close to the human body. These naive focal length assumptions can harm this task through incorrectly formulated projection matrices. To solve this, we propose Zolly, the first 3DHMR method focusing on perspective-distorted images. Our approach begins with analysing the reason for perspective distortion, which we find is mainly caused by the relative location of the human body to the camera center. We propose a new camera model and a novel 2D representation, termed the distortion image, which describes the 2D dense distortion scale of the human body. We then estimate the distance from distortion scale features rather than environment context features. Afterwards, we integrate the distortion feature with image features to reconstruct the body mesh. To formulate the correct projection matrix and locate the human body position, we simultaneously use perspective and weak-perspective projection losses. Since existing datasets do not cover this task, we propose the first synthetic dataset, PDHuman, and extend two real-world datasets tailored for this task, all containing perspective-distorted human images. Extensive experiments show that Zolly outperforms existing state-of-the-art methods on both perspective-distorted datasets and the standard benchmark (3DPW).
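
As a rough illustration of the two projection models combined in the loss above, the hypothetical snippet below (not the authors' implementation) contrasts full perspective projection, which divides by per-point depth, with weak-perspective projection, which replaces that division with a single scale. The focal length, translation, and joint layout are assumed values for illustration.

```python
# Hypothetical illustration of perspective vs. weak-perspective projection
# (not the released Zolly code); focal, translation, and joint count are assumptions.
import torch

def perspective_project(joints, focal, trans):
    """Full perspective: x' = f * X / Z after adding the camera-space translation."""
    pts = joints + trans                       # (N, 3) camera-space points
    return focal * pts[:, :2] / pts[:, 2:3]    # (N, 2) image-plane coordinates

def weak_perspective_project(joints, scale, trans_xy):
    """Weak perspective: one global scale stands in for per-point depth division."""
    return scale * (joints[:, :2] + trans_xy)  # (N, 2)

joints = torch.randn(24, 3)                    # e.g. a set of 3D body joint positions
persp = perspective_project(joints, focal=5000.0, trans=torch.tensor([0.0, 0.0, 2.5]))
weak = weak_perspective_project(joints, scale=0.8, trans_xy=torch.tensor([0.0, 0.1]))
# A training loss could penalise 2D keypoint error under both projections simultaneously.
```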

Contact

  • CYC Building, the University of Hong Kong, Hong Kong