Yi-Hua Huang

PhD Student · The University of Hong Kong · 3D/4D Graphics & Vision

I am a third-year PhD student at the University of Hong Kong (HKU), working at the CVMI Lab, supervised by Xiaojuan Qi. My research focuses on 3D/4D generation, reconstruction, simulation, and editing.

Before that, I completed my master's degree at the Institute of Computing Technology (ICT), Chinese Academy of Sciences, under the supervision of Professor Lin Gao, and collaborated closely with Dr. Yan-Pei Cao and Professor Yu-Kun Lai. I received my bachelor's degree from the University of Chinese Academy of Sciences (UCAS), where I was mentored by Professor Xilin Chen.

Selected Publications

First Author / Project Lead
Full Publications →
AniGen

AniGen: Unified S³ Fields for Animatable 3D Asset Generation

Yi-Hua Huang, Zi-Xin Zou, Yuting He, Chirui Chang, Cheng-Feng Pu, Ziyi Yang, Yuan-Chen Guo, Yan-Pei Cao#, Xiaojuan Qi#

SIGGRAPH (TOG) 2026

"Generate animatable & articulate-ready 3D assets with given images."

We present AniGen, a unified framework that directly generates animation-ready 3D assets conditioned on a single image. Our key insight is to represent shape, skeleton, and skinning as mutually consistent S³ Fields (Shape, Skeleton, Skin) defined over a shared spatial domain.

ObjectMorpher

ObjectMorpher: 3D-Aware Image Editing via Deformable 3DGS

Yuhuan Xie*, Aoxuan Pan*, Yi-Hua Huang, Chirui Chang, Peng Dai, Xin Yu, Xiaojuan Qi#

CVPR 2026

"Interactive edit image objects with 3D manipulation."

We present ObjectMorpher, a unified, interactive framework that converts ambiguous 2D edits into geometry-grounded operations. ObjectMorpher lifts target instances with an image-to-3D generator into editable 3D Gaussian Splatting (3DGS), enabling fast, identity-preserving manipulation.

DRK

Deformable Radial Kernel Splatting

Yi-Hua Huang, MingXian Lin, Yang-Tian Sun, Ziyi Yang, Xiaoyang Lyu, Yan-Pei Cao#, Xiaojuan Qi#

CVPR 2025

"Explore beyond Gaussian kernels! A flexible general kernel splatting."

We introduce Deformable Radial Kernel (DRK), which extends Gaussian splatting into a more general and flexible framework. Through learnable radial bases with adjustable angles and scales, DRK efficiently models diverse shape primitives while enabling precise control over edge sharpness and boundary curvature.

SC-GS

SC-GS: Sparse-Controlled Gaussian Splatting for Editable Dynamic Scenes

Yi-Hua Huang*, Yang-Tian Sun*, Ziyi Yang*, Xiaoyang Lyu, Yan-Pei Cao#, Xiaojuan Qi#

CVPR 2024

"Dynamic reconstruction and interactive editing!"

We introduce sparse-controlled Gaussian splatting to synthesize dynamic novel views. Using a learned node graph of sparse control points, real-time editing is achieved via ARAP deformation driven by interactive user dragging.

Splatter a Video

Splatter a Video: Video Gaussian Representation for Versatile Processing

Yang-Tian Sun*, Yi-Hua Huang*, Lin Ma, Xiaoyang Lyu, Yan-Pei Cao, Xiaojuan Qi#

NeurIPS 2024

"A video representation for effortless tracking, depth estimation, segmentation, and editing!"

We introduce a novel explicit 3D representation, video Gaussian representation, that embeds a video into 3D Gaussians, enabling tracking, consistent video depth and feature refinement, motion and appearance editing, and stereoscopic video generation.

NeRF-Texture TPAMI

NeRF-Texture: Synthesizing Neural Radiance Field Textures

Yi-Hua Huang, Yan-Pei Cao, Yu-Kun Lai, Ying Shan, Lin Gao

TPAMI 2024

"Grow realistic real-captured 3D textures on any surface to bring shapes to life!"

We propose an algorithm to synthesize NeRF textures on arbitrary manifolds. Using a patch-matching method on curved surfaces, we smoothly quilt texture patches over mesh surfaces, and we build a multi-resolution pyramid to accelerate the patch-matching process.

NeRF-Texture SIGGRAPH

NeRF-Texture: Texture Synthesis with Neural Radiance Fields

Yi-Hua Huang, Yan-Pei Cao, Yu-Kun Lai, Ying Shan, Lin Gao

SIGGRAPH 2023

"Capture a bouquet on video, then generate endless flower textures from it!"

We introduce a NeRF-based system to acquire, synthesize, map, and relight textures captured from the real world. A novel coarse-fine disentangled representation is proposed to model the meso-structure of textures.

StylizedNeRF

StylizedNeRF: Consistent 3D Scene Stylization as Stylized NeRF via 2D-3D Mutual Learning

Yi-Hua Huang, Yue He, Yu-Jie Yuan, Yu-Kun Lai, Lin Gao

CVPR 2022

"The first to stylize NeRF! Turn reality into art, from Monet's to Van Gogh's!"

We propose a novel mutual learning framework for 3D scene stylization that combines a 2D image stylization network with NeRF, fusing the former's stylization ability with the latter's 3D consistency.

Services

💼

Internships

VAST, Tencent (2023 Summer)

📝

Reviewer

SIGGRAPH, SIGGRAPH Asia, TPAMI, CVPR, ICCV, NeurIPS, ICLR, ECCV, IJCV, TVCG, WACV, BMVC, ACCV, Pacific Graphics, Virtual Reality, Computers & Graphics, Neurocomputing, Pattern Recognition

🎤

Talks

Deep Blue College 2023, Graphics And Mixed Environment Seminar (GAMES) 2022