Monday, May 24, 2021

UC San Diego computer scientist wins Chancellor's Dissertation Medal

UC San Diego computer science PhD student Zexiang Xu has been selected as this year's Chancellor's Dissertation Medal recipient within the UC San Diego Jacobs School of Engineering. Xu is currently a research scientist at Adobe Research. 

Zexiang Xu was advised by computer science professor Ravi Ramamoorthi, who is director of the UC San Diego Center for Visual Computing.

Zexiang Xu's dissertation: Sparse Sampling for Appearance Acquisition

Dissertation Abstract

Modeling the appearance of real scenes from captured images is a key problem in computer graphics and computer vision. This traditionally requires a large number of input samples (e.g., images, light-view directions, or depth hypotheses) and consumes extensive computational resources. In this dissertation, we aim to make scene acquisition more efficient and practical, and we present several approaches that successfully reduce the number of samples required in various appearance acquisition problems.

We first explore techniques that explicitly reconstruct the geometry and materials in a real scene; together, these two components essentially determine the scene's appearance. On the geometry side, we introduce a novel deep multi-view stereo technique that reconstructs high-quality scene geometry from a sparse set of sampled depth hypotheses. We leverage uncertainty estimation in a multi-stage cascaded network, which reconstructs highly accurate and highly complete geometry at low cost in a coarse-to-fine framework. On the material side, the reflectance of a real material is traditionally measured from tens or even hundreds of captured images. We present a novel reflectance acquisition technique that can reconstruct high-fidelity real materials from only two near-field images.
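
To make the coarse-to-fine idea concrete, the following is a minimal sketch, in PyTorch, of uncertainty-guided depth hypothesis sampling: each cascade stage searches a narrow per-pixel depth band whose half-width is set by the previous stage's predicted uncertainty, so only a few hypotheses are needed where the network is confident. The function name, shapes, and values here are illustrative assumptions, not the dissertation's actual code.

```python
import torch

def next_stage_hypotheses(depth, uncertainty, num_hypotheses):
    """Sample sparse per-pixel depth hypotheses for the next cascade
    stage, centered on the current depth estimate and scaled by the
    predicted per-pixel uncertainty (a wider search where uncertain).

    depth, uncertainty: [B, H, W] tensors from the previous stage.
    Returns hypotheses of shape [B, num_hypotheses, H, W].
    """
    # Offsets spread evenly in [-1, 1]; uncertainty acts as the
    # per-pixel half-width of the depth search interval.
    offsets = torch.linspace(-1.0, 1.0, num_hypotheses, device=depth.device)
    offsets = offsets.view(1, num_hypotheses, 1, 1)
    return depth.unsqueeze(1) + offsets * uncertainty.unsqueeze(1)

# Toy usage: a coarse stage predicts depth and uncertainty at low
# resolution; the next stage searches only a narrow, adaptive band.
B, H, W = 1, 32, 40
coarse_depth = torch.full((B, H, W), 2.5)  # hypothetical depth, in meters
coarse_unc = torch.full((B, H, W), 0.1)    # tight band where confident
hyps = next_stage_hypotheses(coarse_depth, coarse_unc, num_hypotheses=8)
print(hyps.shape)  # torch.Size([1, 8, 32, 40])
```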

Moreover, we explore image-based acquisition techniques that bypass explicit scene reconstruction and focus on realistic image synthesis under new conditions. We first present a novel deep neural network for image-based relighting. Our network simultaneously learns optimized input lighting directions and a relighting function. Our approach can produce photo-realistic relighting results under novel environment maps from only five images captured under five optimized directional lights. We also study the problem of view synthesis for real objects under controlled lighting, which classically requires dense input views with small baselines. We propose a novel deep-learning-based view synthesis technique that can synthesize photo-realistic images from novel viewpoints given only six widely spaced input views. Our network leverages visibility-aware attention to effectively aggregate multi-view appearance. We also show that our view synthesis technique can be combined with our relighting technique to achieve novel-view relighting from sparse light-view samples.
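
To illustrate how input lighting directions can be learned jointly with a relighting function, here is a minimal sketch in PyTorch. It assumes a light-stage-style training set that is densely lit, so the K "captured" inputs can be soft-selected from the dense set in a differentiable way, letting gradients flow into the learned directions. The class name LearnedSampleRelight, the toy two-layer relighting head, and all shapes are illustrative assumptions, not the dissertation's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnedSampleRelight(nn.Module):
    """Jointly learn K input lighting directions and a relighting CNN."""

    def __init__(self, num_inputs=5, channels=3):
        super().__init__()
        # K learnable light directions (normalized on the fly in forward).
        self.light_dirs = nn.Parameter(torch.randn(num_inputs, 3))
        self.sharpness = 50.0  # softmax temperature for soft selection
        # Toy relighting head: maps the K stacked inputs plus the target
        # light direction to one relit image (a real network is deeper).
        self.net = nn.Sequential(
            nn.Conv2d(num_inputs * channels + 3, 32, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, dense_images, dense_dirs, target_dir):
        # dense_images: [L, C, H, W]; dense_dirs: [L, 3]; target_dir: [3]
        d = F.normalize(self.light_dirs, dim=-1)                     # [K, 3]
        w = F.softmax(self.sharpness * d @ dense_dirs.t(), dim=-1)   # [K, L]
        # Differentiable soft "captures" under the learned lights: [K, C, H, W]
        inputs = torch.einsum('kl,lchw->kchw', w, dense_images)
        K, C, H, W = inputs.shape
        x = inputs.reshape(1, K * C, H, W)
        t = target_dir.view(1, 3, 1, 1).expand(1, 3, H, W)
        return self.net(torch.cat([x, t], dim=1))                    # [1, C, H, W]

# Toy usage with a hypothetical densely lit capture of 64 light directions.
L, C, H, W = 64, 3, 16, 16
model = LearnedSampleRelight()
images = torch.rand(L, C, H, W)
dirs = F.normalize(torch.randn(L, 3), dim=-1)
relit = model(images, dirs, F.normalize(torch.randn(3), dim=0))
print(relit.shape)  # torch.Size([1, 3, 16, 16])
```

At test time, the soft selection would be replaced by real photographs taken under the learned directional lights; this is the sense in which the sampling pattern itself, not just the relighting function, is optimized.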

