Faculty and students from the Center for Visual Computing will present 17 papers at CVPR 2017, the premier international forum for computer vision research, held this year in Honolulu, Hawaii.
Center for Visual Computing papers at CVPR 2017:
1. Robust Energy Minimization for BRDF-Invariant Shape From Light Fields
Zhengqin Li, Zexiang Xu, Ravi Ramamoorthi, Manmohan Chandraker
2. Light Field Blind Motion Deblurring
Pratul P. Srinivasan, Ren Ng, Ravi Ramamoorthi
3. Deeply Supervised Salient Object Detection With Short Connections
Qibin Hou, Ming-Ming Cheng, Xiaowei Hu, Ali Borji, Zhuowen Tu, Philip H. S. Torr
4. Aggregated Residual Transformations for Deep Neural Networks
Saining Xie, Ross Girshick, Piotr Dollár, Zhuowen Tu, Kaiming He
5. Semantically Consistent Regularization for Zero-Shot Recognition
Pedro Morgado, Nuno Vasconcelos
6. AGA: Attribute-Guided Augmentation
Mandar Dixit, Roland Kwitt, Marc Niethammer, Nuno Vasconcelos
7. Deep Learning With Low Precision by Half-Wave Gaussian Quantization
Zhaowei Cai, Xiaodong He, Jian Sun, Nuno Vasconcelos
8. Deep Supervision With Shape Concepts for Occlusion-Aware 3D Object Parsing
Chi Li, M. Zeeshan Zia, Quoc-Huy Tran, Xiang Yu, Gregory D. Hager, Manmohan Chandraker
9. DESIRE: Distant Future Prediction in Dynamic Scenes With Interacting Agents
Namhoon Lee, Wongun Choi, Paul Vernaza, Christopher B. Choy, Philip H. S. Torr, Manmohan Chandraker
10. Deep Network Flow for Multi-Object Tracking
Samuel Schulter, Paul Vernaza, Wongun Choi, Manmohan Chandraker
11. Learning Random-Walk Label Propagation for Weakly-Supervised Semantic Segmentation
Paul Vernaza, Manmohan Chandraker
12. Person Re-Identification in the Wild
Liang Zheng, Hengheng Zhang, Shaoyan Sun, Manmohan Chandraker, Yi Yang, Qi Tian
13. A Point Set Generation Network for 3D Object Reconstruction from a Single Image
Hao Su, Haoqiang Fan, Leonidas Guibas
14. PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation
Hao Su, Charles Qi, Kaichun Mo, Leonidas Guibas
15. SyncSpecCNN: Synchronized Spectral CNN for 3D Shape Segmentation
Li Yi, Hao Su, Xingwen Guo, Leonidas Guibas
16. Learning Shape Abstractions by Assembling Volumetric Primitives
Shubham Tulsiani, Hao Su, Leonidas Guibas, Alexei A. Efros, Jitendra Malik
17. Learning Non-Lambertian Object Intrinsics across ShapeNet Categories
Jian Shi, Yue Dong, Hao Su, Stella X. Yu
Wednesday, June 7, 2017
Friday, June 2, 2017
Center for Visual Computing papers at SIGGRAPH 2017
Faculty and students from the Center for Visual Computing will present six papers at SIGGRAPH 2017, the premier international forum for computer graphics research, held this year in Los Angeles.
Center for Visual Computing papers at SIGGRAPH 2017:
1. "Antialiasing Complex Global Illumination Effects in Path-space" by Laurent Belcour, Lingqi Yan, Ravi Ramamoorthi and Derek Nowrouzezahrai
2. "An Efficient and Practical Near and Far Field Fur Reflectance Model" by Lingqi Yan, Henrik Wann Jensen and Ravi Ramamoorthi
3. "Light Field Video Capture Using a Learning-Based Hybrid Imaging System" by Ting-Chun Wang, Jun-Yan Zhu, Nima Khademi Kalantari, Alexei A. Efros and Ravi Ramamoorthi
4. "Deep High Dynamic Range Imaging of Dynamic Scenes" by Nima Khademi Kalantari and Ravi Ramamoorthi
5. "Patch-Based Optimization for Image-Based Texture Mapping" by Sai Bi, Nima Khademi Kalantari and Ravi Ramamoorthi
6. "Learning Hierarchical Shape Segmentation and Labeling from Online Repositories" by Li Yi, Leonidas J. Guibas, Aaron Hertzmann, Vladimir G. Kim, Hao Su and Ersin Yumer
For more about the Center for Visual Computing visit viscomp.ucsd.edu.
Tuesday, September 27, 2016
Center for Visual Computing Faculty and Students to Present 8 papers at the European Conference on Computer Vision
Faculty and students from the Center for Visual Computing will present eight papers at the European Conference on Computer Vision (ECCV), the premier international forum for computer vision research this year, held Oct. 8-16, 2016, in Amsterdam.
Center for Visual Computing papers at ECCV 2016:
1. Top-down Learning for Structured Labeling with Convolutional Pseudoprior
Saining Xie, Xun Huang, Zhuowen Tu
2. A Unified Multi-scale Deep Convolutional Neural Network for Fast Object Detection
Zhaowei Cai, Quanfu Fan, Rogerio Feris, Nuno Vasconcelos
3. Semantic Clustering for Robust Fine-Grained Scene Recognition
Marian George, Mandar Dixit, Gábor Zogg, Nuno Vasconcelos
4. Peak-Piloted Deep Network for Facial Expression Recognition
Xiangyun Zhao, Xiaodan Liang, Luoqi Liu, Teng Li, Yugang Han, Nuno Vasconcelos, Shuicheng Yan
5. HFS: Hierarchical Feature Selection for Efficient Image Segmentation
Ming-Ming Cheng, Yun Liu, Qibin Hou, Jiawang Bian, Philip Torr, Shimin Hu, Zhuowen Tu
6. Linear depth estimation from an uncalibrated, monocular polarisation image
William Smith, Ravi Ramamoorthi, Silvia Tozza
7. A 4D Light-Field Dataset and CNN Architectures for Material Recognition
Ting-Chun Wang, Jun-Yan Zhu, Hiroaki Ebi, Manmohan Chandraker, Alexei Efros, Ravi Ramamoorthi
8. Deep Deformation Network for Object Landmark Localization
Xiang Yu, Feng Zhou, Manmohan Chandraker
Center for Visual Computing faculty and students will also present three papers at the SIGGRAPH Asia 2016 computer graphics conference, held in Macao in early December.
Center for Visual Computing papers at SIGGRAPH Asia 2016:
1. Minimal BRDF Sampling for Two-Shot Near-Field Reflectance Acquisition
Zexiang Xu, Jannik Boll Nielsen, Jiyang Yu, Henrik Wann Jensen, Ravi Ramamoorthi
2. Downsampling Scattering Parameters for Rendering Anisotropic Media
Shuang Zhao, Lifan Wu, Fredo Durand, Ravi Ramamoorthi
3. Learning-Based View Synthesis for Light Field Cameras
Nima Khademi Kalantari, Ting-Chun Wang, Ravi Ramamoorthi
Monday, July 18, 2016
Highlights from the 2016 UC San Diego Center for Visual Computing Retreat
UC San Diego held its first annual Center for Visual Computing Retreat May 20-21, 2016. Faculty members of the Center reviewed the work that has been done since its opening in 2015. The Center was created to find innovative solutions in computer vision and computer graphics. The retreat drew more than 50 participants, including 19 visitors from nine industrial sponsors.
At the retreat, Ravi Ramamoorthi, Director of the Center and Ronald L. Graham Professor in the Computer Science and Engineering Department, and Jacobs School of Engineering Dean Albert P. Pisano gave opening remarks and introduced the Center.
Following opening remarks, Ramamoorthi and other UC San Diego faculty members from the Computer Science and Engineering Department and Calit2 gave updates on their research:
Thomas A. DeFanti, Research Scientist, Calit2
Cameras for Virtual Reality Displays
Ravi Ramamoorthi, PhD, Director, Center for Visual Computing | Professor, CSE
Sampling and Reconstruction of High-Dimensional Visual Appearance
Zhuowen Tu, PhD, Professor, Cognitive Science, CSE
Deep Supervision for Deep Learning: Training, Regularization, and Multi-Scale Learning
Jürgen Schulze, PhD, Associate Research Scientist, Computer Science
Virtual Reality with Head Mounted Displays
The majority of the first day consisted of student presentations on past and ongoing work, as well as a poster session in the evening.
Computer science and engineering professor Henrik Wann Jensen also spoke on the challenges presented by light transport simulation.
Following more student presentations on Day 2, Ramamoorthi, computer science professor Jürgen Schulze and cognitive science professor Zhuowen Tu served on a panel discussing 3D and VR imaging.
The retreat concluded with feedback from sponsors, including Cubic, which posted a blog post about the event.
Friday, April 3, 2015
Lovely digital creatures on display at Research Expo April 16
A relative of this cute digital white rabbit will be part of one of the many posters on display April 16 at Research Expo at the Price Center. The picture is a very rough draft of the work of computer science Ph.D. student Chiwei Tseng, who works with professors Ravi Ramamoorthi and Henrik W. Jensen. The work is also part of the Jacobs School's new Center for Visual Computing.
Here's the abstract for Tseng's poster, which is titled "A generic light scattering model for rendering photorealistic animal fur fibers":
Rendering photorealistic animal fur is of practical importance in many computer graphics productions. In the past, the visual appearances of specific fiber types have been studied, and various light scattering models derived from cylindrical geometry have been proposed. These models, however, lack either physical accuracy or versatility to produce the wide range of specular and diffusive material properties observed on animal fur fibers in the wild. We propose an anatomically based light scattering model for arbitrary animal fur fibers, represented by two coaxial cylinder volumes. We show that our model preserves high fidelity to actual animal fur and can simulate a large array of visual appearances by qualitatively matching synthetic optical microscopy images and far-field scattering profiles with measured data. Through reconstructing the light paths for formerly unexplainable scattering lobes observed on 10 animal fur fiber types, we reveal how the subsurface structures of a fur fiber can bring about decisive effects to its visual appearance.
More info about Research Expo at: http://www.jacobsschool.ucsd.edu/re/