Recently, the review results of SIGGRAPH Asia 2017 were announced. Two papers from our lab, "Auto-colorization of 3D Models from Images" and "DCFont: An End-To-End Deep Chinese Font Generation System", were accepted as Technical Briefs, and each will be given a 20-minute oral presentation at the conference. SIGGRAPH Asia is a top-tier conference in the field of computer graphics; this year it will be held in Bangkok, Thailand.
The paper details are as follows:
"Auto-colorization of 3D Models from Images"
Authors:
Juncheng Liu, Zhouhui Lian, Jianguo Xiao
Abstract:
Color is crucial for achieving greater realism and better visual perception. However, the majority of existing 3D model repositories are colorless. In this paper, we propose an automatic scheme for 3D model colorization that takes advantage of the wide availability of realistic 2D images with similar appearance. Specifically, we establish a region-based correspondence between the 3D model and its 2D image counterpart. We then employ a PatchMatch-based approach to synthesize the texture images. Subsequently, we quilt the texture gaps via multi-view coverage. Finally, the texture coordinates are obtained by projecting back onto the 3D model. Our method yields satisfactory results in most situations, even when inconsistencies exist between the 3D model and the given image. In the results, we present a cross-over experiment that validates the effectiveness and generality of our method.
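The abstract mentions a PatchMatch-based texture synthesis stage. As background, the sketch below shows the core of the classic PatchMatch approximate nearest-neighbor search (random initialization, propagation, and random search) on small grayscale arrays. All names and parameters here are illustrative assumptions for exposition; this is not the authors' implementation.

```python
# Minimal sketch of PatchMatch nearest-neighbor search (illustrative only).
import numpy as np

def patch_distance(src, dst, sy, sx, dy, dx, p):
    """Sum of squared differences between two p x p patches."""
    a = src[sy:sy + p, sx:sx + p].astype(float)
    b = dst[dy:dy + p, dx:dx + p].astype(float)
    return float(np.sum((a - b) ** 2))

def patchmatch(src, dst, p=5, iters=4, seed=0):
    """Approximate nearest-neighbor field mapping src patches to dst patches."""
    rng = np.random.default_rng(seed)
    h, w = src.shape[0] - p + 1, src.shape[1] - p + 1
    H, W = dst.shape[0] - p + 1, dst.shape[1] - p + 1
    # Random initialization of the field and its per-patch costs.
    nnf = np.stack([rng.integers(0, H, (h, w)),
                    rng.integers(0, W, (h, w))], axis=-1)
    cost = np.array([[patch_distance(src, dst, y, x, *nnf[y, x], p)
                      for x in range(w)] for y in range(h)])
    for it in range(iters):
        # Alternate scan direction each iteration, as in the original algorithm.
        ys = range(h) if it % 2 == 0 else range(h - 1, -1, -1)
        xs = list(range(w)) if it % 2 == 0 else list(range(w - 1, -1, -1))
        d = 1 if it % 2 == 0 else -1
        for y in ys:
            for x in xs:
                # Propagation: try the already-visited neighbor's offset, shifted.
                for ny, nx in ((y - d, x), (y, x - d)):
                    if 0 <= ny < h and 0 <= nx < w:
                        cy = int(nnf[ny, nx][0]) + (y - ny)
                        cx = int(nnf[ny, nx][1]) + (x - nx)
                        if 0 <= cy < H and 0 <= cx < W:
                            c = patch_distance(src, dst, y, x, cy, cx, p)
                            if c < cost[y, x]:
                                nnf[y, x], cost[y, x] = (cy, cx), c
                # Random search: sample candidates in an exponentially shrinking window.
                r = max(H, W)
                while r >= 1:
                    cy = int(np.clip(nnf[y, x][0] + rng.integers(-r, r + 1), 0, H - 1))
                    cx = int(np.clip(nnf[y, x][1] + rng.integers(-r, r + 1), 0, W - 1))
                    c = patch_distance(src, dst, y, x, cy, cx, p)
                    if c < cost[y, x]:
                        nnf[y, x], cost[y, x] = (cy, cx), c
                    r //= 2
    return nnf
```

Each nnf[y, x] then points the p x p patch at (y, x) in src to its approximate best match in dst; a synthesis stage would vote such matches into the output texture image.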
"DCFont: An End-To-End Deep Chinese Font Generation System"
Authors:
Yue Jiang, Zhouhui Lian, Yingmin Tang, Jianguo Xiao
Abstract:
Building a complete personalized Chinese font library for an ordinary person is a tough task due to the existence of huge amounts of characters with complicated structures. Yet, existing automatic font generation methods still have many drawbacks. To address those problems, this paper proposes an end-to-end learning system, DCFont, to automatically generate the whole GB2312 font library that consists of 6763 Chinese characters from a small number (e.g., 775) of characters written by the user. Our system has two major advantages. On the one hand, the system works in an end-to-end manner, which means that human interventions during offline training and online generating periods are not required. On the other hand, a novel deep neural network architecture is designed to solve the font feature reconstruction and handwriting synthesis problems through adversarial training, which requires fewer input data but obtains more realistic and high-quality synthesis results compared to other deep learning based approaches. Experimental results verify the superiority of our method against the state of the art.