- Patent Title: Methods, systems, and media for relighting images using predicted deep reflectance fields
- Application No.: US16616235; Filing Date: 2019-10-16
- Publication No.: US10997457B2; Publication Date: 2021-05-04
- Inventors: Christoph Rhemann, Abhimitra Meka, Matthew Whalen, Jessica Lynn Busch, Sofien Bouaziz, Geoffrey Douglas Harvey, Andrea Tagliasacchi, Jonathan Taylor, Paul Debevec, Peter Joseph Denny, Sean Ryan Francesco Fanello, Graham Fyffe, Jason Angelo Dourgarian, Xueming Yu, Adarsh Prakash Murthy Kowdle, Julien Pascal Christophe Valentin, Peter Christopher Lincoln, Rohit Kumar Pandey, Christian Häne, Shahram Izadi
- Applicant: Google LLC
- Applicant Address: Mountain View, CA, US
- Assignee: Google LLC
- Current Assignee: Google LLC
- Current Assignee Address: Mountain View, CA, US
- Agent: Brake Hughes Bellermann LLP
- International Application: PCT/US2019/056532 (WO), filed 2019-10-16
- International Publication: WO2020/236206 (WO), published 2020-11-26
- Primary Classification: G06K9/46
- IPC Classes: G06K9/46; G06T15/50; G06T15/20; G06N3/08; G06K9/62
Abstract:
Methods, systems, and media for relighting images using predicted deep reflectance fields are provided. In some embodiments, the method comprises:

- identifying a group of training samples, where each training sample includes (i) a group of one-light-at-a-time (OLAT) images, each captured while a single light among a plurality of lights arranged on a lighting structure was activated, (ii) a group of spherical color gradient images, each captured while the plurality of lights on the lighting structure were activated to each emit a particular color, and (iii) a lighting direction; each OLAT image and each spherical color gradient image is an image of a subject, and the lighting direction indicates the orientation of a light relative to the subject;
- training a convolutional neural network on the group of training samples, where, for each training iteration in a series of training iterations and for each training sample, training comprises: generating an output predicted image that represents the sample's subject with lighting from the sample's lighting direction; identifying the ground-truth OLAT image in the sample's group of OLAT images that corresponds to the sample's lighting direction; calculating a loss that indicates the perceptual difference between the output predicted image and the identified ground-truth OLAT image; and updating the parameters of the convolutional neural network based on the calculated loss;
- identifying a test sample that includes a second group of spherical color gradient images and a second lighting direction; and
- generating, with the trained convolutional neural network, a relit image of the subject shown in the second group of spherical color gradient images with lighting from the second lighting direction.
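The abstract describes a supervised training loop: a CNN takes spherical color gradient images plus a target lighting direction and is trained to predict the corresponding OLAT image under a perceptual loss. Below is a minimal sketch of that loop in PyTorch. The network architecture, the conditioning scheme (broadcasting the lighting direction as extra input channels), the L1 stand-in for the perceptual loss, and all names (`RelightNet`, `load_training_samples`) are illustrative assumptions; the patent does not publish an implementation.

```python
# A minimal sketch of the training loop described in the abstract (PyTorch).
# The architecture, conditioning scheme, and loss are assumptions, not the
# patented implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RelightNet(nn.Module):
    """Hypothetical CNN: spherical color gradient images plus a lighting
    direction in, a predicted OLAT image out."""

    def __init__(self, gradient_channels=6):
        super().__init__()
        # The lighting direction is broadcast to a 3-channel map and
        # concatenated with the gradient images (an assumed scheme).
        self.net = nn.Sequential(
            nn.Conv2d(gradient_channels + 3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, gradient_images, light_dir):
        b, _, h, w = gradient_images.shape
        dir_map = light_dir.view(b, 3, 1, 1).expand(b, 3, h, w)
        return self.net(torch.cat([gradient_images, dir_map], dim=1))


def perceptual_loss(pred, target):
    # Stand-in for the "perceptual difference" in the abstract; a real
    # implementation might compare deep feature activations instead.
    return F.l1_loss(pred, target)


def load_training_samples(num_samples=4):
    # Dummy dataset of random tensors so the sketch runs end to end:
    # two 3-channel spherical gradient images, a unit lighting direction,
    # and the ground-truth OLAT image for that direction.
    for _ in range(num_samples):
        gradient_images = torch.rand(1, 6, 64, 64)
        light_dir = F.normalize(torch.randn(1, 3), dim=1)
        gt_olat = torch.rand(1, 3, 64, 64)
        yield gradient_images, light_dir, gt_olat


model = RelightNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for gradient_images, light_dir, gt_olat in load_training_samples():
    pred = model(gradient_images, light_dir)   # output predicted image
    loss = perceptual_loss(pred, gt_olat)      # vs. identified ground-truth OLAT
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                           # update CNN parameters
```

At test time, the same forward pass applied to a second group of spherical color gradient images with a novel lighting direction would yield the relit image described in the final step of the abstract.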