-
Publication No.: US10789754B2
Publication Date: 2020-09-29
Application No.: US16047839
Filing Date: 2018-07-27
Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that use style-aware puppets patterned after a source-character-animation sequence to generate a target-character-animation sequence. In particular, the disclosed systems can generate style-aware puppets based on an animation character drawn or otherwise created (e.g., by an artist) for the source-character-animation sequence. The style-aware puppets can include, for instance, a character-deformational model, a skeletal-difference map, and a visual-texture representation of an animation character from a source-character-animation sequence. By using style-aware puppets, the disclosed systems can both preserve and transfer a detailed visual appearance and stylized motion of an animation character from a source-character-animation sequence to a target-character-animation sequence.
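The abstract names three components of a style-aware puppet. The sketch below is a minimal, hypothetical Python representation of such a puppet and of one additive use of its skeletal-difference map; the class name, field names, array shapes, and the additive transfer are assumptions for illustration, not the patent's actual implementation.

```python
# Hypothetical sketch of the three puppet components named in the abstract.
# All names and array layouts are assumptions, not the patented representation.
from dataclasses import dataclass
import numpy as np

@dataclass
class StyleAwarePuppet:
    """Per-frame style data extracted from a source-character-animation sequence."""
    deformation_model: np.ndarray        # e.g. per-vertex offsets of a character mesh, shape (V, 2)
    skeletal_difference_map: np.ndarray  # e.g. per-joint offsets between stylized and base pose, shape (J, 2)
    visual_texture: np.ndarray           # e.g. RGBA texture sampled from the drawn frame, shape (H, W, 4)

def transfer_pose_style(puppet: StyleAwarePuppet, target_base_pose: np.ndarray) -> np.ndarray:
    """Offset a target base pose by the source frame's skeletal stylization (illustrative only)."""
    return target_base_pose + puppet.skeletal_difference_map
```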
-
Publication No.: US12056849B2
Publication Date: 2024-08-06
Application No.: US17466711
Filing Date: 2021-09-03
Inventors: Michal Lukác, Daniel Sýkora, David Futschik, Zhaowen Wang, Elya Shechtman
IPC Classification: G06T5/50, G06F18/214
CPC Classification: G06T5/50, G06F18/214, G06T2207/10016, G06T2207/20081, G06T2207/20084
Abstract: Embodiments are disclosed for translating an image from a source visual domain to a target visual domain. In particular, in one or more embodiments, the disclosed systems and methods comprise a training process that includes receiving a training input including a pair of keyframes and an unpaired image. The pair of keyframes represent a visual translation from a first version of an image in a source visual domain to a second version of the image in a target visual domain. The one or more embodiments further include sending the pair of keyframes and the unpaired image to an image translation network to generate a first training image and a second training image. The one or more embodiments further include training the image translation network to translate images from the source visual domain to the target visual domain based on a calculated loss using the first and second training images.
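The abstract outlines one training step: a pair of keyframes (the source-domain and target-domain versions of the same image) plus an unpaired image are sent to an image translation network, which produces two training images used to compute a loss. The PyTorch sketch below traces that flow under assumed names; the tiny convolutional network, the L1 reconstruction loss, and the omission of any loss term on the second training image are placeholders, not the patent's actual architecture or objective.

```python
# Minimal sketch of the training input described in the abstract: a keyframe pair
# plus an unpaired image, passed through a stand-in image translation network.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImageTranslationNet(nn.Module):
    """Toy stand-in for the image translation network (assumed architecture)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x):
        return self.body(x)

net = ImageTranslationNet()
optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)

# One training step: keyframe pair + unpaired image -> two training images -> loss.
source_keyframe = torch.rand(1, 3, 64, 64)  # first version, source visual domain
target_keyframe = torch.rand(1, 3, 64, 64)  # second version, target visual domain
unpaired_image  = torch.rand(1, 3, 64, 64)  # image with no target-domain counterpart

first_training_image = net(source_keyframe)   # should reproduce the target keyframe
second_training_image = net(unpaired_image)   # translated unpaired image

# Supervised reconstruction loss on the keyframe pair; additional terms involving the
# second training image (e.g. adversarial or consistency losses) are omitted here.
loss = F.l1_loss(first_training_image, target_keyframe)
loss.backward()
optimizer.step()
optimizer.zero_grad()
```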
-
Publication No.: US20200035010A1
Publication Date: 2020-01-30
Application No.: US16047839
Filing Date: 2018-07-27
Inventors: Vladimir Kim, Wilmot Li, Marek Dvoroznák, Daniel Sýkora
Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that use style-aware puppets patterned after a source-character-animation sequence to generate a target-character-animation sequence. In particular, the disclosed systems can generate style-aware puppets based on an animation character drawn or otherwise created (e.g., by an artist) for the source-character-animation sequence. The style-aware puppets can include, for instance, a character-deformational model, a skeletal-difference map, and a visual-texture representation of an animation character from a source-character-animation sequence. By using style-aware puppets, the disclosed systems can both preserve and transfer a detailed visual appearance and stylized motion of an animation character from a source-character-animation sequence to a target-character-animation sequence.
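This pre-grant publication shares its application number and abstract with the granted patent in the first entry above. As a complement to the per-frame puppet sketch given there, the hypothetical NumPy snippet below illustrates how per-frame skeletal-difference maps from a source sequence might be applied across a whole target sequence to transfer stylized motion; the additive model and all variable names are assumptions for illustration only.

```python
# Illustrative sequence-level transfer: offset each target base pose by the
# corresponding source frame's skeletal-difference map. Not the patented method.
import numpy as np

num_frames, num_joints = 24, 15
# Per-frame skeletal-difference maps extracted from the source-character-animation sequence.
skeletal_difference_maps = np.zeros((num_frames, num_joints, 2))
# Neutral per-frame joint positions of the target character.
target_base_poses = np.random.rand(num_frames, num_joints, 2)

# Stylized joint positions for the target-character-animation sequence; the puppet's
# visual-texture representation would then be warped onto each stylized pose.
stylized_target_poses = target_base_poses + skeletal_difference_maps
```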
-