-
Publication number: US20240134597A1
Publication date: 2024-04-25
Application number: US17967714
Application date: 2022-10-17
Applicant: Adobe Inc.
Inventor: Lubomira Assenova DONTCHEVA, Anh Lan TRUONG, Hanieh DEILAMSALEHY, Kim Pascal PIMMEL, Aseem Omprakash AGARWALA, Dingzeyu Li, Joel Richard BRANDT, Joy Oakyung KIM
IPC: G06F3/16, G06F3/0482, G06F3/0484, G06F16/735, G06F16/738
CPC classification number: G06F3/167, G06F3/0482, G06F3/0484, G06F16/735, G06F16/738
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for a question search that surfaces meaningful questions appearing in a video. In an example embodiment, an audio track from a video is transcribed, and the transcript is parsed to identify sentences that end with a question mark. Depending on the embodiment, one or more types of questions are filtered out, such as short questions below a designated length or duration, logistical questions, and/or rhetorical questions. In response to a command to perform a question search, the remaining questions are identified, and search result tiles representing the video segments containing them are presented. Selecting (e.g., clicking or tapping on) a search result tile navigates a transcript interface to the corresponding portion of the transcript.
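As an illustration of the pipeline this abstract describes (transcribe, keep question-mark sentences, filter short, logistical, and rhetorical ones), here is a minimal Python sketch. The `TranscriptSentence` structure, the thresholds, and the keyword-based `LOGISTICAL_CUES`/`RHETORICAL_CUES` filters are hypothetical stand-ins; the patent does not disclose its classifiers here.

```python
from dataclasses import dataclass

@dataclass
class TranscriptSentence:
    text: str     # sentence text from the transcript
    start: float  # start time in the video, seconds
    end: float    # end time in the video, seconds

# Hypothetical keyword cues; the patent does not disclose its classifiers.
LOGISTICAL_CUES = ("can everyone hear", "is this on", "are we recording")
RHETORICAL_CUES = ("who knows?", "right?")

def find_questions(sentences, min_duration=1.5, min_words=4):
    """Return transcript sentences that look like meaningful questions."""
    results = []
    for s in sentences:
        if not s.text.strip().endswith("?"):
            continue  # keep only sentences ending with a question mark
        if (s.end - s.start) < min_duration or len(s.text.split()) < min_words:
            continue  # drop questions below a designated length/duration
        lowered = s.text.lower()
        if any(cue in lowered for cue in LOGISTICAL_CUES):
            continue  # drop logistical questions
        if any(lowered.endswith(cue) for cue in RHETORICAL_CUES):
            continue  # drop rhetorical questions
        results.append(s)
    return results
```

Each surviving sentence keeps its start/end times, which is what lets a search result tile map back to both a video segment and a location in the transcript.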
-
Publication number: US11908036B2
Publication date: 2024-02-20
Application number: US17034467
Application date: 2020-09-28
Applicant: Adobe Inc.
Inventor: Oliver Wang, Jianming Zhang, Dingzeyu Li, Zekun Hao
CPC classification number: G06T1/0014, G06N3/08, G06T1/0007, G06T7/11, G06T7/55, G06V10/44, G06T2207/20081
Abstract: The technology described herein is directed to a cross-domain training framework that iteratively trains a domain adaptive refinement agent to refine low quality real-world image acquisition data, e.g., depth maps, when accompanied by corresponding conditional data from other modalities, such as the underlying images or video from which the image acquisition data is computed. The cross-domain training framework includes a shared cross-domain encoder and two conditional decoder branch networks, e.g., a synthetic conditional depth prediction branch network and a real conditional depth prediction branch network. The shared cross-domain encoder converts synthetic and real-world image acquisition data into synthetic and real compact feature representations, respectively. The synthetic and real conditional decoder branch networks convert the respective synthetic and real compact feature representations back to synthetic and real image acquisition data (refined versions) conditioned on data from the other modalities. The cross-domain training framework iteratively trains the domain adaptive refinement agent.
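A minimal PyTorch sketch of the topology this abstract describes: one shared cross-domain encoder feeding two conditional decoder branches. The layer sizes, and the choice to condition by concatenating the RGB frame with the depth map at the encoder input, are assumptions for illustration, not the patent's architecture.

```python
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """Maps depth data plus its conditioning image to compact features."""
    def __init__(self, in_ch=4, feat_ch=64):  # 1 depth + 3 RGB channels
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, feat_ch, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, depth, image):
        # Conditioning via channel concatenation (an assumption).
        return self.net(torch.cat([depth, image], dim=1))

class ConditionalDecoder(nn.Module):
    """Decodes compact features back to a refined depth map."""
    def __init__(self, feat_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(feat_ch, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, feats):
        return self.net(feats)

encoder = SharedEncoder()                 # shared across both domains
synthetic_decoder = ConditionalDecoder()  # synthetic branch
real_decoder = ConditionalDecoder()       # real-world branch

depth = torch.randn(1, 1, 128, 128)  # noisy real-world depth map
image = torch.randn(1, 3, 128, 128)  # conditioning RGB frame
refined = real_decoder(encoder(depth, image))  # (1, 1, 128, 128)
```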
-
Publication number: US11899917B2
Publication date: 2024-02-13
Application number: US17969536
Application date: 2022-10-19
Applicant: Adobe Inc.
Inventor: Seth Walker, Joy O Kim, Aseem Agarwala, Joel Richard Brandt, Jovan Popovic, Lubomira Dontcheva, Dingzeyu Li, Hijung Shin, Xue Bai
IPC: G06F3/04847, G06F3/0485, G06F3/04845
CPC classification number: G06F3/04847, G06F3/0485, G06F3/04845, G06F2203/04806
Abstract: Embodiments are directed to techniques for interacting with a hierarchical video segmentation using a video timeline. In some embodiments, the finest level of a hierarchical segmentation identifies the smallest interaction unit of a video—semantically defined video segments of unequal duration called clip atoms, and higher levels cluster the clip atoms into coarser sets of video segments. A presented video timeline is segmented based on one of the levels, and one or more segments are selected through interactions with the video timeline. For example, a click or tap on a video segment or a drag operation dragging along the timeline snaps selection boundaries to corresponding segment boundaries defined by the level. Navigating to a different level of the hierarchy transforms the selection into coarser or finer video segments defined by the level. Any operation can be performed on selected video segments, including playing back, trimming, or editing.
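A small sketch of the snapping behavior, assuming each level of the hierarchy is stored as a sorted list of boundary timestamps (a representation chosen here purely for illustration): a dragged range snaps outward to the enclosing boundaries of the active level, so the same drag selects coarser segments at a higher level.

```python
import bisect

# Hypothetical hierarchy: each level is a sorted list of segment-boundary
# timestamps in seconds; level 0 is the finest (clip-atom boundaries).
HIERARCHY = [
    [0.0, 1.2, 2.7, 4.1, 5.0, 6.6, 8.0],  # clip atoms
    [0.0, 2.7, 5.0, 8.0],                  # coarser clusters
    [0.0, 5.0, 8.0],                       # coarsest level
]

def snap_selection(drag_start, drag_end, level):
    """Snap a dragged range outward to segment boundaries at `level`."""
    bounds = HIERARCHY[level]
    lo = bounds[bisect.bisect_right(bounds, drag_start) - 1]  # left edge down
    hi = bounds[bisect.bisect_left(bounds, drag_end)]         # right edge up
    return lo, hi

print(snap_selection(1.5, 4.5, level=0))  # (1.2, 5.0): clip-atom bounds
print(snap_selection(1.5, 4.5, level=1))  # (0.0, 5.0): coarser segments
```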
-
Publication number: US11887629B2
Publication date: 2024-01-30
Application number: US17330689
Application date: 2021-05-26
Applicant: ADOBE INC.
Inventor: Seth Walker, Hijung Shin, Cristin Ailidh Fraser, Aseem Agarwala, Lubomira Dontcheva, Joel Richard Brandt, Jovan Popović, Joy Oakyung Kim, Justin Salamon, Jui-hsien Wang, Timothy Jeewun Ganter, Xue Bai, Dingzeyu Li
IPC: G11B27/00, G11B27/036, G06F3/0486, G06F3/0482, G11B27/02
CPC classification number: G11B27/036, G06F3/0482, G06F3/0486
Abstract: Embodiments are directed to interactive tiles that represent video segments of a segmentation of a video. In some embodiments, each interactive tile represents a different video segment from a particular video segmentation (e.g., a default video segmentation). Each interactive tile includes a thumbnail (e.g., the first frame of the video segment represented by the tile), a snippet of the transcript from the beginning of the video segment, a visualization of detected faces in the video segment, and one or more faceted timelines that each visualize a category of detected features (e.g., detected visual scenes, audio classifications, or visual artifacts). In some embodiments, interacting with a particular interactive tile navigates to a corresponding portion of the video, adds a corresponding video segment to a selection, and/or scrubs through tile thumbnails.
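A sketch of the data one such tile aggregates, per the abstract; the field names and types are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class FacetedTimeline:
    """Visualizes one category of detected features within a segment."""
    category: str                          # e.g. "visual_scenes", "audio_class"
    spans: List[Tuple[float, float, str]]  # (start, end, label) in the segment

@dataclass
class InteractiveTile:
    """One tile per video segment of the default segmentation."""
    segment_start: float
    segment_end: float
    thumbnail: bytes            # e.g. encoded first frame of the segment
    transcript_snippet: str     # transcript from the segment's beginning
    detected_faces: List[str]   # identifiers of faces seen in the segment
    facets: List[FacetedTimeline] = field(default_factory=list)

tile = InteractiveTile(
    segment_start=12.0, segment_end=27.5,
    thumbnail=b"...jpeg bytes...",
    transcript_snippet="So the next thing we tried was...",
    detected_faces=["speaker_1"],
    facets=[FacetedTimeline("visual_scenes", [(12.0, 27.5, "office")])],
)
```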
-
Publication number: US20220076024A1
Publication date: 2022-03-10
Application number: US17017353
Application date: 2020-09-10
Applicant: ADOBE INC.
Inventor: Seth Walker, Joy Oakyung Kim, Hijung Shin, Aseem Agarwala, Joel R. Brandt, Jovan Popovic, Lubomira Dontcheva, Dingzeyu Li, Xue Bai
Abstract: Embodiments are directed to techniques for interacting with a hierarchical video segmentation using a metadata panel with a composite list of video metadata. The composite list is segmented into selectable metadata segments at locations corresponding to boundaries of video segments defined by a hierarchical segmentation. In some embodiments, the finest level of a hierarchical segmentation identifies the smallest interaction unit of a video—semantically defined video segments of unequal duration called clip atoms, and higher levels cluster the clip atoms into coarser sets of video segments. One or more metadata segments can be selected in various ways, such as by clicking or tapping on a metadata segment or by performing a metadata search. When a metadata segment is selected, a corresponding video segment is emphasized on the video timeline, a playback cursor is moved to the first video frame of the video segment, and the first video frame is presented.
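A sketch of the selection flow this abstract describes: a metadata segment carries the boundaries of its video segment, so selecting it (directly or via a metadata search) can emphasize that segment, move the playback cursor, and present the first frame. The `MetadataSegment` structure and the player interface are hypothetical stand-ins.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class MetadataSegment:
    text: str     # metadata (e.g., transcript) shown in the composite list
    start: float  # boundaries mirror the video segments of the chosen level
    end: float

class StubPlayer:
    """Hypothetical player interface; a real UI would implement these."""
    def highlight_range(self, start, end): print(f"highlight {start}-{end}")
    def seek(self, t): print(f"seek {t}")
    def show_current_frame(self): print("show current frame")

def on_metadata_selected(seg: MetadataSegment, player) -> None:
    """Emphasize the matching video segment and cue up its first frame."""
    player.highlight_range(seg.start, seg.end)  # emphasize on the timeline
    player.seek(seg.start)                      # cursor to first video frame
    player.show_current_frame()                 # present that frame

def search_metadata(segments: List[MetadataSegment], query: str):
    """Select metadata segments whose text matches a search query."""
    return [s for s in segments if query.lower() in s.text.lower()]

segments = [MetadataSegment("Welcome, everyone, to the demo.", 0.0, 4.2),
            MetadataSegment("Here is the architecture diagram.", 4.2, 9.8)]
on_metadata_selected(search_metadata(segments, "architecture")[0],
                     StubPlayer())
```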
-
Publication number: US20220075513A1
Publication date: 2022-03-10
Application number: US17017366
Application date: 2020-09-10
Applicant: ADOBE INC.
Inventor: Seth Walker, Joy Oakyung Kim, Aseem Agarwala, Joel R. Brandt, Jovan Popovic, Lubomira Dontcheva, Dingzeyu Li, Hijung Shin, Xue Bai
IPC: G06F3/0484, G06F3/0485
Abstract: Embodiments are directed to techniques for interacting with a hierarchical video segmentation using a video timeline. In some embodiments, the finest level of a hierarchical segmentation identifies the smallest interaction unit of a video—semantically defined video segments of unequal duration called clip atoms, and higher levels cluster the clip atoms into coarser sets of video segments. A presented video timeline is segmented based on one of the levels, and one or more segments are selected through interactions with the video timeline. For example, a click or tap on a video segment or a drag operation dragging along the timeline snaps selection boundaries to corresponding segment boundaries defined by the level. Navigating to a different level of the hierarchy transforms the selection into coarser or finer video segments defined by the level. Any operation can be performed on selected video segments, including playing back, trimming, or editing.
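This application shares its abstract with US11899917B2 above, so rather than repeat the snapping sketch, here is the complementary piece: deriving a coarser level by clustering consecutive clip atoms. Grouping by explicit sizes is purely illustrative; per the abstract, the actual clusters are semantically defined.

```python
def cluster_boundaries(atom_bounds, group_sizes):
    """Build a coarser level by merging runs of consecutive clip atoms.

    atom_bounds: sorted boundary timestamps of the finest level.
    group_sizes: atoms per coarser segment (explicit here for illustration).
    """
    coarser = [atom_bounds[0]]
    i = 0
    for n in group_sizes:
        i += n
        coarser.append(atom_bounds[i])
    return coarser

atoms = [0.0, 1.2, 2.7, 4.1, 5.0, 6.6, 8.0]  # 6 clip atoms, 7 boundaries
print(cluster_boundaries(atoms, [2, 2, 2]))   # [0.0, 2.7, 5.0, 8.0]
```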
-
Publication number: US20220060842A1
Publication date: 2022-02-24
Application number: US17515918
Application date: 2021-11-01
Applicant: Adobe Inc.
Inventor: Zhenyu Tang, Timothy Langlois, Nicholas Bryan, Dingzeyu Li
Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for rendering scene-aware audio based on acoustic properties of a user environment. For example, the disclosed system can use neural networks to analyze an audio recording to predict environment equalizations and reverberation decay times of the user environment without using a captured impulse response of the user environment. Additionally, the disclosed system can use the predicted reverberation decay times with an audio simulation of the user environment to optimize material parameters for the user environment. The disclosed system can then generate an audio sample that includes scene-aware acoustic properties based on the predicted environment equalizations, material parameters, and an environment geometry of the user environment. Furthermore, the disclosed system can augment training data for training the neural networks using frequency-dependent equalization information associated with measured and synthetic impulse responses.
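One well-known way to act on a predicted reverberation decay time (RT60) is to synthesize an impulse response as exponentially decaying noise and convolve it with dry audio; the numpy sketch below does that, with a broadband gain standing in for the predicted per-band environment equalization. This is an illustrative simplification, not the patent's simulator or material-parameter optimizer.

```python
import numpy as np

def synth_impulse_response(rt60, sr=16000, length_s=1.5, seed=0):
    """Decaying noise: amplitude 1e-3 (-60 dB energy) reached at t = rt60."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(length_s * sr)) / sr
    decay = np.exp(-6.9078 * t / rt60)  # 6.9078 = ln(1000)
    return rng.standard_normal(t.size) * decay

def render_scene_aware(dry, rt60, gain_db=0.0, sr=16000):
    """Convolve dry audio with a synthetic IR; a broadband gain stands in
    for the predicted per-band environment equalization."""
    ir = synth_impulse_response(rt60, sr=sr)
    wet = np.convolve(dry, ir)[: dry.size]
    return wet * 10.0 ** (gain_db / 20.0)

sr = 16000
dry = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)  # 1 s test tone
wet = render_scene_aware(dry, rt60=0.6, gain_db=-3.0, sr=sr)
```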
-
Publication number: US20210248801A1
Publication date: 2021-08-12
Application number: US16788551
Application date: 2020-02-12
Applicant: ADOBE INC.
Inventor: Dingzeyu Li, Yang Zhou, Jose Ignacio Echevarria Vallespi, Elya Shechtman
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for generating an animation of a talking head from an input audio signal of speech and a representation (such as a static image) of a head to animate. Generally, a neural network can learn to predict a set of 3D facial landmarks that can be used to drive the animation. In some embodiments, the neural network can learn to detect different speaking styles in the input speech and account for the different speaking styles when predicting the 3D facial landmarks. Generally, template 3D facial landmarks can be identified or extracted from the input image or other representation of the head, and the template 3D facial landmarks can be used with successive windows of audio from the input speech to predict 3D facial landmarks and generate a corresponding animation with plausible 3D effects.
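A sketch of the driving loop this abstract implies: template 3D landmarks are extracted once from the input image, and successive windows of audio each yield one frame of predicted landmarks. `predict_landmarks` is a stand-in for the trained network; the window length and the 68-point landmark layout are assumptions.

```python
import numpy as np

def animate_talking_head(audio, template_landmarks, predict_landmarks,
                         sr=16000, fps=25, window_s=0.5):
    """Return one set of 3D facial landmarks per video frame.

    audio: mono waveform; template_landmarks: (68, 3) array extracted once
    from the input image; predict_landmarks: the trained model (a stand-in
    here) mapping an audio window plus the template to displaced landmarks.
    """
    hop = sr // fps                # audio samples per video frame
    half = int(window_s * sr / 2)  # window is centered on each frame
    n_frames = len(audio) // hop
    return [
        predict_landmarks(audio[max(0, i * hop - half): i * hop + half],
                          template_landmarks)
        for i in range(n_frames)
    ]

def identity_model(window, template):
    """Dummy network stand-in: returns the template unchanged."""
    return template

frames = animate_talking_head(np.zeros(16000), np.zeros((68, 3)),
                              identity_model)
print(len(frames))  # 25 landmark sets for 1 s of audio at 25 fps
```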