-
Publication Number: US12014548B2
Publication Date: 2024-06-18
Application Number: US17805075
Application Date: 2022-06-02
Applicant: ADOBE INC.
Inventor: Hijung Shin , Xue Bai , Aseem Agarwala , Joel R. Brandt , Jovan Popović , Lubomira Dontcheva , Dingzeyu Li , Joy Oakyung Kim , Seth Walker
CPC classification number: G06V20/49 , G06F18/231 , G06V20/41 , G06V20/46 , G10L25/78 , G11B27/002 , G11B27/19 , G06V20/44
Abstract: Embodiments are directed to segmentation and hierarchical clustering of video. In an example implementation, a video is ingested to generate a multi-level hierarchical segmentation of the video. In some embodiments, the finest level identifies a smallest interaction unit of the video—semantically defined video segments of unequal duration called clip atoms. Clip atom boundaries are detected in various ways. For example, speech boundaries are detected from audio of the video, and scene boundaries are detected from video frames of the video. The detected boundaries are used to define the clip atoms, which are hierarchically clustered to form a multi-level hierarchical representation of the video. In some cases, the hierarchical segmentation identifies a static, pre-computed, hierarchical set of video segments, where each level of the hierarchical segmentation identifies a complete set (i.e., covering the entire range of the video) of disjoint (i.e., non-overlapping) video segments with a corresponding level of granularity.
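Illustrative sketch (not the patented implementation): the snippet below shows, under assumed inputs, how detected speech and scene boundaries could be unioned into clip atoms and then merged level by level into a static hierarchy in which every level covers the whole video with disjoint segments. The function names, the greedy merge criterion, and the thresholds are assumptions.
```python
# Minimal sketch; boundary detection is assumed to happen elsewhere and to yield
# lists of timestamps in seconds. Not Adobe's implementation.

def build_clip_atoms(speech_bounds, scene_bounds, duration):
    """Union detected boundaries to define the finest level: clip atoms."""
    cuts = sorted({0.0, duration, *speech_bounds, *scene_bounds})
    return list(zip(cuts, cuts[1:]))

def merge_short_segments(segments, min_len):
    """Assumed clustering rule: absorb segments shorter than min_len into their neighbor."""
    merged = []
    for seg in segments:
        if merged and (merged[-1][1] - merged[-1][0]) < min_len:
            merged[-1] = (merged[-1][0], seg[1])
        else:
            merged.append(seg)
    return merged

def hierarchical_segmentation(speech_bounds, scene_bounds, duration, levels=3):
    """Finest level first; every level is a complete, disjoint cover of [0, duration]."""
    hierarchy = [build_clip_atoms(speech_bounds, scene_bounds, duration)]
    for level in range(1, levels):
        hierarchy.append(merge_short_segments(hierarchy[-1], min_len=2.0 * 2 ** level))
    return hierarchy
```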
-
Publication Number: US11630562B2
Publication Date: 2023-04-18
Application Number: US17017366
Application Date: 2020-09-10
Applicant: ADOBE INC.
Inventor: Seth Walker , Joy Oakyung Kim , Aseem Agarwala , Joel R. Brandt , Jovan Popović , Lubomira Dontcheva , Dingzeyu Li , Hijung Shin , Xue Bai
IPC: G06F3/04847 , G06F3/04845 , G06F3/0485
Abstract: Embodiments are directed to techniques for interacting with a hierarchical video segmentation using a video timeline. In some embodiments, the finest level of a hierarchical segmentation identifies the smallest interaction unit of a video—semantically defined video segments of unequal duration called clip atoms, and higher levels cluster the clip atoms into coarser sets of video segments. A presented video timeline is segmented based on one of the levels, and one or more segments are selected through interactions with the video timeline. For example, a click or tap on a video segment or a drag operation dragging along the timeline snaps selection boundaries to corresponding segment boundaries defined by the level. Navigating to a different level of the hierarchy transforms the selection into coarser or finer video segments defined by the level. Any operation can be performed on selected video segments, including playing back, trimming, or editing.
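Illustrative sketch (assumed API, not the patented UI code): snapping a raw drag range outward to the segment boundaries of whichever hierarchy level is currently displayed; switching to a coarser or finer level simply re-snaps the same raw times.
```python
import bisect

def level_boundaries(level_segments):
    """Flatten [(start, end), ...] for one level into a sorted list of boundary times."""
    return sorted({t for seg in level_segments for t in seg})

def snap_selection(drag_start, drag_end, level_segments):
    """Snap raw drag times outward to the nearest enclosing boundaries at this level."""
    bounds = level_boundaries(level_segments)
    lo = bounds[max(bisect.bisect_right(bounds, drag_start) - 1, 0)]
    hi = bounds[min(bisect.bisect_left(bounds, drag_end), len(bounds) - 1)]
    return lo, hi

# Navigating to a different level re-snaps the same raw drag:
# snap_selection(3.2, 7.9, hierarchy[level])
```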
-
Publication Number: US10650490B2
Publication Date: 2020-05-12
Application Number: US16379496
Application Date: 2019-04-09
Applicant: Adobe Inc.
Inventor: Xue Bai , Elya Shechtman , Sylvain Philippe Paris
Abstract: Environmental map generation techniques and systems are described. A digital image is scaled to achieve a target aspect ratio using a content-aware scaling technique. A canvas is generated that is dimensionally larger than the scaled digital image, and the scaled digital image is inserted within the canvas, thereby resulting in an unfilled portion of the canvas. An initially filled canvas is then generated by filling the unfilled portion using a content-aware fill technique based on the inserted digital image. A plurality of polar coordinate canvases is formed by transforming original coordinates of the canvas into polar coordinates. The unfilled portions of the polar coordinate canvases are filled using a content-aware fill technique that is initialized based on the initially filled canvas. An environmental map of the digital image is generated by combining a plurality of original coordinate canvas portions formed from the polar coordinate canvases.
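Illustrative pipeline skeleton only: the naive resize and row-replication fills below are NOT the content-aware techniques the abstract refers to, and the polar-coordinate passes are omitted; the stand-ins exist purely so the data flow (scale, insert into a larger canvas, fill the unfilled portion) is runnable.
```python
import numpy as np

def naive_scale(img, h, w):                      # stand-in for content-aware scaling
    ys = np.linspace(0, img.shape[0] - 1, h).astype(int)
    xs = np.linspace(0, img.shape[1] - 1, w).astype(int)
    return img[ys][:, xs]

def naive_fill(canvas, top, bottom):             # stand-in for content-aware fill
    canvas[:top] = canvas[top]                   # replicate the nearest filled row upward
    canvas[bottom:] = canvas[bottom - 1]         # and downward
    return canvas

def environment_map_skeleton(image, out_h=512, out_w=1024):
    scaled = naive_scale(image, out_h // 2, out_w)           # reach the 2:1 target aspect ratio
    canvas = np.zeros((out_h, out_w, 3), image.dtype)        # canvas larger than the scaled image
    top = (out_h - scaled.shape[0]) // 2
    canvas[top:top + scaled.shape[0]] = scaled               # insert, leaving unfilled bands
    return naive_fill(canvas, top, top + scaled.shape[0])    # initial fill; polar passes omitted
```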
-
Publication Number: US12223962B2
Publication Date: 2025-02-11
Application Number: US17967502
Application Date: 2022-10-17
Applicant: Adobe Inc.
Inventor: Justin Jonathan Salamon , Fabian David Caba Heilbron , Xue Bai , Aseem Omprakash Agarwala , Hijung Shin , Lubomira Assenova Dontcheva
IPC: G10L15/08 , G10L15/26 , G11B27/031
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for music-aware speaker diarization. In an example embodiment, one or more audio classifiers detect speech and music independently of each other, which facilitates detecting regions in an audio track that contain music but do not contain speech. These music-only regions are compared to the transcript, and any transcription and speakers that overlap in time with the music-only regions are removed from the transcript. In some embodiments, rather than having the transcript display the text from this detected music, a visual representation of the audio waveform is included in the corresponding regions of the transcript.
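Illustrative sketch of the post-processing step described above (assumed data shapes, not the product code): regions where music but no speech is detected are found by interval overlap, and transcript words falling inside those regions are dropped.
```python
def _overlaps(a, b):
    """True if half-open intervals (start, end) overlap."""
    return a[0] < b[1] and b[0] < a[1]

def music_only_regions(music_regions, speech_regions):
    """Keep music intervals that overlap no detected speech interval."""
    return [m for m in music_regions if not any(_overlaps(m, s) for s in speech_regions)]

def clean_transcript(words, music_regions, speech_regions):
    """words: [{"text", "start", "end", "speaker"}, ...]; drop words inside music-only regions."""
    music_only = music_only_regions(music_regions, speech_regions)
    return [w for w in words
            if not any(_overlaps((w["start"], w["end"]), m) for m in music_only)]
```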
-
Publication Number: US12119028B2
Publication Date: 2024-10-15
Application Number: US17967364
Application Date: 2022-10-17
Applicant: Adobe Inc.
Inventor: Xue Bai , Justin Jonathan Salamon , Aseem Omprakash Agarwala , Hijung Shin , Haoran Cai , Joel Richard Brandt , Lubomira Assenova Dontcheva , Cristin Ailidh Fraser
IPC: G11B27/036 , G06F40/166 , G10L15/26 , G10L25/57 , G11B27/34 , G06F3/0482 , G06F3/04845 , G06F3/0485
CPC classification number: G11B27/036 , G06F40/166 , G10L15/26 , G10L25/57 , G11B27/34 , G06F3/0482 , G06F3/04845 , G06F3/0485
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for identifying candidate boundaries for video segments, video segment selection using those boundaries, and text-based video editing of video segments selected via transcript interactions. In an example implementation, boundaries of detected sentences and words are extracted from a transcript, the boundaries are retimed into an adjacent speech gap to the location where voice or audio activity is at a minimum, and the resulting boundaries are stored as candidate boundaries for video segments. As such, a transcript interface presents the transcript, interprets input selecting transcript text as an instruction to select a video segment with corresponding boundaries selected from the candidate boundaries, and interprets commands that are traditionally thought of as text-based operations (e.g., cut, copy, paste) as instructions to perform a corresponding video editing operation using the selected video segment.
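Illustrative sketch of the boundary-retiming step (the activity curve, sampling rate, and function name are assumptions): a detected sentence or word boundary is moved into the adjacent speech gap at the point of lowest voice/audio activity, and the result is kept as a candidate boundary.
```python
import numpy as np

def retime_boundary(boundary_t, gap_start, gap_end, activity, fps=100):
    """Move boundary_t to the quietest time inside [gap_start, gap_end].
    activity: assumed per-frame voice/audio activity curve sampled at fps frames/sec."""
    lo, hi = int(gap_start * fps), int(gap_end * fps)
    if hi <= lo:
        return boundary_t                        # no usable gap: keep the original boundary
    quietest = lo + int(np.argmin(activity[lo:hi]))
    return quietest / fps

# e.g. candidate = retime_boundary(12.48, gap_start=12.30, gap_end=12.95, activity=activity_curve)
```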
-
Publication Number: US20220301313A1
Publication Date: 2022-09-22
Application Number: US17805076
Application Date: 2022-06-02
Applicant: ADOBE INC.
Inventor: Hijung Shin , Xue Bai , Aseem Agarwala , Joel R. Brandt , Jovan Popovic , Lubomira Dontcheva , Dingzeyu Li , Joy Oakyung Kim , Seth Walker
Abstract: Embodiments are directed to segmentation and hierarchical clustering of video. In an example implementation, a video is ingested to generate a multi-level hierarchical segmentation of the video. In some embodiments, the finest level identifies a smallest interaction unit of the video—semantically defined video segments of unequal duration called clip atoms. Clip atom boundaries are detected in various ways. For example, speech boundaries are detected from audio of the video, and scene boundaries are detected from video frames of the video. The detected boundaries are used to define the clip atoms, which are hierarchically clustered to form a multi-level hierarchical representation of the video. In some cases, the hierarchical segmentation identifies a static, pre-computed, hierarchical set of video segments, where each level of the hierarchical segmentation identifies a complete set (i.e., covering the entire range of the video) of disjoint (i.e., non-overlapping) video segments with a corresponding level of granularity.
-
Publication Number: US20220292831A1
Publication Date: 2022-09-15
Application Number: US17805080
Application Date: 2022-06-02
Applicant: ADOBE INC.
Inventor: Hijung Shin , Xue Bai , Aseem Agarwala , Joel R. Brandt , Jovan Popovic , Lubomira Dontcheva , Dingzeyu Li , Joy Oakyung Kim , Seth Walker
Abstract: Embodiments are directed to segmentation and hierarchical clustering of video. In an example implementation, a video is ingested to generate a multi-level hierarchical segmentation of the video. In some embodiments, the finest level identifies a smallest interaction unit of the video—semantically defined video segments of unequal duration called clip atoms. Clip atom boundaries are detected in various ways. For example, speech boundaries are detected from audio of the video, and scene boundaries are detected from video frames of the video. The detected boundaries are used to define the clip atoms, which are hierarchically clustered to form a multi-level hierarchical representation of the video. In some cases, the hierarchical segmentation identifies a static, pre-computed, hierarchical set of video segments, where each level of the hierarchical segmentation identifies a complete set (i.e., covering the entire range of the video) of disjoint (i.e., non-overlapping) video segments with a corresponding level of granularity.
-
Publication Number: US20220292830A1
Publication Date: 2022-09-15
Application Number: US17805075
Application Date: 2022-06-02
Applicant: ADOBE INC.
Inventor: Hijung Shin , Xue Bai , Aseem Agarwala , Joel R. Brandt , Jovan Popovic , Lubomira Dontcheva , Dingzeyu Li , Joy Oakyung Kim , Seth Walker
Abstract: Embodiments are directed to segmentation and hierarchical clustering of video. In an example implementation, a video is ingested to generate a multi-level hierarchical segmentation of the video. In some embodiments, the finest level identifies a smallest interaction unit of the video—semantically defined video segments of unequal duration called clip atoms. Clip atom boundaries are detected in various ways. For example, speech boundaries are detected from audio of the video, and scene boundaries are detected from video frames of the video. The detected boundaries are used to define the clip atoms, which are hierarchically clustered to form a multi-level hierarchical representation of the video. In some cases, the hierarchical segmentation identifies a static, pre-computed, hierarchical set of video segments, where each level of the hierarchical segmentation identifies a complete set (i.e., covering the entire range of the video) of disjoint (i.e., non-overlapping) video segments with a corresponding level of granularity.
-
Publication Number: US20220076707A1
Publication Date: 2022-03-10
Application Number: US17330702
Application Date: 2021-05-26
Applicant: ADOBE INC.
Inventor: Seth Walker , Hijung Shin , Cristin Ailidh Fraser , Aseem Agarwala , Lubomira Dontcheva , Joel Richard Brandt , Jovan Popovic , Joy Oakyung Kim , Justin Salamon , Jui-hsien Wang , Timothy Jeewun Ganter , Xue Bai , Dingzeyu Li
IPC: G11B27/036 , G06F3/0486 , G06F3/0482
Abstract: Embodiments are directed to a snap point segmentation that defines the locations of selection snap points for a selection of video segments. Candidate snap points are determined from boundaries of feature ranges of the video indicating when instances of detected features are present in the video. In some embodiments, candidate snap points are penalized for being separated by less than a minimum duration corresponding to a minimum pixel separation between consecutive snap points on a video timeline. The snap point segmentation is computed by solving a shortest path problem through a graph that models different snap point locations and separations. When a user clicks or taps on the video timeline and drags, a selection snaps to the snap points defined by the snap point segmentation. In some embodiments, the snap points are displayed during a drag operation and disappear when the drag operation is released.
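Illustrative sketch of selecting snap points with a shortest-path dynamic program over candidate times; the cost terms (a flat penalty when consecutive snap points are closer than min_sep, plus a small per-point cost) are assumptions standing in for the patented objective.
```python
def choose_snap_points(candidates, min_sep, short_penalty=10.0, point_cost=0.1):
    """candidates: sorted candidate times; returns the chosen subset (first and last kept)."""
    n = len(candidates)
    best = [float("inf")] * n        # best[j]: min cost of a path ending at candidate j
    prev = [-1] * n
    best[0] = 0.0
    for j in range(1, n):
        for i in range(j):
            sep = candidates[j] - candidates[i]
            cost = best[i] + point_cost + (short_penalty if sep < min_sep else 0.0)
            if cost < best[j]:
                best[j], prev[j] = cost, i
    path, j = [], n - 1
    while j != -1:                   # walk the predecessor chain back to the first candidate
        path.append(candidates[j])
        j = prev[j]
    return path[::-1]
```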
-
Publication Number: US20220076706A1
Publication Date: 2022-03-10
Application Number: US17330689
Application Date: 2021-05-26
Applicant: ADOBE INC.
Inventor: Seth Walker , Hijung Shin , Cristin Ailidh Fraser , Aseem Agarwala , Lubomira Dontcheva , Joel Richard Brandt , Jovan Popovic , Joy Oakyung Kim , Justin Salamon , Jui-hsien Wang , Timothy Jeewun Ganter , Xue Bai , Dingzeyu Li
IPC: G11B27/036 , G06F3/0482 , G06F3/0486
Abstract: Embodiments are directed to interactive tiles that represent video segments of a segmentation of a video. In some embodiments, each interactive tile represents a different video segment from a particular video segmentation (e.g., a default video segmentation). Each interactive tile includes a thumbnail (e.g., the first frame of the video segment represented by the tile), some transcript from the beginning of the video segment, a visualization of detected faces in the video segment, and one or more faceted timelines that visualize a category of detected features (e.g., a visualization of detected visual scenes, audio classifications, visual artifacts). In some embodiments, interacting with a particular interactive tile navigates to a corresponding portion of the video, adds a corresponding video segment to a selection, and/or scrubs through tile thumbnails.
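Illustrative data-model sketch of what one interactive tile aggregates, per the description above; the class and field names are assumptions for illustration only.
```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class FacetedTimeline:
    category: str                          # e.g. "visual scene", "audio class", "artifact"
    ranges: List[Tuple[float, float]]      # (start, end) spans where the feature is present

@dataclass
class InteractiveTile:
    segment: Tuple[float, float]           # (start, end) of the represented video segment
    thumbnail: str                         # e.g. path to the segment's first frame
    transcript_snippet: str                # transcript from the beginning of the segment
    detected_faces: List[str] = field(default_factory=list)
    facets: List[FacetedTimeline] = field(default_factory=list)
```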