-
Publication number: US20210134025A1
Publication date: 2021-05-06
Application number: US17150742
Filing date: 2021-01-15
Applicant: Adobe Inc.
Inventor: Jose Ignacio Echevarria Vallespi , Stephen DiVerdi , Hema Susmita Padala , Bernard Kerr , Dmitry Baranovskiy
IPC: G06T11/00 , G06F3/0484
Abstract: In some embodiments, a computing system generates a color gradient for data visualizations by displaying a color selection design interface. The computing system receives a user input identifying a start point and an end point of a color map path. The computing system computes a color map path between the start point and the end point, constrained to traverse colors having uniform transitions in one or more of lightness, chroma, and hue. The computing system selects a color gradient having a first color corresponding to the start point of the color map path, a second color corresponding to the end point, and additional colors corresponding to additional points along the path. The computing system generates a color map for visually representing a range of data values.
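The path computation described in the abstract can be illustrated with a minimal sketch (not Adobe's implementation): sample colors between two endpoints given as hypothetical (lightness, chroma, hue) tuples, interpolating each channel linearly and taking hue along the shorter arc of the hue circle so transitions stay uniform.

```python
def color_map_path(start, end, steps):
    """Return `steps` LCh colors evenly spaced between `start` and `end`.

    Hue is interpolated along the shorter arc of the hue circle so the
    hue transition is uniform; lightness and chroma are linear.
    """
    l0, c0, h0 = start
    l1, c1, h1 = end
    # Shorter-arc hue difference in degrees, in (-180, 180].
    dh = (h1 - h0 + 180.0) % 360.0 - 180.0
    colors = []
    for i in range(steps):
        t = i / (steps - 1) if steps > 1 else 0.0
        colors.append((l0 + t * (l1 - l0),
                       c0 + t * (c1 - c0),
                       (h0 + t * dh) % 360.0))
    return colors

# Endpoints straddle the 0-degree hue wrap; the path crosses it directly.
gradient = color_map_path((30.0, 40.0, 350.0), (80.0, 60.0, 30.0), 5)
```

Note the hue wrap handling: interpolating 350° to 30° naively would sweep 320° the long way around; the shorter-arc difference keeps the sweep to 40°.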
-
Publication number: US10957063B2
Publication date: 2021-03-23
Application number: US15935976
Filing date: 2018-03-26
Applicant: Adobe Inc. , Portland State University
Inventor: Stephen DiVerdi , Cuong Nguyen , Aaron Hertzmann , Feng Liu
Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating modified video content to reduce depth conflicts between user interface elements and video objects. For example, the disclosed systems can analyze an input video to identify feature points that designate objects within the input video and to determine the depths of the identified feature points. In addition, the disclosed systems can compare the depths of the feature points with a depth of a user interface element to determine whether there are any depth conflicts. In response to detecting a depth conflict, the disclosed systems can modify the depth of the user interface element to reduce or avoid the depth conflict. Furthermore, the disclosed systems can apply a blurring effect to an area around a user interface element to reduce the effect of depth conflicts.
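The conflict check in the abstract reduces to a depth comparison. A toy sketch (hypothetical names and depth convention, not the patented system): if any feature point is closer to the viewer than the UI element, pull the element in front of the nearest one.

```python
def resolve_ui_depth(ui_depth, feature_depths, margin=0.1):
    """Return a UI-element depth with no conflicts.

    Depths are distances from the viewer (smaller = closer). A feature
    point at or in front of the element is a conflict; the element is
    moved `margin` units in front of the closest conflicting point.
    """
    conflicts = [d for d in feature_depths if d <= ui_depth]
    if not conflicts:
        return ui_depth  # no depth conflict detected; leave unchanged
    return min(conflicts) - margin

# A feature point at depth 1.8 occludes a UI element at depth 2.0.
new_depth = resolve_ui_depth(2.0, [3.5, 1.8, 2.6])
```

The blurring fallback mentioned in the abstract would apply when moving the element is undesirable, e.g. to soften the boundary region instead of changing geometry.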
-
Publication number: US10592776B2
Publication date: 2020-03-17
Application number: US15427598
Filing date: 2017-02-08
Applicant: Adobe Inc.
Inventor: Stephen DiVerdi , Matthew Douglas Hoffman , Ardavan Saeedi
Abstract: The present disclosure is directed towards methods and systems for determining multimodal image edits for a digital image. The systems and methods receive a digital image and analyze the digital image. The systems and methods further generate a feature vector of the digital image, wherein each value of the feature vector represents a respective feature of the digital image. Additionally, based on the feature vector and determined latent variables, the systems and methods generate a plurality of determined image edits for the digital image, which includes determining a plurality of sets of potential image attribute values and selecting a plurality of sets of determined image attribute values from the plurality of sets of potential image attribute values, wherein each set of determined image attribute values comprises a determined image edit of the plurality of image edits.
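The shape of the pipeline can be sketched with a toy stand-in (not the trained model from the disclosure): one feature vector plus several sampled latent variables yields several distinct sets of attribute values, each set being one candidate edit. The linear map and attribute names here are invented for illustration.

```python
import random

def propose_edits(features, num_edits, seed=0):
    """Map a feature vector plus sampled latent variables to several
    sets of image-attribute values; each dict is one candidate edit."""
    rng = random.Random(seed)
    base = sum(features) / len(features)  # stand-in feature summary
    proposals = []
    for _ in range(num_edits):
        latent = rng.uniform(-1.0, 1.0)  # one sampled latent variable
        proposals.append({
            "exposure": round(base + 0.5 * latent, 3),
            "contrast": round(base - 0.3 * latent, 3),
            "saturation": round(0.2 * latent, 3),
        })
    return proposals

edits = propose_edits([0.2, 0.4, 0.6], num_edits=3)
```

Varying the latent variable while holding the feature vector fixed is what makes the output multimodal: each draw lands in a different region of the edit space.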
-
Publication number: US20190295280A1
Publication date: 2019-09-26
Application number: US15935976
Filing date: 2018-03-26
Applicant: Adobe Inc. , Portland State University
Inventor: Stephen DiVerdi , Cuong Nguyen , Aaron Hertzmann , Feng Liu
Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating modified video content to reduce depth conflicts between user interface elements and video objects. For example, the disclosed systems can analyze an input video to identify feature points that designate objects within the input video and to determine the depths of the identified feature points. In addition, the disclosed systems can compare the depths of the feature points with a depth of a user interface element to determine whether there are any depth conflicts. In response to detecting a depth conflict, the disclosed systems can modify the depth of the user interface element to reduce or avoid the depth conflict. Furthermore, the disclosed systems can apply a blurring effect to an area around a user interface element to reduce the effect of depth conflicts.
-
Publication number: US11178374B2
Publication date: 2021-11-16
Application number: US16428201
Filing date: 2019-05-31
Applicant: Adobe Inc.
Inventor: Stephen DiVerdi , Seth Walker , Oliver Wang , Cuong Nguyen
IPC: H04N13/111 , H04N13/282 , H04N13/383
Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that generate and dynamically change filter parameters for a frame of a 360-degree video based on detecting a field of view from a computing device. As a computing device rotates or otherwise changes orientation, for instance, the disclosed systems can detect a field of view and interpolate one or more filter parameters corresponding to nearby spatial keyframes of the 360-degree video to generate view-specific-filter parameters. By generating and storing filter parameters for spatial keyframes corresponding to different times and different view directions, the disclosed systems can dynamically adjust color grading or other visual effects using interpolated, view-specific-filter parameters to render a filtered version of the 360-degree video.
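The interpolation step can be sketched in miniature (hypothetical structure, not the disclosed system): weight each spatial keyframe's filter parameter by inverse angular distance between its view direction and the current field of view, here reduced to a single yaw angle.

```python
def interpolate_filter(view_yaw, keyframes):
    """Interpolate a view-specific filter parameter.

    `keyframes` is a list of (yaw_degrees, filter_value) pairs; each
    keyframe is weighted by the inverse of its shortest angular
    distance from the current view direction.
    """
    weights, total = [], 0.0
    for yaw, value in keyframes:
        # Shortest angular distance on the circle, in degrees.
        d = abs((view_yaw - yaw + 180.0) % 360.0 - 180.0)
        if d < 1e-9:
            return value  # looking exactly at a keyframe
        w = 1.0 / d
        weights.append((w, value))
        total += w
    return sum(w * v for w, v in weights) / total

# Halfway between keyframes at 0 and 90 degrees.
param = interpolate_filter(45.0, [(0.0, 0.2), (90.0, 0.8)])
```

A full implementation would interpolate over time as well as view direction and over a vector of grading parameters, but the weighting idea is the same.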
-
Publication number: US10949057B2
Publication date: 2021-03-16
Application number: US16847765
Filing date: 2020-04-14
Applicant: Adobe Inc.
Inventor: Stephen DiVerdi , Seth Walker , Brian Williams
IPC: G06F3/0481 , G06T19/00 , G06F3/0346 , G06F9/451 , G06T19/20
Abstract: Techniques are described for modifying a virtual reality environment to include or remove contextual information describing a virtual object within the virtual reality environment. The virtual object includes a user interface object associated with a development user interface of the virtual reality environment. In some cases, the contextual information includes information describing functions of controls included on the user interface object. In some cases, the virtual reality environment is modified based on a distance between the location of the user interface object and a location of a viewpoint within the virtual reality environment. Additionally or alternatively, the virtual reality environment is modified based on the elapsed time during which the user interface object has remained in a location.
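The two triggers in the abstract combine into one display decision. A toy sketch (names and thresholds are hypothetical): show contextual help when the UI object is close to the viewpoint, or when it has rested in one place long enough.

```python
def show_contextual_info(distance, dwell_time,
                         near_threshold=1.5, dwell_threshold=2.0):
    """Decide whether to display contextual information for a virtual
    UI object.

    `distance` is the viewpoint-to-object distance (scene units);
    `dwell_time` is how long the object has stayed in one location
    (seconds). Either trigger alone is sufficient.
    """
    return distance <= near_threshold or dwell_time >= dwell_threshold
```

Driving the removal of contextual information from the same predicate keeps the show and hide behaviors symmetric as the user moves around.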
-
Publication number: US20210042965A1
Publication date: 2021-02-11
Application number: US16533308
Filing date: 2019-08-06
Applicant: Adobe Inc.
Inventor: Ankit Phogat , Vineet Batra , Sayan Ghosh , Stephen DiVerdi , Scott Cohen
Abstract: Certain embodiments involve flow-based color transfers from a source graphic to target graphic. For instance, a palette flow is computed that maps colors of a target color palette to colors of the source color palette (e.g., by minimizing an earth-mover distance with respect to the source and target color palettes). In some embodiments, such color palettes are extracted from vector graphics using path and shape data. To modify the target graphic, the target color from the target graphic is mapped, via the palette flow, to a modified target color using color information of the source color palette. A modification to the target graphic is performed (e.g., responsive to a preview function or recoloring command) by recoloring an object in the target color with the modified target color.
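For equal-size palettes, the flow idea can be shown in a much-simplified form: a brute-force optimal one-to-one assignment minimizing total color distance, a stand-in for the earth-mover formulation in the abstract (which allows fractional flows between palettes of different sizes).

```python
from itertools import permutations

def palette_flow(target_palette, source_palette):
    """Map each target RGB color to a source RGB color.

    Chooses, among all one-to-one assignments, the one with minimum
    total squared color distance. Only practical for tiny palettes.
    """
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    best, best_cost = None, float("inf")
    for perm in permutations(source_palette):
        cost = sum(dist2(t, s) for t, s in zip(target_palette, perm))
        if cost < best_cost:
            best, best_cost = perm, cost
    return dict(zip(target_palette, best))

# Each target color maps to the nearby source color; recoloring an
# object then means replacing its color with flow[color].
flow = palette_flow([(255, 0, 0), (0, 0, 255)],
                    [(200, 30, 30), (20, 20, 240)])
```

A real earth-mover solver would replace the factorial enumeration with a linear program, but the resulting mapping serves the same recoloring role.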
-
Publication number: US10671238B2
Publication date: 2020-06-02
Application number: US15816280
Filing date: 2017-11-17
Applicant: Adobe Inc.
Inventor: Stephen DiVerdi , Seth Walker , Brian Williams
IPC: G06F3/0481 , G06T19/00 , G06F3/0346 , G06F9/451 , G06T19/20
Abstract: Techniques are described for modifying a virtual reality environment to include or remove contextual information describing a virtual object within the virtual reality environment. The virtual object includes a user interface object associated with a development user interface of the virtual reality environment. In some cases, the contextual information includes information describing functions of controls included on the user interface object. In some cases, the virtual reality environment is modified based on a distance between the location of the user interface object and a location of a viewpoint within the virtual reality environment. Additionally or alternatively, the virtual reality environment is modified based on the elapsed time during which the user interface object has remained in a location.
-
Publication number: US10347238B2
Publication date: 2019-07-09
Application number: US15796292
Filing date: 2017-10-27
Applicant: Adobe Inc. , The Trustees of Princeton University
Inventor: Zeyu Jin , Gautham J. Mysore , Stephen DiVerdi , Jingwan Lu , Adam Finkelstein
Abstract: Systems and techniques are disclosed for synthesizing a new word or short phrase such that it blends seamlessly in the context of insertion or replacement in an existing narration. In one such embodiment, a text-to-speech synthesizer is utilized to say the word or phrase in a generic voice. Voice conversion is then performed on the generic voice to convert it into a voice that matches the narration. An editor and interface are described that support fully automatic synthesis, selection among a candidate set of alternative pronunciations, fine control over edit placements and pitch profiles, and guidance by the editor's own voice.
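One small, generic piece of such an editor is splicing the synthesized segment into the narration without audible clicks. A minimal sketch (not the patented synthesizer or voice converter), with audio as plain float sample lists and a linear crossfade at each edit boundary:

```python
def insert_with_crossfade(narration, segment, at, fade=4):
    """Splice `segment` into `narration` at sample index `at`.

    The last `fade` samples before the edit point crossfade into the
    segment's opening samples, and the segment's closing samples
    crossfade into the narration's continuation, avoiding clicks.
    Requires at >= fade and len(segment) >= 2 * fade.
    """
    head, tail = narration[:at], narration[at:]
    out = head[:-fade] if fade else head[:]
    for i in range(fade):  # fade narration out, segment in
        t = (i + 1) / fade
        out.append((1 - t) * head[-fade + i] + t * segment[i])
    out.extend(segment[fade:len(segment) - fade])
    for i in range(fade):  # fade segment out, narration back in
        t = (i + 1) / fade
        out.append((1 - t) * segment[len(segment) - fade + i] + t * tail[i])
    out.extend(tail[fade:])
    return out

# Insert 6 samples of silence into a constant signal of 8 samples.
out = insert_with_crossfade([1.0] * 8, [0.0] * 6, at=4, fade=2)
```

The overlap shortens the result by `2 * fade` samples relative to pure concatenation; a production editor would align the crossfade to pitch periods rather than fixed sample counts.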
-
Publication number: US20230252746A1
Publication date: 2023-08-10
Application number: US17666806
Filing date: 2022-02-08
Applicant: Adobe Inc.
Inventor: Kazi Rubaiat Habib , Tianyi Wang , Stephen DiVerdi , Li-Yi Wei
IPC: G06T19/20 , G06T7/73 , G06F3/04815 , G06T15/20
CPC classification number: G06T19/20 , G06F3/04815 , G06T7/73 , G06T15/20 , G06T2219/2004
Abstract: Certain aspects and features of this disclosure relate to virtual 3D pointing and manipulation. For example, video communication is established between a presenter client device and a viewer client device. A presenter video image is captured. A 3D image of a 3D object is rendered on the client devices and a presenter avatar is rendered on at least the viewer client device. The presenter avatar includes at least a portion of the presenter video image. When a positional input is detected at the presenter client device, the system renders, on the viewer client device, an articulated virtual appurtenance associated with the positional input, the 3D image, and the presenter avatar. A virtual interaction between the articulated virtual appurtenance and the 3D image appears to a viewer as naturally positioned for the interaction with respect to the viewer.
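The geometric core of aiming such a pointer can be sketched simply (hypothetical, not the disclosed renderer): given the avatar's hand position and the picked point on the 3D object, compute the unit direction along which to render the articulated virtual appurtenance.

```python
import math

def pointing_direction(hand, target):
    """Unit vector from the presenter avatar's hand toward the picked
    point on the 3D object, as a list of three floats."""
    d = [t - h for h, t in zip(hand, target)]
    norm = math.sqrt(sum(c * c for c in d))
    if norm == 0.0:
        raise ValueError("hand and target coincide; direction undefined")
    return [c / norm for c in d]

# Hand at shoulder height, target on the object in front and to the side.
direction = pointing_direction((0.0, 1.0, 0.0), (3.0, 1.0, 4.0))
```

Recomputing this direction per frame in the viewer's coordinate frame is what makes the interaction appear naturally positioned from the viewer's perspective, independent of the presenter's camera placement.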
-