Proficiency based tutorial modification

    Publication Number: US10831516B1

    Publication Date: 2020-11-10

    Application Number: US16813441

    Application Date: 2020-03-09

    Applicant: Adobe Inc.

    Abstract: In implementations of proficiency based tutorial modification, a computing device implements a tutorial system to receive a user modification of a digital image. A difference between the user modification and an application modification of the digital image is determined. The tutorial system generates a proficiency score for an editing tool based on the difference between the user modification and the application modification, and the proficiency score indicates the user's proficiency in using the editing tool. The tutorial system generates a pre-modified input image for a tutorial depicting a modification applied to an input image to be modified in the tutorial using the editing tool based on the proficiency score for the editing tool being greater than a proficiency threshold.
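
    Illustration: a minimal Python sketch of the scoring-and-threshold flow described in this abstract. The scoring formula, the threshold value, and all names are assumptions made for illustration, not the patented method.

        import numpy as np

        def apply_editing_tool(image):
            # Stand-in editing tool: a simple brightness boost.
            return np.clip(image.astype(int) + 20, 0, 255).astype(np.uint8)

        def proficiency_score(user_edit, app_edit):
            # Score in [0, 1]; higher means the user's edit is closer to the
            # application's reference edit of the same image.
            diff = np.abs(user_edit.astype(float) - app_edit.astype(float))
            return 1.0 - diff.mean() / 255.0

        def prepare_tutorial_image(input_image, score, threshold=0.8):
            # If the user is already proficient with the tool, give the tutorial a
            # pre-modified input image so that editing step is presented as done.
            return apply_editing_tool(input_image) if score > threshold else input_image

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            original = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
            app_edit = apply_editing_tool(original)
            user_edit = np.clip(app_edit.astype(int) + rng.integers(-5, 6, app_edit.shape),
                                0, 255).astype(np.uint8)
            score = proficiency_score(user_edit, app_edit)
            print(f"proficiency score: {score:.3f}")
            tutorial_input = prepare_tutorial_image(original, score)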

    Content-aware selection
    Invention Grant

    Publication Number: US10817739B2

    Publication Date: 2020-10-27

    Application Number: US16264387

    Application Date: 2019-01-31

    Applicant: Adobe Inc.

    Abstract: An image editing program can include a content-aware selection system. The content-aware selection system can enable a user to select an area of an image using a label or a tag that identifies an object in the image, rather than having to make a selection area based on coordinates and/or pixel values. The program can receive a digital image and metadata that describes an object in the image. The program can further receive a label, and can determine from the metadata that the label is associated with the object. The program can then select a bounding box for the object, and identify, in the bounding box, pixels that represent the object. The program can then output a selection area that surrounds the pixels.
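
    Illustration: a short Python sketch of the label-to-selection flow in this abstract. The metadata schema, the foreground test, and all names are illustrative assumptions rather than the program's actual implementation.

        from dataclasses import dataclass
        import numpy as np

        @dataclass
        class ObjectMetadata:
            label: str
            bbox: tuple  # (x0, y0, x1, y1) in pixel coordinates

        def select_by_label(image, metadata, label):
            # Return a boolean selection mask covering pixels of the labeled object.
            mask = np.zeros(image.shape[:2], dtype=bool)
            for obj in metadata:
                if obj.label != label:
                    continue
                x0, y0, x1, y1 = obj.bbox
                region = image[y0:y1, x0:x1]
                # Assumed foreground test: pixels brighter than the region's mean.
                foreground = region.mean(axis=-1) > region.mean()
                mask[y0:y1, x0:x1] = foreground
            return mask

        if __name__ == "__main__":
            img = np.zeros((100, 100, 3), dtype=np.uint8)
            img[20:60, 30:70] = 200  # a bright block standing in for a "dog"
            meta = [ObjectMetadata(label="dog", bbox=(25, 15, 75, 65))]
            selection = select_by_label(img, meta, "dog")
            print(f"selected {selection.sum()} pixels")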

    Enhanced automatic perspective and horizon correction

    Publication Number: US10652472B2

    Publication Date: 2020-05-12

    Application Number: US15902899

    Application Date: 2018-02-22

    Applicant: Adobe Inc.

    Abstract: Embodiments relate to automatic perspective and horizon correction. Generally, a camera captures an image as an image file. Capture-time orientation data from one or more sensors is used to determine the camera's attitude with respect to a defined reference frame. The orientation data and/or attitude can be registered into metadata of the image file and used to generate axis lines representative of the camera's reference frame. A reference line such as a horizon can be automatically identified from detected line segments in the image that align with one of the axis lines within a predetermined angular threshold. The reference line can be used to generate a camera transformation from a starting orientation reflected by the camera attitude to a transformed orientation that aligns the reference line with the reference frame. The transformation can be applied to the image to automatically correct perspective distortion and/or horizon tilt in the image.
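
    Illustration: a simplified Python sketch of the horizon-selection step under stated assumptions: detected line segments are given as endpoint pairs, the sensor-derived roll angle is known, and only the 2D tilt correction is shown. Names and the angular threshold are illustrative, not taken from the patent.

        import math

        def segment_angle(x0, y0, x1, y1):
            # Angle of a segment in degrees, normalized to (-90, 90].
            angle = math.degrees(math.atan2(y1 - y0, x1 - x0))
            if angle <= -90:
                angle += 180
            elif angle > 90:
                angle -= 180
            return angle

        def find_horizon(segments, camera_roll_deg, angular_threshold=5.0):
            # Pick the detected segment closest in angle to the sensor-derived
            # horizontal axis, within the allowed angular threshold.
            candidates = [s for s in segments
                          if abs(segment_angle(*s) - camera_roll_deg) <= angular_threshold]
            if not candidates:
                return None
            return min(candidates, key=lambda s: abs(segment_angle(*s) - camera_roll_deg))

        def leveling_rotation(horizon_segment):
            # Rotation (degrees) that aligns the chosen horizon with the image x-axis.
            return -segment_angle(*horizon_segment)

        if __name__ == "__main__":
            detected = [(0, 10, 100, 14), (0, 0, 0, 100), (10, 50, 90, 95)]
            roll_from_sensors = 2.0  # camera attitude recorded in the image metadata
            horizon = find_horizon(detected, roll_from_sensors)
            if horizon is not None:
                print(f"rotate image by {leveling_rotation(horizon):.2f} degrees")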

    Images for the visually impaired
    Invention Grant

    Publication Number: US12039882B2

    Publication Date: 2024-07-16

    Application Number: US17451426

    Application Date: 2021-10-19

    Applicant: Adobe Inc.

    Abstract: Some implementations include methods for communicating features of images to visually impaired users. An image to be displayed on a touch sensitive screen of a computing device may include one or more objects. Each of the one or more objects may be associated with a bounding box. A contact with the image may be detected via the touch sensitive screen. The contact may be determined to be within a bounding box associated with a first object of the one or more objects. Responsive to detecting the contact to be within the bounding box associated with the first object, a caption of the first object may be caused to become audible and the touch sensitive screen may be caused to vibrate based on a vibration pattern unique to the first object.
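
    Illustration: a simplified Python sketch of the hit-testing and feedback flow in this abstract. The text-to-speech and haptics calls are stand-ins (print statements), and all names are assumptions.

        from dataclasses import dataclass

        @dataclass
        class ImageObject:
            caption: str
            bbox: tuple               # (x0, y0, x1, y1) in screen coordinates
            vibration_pattern: tuple  # per-object pattern of on/off durations in ms

        def speak(text):
            print(f"[tts] {text}")  # placeholder for a text-to-speech call

        def vibrate(pattern):
            print(f"[haptics] {pattern}")  # placeholder for a haptic-feedback call

        def on_touch(x, y, objects):
            # Find the touched object, announce its caption, and play its pattern.
            for obj in objects:
                x0, y0, x1, y1 = obj.bbox
                if x0 <= x <= x1 and y0 <= y <= y1:
                    speak(obj.caption)
                    vibrate(obj.vibration_pattern)
                    return obj
            return None

        if __name__ == "__main__":
            scene = [ImageObject("a dog on the beach", (10, 10, 120, 90), (40, 20, 40)),
                     ImageObject("a red umbrella", (150, 30, 220, 200), (120,))]
            on_touch(60, 50, scene)  # falls inside the dog's bounding box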

    Customizing digital content tutorials based on tool proficiency

    Publication Number: US11947983B1

    Publication Date: 2024-04-02

    Application Number: US17930154

    Application Date: 2022-09-07

    Applicant: Adobe Inc.

    CPC classification number: G06F9/453 G06T11/60

    Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for customizing digital content tutorials for a user within a digital editing application based on user experience with editing tools. The disclosed system determines proficiency levels for a plurality of different portions of a digital content tutorial corresponding to a digital content editing task. The disclosed system generates tool proficiency scores associated with the user in a digital editing application in connection with the portions of the digital content tutorial. Specifically, the disclosed system generates the tool proficiency scores based on usage of tools corresponding to the portions. Additionally, the disclosed system generates a mapping for the user based on the tool proficiency scores associated with the user and the proficiency levels of the portions of the digital content tutorial. The disclosed system provides a customized digital content tutorial for display at a client device according to the mapping.
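
    Illustration: a hypothetical Python sketch of the mapping step: each tutorial portion carries a required proficiency level, each tool carries a user score, and portions the user already masters are condensed or skipped. The threshold logic and names are illustrative assumptions.

        from dataclasses import dataclass

        @dataclass
        class TutorialPortion:
            title: str
            tool: str
            proficiency_level: float  # skill the portion assumes, in [0, 1]

        def build_mapping(portions, tool_scores):
            # Mark each portion "skip", "summary", or "full" from the user's scores.
            mapping = {}
            for portion in portions:
                score = tool_scores.get(portion.tool, 0.0)
                if score >= portion.proficiency_level + 0.2:
                    mapping[portion.title] = "skip"
                elif score >= portion.proficiency_level:
                    mapping[portion.title] = "summary"
                else:
                    mapping[portion.title] = "full"
            return mapping

        if __name__ == "__main__":
            portions = [TutorialPortion("Select the subject", "quick_select", 0.3),
                        TutorialPortion("Adjust the curves", "curves", 0.6),
                        TutorialPortion("Apply a mask", "masking", 0.5)]
            scores = {"quick_select": 0.9, "curves": 0.65, "masking": 0.1}
            print(build_mapping(portions, scores))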

    CONTENT-SPECIFIC-PRESET EDITS FOR DIGITAL IMAGES

    Publication Number: US20240078730A1

    Publication Date: 2024-03-07

    Application Number: US18504821

    Application Date: 2023-11-08

    Applicant: Adobe Inc.

    CPC classification number: G06T11/60 G06N20/00 G06T11/20 G06T7/11 G06T2210/12

    Abstract: The present disclosure describes systems, non-transitory computer-readable media, and methods for generating object-specific-preset edits to be later applied to other digital images depicting a same object type or applying a previously generated object-specific-preset edit to an object of the same object type within a target digital image. For example, in some cases, the disclosed systems generate an object-specific-preset edit by determining a region of a particular localized edit in an edited digital image, identifying an edited object corresponding to the localized edit, and storing in a digital-image-editing document an object tag for the edited object and instructions for the localized edit. In certain implementations, the disclosed systems further apply such an object-specific-preset edit to a target object in a target digital image by determining transformed-positioning parameters for a localized edit from the object-specific-preset edit to the target object.
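
    Illustration: a minimal Python sketch of storing and re-applying an object-specific preset. The "transformed-positioning parameters" are modeled here as a simple bounding-box rescale; that modeling choice and all names are assumptions for illustration only.

        from dataclasses import dataclass

        @dataclass
        class LocalizedEdit:
            object_tag: str     # e.g. "sky" or "face"
            region: tuple       # (x0, y0, x1, y1) of the edit in the source image
            source_bbox: tuple  # bounding box of the edited object in the source
            instructions: dict  # e.g. {"exposure": 0.4}

        def transform_region(edit, target_bbox):
            # Reposition the edit region from the source object's box to the target's.
            sx0, sy0, sx1, sy1 = edit.source_bbox
            tx0, ty0, tx1, ty1 = target_bbox
            scale_x = (tx1 - tx0) / (sx1 - sx0)
            scale_y = (ty1 - ty0) / (sy1 - sy0)
            ex0, ey0, ex1, ey1 = edit.region
            return (tx0 + (ex0 - sx0) * scale_x, ty0 + (ey0 - sy0) * scale_y,
                    tx0 + (ex1 - sx0) * scale_x, ty0 + (ey1 - sy0) * scale_y)

        def apply_preset(edit, target_objects):
            # Apply the stored edit to every target object carrying the same tag.
            return [(transform_region(edit, bbox), edit.instructions)
                    for tag, bbox in target_objects if tag == edit.object_tag]

        if __name__ == "__main__":
            preset = LocalizedEdit("sky", region=(0, 0, 400, 120),
                                   source_bbox=(0, 0, 400, 150),
                                   instructions={"exposure": 0.4, "dehaze": 0.2})
            targets = [("sky", (0, 0, 800, 260)), ("person", (300, 200, 500, 600))]
            print(apply_preset(preset, targets))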

    IMAGES FOR THE VISUALLY IMPAIRED
    Invention Application

    Publication Number: US20230121539A1

    Publication Date: 2023-04-20

    Application Number: US17451426

    Application Date: 2021-10-19

    Applicant: Adobe Inc.

    Abstract: Some implementations include methods for communicating features of images to visually impaired users. An image to be displayed on a touch sensitive screen of a computing device may include one or more objects. Each of the one or more objects may be associated with a bounding box. A contact with the image may be detected via the touch sensitive screen. The contact may be determined to be within a bounding box associated with a first object of the one or more objects. Responsive to detecting the contact to be within the bounding box associated with the first object, a caption of the first object may be caused to become audible and the touch sensitive screen may be caused to vibrate based on a vibration pattern unique to the first object.

    ADAPTIVE SEARCH RESULTS FOR MULTIMEDIA SEARCH QUERIES

    Publication Number: US20220414149A1

    Publication Date: 2022-12-29

    Application Number: US17902457

    Application Date: 2022-09-02

    Applicant: Adobe Inc.

    Abstract: A system identifies a video comprising frames associated with content tags. The system detects features for each frame of the video. The system identifies, based on the detected features, scenes of the video. The system determines, for each frame for each scene, a frame score that indicates a number of content tags that match the other frames within the scene. The system selects, for each scene, a set of key frames that represent the scene based on the determined frame scores. The system receives a search query comprising a keyword. The system generates, for display, search results responsive to the search query including a dynamic preview of the video. The dynamic preview comprises an arrangement of frames of the video corresponding to each scene of the video. Each of the arrangement of frames is selected from the selected set of key frames representing the respective scene of the video.
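
    Illustration: a short Python sketch of the per-scene frame scoring and key-frame selection described above. The tag-overlap score and the top-k choice are assumptions, not the system's actual ranking.

        def frame_score(frame_tags, other_frames_tags):
            # Count how many of this frame's tags also appear in other frames
            # of the same scene.
            others = set().union(*other_frames_tags) if other_frames_tags else set()
            return len(set(frame_tags) & others)

        def select_key_frames(scene, k=2):
            # Return the indices of the k highest-scoring frames in a scene.
            scores = []
            for i, tags in enumerate(scene):
                rest = [t for j, t in enumerate(scene) if j != i]
                scores.append((frame_score(tags, rest), i))
            return [i for _, i in sorted(scores, reverse=True)[:k]]

        if __name__ == "__main__":
            # Each frame is represented by its detected content tags.
            scene = [{"beach", "dog"}, {"beach", "dog", "ball"}, {"sky"}, {"dog", "ball"}]
            print(select_key_frames(scene))  # frames sharing the most tags with the scene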

    Generating tool-based smart-tutorials

    Publication Number: US11468786B2

    Publication Date: 2022-10-11

    Application Number: US16654737

    Application Date: 2019-10-16

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods that generate dynamic tool-based animated tutorials. In particular, in one or more embodiments, the disclosed systems generate an animated tutorial in response to receiving a request associated with an image editing tool. The disclosed systems then extract steps from existing general tutorials that pertain to the image editing tool to generate tool-specific animated tutorials. In at least one embodiment, the disclosed systems utilize a clustering algorithm in conjunction with image parameters to provide a set of these generated animated tutorials that showcase diverse features and/or attributes of the image editing tool based on measured aesthetic gains resulting from application of the image editing tool within the animated tutorials.
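
    Illustration: a rough Python sketch of selecting a diverse set of tool-specific tutorials: cluster the tutorials by their image-parameter vectors, then keep the one with the largest measured aesthetic gain from each cluster. scikit-learn's KMeans stands in for the unspecified clustering algorithm, and all names are assumptions.

        import numpy as np
        from sklearn.cluster import KMeans

        def select_diverse_tutorials(param_vectors, aesthetic_gains, n_clusters=3):
            # Return one tutorial index per cluster, chosen by largest aesthetic gain.
            X = np.asarray(param_vectors, dtype=float)
            labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
            gains = np.asarray(aesthetic_gains)
            selected = []
            for cluster in range(n_clusters):
                members = np.where(labels == cluster)[0]
                selected.append(int(members[np.argmax(gains[members])]))
            return selected

        if __name__ == "__main__":
            # Each row: parameter values applied by the editing tool in one tutorial.
            params = [[0.1, 0.9], [0.2, 0.8], [0.9, 0.1],
                      [0.8, 0.2], [0.5, 0.5], [0.45, 0.55]]
            gains = [0.3, 0.6, 0.4, 0.7, 0.2, 0.5]
            print(select_diverse_tutorials(params, gains))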

    CONTENT-SPECIFIC-PRESET EDITS FOR DIGITAL IMAGES

    Publication Number: US20220222875A1

    Publication Date: 2022-07-14

    Application Number: US17147931

    Application Date: 2021-01-13

    Applicant: Adobe Inc.

    Abstract: The present disclosure describes systems, non-transitory computer-readable media, and methods for generating object-specific-preset edits to be later applied to other digital images depicting a same object type or applying a previously generated object-specific-preset edit to an object of the same object type within a target digital image. For example, in some cases, the disclosed systems generate an object-specific-preset edit by determining a region of a particular localized edit in an edited digital image, identifying an edited object corresponding to the localized edit, and storing in a digital-image-editing document an object tag for the edited object and instructions for the localized edit. In certain implementations, the disclosed systems further apply such an object-specific-preset edit to a target object in a target digital image by determining transformed-positioning parameters for a localized edit from the object-specific-preset edit to the target object.
