MODALITY ADAPTIVE INFORMATION RETRIEVAL

    Publication number: US20220230061A1

    Publication date: 2022-07-21

    Application number: US17153130

    Application date: 2021-01-20

    Applicant: Adobe Inc.

    Abstract: In some embodiments, a multimodal computing system receives a query and identifies, from source documents, text passages and images that are relevant to the query. The multimodal computing system accesses a multimodal question-answering model that includes a textual stream of language models and a visual stream of language models. Each of the textual stream and the visual stream contains a set of transformer-based models, and each transformer-based model includes a cross-attention layer that takes data generated by both the textual and visual streams of language models as input. The multimodal computing system identifies text relevant to the query by applying the textual stream to the text passages and computes, using the visual stream, a relevance score for each image with respect to the query. The multimodal computing system further generates a response to the query that includes the text and/or an image selected according to the relevance scores.
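
    The cross-attention layer described above can be sketched in a few lines: queries from one stream attend over features produced by the other stream. The following is a minimal single-head sketch with illustrative dimensions and function names, not the patented architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values):
    """Single-head cross-attention: `queries` from one stream attend
    over `keys_values` produced by the other stream."""
    d = queries.shape[-1]
    scores = queries @ keys_values.T / np.sqrt(d)   # (Tq, Tkv)
    weights = softmax(scores, axis=-1)              # rows sum to 1
    return weights @ keys_values                    # (Tq, d)

rng = np.random.default_rng(0)
text_feats = rng.normal(size=(5, 16))    # 5 text tokens, dim 16
image_feats = rng.normal(size=(3, 16))   # 3 image regions, dim 16

# Each stream conditions on the other, as in the abstract:
text_out = cross_attention(text_feats, image_feats)
image_out = cross_attention(image_feats, text_feats)
```

    Stacking such layers in both directions inside the transformer blocks of each stream is what lets the textual and visual streams inform each other.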

    SEMANTICALLY-AWARE IMAGE EXTRAPOLATION
    Publication type: Invention Publication

    Publication number: US20230169632A1

    Publication date: 2023-06-01

    Application number: US17521503

    Application date: 2021-11-08

    Applicant: Adobe Inc.

    CPC classification number: G06T5/50 G06T7/181

    Abstract: Certain aspects and features of this disclosure relate to semantically-aware image extrapolation. In one example, an input image is segmented to produce an input segmentation map of object instances in the input image. An object generation network is used to generate an extrapolated semantic label map for an extrapolated image. The extrapolated semantic label map includes instances in the original image and instances that will appear in an outpainted region of the extrapolated image. A panoptic label map is derived from coordinates of output instances in the extrapolated image and used to identify partial instances and boundaries. Instance-aware context normalization is used to apply one or more characteristics from the input image to the outpainted region to maintain semantic continuity. The extrapolated image includes the original image and the outpainted region and can be rendered or stored for future use.
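
    The instance-aware context normalization step can be illustrated by matching per-channel feature statistics of the outpainted region to those of the input region. This is a hedged sketch of plain channel-wise statistic matching, not the exact normalization in the disclosure; all names and shapes are illustrative.

```python
import numpy as np

def context_normalize(outpainted, reference, eps=1e-5):
    """Shift and scale the outpainted features so that their per-channel
    statistics match those of the reference (input-image) region."""
    ref_mean = reference.mean(axis=(0, 1), keepdims=True)
    ref_std = reference.std(axis=(0, 1), keepdims=True)
    out_mean = outpainted.mean(axis=(0, 1), keepdims=True)
    out_std = outpainted.std(axis=(0, 1), keepdims=True)
    normalized = (outpainted - out_mean) / (out_std + eps)
    return normalized * ref_std + ref_mean

rng = np.random.default_rng(1)
input_region = rng.normal(loc=0.3, scale=0.1, size=(32, 32, 3))
outpaint_region = rng.normal(loc=0.0, scale=1.0, size=(32, 16, 3))

# Pull the outpainted region toward the input image's statistics,
# maintaining the semantic continuity described above.
harmonized = context_normalize(outpaint_region, input_region)
```

    Restricting the reference statistics to the pixels of a single object instance (rather than the whole input image) is one way to make such normalization instance-aware.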

    Generating summary content tuned to a target characteristic using a word generation model

    Publication number: US11062087B2

    Publication date: 2021-07-13

    Application number: US16262655

    Application date: 2019-01-30

    Applicant: Adobe Inc.

    Abstract: Certain embodiments involve tuning summaries of input text to a target characteristic using a word generation model. For example, a method for generating a tuned summary using a word generation model includes generating a learned subspace representation of input text and a target characteristic token associated with the input text by applying an encoder to the input text and the target characteristic token. The method also includes generating, by a decoder, each word of a tuned summary of the input text from the learned subspace representation and from feedback about preceding words of the tuned summary. The tuned summary is tuned to target characteristics represented by the target characteristic token.
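
    The generation loop described in this abstract, encoding the target characteristic token alongside the input and feeding previously generated words back into each decoding step, can be sketched with stub components. `encode` and `next_word` below are placeholders invented for illustration, not the patented model.

```python
def encode(tokens):
    # Stub encoder: the "learned subspace representation" is just the
    # token tuple here; a real model would map it to dense vectors.
    return tuple(tokens)

def next_word(subspace, preceding_words):
    # Stub decoder step: emits the first input word not yet generated,
    # skipping control tokens such as "<short>".
    for word in subspace:
        if word not in preceding_words and not word.startswith("<"):
            return word
    return "<eos>"

def tuned_summary(input_text, characteristic_token, max_len=5):
    # The target characteristic token is encoded together with the input.
    subspace = encode([characteristic_token] + input_text.split())
    summary = []
    for _ in range(max_len):
        # Each word is generated from the subspace representation plus
        # feedback about the words generated so far.
        word = next_word(subspace, summary)
        if word == "<eos>":
            break
        summary.append(word)
    return " ".join(summary)

print(tuned_summary("model tunes summaries to length", "<short>", max_len=3))
# → model tunes summaries
```

    The point of the sketch is the data flow: the characteristic token is part of the encoder input, so the decoder's conditioning signal carries both content and the target characteristic.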

    Structure-based transformers with localization and encoding for chart question answering

    Publication number: US11386114B2

    Publication date: 2022-07-12

    Application number: US17076484

    Application date: 2020-10-21

    Applicant: Adobe Inc.

    Abstract: Embodiments are disclosed for determining an answer to a query associated with a graphical representation of data. In particular, in one or more embodiments, the disclosed systems and methods comprise obtaining a visual embedding for a graphical representation of data, the visual embedding representing a plurality of graphical elements. The one or more embodiments further include obtaining a query embedding for a query associated with the graphical representation of data, the query embedding representing a plurality of textual elements of the query with at least one textual element substituted with an identifier for at least one graphical element of the plurality of graphical elements. The one or more embodiments further include generating a chart sequence from the visual embedding and a query sequence from the query embedding, generating an output sequence based on the chart and query sequences, and determining an answer to the query from the output sequence.
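
    The substitution of textual elements with identifiers for graphical elements can be illustrated as a simple preprocessing pass over the query string: mentions of chart labels are replaced with placeholder identifiers that tie the query to specific elements of the chart. The `<elem_i>` placeholder format is an assumption for illustration.

```python
import re

def substitute_chart_tokens(query, element_labels):
    """Replace chart-label mentions in the query with placeholder
    identifiers for the matching graphical elements."""
    substituted = query
    for idx, label in enumerate(element_labels):
        pattern = re.compile(re.escape(label), flags=re.IGNORECASE)
        substituted = pattern.sub(f"<elem_{idx}>", substituted)
    return substituted

query = "How much larger is Revenue than Costs in 2020?"
labels = ["Revenue", "Costs"]  # labels localized from the chart
print(substitute_chart_tokens(query, labels))
# → How much larger is <elem_0> than <elem_1> in 2020?
```

    Grounding the query text to localized chart elements in this way lets the downstream sequence model reason over the chart and query jointly rather than over raw label strings.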

    GENERATING SUMMARY CONTENT TUNED TO A TARGET CHARACTERISTIC USING A WORD GENERATION MODEL

    Publication number: US20210312129A1

    Publication date: 2021-10-07

    Application number: US17348257

    Application date: 2021-06-15

    Applicant: Adobe Inc.

    Abstract: Certain embodiments involve tuning summaries of input text to a target characteristic using a word generation model. For example, a method for generating a tuned summary using a word generation model includes generating a learned subspace representation of input text and a target characteristic token associated with the input text by applying an encoder to the input text and the target characteristic token. The method also includes generating, by a decoder, each word of a tuned summary of the input text from the learned subspace representation and from feedback about preceding words of the tuned summary. The tuned summary is tuned to target characteristics represented by the target characteristic token.

    GENERATING SUMMARY CONTENT TUNED TO A TARGET CHARACTERISTIC USING A WORD GENERATION MODEL

    Publication number: US20200242197A1

    Publication date: 2020-07-30

    Application number: US16262655

    Application date: 2019-01-30

    Applicant: Adobe Inc.

    Abstract: Certain embodiments involve tuning summaries of input text to a target characteristic using a word generation model. For example, a method for generating a tuned summary using a word generation model includes generating a learned subspace representation of input text and a target characteristic token associated with the input text by applying an encoder to the input text and the target characteristic token. The method also includes generating, by a decoder, each word of a tuned summary of the input text from the learned subspace representation and from feedback about preceding words of the tuned summary. The tuned summary is tuned to target characteristics represented by the target characteristic token.

    Modality adaptive information retrieval

    Publication number: US12198048B2

    Publication date: 2025-01-14

    Application number: US17153130

    Application date: 2021-01-20

    Applicant: Adobe Inc.

    Abstract: In some embodiments, a multimodal computing system receives a query and identifies, from source documents, text passages and images that are relevant to the query. The multimodal computing system accesses a multimodal question-answering model that includes a textual stream of language models and a visual stream of language models. Each of the textual stream and the visual stream contains a set of transformer-based models, and each transformer-based model includes a cross-attention layer that takes data generated by both the textual and visual streams of language models as input. The multimodal computing system identifies text relevant to the query by applying the textual stream to the text passages and computes, using the visual stream, a relevance score for each image with respect to the query. The multimodal computing system further generates a response to the query that includes the text and/or an image selected according to the relevance scores.

    Generating summary content tuned to a target characteristic using a word generation model

    Publication number: US11657225B2

    Publication date: 2023-05-23

    Application number: US17348257

    Application date: 2021-06-15

    Applicant: Adobe Inc.

    CPC classification number: G06F40/284 G06N20/00

    Abstract: Systems and methods for generating a tuned summary using a word generation model. An example method includes receiving, at a decoder of the word generation model, a training data learned subspace representation of training data. The method also includes identifying tunable linguistic characteristics of the word generation model and training the decoder to output a training tuned summary of the training data learned subspace representation based on at least one of the tunable linguistic characteristics. The method further includes receiving an input text and a target characteristic token, and generating, by the trained decoder of the word generation model, each word of a tuned summary of the input text from a learned subspace representation and from feedback about preceding words of the tuned summary, wherein the tuned summary is tuned to target characteristics represented by the target characteristic token.
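
    The tunable linguistic characteristics can be pictured as a small vocabulary of control tokens from which the target characteristic token is drawn. The specific characteristics and token names below are assumptions made for illustration only; the patent does not enumerate them here.

```python
# Hypothetical control-token vocabulary for tunable characteristics.
TUNABLE_CHARACTERISTICS = {
    "length": ["<short>", "<medium>", "<long>"],
    "formality": ["<casual>", "<formal>"],
    "voice": ["<neutral>", "<active>"],
}

def characteristic_token(characteristic, setting):
    """Look up the control token the decoder is trained to condition on."""
    tokens = TUNABLE_CHARACTERISTICS[characteristic]
    for token in tokens:
        if token == f"<{setting}>":
            return token
    raise ValueError(f"unknown setting {setting!r} for {characteristic!r}")

print(characteristic_token("length", "short"))
# → <short>
```

    During training, pairing each training summary with the token describing its characteristics is what teaches the decoder to respect the token supplied at inference time.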
