ATTRIBUTING ASPECTS OF GENERATED VISUAL CONTENTS TO TRAINING EXAMPLES

    Publication No.: US20240273865A1

    Publication Date: 2024-08-15

    Application No.: US18387677

    Filing Date: 2023-11-07

    IPC Classification: G06V10/764 G06V10/774

    CPC Classification: G06V10/764 G06V10/774

    Abstract: Systems, methods and non-transitory computer readable media for attributing aspects of generated visual contents to training examples are provided. A first visual content generated using a generative model may be received. The generative model may be a result of training a machine learning model using a plurality of training examples. Properties of an aspect of the first visual content and properties of visual contents associated with the plurality of training examples may be used to attribute the aspect of the first visual content to a subgroup of the plurality of training examples. For each of the sources associated with the visual contents associated with the training examples of the subgroup, a data-record associated with the source may be updated based on the attribution of the aspect of the first visual content.
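
    The abstract describes attributing an aspect of a generated content to a subgroup of training examples and then updating a data-record per source. As a rough illustration only (the abstract does not disclose an algorithm), the Python sketch below assumes precomputed embeddings, cosine similarity, and a hypothetical per-source record layout.

        from collections import defaultdict
        import numpy as np

        def attribute_aspect(aspect_embedding, training_embeddings, sources, top_k=5):
            """Return the subgroup most similar to the aspect, plus per-source records."""
            sims = training_embeddings @ aspect_embedding / (
                np.linalg.norm(training_embeddings, axis=1) * np.linalg.norm(aspect_embedding) + 1e-12
            )
            subgroup = np.argsort(sims)[::-1][:top_k]
            # One data-record per source represented in the subgroup (hypothetical layout).
            records = defaultdict(lambda: {"attributions": 0, "total_similarity": 0.0})
            for idx in subgroup:
                rec = records[sources[idx]]
                rec["attributions"] += 1
                rec["total_similarity"] += float(sims[idx])
            return subgroup, dict(records)

        # Toy usage: random vectors stand in for real visual-feature embeddings.
        rng = np.random.default_rng(0)
        embeddings = rng.normal(size=(100, 64))            # 100 training examples
        sources = [f"source_{i % 4}" for i in range(100)]  # hypothetical source labels
        subgroup, records = attribute_aspect(rng.normal(size=64), embeddings, sources)
        print(subgroup, records)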

    IDENTIFYING PROMPTS USED FOR TRAINING OF INFERENCE MODELS

    Publication No.: US20240273300A1

    Publication Date: 2024-08-15

    Application No.: US18444120

    Filing Date: 2024-02-16

    IPC Classification: G06F40/30 G06F40/40

    CPC Classification: G06F40/30 G06F40/40

    Abstract: Systems, methods and non-transitory computer readable media for identifying prompts used for training of inference models are provided. In some examples, a specific textual prompt in a natural language may be received. Further, data based on at least one parameter of an inference model may be accessed. The inference model may be a result of training a machine learning model using a plurality of training examples. Each training example of the plurality of training examples may include a respective textual content and a respective media content. The data and the specific textual prompt may be analyzed to determine a likelihood that the specific textual prompt is included in at least one training example of the plurality of training examples. A digital signal indicative of the likelihood that the specific textual prompt is included in at least one training example of the plurality of training examples may be generated.
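
    One way to read the described likelihood determination is as a membership-inference test on the prompt. The sketch below is a speculative illustration: target_loglik and reference_loglik are assumed callables standing in for data derived from the inference model's parameters, and the sigmoid mapping is an arbitrary choice.

        import math

        def membership_likelihood(prompt, target_loglik, reference_loglik, scale=1.0):
            """Map the target-vs-reference log-likelihood ratio of a prompt into (0, 1)."""
            ratio = target_loglik(prompt) - reference_loglik(prompt)
            return 1.0 / (1.0 + math.exp(-scale * ratio))  # higher => more likely seen in training

        # Toy stand-ins: the "target" model strongly favours one memorised prompt.
        memorised = "a red bicycle leaning against a brick wall"
        target = lambda p: -0.1 * len(p) + (5.0 if p == memorised else 0.0)
        reference = lambda p: -0.1 * len(p)

        print(membership_likelihood(memorised, target, reference))          # close to 1
        print(membership_likelihood("a cat on a sofa", target, reference))  # about 0.5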

    GENERATING VISUAL CONTENT CONSISTENT WITH ASPECTS OF A VISUAL LANGUAGE

    Publication No.: US20220156983A1

    Publication Date: 2022-05-19

    Application No.: US17519334

    Filing Date: 2021-11-04

    Inventors: Yair ADATO, Gal JACOBI

    IPC Classification: G06T11/00 G06K9/00

    Abstract: Systems, methods and non-transitory computer readable media for generating visual content consistent with aspects of a visual brand language are provided. An indication of at least one aspect of a visual brand language may be received. Further, an indication of a desired visual content may be received. A new visual content consistent with the visual brand language and corresponding to the desired visual content may be generated based on the indication of the at least one aspect of the visual brand language and the indication of the desired visual content. The new visual content may be provided in a format ready for presentation.
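
    As a toy illustration of keeping generated pixels consistent with one aspect of a visual brand language, the sketch below projects a stand-in "generated" image onto a hypothetical brand colour palette. The palette, the image, and the nearest-colour rule are assumptions, not the disclosed method.

        import numpy as np

        def project_to_palette(image, palette):
            """Replace every pixel with the nearest colour from the brand palette."""
            flat = image.reshape(-1, 3).astype(float)
            dists = np.linalg.norm(flat[:, None, :] - palette[None, :, :], axis=2)
            return palette[np.argmin(dists, axis=1)].reshape(image.shape).astype(np.uint8)

        brand_palette = np.array([[0, 82, 155], [255, 255, 255], [242, 169, 0]])  # hypothetical brand colours
        generated = np.random.default_rng(1).integers(0, 256, size=(64, 64, 3))   # stand-in for model output
        branded = project_to_palette(generated, brand_palette)
        print(branded.shape, np.unique(branded.reshape(-1, 3), axis=0))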

    ATTRIBUTING GENERATED VISUAL CONTENT TO TRAINING EXAMPLES

    Publication No.: US20240104697A1

    Publication Date: 2024-03-28

    Application No.: US18531608

    Filing Date: 2023-12-06

    Abstract: Systems, methods and non-transitory computer readable media for attributing generated visual content to training examples are provided. A first visual content generated using a generative model may be received. The generative model may be associated with a plurality of training examples. Each training example may be associated with a visual content. Properties of the first visual content may be determined. Each visual content associated with a training example may be analyzed to determine properties of the visual content. The properties of the first visual content and the properties of the visual contents associated with the plurality of training examples may be used to attribute the first visual content to a subgroup of the plurality of training examples. The visual contents associated with the training examples of the subgroup may be associated with a source. A data-record associated with the source may be updated based on the attribution.
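
    A crude way to picture the property-comparison step is to use colour histograms as the "properties" and cosine similarity as the comparison; both choices are illustrative assumptions. The toy data below is biased so that the generated content attributes to the reddish half of the training set.

        import numpy as np

        def colour_histogram(image, bins=8):
            """Per-channel colour histogram used here as the visual-content 'properties'."""
            hist = [np.histogram(image[..., c], bins=bins, range=(0, 256), density=True)[0]
                    for c in range(3)]
            return np.concatenate(hist)

        def attribute(generated, training_images, threshold=0.9):
            """Indices of training examples whose properties resemble the generated content."""
            g = colour_histogram(generated)
            subgroup = []
            for i, img in enumerate(training_images):
                t = colour_histogram(img)
                sim = float(g @ t / (np.linalg.norm(g) * np.linalg.norm(t) + 1e-12))
                if sim >= threshold:   # arbitrary cut-off for the toy example
                    subgroup.append(i)
            return subgroup

        # Toy data: the first ten training images are reddish, the last ten bluish.
        rng = np.random.default_rng(2)
        reddish = [(rng.integers(0, 256, size=(32, 32, 3)) * np.array([1.0, 0.3, 0.3])).astype(np.uint8)
                   for _ in range(10)]
        bluish = [(rng.integers(0, 256, size=(32, 32, 3)) * np.array([0.3, 0.3, 1.0])).astype(np.uint8)
                  for _ in range(10)]
        generated = (rng.integers(0, 256, size=(32, 32, 3)) * np.array([1.0, 0.3, 0.3])).astype(np.uint8)
        print(attribute(generated, reddish + bluish))  # expected: roughly indices 0-9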

    IDENTIFYING VISUAL CONTENTS USED FOR TRAINING OF INFERENCE MODELS

    Publication No.: US20230154153A1

    Publication Date: 2023-05-18

    Application No.: US17986378

    Filing Date: 2022-11-14

    IPC Classification: G06V10/764

    CPC Classification: G06V10/764

    Abstract: Systems, methods and non-transitory computer readable media for identifying visual contents used for training of inference models are provided. A specific visual content may be received. Data based on at least one parameter of an inference model may be received. The inference model may be a result of training a machine learning algorithm using a plurality of training examples. Each training example of the plurality of training examples may include a visual content. The data and the specific visual content may be analyzed to determine a likelihood that the specific visual content is included in at least one training example of the plurality of training examples. A digital signal indicative of the likelihood that the specific visual content is included in at least one training example of the plurality of training examples may be generated.
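
    The likelihood determination can be pictured as a calibration-based membership signal: if the model's loss on the specific content is unusually low compared with reference contents, membership is more likely. In the sketch below, model_loss is an assumed callable and the percentile-style score is an illustrative choice.

        import numpy as np

        def membership_signal(content, model_loss, calibration_contents):
            """Fraction of calibration contents on which the model does worse than on `content`."""
            loss = model_loss(content)
            calibration_losses = np.array([model_loss(c) for c in calibration_contents])
            return float(np.mean(calibration_losses > loss))  # near 1.0 suggests membership

        # Toy stand-in: the "loss" is simply the distance to one memorised image.
        rng = np.random.default_rng(3)
        memorised = rng.normal(size=(16, 16))
        model_loss = lambda img: float(np.mean((img - memorised) ** 2))
        calibration = [rng.normal(size=(16, 16)) for _ in range(200)]

        print(membership_signal(memorised, model_loss, calibration))                  # ~1.0
        print(membership_signal(rng.normal(size=(16, 16)), model_loss, calibration))  # ~0.5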

    VISUAL CONTENT OPTIMIZATION

    Publication No.: US20220156991A1

    Publication Date: 2022-05-19

    Application No.: US17519500

    Filing Date: 2021-11-04

    IPC Classification: G06T11/40 G06Q30/02 G06N3/04

    Abstract: Systems, methods and non-transitory computer readable media for optimizing visual contents are provided. A particular mathematical object corresponding to a particular visual content in a mathematical space including a plurality of mathematical objects corresponding to visual contents may be determined. The mathematical space and the particular mathematical object may be used to obtain first and second mathematical objects of the plurality of mathematical objects. The visual content corresponding to the first mathematical object may be used in a communication with a first user and the visual content corresponding to the second mathematical object may be used in a communication with a second user. Indications of the reactions of the first and second users to the communications may be received. A third visual content may be obtained based on the reactions. The third visual content may be used in a communication with a third user.
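
    The abstract sketches an explore-then-refine loop over a mathematical space of contents. The sketch below assumes the space is a matrix of embeddings, that user reactions arrive as scalar scores, and that the third content is chosen by interpolating toward the better-performing candidate; all of these are assumptions for illustration.

        import numpy as np

        def pick_candidates(anchor, space, k=2):
            """Indices of the k embeddings nearest to the particular content's embedding."""
            return np.argsort(np.linalg.norm(space - anchor, axis=1))[:k]

        def refine(space, first_idx, second_idx, first_reaction, second_reaction):
            """Blend the two candidates toward the one with the better user reaction."""
            total = (first_reaction + second_reaction) or 1.0
            w = first_reaction / total
            target = w * space[first_idx] + (1.0 - w) * space[second_idx]
            dists = np.linalg.norm(space - target, axis=1)
            dists[[first_idx, second_idx]] = np.inf   # force a genuinely new, third content
            return int(np.argmin(dists))

        rng = np.random.default_rng(4)
        space = rng.normal(size=(500, 32))              # embeddings of available visual contents
        anchor = space[0] + 0.01 * rng.normal(size=32)  # the "particular" visual content
        first, second = pick_candidates(anchor, space)
        third = refine(space, first, second, first_reaction=0.8, second_reaction=0.2)
        print(first, second, third)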

    GENERATING LOOPED VIDEO CLIPS

    Publication No.: US20220156317A1

    Publication Date: 2022-05-19

    Application No.: US17519366

    Filing Date: 2021-11-04

    Abstract: Systems, methods and non-transitory computer readable media for generating looped video clips are provided. A still image may be received. The still image may be analyzed to generate a series of images. The series of images may include at least first, middle and last images. The first image may be substantially visually similar to the last image, and the middle image may be visually different from the first and last images. The series of images may be provided. Playing the series of images in a video clip that starts with the first image and finishes with the last image, and repeating the video clip from the first image immediately after completing the playing of the video clip with the last image, may create a visually smooth loop in which the transition from the last image to the first image is visually indistinguishable from the transitions between frames within the video clip.
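
    The key constraint is that the first and last frames are nearly identical while a middle frame differs. The sketch below fakes the motion with a periodic brightness modulation purely to show the loop constraint; a real system would synthesise motion from the still image.

        import numpy as np

        def looped_frames(still, n_frames=30, amplitude=0.15):
            """Frames whose first and last entries differ by no more than one in-clip step."""
            frames = []
            for i in range(n_frames):
                phase = 2.0 * np.pi * i / n_frames            # returns to 0 after a full cycle
                gain = 1.0 + amplitude * np.sin(phase)
                frames.append(np.clip(still * gain, 0, 255).astype(np.uint8))
            return frames

        still = np.random.default_rng(5).integers(0, 256, size=(48, 48, 3)).astype(float)
        frames = looped_frames(still)
        # The step from frames[-1] back to frames[0] is about the same size as the step
        # between consecutive frames, so repeating the clip looks seamless.
        print(np.abs(frames[-1].astype(int) - frames[0].astype(int)).max())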

    ATTRIBUTING GENERATED VISUAL CONTENT TO TRAINING EXAMPLES

    Publication No.: US20240153039A1

    Publication Date: 2024-05-09

    Application No.: US17986347

    Filing Date: 2022-11-14

    Abstract: Systems, methods and non-transitory computer readable media for attributing generated visual content to training examples are provided. A first visual content generated using a generative model may be received. The generative model may be associated with a plurality of training examples. Each training example may be associated with a visual content. Properties of the first visual content may be determined. Each visual content associated with a training example may be analyzed to determine properties of the visual content. The properties of the first visual content and the properties of the visual contents associated with the plurality of training examples may be used to attribute the first visual content to a subgroup of the plurality of training examples. The visual contents associated with the training examples of the subgroup may be associated with a source. A data-record associated with the source may be updated based on the attribution.
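
    The final step, updating a data-record per source once the attribution is made, could look like the sketch below. The JSON layout, the file path, and the equal credit split are hypothetical.

        import json
        from collections import Counter
        from pathlib import Path

        def update_source_records(record_path, subgroup_sources):
            """Record one attribution event, splitting credit equally across the subgroup."""
            path = Path(record_path)
            records = json.loads(path.read_text()) if path.exists() else {}
            share = 1.0 / len(subgroup_sources)
            for source, count in Counter(subgroup_sources).items():
                rec = records.setdefault(source, {"attributed_contents": 0, "credit": 0.0})
                rec["attributed_contents"] += 1
                rec["credit"] += share * count
            path.write_text(json.dumps(records, indent=2))
            return records

        # Sources of the visual contents in the attributed subgroup (hypothetical labels).
        print(update_source_records("source_records.json", ["stock_library_a", "stock_library_a", "artist_b"]))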

    GENERATING AND ORCHESTRATING MOTION OF VISUAL CONTENTS IN AN INTERACTIVE INTERFACE TO GUIDE USER ATTENTION

    Publication No.: US20220157341A1

    Publication Date: 2022-05-19

    Application No.: US17519525

    Filing Date: 2021-11-04

    IPC Classification: G11B27/031 G06K9/00 G06F3/01

    Abstract: Systems, methods and non-transitory computer readable media for generating and orchestrating motion of visual contents are provided. A plurality of visual contents may be accessed. Data indicative of a layout of the plurality of visual contents in a user interface may be accessed. A sequence for the plurality of visual contents may be determined based on the layout. For each visual content of the plurality of visual contents, the visual content may be analyzed to generate a video clip including a motion of at least one object depicted in the visual content. A presentation of the plurality of visual contents in the user interface may be caused. The determined sequence for the plurality of visual contents may be used to orchestrate a series of playbacks of the generated video clips.
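
    Determining a sequence from the layout and orchestrating playback might be as simple as reading-order sorting plus a schedule of start times, which is what the sketch below assumes; the PlacedClip structure, the sort rule, and the fixed gap are illustrative.

        from dataclasses import dataclass

        @dataclass
        class PlacedClip:
            name: str
            x: float          # position of the visual content in the interface layout
            y: float
            duration: float   # seconds of generated motion

        def orchestrate(clips, gap=0.25):
            """Order clips top-to-bottom, then left-to-right, and assign playback start times."""
            ordered = sorted(clips, key=lambda c: (round(c.y, 1), c.x))
            schedule, t = [], 0.0
            for clip in ordered:
                schedule.append((clip.name, t))
                t += clip.duration + gap
            return schedule

        layout = [
            PlacedClip("hero_banner", x=0.0, y=0.0, duration=2.0),
            PlacedClip("side_card", x=0.7, y=0.0, duration=1.5),
            PlacedClip("footer_strip", x=0.0, y=0.9, duration=1.0),
        ]
        print(orchestrate(layout))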

    SYNTHETIC VISUAL CONTENT CREATION AND MODIFICATION USING TEXTUAL INPUT

    Publication No.: US20220156994A1

    Publication Date: 2022-05-19

    Application No.: US17519495

    Filing Date: 2021-11-04

    Inventors: Yair ADATO, Gal JACOBI

    Abstract: Systems, methods and non-transitory computer readable media for generating and modifying synthetic visual content using textual input are provided. One or more keywords may be received from a user. The one or more keywords may be used to generate a plurality of textual descriptions. Each generated textual description may correspond to a possible visual content. The generated plurality of textual descriptions may be presented to the user through a user interface that enables the user to modify the presented textual descriptions. A modification to at least one of the plurality of textual descriptions may be received from the user, thereby obtaining a modified plurality of textual descriptions. A selection of one textual description of the modified plurality of textual descriptions may be received from the user. A plurality of visual contents corresponding to the selected textual description may be presented to the user.
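
    The keyword-to-description-to-selection flow can be pictured with a few templates and a stub generator, as in the sketch below; the templates and the generate_image placeholder are assumptions, not the disclosed model.

        def candidate_descriptions(keywords, styles=("studio photo", "watercolor", "3D render")):
            """Expand user keywords into editable textual descriptions."""
            subject = " ".join(keywords)
            return [f"a {style} of {subject}" for style in styles]

        def generate_image(description):
            """Stub standing in for a text-to-image generative model."""
            return f"<image for: {description}>"

        descriptions = candidate_descriptions(["red", "bicycle"])
        descriptions[1] = "a watercolor of a red bicycle at sunset"   # the user's modification
        selected = descriptions[1]                                    # the user's selection
        print([generate_image(selected) for _ in range(3)])           # candidate visual contents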