AUTOMATED GENERATION OF TRAINING DATA FOR CONTEXTUALLY GENERATED PERCEPTIONS

    Publication No.: US20240386578A1

    Publication Date: 2024-11-21

    Application No.: US18786926

    Filing Date: 2024-07-29

    Abstract: Embodiments of the present invention train multiple Perception models to predict contextual metadata (tags) with respect to target content items. By extracting context from content items, and generating associations among the Perception models, individual Perceptions trigger one another based on the extracted context to generate a more robust set of contextual metadata. A Perception Identifier predicts core tags that make coarse distinctions among content items at relatively higher levels of abstraction, while also triggering other Perception models to predict additional perception tags at lower levels of abstraction. A Dense Classifier identifies sub-content items at various levels of abstraction, and facilitates the iterative generation of additional dense tags across integrated Perceptions. Class-specific thresholds are generated with respect to individual classes of each Perception to address the inherent sampling bias that results from the varying number and quality of training samples (across different classes of content items) available to train each Perception.
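    The abstract describes a cascade in which a Perception Identifier predicts coarse core tags and those core tags trigger more specialized Perception models. Below is a minimal Python sketch of that triggering flow, assuming hypothetical class names, data shapes, and a 0.5 default confidence threshold; none of these details are taken from the patent itself.

# Hypothetical sketch of cascaded Perception triggering (not the patented implementation).
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Perception:
    """A model that maps a content item to (tag, confidence) pairs."""
    name: str
    predict: Callable[[bytes], Dict[str, float]]


@dataclass
class PerceptionGraph:
    """Associates core tags from the Perception Identifier with follow-on Perceptions."""
    identifier: Perception
    triggers: Dict[str, List[Perception]] = field(default_factory=dict)

    def tag(self, item: bytes, threshold: float = 0.5) -> Dict[str, float]:
        # Core tags make coarse, high-level distinctions among content items.
        tags = {t: s for t, s in self.identifier.predict(item).items() if s >= threshold}
        # Each accepted core tag triggers the specialized Perceptions associated with it,
        # which contribute additional, lower-level perception tags.
        for core_tag in list(tags):
            for perception in self.triggers.get(core_tag, []):
                for t, s in perception.predict(item).items():
                    if s >= threshold:
                        tags[t] = max(s, tags.get(t, 0.0))
        return tags

    In this sketch a specialized Perception only runs when its associated core tag clears the threshold, which mirrors the abstract's description of Perceptions triggering one another based on extracted context.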

    Contextually Generated Perceptions
    Invention Application

    Publication No.: US20200250223A1

    Publication Date: 2020-08-06

    Application No.: US16263326

    Filing Date: 2019-01-31

    Abstract: Embodiments of the present invention train multiple Perception models to predict contextual metadata (tags) with respect to target content items. By extracting context from content items, and generating associations among the Perception models, individual Perceptions trigger one another based on the extracted context to generate a more robust set of contextual metadata. A Perception Identifier predicts core tags that make coarse distinctions among content items at relatively higher levels of abstraction, while also triggering other Perception models to predict additional perception tags at lower levels of abstraction. A Dense Classifier identifies sub-content items at various levels of abstraction, and facilitates the iterative generation of additional dense tags across integrated Perceptions. Class-specific thresholds are generated with respect to individual classes of each Perception to address the inherent sampling bias that results from the varying number and quality of training samples (across different classes of content items) available to train each Perception.
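    The abstracts also state that class-specific thresholds are generated for the individual classes of each Perception to offset the sampling bias caused by uneven numbers and quality of training samples. One plausible, purely hypothetical way to realize this is to choose each class's decision threshold separately on held-out validation scores, for example by maximizing per-class F1:

# Hypothetical per-class threshold selection (illustrative only; the patent abstract
# does not disclose the exact procedure).
from typing import Dict, List, Tuple


def f1(tp: int, fp: int, fn: int) -> float:
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0


def per_class_thresholds(
    scores: Dict[str, List[Tuple[float, bool]]],   # class -> [(predicted score, true label)]
    candidates: Tuple[float, ...] = tuple(i / 20 for i in range(1, 20)),
) -> Dict[str, float]:
    # Classes with few or noisy training samples get their own cutoff instead of
    # sharing a single global threshold with well-sampled classes.
    thresholds: Dict[str, float] = {}
    for cls, pairs in scores.items():
        best_t, best_f1 = 0.5, -1.0
        for t in candidates:
            tp = sum(1 for s, y in pairs if s >= t and y)
            fp = sum(1 for s, y in pairs if s >= t and not y)
            fn = sum(1 for s, y in pairs if s < t and y)
            score = f1(tp, fp, fn)
            if score > best_f1:
                best_t, best_f1 = t, score
        thresholds[cls] = best_t
    return thresholds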

    CONTEXTUALLY GENERATED PERCEPTIONS

    Publication No.: US20220300551A1

    Publication Date: 2022-09-22

    Application No.: US17833705

    Filing Date: 2022-06-06

    Abstract: Embodiments of the present invention train multiple Perception models to predict contextual metadata (tags) with respect to target content items. By extracting context from content items, and generating associations among the Perception models, individual Perceptions trigger one another based on the extracted context to generate a more robust set of contextual metadata. A Perception Identifier predicts core tags that make coarse distinctions among content items at relatively higher levels of abstraction, while also triggering other Perception models to predict additional perception tags at lower levels of abstraction. A Dense Classifier identifies sub-content items at various levels of abstraction, and facilitates the iterative generation of additional dense tags across integrated Perceptions. Class-specific thresholds are generated with respect to individual classes of each Perception to address the inherent sampling bias that results from the varying number and quality of training samples (across different classes of content items) available to train each Perception.

    AUTOMATED GENERATION OF TRAINING DATA FOR CONTEXTUALLY GENERATED PERCEPTIONS

    Publication No.: US20210326646A1

    Publication Date: 2021-10-21

    Application No.: US17233986

    Filing Date: 2021-04-19

    Abstract: Embodiments of the present invention train multiple Perception models to predict contextual metadata (tags) with respect to target content items. By extracting context from content items, and generating associations among the Perception models, individual Perceptions trigger one another based on the extracted context to generate a more robust set of contextual metadata. A Perception Identifier predicts core tags that make coarse distinctions among content items at relatively higher levels of abstraction, while also triggering other Perception models to predict additional perception tags at lower levels of abstraction. A Dense Classifier identifies sub-content items at various levels of abstraction, and facilitates the iterative generation of additional dense tags across integrated Perceptions. Class-specific thresholds are generated with respect to individual classes of each Perception to address the inherent sampling bias that results from the varying number and quality of training samples (across different classes of content items) available to train each Perception.
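    The two entries titled "Automated Generation of Training Data for Contextually Generated Perceptions" pair this pipeline with a Dense Classifier that identifies sub-content items and yields additional dense tags. A hypothetical sketch of bootstrapping extra training samples from confident region-level predictions follows; the Region shape, function parameters, and the 0.8 confidence cutoff are assumptions made for illustration, not details from the patent.

# Hypothetical use of a dense (region-level) classifier to bootstrap additional
# training samples; not the patented method.
from typing import Callable, Dict, Iterable, List, Tuple

Region = Tuple[int, int, int, int]   # (x, y, width, height) of a sub-content item


def generate_dense_training_data(
    items: Iterable[bytes],
    detect_regions: Callable[[bytes], List[Region]],
    dense_classify: Callable[[bytes, Region], Dict[str, float]],
    min_confidence: float = 0.8,
) -> List[Tuple[bytes, Region, str]]:
    """Return (item, region, tag) triples confident enough to reuse as training labels."""
    samples: List[Tuple[bytes, Region, str]] = []
    for item in items:
        for region in detect_regions(item):
            for tag, score in dense_classify(item, region).items():
                if score >= min_confidence:
                    samples.append((item, region, tag))
    return samples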

    Contextually Generated Perceptions
    Invention Publication

    Publication No.: US20240354578A1

    Publication Date: 2024-10-24

    Application No.: US18760199

    Filing Date: 2024-07-01

    Abstract: Embodiments of the present invention train multiple Perception models to predict contextual metadata (tags) with respect to target content items. By extracting context from content items, and generating associations among the Perception models, individual Perceptions trigger one another based on the extracted context to generate a more robust set of contextual metadata. A Perception Identifier predicts core tags that make coarse distinctions among content items at relatively higher levels of abstraction, while also triggering other Perception models to predict additional perception tags at lower levels of abstraction. A Dense Classifier identifies sub-content items at various levels of abstraction, and facilitates the iterative generation of additional dense tags across integrated Perceptions. Class-specific thresholds are generated with respect to individual classes of each Perception to address the inherent sampling bias that results from the varying number and quality of training samples (across different classes of content items) available to train each Perception.

    Contextually generated perceptions

    Publication No.: US11354351B2

    Publication Date: 2022-06-07

    Application No.: US16263326

    Filing Date: 2019-01-31

    Abstract: Embodiments of the present invention train multiple Perception models to predict contextual metadata (tags) with respect to target content items. By extracting context from content items, and generating associations among the Perception models, individual Perceptions trigger one another based on the extracted context to generate a more robust set of contextual metadata. A Perception Identifier predicts core tags that make coarse distinctions among content items at relatively higher levels of abstraction, while also triggering other Perception models to predict additional perception tags at lower levels of abstraction. A Dense Classifier identifies sub-content items at various levels of abstraction, and facilitates the iterative generation of additional dense tags across integrated Perceptions. Class-specific thresholds are generated with respect to individual classes of each Perception to address the inherent sampling bias that results from the varying number and quality of training samples (across different classes of content items) available to train each Perception.

    Contextually generated perceptions

    Publication No.: US12026622B2

    Publication Date: 2024-07-02

    Application No.: US17833705

    Filing Date: 2022-06-06

    Abstract: Embodiments of the present invention train multiple Perception models to predict contextual metadata (tags) with respect to target content items. By extracting context from content items, and generating associations among the Perception models, individual Perceptions trigger one another based on the extracted context to generate a more robust set of contextual metadata. A Perception Identifier predicts core tags that make coarse distinctions among content items at relatively higher levels of abstraction, while also triggering other Perception models to predict additional perception tags at lower levels of abstraction. A Dense Classifier identifies sub-content items at various levels of abstraction, and facilitates the iterative generation of additional dense tags across integrated Perceptions. Class-specific thresholds are generated with respect to individual classes of each Perception to address the inherent sampling bias that results from the varying number and quality of training samples (across different classes of content items) available to train each Perception.
