Multi-Level Emotional Enhancement of Dialogue

    Publication No.: US20240070399A1

    Publication Date: 2024-02-29

    Application No.: US17894967

    Filing Date: 2022-08-24

    Abstract: A system for emotionally enhancing dialogue includes a computing platform having processing hardware and a system memory storing a software code including a predictive model. The processing hardware is configured to execute the software code to receive dialogue data identifying an utterance for use by a digital character in a conversation, analyze, using the dialogue data, an emotionality of the utterance at multiple structural levels of the utterance, and supplement the utterance with one or more emotional attributions, using the predictive model and the emotionality of the utterance at the multiple structural levels, to provide one or more candidate emotionally enhanced utterance(s). The processing hardware further executes the software code to perform an audio validation of the candidate emotionally enhanced utterance(s) to provide a validated emotionally enhanced utterance, and output an emotionally attributed dialogue data providing the validated emotionally enhanced utterance for use by the digital character in the conversation.
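The pipeline the abstract describes can be illustrated with a minimal, rule-based sketch. This is not the patented implementation: the lexicon, the level scoring, the candidate generation (a stand-in for the predictive model), and the audio-validation rule are all invented placeholders.

```python
# Illustrative sketch of the described pipeline: score an utterance's
# emotionality at several structural levels, propose candidate emotional
# attributions, then validate one candidate. All rules below are placeholders.

TOY_LEXICON = {"love": 0.9, "great": 0.7, "hate": -0.8, "bad": -0.6}

def emotionality_by_level(utterance: str) -> dict:
    """Score emotionality at word, phrase (half-utterance), and sentence level."""
    words = utterance.lower().rstrip(".!?").split()
    word_scores = [TOY_LEXICON.get(w, 0.0) for w in words]
    half = len(words) // 2 or 1
    return {
        "word": word_scores,
        "phrase": [sum(word_scores[:half]), sum(word_scores[half:])],
        "sentence": sum(word_scores),
    }

def candidate_attributions(levels: dict) -> list:
    """Propose emotional attributions (stand-in for the predictive model)."""
    s = levels["sentence"]
    label = "joyful" if s > 0 else "upset" if s < 0 else "neutral"
    # Offer a strong and a mild variant as candidate enhanced utterances.
    return [{"emotion": label, "intensity": abs(s)},
            {"emotion": label, "intensity": abs(s) / 2}]

def audio_validate(candidates: list) -> dict:
    """Stand-in for audio validation: keep the first renderable candidate."""
    return next(c for c in candidates if c["intensity"] <= 1.0)

levels = emotionality_by_level("I love this great show")
validated = audio_validate(candidate_attributions(levels))
print(validated["emotion"])  # joyful
```

The two-candidate list mirrors the abstract's "one or more candidate emotionally enhanced utterance(s)", with validation selecting the one that passes a rendering constraint.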

    Conversation-Driven Character Animation
    Invention Publication

    Publication No.: US20240070951A1

    Publication Date: 2024-02-29

    Application No.: US17894984

    Filing Date: 2022-08-24

    CPC classification number: G06T13/40 G06N20/00 G06T13/80 G06T2213/12

    Abstract: A system for producing conversation-driven character animation includes a computing platform having processing hardware and a system memory storing software code, the software code including multiple trained machine learning (ML) models. The processing hardware executes the software code to obtain a conversation understanding feature set describing a present state of a conversation between a digital character and a system user, and to generate an inference, using at least a first trained ML model of the multiple trained ML models and the conversation understanding feature set, the inference including labels describing a predicted next state of a scene within the conversation. The processing hardware further executes the software code to produce, using at least a second trained ML model of the multiple trained ML models and the labels, an animation stream of the digital character participating in the predicted next state of the scene within the conversation.
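The two-stage flow in this abstract (a first ML model infers labels for the predicted next scene state; a second model produces an animation stream from those labels) can be sketched as follows. Both "models" here are rule-based stand-ins with invented feature and label names, not the trained ML models the patent describes.

```python
# Illustrative sketch of the two-stage conversation-driven animation flow.
# Stage 1 maps conversation-understanding features to next-scene labels;
# stage 2 expands those labels into a per-frame animation stream.

def infer_scene_labels(features: dict) -> list:
    """Stage 1: predict next-scene labels (stand-in for the first ML model)."""
    labels = []
    if features.get("user_sentiment", 0) > 0:
        labels.append("character_smiling")
    if features.get("topic") == "farewell":
        labels.append("character_waving")
    return labels or ["character_idle"]

def produce_animation_stream(labels: list, frames: int = 3) -> list:
    """Stage 2: render labels into frames (stand-in for the second ML model)."""
    return [{"frame": i, "poses": labels} for i in range(frames)]

features = {"user_sentiment": 0.8, "topic": "farewell"}
stream = produce_animation_stream(infer_scene_labels(features))
print(stream[0]["poses"])  # ['character_smiling', 'character_waving']
```

Separating label inference from stream production matches the abstract's structure: the intermediate labels describe the predicted next state, and only the second stage deals with animation.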

    User Responsive Dynamic Content Transformation

    Publication No.: US20240276078A1

    Publication Date: 2024-08-15

    Application No.: US18107718

    Filing Date: 2023-02-09

    Abstract: A system includes a hardware processor and a memory storing software code and one or more machine learning (ML) model(s) trained to transform content. The hardware processor executes the software code to ingest content components each corresponding respectively to a different feature of multiple features included in a content file, receive sensor data describing at least one of an action or an environment of a system user, and identify, using the sensor data, at least one of the content components as content to be transformed. The hardware processor further executes the software code to transform, using the ML model(s), that identified content to provide at least one transformed content component, combine a subset of the ingested content components with the at least one transformed content component to produce a dynamically transformed content, and output the dynamically transformed content in real-time with respect to ingesting the content components.
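The ingest-identify-transform-recombine loop this abstract describes can be sketched minimally. The component names, the sensor-driven selection rule, and the transformation (a stand-in for the trained ML model) are all hypothetical placeholders.

```python
# Illustrative sketch of the described flow: ingest per-feature content
# components, use sensor data to identify which component to transform,
# transform it, and recombine it with the untouched subset.

def identify_target(components: dict, sensor_data: dict) -> str:
    """Pick the component to transform based on the user's environment."""
    return "audio" if sensor_data.get("ambient_noise_db", 0) > 70 else "video"

def transform(component: str, payload: str) -> str:
    """Stand-in for the ML transformation (e.g., captioning noisy audio)."""
    return f"{payload}+captions" if component == "audio" else f"{payload}+hdr"

def dynamically_transform(components: dict, sensor_data: dict) -> dict:
    """Combine the untouched subset with the transformed component."""
    target = identify_target(components, sensor_data)
    out = dict(components)  # subset of ingested components, unchanged
    out[target] = transform(target, components[target])
    return out

content = {"audio": "track", "video": "feed", "subtitles": "srt"}
result = dynamically_transform(content, {"ambient_noise_db": 85})
print(result["audio"])  # track+captions
```

Only the identified component is transformed; the remaining components pass through unchanged, mirroring the abstract's "combine a subset of the ingested content components with the at least one transformed content component".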

    User responsive dynamic content transformation

    Publication No.: US12096093B2

    Publication Date: 2024-09-17

    Application No.: US18107718

    Filing Date: 2023-02-09

    Abstract: A system includes a hardware processor and a memory storing software code and one or more machine learning (ML) model(s) trained to transform content. The hardware processor executes the software code to ingest content components each corresponding respectively to a different feature of multiple features included in a content file, receive sensor data describing at least one of an action or an environment of a system user, and identify, using the sensor data, at least one of the content components as content to be transformed. The hardware processor further executes the software code to transform, using the ML model(s), that identified content to provide at least one transformed content component, combine a subset of the ingested content components with the at least one transformed content component to produce a dynamically transformed content, and output the dynamically transformed content in real-time with respect to ingesting the content components.

    Conversation-driven character animation

    Publication No.: US11983808B2

    Publication Date: 2024-05-14

    Application No.: US17894984

    Filing Date: 2022-08-24

    CPC classification number: G06T13/40 G06N20/00 G06T13/80 G06T2213/12

    Abstract: A system for producing conversation-driven character animation includes a computing platform having processing hardware and a system memory storing software code, the software code including multiple trained machine learning (ML) models. The processing hardware executes the software code to obtain a conversation understanding feature set describing a present state of a conversation between a digital character and a system user, and to generate an inference, using at least a first trained ML model of the multiple trained ML models and the conversation understanding feature set, the inference including labels describing a predicted next state of a scene within the conversation. The processing hardware further executes the software code to produce, using at least a second trained ML model of the multiple trained ML models and the labels, an animation stream of the digital character participating in the predicted next state of the scene within the conversation.
