Expert knowledge transfer using egocentric video

    Publication No.: US11620796B2

    Publication Date: 2023-04-04

    Application No.: US17249371

    Filing Date: 2021-03-01

    Abstract: A method, a computer program product, and a computer system for transferring knowledge from an expert to a user using a mixed reality rendering. The method includes determining a user perspective of a user viewing an object on which a procedure is to be performed. The method includes determining an anchoring of the user perspective to an expert perspective, the expert perspective associated with an expert providing a demonstration of the procedure. The method includes generating a virtual rendering of the expert at the user perspective based on the anchoring at a scene viewed by the user, the virtual rendering corresponding to the demonstration of the procedure as performed by the expert. The method includes generating a mixed reality environment in which the virtual rendering of the expert is shown in the scene viewed by the user.
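
    Viewed as geometry, the anchoring step amounts to relating the expert's recorded viewpoint to the user's live viewpoint and re-expressing the demonstration in the user's scene. Below is a minimal sketch of that idea, assuming 4x4 camera-to-world poses and numpy; the pose convention, the point-cloud stand-in for the rendering, and all names are illustrative assumptions, not the patented implementation.

```python
# Minimal sketch (not the patented implementation): anchoring an expert's
# recorded viewpoint to a user's live viewpoint with 4x4 homogeneous poses.
# Pose conventions and the point-cloud rendering stand-in are assumptions.
import numpy as np

def anchoring_transform(user_pose: np.ndarray, expert_pose: np.ndarray) -> np.ndarray:
    """Map geometry so it sits relative to the user's camera as it sat relative to the expert's."""
    # user_pose and expert_pose are camera-to-world matrices for the same object.
    return user_pose @ np.linalg.inv(expert_pose)

def render_expert_in_user_scene(expert_points: np.ndarray,
                                user_pose: np.ndarray,
                                expert_pose: np.ndarray) -> np.ndarray:
    """Re-express the expert demonstration geometry at the user's perspective."""
    T = anchoring_transform(user_pose, expert_pose)
    homogeneous = np.hstack([expert_points, np.ones((len(expert_points), 1))])
    return (homogeneous @ T.T)[:, :3]

# Example: identity user pose, expert camera shifted 1 m along x.
user = np.eye(4)
expert = np.eye(4); expert[0, 3] = 1.0
hand_path = np.array([[1.0, 0.0, 0.5], [1.2, 0.1, 0.5]])  # recorded demo points
print(render_expert_in_user_scene(hand_path, user, expert))
```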

    DETECTING COMPLEX USER ACTIVITIES USING ENSEMBLE MACHINE LEARNING OVER INERTIAL SENSORS DATA

    Publication No.: US20190095814A1

    Publication Date: 2019-03-28

    Application No.: US15716524

    Filing Date: 2017-09-27

    IPC Classification: G06N99/00 G06N7/00 G06F1/16

    Abstract: A computer implemented method of detecting complex user activities, comprising using processor(s) in each of a plurality of consecutive time intervals for: obtaining sensory data from wearable inertial sensor(s) worn by a user; computing an action score for continuous physical action(s) performed by the user, where the continuous physical action(s), which extend over multiple time intervals, are indicated by repetitive motion pattern(s) identified by analyzing the sensory data; computing a gesture score for brief gesture(s) performed by the user, where the brief gesture(s), bounded within a single basic time interval, are identified by analyzing the sensory data; aggregating the action and gesture scores to produce an interval activity score of predefined activity(s) for a current time interval; adding the interval activity score to a cumulative activity score accumulated during a predefined number of preceding time intervals; and identifying the predefined activity(s) when the cumulative activity score exceeds a predefined threshold.
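
    The scoring pipeline in the abstract (per-interval action and gesture scores, aggregation, a rolling cumulative score, and a threshold test) can be illustrated with a small sketch. The score functions, weights, window length, and threshold below are placeholder assumptions, not the claimed ensemble.

```python
# Minimal sketch of the interval scoring and accumulation logic described in
# the abstract; the score functions are stand-ins, and the aggregation weights,
# window length, and threshold are illustrative assumptions.
from collections import deque
import numpy as np

class ComplexActivityDetector:
    def __init__(self, window: int = 10, threshold: float = 6.0,
                 action_weight: float = 0.6, gesture_weight: float = 0.4):
        self.scores = deque(maxlen=window)   # interval scores for preceding intervals
        self.threshold = threshold
        self.action_weight = action_weight
        self.gesture_weight = gesture_weight

    def action_score(self, samples: np.ndarray) -> float:
        # Placeholder for repetitive-motion analysis spanning multiple intervals.
        return float(np.std(samples))

    def gesture_score(self, samples: np.ndarray) -> float:
        # Placeholder for brief-gesture analysis within a single interval.
        return float(np.max(np.abs(np.diff(samples, axis=0))))

    def update(self, samples: np.ndarray) -> bool:
        """Process one time interval of inertial data; True if activity detected."""
        interval_score = (self.action_weight * self.action_score(samples)
                          + self.gesture_weight * self.gesture_score(samples))
        self.scores.append(interval_score)
        return sum(self.scores) > self.threshold

detector = ComplexActivityDetector()
for _ in range(12):
    interval = np.random.randn(50, 3)  # 50 accelerometer samples (x, y, z)
    if detector.update(interval):
        print("predefined activity detected")
```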

    Enhanced Emergency Reporting System

    Publication No.: US20160313708A1

    Publication Date: 2016-10-27

    Application No.: US14692245

    Filing Date: 2015-04-21

    IPC Classification: G05B9/02 G06N5/04

    Abstract: A method enhances an emergency reporting system for controlling equipment. A message receiver receives an electronic message from a person. The electronic message is a report regarding an emergency event. One or more processors identify a profile of the person who sent the electronic message, and determine a bias of the person regarding the emergency event based on the person's profile. One or more processors amend, based on the bias of the person, a content of the electronic message to create a modified electronic message regarding the emergency event. The modified electronic message is consolidated with other modified electronic messages into a bias-corrected report about the emergency event. One or more processors then automatically adjust equipment based on the bias-corrected report about the emergency event.
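
    As a rough illustration of the described flow (profile lookup, bias estimation, message amendment, consolidation, equipment control), here is a sketch that reduces message content to a single numeric severity. The profile table, bias model, and equipment rule are invented for the example, not taken from the patent.

```python
# Minimal sketch of the bias-correction flow in the abstract, with a numeric
# "severity" field standing in for message content; profiles, the bias model,
# and the equipment hook are illustrative assumptions.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Report:
    sender: str
    severity: float  # 0 (minor) .. 10 (catastrophic), as stated by the sender

# Hypothetical sender profiles: >1.0 means the sender tends to exaggerate.
PROFILES = {"alice": 1.5, "bob": 0.8}

def determine_bias(sender: str) -> float:
    return PROFILES.get(sender, 1.0)

def amend(report: Report) -> Report:
    """Create a modified message by removing the sender's estimated bias."""
    return Report(report.sender, report.severity / determine_bias(report.sender))

def consolidate(reports: list[Report]) -> float:
    """Fuse modified messages into one bias-corrected severity estimate."""
    return mean(amend(r).severity for r in reports)

def adjust_equipment(corrected_severity: float) -> str:
    # Placeholder for automatic equipment control driven by the corrected report.
    return "dispatch" if corrected_severity > 5.0 else "monitor"

incoming = [Report("alice", 9.0), Report("bob", 4.0)]
print(adjust_equipment(consolidate(incoming)))
```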

    Augmented reality guided inspection

    Publication No.: US11501502B2

    Publication Date: 2022-11-15

    Application No.: US17206554

    Filing Date: 2021-03-19

    IPC Classification: G06T19/00

    Abstract: A method, computer system, and a computer program product for augmented reality guidance are provided. Device orientation instructions may be displayed as augmented reality on a display screen of a device. The device may include a camera and may be portable. The display screen may show a view of an object. At least one additional instruction may be received that includes at least one word directing user interaction with the object. The at least one additional instruction may be displayed on the display screen of the device. The camera may capture an image of the object regarding the at least one additional instruction. The image may be input to a first machine learning model so that an output of the first machine learning model is generated. The output may be received from the first machine learning model. The output may be displayed on the display screen.
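
    The guidance loop the abstract describes (show an orientation instruction, show a step, capture an image, run it through a first machine learning model, display the output) can be sketched as below. The display, camera, and model interfaces are stand-ins chosen for the example, not the patented system's components.

```python
# Minimal sketch of the AR-guided inspection loop described in the abstract;
# the display, camera, and model objects are stand-in interfaces (assumptions).
from typing import Callable, List
import numpy as np

def run_guided_inspection(instructions: List[str],
                          capture_image: Callable[[], np.ndarray],
                          inspection_model: Callable[[np.ndarray], str],
                          display: Callable[[str], None]) -> None:
    display("Hold the device so the object fills the frame")  # orientation step
    for step in instructions:
        display(step)                      # word(s) directing user interaction
        image = capture_image()            # image of the object for this step
        result = inspection_model(image)   # first machine learning model
        display(f"{step}: {result}")       # show the model output on screen

# Illustrative wiring with dummy components.
run_guided_inspection(
    instructions=["Open the access panel", "Check the cable connector"],
    capture_image=lambda: np.zeros((224, 224, 3), dtype=np.uint8),
    inspection_model=lambda img: "pass" if img.mean() < 10 else "recheck",
    display=print,
)
```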

    Enhanced emergency reporting system

    Publication No.: US10162345B2

    Publication Date: 2018-12-25

    Application No.: US14692245

    Filing Date: 2015-04-21

    Abstract: A method enhances an emergency reporting system for controlling equipment. A message receiver receives an electronic message from a person. The electronic message is a report regarding an emergency event. One or more processors identify a profile of the person who sent the electronic message, and determine a bias of the person regarding the emergency event based on the person's profile. One or more processors amend, based on the bias of the person, a content of the electronic message to create a modified electronic message regarding the emergency event. The modified electronic message is consolidated with other modified electronic messages into a bias-corrected report about the emergency event. One or more processors then automatically adjust equipment based on the bias-corrected report about the emergency event.

    AUTOMATIC GENERATION OF CONTENT FOR AUTONOMIC AUGMENTED REALITY APPLICATIONS

    Publication No.: US20210142570A1

    Publication Date: 2021-05-13

    Application No.: US16681888

    Filing Date: 2019-11-13

    Abstract: Automatically generating augmented reality (AR) content by: constructing a three-dimensional (3D) model of a scene that includes an object, using images recorded during a remotely-guided AR session from camera positions defined relative to first 3D axes, where the model includes camera positions defined relative to second 3D axes; registering the first axes with the second axes by matching a trajectory derived from the image camera positions to a trajectory derived from the model's camera positions, thereby determining a session-to-model transform; translating, using the transform, positions of points of interest (POIs) indicated on the object during the session to corresponding POI positions on the object within the model, where the session POI positions are defined relative to the first axes and the model POI positions are defined relative to the second axes; and generating a content package including the model, the model POI positions, and POI annotations provided during the session.
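
    The registration step, matching the session camera trajectory to the model camera trajectory to obtain a session-to-model transform and then mapping POIs through it, closely resembles a standard similarity (Umeyama-style) alignment. The sketch below uses that standard method as a stand-in; it is not necessarily the transform estimation used in this application.

```python
# Minimal sketch of trajectory-based registration: estimate a similarity
# transform aligning session camera positions (first axes) to model camera
# positions (second axes), then map session POIs into model coordinates.
# Standard Umeyama-style alignment, used here as a stand-in.
import numpy as np

def align_trajectories(session_traj: np.ndarray, model_traj: np.ndarray):
    """Return scale s, rotation R, translation t with model ~= s * R @ session + t."""
    mu_s, mu_m = session_traj.mean(0), model_traj.mean(0)
    src, dst = session_traj - mu_s, model_traj - mu_m
    U, S, Vt = np.linalg.svd(dst.T @ src / len(src))
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(U @ Vt))     # guard against reflections
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / src.var(0).sum()
    t = mu_m - s * R @ mu_s
    return s, R, t

def session_poi_to_model(poi: np.ndarray, s: float, R: np.ndarray, t: np.ndarray):
    return s * R @ poi + t

# Synthetic check: model trajectory is a rotated, scaled, shifted copy of the session one.
rng = np.random.default_rng(0)
session = rng.normal(size=(20, 3))
angle = np.deg2rad(30)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
model = 2.0 * session @ R_true.T + np.array([1.0, -2.0, 0.5])
s, R, t = align_trajectories(session, model)
print(session_poi_to_model(session[0], s, R, t), model[0])  # should roughly agree
```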

    Generating 3D videos from 2D models

    Publication No.: US11651538B2

    Publication Date: 2023-05-16

    Application No.: US17204035

    Filing Date: 2021-03-17

    IPC Classification: G06T13/20 G06T15/20

    CPC Classification: G06T13/20 G06T15/205

    Abstract: An approach is disclosed for creating instructional 3D animated videos without requiring physical access to the object or to the object's CAD models. The approach allows the user to submit images or a video of the object together with knowledge about the required procedure, including instructions and text annotations. The approach builds a 3D model from the submitted images and/or video, and then generates the instructional animated video based on the 3D model and the required procedure.
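
    The described pipeline (images or video in, 3D model plus procedure knowledge, animated instructional video out) can be outlined as follows. The reconstruction and rendering calls are placeholders invented for the example; a real implementation would plug in photogrammetry and a renderer at those points.

```python
# Minimal sketch of the described pipeline: build a 3D model from submitted 2D
# images, then render an instructional animation step by step. The reconstruction
# and rendering calls are placeholders (assumptions), not the disclosed method.
from dataclasses import dataclass
from typing import List

@dataclass
class Step:
    instruction: str       # e.g. "Remove the four cover screws"
    annotation: str        # text annotation overlaid on the frames

def build_3d_model(image_paths: List[str]) -> dict:
    # Placeholder for multi-view reconstruction from the user's images/video.
    return {"source_images": image_paths, "mesh": None}

def render_step(model: dict, step: Step, seconds: float = 3.0) -> List[str]:
    # Placeholder renderer: one "frame description" per second of animation.
    return [f"{step.instruction} [{step.annotation}]" for _ in range(int(seconds))]

def generate_instructional_video(image_paths: List[str], steps: List[Step]) -> List[str]:
    model = build_3d_model(image_paths)
    frames: List[str] = []
    for step in steps:
        frames.extend(render_step(model, step))
    return frames

video = generate_instructional_video(
    ["front.jpg", "side.jpg", "top.jpg"],
    [Step("Open the access panel", "lift from the left edge"),
     Step("Disconnect the cable", "press the release tab")],
)
print(len(video), "frames")
```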