Processing Sensor Data with Multi-Model System on Resource-Constrained Device

    Publication Number: US20230274147A1

    Publication Date: 2023-08-31

    Application Number: US18313072

    Application Date: 2023-05-05

    Applicant: Google LLC

    Abstract: Methods, systems, and computer-readable media for multi-model processing on resource-constrained devices. A resource-constrained device can determine, based on the battery life of the device's battery, whether to process input through a first model or a second model. The first model can be a gating model that is more energy efficient to execute, and the second model can be a main model that is more accurate than the gating model. Depending on the current battery life and/or other criteria, the system can process, through the gating model, sensor input that can record activity performed by a user of the resource-constrained device. If the gating model predicts an activity performed by the user that is recorded in the sensor data, the device can process the same or additional input through the main model. Overall power consumption can be reduced while maintaining a minimum accuracy relative to processing input only through the main model.
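
    A minimal sketch of the gated, two-model flow described in this abstract is shown below. It assumes a simple battery-level threshold policy, and every name in it (GatingModel, MainModel, process_sensor_window, BATTERY_THRESHOLD) is a hypothetical placeholder rather than an API taken from the patent or any specific library.

```python
# Sketch of the two-stage flow: a cheap gating model filters sensor windows,
# and the expensive main model runs only when the gate detects activity or
# when battery headroom makes the accurate model affordable.

BATTERY_THRESHOLD = 0.30   # assumed policy: below 30% charge, prefer the gating model
GATING_CONFIDENCE = 0.50   # assumed score needed before waking the main model


class GatingModel:
    """Small, energy-efficient model that only detects whether activity is present."""
    def predict_activity(self, sensor_window):
        # Placeholder: a real implementation would run a tiny classifier here.
        return sum(abs(x) for x in sensor_window) / len(sensor_window)


class MainModel:
    """Larger, more accurate model that classifies which activity was performed."""
    def classify(self, sensor_window):
        # Placeholder for the full activity-recognition model.
        return "walking"


def process_sensor_window(sensor_window, battery_level, gating, main):
    # With a healthy battery, skip the gate and use the accurate model directly.
    if battery_level >= BATTERY_THRESHOLD:
        return main.classify(sensor_window)

    # Otherwise run the cheap gating model first; only invoke the main model
    # if the gate believes the window actually contains user activity.
    if gating.predict_activity(sensor_window) >= GATING_CONFIDENCE:
        return main.classify(sensor_window)
    return None  # no activity detected; the main model never runs, saving power


if __name__ == "__main__":
    window = [0.1, 0.9, 1.2, 0.8, 1.1]  # fake accelerometer magnitudes
    print(process_sensor_window(window, battery_level=0.22,
                                gating=GatingModel(), main=MainModel()))
```

    The design choice this illustrates is that accuracy is traded for energy only when the battery is low, so the main model's predictions remain available whenever power permits.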

    Processing sensor data with multi-model system on resource-constrained device

    Publication Number: US11669742B2

    Publication Date: 2023-06-06

    Application Number: US16950275

    Application Date: 2020-11-17

    Applicant: Google LLC

    Abstract: Methods, systems, and computer-readable media for multi-model processing on resource-constrained devices. A resource-constrained device can determine, based on the battery life of the device's battery, whether to process input through a first model or a second model. The first model can be a gating model that is more energy efficient to execute, and the second model can be a main model that is more accurate than the gating model. Depending on the current battery life and/or other criteria, the system can process, through the gating model, sensor input that can record activity performed by a user of the resource-constrained device. If the gating model predicts an activity performed by the user that is recorded in the sensor data, the device can process the same or additional input through the main model. Overall power consumption can be reduced while maintaining a minimum accuracy relative to processing input only through the main model.

    Processing Sensor Data With Multi-Model System On Resource-Constrained Device

    Publication Number: US20220156589A1

    Publication Date: 2022-05-19

    Application Number: US16950275

    Application Date: 2020-11-17

    Applicant: Google LLC

    Abstract: Methods, systems, and computer-readable media for multi-model processing on resource-constrained devices. A resource-constrained device can determine, based on the battery life of the device's battery, whether to process input through a first model or a second model. The first model can be a gating model that is more energy efficient to execute, and the second model can be a main model that is more accurate than the gating model. Depending on the current battery life and/or other criteria, the system can process, through the gating model, sensor input that can record activity performed by a user of the resource-constrained device. If the gating model predicts an activity performed by the user that is recorded in the sensor data, the device can process the same or additional input through the main model. Overall power consumption can be reduced while maintaining a minimum accuracy relative to processing input only through the main model.

    Embedding metadata into images and videos for augmented reality experience

    Publication Number: US10607415B2

    Publication Date: 2020-03-31

    Application Number: US16100766

    Application Date: 2018-08-10

    Applicant: Google LLC

    Abstract: A method for embedding metadata into images and/or videos for an AR experience is described. In one example implementation, the method may include generating a first image/video including an environment captured by a device and a virtually rendered augmented reality (AR) object composited with the environment. The first image/video may be embedded with first metadata. The method may further include generating a second image/video by modifying the first image/video. The second image/video may be embedded with second metadata. The second metadata is generated based on the first metadata.
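
    The metadata-propagation step this abstract describes can be illustrated with the minimal sketch below, in which a modified (here, cropped) second frame receives metadata derived from the first frame's metadata. The ARMetadata and Frame types, their fields, and crop_frame are hypothetical stand-ins and do not reflect an actual metadata format from the patent.

```python
# Sketch: the first frame embeds metadata about the composited AR object;
# a derived second frame gets new metadata computed from the first frame's.

from dataclasses import dataclass, replace
from typing import Tuple


@dataclass
class ARMetadata:
    object_id: str                  # identifier of the rendered AR object
    anchor_px: Tuple[int, int]      # where the object is anchored in the frame
    camera_pose: Tuple[float, ...]  # pose used when compositing


@dataclass
class Frame:
    pixels: object        # stand-in for actual image data
    metadata: ARMetadata  # metadata "embedded" alongside the pixels


def composite_ar_frame(environment_pixels, ar_object_id, anchor_px, camera_pose):
    """Generate the first image: a captured environment plus a rendered AR object."""
    meta = ARMetadata(object_id=ar_object_id, anchor_px=anchor_px, camera_pose=camera_pose)
    return Frame(pixels=environment_pixels, metadata=meta)


def crop_frame(frame: Frame, offset: Tuple[int, int]) -> Frame:
    """Generate the second image by modifying the first; its metadata is
    derived from the first frame's metadata (anchor shifted by the crop offset)."""
    new_anchor = (frame.metadata.anchor_px[0] - offset[0],
                  frame.metadata.anchor_px[1] - offset[1])
    new_meta = replace(frame.metadata, anchor_px=new_anchor)
    return Frame(pixels=frame.pixels, metadata=new_meta)  # pixels left symbolic here


if __name__ == "__main__":
    first = composite_ar_frame("env", "chair_01", anchor_px=(640, 360),
                               camera_pose=(0.0, 0.0, 0.0, 1.0))
    second = crop_frame(first, offset=(100, 50))
    print(second.metadata)
```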

    EMBEDDING METADATA INTO IMAGES AND VIDEOS FOR AUGMENTED REALITY EXPERIENCE

    Publication Number: US20200051334A1

    Publication Date: 2020-02-13

    Application Number: US16100766

    Application Date: 2018-08-10

    Applicant: Google LLC

    Abstract: A method for embedding metadata into images and/or videos for an AR experience is described. In one example implementation, the method may include generating a first image/video including an environment captured by a device and a virtually rendered augmented reality (AR) object composited with the environment. The first image/video may be embedded with first metadata. The method may further include generating a second image/video by modifying the first image/video. The second image/video may be embedded with second metadata. The second metadata is generated based on the first metadata.
