Abstract:
Systems and methods may provide for determining a usage configuration of a wearable device and setting an activation state of an air conduction speaker of the wearable device based at least in part on the usage configuration. Additionally, an activation state of a tissue conduction speaker of the wearable device may be set based at least in part on the usage configuration. In one example, the usage configuration is determined based on a set of status signals that indicate one or more of a physical position, a physical activity, a current activation state, an interpersonal proximity state or a manual user request associated with one or more of the air conduction speaker or the tissue conduction speaker.
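As a rough illustration of the selection logic described above, the sketch below maps a set of status signals to activation states for the two speakers. All names (StatusSignals, set_speaker_states, the individual fields) and the auto-mode policy are hypothetical assumptions, not taken from the abstract.

```python
from dataclasses import dataclass

# Hypothetical status-signal container; field names are illustrative.
@dataclass
class StatusSignals:
    on_head: bool        # physical position: device worn on the head
    is_running: bool     # physical activity
    others_nearby: bool  # interpersonal proximity state
    user_request: str    # manual user request: "air", "tissue", or "auto"

def set_speaker_states(signals: StatusSignals) -> dict:
    """Choose activation states for the air and tissue conduction
    speakers from the usage configuration implied by the signals."""
    if signals.user_request == "air":
        return {"air": True, "tissue": False}
    if signals.user_request == "tissue":
        return {"air": False, "tissue": True}
    # Auto mode: prefer the private tissue conduction path when the
    # device is worn and other people are nearby (an assumed policy).
    if signals.on_head and signals.others_nearby:
        return {"air": False, "tissue": True}
    return {"air": True, "tissue": False}
```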
Abstract:
One or more sensors gather data, one or more processors analyze the data, and one or more indicators notify a user if the data represent an event that requires a response. At least one of the sensors or indicators is a wearable device for wireless communication. Optionally, other components may be vehicle-mounted or deployed on-site. The components form an ad-hoc network enabling users to keep track of each other in challenging environments where traditional communication may be impossible, unreliable, or inadvisable. The sensors, processors, and indicators may be linked and activated manually, or they may be linked and activated automatically when they come within a threshold proximity or when a user performs a triggering action, such as exiting a vehicle. The processors distinguish extremely urgent events requiring an immediate response from less-urgent events that can wait longer for a response, routing and timing the responses accordingly.
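The urgency-based routing could look roughly like the Python sketch below. The Urgency levels, the Peer abstraction, and the broadcast-versus-queue policy are assumptions for illustration; the abstract does not specify a transport or message format.

```python
from enum import Enum

class Urgency(Enum):
    IMMEDIATE = 1   # requires an immediate response
    DEFERRABLE = 2  # can wait for the next scheduled delivery

# Hypothetical peer abstraction for a node on the ad-hoc network.
class Peer:
    def __init__(self, name: str):
        self.name = name
        self.pending: list = []

    def notify(self, event: dict) -> None:
        print(f"{self.name}: ALERT {event['kind']}")

    def queue(self, event: dict) -> None:
        self.pending.append(event)   # delivered at the next sync

def route_event(event: dict, peers: list) -> None:
    """Broadcast immediate events to all reachable peers now;
    queue less-urgent events for batched, later delivery."""
    if event["urgency"] is Urgency.IMMEDIATE:
        for peer in peers:
            peer.notify(event)
    else:
        for peer in peers:
            peer.queue(event)
```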
Abstract:
An apparatus may include a memory to store a recorded video. The apparatus may further include an interface to receive at least one set of sensor information based on sensor data that is recorded concurrently with the recorded video, and a video clip creation module to identify a sensor event from the at least one set of sensor information and to generate a video clip based upon the sensor event, the video clip comprising video content from the recorded video that is synchronized to the sensor event.
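A minimal sketch of the clip-generation step, assuming sensor events carry timestamps on the same clock as the recorded video; the function name, the fixed padding window, and the (start, end) clip representation are invented for illustration.

```python
def make_clips(sensor_event_times, video_duration, pad_s=5.0):
    """Return one (start, end) video range per sensor event, padded
    so each clip shows context around the moment of the event."""
    clips = []
    for t in sensor_event_times:             # t = event timestamp (s)
        start = max(0.0, t - pad_s)
        end = min(video_duration, t + pad_s)
        clips.append((start, end))
    return clips

# Two events yield two roughly 10-second windows centered on them.
print(make_clips([12.3, 47.9], video_duration=60.0))
```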
Abstract:
Logic may determine a specific performance of a neural network based on an event and may present the specific performance to provide a user with an explanation of an inference made by a machine learning model such as a neural network. Logic may determine a first activation profile associated with the event, the first activation profile based on activation of nodes in one or more layers of the neural network during inference to generate an output. Logic may correlate the first activation profile against a second activation profile associated with a first training sample of training data. Logic may determine that the first training sample is associated with the event based on the correlation. Logic may output an indicator to identify the first training sample as being associated with the event.
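One plausible reading of the correlation step, sketched below: treat an activation profile as a flat vector of node activations captured from chosen layers during a forward pass, and pick the training sample whose stored profile correlates best with the event's profile. The Pearson-correlation choice and all names are assumptions.

```python
import numpy as np

def best_matching_sample(event_profile: np.ndarray,
                         training_profiles: np.ndarray) -> int:
    """Return the index of the training sample whose activation
    profile best correlates with the event's activation profile."""
    scores = [np.corrcoef(event_profile, p)[0, 1] for p in training_profiles]
    return int(np.argmax(scores))

rng = np.random.default_rng(0)
train = rng.normal(size=(100, 64))            # 100 stored training profiles
event = train[42] + 0.05 * rng.normal(size=64)  # event resembling sample 42
print(best_matching_sample(event, train))     # -> 42
```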
Abstract:
Disclosed is a technical solution to calibrate confidence scores of classification networks. A classification network has been trained to receive an input and output a label of the input that indicates a class of the input. The classification network also outputs a confidence score of the label, which indicates a likelihood of the input falling into the class, i.e., a confidence level of the classification network that the label is correct. To calibrate the confidence of the classification network, a logit transformation function may be added into the classification network. The logit transformation function may be an entropy-based function and have learnable parameters, which may be trained by inputting calibration samples into the classification network and optimizing a negative log likelihood based on the labels generated by the classification network and ground-truth labels of the calibration samples. The trained logit transformation function can be used to compute reliable confidence scores.
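A hedged sketch of what such a calibrator might look like, assuming an entropy-dependent temperature as the logit transformation; the abstract does not give the exact functional form, so the parameters a and b and the softplus mapping are illustrative.

```python
import torch
import torch.nn.functional as F

class EntropyCalibrator(torch.nn.Module):
    """Entropy-based logit transformation with learnable parameters
    (illustrative form: entropy-dependent temperature scaling)."""
    def __init__(self):
        super().__init__()
        self.a = torch.nn.Parameter(torch.ones(1))   # learnable
        self.b = torch.nn.Parameter(torch.zeros(1))  # learnable

    def forward(self, logits):
        probs = F.softmax(logits, dim=-1)
        entropy = -(probs * probs.clamp_min(1e-12).log()).sum(-1, keepdim=True)
        temperature = F.softplus(self.a + self.b * entropy) + 1e-3
        return logits / temperature   # calibrated logits

def calibrate(calibrator, logits, labels, steps=200, lr=0.01):
    """Fit on held-out calibration samples by minimizing the negative
    log likelihood (cross-entropy) against ground-truth labels."""
    opt = torch.optim.Adam(calibrator.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(calibrator(logits), labels)
        loss.backward()
        opt.step()
    return calibrator
```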
Abstract:
Systems, apparatuses, and methods include technology that identifies, with a neural network, that a predetermined amount of a first action is completed at a first portion of a plurality of portions. A subset of the plurality of portions collectively represents the first action. The technology generates a first loss based on the predetermined amount of the first action being identified as being completed at the first portion. The technology updates the neural network based on the first loss.
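As an illustrative reading: if the network emits a per-portion completion fraction, the first loss could penalize the prediction at the labeled portion, as in the sketch below. The regression formulation (MSE) and all names are assumptions, not the claimed method.

```python
import torch
import torch.nn.functional as F

def completion_loss(pred_fraction: torch.Tensor,
                    portion_idx: int,
                    target_fraction: float) -> torch.Tensor:
    """pred_fraction: (num_portions,) predicted completion per portion.
    Penalize the prediction at the portion where a known (predetermined)
    fraction of the action has been completed."""
    target = torch.tensor(target_fraction)
    return F.mse_loss(pred_fraction[portion_idx], target)

# e.g., the network thinks the action is 40% done at portion 1,
# but the label says 50% was completed there.
preds = torch.tensor([0.1, 0.4, 0.8, 1.0], requires_grad=True)
loss = completion_loss(preds, portion_idx=1, target_fraction=0.5)
loss.backward()   # this "first loss" drives the network update
```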
Abstract:
Methods and apparatus to operate closed-lid portable computers are disclosed. An example portable compute device includes: a microphone; a speaker; a first camera to face a first direction; and a second camera to face a second direction, the second direction opposite the first direction. The compute device further includes communications circuitry; a first display; a second display separate from the first display; and a hinge to enable the first display to rotate relative to the second display between an open position and a closed position. At least a portion of the second display is capable of being visible when the first display is rotated about the hinge to the closed position. The portion of the second display is multiple times longer in a third direction than in a fourth direction perpendicular to the third direction, the third direction extending parallel to an axis of rotation of the hinge.
Abstract:
Techniques for gesture-based device connections are described. For example, a method may comprise receiving video data corresponding to motion of a first computing device, receiving sensor data corresponding to motion of the first computing device, comparing, by a processor, the video data and the sensor data to one or more gesture models, and initiating establishment of a wireless connection between the first computing device and a second computing device if the video data and sensor data correspond to gesture models for the same gesture. Other embodiments are described and claimed.
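A toy sketch of the matching step: both the video-derived motion trace and the sensor-derived trace are compared against stored gesture models, and pairing proceeds only when both modalities agree. The distance-threshold matcher and the model format are invented for illustration.

```python
import numpy as np

# Hypothetical gesture models: short reference motion trajectories.
GESTURE_MODELS = {
    "shake": np.array([1.0, -1.0, 1.0, -1.0]),
    "tap":   np.array([0.0, 1.0, 0.0, 0.0]),
}

def match_gesture(trace: np.ndarray, threshold: float = 0.5):
    """Return the name of the best-matching gesture model, or None
    if no model is within the distance threshold."""
    best, best_dist = None, threshold
    for name, model in GESTURE_MODELS.items():
        dist = np.linalg.norm(trace - model) / len(model)
        if dist < best_dist:
            best, best_dist = name, dist
    return best

def should_connect(video_trace: np.ndarray, sensor_trace: np.ndarray) -> bool:
    """Initiate the wireless connection only when both data sources
    correspond to the same gesture."""
    g_video, g_sensor = match_gesture(video_trace), match_gesture(sensor_trace)
    return g_video is not None and g_video == g_sensor
```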
Abstract:
Various systems and methods for personal sensory drones are described herein. A personal sensory drone system includes a drone remote control system comprising: a task module to transmit a task to a drone swarm for the drone swarm to execute, the drone swarm including at least two drones; a transceiver to receive information from the drone swarm related to the task; and a user interface module to present a user interface based on the information received from the drone swarm.
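The three named modules might be organized roughly as below; the Drone stub, method names, and message shapes are placeholders, since the abstract specifies the modules' roles but not their interfaces.

```python
class Drone:
    """Placeholder drone endpoint for the sketch."""
    def __init__(self, ident: int):
        self.ident, self.task = ident, None

    def accept(self, task: dict) -> None:
        self.task = task

    def report(self) -> dict:
        return {"id": self.ident, "status": f"executing {self.task}"}

class TaskModule:
    def transmit(self, task: dict, swarm: list) -> None:
        for drone in swarm:          # the swarm holds at least two drones
            drone.accept(task)

class Transceiver:
    def receive(self, swarm: list) -> list:
        return [drone.report() for drone in swarm]

class UserInterfaceModule:
    def present(self, reports: list) -> None:
        for r in reports:
            print(f"drone {r['id']}: {r['status']}")

swarm = [Drone(1), Drone(2)]
TaskModule().transmit({"goal": "survey the ridge"}, swarm)
UserInterfaceModule().present(Transceiver().receive(swarm))
```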
Abstract:
Methods of using subcutaneously implantable sensor devices and associated systems having a communication module that is controlled based upon the detection of a predetermined chemical agent.
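A minimal sketch of the detection-gated control, assuming a scalar sensor reading and a fixed threshold for the predetermined chemical agent; both the units and the threshold value are illustrative.

```python
DETECTION_THRESHOLD = 0.8  # illustrative units and value

def communication_enabled(reading: float) -> bool:
    """Activate the communication module only when the sensor signal
    for the predetermined chemical agent exceeds the threshold."""
    return reading >= DETECTION_THRESHOLD

print(communication_enabled(0.3))  # False: radio stays off
print(communication_enabled(0.9))  # True: transmit the detection event
```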