Abstract:
Disclosed are apparatus and methods for enhancing operation of an ultrasonic sensing device for determining the status of an object near such ultrasonic sensing device. From the ultrasonic sensing device, an emission signal having a current frequency or band in an ultrasonic frequency range is emitted. Ultrasonic signals are received and analyzed to detect an object. After a trigger occurs, emission of the emission signal is halted, a background noise signal emitted, reflected, or diffracted from the object in an environment outside of the ultrasonic sensing device is detected, and background noise metrics are estimated based on the background noise signal. It is then determined, based on the background noise metrics, whether the current frequency or band of the emission signal is optimal. If the current frequency or band is not optimal, a next frequency or band is selected and the emission signal is emitted at the next frequency or band.
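A minimal sketch of the band-selection loop the abstract describes, written in Python under assumed names (candidate_bands, measure_background_power, select_band) that are not part of the disclosed apparatus; it simply picks the candidate band with the least in-band background noise after emission is halted.

```python
# Sketch only: all names and the least-noise selection rule are assumptions.
import numpy as np

candidate_bands = [(38_000, 40_000), (40_000, 42_000), (42_000, 44_000)]  # Hz

def measure_background_power(band, samples, sample_rate=192_000):
    """Estimate in-band background noise power from a captured sample block."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[mask].mean()

def select_band(current_band, samples):
    """After emission is halted, keep the current band only if no candidate is quieter."""
    noise = {band: measure_background_power(band, samples) for band in candidate_bands}
    best = min(noise, key=noise.get)
    return current_band if best == current_band else best
```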
Abstract:
Disclosed are apparatus and methods for automatically training a sensor node to detect anomalies in an environment. At the sensor node, an indication is received to initiate training by the sensor node to detect anomalies in the environment based on sensor data generated by a sensor that resides on such sensor node and is operable to detect sensor signals from the environment. After training is initiated, the sensor node automatically trains a model that resides on the sensor node to detect anomalies in the environment, and such training is based on the sensor data. After the model is trained, the sensor node executes the model to detect anomalies in the environment.
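To make the train-then-execute flow concrete, here is a minimal on-node sketch in Python: it fits a simple statistical baseline from buffered sensor readings and then flags readings that deviate from it. The feature layout, z-score rule, and threshold are illustrative assumptions, not the patented model.

```python
# Sketch only: a simple baseline model standing in for whatever model the node trains.
import numpy as np

class AnomalyModel:
    def train(self, baseline_readings):
        data = np.asarray(baseline_readings, dtype=float)
        self.mean = data.mean(axis=0)
        self.std = data.std(axis=0) + 1e-9  # avoid division by zero

    def is_anomaly(self, reading, z_threshold=4.0):
        z = np.abs((np.asarray(reading, dtype=float) - self.mean) / self.std)
        return bool(np.any(z > z_threshold))

# After the training trigger: buffer readings, fit, then monitor incoming data.
model = AnomalyModel()
model.train([[20.1, 0.02], [20.3, 0.03], [19.9, 0.02]])  # e.g. temperature, vibration
print(model.is_anomaly([27.5, 0.40]))  # True: far outside the learned baseline
```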
Abstract:
Provided are various mechanisms and processes for automatic computer vision-based defect detection using a neural network. A system is configured for receiving a historical dataset that includes training images corresponding to one or more known defects. Each training image is converted into a corresponding matrix representation for training the neural network to adjust weighted parameters based on the known defects. Once the network is sufficiently trained, a test image of an object that is not part of the historical dataset is obtained. Portions of the test image are extracted as input patches for input into the neural network as respective matrix representations. A probability score indicating the likelihood that a given input patch includes a defect is automatically generated for each input patch using the weighted parameters. An overall defect score for the test image is then generated based on the probability scores to indicate the condition of the object.
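A short Python sketch of the patch-scoring pipeline: extract patches from a test image, score each with the trained network, and aggregate into an overall defect score. The `score_patch` placeholder stands in for the trained neural network, and the max-over-patches aggregation rule, patch size, and stride are illustrative assumptions.

```python
# Sketch only: score_patch is a stand-in for the trained network's output.
import numpy as np

def extract_patches(image, patch_size=64, stride=32):
    h, w = image.shape[:2]
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            yield image[y:y + patch_size, x:x + patch_size]

def score_patch(patch):
    """Placeholder for the per-patch defect probability from the trained network."""
    return float(np.clip(patch.mean() / 255.0, 0.0, 1.0))

def overall_defect_score(image):
    """Aggregate per-patch probabilities into a single score for the test image."""
    probs = [score_patch(p) for p in extract_patches(image)]
    return max(probs) if probs else 0.0
```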
Abstract:
A method of classifying touch screen events uses known non-random patterns of touch events over short periods of time to increase the accuracy of analyzing such events. The method takes advantage of the fact that after one touch event, certain actions are more likely to follow than others. Thus if a touch event is classified as a knock, and then within 500 ms a new event in a similar location occurs, but the classification confidence is low (e.g., 60% nail, 40% knuckle), the classifier may add weight to the knuckle classification since this touch sequence is far more likely. Knowledge about the probabilities of follow-on touch events can be used to bias subsequent classification, adding weight to particular events.
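The biasing step can be illustrated with a few lines of Python: classifier scores for an ambiguous event are multiplied by assumed follow-on probabilities conditioned on the previous event, then renormalized. The prior values below are illustrative, not measured data.

```python
# Sketch only: the transition prior and 500 ms window mirror the example above.
def bias_with_prior(scores, previous_event, dt_ms, prior, window_ms=500):
    """Reweight classifier scores by P(next event | previous event) and renormalize."""
    if previous_event is None or dt_ms > window_ms:
        return scores
    weighted = {label: p * prior[previous_event].get(label, 1.0)
                for label, p in scores.items()}
    total = sum(weighted.values())
    return {label: w / total for label, w in weighted.items()}

# A knock followed within 500 ms by an ambiguous event (60% nail, 40% knuckle).
prior = {"knock": {"knuckle": 0.8, "nail": 0.2}}  # assumed follow-on probabilities
print(bias_with_prior({"nail": 0.6, "knuckle": 0.4}, "knock", dt_ms=300, prior=prior))
# -> knuckle now outweighs nail, reflecting the more likely touch sequence
```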
Abstract:
Systems and methods are provided that determine when an initial stroke track and a subsequent stroke track may be part of a common user input action. A method may include receiving a signal from which an initial stroke track representing an initial movement of a user-controlled indicator against a touch-sensitive surface and a subsequent stroke track representing subsequent movement of the user-controlled indicator against the touch-sensitive surface can be determined. The method further includes determining that the initial stroke track and the subsequent stroke track comprise portions of a common user input action when the initial stroke track is followed by the subsequent stroke track within a predetermined period of time and a trajectory of the initial stroke track is consistent with a trajectory of the subsequent stroke track.
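A compact Python sketch of the two tests described above: the strokes are treated as parts of one input action when the gap between them is short and their trajectories point in a consistent direction. The time threshold and cosine-similarity test are assumptions chosen for illustration.

```python
# Sketch only: thresholds and the direction test are illustrative assumptions.
import math

def direction(track):
    """Overall direction of a stroke track given as a list of (x, y) points."""
    (x0, y0), (x1, y1) = track[0], track[-1]
    return (x1 - x0, y1 - y0)

def trajectories_consistent(a, b, min_cosine=0.7):
    ax, ay = direction(a)
    bx, by = direction(b)
    norm = math.hypot(ax, ay) * math.hypot(bx, by)
    return norm > 0 and (ax * bx + ay * by) / norm >= min_cosine

def same_input_action(initial, subsequent, gap_ms, max_gap_ms=300):
    return gap_ms <= max_gap_ms and trajectories_consistent(initial, subsequent)

# e.g. a swipe briefly interrupted, then resumed along the same heading
print(same_input_action([(0, 0), (50, 5)], [(60, 6), (120, 12)], gap_ms=120))
```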
Abstract:
Some embodiments of the present invention include a method of differentiating touch screen users based on characterization of features derived from touch event acoustics and mechanical impact. The method includes detecting a touch event on a touch-sensitive surface, generating a vibro-acoustic waveform signal using at least one sensor that detects such touch event, converting the waveform signal into at least a domain signal, extracting distinguishing features from said domain signal, and classifying said features to associate the features of the domain signal with a particular user.
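A minimal Python sketch of that pipeline: waveform to frequency-domain signal, feature extraction, then user classification. The two features and the nearest-centroid classifier are illustrative stand-ins for whatever features and classifier an implementation would use.

```python
# Sketch only: feature set and classifier are assumptions, not the claimed method.
import numpy as np

def extract_features(waveform, sample_rate=44_100):
    spectrum = np.abs(np.fft.rfft(waveform))
    freqs = np.fft.rfftfreq(len(waveform), d=1.0 / sample_rate)
    energy = float(np.sum(spectrum ** 2))
    centroid = float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12))
    return np.array([energy, centroid])

def classify_user(waveform, user_templates):
    """user_templates: dict mapping user id -> mean feature vector from enrollment."""
    features = extract_features(waveform)
    return min(user_templates,
               key=lambda user: np.linalg.norm(features - user_templates[user]))
```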
Abstract:
An input tool includes a body in the form of a stylus with a plurality of vibro-acoustically distinct regions. The vibro-acoustically distinct regions produce vibro-acoustic responses when the regions touch the surface of the touch screen. The vibro-acoustic responses are used in a computing device to detect which region of the input tool was used.
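One way the computing device might detect the region, sketched in Python: compare the measured response against stored per-region spectral signatures and pick the closest match. The signature storage and cosine-similarity matching are assumptions for illustration.

```python
# Sketch only: signature templates and the matching rule are assumptions.
import numpy as np

def identify_region(response, region_signatures):
    """region_signatures: dict mapping region name -> reference spectrum."""
    spectrum = np.abs(np.fft.rfft(response))
    spectrum /= np.linalg.norm(spectrum) + 1e-12

    def similarity(reference):
        ref = reference / (np.linalg.norm(reference) + 1e-12)
        n = min(len(ref), len(spectrum))
        return float(np.dot(spectrum[:n], ref[:n]))

    return max(region_signatures, key=lambda region: similarity(region_signatures[region]))
```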
Abstract:
An electronic device includes a touch-sensitive surface, for example a touch pad or touch screen. The user interacts with the touch-sensitive surface, producing touch interactions. The resulting actions taken depend at least in part on the touch type. For example, the same touch interaction performed with three different touch types, such as a finger pad, a fingernail, and a knuckle, may result in the execution of different actions.
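A small Python sketch of dispatching different actions for the same gesture depending on the classified touch type; the handler names and actions below are illustrative placeholders, not behavior specified by the abstract.

```python
# Sketch only: touch-type labels and handler actions are hypothetical examples.
def on_tap(touch_type, position):
    handlers = {
        "pad": lambda pos: print(f"select item at {pos}"),
        "nail": lambda pos: print(f"open context menu at {pos}"),
        "knuckle": lambda pos: print(f"start lasso selection at {pos}"),
    }
    handlers.get(touch_type, handlers["pad"])(position)

on_tap("knuckle", (120, 340))  # same tap gesture, knuckle-specific action
```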