Abstract:
An event in a public safety (PS) network is recorded by mounting a video recording device on a PS person for capturing images in a field of view (FOV); tracking the PS person's point of view (POV) by mounting a motion sensor on the PS person's head for joint movement therewith to generate an output direction control signal indicative of a direction along which the POV is directed; determining a context of the event in which the PS person is engaged by generating from a context sensor an output context signal indicative of the context of the event; and automatically controlling operation of the video recording device based on the context and direction control signals by controlling one of a direction of the FOV, an angle of the FOV, a size of the images, and a resolution of the images.
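As a rough illustration of the control step only, the sketch below maps a head-mounted sensor's direction signal and an event context to camera settings. The context labels, setting names, and values are assumptions for illustration, not details from the abstract.

    # A minimal sketch, assuming a heading in degrees from the motion
    # sensor and a context label from the context sensor. All names
    # (camera_settings, "pursuit", etc.) are hypothetical.
    def camera_settings(pov_heading_deg, context):
        """Choose FOV direction/angle and image resolution from context."""
        # Point the FOV along the wearer's point of view.
        settings = {"fov_direction_deg": pov_heading_deg}
        if context == "pursuit":       # fast-moving event: wide FOV, lower res
            settings.update(fov_angle_deg=120, resolution=(1280, 720))
        elif context == "interview":   # static event: narrow FOV, higher res
            settings.update(fov_angle_deg=60, resolution=(3840, 2160))
        else:                          # default behavior
            settings.update(fov_angle_deg=90, resolution=(1920, 1080))
        return settings

    print(camera_settings(135.0, "pursuit"))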
Abstract:
A method, system, and computer program product for intelligent tracking and transformation between interconnected sensor devices of mixed type are disclosed. Metadata derived from image data from a camera is compared to different metadata derived from radar data from a radar device to determine whether an object in a Field of View (FOV) of one of the camera and the radar device is an identified object that was previously in the FOV of the other of the camera and the radar device.
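One plausible form of that cross-modal comparison is sketched below: object metadata from the two modalities are matched on position and speed. The field names and tolerance thresholds are invented for illustration; the abstract does not specify the comparison.

    # A minimal sketch of cross-modal metadata matching, not the
    # patented method. Field names and thresholds are assumptions.
    import math

    def is_same_object(camera_meta, radar_meta,
                       max_pos_err_m=2.0, max_speed_err_mps=1.5):
        """True if camera- and radar-derived metadata plausibly describe
        the same object seen earlier in the other device's FOV."""
        dx = camera_meta["x_m"] - radar_meta["x_m"]
        dy = camera_meta["y_m"] - radar_meta["y_m"]
        pos_err = math.hypot(dx, dy)
        speed_err = abs(camera_meta["speed_mps"] - radar_meta["speed_mps"])
        return pos_err <= max_pos_err_m and speed_err <= max_speed_err_mps

    cam = {"x_m": 10.2, "y_m": 4.9, "speed_mps": 3.1}
    rad = {"x_m": 11.0, "y_m": 5.4, "speed_mps": 2.8}
    print(is_same_object(cam, rad))  # True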
Abstract:
Using sensor hubs for tracking an object. One system includes a first sensor hub and a second sensor hub. The first sensor hub includes a first audio sensor and a first electronic processor. In response to determining that one or more words captured by the first audio sensor are included in a list of trigger words, the first electronic processor generates a first voice signature of a voice of an unidentified person, generates a tracking profile, and transmits the tracking profile to the second sensor hub. The second sensor hub receives the tracking profile and includes a second electronic processor, a second audio sensor, and a camera. In response to determining that a second voice signature, generated from audio captured by the second audio sensor, matches the first voice signature, the second electronic processor is configured to determine a visual characteristic of the unidentified person based on an image from the camera and update the tracking profile.
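A hypothetical sketch of the hub-to-hub handoff follows. Voice signatures are modeled as embedding vectors compared by cosine similarity; the trigger words, the similarity threshold, and the profile fields are all assumptions, not the patent's details.

    TRIGGER_WORDS = {"help", "gun", "fire"}   # hypothetical trigger list

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(x * x for x in b) ** 0.5
        return dot / (na * nb)

    def first_hub(words, voice_embedding):
        """Build a tracking profile if any captured word is a trigger word."""
        if TRIGGER_WORDS & set(words):
            return {"voice_signature": voice_embedding, "visual": None}
        return None

    def second_hub(profile, voice_embedding, observed_color):
        """On a voice match, add a visual characteristic from the camera."""
        if cosine(profile["voice_signature"], voice_embedding) > 0.9:
            profile["visual"] = observed_color  # e.g., jacket color
        return profile

    profile = first_hub(["please", "help"], [0.1, 0.9, 0.4])
    print(second_hub(profile, [0.12, 0.88, 0.41], "red jacket"))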
Abstract:
Intelligent beam forming for a range detection device of a vehicle. One system includes a range detection device including a detection array, and an electronic processor communicatively coupled to the detection array. The electronic processor is configured to receive image or video data from a camera having a field of view. The electronic processor is further configured to identify an area in the field of view of the camera, determine a first threat probability of the identified area, and determine that the first threat probability is greater than a threat level threshold. In response to determining that the first threat probability is greater than the threat level threshold, the electronic processor is configured to provide an instruction to the detection array to change a shape of a beam created by the detection array to focus the beam in a direction of the identified area.
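A simplified sketch of the decision step appears below. The threat scoring, the threshold value, and the detection-array command format are invented for illustration; the abstract specifies only the comparison and the beam-focusing response.

    # Hypothetical threshold and command fields; the threat probability
    # is assumed to come from analysis of the camera's image data.
    THREAT_LEVEL_THRESHOLD = 0.7

    def beam_instruction(area):
        """Return a beam-shaping command if the area's threat probability
        exceeds the threshold, else None."""
        if area["threat_probability"] > THREAT_LEVEL_THRESHOLD:
            return {"action": "focus_beam",
                    "azimuth_deg": area["azimuth_deg"],
                    "beam_width_deg": 10}   # narrow the beam toward the area
        return None

    area = {"threat_probability": 0.85, "azimuth_deg": -20.0}
    print(beam_instruction(area))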
Abstract:
A system and methods for content presentation selection. One method includes displaying, on a display of a portable device, a plurality of tiles. The method includes receiving a first gesture-based input corresponding to a selected tile of the plurality of tiles. The method includes selecting a first application based on the content of the selected tile. The method includes superimposing, on or near a first portion of the selected tile, a first icon corresponding to the first application. The method includes receiving a second gesture-based input selecting the first icon. The method includes retrieving, from the first application, a first application view based on the content. The method includes replacing the selected tile with the first application view.
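The two-gesture flow can be sketched as a small state machine, as below. The tile content, application names, and icon handling are placeholders; real gesture recognition and view retrieval are reduced to method calls.

    # A rough sketch under assumed names (TileScreen, first_gesture, etc.).
    class TileScreen:
        def __init__(self, tiles):
            self.tiles = tiles        # tile content -> application name
            self.icons = {}           # tile content -> superimposed icon

        def first_gesture(self, tile):
            """Select an application for the tile; superimpose its icon."""
            app = self.tiles[tile]
            self.icons[tile] = f"{app}-icon"

        def second_gesture(self, tile):
            """Replace the tile with the application view for its content."""
            app = self.tiles[tile]
            view = f"{app} view of '{tile}'"  # retrieved from the application
            self.tiles[tile] = view
            return view

    screen = TileScreen({"traffic cam 3": "video-player"})
    screen.first_gesture("traffic cam 3")
    print(screen.second_gesture("traffic cam 3"))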
Abstract:
An object learning system, method, and device. The object learning device includes an electronic processor configured to provide an identifier based on a target to at least one auxiliary object learning device and initiate an edge learning process on the target to create first preprocessed object recognition data. The electronic processor is further configured to receive second preprocessed object recognition data corresponding to the target from the at least one auxiliary object learning device and create, based on the first and the second preprocessed object recognition data, a classifier of the target.
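The abstract does not say how the two sets of preprocessed data are fused, so the sketch below uses a simple centroid (nearest-mean) classifier over toy feature vectors as one illustrative possibility.

    # Illustrative only; feature extraction and fusion are assumptions.
    def edge_features(images):
        """Stand-in for on-device preprocessing: one feature per image."""
        return [[sum(px) / len(px)] for px in images]   # toy 1-D feature

    def build_classifier(first_features, second_features):
        """Fuse both devices' preprocessed data into a class centroid."""
        all_feats = first_features + second_features
        dim = len(all_feats[0])
        return [sum(f[i] for f in all_feats) / len(all_feats)
                for i in range(dim)]

    local = edge_features([[10, 20, 30], [12, 18, 33]])   # this device
    remote = edge_features([[11, 22, 29]])                # auxiliary device
    print(build_classifier(local, remote))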
Abstract:
A system and method for inline object detection using hue saturation value (HSV). One method includes determining, with an electronic processor running a single object classifier, an HSV range. The method includes receiving a digital image including an object. The method includes detecting, without reloading the single object classifier, a macroblock from the digital image, the macroblock associated with the object. The method includes determining a target region within the macroblock. The method includes determining a quantity of pixels in the target region having an HSV within the HSV range. The method includes, when the quantity of pixels exceeds a threshold, completing object classification of the macroblock.
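A minimal sketch of the pixel-counting gate follows, with a toy target region of (h, s, v) tuples. The specific HSV range and threshold are assumptions for illustration.

    HSV_RANGE = ((0, 100, 100), (10, 255, 255))   # e.g., a red-ish range
    THRESHOLD = 3                                  # minimum matching pixels

    def in_range(pixel, lo=HSV_RANGE[0], hi=HSV_RANGE[1]):
        return all(l <= p <= h for p, l, h in zip(pixel, lo, hi))

    def classify_macroblock(target_region):
        """Run full classification only if enough pixels fall in range."""
        matches = sum(1 for px in target_region if in_range(px))
        if matches > THRESHOLD:
            return "classify"   # proceed with object classification
        return "skip"           # cheap early exit; classifier stays loaded

    region = [(5, 200, 180), (8, 150, 120), (3, 210, 200),
              (7, 180, 140), (90, 40, 40)]
    print(classify_macroblock(region))  # "classify": 4 of 5 pixels match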
Abstract:
Methods and systems for positioning a drone that includes a camera. One method includes generating, from a first image capture position of the drone, a first image or video having a first field of view. The method further includes determining a plurality of regions of interest, each of the plurality of regions of interest located within a predetermined area and having an associated priority. The method further includes determining a second image capture position different from the first image capture position for the drone as a function of the associated priority and a viewing distance of the camera. The method further includes generating a command for the drone to move to the second image capture position. The method further includes moving the drone based on the command. The method further includes generating, from the second image capture position, a second image or video having a second field of view.
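The abstract leaves the position function unspecified; one plausible rule, sketched below with assumed 2-D coordinates, is to hover over the priority-weighted centroid of the regions of interest at an altitude set by the camera's viewing distance.

    # A sketch under stated assumptions, not the patented function.
    def second_capture_position(regions, viewing_distance_m):
        """regions: list of {'x': m, 'y': m, 'priority': weight}."""
        total = sum(r["priority"] for r in regions)
        x = sum(r["x"] * r["priority"] for r in regions) / total
        y = sum(r["y"] * r["priority"] for r in regions) / total
        return {"x": x, "y": y, "altitude_m": viewing_distance_m}

    rois = [{"x": 0.0, "y": 0.0, "priority": 1},
            {"x": 40.0, "y": 10.0, "priority": 3}]
    print(second_capture_position(rois, viewing_distance_m=25.0))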
Abstract:
A method and apparatus for imaging a scene. The method includes receiving a plurality of images of the scene from a plurality of first source devices. The method also includes receiving first metadata identifying a location and a field-of-view of each of the plurality of first source devices. The method also includes receiving second metadata identifying a location and a field-of-view of each of one or more available image source devices. The method also includes identifying overlapping portions of the plurality of images. The method also includes stitching the plurality of images together to form a combined image of the scene based on the overlapping portions of the plurality of images. The method also includes identifying a missing portion of the combined image of the scene and, responsive to identifying the missing portion, performing one or more actions to fill a part of the missing portion.
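A one-dimensional toy of the gap-filling step is sketched below: each source covers an angular interval, the uncovered part of the scene is computed, and an available source whose field-of-view overlaps the gap is selected. Interval arithmetic stands in for the real stitching; all names are hypothetical.

    def missing_portions(scene, fovs):
        """scene, fovs: (start_deg, end_deg) intervals; returns gaps."""
        gaps, cursor = [], scene[0]
        for start, end in sorted(fovs):
            if start > cursor:
                gaps.append((cursor, min(start, scene[1])))
            cursor = max(cursor, end)
        if cursor < scene[1]:
            gaps.append((cursor, scene[1]))
        return gaps

    def pick_filler(gaps, available):
        """Choose an available source device overlapping the first gap."""
        for name, (start, end) in available.items():
            if gaps and start < gaps[0][1] and end > gaps[0][0]:
                return name
        return None

    gaps = missing_portions((0, 180), [(0, 60), (100, 180)])
    print(gaps, pick_filler(gaps, {"cam7": (50, 120)}))  # [(60, 100)] cam7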
Abstract:
Disclosed herein are methods and systems for object recognition and link integration in a composite video stream. One embodiment takes the form of a process that includes detecting an object of interest in a set of video frames. The process also includes tracking the movements of the detected object of interest across a subset of the video frames in the set of video frames. The process further includes generating a composite video stream from the video frames in the subset. The composite video stream shows the tracked movements of the detected object of interest without showing background data from the video frames in the subset. The process also includes outputting the generated composite video stream.
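A simplified sketch of the compositing step follows, with frames reduced to strings and the detector replaced by a given list of per-frame object positions. Only the tracked object pixels are copied onto a blank canvas, so no background data from the source frames appears in the output.

    BLANK = " "

    def composite(frames, boxes, width=12):
        """Overlay each frame's tracked object pixels on a blank canvas."""
        canvas = [BLANK] * width
        for frame, (start, end) in zip(frames, boxes):
            canvas[start:end] = frame[start:end]  # keep object, drop background
        return "".join(canvas)

    frames = ["..X.........", "....X.......", "......X....."]
    boxes = [(2, 3), (4, 5), (6, 7)]      # tracked object positions
    print(composite(frames, boxes))        # "  X X X     " (object only)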