Abstract:
Disclosed herein are a human behavior recognition apparatus and method. The human behavior recognition apparatus includes a multimodal sensor unit for generating at least one of image information, sound information, location information, and Internet-of-Things (IoT) information of a person using a multimodal sensor; a contextual information extraction unit for extracting, from the at least one piece of generated information, contextual information for recognizing actions of the person; a human behavior recognition unit for generating behavior recognition information by recognizing the actions of the person using the contextual information, and for recognizing a final action of the person using the behavior recognition information and behavior intention information; and a behavior intention inference unit for generating the behavior intention information based on the context of action occurrence related to each of the actions of the person included in the behavior recognition information.
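The flow described above, contextual information to behavior recognition, re-weighted by inferred behavior intention to select a final action, can be sketched minimally. All action names, cue values, and weights below are invented for illustration; the abstract does not specify a scoring scheme.

```python
# Hypothetical sketch of the recognition flow: contextual cues yield
# per-action scores, intention inference re-weights them by the context
# in which each action occurs, and the final action is the argmax of
# the combined score. All names and weights are illustrative.

def recognize_actions(contextual_info):
    """Map extracted contextual cues to initial per-action scores."""
    return {action: sum(cues) / len(cues)
            for action, cues in contextual_info.items()}

def infer_intention(behavior_scores, occurrence_context):
    """Weight each recognized action by its context of occurrence."""
    return {a: occurrence_context.get(a, 1.0) for a in behavior_scores}

def final_action(contextual_info, occurrence_context):
    """Combine behavior recognition and intention to pick a final action."""
    scores = recognize_actions(contextual_info)
    intention = infer_intention(scores, occurrence_context)
    return max(scores, key=lambda a: scores[a] * intention[a])
```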
Abstract:
An apparatus and method for capturing a light field image. The light field image capture apparatus includes a mirror array or a micro-lens array. An activated mirror, selected from among the multiple mirrors of the mirror array, provides light to the multiple elements of an image sensor. An activated lens, selected from among the multiple lenses of the micro-lens array, provides light to the multiple elements of the image sensor. Based on timesharing, the mirror selected as the activated mirror from among the multiple mirrors is changed, and the lens selected as the activated lens from among the multiple lenses is changed.
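The timesharing of activated elements could be sketched as a simple round-robin schedule. The scheduling policy below is an assumption; the abstract only states that the activated mirror or lens is changed over time.

```python
from itertools import cycle

# Illustrative timesharing schedule: exactly one mirror (or lens) is
# active per capture slot, and activation cycles round-robin over the
# array. The round-robin policy is an assumption, not the patented one.

def timeshared_activation(num_elements, num_slots):
    """Return the index of the element activated at each timesharing slot."""
    order = cycle(range(num_elements))
    return [next(order) for _ in range(num_slots)]
```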
Abstract:
An apparatus includes an image receiving module configured to collect a depth image provided from a camera, a human body detection module configured to detect a human body from the collected depth image, and an activity recognition module configured to recognize an action of the human body on the basis of a 3-dimensional action volume extracted from the human body and a previously learned action model.
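One minimal way to match an extracted 3-dimensional action volume against previously learned action models is nearest-neighbour comparison. The voxel shapes and action names below are illustrative, not the patented model.

```python
import numpy as np

# Sketch, not the patented method: compare a 3-D action volume (e.g.
# stacked binary depth silhouettes) against previously learned per-action
# mean volumes and return the closest action by Euclidean distance.

def recognize_activity(action_volume, learned_models):
    """Nearest-neighbour match between a volume and learned action models."""
    best_action, best_dist = None, float("inf")
    for action, model in learned_models.items():
        dist = np.linalg.norm(action_volume - model)
        if dist < best_dist:
            best_action, best_dist = action, dist
    return best_action
```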
Abstract:
A method for tracking an object in an object tracking apparatus includes receiving an image frame of an image; detecting a target, a depth-analogous obstacle, and an appearance-analogous obstacle; and tracking the target, the depth-analogous obstacle, and the appearance-analogous obstacle. When the detected target overlaps the depth-analogous obstacle, the variation of the tracking score of the target is compared with that of the depth-analogous obstacle. Further, the method includes continuously tracking the target when the variation of the tracking score of the target is below that of the depth-analogous obstacle, and, when the variation of the tracking score of the target is above that of the depth-analogous obstacle, processing a next frame and re-detecting the target.
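The occlusion-handling decision above, comparing the variation of the target's tracking score with that of the depth-analogous obstacle, can be sketched as follows; the score histories are invented inputs.

```python
# Minimal sketch of the decision rule: keep tracking the target only
# while its tracking score varies less than the depth-analogous
# obstacle's; otherwise the target is assumed occluded and should be
# re-detected in the next frame. Score lists are illustrative.

def keep_tracking(target_scores, obstacle_scores):
    """True if the target's latest score variation is below the obstacle's."""
    target_var = abs(target_scores[-1] - target_scores[-2])
    obstacle_var = abs(obstacle_scores[-1] - obstacle_scores[-2])
    return target_var < obstacle_var
```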
Abstract:
Disclosed herein are an apparatus and method for informing a bus driver of a user's intention to get on or off a bus. The apparatus for informing a bus driver of a user's intention to get on or off a bus includes a getting-on information collection device. The getting-on information collection device includes a getting-on information reception unit, an arrival-expected bus identification unit, and a transmission information transmission unit. The getting-on information reception unit receives getting-on information entered by a passenger who desires to get on a bus at a bus stop via a getting-on information transfer device of the bus stop. The arrival-expected bus identification unit identifies, based on a bus number, the bus expected to arrive having the earliest expected arrival time at the bus stop. The transmission information transmission unit transmits transmission information to the getting-on information notification device of the bus expected to arrive.
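The arrival-expected bus identification step reduces to taking the minimum over expected arrival times among buses with the requested number. The record fields below (`number`, `eta_min`) are hypothetical.

```python
# Illustrative sketch of arrival-expected bus identification: among
# buses sharing the requested bus number, pick the one with the
# earliest expected arrival time at the stop. Field names are invented.

def identify_arrival_expected_bus(buses, bus_number):
    """Return the bus of the given number with the earliest expected arrival."""
    candidates = [b for b in buses if b["number"] == bus_number]
    return min(candidates, key=lambda b: b["eta_min"]) if candidates else None
```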
Abstract:
Disclosed herein is a method for integrated anomaly detection. The method includes detecting a thing object and a human object in an input video using a first neural network, tracking the human object, and detecting an anomalous situation based on an object detection result and a human object tracking result.
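The pipeline of object detection, human tracking, and anomaly detection from both results might be sketched with a stub detector. The anomaly rule used here (a tracked person overlapping a flagged thing object) is an invented example, not the patented criterion.

```python
# Illustrative pipeline only: the patent's first neural network is
# replaced by pre-computed boxes, and the anomaly rule (a tracked human
# overlapping a flagged thing object) is an invented example of
# combining the detection result with the tracking result.

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def detect_anomaly(human_tracks, thing_detections,
                   flagged=frozenset({"abandoned_bag"})):
    """Flag an anomalous situation when a tracked human overlaps a flagged thing."""
    for human_box in human_tracks:
        for label, thing_box in thing_detections:
            if label in flagged and iou(human_box, thing_box) > 0.1:
                return True
    return False
```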
Abstract:
Disclosed herein are an apparatus and method for determining a modality of interaction between a user and a robot. The apparatus includes memory in which at least one program is recorded and a processor for executing the program. The program may perform recognizing a user state and an environment state by sensing circumstances around the robot, determining an interaction capability state associated with interaction with the user based on the recognized user state and environment state, and determining the interaction behavior of the robot for the interaction with the user based on the user state, the environment state, and the interaction capability state.
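The capability-state idea could be sketched as deriving the usable interaction channels from the user and environment states, then choosing a behavior. Every state field and rule below is an assumption, not the patented determination logic.

```python
# Hedged sketch: the interaction capability state is derived from user
# and environment states, and the robot's behavior is then chosen from
# all three. The fields (noise_db, lux, facing_robot) and thresholds
# are invented examples.

def interaction_capability(user_state, env_state):
    """Derive which interaction channels are currently usable."""
    return {
        "speech": env_state["noise_db"] < 70 and user_state["facing_robot"],
        "gesture": env_state["lux"] > 50,
    }

def choose_behavior(user_state, env_state):
    """Pick an interaction behavior from the states and capability state."""
    cap = interaction_capability(user_state, env_state)
    if cap["speech"]:
        return "speak"
    if cap["gesture"]:
        return "gesture"
    return "approach_user"
```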
Abstract:
An apparatus includes: an input/output interface configured to receive a reference surface model and a floating surface model; a memory storing instructions for registration of the reference surface model and the floating surface model; and a processor configured to register the reference surface model and the floating surface model according to the instructions. The instructions perform: selecting initial transformation parameters corresponding to the floating surface model by comparing depth images of the reference surface model and the floating surface model; transforming the floating surface model according to the initial transformation parameters; calculating compensation transformation parameters through a matrix calculated by applying singular value decomposition to a cross-covariance matrix between the reference surface model and the floating surface model; and transforming the floating surface model according to the compensation transformation parameters, thereby executing registration of the reference surface model and the floating surface model.
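The compensation-transformation step matches the classical closed-form rigid alignment via SVD of a cross-covariance matrix. Below is a standard sketch of that technique under the assumption of known point correspondences; it is not necessarily the patent's exact procedure.

```python
import numpy as np

# Standard SVD-based rigid alignment sketch: build the cross-covariance
# matrix between centred corresponding points of the floating and
# reference models, take its SVD, and recover rotation R and
# translation t. Assumes known one-to-one correspondences.

def compensation_transform(reference, floating):
    """Rigid (R, t) mapping floating points onto reference points."""
    ref_c = reference.mean(axis=0)
    flo_c = floating.mean(axis=0)
    # Cross-covariance between centred correspondences.
    H = (floating - flo_c).T @ (reference - ref_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = ref_c - R @ flo_c
    return R, t
```

Registration is then executed by applying the result as `(floating @ R.T) + t`, which maps the floating surface model onto the reference surface model.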
Abstract:
A human information recognition method includes analyzing sensor data from multi-sensor resources placed in a recognition space to generate human information based on the sensor data, the human information including identity, location, and activity information of people present in the recognition space. Further, the human information recognition method includes mixing the sensor-based human information with human information acquired through interaction with the people present in the recognition space, and storing a human model of the people present in the recognition space in a database unit depending on the mixed human information.
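The mixing step might be sketched as interaction-acquired fields taking precedence over sensor-derived fields in a per-person record, with the merged record stored as that person's human model. The field names are invented.

```python
# Illustrative only: "mixing" is sketched as interaction-acquired
# fields overriding sensor-derived fields for the same person, and the
# merged record is stored in a database as the person's human model.

def mix_human_info(sensor_info, interaction_info):
    """Merge per-person records; interaction-acquired data takes precedence."""
    model = dict(sensor_info)
    model.update(interaction_info)
    return model

database = {}  # stands in for the database unit

def store_human_model(person_id, sensor_info, interaction_info):
    """Store the mixed human model for one person."""
    database[person_id] = mix_human_info(sensor_info, interaction_info)
```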
Abstract:
Disclosed herein are an apparatus and method for providing crosswalk pedestrian guidance based on an image and a beacon. The method for providing crosswalk pedestrian guidance based on an image and a beacon may include estimating a walking location based on a beacon signal corresponding to at least one traffic light and first-person view sensor information, analyzing a hazard factor around a pedestrian based on an image acquired from a camera corresponding to the traffic light, predicting a hazard around the pedestrian by considering the walking location, the hazard factor, and status information of the traffic light together, and providing walking guidance to a pedestrian guidance terminal based on the predicted hazard around the pedestrian.
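Combining the walking location, the hazard factor, and the traffic-light status into walking guidance could look like the rule sketch below; the specific rules and messages are invented examples, not the patented prediction logic.

```python
# Invented example of "considering together" the three inputs: a
# pedestrian inside the crosswalk during a red light, or near a
# detected hazard factor, triggers a warning message for the pedestrian
# guidance terminal. Locations, statuses, and messages are illustrative.

def predict_hazard(walking_location, hazard_factors, light_status):
    """Return a guidance message from location, hazards, and light status."""
    if walking_location == "in_crosswalk" and light_status == "red":
        return "warning: red light, return to the curb"
    if hazard_factors:
        return "caution: " + ", ".join(sorted(hazard_factors))
    if walking_location == "at_curb" and light_status == "green":
        return "safe to cross"
    return "wait"
```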