-
1.
Publication No.: US20220103984A1
Publication Date: 2022-03-31
Application No.: US17486018
Application Date: 2021-09-27
Inventor: Blagovest Iordanov VLADIMIROV, Sang Joon PARK, Jin Hee SON, So Yeon LEE, Chang Eun LEE, Sung Woo JUN, Eun Young CHO
IPC: H04W4/33 , H04W4/02 , H04W24/08 , H04B17/318 , G06N7/00
Abstract: Provided are a system and method for active data collection mode control that reduce the crowd-sourcing signal data collection required for fingerprint database (FPDB) maintenance. The system includes a mobile device configured to support a survey mode, a localization mode, and a crowd-sourcing mode, and a server configured to receive data from the mobile device, generate and update the FPDB, and control the data collection mode.
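The server-side mode control described in this abstract could be sketched as follows. This is a minimal illustration only; the `select_mode` policy, `RegionStats` fields, and thresholds are assumptions for illustration and do not come from the patent:

```python
from dataclasses import dataclass
from enum import Enum

class CollectionMode(Enum):
    SURVEY = "survey"                   # dense, operator-driven collection
    LOCALIZATION = "localization"       # position estimation only, no upload
    CROWD_SOURCING = "crowd_sourcing"   # opportunistic background collection

@dataclass
class RegionStats:
    fingerprint_age_days: float   # age of the newest fingerprint in a region
    sample_count: int             # fingerprints stored for that region

def select_mode(stats: RegionStats,
                max_age_days: float = 30.0,
                min_samples: int = 50) -> CollectionMode:
    """Hypothetical policy: collect actively only where the FPDB is sparse
    or stale; otherwise let the device run in plain localization mode."""
    if stats.sample_count < min_samples:
        return CollectionMode.SURVEY          # too little data: dense survey
    if stats.fingerprint_age_days > max_age_days:
        return CollectionMode.CROWD_SOURCING  # stale data: opportunistic refresh
    return CollectionMode.LOCALIZATION        # FPDB is healthy: no collection
```

The point of such a policy is that crowd-sourced uploads are requested only where they are actually needed, which is how the abstract's "reducing crowd-sourcing signal data collection" goal would be realized.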
-
2.
Publication No.: US20220121853A1
Publication Date: 2022-04-21
Application No.: US17505555
Application Date: 2021-10-19
Inventor: Jin Hee SON, Sang Joon PARK, Blagovest Iordanov VLADIMIROV, So Yeon LEE, Chang Eun LEE, Jin Mo CHOI, Sung Woo JUN, Eun Young CHO
Abstract: Provided is a segmentation and tracking system based on self-learning using patterns in video. The present invention includes a pattern-based labeling processing unit configured to extract patterns from a learning image and perform labeling in each pattern unit to generate self-learning labels in pattern units; a self-learning-based segmentation/tracking network processing unit configured to receive two adjacent frames extracted from the learning image and estimate the pattern classes in those two frames; a pattern class estimation unit configured to estimate a current labeling frame from a previous labeling frame, extracted from the image labeled by the pattern-based labeling processing unit, and a weighted sum of the estimated pattern classes of the previous frame of the learning image; and a loss calculation unit configured to calculate a loss by comparing the current labeling frame with the current labeling frame estimated by the pattern class estimation unit.
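The weighted-sum label estimation and loss computation in this abstract could look roughly like the following NumPy sketch. The function names, the blending weight, and the cross-entropy stand-in loss are illustrative assumptions, not details taken from the patent:

```python
import numpy as np

def estimate_labeling_frame(prev_label_frame, prev_pattern_probs, weight=0.6):
    """Blend the previous labeling frame (one-hot encoded) with the network's
    estimated pattern-class probabilities for the previous frame, then take
    the per-pattern argmax as the estimated current labeling frame."""
    n_classes = prev_pattern_probs.shape[-1]
    one_hot = np.eye(n_classes)[prev_label_frame]          # labels -> one-hot
    blended = weight * one_hot + (1.0 - weight) * prev_pattern_probs
    return blended.argmax(axis=-1)

def label_loss(estimated_labels, current_pattern_probs):
    """Stand-in loss: cross-entropy between the estimated labeling frame and
    the network's class probabilities for the current frame."""
    n_classes = current_pattern_probs.shape[-1]
    probs = current_pattern_probs.reshape(-1, n_classes)
    picked = probs[np.arange(probs.shape[0]), estimated_labels.ravel()]
    return float(-np.log(np.clip(picked, 1e-9, None)).mean())
```

With uniform network probabilities the blended estimate simply reproduces the previous labels, and the loss shrinks toward zero as the network's current-frame predictions agree with the estimated labeling frame.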
-
3.
Publication No.: US20220277555A1
Publication Date: 2022-09-01
Application No.: US17551038
Application Date: 2021-12-14
Inventor: So Yeon LEE, Blagovest Iordanov VLADIMIROV, Sang Joon PARK, Jin Hee SON, Chang Eun LEE, Sung Woo JUN, Eun Young CHO
Abstract: Provided is an atypical environment-based location recognition apparatus. The apparatus includes a sensing information acquisition unit configured to, from sensing data collected by sensor modules, detect object location information and semantic label information of a video image and detect an event in the video image; a walk navigation information provision unit configured to acquire user movement information; a metric map generation module configured to generate a video odometric map using sensing data collected through the sensing information acquisition unit and reflect the semantic label information; and a topology map generation module configured to generate a topology node using sensing data acquired through the sensing information acquisition unit and update the topology node through the collected user movement information.
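The topology-node generation and update step could be sketched as below: create a node when the user has moved far enough from every existing node, and otherwise merge new semantic labels into the nearest node. The `TopologyMap` class, spacing threshold, and label-merging rule are assumptions for illustration, not taken from the patent:

```python
import math
from dataclasses import dataclass, field

@dataclass
class TopologyNode:
    node_id: int
    position: tuple                                     # (x, y) odometry estimate
    semantic_labels: set = field(default_factory=set)   # e.g. {"door", "stairs"}

class TopologyMap:
    """Grow a sparse graph of nodes along the user's path."""
    def __init__(self, node_spacing=2.0):
        self.node_spacing = node_spacing
        self.nodes = []

    def update(self, position, labels=()):
        nearest = min(self.nodes,
                      key=lambda n: math.dist(n.position, position),
                      default=None)
        if nearest is None or math.dist(nearest.position, position) >= self.node_spacing:
            node = TopologyNode(len(self.nodes), tuple(position), set(labels))
            self.nodes.append(node)     # new place: add a node
            return node
        nearest.semantic_labels |= set(labels)  # known place: enrich its labels
        return nearest
```

Updating an existing node with user movement information rather than always adding nodes keeps the topology map compact, which matches the abstract's separation between the metric (odometric) map and the topology map.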
-
4.
Publication No.: US20210318693A1
Publication Date: 2021-10-14
Application No.: US17230360
Application Date: 2021-04-14
Inventor: Chang Eun LEE, Sang Joon PARK, So Yeon LEE, Blagovest Iordanov VLADIMIROV, Jin Hee SON, Sung Woo JUN, Eun Young CHO
IPC: G05D1/02 , G05D1/00 , G01S17/931
Abstract: Provided is a multi-agent based manned-unmanned collaboration system including: a plurality of autonomous driving robots configured to form a mesh network with neighboring autonomous driving robots, acquire visual information for generating situation recognition and spatial map information, and acquire distance information from the neighboring autonomous driving robots to generate location information in real time; a collaborative agent configured to construct location positioning information of a collaboration object, target recognition information, and spatial map information from the visual information, the location information, and the distance information collected from the autonomous driving robots, and provide information for supporting battlefield situation recognition, threat determination, and command decision using the generated spatial map information and the generated location information of the autonomous driving robots; and a plurality of smart helmets configured to display the location positioning information of the collaboration object, the target recognition information, and the spatial map information constructed through the collaborative agent and present this information to the wearers.
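Generating location information from inter-robot distance measurements, as the abstract describes, is commonly done by multilateration. A minimal least-squares sketch follows; the `trilaterate` function and the linearization against the first anchor are a standard textbook technique used here as an illustration, not the patent's specific method:

```python
import numpy as np

def trilaterate(anchors, distances):
    """Estimate a 2-D position from neighbor positions (anchors, shape (n, 2))
    and measured ranges (distances, shape (n,)), n >= 3.

    Subtracting the range equation of anchor 0 from each other anchor's
    equation cancels the quadratic terms, leaving a linear system in the
    unknown position, solved here in the least-squares sense."""
    anchors = np.asarray(anchors, dtype=float)
    distances = np.asarray(distances, dtype=float)
    x0, d0 = anchors[0], distances[0]
    A = 2.0 * (anchors[1:] - x0)
    b = (d0 ** 2 - distances[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(x0 ** 2))
    position, *_ = np.linalg.lstsq(A, b, rcond=None)
    return position
```

With noisy real measurements the same least-squares form simply averages out the range errors, which is why each robot needs distance readings from several mesh neighbors to localize reliably.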
-