-
Publication No.: US20230177709A1
Publication Date: 2023-06-08
Application No.: US18102527
Filing Date: 2023-01-27
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventor: Taehee LEE , Sungwon KIM , Saeyoung KIM , Yoojeong LEE , Junghwan LEE
IPC: G06T7/55 , G06V10/22 , G06T7/521 , G01S17/87 , G01S17/894
CPC classification number: G06T7/55 , G06V10/22 , G06T7/521 , G01S17/87 , G01S17/894 , G06T2207/10028 , G06T2207/10024 , G06T2207/10012 , G06T2207/20212
Abstract: An electronic device is disclosed. The electronic device may comprise a first image sensor, a second image sensor, and a processor, wherein the processor may: acquire a first depth image and a confidence map by using the first image sensor; acquire an RGB image by using the second image sensor; acquire a second depth image on the basis of the confidence map and the RGB image; and acquire a third depth image by composing the first depth image and the second depth image on the basis of the pixel value of the confidence map.
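The abstract describes fusing a sensor depth image with an RGB-derived depth image using a confidence map as a per-pixel weight. The sketch below is only an illustration of that idea, not the patent's implementation; the function name, the soft blending, and the low-confidence cutoff are assumptions, and the depth images and confidence map are assumed to be aligned, same-resolution arrays with confidence in [0, 1].

```python
# Illustrative sketch only; names and the blending rule are hypothetical.
import numpy as np

def fuse_depth(depth_first: np.ndarray,
               depth_second: np.ndarray,
               confidence: np.ndarray,
               low_conf: float = 0.05) -> np.ndarray:
    """Blend two depth images using the confidence map as a per-pixel weight.

    Where confidence in the first depth image is high, its values dominate;
    where confidence is low, the RGB-based (second) depth image is used.
    """
    w = np.clip(confidence, 0.0, 1.0)                     # per-pixel weight
    fused = w * depth_first + (1.0 - w) * depth_second
    # Hard-switch to the second depth image where confidence is very low.
    return np.where(confidence < low_conf, depth_second, fused)

# Example with synthetic data
h, w = 120, 160
d1 = np.random.uniform(0.5, 4.0, (h, w)).astype(np.float32)    # first depth image
d2 = np.random.uniform(0.5, 4.0, (h, w)).astype(np.float32)    # second depth image
conf = np.random.uniform(0.0, 1.0, (h, w)).astype(np.float32)  # confidence map
d3 = fuse_depth(d1, d2, conf)                                   # third (fused) depth image
```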
-
Publication No.: US20240017406A1
Publication Date: 2024-01-18
Application No.: US18224881
Filing Date: 2023-07-21
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventor: Yongkook KIM , Saeyoung KIM , Junghoe KIM , Hyeontaek LIM , Boseok MOON
CPC classification number: B25J9/163 , B25J11/0005 , G10L15/22 , G10L15/26
Abstract: A robot transmits a command to control an external device around the robot based on pre-stored environment information while the robot is operating in a learning mode. The external device makes a noise as part of its operation. Also, the robot outputs user speech for learning while the external device is operating. The robot learns a speech recognition model based on the noise and speech of a user acquired through a microphone of the robot. The speech recognition model is then used by the robot or by another device to better understand the user when the user talks. The robot is then able to more accurately understand and properly execute speech commands from the user.
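The abstract describes collecting appliance noise together with user speech so the speech recognition model can be adapted to the robot's real acoustic environment. One common way to build such training data is to overlay recorded noise on speech at controlled SNRs; the sketch below shows only that generic step, with hypothetical names, and does not claim to reproduce the patent's learning procedure.

```python
# Illustrative sketch only; assumes clean speech and appliance-noise recordings
# are mono NumPy arrays at the same sample rate.
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Overlay appliance noise on an utterance at a target SNR to produce a
    training example matching the robot's operating environment."""
    if len(noise) < len(speech):
        noise = np.tile(noise, int(np.ceil(len(speech) / len(noise))))
    noise = noise[: len(speech)]
    p_speech = np.mean(speech ** 2) + 1e-12
    p_noise = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + scale * noise

# Build a few noisy training utterances at several SNRs (stand-in signals here).
sr = 16000
speech = np.random.randn(sr * 2).astype(np.float32)   # recorded user utterance
appliance = np.random.randn(sr).astype(np.float32)    # recorded external-device noise
training_set = [mix_at_snr(speech, appliance, snr) for snr in (0.0, 5.0, 10.0)]
```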
-
Publication No.: US20240083033A1
Publication Date: 2024-03-14
Application No.: US18509858
Filing Date: 2023-11-15
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventor: Hyeontaek LIM , Saeyoung KIM , Yongkook KIM , Boseok MOON
CPC classification number: B25J9/1694 , B25J19/02 , B25J19/026 , G06V20/50 , G06V40/10 , G10L15/22 , H04R1/028 , H04R1/406 , H04R3/005
Abstract: A robot includes: a light detection and ranging (LiDAR) sensor; a plurality of directional microphones; and at least one processor configured to: identify, based on sensing data obtained through the LiDAR sensor, an object in a vicinity of the robot, identify, based on the type of the object, a weight to apply to an audio signal received through a directional microphone corresponding to a location of the object from among the plurality of directional microphones, obtain context information of the robot based on the sensing data, identify, based on the context information, a pre-processing model corresponding to each directional microphone of the plurality of directional microphones, apply the weight to an audio signal received through the directional microphone corresponding to the location of the object among a plurality of audio signals received through the plurality of directional microphones, obtain a plurality of pre-processed audio signals by inputting the audio signal to which the weight has been applied, and the remaining audio signals into the pre-processing model corresponding to the respective directional microphone, and perform voice recognition based on the plurality of pre-processed audio signals.
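The abstract describes weighting the channel of the directional microphone that faces a LiDAR-detected object, with the weight depending on the object's type, before per-channel pre-processing and voice recognition. The sketch below illustrates only that weighting step; the microphone headings, weight table, and function names are assumptions, not the patent's values.

```python
# Illustrative sketch only; names and numbers are hypothetical.
import numpy as np

MIC_ANGLES = np.array([0.0, 90.0, 180.0, 270.0])     # directional mic headings (deg)
TYPE_WEIGHTS = {"person": 1.5, "tv": 0.3, "unknown": 1.0}

def weight_mic_signals(frames: np.ndarray, obj_angle: float, obj_type: str) -> np.ndarray:
    """Boost or attenuate the microphone facing a LiDAR-detected object.

    frames: array of shape (num_mics, num_samples), one frame per directional mic.
    A person near a mic gets that channel boosted; a noise source such as a TV
    gets it attenuated, before the channels go to per-mic pre-processing.
    """
    diffs = np.abs((MIC_ANGLES - obj_angle + 180.0) % 360.0 - 180.0)
    target = int(np.argmin(diffs))                     # mic closest to the object
    weights = np.ones(frames.shape[0])
    weights[target] = TYPE_WEIGHTS.get(obj_type, 1.0)
    # Each weighted channel would then pass through its context-dependent
    # pre-processing model (e.g., denoising) and on to voice recognition.
    return frames * weights[:, None]

frames = np.random.randn(4, 16000).astype(np.float32)
weighted = weight_mic_signals(frames, obj_angle=75.0, obj_type="person")
```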
-
Publication No.: US20230245337A1
Publication Date: 2023-08-03
Application No.: US18131270
Filing Date: 2023-04-05
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventor: Junghwan LEE , Sungwon KIM , Saeyoung KIM , Yoojeong LEE , Taehee LEE
CPC classification number: G06T7/70 , G06T7/50 , G06V10/25 , G06V10/761 , G06T2207/10024 , G06T2207/20084
Abstract: An electronic apparatus includes: a camera; and a processor configured to: identify a first area of a threshold size in an image obtained by the camera, the first area including an object of interest; identify depth information of the object of interest and depth information of a plurality of background objects included in an area excluding the object of interest in the first area; and identify a background object where the object of interest is located, from among the plurality of background objects, based on a difference between the depth information of the object of interest and the depth information of each of the plurality of background objects.
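The abstract describes choosing, among the background objects around an object of interest, the one whose depth is closest to the object's depth (e.g., the surface it rests on). The sketch below shows only that selection step; the data structure and representative-depth assumption are hypothetical, not the patent's representation.

```python
# Illustrative sketch only; assumes each detected object carries a representative
# depth value (e.g., the median depth over its image region), in meters.
from dataclasses import dataclass

@dataclass
class DetectedObject:
    label: str
    depth_m: float

def find_supporting_background(object_of_interest: DetectedObject,
                               background_objects: list[DetectedObject]) -> DetectedObject:
    """Pick the background object whose depth differs least from the object of
    interest, i.e., the surface the object is most likely located on."""
    return min(background_objects,
               key=lambda bg: abs(bg.depth_m - object_of_interest.depth_m))

cup = DetectedObject("cup", 1.42)
candidates = [DetectedObject("table", 1.45),
              DetectedObject("sofa", 2.30),
              DetectedObject("floor", 1.90)]
print(find_supporting_background(cup, candidates).label)   # -> "table"
```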
-
Publication No.: US20210251454A1
Publication Date: 2021-08-19
Application No.: US17177742
Filing Date: 2021-02-17
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventor: Junghwan LEE , Daehun KIM , Saeyoung KIM , Soonhyuk HONG
Abstract: A method of controlling a robot includes obtaining a first image and a second image of a plurality of objects, the first and second image being captured from different positions; obtaining, from the first and second images, a plurality of candidate positions corresponding to each of the plurality of objects, based on a capturing position of each of the first and second images and a direction to each of the plurality of objects from each capturing position; obtaining distance information between each capturing position and each of the plurality of objects in the first and second images by analyzing the first and second images; and identifying a position of each of the plurality of objects from among the plurality of candidate positions based on the distance information.
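The abstract combines bearing-based candidate positions from two capture positions with image-derived distance estimates to resolve each object's true position. The sketch below illustrates only the generic geometry: intersecting two bearing rays to get a candidate and checking it against a distance estimate. The coordinate frame, function names, and tolerance are assumptions, not the patent's method.

```python
# Illustrative sketch only; 2-D map-frame geometry with hypothetical names.
import math

def triangulate(p1, bearing1_deg, p2, bearing2_deg):
    """Intersect two rays given start points and bearings (degrees); the
    intersection is one candidate object position."""
    d1 = (math.cos(math.radians(bearing1_deg)), math.sin(math.radians(bearing1_deg)))
    d2 = (math.cos(math.radians(bearing2_deg)), math.sin(math.radians(bearing2_deg)))
    denom = d1[0] * (-d2[1]) - d1[1] * (-d2[0])
    if abs(denom) < 1e-9:
        return None                                   # rays are parallel
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t = (rx * (-d2[1]) - ry * (-d2[0])) / denom
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

def consistent(candidate, capture_pos, est_distance, tol=0.3):
    """Keep a candidate only if its range from the capture position matches the
    image-derived distance estimate within a tolerance (meters)."""
    return abs(math.dist(candidate, capture_pos) - est_distance) <= tol

cand = triangulate((0.0, 0.0), 45.0, (2.0, 0.0), 135.0)        # -> (1.0, 1.0)
print(cand, consistent(cand, (0.0, 0.0), est_distance=1.4))    # True
```

With several objects in view, ray intersections also produce "ghost" candidates; checking each candidate's range against the per-image distance estimates is one simple way to discard them.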
-
Publication No.: US20210005180A1
Publication Date: 2021-01-07
Application No.: US16979344
Filing Date: 2019-03-20
Applicant: Samsung Electronics Co., Ltd.
Inventor: Saeyoung KIM
IPC: G10L15/06 , G10L13/033
Abstract: The present disclosure relates to an artificial intelligence (AI) system utilizing a machine learning algorithm such as deep learning, etc. and an application thereof. In particular, a controlling method of an electronic apparatus includes obtaining a user voice of a first user, converting the voice of the first user into a first spectrogram, obtaining a second spectrogram by inputting the first spectrogram to a trained model through an artificial intelligence algorithm, converting the second spectrogram into a voice of a second user, and outputting the converted second user voice. Here, the trained model is a model trained to obtain a spectrogram of a style of the second user voice by inputting a spectrogram of a style of the first user voice. In particular, at least part of the controlling method of the electronic apparatus uses an artificial intelligence model trained according to at least one of machine learning, a neural network or a deep learning algorithm.
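The abstract describes a voice-conversion pipeline: waveform to spectrogram, spectrogram style conversion by a trained model, then back to a waveform. The sketch below only mirrors that pipeline shape with a placeholder identity model; librosa's STFT and Griffin-Lim are assumptions about tooling and are not named in the patent.

```python
# Illustrative pipeline sketch only; the conversion model here is a placeholder,
# not the patent's trained model. Assumes a mono waveform and librosa installed.
import numpy as np
import librosa

def convert_voice(waveform: np.ndarray, style_model) -> np.ndarray:
    """Voice -> first spectrogram -> style-converted second spectrogram -> voice."""
    spec_src = np.abs(librosa.stft(waveform, n_fft=1024, hop_length=256))
    spec_tgt = style_model(spec_src)                   # trained model would go here
    # Recover a waveform in the second speaker's style (phase via Griffin-Lim).
    return librosa.griffinlim(spec_tgt, hop_length=256)

identity_model = lambda s: s                            # stand-in for the trained model
y = np.random.randn(16000).astype(np.float32)
y_converted = convert_voice(y, identity_model)
```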