-
1.
Publication Number: US20240148321A1
Publication Date: 2024-05-09
Application Number: US18548911
Application Date: 2022-03-24
Applicant: Bodygram, Inc.
Inventor: Subas Chhatkuli, Kyohei Kamiyama, Chong Jin Koh
CPC classification number: A61B5/4869, A61B5/1072, A61B5/1079, G06V10/82, G06V40/168, G16H10/60, G06V2201/03
Abstract: Systems and methods for generating a prediction of a body composition of a user using an image capturing device are disclosed. The systems and methods can be used to predict body compositions such as body fat percentage, water content percentage, muscle mass, bone mass, and so on, from a single user image. The methods include the steps of receiving one or more user images and one or more user parameters, generating one or more key points based on the one or more user images, and generating a prediction of the body composition of the user based on the one or more key points and the one or more user parameters, using a body composition deep learning network (DLN). In one embodiment, the body composition DLN comprises a face image DLN, a body feature DLN, and an output DLN.
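The abstract above describes a three-branch architecture (face image DLN, body feature DLN, output DLN). Below is a minimal illustrative sketch of how such a composition could be wired up; the layer sizes, key point count, choice of user parameters, and four output quantities are assumptions for demonstration, not details taken from the patent.

```python
# Illustrative three-branch sketch: face image DLN + body feature DLN + output DLN.
# Layer sizes, key point count, user parameters, and the four outputs are assumptions.
import torch
import torch.nn as nn

class BodyCompositionDLN(nn.Module):
    def __init__(self, num_keypoints=17, num_user_params=4, num_outputs=4):
        super().__init__()
        # Face image DLN: a small CNN that embeds a cropped face image.
        self.face_dln = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 64),
        )
        # Body feature DLN: an MLP over 2D key points plus user parameters
        # (e.g. height, weight, age, gender).
        self.body_dln = nn.Sequential(
            nn.Linear(num_keypoints * 2 + num_user_params, 128), nn.ReLU(),
            nn.Linear(128, 64),
        )
        # Output DLN: fuses both embeddings and regresses body composition values
        # (e.g. body fat %, water content %, muscle mass, bone mass).
        self.output_dln = nn.Sequential(
            nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, num_outputs),
        )

    def forward(self, face_image, keypoints, user_params):
        face_emb = self.face_dln(face_image)
        body_emb = self.body_dln(torch.cat([keypoints.flatten(1), user_params], dim=1))
        return self.output_dln(torch.cat([face_emb, body_emb], dim=1))

# One face crop, 17 detected key points, and 4 user parameters from a single user image.
model = BodyCompositionDLN()
prediction = model(torch.randn(1, 3, 128, 128), torch.randn(1, 17, 2), torch.randn(1, 4))
print(prediction.shape)  # torch.Size([1, 4])
```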
-
2.
Publication Number: US12138073B2
Publication Date: 2024-11-12
Application Number: US18548911
Application Date: 2022-03-24
Applicant: Bodygram, Inc.
Inventor: Subas Chhatkuli, Kyohei Kamiyama, Chong Jin Koh
Abstract: Systems and methods for generating a prediction of a body composition of a user using an image capturing device are disclosed. The systems and methods can be used to predict body compositions such as body fat percentage, water content percentage, muscle mass, bone mass, and so on, from a single user image. The methods include the steps of receiving one or more user images and one or more user parameters, generating one or more key points based on the one or more user images, and generating a prediction of the body composition of the user based on the one or more key points and the one or more user parameters, using a body composition deep learning network (DLN). In one embodiment, the body composition DLN comprises a face image DLN, a body feature DLN, and an output DLN.
-
3.
Publication Number: US11869152B2
Publication Date: 2024-01-09
Application Number: US17924994
Application Date: 2021-05-11
Applicant: Bodygram, Inc.
Inventor: Chong Jin Koh, Kyohei Kamiyama, Nobuyuki Hayashi
IPC: G06T19/00, G06T7/60, G06Q30/0601, G06T17/20
CPC classification number: G06T19/00, G06Q30/0621, G06Q30/0643, G06T7/60, G06T17/20, G06T2207/20081, G06T2207/30196, G06T2219/012
Abstract: The present invention provides systems and methods for generating a 3D product mesh model and product dimensions from user images. The system is configured to receive one or more images of a user's body part, extract a body part mesh having a plurality of body part key points, generate a product mesh from an identified subset of the body part mesh, and generate one or more product dimensions in response to the selection of one or more key points from the product mesh. The system may output the product mesh, the product dimensions, or a manufacturing template of the product. In some embodiments, the system uses one or more machine learning modules to generate the body part mesh, identify the subset of the body part mesh, generate the product mesh, select the one or more key points, and/or generate the one or more product dimensions.
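As a rough illustration of the "identify a subset of the body part mesh, then derive a product dimension from selected key points" step, the following sketch selects mesh vertices near a key point and estimates a circumference. The synthetic wrist ring, selection radius, and specific measurement are hypothetical assumptions, not the patent's method.

```python
# Hypothetical NumPy sketch: pick the subset of body part mesh vertices near a
# product key point and derive one product dimension (a circumference estimate).
import numpy as np

def select_product_vertices(body_vertices, key_point, radius):
    """Identify the subset of the body part mesh lying near a product key point."""
    distances = np.linalg.norm(body_vertices - key_point, axis=1)
    return body_vertices[distances < radius]

def estimate_circumference(product_vertices):
    """Approximate a product dimension by ordering the selected vertices by angle
    around their centroid (in the x-y plane) and summing the edge lengths."""
    ring = product_vertices[:, :2]
    center = ring.mean(axis=0)
    angles = np.arctan2(ring[:, 1] - center[1], ring[:, 0] - center[0])
    ordered = ring[np.argsort(angles)]
    closed = np.vstack([ordered, ordered[:1]])   # close the loop
    return np.linalg.norm(np.diff(closed, axis=0), axis=1).sum()

# Synthetic wrist-like ring of mesh vertices with a 4 cm radius.
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
wrist = np.c_[4 * np.cos(theta), 4 * np.sin(theta), np.zeros_like(theta)]
subset = select_product_vertices(wrist, key_point=np.zeros(3), radius=5.0)
print(round(estimate_circumference(subset), 1))  # about 25.1 (2 * pi * 4)
```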
-
4.
Publication Number: US11010896B2
Publication Date: 2021-05-18
Application Number: US16697146
Application Date: 2019-11-26
Applicant: Bodygram, Inc.
Inventor: Kyohei Kamiyama, Chong Jin Koh
Abstract: Disclosed are systems and methods for generating data sets for training deep learning networks for key point annotations and measurements extraction from photos taken using a mobile device camera. The method includes the steps of receiving a 3D scan model of a 3D object or subject captured from a 3D scanner and a 2D photograph of the same 3D object or subject at a virtual workspace. The 3D scan model is rigged with one or more key points. A superimposed image of a pose-adjusted and aligned 3D scan model superimposed over the 2D photograph is captured by a virtual camera in the virtual workspace. Training data for a key point annotation DLN is generated by repeating the steps for a plurality of objects belonging to a plurality of object categories. The key point annotation DLN learns from the training data to produce key point annotations of objects from 2D photographs captured using any mobile device camera.
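The core of the described pipeline is pairing a 2D photograph with key points carried by a pose-adjusted, aligned 3D scan model. The sketch below, assuming a simple pinhole virtual camera, shows how rigged 3D key points could be projected into the photograph's pixel space to form training annotations; the intrinsics and key point coordinates are made-up values for illustration only.

```python
# Minimal sketch, assuming a pinhole virtual camera: project rigged 3D key points
# of the aligned scan model into the 2D photograph to obtain training annotations.
import numpy as np

def project_keypoints(keypoints_3d, focal_length, image_center):
    """Project 3D key points (camera coordinates, z > 0) to 2D pixel locations."""
    x, y, z = keypoints_3d[:, 0], keypoints_3d[:, 1], keypoints_3d[:, 2]
    u = focal_length * x / z + image_center[0]
    v = focal_length * y / z + image_center[1]
    return np.stack([u, v], axis=1)

# Three rigged key points of the aligned scan model, in camera space (meters).
keypoints_3d = np.array([
    [0.00, -0.40, 2.5],   # e.g. head top
    [0.15,  0.10, 2.5],   # e.g. right shoulder
    [0.00,  0.80, 2.5],   # e.g. hip center
])
annotations_2d = project_keypoints(keypoints_3d, focal_length=1000.0, image_center=(540, 960))
print(annotations_2d)  # pixel coordinates aligned with the superimposed photograph
```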
-
5.
Publication Number: US20250064395A1
Publication Date: 2025-02-27
Application Number: US18941419
Application Date: 2024-11-08
Applicant: Bodygram, Inc.
Inventor: Subas Chhatkuli, Kyohei Kamiyama, Chong Jin Koh
Abstract: Systems and methods for generating a prediction of a body composition of a user using an image capturing device are disclosed. The systems and methods can be used to predict body compositions such as body fat percentage, water content percentage, muscle mass, bone mass, and so on, from a single user image. The methods include the steps of receiving one or more user images and one or more user parameters, generating one or more key points based on the one or more user images, and generating a prediction of the body composition of the user based on the one or more key points and the one or more user parameters, using a body composition deep learning network (DLN). In one embodiment, the body composition DLN comprises a face image DLN, a body feature DLN, and an output DLN.
-
6.
Publication Number: US11798299B2
Publication Date: 2023-10-24
Application Number: US17773661
Application Date: 2020-11-02
Applicant: Bodygram, Inc.
Inventor: Kyohei Kamiyama, Chong Jin Koh
CPC classification number: G06V20/647, G06N3/08, G06T7/0012, G06T7/344, G06T7/75, G06V10/255, G06V40/103, G06V40/107, G06T2207/20081, G06T2207/20084
Abstract: Disclosed are systems and methods for generating data sets for training deep learning networks for key point annotations and measurements extraction from photos taken using a mobile device camera. The method includes the steps of receiving a 3D scan model of a 3D object or subject captured from a 3D scanner and a 2D photograph of the same 3D object or subject at a virtual workspace. The 3D scan model is rigged with one or more key points. A superimposed image of a pose-adjusted and aligned 3D scan model superimposed over the 2D photograph is captured by a virtual camera in the virtual workspace. Training data for a key point annotation DLN is generated by repeating the steps for a plurality of objects belonging to a plurality of object categories. The key point annotation DLN learns from the training data to produce key point annotations of objects from 2D photographs captured using any mobile device camera.
-
7.
Publication Number: US10470510B1
Publication Date: 2019-11-12
Application Number: US16537542
Application Date: 2019-08-10
Applicant: Bodygram, Inc.
Inventor: Chong Jin Koh, Kyohei Kamiyama
Abstract: Disclosed are systems and methods for full body measurements extraction using a mobile device camera. The method includes the steps of receiving one or more user parameters from a user device; receiving at least one image from the user device, the at least one image containing the human and a background; performing body segmentation on the at least one image to identify one or more body features associated with the human from the background; performing annotation on the one or more identified body features to generate annotation lines on each body feature corresponding to body feature measurement locations utilizing a plurality of annotation deep-learning networks that have been separately trained on each body feature; generating body feature measurements from the one or more annotated body features utilizing a sizing machine-learning module based on the annotated body features and the one or more user parameters; and generating body size measurements by aggregating the body feature measurements for each body feature.
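The abstract above walks through a multi-stage flow. Here is a hedged, high-level skeleton of that flow (segmentation, per-feature annotation DLNs, sizing module, aggregation); the callables passed in are hypothetical stand-ins for the trained models, not the patent's implementation.

```python
# High-level skeleton: segmentation -> per-feature annotation DLNs -> sizing module
# -> aggregation. All models below are hypothetical stand-ins.
from typing import Callable, Dict

def extract_body_measurements(
    image,                                  # photo containing the person and a background
    user_params: Dict[str, float],          # e.g. {"height_cm": 175.0}
    segment: Callable,                      # body segmentation model
    annotators: Dict[str, Callable],        # one annotation DLN per body feature
    sizing_model: Callable,                 # sizing machine-learning module
) -> Dict[str, float]:
    # 1. Separate the body features from the background.
    body_features = segment(image)
    # 2. Run the annotation DLN trained for each body feature to place annotation
    #    lines at that feature's measurement locations.
    annotation_lines = {name: annotators[name](feat) for name, feat in body_features.items()}
    # 3. Turn annotation lines plus user parameters into per-feature measurements.
    feature_measurements = sizing_model(annotation_lines, user_params)
    # 4. Aggregate the per-feature measurements into the final body size measurements.
    return dict(feature_measurements)

# Toy usage with dummy stand-ins for the models.
sizes = extract_body_measurements(
    image=None,
    user_params={"height_cm": 175.0},
    segment=lambda img: {"waist": "waist-pixels", "chest": "chest-pixels"},
    annotators={"waist": lambda f: [f], "chest": lambda f: [f]},
    sizing_model=lambda lines, p: {name: p["height_cm"] * 0.4 for name in lines},
)
print(sizes)  # {'waist': 70.0, 'chest': 70.0}
```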
-
8.
Publication Number: US20230186567A1
Publication Date: 2023-06-15
Application Number: US17924994
Application Date: 2021-05-11
Applicant: Bodygram, Inc.
Inventor: Chong Jin Koh, Kyohei Kamiyama, Nobuyuki Hayashi
IPC: G06T19/00, G06T17/20, G06T7/60, G06Q30/0601
CPC classification number: G06T19/00, G06T17/20, G06T7/60, G06Q30/0621, G06Q30/0643, G06T2207/30196, G06T2207/20081, G06T2219/012
Abstract: The present invention provides systems and methods for generating a 3D product mesh model and product dimensions from user images. The system is configured to receive one or more images of a user's body part, extract a body part mesh having a plurality of body part key points, generate a product mesh from an identified subset of the body part mesh, and generate one or more product dimensions in response to the selection of one or more key points from the product mesh. The system may output the product mesh, the product dimensions, or a manufacturing template of the product. In some embodiments, the system uses one or more machine learning modules to generate the body part mesh, identify the subset of the body part mesh, generate the product mesh, select the one or more key points, and/or generate the one or more product dimensions.
-
9.
Publication Number: US10962404B2
Publication Date: 2021-03-30
Application Number: US16830497
Application Date: 2020-03-26
Applicant: Bodygram, Inc.
Inventor: Kyohei Kamiyama, Chong Jin Koh, Yu Sato
Abstract: Disclosed are systems and methods for body weight prediction from one or more images. The method includes the steps of receiving one or more subject parameters; receiving one or more images containing a subject; identifying one or more annotation key points for one or more body features underneath a clothing of the subject from the one or more images utilizing one or more annotation deep-learning networks; calculating one or more geometric features of the subject based on the one or more annotation key points; and generating a prediction of the body weight of the subject utilizing a weight machine-learning module based on the one or more geometric features of the subject and the one or more subject parameters.
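To make the "annotation key points to geometric features" step concrete, the small sketch below scales pixel distances to centimeters using the subject's stated height and emits features a weight regression module could consume. The key point names and chosen features are assumptions for illustration, not the patent's feature definitions.

```python
# Sketch of the "key points -> geometric features" step with assumed key point names.
import numpy as np

def geometric_features(keypoints: dict, height_cm: float) -> np.ndarray:
    """Compute scale-normalized geometric features from annotation key points."""
    px_height = abs(keypoints["head_top"][1] - keypoints["ankle"][1])
    cm_per_px = height_cm / px_height                     # pixel-to-centimeter scale
    shoulder_w = np.linalg.norm(np.subtract(keypoints["shoulder_l"], keypoints["shoulder_r"]))
    waist_w = np.linalg.norm(np.subtract(keypoints["waist_l"], keypoints["waist_r"]))
    return np.array([shoulder_w * cm_per_px, waist_w * cm_per_px, height_cm])

keypoints = {
    "head_top": (300, 50), "ankle": (300, 950),
    "shoulder_l": (220, 250), "shoulder_r": (380, 250),
    "waist_l": (250, 500), "waist_r": (350, 500),
}
features = geometric_features(keypoints, height_cm=175.0)
# These features, together with the subject parameters, would feed the weight module.
print(features)  # approximately [31.1, 19.4, 175.0]
```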
-
10.
Publication Number: US20230316046A1
Publication Date: 2023-10-05
Application Number: US18025648
Application Date: 2021-09-17
Applicant: Bodygram, Inc.
Inventor: Ito Takafumi, Kyohei Kamiyama
IPC: G06N3/045
CPC classification number: G06N3/045
Abstract: Methods and systems are disclosed for evaluating or training a machine learning module when its corresponding truth data sets are unavailable or unreliable. The methods and systems are configured for evaluating or training a target machine learning module having a first (system) input and a first output, wherein the target module is connected to a second machine learning module having an intermediate input (identical to the first output of the target module) and a second (system) output, by training the second module using received corresponding intermediate and output data sets, generating an evaluation data set using a received system input data set, and evaluating or training the target module using a loss function based on a distance metric between the evaluation data set and a received system output data set corresponding to the system input data set.
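Below is a hedged PyTorch sketch of the two-module arrangement described above: the second module is trained on its (intermediate, output) data sets, then frozen, so that a loss on the system output can be back-propagated through the target module whose own truth data are unavailable. The network shapes, random data, and MSE distance metric are illustrative assumptions.

```python
# Hedged sketch of the two-module scheme: train the second module on its
# (intermediate, output) data, freeze it, then train the target module through it
# using a distance metric on the system output. Shapes and MSE are assumptions.
import torch
import torch.nn as nn

target_module = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))  # system input -> intermediate
second_module = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))  # intermediate -> system output

# Step 1: train the second module on received (intermediate, output) data sets.
inter_data, out_data = torch.randn(256, 4), torch.randn(256, 2)
opt2 = torch.optim.Adam(second_module.parameters(), lr=1e-3)
for _ in range(200):
    opt2.zero_grad()
    nn.functional.mse_loss(second_module(inter_data), out_data).backward()
    opt2.step()

# Step 2: freeze the second module; train the target module using only
# (system input, system output) pairs, via a loss between the evaluation
# data set and the received system output data set.
for p in second_module.parameters():
    p.requires_grad_(False)
sys_in, sys_out = torch.randn(256, 8), torch.randn(256, 2)
opt1 = torch.optim.Adam(target_module.parameters(), lr=1e-3)
for _ in range(200):
    opt1.zero_grad()
    evaluation = second_module(target_module(sys_in))    # evaluation data set
    loss = nn.functional.mse_loss(evaluation, sys_out)   # distance metric
    loss.backward()
    opt1.step()
```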