-
Publication No.: US12277687B2
Publication Date: 2025-04-15
Application No.: US17246032
Filing Date: 2021-04-30
Inventor: Fatemeh Haghighi , Mohammad Reza Hosseinzadeh Taher , Zongwei Zhou , Jianming Liang
Abstract: Described herein are means for the generation of semantic genesis models through self-supervised learning in the absence of manual labeling, in which the trained semantic genesis models are then utilized for the processing of medical imaging. For instance, an exemplary system is specially configured with means for performing a self-discovery operation which crops 2D patches or crops 3D cubes from similar patient scans received at the system as input; means for transforming each anatomical pattern represented within the cropped 2D patches or the cropped 3D cubes to generate transformed 2D anatomical patterns or transformed 3D anatomical patterns; means for performing a self-classification operation of the transformed anatomical patterns by formulating a C-way multi-class classification task for representation learning; means for performing a self-restoration operation by recovering original anatomical patterns from the transformed 2D patches or transformed 3D cubes having transformed anatomical patterns embedded therein to learn different sets of visual representation; and means for providing a semantics-enriched pre-trained AI model having a trained encoder-decoder structure with skip connections in between based on the performance of the self-discovery operation, the self-classification operation, and the self-restoration operation. Other related embodiments are disclosed.
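The joint self-classification and self-restoration objective described in this abstract can be sketched as a two-headed loss. Everything below is an illustrative stand-in, not the patented implementation: the class count `C`, the identity "decoder", and the equal loss weighting are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

C = 4                                   # number of anatomical pattern classes (assumed)
patch = rng.random((16, 16))            # original cropped 2D patch
transformed = patch + 0.1 * rng.standard_normal(patch.shape)  # transformed input

# Self-classification: a C-way softmax head over (stand-in) encoder features.
logits = rng.standard_normal(C)         # stand-in for encoder + linear head
probs = np.exp(logits) / np.exp(logits).sum()
true_class = 2                          # index of this patch's discovered pattern
classification_loss = -np.log(probs[true_class])

# Self-restoration: the decoder recovers the original anatomical pattern;
# an identity "decoder" stands in here, scored by mean squared error.
restored = transformed
restoration_loss = np.mean((restored - patch) ** 2)

# The heads share the encoder and are trained jointly; equal weighting is assumed.
total_loss = classification_loss + restoration_loss
print(total_loss > 0)   # -> True
```

In the actual system both heads would sit on a shared encoder, with the restoration head feeding the decoder of the encoder-decoder structure.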
-
Publication No.: US12118455B2
Publication Date: 2024-10-15
Application No.: US15965691
Filing Date: 2018-04-27
Inventor: Jianming Liang , Zongwei Zhou , Jae Shin
IPC: G06N3/08 , G06F18/21 , G06F18/214 , G06F18/2413 , G06F18/28 , G06N3/045 , G06N3/047 , G06V10/44 , G06V10/764 , G06V10/772 , G06V10/774 , G06V10/776 , G06V10/82
CPC classification number: G06N3/08 , G06F18/2148 , G06F18/217 , G06F18/2413 , G06F18/28 , G06N3/045 , G06N3/047 , G06V10/454 , G06V10/764 , G06V10/772 , G06V10/7747 , G06V10/776 , G06V10/82
Abstract: Systems for selecting candidates for labelling and use in training a convolutional neural network (CNN) are provided, the systems comprising: a memory device; and at least one hardware processor configured to: receive a plurality of input candidates, wherein each candidate includes a plurality of identically labelled patches; and for each of the plurality of candidates: determine a plurality of probabilities, each of the plurality of probabilities being a probability that a unique patch of the plurality of identically labelled patches of the candidate corresponds to a label using a pre-trained CNN; identify a subset of candidates of the plurality of input candidates, wherein the subset does not include all of the plurality of candidates, based on the determined probabilities; query an external source to label the subset of candidates to produce labelled candidates; and train the pre-trained CNN using the labelled candidates.
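The per-patch probability scoring and subset selection above can be sketched as follows. The abstract does not fix the selection criterion, so mean patch entropy is used here purely as an illustrative notion of a "worthy" candidate; the function names are hypothetical.

```python
import numpy as np

def patch_entropy(p, eps=1e-12):
    """Binary entropy of a patch's predicted probability."""
    p = np.clip(p, eps, 1 - eps)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def select_candidates(candidate_probs, k):
    """candidate_probs: one array per candidate, holding the pre-trained
    CNN's probability for each of its identically labelled patches.
    Returns indices of the k candidates whose patches are, on average,
    most uncertain -- an illustrative selection criterion."""
    scores = [patch_entropy(np.asarray(p)).mean() for p in candidate_probs]
    return sorted(np.argsort(scores)[-k:].tolist())

# Three candidates, each with per-patch probabilities from the CNN:
cands = [np.array([0.95, 0.90, 0.97]),   # confident  -> low entropy
         np.array([0.50, 0.55, 0.45]),   # uncertain  -> high entropy
         np.array([0.10, 0.05, 0.08])]   # confident  -> low entropy
print(select_candidates(cands, 1))       # -> [1]
```

The selected subset would then be sent to the external source for labelling and used to continue training the CNN.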
-
Publication No.: US20220328189A1
Publication Date: 2022-10-13
Application No.: US17716929
Filing Date: 2022-04-08
Inventor: Zongwei Zhou , Jianming Liang
IPC: G16H50/20 , G06V10/774
Abstract: Embodiments described herein include systems for implementing annotation-efficient deep learning in computer-aided diagnosis. Exemplary embodiments include systems having a processor and a memory specially configured with instructions for learning annotation-efficient deep learning from non-labeled medical images to generate a trained deep-learning model by applying a multi-phase model training process via specially configured instructions for pre-training a model by executing a one-time learning procedure using an initial annotated image dataset; iteratively re-training the model by executing a fine-tuning learning procedure using newly available annotated images without re-using any images from the initial annotated image dataset; selecting a plurality of most representative samples related to images of the initial annotated image dataset and the newly available annotated images by executing an active selection procedure based on which of a collection of un-annotated images exhibit either a greatest uncertainty or a greatest entropy; extracting generic image features; updating the model using the generic image features extracted; and outputting the model as the trained deep-learning model for use in analyzing a patient medical image. Other related embodiments are disclosed.
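The active selection step above, choosing un-annotated images by greatest uncertainty or greatest entropy, can be sketched directly from softmax outputs. The function names and the toy pool below are illustrative assumptions.

```python
import numpy as np

def entropy(probs, eps=1e-12):
    """Per-sample Shannon entropy of (N, C) softmax outputs."""
    p = np.clip(probs, eps, 1.0)
    return -np.sum(p * np.log(p), axis=-1)

def uncertainty(probs):
    """Per-sample uncertainty as 1 minus the top class probability."""
    return 1.0 - probs.max(axis=-1)

def active_select(pool_probs, k, criterion="entropy"):
    """Pick the k un-annotated samples the current model is least sure
    about, under either criterion named in the abstract."""
    score = entropy(pool_probs) if criterion == "entropy" else uncertainty(pool_probs)
    return np.argsort(score)[-k:][::-1].tolist()

pool = np.array([[0.98, 0.01, 0.01],    # confident
                 [0.34, 0.33, 0.33],    # near-uniform -> selected
                 [0.70, 0.20, 0.10]])
print(active_select(pool, 1))           # -> [1]
```

Both criteria favor near-uniform predictions, which is why the second sample is selected under either one.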
-
Publication No.: US20220309811A1
Publication Date: 2022-09-29
Application No.: US17676134
Filing Date: 2022-02-19
Inventor: Fatemeh Haghighi , Mohammad Reza Hosseinzadeh Taher , Zongwei Zhou , Jianming Liang
IPC: G06V20/70 , G06V10/764 , G06V10/82 , G06V10/774 , G06V10/26 , G06V10/74
Abstract: Described herein are means for the generation of Transferable Visual Word (TransVW) models through self-supervised learning in the absence of manual labeling, in which the trained TransVW models are then utilized for the processing of medical imaging. For instance, an exemplary system is specially configured to perform self-supervised learning for an AI model in the absence of manually labeled input, by performing the following operations: receiving medical images as input; performing a self-discovery operation of anatomical patterns by building a set of the anatomical patterns from the medical images received at the system, performing a self-classification operation of the anatomical patterns; performing a self-restoration operation of the anatomical patterns within cropped and transformed 2D patches or 3D cubes derived from the medical images received at the system by recovering original anatomical patterns to learn different sets of visual representation; and providing a semantics-enriched pre-trained AI model having a trained encoder-decoder structure with skip connections in between based on the performance of the self-discovery operation, the self-classification operation, and the self-restoration operation. Other related embodiments are disclosed.
-
Publication No.: US20210265043A1
Publication Date: 2021-08-26
Application No.: US17180575
Filing Date: 2021-02-19
Inventor: Fatemeh Haghighi , Mohammad Reza Hosseinzadeh Taher , Zongwei Zhou , Jianming Liang
Abstract: Described herein are means for learning semantics-enriched representations via self-discovery, self-classification, and self-restoration in the context of medical imaging. Embodiments include the training of deep models to learn semantically enriched visual representation by self-discovery, self-classification, and self-restoration of the anatomy underneath medical images, resulting in a collection of semantics-enriched pre-trained models, called Semantic Genesis. Other related embodiments are disclosed.
-
Publication No.: US12236592B2
Publication Date: 2025-02-25
Application No.: US17944881
Filing Date: 2022-09-14
Inventor: Nahid Ul Islam , Shiv Gehlot , Zongwei Zhou , Jianming Liang
Abstract: Described herein are means for systematically determining an optimal approach for the computer-aided diagnosis of a pulmonary embolism, in the context of processing medical imaging. According to a particular embodiment, there is a system specially configured for diagnosing a Pulmonary Embolism (PE) within new medical images which form no part of the dataset upon which the AI model was trained. Such a system executes operations for receiving a plurality of medical images and processing the plurality of medical images by executing an image-level classification algorithm to determine the presence or absence of a Pulmonary Embolism (PE) within each image via operations including: pre-training an AI model through supervised learning to identify ground truth; fine-tuning the pre-trained AI model specifically for PE diagnosis to generate a pre-trained PE diagnosis and detection AI model; wherein the pre-trained AI model is based on a modified CNN architecture having introduced therein a squeeze and excitation (SE) block enabling the CNN architecture to extract informative features from the plurality of medical images by fusing spatial and channel-wise information; applying the pre-trained PE diagnosis and detection AI model to new medical images to render a prediction as to the presence or absence of the Pulmonary Embolism within the new medical images; and outputting the prediction as a PE diagnosis for a medical patient.
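The squeeze-and-excitation (SE) block introduced into the CNN architecture above can be sketched in plain numpy. The reduction ratio `r` and the weight matrices are hypothetical; a real implementation would live inside the network's layers with learned weights.

```python
import numpy as np

def se_block(feature_map, w1, w2):
    """Squeeze-and-Excitation on a (C, H, W) feature map.
    Squeeze: global average pool per channel; Excitation: a two-layer
    bottleneck (ReLU then sigmoid) yields per-channel weights that
    rescale the map, fusing spatial and channel-wise information."""
    squeezed = feature_map.mean(axis=(1, 2))          # (C,)  spatial squeeze
    hidden = np.maximum(squeezed @ w1, 0.0)           # ReLU bottleneck
    scale = 1.0 / (1.0 + np.exp(-(hidden @ w2)))      # sigmoid gate, (C,)
    return feature_map * scale[:, None, None]         # channel-wise rescale

rng = np.random.default_rng(0)
C, r = 8, 2                                           # channels, reduction ratio (assumed)
fmap = rng.random((C, 4, 4))
out = se_block(fmap,
               rng.standard_normal((C, C // r)),      # hypothetical weights
               rng.standard_normal((C // r, C)))
print(out.shape)   # -> (8, 4, 4)
```

The block leaves the feature map's shape unchanged, so it can be dropped into an existing CNN between convolutional stages.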
-
Publication No.: US11915417B2
Publication Date: 2024-02-27
Application No.: US17240271
Filing Date: 2021-04-26
Inventor: Ruibin Feng , Zongwei Zhou , Jianming Liang
CPC classification number: G06T7/0012 , G06F18/2155 , G06T7/174 , G06T15/08 , G06T17/10 , G06V10/82 , G06T2207/20081 , G06T2207/20084 , G06T2207/20132 , G06T2207/30016 , G06T2207/30056 , G06V2201/031
Abstract: Described herein are means for training a deep model to learn contrastive representations embedded within part-whole semantics via a self-supervised learning framework, in which the trained deep models are then utilized for the processing of medical imaging. For instance, an exemplary system is specifically configured for performing a random cropping operation to crop a 3D cube from each of a plurality of medical images received at the system as input; performing a resize operation of the cropped 3D cubes; performing an image reconstruction operation of the resized and cropped 3D cubes to predict the resized whole image represented by the original medical images received; and generating a reconstructed image which is analyzed for reconstruction loss against the original image representing a known ground truth image to the reconstruction loss function. Other related embodiments are disclosed.
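The crop-resize-reconstruct pipeline above can be sketched as follows. The nearest-neighbour resize, the cube size, and the identity stand-in for the model are all illustrative assumptions; only the part-to-whole pairing and the reconstruction loss follow the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_crop_3d(volume, size):
    """Crop a random 3D cube of edge length `size` from the volume."""
    z, y, x = (rng.integers(0, d - size + 1) for d in volume.shape)
    return volume[z:z+size, y:y+size, x:x+size]

def resize_nearest_3d(volume, shape):
    """Nearest-neighbour resize, standing in for the resize operation."""
    idx = [np.floor(np.linspace(0, s - 1e-9, t)).astype(int)
           for s, t in zip(volume.shape, shape)]
    return volume[np.ix_(*idx)]

scan = rng.random((32, 32, 32))                      # stand-in medical volume
cube = random_crop_3d(scan, 16)                      # the cropped part
part_input = resize_nearest_3d(cube, scan.shape)     # resized cropped cube

# The model would predict the resized whole image from the part; an
# identity stand-in is scored against the whole scan as ground truth.
reconstruction = part_input
loss = np.mean((reconstruction - scan) ** 2)         # reconstruction loss
print(loss >= 0.0)   # -> True
```

Minimizing this loss is what forces the encoder-decoder to embed part-whole semantics: the network must infer the whole from any part.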
-
Publication No.: US20220300769A1
Publication Date: 2022-09-22
Application No.: US17698805
Filing Date: 2022-03-18
Inventor: Zongwei Zhou , Jae Shin , Jianming Liang
IPC: G06K9/62 , G16H30/40 , G06V10/82 , G06V10/764 , G06T7/00
Abstract: Described herein are systems, methods, and apparatuses for actively and continually fine-tuning convolutional neural networks to reduce annotation requirements, in which the trained networks are then utilized in the context of medical imaging. The success of convolutional neural networks (CNNs) in computer vision is largely attributable to the availability of massive annotated datasets, such as ImageNet and Places. However, it is tedious, laborious, and time consuming to create large annotated datasets, and demands costly, specialty-oriented skills. A novel method to naturally integrate active learning and transfer learning (fine-tuning) into a single framework is presented to dramatically reduce annotation cost, starting with a pre-trained CNN to seek “worthy” samples for annotation and gradually enhances the (fine-tuned) CNN via continual fine-tuning. The described method was evaluated using three distinct medical imaging applications, demonstrating that it can reduce annotation efforts by at least half compared with random selection.
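The single framework described above, starting from a pre-trained model, repeatedly selecting "worthy" samples and continually fine-tuning on them, can be sketched with a toy logistic model standing in for the CNN. The oracle, the uncertainty rule, and all hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_proba(w, x):
    """Logistic model standing in for the (fine-tuned) CNN."""
    return 1.0 / (1.0 + np.exp(-(x @ w)))

def fine_tune(w, x, y, lr=0.5, steps=20):
    """Continual fine-tuning: gradient steps from the CURRENT weights,
    so each round builds on the previously fine-tuned model."""
    for _ in range(steps):
        p = predict_proba(w, x)
        w = w - lr * x.T @ (p - y) / len(y)
    return w

pool_x = rng.standard_normal((60, 3))                # un-annotated pool
oracle_y = (pool_x[:, 0] > 0).astype(float)          # hypothetical annotator
w = rng.standard_normal(3) * 0.01                    # "pre-trained" weights
unlabeled = set(range(len(pool_x)))

for _ in range(3):                                   # active-learning rounds
    idx = np.array(sorted(unlabeled))
    p = predict_proba(w, pool_x[idx])
    worthy = idx[np.argsort(np.abs(p - 0.5))[:10]]   # most uncertain samples
    unlabeled -= set(worthy.tolist())                # query the oracle, then
    w = fine_tune(w, pool_x[worthy], oracle_y[worthy])  # fine-tune on them

acc = np.mean((predict_proba(w, pool_x) > 0.5) == oracle_y)
print(acc > 0.5)
```

Only 30 of 60 samples are ever labelled here, which mirrors the abstract's point: querying uncertain samples rather than random ones cuts annotation effort while the model keeps improving.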
-
Publication No.: US20220262105A1
Publication Date: 2022-08-18
Application No.: US17625313
Filing Date: 2020-07-17
Applicant: Zongwei ZHOU , Vatsal SODHA , Md Mahfuzur RAHMAN SIDDIQUEE , Ruibin FENG , Nima TAJBAKHSH , Jianming LIANG , Arizona Board of Regents on behalf of Arizona State University
Inventor: Zongwei Zhou , Vatsal Sodha , Md Mahfuzur Rahman Siddiquee , Ruibin Feng , Nima Tajbakhsh , Jianming Liang
IPC: G06V10/774 , G06V10/82 , G06V10/98 , G06V10/776
Abstract: Described herein are means for generating source models for transfer learning to application specific models used in the processing of medical imaging. In some embodiments, the method comprises: identifying a group of training samples, wherein each training sample in the group of training samples includes an image; for each training sample in the group of training samples: identifying an original patch of the image corresponding to the training sample; identifying one or more transformations to be applied to the original patch; generating a transformed patch by applying the one or more transformations to the identified patch; and training an encoder-decoder network using a group of transformed patches corresponding to the group of training samples, wherein the encoder-decoder network is trained to generate an approximation of the original patch from a corresponding transformed patch, and wherein the encoder-decoder network is trained to minimize a loss function that indicates a difference between the generated approximation of the original patch and the original patch. The source models significantly enhance the transfer learning performance for many medical imaging tasks including, but not limited to, disease/organ detection, classification, and segmentation. Other related embodiments are disclosed.
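The transform-then-restore training pair described above can be sketched as follows. The two example transformations (an intensity distortion and a local pixel shuffle) are hypothetical illustrations of the "one or more transformations"; the identity stand-in for the encoder-decoder output only serves to show how the loss pairs transformed input with original target.

```python
import numpy as np

rng = np.random.default_rng(0)

def nonlinear_intensity(patch):
    """Monotonic intensity distortion (illustrative transformation)."""
    return patch ** 2.0

def local_shuffle(patch, block=4):
    """Shuffle one random block of pixels (illustrative transformation)."""
    out = patch.copy()
    i = rng.integers(0, patch.shape[0] - block)
    j = rng.integers(0, patch.shape[1] - block)
    blk = out[i:i+block, j:j+block].ravel()      # copy of the block
    rng.shuffle(blk)
    out[i:i+block, j:j+block] = blk.reshape(block, block)
    return out

original = rng.random((16, 16))                  # original patch
transformed = local_shuffle(nonlinear_intensity(original))

# Training pair: the encoder-decoder sees `transformed` and is trained to
# output an approximation of `original`, minimizing this difference.
approximation = transformed                      # stand-in for network output
loss = np.mean((approximation - original) ** 2)
print(loss >= 0.0)   # -> True
```

Because recovering the original from such distortions requires learning anatomy-aware features, the resulting encoder transfers well to downstream detection, classification, and segmentation tasks.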
-
Publication No.: US20210343014A1
Publication Date: 2021-11-04
Application No.: US17246032
Filing Date: 2021-04-30
Inventor: Fatemeh Haghighi , Mohammad Reza Hosseinzadeh Taher , Zongwei Zhou , Jianming Liang
Abstract: Described herein are means for the generation of semantic genesis models through self-supervised learning in the absence of manual labeling, in which the trained semantic genesis models are then utilized for the processing of medical imaging. For instance, an exemplary system is specially configured with means for performing a self-discovery operation which crops 2D patches or crops 3D cubes from similar patient scans received at the system as input; means for transforming each anatomical pattern represented within the cropped 2D patches or the cropped 3D cubes to generate transformed 2D anatomical patterns or transformed 3D anatomical patterns; means for performing a self-classification operation of the transformed anatomical patterns by formulating a C-way multi-class classification task for representation learning; means for performing a self-restoration operation by recovering original anatomical patterns from the transformed 2D patches or transformed 3D cubes having transformed anatomical patterns embedded therein to learn different sets of visual representation; and means for providing a semantics-enriched pre-trained AI model having a trained encoder-decoder structure with skip connections in between based on the performance of the self-discovery operation, the self-classification operation, and the self-restoration operation. Other related embodiments are disclosed.