Dilated Fully Convolutional Network for 2D/3D Medical Image Registration

    Publication number: US20210012514A1

    Publication date: 2021-01-14

    Application number: US17030955

    Application date: 2020-09-24

    Abstract: A method and system for 2D/3D medical image registration. A digitally reconstructed radiograph (DRR) is rendered from a 3D medical volume based on current transformation parameters. A trained multi-agent deep neural network (DNN) is applied to a plurality of regions of interest (ROIs) in the DRR and a 2D medical image. The trained multi-agent DNN applies a respective agent to each ROI to calculate a respective set of action-values from each ROI. A maximum action-value and a proposed action associated with the maximum action-value are determined for each agent. A subset of agents is selected based on the maximum action-values determined for the agents. The proposed actions determined for the selected subset of agents are aggregated to determine an optimal adjustment to the transformation parameters, and the transformation parameters are adjusted by the determined optimal adjustment. The 3D medical volume is registered to the 2D medical image using final transformation parameters resulting from a plurality of iterations.
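
    A minimal Python sketch of the iterative loop described in the abstract (per-ROI agents, selection of the most confident agents by maximum action-value, and aggregation of their proposed actions) is given below. The functions render_drr, agent_q_values, and action_to_delta are hypothetical placeholders for illustration only, not the trained networks or DRR renderer of the application.

    import numpy as np

    rng = np.random.default_rng(0)

    def render_drr(volume, params):
        """Placeholder DRR renderer; a real system would project the 3D volume
        under the current transformation parameters."""
        return rng.random((128, 128))

    def agent_q_values(drr, image_2d, roi, n_actions=12):
        """Placeholder per-ROI agent; a trained DNN would score each candidate action."""
        return rng.random(n_actions)

    def action_to_delta(action_idx, step=0.5, n_params=6):
        """Map a discrete action index to a signed step on one transformation parameter."""
        delta = np.zeros(n_params)
        delta[action_idx // 2] = step if action_idx % 2 == 0 else -step
        return delta

    def register(volume, image_2d, rois, params, n_iters=100, top_k=4):
        for _ in range(n_iters):
            drr = render_drr(volume, params)
            max_vals, proposals = [], []
            for roi in rois:
                q = agent_q_values(drr, image_2d, roi)
                best = int(np.argmax(q))
                max_vals.append(q[best])                 # maximum action-value for this agent
                proposals.append(action_to_delta(best))  # its proposed action
            keep = np.argsort(max_vals)[-top_k:]         # select the most confident agents
            params = params + np.mean([proposals[i] for i in keep], axis=0)
        return params

    final_params = register(volume=None, image_2d=None,
                            rois=[(0, 0, 64, 64), (64, 64, 128, 128)],
                            params=np.zeros(6))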

    CROSS DOMAIN MEDICAL IMAGE SEGMENTATION
    Invention application

    Publication number: US20190259153A1

    Publication date: 2019-08-22

    Application number: US16271130

    Application date: 2019-02-08

    Abstract: Systems and methods are described for medical image segmentation. A medical image of a patient in a first domain is received. The medical image comprises one or more anatomical structures. A synthesized image in a second domain is generated from the medical image of the patient in the first domain using a generator of a task-driven generative adversarial network. The one or more anatomical structures are segmented from the synthesized image in the second domain using a dense image-to-image network of the task-driven generative adversarial network. Results of the segmentation of the one or more anatomical structures from the synthesized image in the second domain represent a segmentation of the one or more anatomical structures in the medical image of the patient in the first domain.
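
    A minimal Python/PyTorch sketch of the translate-then-segment inference path described in the abstract is given below. Generator and DenseSegmenter are tiny placeholder modules standing in for the task-driven GAN generator and the dense image-to-image network; they are illustrative assumptions, not the networks of the application.

    import torch
    import torch.nn as nn

    class Generator(nn.Module):
        """Placeholder domain-translation generator; in a task-driven GAN it would be
        trained adversarially together with the segmentation task."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                     nn.Conv2d(16, 1, 3, padding=1))
        def forward(self, x):
            return self.net(x)

    class DenseSegmenter(nn.Module):
        """Placeholder stand-in for a dense image-to-image segmentation network."""
        def __init__(self, n_classes=2):
            super().__init__()
            self.net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                     nn.Conv2d(16, n_classes, 1))
        def forward(self, x):
            return self.net(x)

    generator, segmenter = Generator(), DenseSegmenter()

    def segment_cross_domain(image_first_domain):
        """Translate the image to the second domain, segment it there, and interpret
        the mask as the segmentation of the original first-domain image."""
        with torch.no_grad():
            synthetic = generator(image_first_domain)   # first domain -> second domain
            logits = segmenter(synthetic)               # segmentation in the second domain
        return logits.argmax(dim=1)                     # per-pixel labels

    mask = segment_cross_domain(torch.randn(1, 1, 64, 64))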

    CROSS DOMAIN SEGMENTATION WITH UNCERTAINTY-GUIDED CURRICULUM LEARNING

    Publication number: US20240177458A1

    Publication date: 2024-05-30

    Application number: US18058884

    Application date: 2022-11-28

    IPC classification: G06V10/774 G06T7/00 G06V10/26

    Abstract: Systems and methods for training a machine learning based segmentation network are provided. A set of medical images in a first modality, each depicting an anatomical object, is received. For each respective medical image of the set of medical images, a synthetic image depicting the anatomical object in a second modality is generated based on the respective medical image. One or more augmented images are generated based on the synthetic image. One or more segmentations of the anatomical object are performed from the one or more augmented images using a machine learning based reference network. An uncertainty associated with segmenting the anatomical object from the respective medical image is computed based on results of the one or more segmentations. Whether the respective medical image is suitable for training a machine learning based segmentation network is determined based on the uncertainty. The machine learning based segmentation network is trained based on 1) the suitable medical images of the set of medical images and 2) annotations of the anatomical object determined using a machine learning based teacher network.
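
    A minimal Python sketch of the uncertainty-guided selection step described in the abstract is given below. The modality translation, augmentation, reference network, and teacher network are placeholder functions, and the uncertainty measure (per-pixel variance of segmentations across augmentations) is an assumed stand-in rather than the measure used in the application.

    import numpy as np

    rng = np.random.default_rng(0)

    def to_second_modality(image):
        """Placeholder modality translation (first modality -> synthetic second modality)."""
        return image + 0.1 * rng.standard_normal(image.shape)

    def augment(image, n_aug=4):
        """Placeholder augmentations (here: intensity noise)."""
        return [image + 0.05 * rng.standard_normal(image.shape) for _ in range(n_aug)]

    def reference_segment(image):
        """Placeholder reference network; a simple threshold stands in for a trained model."""
        return (image > image.mean()).astype(np.uint8)

    def teacher_segment(image):
        """Placeholder teacher network used to produce pseudo-annotations."""
        return (image > np.median(image)).astype(np.uint8)

    def uncertainty(segmentations):
        """Disagreement across the augmented segmentations: mean per-pixel variance."""
        return float(np.mean(np.var(np.stack(segmentations), axis=0)))

    def build_training_set(images, max_uncertainty=0.05):
        """Keep only images whose uncertainty is low enough, paired with teacher pseudo-labels."""
        training_set = []
        for image in images:
            synthetic = to_second_modality(image)
            segmentations = [reference_segment(a) for a in augment(synthetic)]
            if uncertainty(segmentations) <= max_uncertainty:
                training_set.append((image, teacher_segment(image)))
        return training_set

    training_set = build_training_set([rng.random((64, 64)) for _ in range(8)])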

    SEMI-SUPERVISED TRACKING IN MEDICAL IMAGES WITH CYCLE TRACKING

    Publication number: US20230316544A1

    Publication date: 2023-10-05

    Application number: US17657366

    Application date: 2022-03-31

    IPC classification: G06T7/246 G06T7/73

    Abstract: Systems and methods for tracking the location of an object of interest through a sequence of medical images are provided. First and second input medical images of a patient are received. The first input medical image comprises an annotation of the location of an object of interest. Features are extracted from the first and second input medical images. The location of the object of interest in the second input medical image is determined using a machine learning based location predictor network, based on the annotation of the location of the object of interest in the first input medical image and the features extracted from the first and second input medical images. The location of the object of interest in the second input medical image is output. The machine learning based location predictor network is trained based on a comparison between 1) locations of a particular object determined during a forward tracking of the particular object through a sequence of training images and 2) locations of the particular object determined during a backward tracking through the same sequence.
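
    A minimal Python sketch of the forward/backward cycle-tracking comparison used for training, as described in the abstract, is given below. predict_location is a hypothetical placeholder for the location predictor network, and the squared-distance disagreement is an assumed loss, not necessarily the one used in the application.

    import numpy as np

    rng = np.random.default_rng(0)

    def predict_location(prev_image, next_image, prev_location):
        """Placeholder location-predictor network: given two frames and the location in
        the first, return the predicted location in the second."""
        return prev_location + 0.5 * rng.standard_normal(2)

    def cycle_tracking_loss(frames, initial_location):
        """Track forward through the frames, then backward from the final prediction;
        the loss is the disagreement between the two tracks at each frame."""
        forward = [np.asarray(initial_location, dtype=float)]
        for prev_frame, next_frame in zip(frames[:-1], frames[1:]):
            forward.append(predict_location(prev_frame, next_frame, forward[-1]))

        reversed_frames = frames[::-1]
        backward = [forward[-1]]
        for prev_frame, next_frame in zip(reversed_frames[:-1], reversed_frames[1:]):
            backward.append(predict_location(prev_frame, next_frame, backward[-1]))
        backward = backward[::-1]  # re-align with the forward ordering

        return float(np.mean([np.sum((f - b) ** 2) for f, b in zip(forward, backward)]))

    frames = [rng.random((64, 64)) for _ in range(5)]
    loss = cycle_tracking_loss(frames, initial_location=(32.0, 32.0))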

    RISK MANAGEMENT FOR ROBOTIC CATHETER NAVIGATION SYSTEMS

    Publication number: US20230165638A1

    Publication date: 2023-06-01

    Application number: US17456681

    Application date: 2021-11-29

    IPC classification: A61B34/20 G06T7/11

    Abstract: Systems and methods for navigating a catheter in a patient using a robotic navigation system with risk management are provided. An input medical image of a patient is received. A trajectory for navigating a catheter from a current position to a target position in the patient is determined based on the input medical image using a trained segmentation network. One or more actions of a robotic navigation system for navigating the catheter from the current position towards the target position, and a confidence level associated with the one or more actions, are determined by a trained AI (artificial intelligence) agent based on the determined trajectory and a current view of the catheter. In response to the confidence level satisfying a threshold, the one or more actions are evaluated based on a view of the catheter when navigated according to the one or more actions. Based on the evaluation, the catheter is navigated from the current position towards the target position using the robotic navigation system according to the one or more actions.
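
    A minimal Python sketch of the confidence-gated action evaluation described in the abstract is given below. The planner, agent, look-ahead, and risk check are hypothetical placeholders; the sketch only illustrates the control flow of checking the confidence level and evaluating the proposed actions before the robotic system executes them.

    import numpy as np

    rng = np.random.default_rng(0)

    def plan_trajectory(image, current, target):
        """Placeholder for the segmentation-based planner: straight-line waypoints."""
        return np.linspace(current, target, num=10)

    def propose_actions(trajectory, view):
        """Placeholder AI agent: returns proposed actions and a confidence level."""
        return [("advance", 2.0), ("rotate", 5.0)], float(rng.random())

    def lookahead_view(actions, view):
        """Placeholder look-ahead: the view of the catheter if the actions were executed."""
        return view

    def actions_pass_evaluation(view, trajectory):
        """Placeholder risk check on the look-ahead view (e.g. distance to the vessel wall)."""
        return True

    def navigate_step(image, view, current, target, conf_threshold=0.8):
        trajectory = plan_trajectory(image, current, target)
        actions, confidence = propose_actions(trajectory, view)
        if confidence < conf_threshold:
            return None  # confidence too low: fall back (e.g. to the operator) instead of acting
        if not actions_pass_evaluation(lookahead_view(actions, view), trajectory):
            return None  # evaluation failed: do not execute the proposed actions
        return actions   # hand the actions to the robotic navigation system

    step = navigate_step(image=np.zeros((64, 64)), view=np.zeros((64, 64)),
                         current=np.array([0.0, 0.0]), target=np.array([10.0, 10.0]))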