Abstract:
A mechanism for automatically generating and ranking M-mode lines used to generate or define M-mode data for assessing fetal heart activity, e.g. determining a fetal heart rate. A region of interest containing a fetal heart is identified in a sequence of ultrasound images. The region of interest is used to define the position of each of a plurality of M-mode lines, e.g. anatomical M-mode lines. A quality measure of each M-mode line is determined based on the M-mode data generated for that line, and the quality measures are then used to rank the M-mode lines.
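As an illustration of the ranking step, the Python sketch below places candidate M-mode lines across a fetal-heart region of interest, builds M-mode data for each line from the image sequence, and ranks the lines by a quality measure. The fan-shaped line placement and the periodicity-based quality measure are assumptions for illustration, not the claimed implementation.

import numpy as np


def sample_m_mode(frames, p0, p1, n_samples=64):
    """Build M-mode data: image intensity along the line p0 -> p1 (row, col)
    for every frame in the sequence; result shape is (n_samples, n_frames)."""
    h, w = frames[0].shape
    ts = np.linspace(0.0, 1.0, n_samples)
    ys = np.clip((p0[0] + ts * (p1[0] - p0[0])).astype(int), 0, h - 1)
    xs = np.clip((p0[1] + ts * (p1[1] - p0[1])).astype(int), 0, w - 1)
    return np.stack([f[ys, xs] for f in frames], axis=1)


def quality(m_mode):
    """Assumed quality measure: strength of temporal periodicity, i.e. the
    largest non-zero-lag autocorrelation value averaged over line samples."""
    sig = m_mode - m_mode.mean(axis=1, keepdims=True)
    scores = []
    for row in sig:
        ac = np.correlate(row, row, mode="full")[len(row) - 1:]
        if ac[0] > 0:
            scores.append(ac[1:].max() / ac[0])
    return float(np.mean(scores)) if scores else 0.0


def rank_m_mode_lines(frames, roi, n_lines=8):
    """roi = (y0, x0, y1, x1) around the fetal heart. Candidate lines fan out
    through the ROI centre; they are returned sorted by descending quality."""
    y0, x0, y1, x1 = roi
    cy, cx = (y0 + y1) / 2.0, (x0 + x1) / 2.0
    r = max(y1 - y0, x1 - x0) / 2.0
    ranked = []
    for k in range(n_lines):
        angle = np.pi * k / n_lines
        dy, dx = r * np.sin(angle), r * np.cos(angle)
        p0, p1 = (cy - dy, cx - dx), (cy + dy, cx + dx)
        ranked.append((quality(sample_m_mode(frames, p0, p1)), (p0, p1)))
    return sorted(ranked, key=lambda item: item[0], reverse=True)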
Abstract:
A computer-implemented method for visualization of an elongated anatomical structure (20), for example a fetal spine, using ultrasound is provided. The method comprises the steps of: receiving a plurality of 3D ultrasound image volumes, each image volume depicting at least a portion of an elongated anatomical structure (20); on each 3D ultrasound image volume, automatically or semi-automatically fitting a parametric curve (30) to the depicted portion of the elongated anatomical structure, the parametric curve being defined by curve parameters; reformatting each 3D ultrasound image volume by applying a transformation which straightens the parametric curve along at least one axis, so as to generate a plurality of reformatted image volumes and reformatted parametric curves (32, 34); registering the reformatted image volumes with one another by determining the joining point of their respective parametric curves; and fusing the reformatted image volumes with one another to yield a fused image depicting the whole elongated anatomical structure or a larger portion thereof than the 3D ultrasound image volumes.
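The following Python sketch illustrates the curve-fitting and reformatting steps, using a polynomial parametric curve and resampling with scipy's map_coordinates. The curve model and the fixed-plane cross-section sampling are simplifying assumptions, not the claimed method.

import numpy as np
from scipy.ndimage import map_coordinates


def fit_parametric_curve(points, degree=3):
    """Fit z(t), y(t), x(t) polynomials to points ordered along the structure;
    the returned coefficient arrays are the curve parameters."""
    pts = np.asarray(points, dtype=float)
    t = np.linspace(0.0, 1.0, len(pts))
    return [np.polyfit(t, pts[:, d], degree) for d in range(3)]


def straighten_volume(volume, curve_params, length=128, half_width=32):
    """Reformat `volume` so the fitted curve runs along the first axis of the
    output. Cross-sections are taken in fixed y/x planes around the curve
    centre line, which is a simplification (no local normal-plane frame)."""
    t = np.linspace(0.0, 1.0, length)
    centre = np.stack([np.polyval(c, t) for c in curve_params])  # (3, length)
    offsets = np.arange(-half_width, half_width, dtype=float)

    out = np.empty((length, offsets.size, offsets.size), dtype=np.float32)
    for i in range(length):
        yy, xx = np.meshgrid(centre[1, i] + offsets,
                             centre[2, i] + offsets, indexing="ij")
        zz = np.full_like(yy, centre[0, i])
        out[i] = map_coordinates(volume, [zz, yy, xx], order=1, mode="nearest")
    return out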
Abstract:
The present invention relates to a device (10) for detecting a misuse of a medical imaging system (20), comprising a data interface (12) for acquiring medical image data (24) and audit log data (26) from the medical imaging system (20); a processing unit (14) which is configured to analyse the medical image data (24) to determine whether or not a part of a fetus is imaged in the medical image data (24), to compare the medical image data (24) and the audit log data (26) with each other, and to determine based on said comparison whether there is a mismatch between the medical image data (24) and the audit log data (26); and a feedback unit (16) which is configured to generate a misuse alert signal if a mismatch is detected by the processing unit (14).
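A minimal sketch of the comparison step, assuming a fetus-detection result from the image analysis and a hypothetical audit-log field stating whether a fetal examination was declared:

from dataclasses import dataclass


@dataclass
class AuditLogEntry:
    """Hypothetical audit-log fields acquired via the data interface (12)."""
    exam_type: str             # e.g. "abdominal", "obstetric"
    fetal_exam_declared: bool  # whether a fetal/obstetric exam was logged


def misuse_alert(fetus_imaged: bool, log: AuditLogEntry) -> bool:
    """Compare the image-analysis result with the audit log; return True
    (i.e. the feedback unit (16) should raise a misuse alert) on mismatch,
    e.g. a fetus is imaged although no fetal examination was declared."""
    return fetus_imaged != log.fetal_exam_declared


# Example with hypothetical values:
# misuse_alert(True, AuditLogEntry(exam_type="abdominal",
#                                  fetal_exam_declared=False))  # -> True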
Abstract:
The present invention relates to a device (2) and a method (100) for determining at least one final two-dimensional image or slice for visualizing an object of interest in a three-dimensional ultrasound volume. The method (100) comprises the steps: a) providing (101) a three-dimensional image of a body region of a patient body, wherein an applicator configured for fixating at least one radiation source is inserted into the body region; b) providing (102) an initial direction, in particular by randomly determining the initial direction within the three-dimensional image; c) repeating (103) the following sequence of steps s1) to s4): s1) determining (104), via a processing unit, a set-direction within the three-dimensional image based on the initial direction for the first sequence or based on a probability map determined during a previous sequence; s2) extracting (105), via the processing unit, an image-set of two-dimensional images from the three-dimensional image, such that the two-dimensional images of the image-set are arranged coaxially and consecutively along the set-direction; s3) applying (106), via the processing unit, a pre-trained applicator classification method to each of the two-dimensional images of the image-set, resulting in a probability score for each two-dimensional image of the image-set indicating a probability of the applicator being depicted, in particular fully depicted, in a cross-sectional view in the respective image; and s4) determining (107), via the processing unit, a probability map representing the probability scores of the two-dimensional images of the image-set with respect to the set-direction; wherein the method comprises the further step: d) determining (108), via the processing unit and after finishing the last sequence, the two-dimensional image associated with the highest probability score, in particular from the image-set determined during the last sequence, as the final two-dimensional image. The invention provides an efficient way to ensure that the ultrasound volume contains the required clinical information by providing the necessary scan planes containing the object of interest, e.g. the applicator (6), in a three-dimensional ultrasound volume.
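The iterative search of steps c) and d) can be sketched as follows. The slice-extraction and classifier callables stand in for the pre-trained components described above, and the rule for deriving the next set-direction from the probability map is an illustrative heuristic, not the claimed one.

import numpy as np


def find_applicator_slice(volume, extract_slices, classifier,
                          n_iterations=5, rng=None):
    """volume: 3D ultrasound image. extract_slices(volume, direction) -> list
    of coaxial 2D slices stacked along `direction`. classifier(image) ->
    probability that the applicator is (fully) depicted in cross-section."""
    rng = rng if rng is not None else np.random.default_rng()
    direction = rng.normal(size=3)                    # b) random initial direction
    direction /= np.linalg.norm(direction)

    best_slice, best_score = None, -np.inf
    for _ in range(n_iterations):                     # c) repeat s1) to s4)
        slices = extract_slices(volume, direction)            # s2)
        prob_map = np.array([classifier(s) for s in slices])  # s3) and s4)

        i_best = int(prob_map.argmax())
        if prob_map[i_best] > best_score:
            best_score, best_slice = float(prob_map[i_best]), slices[i_best]

        # s1) for the next pass: perturb the direction less the more peaked
        # the probability map is (illustrative heuristic, not the claimed rule).
        sharpness = float(prob_map.max() - prob_map.mean())
        direction = direction + (1.0 - sharpness) * rng.normal(scale=0.1, size=3)
        direction /= np.linalg.norm(direction)

    return best_slice, best_score                     # d) highest-scoring slice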
Abstract:
The invention provides for a method of obtaining a composite 3D ultrasound image of a region of interest. The method includes obtaining preliminary ultrasound data from a region of interest of a subject and identifying an anatomical feature within the region of interest based on the preliminary ultrasound data. A first imaging position and one or more additional imaging positions are then determined based on the anatomical feature. A first 3D ultrasound image is obtained from the first imaging position and one or more additional 3D ultrasound images are obtained from the one or more additional imaging positions, wherein a portion of the first 3D ultrasound image overlaps a portion of the one or more additional 3D ultrasound images, thereby forming an overlapping portion comprising the anatomical feature. Spatial registration is performed between the first 3D ultrasound image and the one or more additional 3D ultrasound images based on the anatomical feature, and the 3D ultrasound images are then blended based on the spatial registration, thereby generating a composite 3D ultrasound image.
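A minimal sketch of the registration-and-blending step, assuming a purely translational registration derived from the voxel position of the shared anatomical feature in each volume; this is an illustration, not the claimed implementation.

import numpy as np


def blend_volumes(vol_first, vol_extra, feat_first, feat_extra):
    """vol_first / vol_extra: 3D images (z, y, x) with the same voxel size.
    feat_first / feat_extra: voxel coordinates of the shared anatomical
    feature in each volume, used for the translation-only registration.
    The sketch assumes the resulting offset is non-negative along every axis."""
    offset = np.asarray(feat_first, dtype=int) - np.asarray(feat_extra, dtype=int)
    shape = np.maximum(np.asarray(vol_first.shape),
                       offset + np.asarray(vol_extra.shape))

    acc = np.zeros(shape, dtype=np.float32)
    weight = np.zeros(shape, dtype=np.float32)

    region_a = tuple(slice(0, s) for s in vol_first.shape)
    acc[region_a] += vol_first
    weight[region_a] += 1.0

    region_b = tuple(slice(o, o + s) for o, s in zip(offset, vol_extra.shape))
    acc[region_b] += vol_extra
    weight[region_b] += 1.0

    # Average in the overlapping portion, pass the single volume through elsewhere.
    return acc / np.maximum(weight, 1.0)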
Abstract:
The invention relates to a method and an apparatus for cervical image analysis, wherein a transformation zone is identified in an acetic acid image by registering it with its Lugol's iodine counterpart. Regions in the transformation zone which show significant changes in whiteness are then identified as aceto-white regions and registered with the corresponding Lugol's iodine image, and it is determined whether the identified regions in the Lugol's iodine image are iodine negative or positive. Based thereon, each aceto-white region can be categorized as a metaplasia, inflammation or premalignant lesion region.
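The final categorization can be sketched as a simple decision rule; the whiteness threshold and the mapping from iodine response to category below are illustrative assumptions, not the claimed criteria.

def categorize_acetowhite_region(whiteness_change: float,
                                 iodine_negative: bool,
                                 whiteness_threshold: float = 0.1) -> str:
    """whiteness_change: measured change in whiteness after acetic acid for a
    region in the transformation zone. iodine_negative: iodine response of the
    registered region in the Lugol's iodine image."""
    if whiteness_change < whiteness_threshold:
        return "no significant aceto-whitening"
    # Significant aceto-whitening: use the iodine response to separate a
    # premalignant lesion from benign aceto-white causes (illustrative rule).
    return "premalignant lesion" if iodine_negative else "metaplasia or inflammation"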
Abstract:
The invention relates to a computer-implemented method for automatically detecting anatomical structures (3) in a medical image (1) of a subject, the method comprising applying an object detector function (4) to the medical image, wherein the object detector function performs the steps of: (A) applying a first neural network (40) to the medical image, wherein the first neural network is trained to detect a first plurality of classes of larger-sized anatomical structures (3a), thereby generating as output the coordinates of at least one first bounding box (51) and a confidence score that it contains a larger-sized anatomical structure; (B) cropping (42) the medical image to the first bounding box, thereby generating a cropped image (11) containing the image content within the first bounding box (51); and (C) applying a second neural network (44) to the cropped medical image, wherein the second neural network is trained to detect at least one second class of smaller-sized anatomical structures (3b), thereby generating as output the coordinates of at least one second bounding box (54) and a confidence score that it contains a smaller-sized anatomical structure.
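The coarse-to-fine pipeline of steps (A) to (C) can be sketched as follows, with the two detector callables standing in for the trained networks; the box format and field names are assumptions for illustration.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Detection:
    box: tuple    # (x0, y0, x1, y1), assumed format, in input-image coordinates
    score: float  # confidence that the box contains the structure
    label: str


def detect_structures(image,
                      coarse_net: Callable[[object], List[Detection]],
                      fine_net: Callable[[object], List[Detection]]) -> List[Detection]:
    """(A) detect larger-sized structures, (B) crop to each first bounding box,
    (C) detect smaller-sized structures inside the crop and map their boxes
    back to the coordinates of the original medical image."""
    results: List[Detection] = []
    for det in coarse_net(image):                       # step (A)
        x0, y0, x1, y1 = (int(v) for v in det.box)
        results.append(det)
        crop = image[y0:y1, x0:x1]                      # step (B)
        for small in fine_net(crop):                    # step (C)
            sx0, sy0, sx1, sy1 = small.box
            results.append(Detection((sx0 + x0, sy0 + y0, sx1 + x0, sy1 + y0),
                                     small.score, small.label))
    return results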
Abstract:
A mechanism for defining a set of preset parameter values for an ultrasound imaging system. Information about local machine-learning models, generated by a plurality of ultrasound imaging systems and updated responsive to operator feedback, is provided to an external server. The external server generates a global machine-learning model based on this information, which is then used to update the local machine-learning model on target ultrasound imaging systems.
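The aggregation of local-model information into a global model can be sketched as plain federated averaging; the patent does not necessarily use this scheme, and the parameter-vector representation and weighting by feedback counts are assumptions.

import numpy as np


def build_global_model(local_params, local_counts):
    """local_params: list of parameter vectors, one per ultrasound system.
    local_counts: number of operator-feedback updates behind each vector.
    Returns a weighted average serving as the global machine-learning model."""
    weights = np.asarray(local_counts, dtype=np.float64)
    weights /= weights.sum()
    stacked = np.stack([np.asarray(p, dtype=np.float64) for p in local_params])
    return (weights[:, None] * stacked).sum(axis=0)


def update_local_model(local_params, global_params, mix=0.5):
    """Move a target system's local model toward the global model."""
    return (1.0 - mix) * np.asarray(local_params) + mix * np.asarray(global_params)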