Abstract:
An apparatus and method for adaptive computer-aided diagnosis (CAD) are provided. The adaptive CAD apparatus includes an image analysis algorithm selector configured to select an image analysis algorithm based on a speed of a probe or a resolution of a current image frame obtained by the probe; and an image analyzer configured to detect and classify a region of interest (ROI) in the current image frame using the selected image analysis algorithm.
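The selection logic described above can be sketched as a simple threshold rule. The threshold values and algorithm names below are illustrative assumptions for demonstration only, not part of the disclosure:

```python
# Illustrative sketch of adaptive image-analysis algorithm selection.
# Thresholds and algorithm names are assumptions, not from the abstract.

FAST_PROBE_THRESHOLD = 5.0   # cm/s; assumed cutoff for "fast" probe motion
LOW_RES_THRESHOLD = 256      # pixels; assumed cutoff for "low" resolution

def select_algorithm(probe_speed, frame_resolution):
    """Pick a lightweight detector when the probe moves quickly or the
    frame is low-resolution; otherwise pick a more accurate one."""
    if probe_speed > FAST_PROBE_THRESHOLD or frame_resolution < LOW_RES_THRESHOLD:
        return "fast_detector"      # lower latency, coarser ROI detection
    return "accurate_detector"      # higher latency, finer classification
```

The key design point is that the selector trades accuracy for latency when frame quality or acquisition conditions degrade.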
Abstract:
An apparatus and method for supporting computer-aided diagnosis (CAD) are provided. The apparatus includes: a control processor configured to determine a duration during which a remaining image of a first region of interest (ROI) detected from a first image frame is displayed, based on a characteristic of measuring the first ROI; and a display configured to mark a remaining image of a second ROI of a second image frame in the first image frame and display the marked image on a screen, in response to the first image frame being acquired during a duration set to display the remaining image of the second ROI. The first image frame is obtained subsequent to the second image frame.
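The "remaining image" persistence described above can be sketched as a tracker that keeps each ROI marked on subsequent frames until its display duration elapses. The class shape and timing model below are illustrative assumptions:

```python
class RemainingImageTracker:
    """Sketch of remaining-image persistence: an ROI detected in an
    earlier frame stays marked on later frames until the duration set
    for it expires. The interface is an assumption for illustration."""

    def __init__(self):
        self.active = []  # list of (roi, expiry_time) pairs

    def add_roi(self, roi, duration, now):
        # The duration would be determined per ROI, e.g. from a
        # characteristic of measuring that ROI (per the abstract).
        self.active.append((roi, now + duration))

    def rois_to_mark(self, now):
        # Drop ROIs whose display duration has elapsed, then return
        # those still to be marked on the current frame.
        self.active = [(r, t) for r, t in self.active if t > now]
        return [r for r, _ in self.active]
```

A frame acquired while a prior ROI's duration is still running would thus be displayed with that ROI's remaining image marked on it.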
Abstract:
Disclosed are a region of interest (ROI) detection apparatus and method. The ROI detection apparatus includes: a selecting criterion acquirer configured to acquire a selecting criterion; an image receiver configured to receive a current image; a suspicious area selector configured to select a part of the current image as a suspicious area according to the selecting criterion; and an ROI detector configured to detect an ROI from the suspicious area.
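The two-stage pipeline above (suspicious-area selection, then ROI detection) might be sketched as follows; the criterion and detector are assumed callables standing in for the disclosed components, and regions are modeled as plain values for simplicity:

```python
def detect_rois(regions, selecting_criterion, detector):
    """Sketch of the two-stage ROI pipeline: first keep only regions
    that satisfy the selecting criterion (the suspicious areas), then
    run the ROI detector on those areas only. Both callables are
    assumptions for illustration."""
    suspicious = [r for r in regions if selecting_criterion(r)]
    return [roi for region in suspicious for roi in detector(region)]
```

Restricting the (presumably expensive) detector to suspicious areas is what makes this two-stage split worthwhile.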
Abstract:
Disclosed are apparatuses and methods for processing a control command for an electronic device based on a voice agent. The apparatus includes a command tagger configured to receive at least one control command for the electronic device from at least one voice agent and to tag additional information to the at least one control command, and a command executor configured to, in response to the command tagger receiving a plurality of control commands, integrate the plurality of control commands based on additional information tagged to each of the plurality of control commands and to control the electronic device based on a result of the integration.
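The tagging-and-integration behavior above can be sketched as deduplicating commands received from multiple voice agents within a short time window, using the tagged additional information (here, agent identity and timestamp). The data shape and window length are assumptions:

```python
from dataclasses import dataclass

@dataclass
class TaggedCommand:
    """A control command with tagged additional information
    (agent identity and receipt time) -- an assumed layout."""
    command: str
    agent_id: str
    timestamp: float

def integrate(commands, window=1.0):
    """Sketch of integration: when multiple agents relay the same
    command within `window` seconds, keep only the earliest copy so
    the device is not controlled twice."""
    commands = sorted(commands, key=lambda c: c.timestamp)
    result = []
    for c in commands:
        duplicate = any(
            r.command == c.command and c.timestamp - r.timestamp < window
            for r in result
        )
        if not duplicate:
            result.append(c)
    return result
```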
Abstract:
A method of visualizing anatomical elements in a medical image includes receiving a medical image; detecting a plurality of anatomical elements from the medical image; verifying a location of each of the plurality of anatomical elements based on anatomical context information including location relationships between the plurality of anatomical elements; adjusting the location relationships between the plurality of anatomical elements; and combining the verified and adjusted information of the plurality of anatomical elements with the medical image.
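The location-verification step might be sketched as checking detected positions against pairwise anatomical ordering rules from the context information. The coordinate convention and rule format below are assumptions:

```python
def verify_locations(elements, context_rules):
    """Sketch of verifying anatomical locations: `elements` maps an
    element name to its detected (x, y) position, and `context_rules`
    is a list of (above_name, below_name) pairs encoding expected
    ordering (y grows downward in image coordinates). Returns the
    rule pairs that the detected positions violate."""
    violations = []
    for above, below in context_rules:
        if elements[above][1] >= elements[below][1]:
            violations.append((above, below))
    return violations
```

Violating pairs would then be candidates for the adjustment step before the results are combined with the medical image.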
Abstract:
A Computer-Aided Diagnosis (CAD) apparatus and a CAD method are provided. The CAD apparatus includes an automatic diagnoser configured to perform automatic diagnosis using an image that is received from a probe, and generate diagnosis information including results of the automatic diagnosis. The CAD apparatus further includes an information determiner configured to determine diagnosis information to be displayed among the generated diagnosis information, based on a manual diagnosis of a user, and a display configured to display the received image and the determined diagnosis information.
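The information determiner above might be sketched as filtering the automatically generated diagnosis results down to those relevant to the region the user is manually examining. The result layout and overlap test are assumptions:

```python
def determine_display_info(auto_results, manual_roi):
    """Sketch: keep only automatic diagnosis results whose bounding box
    overlaps the region of the user's manual diagnosis. Boxes are
    (x1, y1, x2, y2) tuples -- an assumed representation."""
    def overlaps(a, b):
        ax1, ay1, ax2, ay2 = a
        bx1, by1, bx2, by2 = b
        return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2

    return [r for r in auto_results if overlaps(r["box"], manual_roi)]
```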
Abstract:
A system for processing a user utterance is provided. The system includes at least one network interface; at least one processor operatively connected to the at least one network interface; and at least one memory operatively connected to the at least one processor, wherein the at least one memory stores a plurality of specified sequences of states of at least one external electronic device, wherein each of the specified sequences is associated with a respective domain, wherein the at least one memory further stores instructions that, when executed, cause the at least one processor to receive first data associated with the user utterance provided via a first of the at least one external electronic device, wherein the user utterance includes a request for performing a task using the first of the at least one external electronic device.
Abstract:
A personalized augmented reality providing apparatus includes an interest object determiner configured to determine an interest object among external objects each having a predetermined relationship with a user, a relationship identifier configured to identify a subjective relationship between the interest object determined by the interest object determiner and the user, an additional information generator configured to generate additional information representing a current relationship state between the interest object and the user based on the subjective relationship identified by the relationship identifier, and an additional information provider configured to provide the user with the additional information generated by the additional information generator.
Abstract:
An electronic device and method are disclosed herein. The electronic device includes a network interface and a processor. The processor implements the method, which includes: receiving, through the network interface, a voice input transmitted from a first external device, the voice input including a request to execute a function using at least one application that is not indicated in the voice input; extracting a first text from the voice input by executing automatic speech recognition (ASR); when the at least one application is identified based on the first text, transmitting, through the network interface to the first external device, second data associated with the identified at least one application for display by the first external device; and when the at least one application is not identified based at least in part on the first text, reattempting identification of the at least one application by executing natural language understanding (NLU) on the first text.
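The ASR-then-NLU fallback described above can be sketched as follows; `asr`, `app_index`, and `nlu` are assumed callables standing in for the device's speech-recognition, text-lookup, and language-understanding components:

```python
def identify_application(voice_input, asr, app_index, nlu):
    """Sketch of the fallback flow: extract a first text via ASR, try
    direct text-based identification of the application, and reattempt
    via NLU only when the direct lookup fails. All three callables are
    assumptions for illustration."""
    first_text = asr(voice_input)
    app = app_index(first_text)   # direct identification from the text
    if app is None:
        app = nlu(first_text)     # fallback: reattempt via NLU
    return app
```

The design choice is to try the cheap text match first and reserve the heavier NLU pass for the failure path.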