Abstract:
A LUS robotic surgical system is trainable by a surgeon to automatically move a LUS probe in a desired fashion upon command so that the surgeon does not have to do so manually during a minimally invasive surgical procedure. A sequence of 2D ultrasound image slices captured by the LUS probe according to stored instructions is processable into a 3D ultrasound computer model of an anatomic structure, which may be displayed as a 3D or 2D overlay to a camera view, or in a PIP, as selected by the surgeon or as programmed, to assist the surgeon in inspecting an anatomic structure for abnormalities. Virtual fixtures are definable so as to assist the surgeon in accurately guiding a tool to a target on the displayed ultrasound image.
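The abstract above describes building a 3D ultrasound model from a sequence of captured 2D slices. As a minimal illustrative sketch (not the patented method), assuming the probe captures parallel, equally spaced, equally sized slices, the volume reconstruction can be as simple as stacking the slices along a new axis; the function name `stack_slices` is hypothetical:

```python
import numpy as np

def stack_slices(slices):
    """Stack parallel, equally spaced 2D ultrasound slices (each H x W)
    into an (N x H x W) 3D volume that downstream code could render as
    a 3D or 2D overlay."""
    return np.stack(slices, axis=0)

# Example: five synthetic 64x64 slices become a 5x64x64 volume.
volume = stack_slices([np.zeros((64, 64), dtype=np.uint8) for _ in range(5)])
```

Real systems must additionally register each slice to the probe pose at capture time, which this sketch omits.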
Abstract:
An exemplary method includes receiving images of a site captured at a same time by a camera, generating, based on one or more of the images, a monochromatic image, generating, based on one or more of the images, an alternate image representative of an alternate imaging characteristic of the site, and displaying the monochromatic image combined with the alternate image, the alternate image being highlighted relative to the monochromatic image.
Abstract:
A synthetic representation of a robot tool for display on a user interface of a robotic system. The synthetic representation may be used to show the position of a view volume of an image capture device with respect to the robot. The synthetic representation may also be used to find a tool that is outside of the field of view, to display range of motion limits for a tool, to remotely communicate information about the robot, and to detect collisions.
Abstract:
In one embodiment, a surgical instrument includes a housing linkable with a manipulator arm of a robotic surgical system, a shaft operably coupled to the housing, a force transducer on a distal end of the shaft, and a plurality of fiber optic strain gauges on the force transducer. In one example, the plurality of strain gauges are operably coupled to a fiber optic splitter or an arrayed waveguide grating (AWG) multiplexer. A fiber optic connector is operably coupled to the fiber optic splitter or the AWG multiplexer. A wrist joint is operably coupled to a distal end of the force transducer, and an end effector is operably coupled to the wrist joint. In another embodiment, a robotic surgical manipulator includes a base link operably coupled to a distal end of a manipulator positioning system, and a distal link movably coupled to the base link, wherein the distal link includes an instrument interface and a fiber optic connector optically linkable to a surgical instrument. A method of passing data between an instrument and a manipulator via optical connectors is also provided.
Abstract:
In a minimally invasive surgical system, an illuminator simultaneously illuminates a surgical site with fewer than all of the visible color components that make up visible white light and with a fluorescence excitation illumination component. An image capture system acquires an image for each of the visible color components illuminating the surgical site and a fluorescence image, which is excited by the fluorescence excitation component from the illuminator. The minimally invasive surgical system uses the acquired images to generate a background black and white image of the surgical site. The acquired fluorescence image is superimposed on the background black and white image, and is highlighted in a selected color, e.g., green. The background black and white image with the superimposed highlighted fluorescence image is displayed for a user of the system. The highlighted fluorescence image identifies tissue of clinical interest.
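The compositing step described above, superimposing a fluorescence image on a black-and-white background and highlighting it in a selected color such as green, can be sketched as follows. This is a simplified assumption-laden illustration, not the patented pipeline; `composite_fluorescence` and the fixed threshold are hypothetical:

```python
import numpy as np

def composite_fluorescence(background_gray, fluorescence, threshold=0.1):
    """Superimpose a fluorescence image on a grayscale background,
    highlighting fluorescing regions in green.

    background_gray: (H, W) floats in [0, 1], the black-and-white scene.
    fluorescence:    (H, W) floats in [0, 1], the acquired fluorescence.
    Returns an (H, W, 3) RGB image.
    """
    rgb = np.repeat(background_gray[..., None], 3, axis=2)
    mask = fluorescence > threshold
    # Where fluorescence exceeds the threshold, drive the green channel
    # with the fluorescence intensity and zero red/blue so the highlight
    # reads as green against the grayscale background.
    rgb[mask, 0] = 0.0
    rgb[mask, 1] = fluorescence[mask]
    rgb[mask, 2] = 0.0
    return rgb

# Example: one fluorescing pixel on a mid-gray background.
bg = np.full((2, 2), 0.5)
fl = np.zeros((2, 2))
fl[0, 0] = 0.9
out = composite_fluorescence(bg, fl)
```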
Abstract:
A system comprises: a robotic arm operatively coupleable to a tool comprising a working end; and an input device communicatively coupled to the robotic arm. The input device is manipulatable by an operator. The system further comprises a processor configured to cause an image of a work site, captured by an image capture device from a perspective of an image reference frame, to be displayed on a display. The image of the work site includes an image of the working end of the tool. The processor is further configured to determine a position of the working end of the tool in the image of the work site and render a tool information overlay at the position of the working end of the tool in the image of the work site. The tool information overlay visually indicates an identity of the input device.
Abstract:
A medical robotic system includes a viewer for displaying an image of a work site, a gaze tracker for tracking a gaze point of a user on the viewer, and a processor. The processor is configured to: draw an area or volume defining shape, overlaid on the image of the work site, in a position determined by the gaze tracker; assign a fixed virtual constraint to the shape and constrain movement of a robotic tool according to the fixed virtual constraint; receive a user selected action command selecting an image of patient anatomy; and superimpose the selected image of the patient anatomy over the image of the work site within the shape. The selected image of the patient anatomy is registered to the image of the work site.
Abstract:
In one embodiment, a digital zoom and panning system for digital video is disclosed including an image acquisition device to capture digital video images; an image buffer to store one or more frames of digital video images as source pixels; a display device having first pixels to display images; a user interface to accept user input including a source rectangle to select source pixels within frames of the digital video images, a destination rectangle to select target pixels within the display device to display images, and a region of interest within the digital video images to display in the destination rectangle; and a digital mapping and filtering device to selectively map and filter source pixels in the region of interest from the image buffer into target pixels of the display device in response to the user input.
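The mapping described above, from source pixels in a region of interest to target pixels in a destination rectangle, can be sketched with simple nearest-neighbor sampling. This is an illustrative assumption, not the patented mapping-and-filtering device; `map_region` and its rectangle conventions are hypothetical:

```python
import numpy as np

def map_region(frame, src_rect, dst_size):
    """Map source pixels inside a region of interest of a video frame
    into a destination rectangle, using nearest-neighbor sampling.

    frame:    (H, W) source pixel array (one frame from the image buffer).
    src_rect: (x, y, w, h) region of interest in source coordinates.
    dst_size: (dw, dh) size of the destination rectangle on the display.
    """
    x, y, w, h = src_rect
    dw, dh = dst_size
    # For each target pixel, pick the nearest source pixel in the ROI.
    xs = x + (np.arange(dw) * w) // dw
    ys = y + (np.arange(dh) * h) // dh
    return frame[np.ix_(ys, xs)]

# Example: zoom a 2x2 corner of a 4x4 frame up to 4x4 (2x digital zoom).
frame = np.arange(16).reshape(4, 4)
zoomed = map_region(frame, (0, 0, 2, 2), (4, 4))
```

A production implementation would typically substitute a proper filter (bilinear or better) for the nearest-neighbor lookup.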
Abstract:
An operator telerobotically controls tools to perform a procedure on an object at a work site while viewing real-time images of the work site on a display. Tool information is provided in the operator's current gaze area on the display by rendering the tool information over the tool, so as not to obscure objects currently being worked on by the tool and so as not to require the user's eyes to refocus when looking at the tool information and the image of the tool on a stereo viewer.