-
1.
Publication Number: US12120461B2
Publication Date: 2024-10-15
Application Number: US18312507
Application Date: 2023-05-04
Applicant: Verb Surgical Inc.
Inventor: Pablo Garcia Kilroy , Jagadish Venkataraman
CPC classification number: H04N7/169 , G06V20/40 , G11B27/323 , G16H70/20 , G06V20/44
Abstract: This disclosure provides techniques for synchronizing the playback of two recorded videos of the same surgical procedure. In one aspect, a process for generating a composite video from two recorded videos of a surgical procedure is disclosed. This process begins by receiving first and second surgical videos of the same surgical procedure. The process then performs phase segmentation on each of the first and second surgical videos to segment the first and second surgical videos into a first set of video segments and a second set of video segments, respectively, corresponding to a sequence of predefined phases. Next, the process time-aligns each video segment of a given predefined phase in the first video with a corresponding video segment of the given predefined phase in the second video. The process next displays the time-aligned first and second surgical videos for comparative viewing.
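The time-alignment step lends itself to a small illustration. The Python sketch below is not drawn from the patent itself: it assumes phase segmentation has already produced per-phase start/end times for both videos, and the Segment structure and align_position helper are invented names. It shows one simple way to map a playback position in the first video onto the corresponding position in the second by linear scaling within the matching phase.

```python
# Minimal sketch (not the patented implementation) of the time-alignment step:
# given per-phase segments from two videos of the same procedure, map a playback
# position in video A onto the corresponding position in video B by linear scaling
# within the matching phase. Segment boundaries are assumed inputs here.

from dataclasses import dataclass

@dataclass
class Segment:
    phase: str        # predefined surgical phase name
    start: float      # segment start time in seconds
    end: float        # segment end time in seconds

def align_position(t_a: float, segs_a: list[Segment], segs_b: list[Segment]) -> float:
    """Map time t_a in video A to the corresponding time in video B."""
    for seg_a, seg_b in zip(segs_a, segs_b):
        if seg_a.start <= t_a <= seg_a.end:
            # fraction of the phase already played in video A
            frac = (t_a - seg_a.start) / max(seg_a.end - seg_a.start, 1e-6)
            # same fraction of the matching phase in video B
            return seg_b.start + frac * (seg_b.end - seg_b.start)
    return t_a  # outside any known phase: leave the position unchanged

# Example: a phase runs 0-120 s in video A and 0-90 s in video B,
# so the 60 s mark in A maps to the 45 s mark in B.
segs_a = [Segment("dissection", 0, 120), Segment("closure", 120, 200)]
segs_b = [Segment("dissection", 0, 90),  Segment("closure", 90, 180)]
print(align_position(60.0, segs_a, segs_b))   # -> 45.0
```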
-
2.
Publication Number: US20230263365A1
Publication Date: 2023-08-24
Application Number: US18166115
Application Date: 2023-02-08
Applicant: Verb Surgical Inc.
Inventor: Jagadish Venkataraman , Denise Ann Miller
CPC classification number: A61B1/00006 , A61B34/76 , A61B34/25 , A61B34/35 , A61B34/74 , G06T7/0012 , A61B1/000096 , A61B2034/743 , A61B34/20
Abstract: Embodiments described herein provide various examples of a machine-learning-based visual-haptic system for constructing visual-haptic models for various interactions between surgical tools and tissues. In one aspect, a process for constructing a visual-haptic model is disclosed. This process can begin by receiving a set of training videos. The process then processes each training video in the set of training videos to extract one or more video segments that depict a target tool-tissue interaction from the training video, wherein the target tool-tissue interaction involves exerting a force by one or more surgical tools on a tissue. Next, for each video segment in the set of video segments, the process annotates each video image in the video segment with a set of force levels predefined for the target tool-tissue interaction. The process subsequently trains a machine-learning model using the annotated video images to obtain a trained machine-learning model for the target tool-tissue interaction.
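As a rough companion to this abstract, the following sketch mimics the annotate-then-train pattern it describes. The discrete force levels, the annotate_segment and train_visual_haptic_model names, and the flattened-pixel logistic-regression classifier are all stand-in assumptions; a production system would rely on expert annotations and a deep model rather than this toy.

```python
# Illustrative sketch only: pair each frame of a tool-tissue interaction segment with
# a predefined force level, then train a stand-in classifier on the annotated frames.
# The force levels and the model choice are assumptions, not the patented design.

import numpy as np
from sklearn.linear_model import LogisticRegression

FORCE_LEVELS = {0: "no force", 1: "light", 2: "heavy"}   # assumed label set

def annotate_segment(frames: list[np.ndarray], labels: list[int]) -> list[tuple[np.ndarray, int]]:
    """Pair every frame of an interaction segment with its force-level label."""
    assert len(frames) == len(labels)
    return list(zip(frames, labels))

def train_visual_haptic_model(annotated: list[tuple[np.ndarray, int]]) -> LogisticRegression:
    """Train a stand-in model on flattened frame pixels (a real system would use a CNN)."""
    X = np.stack([frame.reshape(-1) for frame, _ in annotated])
    y = np.array([label for _, label in annotated])
    model = LogisticRegression(max_iter=1000)
    model.fit(X, y)
    return model

# toy usage: 32x32 grayscale "frames" with random pixels and cycling labels
rng = np.random.default_rng(0)
frames = [rng.random((32, 32)) for _ in range(30)]
labels = [i % 3 for i in range(30)]
model = train_visual_haptic_model(annotate_segment(frames, labels))
print(FORCE_LEVELS[int(model.predict(frames[0].reshape(1, -1))[0])])
```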
-
3.
Publication Number: US11728029B2
Publication Date: 2023-08-15
Application Number: US17146823
Application Date: 2021-01-12
Applicant: Verb Surgical Inc.
Inventor: Jagadish Venkataraman , Pablo Garcia Kilroy
Abstract: Embodiments described herein provide various examples of a system for extracting an actual procedure duration composed of actual surgical tool-tissue interactions from an overall procedure duration of a surgical procedure on a patient. In one aspect, the system is configured to obtain the actual procedure duration by: obtaining an overall procedure duration of the surgical procedure; receiving a set of operating room (OR) data from a set of OR data sources collected during the surgical procedure, wherein the set of OR data includes an endoscope video captured during the surgical procedure; analyzing the set of OR data to detect a set of non-surgical events during the surgical procedure that do not involve surgical tool-tissue interactions; extracting a set of durations corresponding to the set of non-surgical events; and determining the actual procedure duration by subtracting the set of extracted durations from the overall procedure duration.
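The duration bookkeeping in this abstract is simple arithmetic once the non-surgical events have been detected. The sketch below assumes the events arrive as (start, end) time pairs; actual_procedure_duration is an invented helper name and event detection itself is out of scope.

```python
# Minimal sketch of the subtraction step: remove the durations of detected non-surgical
# events (e.g. idle periods, scope out of body) from the overall procedure duration.

def actual_procedure_duration(overall_s: float, non_surgical_events: list[tuple[float, float]]) -> float:
    """overall_s is the total OR time in seconds; events are (start, end) pairs in seconds."""
    non_surgical_total = sum(end - start for start, end in non_surgical_events)
    return overall_s - non_surgical_total

# Example: a 5400 s (90 min) procedure with a 600 s scope-out period and a 300 s tool change.
events = [(1200.0, 1800.0), (3000.0, 3300.0)]
print(actual_procedure_duration(5400.0, events))  # -> 4500.0
```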
-
4.
Publication Number: US11594325B2
Publication Date: 2023-02-28
Application Number: US16894018
Application Date: 2020-06-05
Applicant: Verb Surgical Inc.
Inventor: Jagadish Venkataraman
Abstract: Embodiments described herein provide various examples of automatically processing surgical videos to detect surgical tools and tool-related events, and extract surgical-tool usage information. In one aspect, a process for automatically tracking usage of robotic surgery tools is disclosed. This process can begin by receiving a surgical video captured during a robotic surgery. The process then processes the surgical video to detect a surgical tool in the surgical video. Next, the process determines whether the detected surgical tool has been engaged in the robotic surgery. If so, the process further determines whether the detected surgical tool is engaged for the first time in the robotic surgery. If the detected surgical tool is engaged for the first time, the process subsequently increments a total-engagement count of the detected surgical tool. Otherwise, the process continues monitoring the detected surgical tool in the surgical video.
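One plausible reading of the counting rule, that the total-engagement count increments at the start of each new engagement episode, can be sketched as below. Per-frame tool detections and engagement flags are assumed inputs; the detection models themselves are out of scope.

```python
# Sketch of the engagement-counting bookkeeping only: a tool's count increments on the
# first frame of each new engagement, not on every frame the tool stays engaged.

from collections import defaultdict

def count_engagements(frame_detections: list[list[tuple[str, bool]]]) -> dict[str, int]:
    """frame_detections[i] lists (tool_name, is_engaged) pairs detected in frame i."""
    counts: dict[str, int] = defaultdict(int)
    engaged_last_frame: set[str] = set()
    for detections in frame_detections:
        engaged_now = {tool for tool, engaged in detections if engaged}
        for tool in engaged_now - engaged_last_frame:   # newly engaged this frame
            counts[tool] += 1
        engaged_last_frame = engaged_now
    return dict(counts)

# Example: a grasper engages twice and a scissors once across four frames.
frames = [
    [("grasper", True)],
    [("grasper", True), ("scissors", False)],
    [("grasper", False), ("scissors", True)],
    [("grasper", True), ("scissors", True)],
]
print(count_engagements(frames))  # -> {'grasper': 2, 'scissors': 1}
```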
-
5.
Publication Number: US11576743B2
Publication Date: 2023-02-14
Application Number: US17362620
Application Date: 2021-06-29
Applicant: Verb Surgical Inc.
Inventor: Jagadish Venkataraman , Denise Ann Miller
Abstract: Embodiments described herein provide various examples of a machine-learning-based visual-haptic system for constructing visual-haptic models for various interactions between surgical tools and tissues. In one aspect, a process for constructing a visual-haptic model is disclosed. This process can begin by receiving a set of training videos. The process then processes each training video in the set of training videos to extract one or more video segments that depict a target tool-tissue interaction from the training video, wherein the target tool-tissue interaction involves exerting a force by one or more surgical tools on a tissue. Next, for each video segment in the set of video segments, the process annotates each video image in the video segment with a set of force levels predefined for the target tool-tissue interaction. The process subsequently trains a machine-learning model using the annotated video images to obtain a trained machine-learning model for the target tool-tissue interaction.
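Since this abstract repeats the training pipeline sketched after the earlier, related publication (US20230263365A1), the companion sketch below shows the other half of such a system instead: applying an already trained model frame by frame to estimate force levels. The model interface and toy frame format follow the earlier stand-in example and are assumptions, not the patented implementation.

```python
# Companion sketch: inference with a trained visual-haptic stand-in model, producing one
# discrete force-level estimate per frame (e.g. to drive visual or haptic feedback).

import numpy as np

def estimate_force_levels(model, frames: list[np.ndarray]) -> list[int]:
    """Predict a discrete force level for each frame using a trained stand-in model."""
    X = np.stack([frame.reshape(-1) for frame in frames])
    return [int(level) for level in model.predict(X)]

# e.g. levels = estimate_force_levels(model, frames) with the toy model trained above
```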
-
6.
Publication Number: US20220028525A1
Publication Date: 2022-01-27
Application Number: US17493589
Application Date: 2021-10-04
Applicant: Verb Surgical Inc.
Inventor: Jagadish Venkataraman , Pablo Garcia Kilroy
Abstract: This patent disclosure provides various embodiments of combining multiple modalities of non-text surgical data, in the form of videos, images, and audio, in a meaningful manner so that the combined data can be used to perform comprehensive data analytics for a surgical procedure. In some embodiments, the disclosed system can begin by receiving two or more modalities of surgical data during the surgical procedure. The system then time-synchronizes the two or more modalities of surgical data to generate two or more modalities of time-synchronized surgical data. Next, the system converts each modality of the time-synchronized surgical data into a corresponding array of values of a common format. The system then combines the two or more arrays of values to generate a combined set of values. The system subsequently performs comprehensive data analytics on the combined set of values to generate a surgical decision for the surgical procedure.
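The convert-to-a-common-format step can be pictured as resampling every modality onto the same time grid. The sketch below assumes the common format is one float per second per modality and that per-modality feature extraction has already happened; to_common_format and combine_modalities are invented names, not the patented design.

```python
# Hedged sketch of "convert each modality to a common numeric format, then combine":
# each modality is resampled to one value per second, and the resulting arrays are
# stacked into a single matrix for downstream analytics.

import numpy as np

def to_common_format(samples: list[float], duration_s: int) -> np.ndarray:
    """Resample a modality's per-sample values into one value per second (common format)."""
    samples_arr = np.asarray(samples, dtype=float)
    # nearest-neighbour resampling onto a one-value-per-second grid
    idx = np.linspace(0, len(samples_arr) - 1, duration_s).round().astype(int)
    return samples_arr[idx]

def combine_modalities(modalities: list[np.ndarray]) -> np.ndarray:
    """Stack time-synchronized, common-format arrays into one combined feature matrix."""
    return np.stack(modalities, axis=1)   # shape: (seconds, n_modalities)

duration = 10  # seconds of (toy) procedure time
video_motion = to_common_format(list(np.random.rand(300)), duration)   # e.g. 30 fps motion score
audio_level  = to_common_format(list(np.random.rand(160)), duration)   # e.g. 16 Hz loudness
combined = combine_modalities([video_motion, audio_level])
print(combined.shape)   # -> (10, 2); downstream analytics would consume this matrix
```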
-
7.
Publication Number: US11026561B2
Publication Date: 2021-06-08
Application Number: US16361075
Application Date: 2019-03-21
Applicant: Verb Surgical Inc.
Inventor: Jagadish Venkataraman , Dave Scott , Eric Johnson
Abstract: Embodiments described herein provide various examples of displaying video images of a surgical video captured at a first resolution on a screen of a surgical system having a second resolution lower than the first resolution. In one aspect, a process begins by receiving the surgical video and selecting a first portion of the video images having the same or substantially the same resolution as the second resolution. The process subsequently displays the first portion of the video images on the screen. While displaying the first portion of the video images, the process monitors a second portion of the video images not being displayed on the screen for a set of predetermined events, wherein the second portion is not visible to the user. When a predetermined event in the set of predetermined events is detected in the second portion, the process generates an alert to notify the user.
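The monitor-the-hidden-region idea can be sketched with a simple crop-and-check loop. The split into a visible crop plus a single off-screen strip, the event detector passed in as a callable, and the saturated-pixel "event" in the usage example are all simplifying assumptions, not the patented detection logic.

```python
# Sketch: show only the crop that fits the screen, keep checking the off-screen region
# for a predetermined event, and raise an alert when one is detected there.

import numpy as np
from typing import Callable

def split_frame(frame: np.ndarray, screen_h: int, screen_w: int):
    """Return the displayed crop and the remaining (off-screen) region of one video frame."""
    visible = frame[:screen_h, :screen_w]
    offscreen = frame[:, screen_w:]          # simplification: off-screen strip to the right
    return visible, offscreen

def monitor(frame: np.ndarray, screen_h: int, screen_w: int,
            detect_event: Callable[[np.ndarray], bool]) -> bool:
    """Display would use `visible`; return True (raise an alert) if an event is seen off-screen."""
    _visible, offscreen = split_frame(frame, screen_h, screen_w)
    return offscreen.size > 0 and detect_event(offscreen)

# toy usage: 4K-wide frame shown on a 1080p screen, "event" = any saturated pixel
frame = np.zeros((2160, 3840, 3), dtype=np.uint8)
frame[100, 3000] = 255                       # something happens outside the visible crop
alert = monitor(frame, 1080, 1920, lambda region: bool((region == 255).any()))
print(alert)                                 # -> True
```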
-
8.
Publication Number: US10791301B1
Publication Date: 2020-09-29
Application Number: US16440647
Application Date: 2019-06-13
Applicant: Verb Surgical Inc.
Inventor: Pablo Garcia Kilroy , Jagadish Venkataraman
Abstract: Embodiments described herein provide various examples of preparing two procedure videos, in particular two surgical procedure videos, for comparative learning. In some embodiments, to allow comparative learning of two recorded surgical videos, each of the two recorded surgical videos is segmented into a sequence of predefined phases/steps. Next, corresponding phases/steps of the two segmented videos are individually time-synchronized in a pair-wise manner so that a given phase/step of one recorded video and the corresponding phase/step of the other segmented video can have the same or substantially the same starting time and ending time during comparative playback of the two recorded videos. The disclosed comparative-learning techniques can generally be applied to any type of procedure video that can be broken down into a sequence of predefined phases/steps, and used to synchronize/slave one such procedure video to another procedure video of the same type at each segmented phase/step in the sequence of predefined phases/steps.
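Complementing the alignment sketch given for the related publication above (US12120461B2), the snippet below illustrates the slaving angle described here: per phase, the slaved video can simply be played at a speed factor equal to the ratio of the two phase durations, so both videos enter and leave each phase together. Phase durations are assumed inputs and slave_speed_factors is an invented name.

```python
# Sketch: per-phase playback speed factors for the slaved video, derived from the ratio
# of corresponding phase durations in the master and slave recordings.

def slave_speed_factors(master_phase_durations: list[float],
                        slave_phase_durations: list[float]) -> list[float]:
    """Per-phase playback speed for the slaved video (1.0 = normal speed)."""
    return [slave / master if master > 0 else 1.0
            for master, slave in zip(master_phase_durations, slave_phase_durations)]

# Example: the slaved video's 150 s first phase plays at 1.25x so it finishes
# together with the master's 120 s first phase.
print(slave_speed_factors([120.0, 80.0], [150.0, 80.0]))   # -> [1.25, 1.0]
```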
-
9.
Publication Number: US12189821B2
Publication Date: 2025-01-07
Application Number: US18320127
Application Date: 2023-05-18
Applicant: Verb Surgical Inc.
Inventor: Jagadish Venkataraman , Pablo Garcia Kilroy
Abstract: This patent disclosure provides various verification techniques to ensure that anonymized surgical procedure videos are indeed free of any personally-identifiable information (PII). In a particular aspect, a process for verifying that an anonymized surgical procedure video is free of PII is disclosed. This process can begin by receiving a surgical video corresponding to a surgery. The process next removes PII from the surgical video to generate an anonymized surgical video. Next, the process selects a set of verification video segments from the anonymized surgical procedure video. The process subsequently determines whether each segment in the set of verification video segments is free of PII. If so, the process replaces the surgical video with the anonymized surgical video for storage. If not, the process performs additional PII removal steps on the anonymized surgical video to generate an updated anonymized surgical procedure video.
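The verify-then-accept flow reads naturally as a small control loop. In the sketch below, PII removal and the per-segment PII check are injected callables, and the random choice of verification segments plus the retry cap are assumptions added purely for the illustration.

```python
# Sketch of the control flow only: anonymize, sample verification segments, and re-run
# PII removal until every sampled segment is judged PII-free (with a retry cap).

import random
from typing import Callable

def anonymize_with_verification(video: bytes,
                                remove_pii: Callable[[bytes], bytes],
                                segment_has_pii: Callable[[bytes, int], bool],
                                n_segments: int = 5,
                                max_rounds: int = 3) -> bytes:
    anonymized = remove_pii(video)
    for _ in range(max_rounds):
        # select a set of verification segments (here: random segment indices)
        segment_ids = random.sample(range(100), n_segments)
        if all(not segment_has_pii(anonymized, seg) for seg in segment_ids):
            return anonymized                 # verified: replace the original for storage
        anonymized = remove_pii(anonymized)   # additional PII-removal pass
    raise RuntimeError("video could not be verified as PII-free")

# toy usage: removal is a no-op and the PII check always passes
clean = anonymize_with_verification(b"raw-video-bytes", lambda v: v, lambda v, s: False)
```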
-
10.
Publication Number: US20240290459A1
Publication Date: 2024-08-29
Application Number: US18596298
Application Date: 2024-03-05
Applicant: Verb Surgical Inc.
Inventor: Jagadish Venkataraman , Pablo Garcia Kilroy
Abstract: This patent disclosure provides various embodiments of combining multiple modalities of non-text surgical data, in the form of videos, images, and audio, in a meaningful manner so that the combined data can be used to perform comprehensive data analytics for a surgical procedure. In some embodiments, the disclosed system can begin by receiving two or more modalities of surgical data during the surgical procedure. The system then time-synchronizes the two or more modalities of surgical data to generate two or more modalities of time-synchronized surgical data. Next, the system converts each modality of the time-synchronized surgical data into a corresponding array of values of a common format. The system then combines the two or more arrays of values to generate a combined set of values. The system subsequently performs comprehensive data analytics on the combined set of values to generate a surgical decision for the surgical procedure.
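This abstract repeats the multimodal pipeline sketched after the earlier, related publication (US20220028525A1), so the companion sketch below illustrates only the final step: running simple analytics over the combined per-second matrix to produce a toy surgical decision. The threshold, the minimum-duration rule, and the decision labels are invented for illustration.

```python
# Companion sketch: a toy analytics rule over the combined (seconds x modalities) matrix,
# flagging the procedure when sustained high activity is observed across modalities.

import numpy as np

def surgical_decision(combined: np.ndarray, threshold: float = 0.8, min_seconds: int = 3) -> str:
    """combined has shape (seconds, n_modalities); flag sustained high combined activity."""
    per_second_score = combined.mean(axis=1)
    flagged = int((per_second_score > threshold).sum())
    return "flag for review" if flagged >= min_seconds else "no action"

print(surgical_decision(np.random.rand(10, 2)))
```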