Computer Assisted Surgery Trainer


Laparoscopic surgery, when performed by a well-trained surgeon, is a remarkably effective procedure that minimizes the complications associated with large incisions, operative blood loss, and post-operative pain. It also reduces recovery time. However, the procedure is more challenging than conventional surgery due to restricted vision, hand-eye coordination problems, limited working space, and lack of tactile sensation. These issues make laparoscopic surgery a very difficult technique for medical students and residents to master.

In our work, we focus on computer-assisted surgical training (CAST). Our design principles and their implementation address some of the limitations of existing systems and advance the state of the art in surgical education, assessment, and guidance in laparoscopic surgery. Our overarching vision is to develop a fully integrated training system that will assist a practicing surgeon in performing laparoscopic surgeries. Such a system will enhance training and provide real-time assistance during an operation. Our ultimate goal is to help improve surgical outcomes and patient safety.

We commenced the initial (CAST I) system design and development by equipping surgical instruments with magnetic Micro Bird sensors for precise tracking and data collection. The position data obtained from the sensors are used to calculate key instrument motion metrics such as total path length, average speed, instantaneous speed, average radius of motion, and the number of times "safety zones" were breached.

We further enhanced the surgical trainer with real-time guidance and navigation capabilities. This version, called CAST II, employs sensing and configuration-space methods to assist in training, with a special focus on proper execution of movements and avoidance of critical zones in the operating space. An inference module determines whether a particular action is potentially harmful and why. Guidance and feedback are then given to the trainee to prevent potentially injurious actions and to reinforce correct technique. This guidance includes displaying the estimated optimal path and performance instructions on the screen, helping trainees learn what to do and what not to do.

Presently, we are developing the third-generation CAST system, which will realize the concept of performing surgery using robotics. It is, in principle, a mechatronic device that employs real surgical instruments, encoders for precise position sensing, servomotors, and attendant software for motion control. We are currently completing the development of the CAST III prototype, which will embed a collision-free optimal navigation system and augmented-reality visualization.


The project objectives include continuing the development of the computer-assisted surgical trainer and eventually making it usable in the operating environment as a guidance system. The laparoscopic trainer will serve as a guide for the surgeon while performing surgery. We aim to achieve the following objectives:
     1. Haptic guidance using control algorithms
     2. Visual guidance
     3. Incorporation of all training mechanisms required for the Fundamentals of Laparoscopic Surgery (FLS)
     4. Mitigation of depth-perception limitations using virtual/augmented reality
     5. Introduction of an optimal trajectory that avoids collisions in space
     6. Design of metrics to analyze trainee performance

System Overview for Guided Laparoscopy Training

The current version of CAST consists of two gimbals with motors attached to them. The gimbals provide feedback by means of encoders attached to the motors.

We have developed an optimal-path algorithm which generates a collision-free trajectory in our block world.

A sophisticated control algorithm nudges the trainee back onto the optimal path. The idea behind providing guidance is to help the trainee steer the instrument along the right path while not taking control away from them.

The trainee is able to visualize the block world through a CAD model presented on a monitor in front of them. Figure 1 shows the system, while Figure 2 is a block diagram representation of CAST III.

Image-Based Object State Modeling Of A Simplified Transfer Task

This project proposes a real-time, image-based training-scenario comprehension method. This method aims to support a visual and haptic guidance system for laparoscopic surgical skills training. The target task of the proposed approach is a simulation model of the peg transfer task, one of the hands-on exam topics in the Fundamentals of Laparoscopic Surgery certification. In this project, an image-based object state modeling (IBOSM) method is proposed. It generates the object state of the transfer task to support the guidance system. A rule-based, intelligent system is used to discern the object state without the aid of any object template or model.

To simplify the problem, only one right-hand grasper and one magenta triangle are considered here. The user (trainee) must use the grasper to grab the triangle from one peg and then transfer it to another peg. During this operation, the user may accidentally drop the triangle. In this setting, we consider only the following object states: stop, move, carry, and drop. In addition, the image-understanding process is sensitive to camera noise caused by a dark scene, the background color, and environment illumination. To provide a stable environment, we set up a uniform and sufficient light source with the color temperature of daylight, and the background color of the environment is set to white.

The proposed algorithm is simulated using C++. To evaluate the performance of our method, we use a practical image sequence of 1,000 frames. This sequence covers all states of the simplified peg transfer task: stop, move, carry, and drop. The evaluation image sequence, with the visualized label of each state, is shown in the following demo video.

Future Research Direction

We plan to use augmented reality to overlay the optimal path on the live camera stream, replacing the existing CAD model visualization. We will also introduce more challenging training scenarios based on feedback from expert surgeons in the domain. Metrics such as the time a trainee takes to execute a particular surgical task and his or her accuracy will be used to objectively analyze trainee task performance. In addition, we plan to model tissue deformations when artificial tissue is used for complex training procedures.

Guided training will be validated through a pilot experimental study in which the expertise level of computer-guided trainees will be compared to that of instructor-guided trainees.

The potential impact of our proposed work is improved surgical performance and, ultimately, better surgical outcomes. The optimal surgical movement planner and its attendant computer-based guidance will reinforce proper execution of specific tasks. The validation study will serve as a means of confirming or refuting this hypothesis. The CAST system will serve as a sophisticated, low-cost solution for fundamental laparoscopic skill training. It will provide an unlimited range of training exercises with precise performance data collection and analysis tools. Thus, full introspection into one's performance will be available to instructors and students. In addition, new scientific methods for movement trajectory planning in a 3D space with rigid objects (laparoscopic instruments) will result, along with better visualization techniques. The potential for employing such methods in a real surgical setting is very high (especially in planning complex procedures) and will lead to improved surgical situation awareness.