
PhD offer: Augmented Reality Based on Image Features in Minimally Invasive Ear Surgery

APPLICATION DEADLINE: 30 MAY 2021

Supervisors:

Alexis Bozorg Grayeli, Sarah Leclerc, Alain Lalande

Description:

Ear surgery is routinely conducted under microscopic or endoscopic vision. Target structures are situated behind the tympanic membrane and are often approached through the external auditory canal after skin incision and tympanic membrane elevation. We have developed an augmented reality (AR) system by overlaying a reconstruction of the middle ear contents, based on a high-resolution temporal bone CT-scan, on the 2D video obtained from the microscope or the endoscope (1, 2). Indeed, the virtual endoscopy function based on DICOM data provides a precise image of this region similar to the surgical scene (3). Currently, we use manual matching of anatomical features on both images to obtain the video-to-CT-scan registration. Image processing algorithms then maintain the correspondence between CT and video during intraoperative movements of the microscope or the endoscope (2, 3). This system allows visualization of the middle ear structures behind the tympanic membrane and of the inner ear structures deeply seated in the temporal bone (4). We have already evaluated this system in real time under laboratory (3) and operating-room conditions (Hussain R et al., submitted for publication).
Compared with conventional navigation systems, the workflow, speed and precision of our system are compatible with ear surgery, and it requires neither external fiducial markers nor additional imaging before surgery.
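In the 2D case, the manual registration step described above amounts to estimating a planar mapping from the matched point pairs. The following is a minimal NumPy sketch, not the project's actual code (in practice a library routine such as OpenCV's `findHomography` would typically be used): a homography is estimated from the six manually selected correspondences via the direct linear transform, after which the CT-derived overlay can be warped onto the video frame.

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate the 3x3 planar homography H mapping src -> dst
    from >= 4 point correspondences, via the direct linear transform."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on H.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The smallest right singular vector of A gives the entries of H.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pts):
    """Apply H to an array of 2D points (homogeneous normalization included)."""
    pts = np.asarray(pts, dtype=float)
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]
```

With six correspondences the system is overdetermined, which dampens the effect of small annotation errors on any single point pair.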
Other augmented reality projects have been reported in the field of head and neck surgery, but to our knowledge, no project in the field of otology has been published (5).
Project: We aim to develop this augmented reality system in the following directions:

  1. Semi-automatic video to CT-scan registration
    The current process requires the manual selection of 6 corresponding points on the initial frame of the video and on the virtual endoscopy image (from the CT-scan). This task is difficult because the similarity between the two images is low, and the registration error can easily propagate through the whole process. We will therefore develop a contour-based registration: delineating the contours of the most visible landmarks on both images is much easier. Mathematical algorithms such as k-nearest neighbors or fractal analysis (Hausdorff distance) will be used to approximate the best correspondence, and the CT-scan image will be warped onto the video frame. A database of more than 70 cases, with CT-scans, videos and annotations, is already available in our team.
  2. Fully automatic video to CT-scan registration using neural networks
    A convolutional neural network will be developed to identify and extract the contours on both images; registration will then follow with the algorithms cited above. The system will be trained on a series of endoscopic videos and CT-scans from patients (the dataset built in the previous step will be extended with new cases). Transfer learning and data augmentation algorithms will be employed. Convolutional neural networks based on spatial transformers (6) will be investigated, trained with either supervised or unsupervised learning. Robustness will be improved through the regularization effect of specific, innovative loss functions dedicated to the task.
  3. Design and development of a 3D AR system
    Up to now, the development has been carried out on a 2D video feed. 3D data are available from both the microscope (two independent optical axes) and the CT-scan. 3D information is crucial in several procedures. In this part, we will extract the 3D DICOM information from the CT-scan, register it to both video inputs, and recreate a coherent AR overlay on both video channels to obtain a 3D AR view projected on a screen in real time. These steps will be evaluated on resin models, human anatomical specimens and finally in operating-room conditions.
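To illustrate the contour-based matching of step 1, here is a minimal sketch of the symmetric Hausdorff distance between two sampled contours, together with a brute-force search over candidate translations. The function names and the translation-only search space are illustrative assumptions; the project's actual matching will involve richer transformations and the algorithms cited above.

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two 2D point sets
    (contours sampled as N x 2 arrays)."""
    A = np.asarray(A, dtype=float)
    B = np.asarray(B, dtype=float)
    # Pairwise Euclidean distances between every point of A and of B.
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    # Directed distances: the farthest nearest-neighbour in each direction.
    return max(D.min(axis=1).max(), D.min(axis=0).max())

def best_shift(contour, template, shifts):
    """Pick, among candidate translations, the one that minimizes the
    Hausdorff distance between the shifted contour and the template."""
    return min(shifts, key=lambda s: hausdorff(np.asarray(contour) + np.asarray(s),
                                               template))
```

Because the Hausdorff distance is driven by the worst-matched point, it penalizes any landmark contour left far from its counterpart, which suits a registration criterion.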
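For step 3, recovering 3D positions from the microscope's two optical axes is a standard two-view triangulation problem. The following is a minimal linear (DLT) triangulation sketch, assuming calibrated 3x4 projection matrices are available for both video channels; this is an illustrative assumption, not the project's pipeline.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from its image
    coordinates x1, x2 in two views with 3x4 projection matrices P1, P2."""
    # Each view contributes two linear constraints on the homogeneous point X.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The smallest right singular vector of A is the homogeneous solution.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

Triangulating the registered landmarks in both channels yields the depth information needed to render a consistent AR overlay on each video stream.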

In a further step, this system will be integrated into a surgical robot dedicated to ear surgery (Robotol, Collin SA, Bagneux, France). A collaboration has been initiated with this industrial partner.
