Real-time intraoperative delineation of cancer and non-cancer brain tissues, especially in the eloquent cortex, is critical for thorough cancer resection, lengthening survival, and improving quality of life. Prior studies have established that thresholding optical attenuation values reveals cancer regions with high sensitivity and specificity. However, thresholding a single value disregards local information that is important for making more robust predictions. Hence, we propose deep convolutional neural networks (CNNs) trained on labeled optical coherence tomography (OCT) images and on co-occurrence matrix features extracted from these images, combining attenuation characteristics with texture features. Specifically, we adapt a deep ensemble model, training it on 5,831 examples from a dataset of 7 patients. We obtain 93.31% sensitivity and 97.04% specificity on a holdout set of 4 patients, without requiring beam-profile normalization against a reference phantom. The segmentation maps produced by parsing the OCT volume and tiling the outputs of our model are in excellent agreement with attenuation mapping-based methods. This approach has considerable implications for clinical translation.