Computer Science Department, MS Thesis Presentation, Xitong Zhang: vU-net: edge detection in time-lapse fluorescence live cell images based on convolutional neural networks

Monday, April 16, 2018 to Tuesday, April 17, 2018
11:00 am to 12:00 pm

Advisor: Prof. Kwonmoo Lee (Biomedical Engineering)

Co-advisor: Prof. Dmitry Korkin (Computer Science)

Reader: Prof. Carolina Ruiz (Computer Science)


Time-lapse fluorescence live cell imaging has been widely used to study various dynamic processes in cell biology. As the initial step of image analysis, it is important to localize and segment cell edges with high accuracy. However, compared with immunofluorescence images, fluorescence live-cell images typically suffer from low contrast, noise, and uneven illumination. Deep convolutional neural networks, which learn features directly from training images, have been applied successfully to natural image analysis problems.

We propose a novel framework, vU-net, which combines the advantages of VGG-16 in feature extraction and U-net in feature reconstruction. Moreover, we design an auxiliary convolutional block at the end of the architecture to enhance edge detection. We evaluate our framework using the dice coefficient and the distance between the predicted edge and the ground truth on high-resolution image datasets of an adhesion marker, paxillin, acquired by a Total Internal Reflection Fluorescence (TIRF) microscope. Our results on challenging datasets demonstrate that: (i) the testing dice coefficient of vU-net is 3.2% higher than that of U-net with the same number of training images; (ii) vU-net achieves the best prediction results of U-net with one third of the training images U-net requires; (iii) vU-net produces more robust predictions than U-net. Therefore, vU-net can be applied more practically to challenging live cell movies than U-net, since it requires smaller training sets and achieves accurate segmentation.
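The dice coefficient mentioned above measures overlap between a predicted segmentation mask and the ground truth. A minimal NumPy sketch (the function name and the toy 4x4 masks are illustrative, not from the thesis) could look like:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks: 2*|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    # eps guards against division by zero when both masks are empty
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

# Toy example: two 4x4 binary edge masks that partially overlap
a = np.array([[1, 1, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 0]])
b = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, 1],
              [0, 0, 0, 0]])
print(round(dice_coefficient(a, b), 3))  # → 0.75
```

A value of 1.0 indicates perfect overlap and 0.0 indicates none, which is why a 3.2% improvement in the dice coefficient corresponds to a direct gain in segmentation overlap with the ground-truth cell edges.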