
WPI - Computer Science Department, PhD Dissertation Defense: Sarun Paisarnsrisomsuk, "Understanding Internal Feature Development in Deep Convolutional Neural Networks for Time Series"

Thursday, May 20, 2021
10:00 am to 11:00 am


Sarun Paisarnsrisomsuk
PhD Candidate
WPI – Computer Science 


Thursday, May 20th, 2021
Time: 10:00 am – 11:00 am

Zoom Meeting
https://wpi.zoom.us/j/96432063142?pwd=Wk1xQloyYXk2VDA1UlhmMVN3dkhNQT09
Meeting ID: 964 3206 3142
Passcode: Sarun

PhD Committee:
Prof. Carolina Ruiz, Advisor, WPI – Computer Science 
Prof. Sergio Alvarez, Co-advisor and external member, Boston College
Prof. Xiangnan Kong, WPI – Computer Science 
Prof. Jake Whitehill, WPI – Computer Science 

Abstract:

Taken as a black box, a neural network applies a complex nonlinear function to its input to produce an output. This complex nonlinear function is determined by the network's internal weights. During the network's training process, these weights are adjusted based on a sample of input-output pairs so that the difference between the network's actual outputs and desired outputs is minimized. Deep neural networks have been employed very successfully as prediction models in many applications. A major remaining challenge is to understand how the internal adaptations that result from learning enable such models to make accurate predictions. 
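As an illustration only (not part of the dissertation), the following Python sketch shows this weight-adjustment process in its simplest form: a single linear layer whose weights are updated by gradient descent so that the mean squared difference between its actual outputs and the desired outputs shrinks over a sample of input-output pairs. All data and parameters are invented for the example.

import numpy as np

# Minimal sketch of training by weight adjustment (not the dissertation's model):
# a single linear layer fit to a sample of input-output pairs by gradient descent.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                 # 100 input examples, 3 features each
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=100)   # desired outputs

w = np.zeros(3)                               # the network's internal weights
lr = 0.1
for step in range(200):
    pred = X @ w                              # actual outputs
    grad = 2 * X.T @ (pred - y) / len(y)      # gradient of the mean squared error
    w -= lr * grad                            # adjust weights to reduce the error

print(w)                                      # close to true_w after training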

In this research, we propose several methods for interpreting the learned internal representations in neural networks in the context of time-series signal classification. Our two main interpretation goals are: (1) Feature Interpretation, to understand what features of the inputs are learned by the network's internal units; and (2) Feature Development, to visualize how the ability of individual network layers to differentiate among classes varies with the depth of the layer within the network.
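As a rough illustration of the Feature Development goal (not the dissertation's actual method), one simple way to visualize how class-differentiation ability varies with layer depth is to extract the activations produced at each layer and fit a simple linear probe on them; rising probe accuracy with depth would indicate that deeper layers separate the classes better. The toy model, synthetic data, and layer sizes below are assumptions made for the example, and in practice the probe would be applied to a trained network and evaluated on held-out data.

import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression

# Illustrative layer-by-layer probing sketch (hypothetical model and data).
torch.manual_seed(0)

# Toy data: 1D "signals" of length 64 from 3 synthetic classes.
n, length, n_classes = 300, 64, 3
y = torch.randint(0, n_classes, (n,))
x = torch.randn(n, 1, length) + y.view(-1, 1, 1).float()

layers = nn.ModuleList([
    nn.Sequential(nn.Conv1d(1, 8, 5, padding=2), nn.ReLU(), nn.MaxPool1d(2)),
    nn.Sequential(nn.Conv1d(8, 16, 5, padding=2), nn.ReLU(), nn.MaxPool1d(2)),
    nn.Sequential(nn.Conv1d(16, 32, 5, padding=2), nn.ReLU(), nn.MaxPool1d(2)),
])

with torch.no_grad():
    h = x
    for depth, layer in enumerate(layers, start=1):
        h = layer(h)
        feats = h.flatten(1).numpy()              # activations at this depth
        probe = LogisticRegression(max_iter=1000).fit(feats, y.numpy())
        acc = probe.score(feats, y.numpy())       # training accuracy of the probe;
                                                  # a held-out split would be used in practice
        print(f"layer {depth}: probe accuracy {acc:.2f}")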

To evaluate the proposed methods, we develop neural networks for sleep stage classification. The networks take as inputs physiological signals of human subjects during sleep and map these multidimensional time series to sequences of symbols corresponding to different sleep stages. Our results demonstrate that our techniques succeed in our aims of Feature Interpretation and Feature Development. We show that the networks' internal units can learn features that closely resemble those used by human sleep experts in the traditional sleep stage scoring process, such as sleep spindles, K-complexes, and slow waves. Furthermore, our results describe the development of these features with layer depth, showing that the network assembles them gradually, extracting simple building blocks in shallow layers and combining them in deeper layers to form more complex features. Additionally, we observe that the ability of the network layers to differentiate among sleep stages increases with the depth of the layer within the network.
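For concreteness, the sketch below shows in outline what such a classifier could look like: a small 1D convolutional network (PyTorch) that maps one window of a physiological signal to one of the five standard sleep stages (Wake, N1, N2, N3, REM). The window length, channel counts, and layer sizes are illustrative assumptions, not the architecture used in the dissertation.

import torch
import torch.nn as nn

# Hypothetical 1D CNN for sleep stage classification (illustrative only).
class SleepStageCNN(nn.Module):
    def __init__(self, in_channels=1, n_stages=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),            # collapse the time axis
        )
        self.classifier = nn.Linear(64, n_stages)

    def forward(self, x):                       # x: (batch, channels, time)
        h = self.features(x).squeeze(-1)        # (batch, 64)
        return self.classifier(h)               # per-stage scores

# Example: a batch of 8 single-channel 30-second epochs sampled at 100 Hz.
model = SleepStageCNN()
epochs = torch.randn(8, 1, 3000)
print(model(epochs).shape)                      # torch.Size([8, 5])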
