Data Science | MS Thesis Presentation | Harsh Nilesh Pathak | 'Parameter Continuation with Secant Approximation for Deep Neural Networks.'

Monday, December 03, 2018
9:00 am
Floor/Room #: Beckett Conference Room, 2nd Floor


Master of Science Thesis Presentation
Harsh Nilesh Pathak

Advisor: Professor Randy Paffenroth
Reader: Professor Kyumin Lee

Title:
Parameter Continuation with Secant Approximation for Deep Neural Networks

Abstract:
Non-convex optimization of deep neural networks is a well-researched problem. We present a novel application of continuation methods to deep learning optimization that can potentially reach better local minima. In our method, we first decompose the original optimization problem into a sequence of problems using a homotopy method. To achieve this in neural networks, we derive the Continuous (C)-Activation function, a homotopic formulation of existing activation functions such as Sigmoid, ReLU, or Tanh. We then apply a method that is standard in the parameter continuation literature but, to the best of our knowledge, novel in deep learning: Natural Parameter Continuation with Secant approximation (NPCS), an effective training strategy that may find a superior local minimum for a non-convex optimization problem. Additionally, we extend our work on Step-up GANs, a data continuation approach, by deriving Continuous (C)-SMOTE, an extension of standard oversampling algorithms. We demonstrate the improvements achieved by our methods and provide a categorization of recent work on continuation methods in the context of deep learning.
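
To give a rough sense of the two ideas named in the abstract, the short Python sketch below illustrates (1) a homotopic activation that deforms a linear map into a nonlinearity as the continuation parameter goes from 0 to 1, and (2) a standard secant predictor that extrapolates the solution to the next continuation step before refinement. The convex-combination form of the homotopy, the toy surrogate loss, and all function names are illustrative assumptions for this announcement, not the thesis implementation.

import numpy as np

def c_activation(x, lam, nonlinearity=np.tanh):
    # Assumed homotopy: identity map at lam = 0, the chosen nonlinearity at lam = 1.
    return (1.0 - lam) * x + lam * nonlinearity(x)

def secant_predict(theta_prev, theta_curr, lam_prev, lam_curr, lam_next):
    # Standard secant extrapolation of the solution toward the next continuation step.
    step = (lam_next - lam_curr) / (lam_curr - lam_prev)
    return theta_curr + step * (theta_curr - theta_prev)

def refine(theta, lam, lr=0.1, iters=20):
    # Stand-in for SGD on the network loss: a few gradient steps on the toy
    # surrogate loss (c_activation(theta, lam) - 1)^2 for a scalar "weight" theta.
    for _ in range(iters):
        grad = 2.0 * (c_activation(theta, lam) - 1.0) * (
            (1.0 - lam) + lam * (1.0 - np.tanh(theta) ** 2)
        )
        theta -= lr * grad
    return theta

# Toy continuation run: solve at the first two lambda values, then predict each
# subsequent solution with the secant step and refine it.
lams = np.linspace(0.0, 1.0, 6)
thetas = [refine(0.5, lams[0])]
thetas.append(refine(thetas[0], lams[1]))
for k in range(1, len(lams) - 1):
    guess = secant_predict(thetas[k - 1], thetas[k], lams[k - 1], lams[k], lams[k + 1])
    thetas.append(refine(guess, lams[k + 1]))
print(thetas[-1])
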