Data Science Ph.D. Research Qualifying Exam
Matthew Liam Weiss
January 11, 2018 || Fuller Laboratories 141 || 1:00 p.m.
DEEP LEARNING FOR OPTIMAL FILTERING
Filtering is the process of recovering a signal, x(t), from noisy measurements, z(t). One common filter is the Kalman Filter, which is provably the optimal unbiased linear filter under certain conditions. One such condition is that the noise present in z(t), obscuring x(t), is a zero-mean Gaussian process with known covariance R. However, in practice determining the actual R is problematic, and R is often tuned using cross-validation and domain knowledge. In this research we approach the problem differently. Instead of tuning R based on cross-validation and Kalman Filter performance, we fix R and learn a transformation of the input data that improves Kalman Filter performance. As this transformation needs to receive feedback from the Kalman Filter, an artificial neural network is an ideal candidate. Specifically, the transformation is performed by an autoencoder: we first encode z(t), use the encoded data as input to a Kalman Filter, and then decode the Kalman Filter's output. Using simulated data, we show that the Kalman Filter's estimate of x(t) is more accurate when accompanied by an autoencoder. A preliminary result of this autoencoder/Kalman Filter hybrid is that it is not necessary to know R a priori or to estimate it experimentally.
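The encode-filter-decode pipeline described above can be illustrated with a minimal sketch. The example below is not the research implementation: it uses a scalar Kalman filter on a simulated near-constant signal, and a fixed linear scaling stands in for the learned autoencoder (in the actual work the encoder/decoder weights would be trained with feedback from the filter). All function names and parameter values here are illustrative assumptions.

```python
import numpy as np

def kalman_filter_1d(z, R, Q=1e-4):
    """Scalar Kalman filter for a random-walk state model.
    R: assumed measurement-noise variance (fixed, per the abstract);
    Q: process-noise variance."""
    x_hat = z[0]   # initial state estimate
    P = 1.0        # initial estimate covariance
    out = []
    for zk in z:
        # predict step: x_k = x_{k-1} + w,  w ~ N(0, Q)
        P = P + Q
        # update step with measurement zk
        K = P / (P + R)                   # Kalman gain
        x_hat = x_hat + K * (zk - x_hat)  # corrected estimate
        P = (1.0 - K) * P
        out.append(x_hat)
    return np.array(out)

# Hypothetical stand-in for the learned autoencoder: a fixed linear
# encode/decode pair, used only to show the data flow of the hybrid.
encode = lambda z: 0.5 * z
decode = lambda x: 2.0 * x

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
x_true = np.ones_like(t)                          # true signal x(t)
z = x_true + rng.normal(0.0, 0.5, size=t.shape)   # noisy measurements z(t)

# encode z(t) -> Kalman filter with fixed R -> decode the filter output
x_est = decode(kalman_filter_1d(encode(z), R=0.25 * 0.5**2))

rmse_raw = np.sqrt(np.mean((z - x_true) ** 2))
rmse_est = np.sqrt(np.mean((x_est - x_true) ** 2))
```

With the filter in the loop, the decoded estimate tracks x(t) far more closely than the raw measurements (rmse_est well below rmse_raw on this simulated data), mirroring the accuracy improvement the abstract reports for the autoencoder/Kalman Filter hybrid.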
Advisor: Professor Randy C. Paffenroth, Ph.D.
Co-advisor: Professor Jacob R. Whitehill, Ph.D.
Reader: Professor Xiangnan Kong, Ph.D.