Data Science Ph.D. Dissertation Defense | Abdulaziz Alajaji

Thursday, April 20, 2023
12:00 pm to 2:00 pm


Floor/Room #
Mid-Century Room, 3rd floor


DATA SCIENCE 

Ph.D. Dissertation Defense 

Ph.D. Candidate: Abdulaziz Alajaji

April 20, 2023; 12:00 PM – 2:00 PM

Mid-Century Room (3rd floor), Campus Center


Advisor: Prof. Emmanuel Agu 

Co-Advisor: Prof. Elke Rundensteiner 

Committee Member: Prof. Nima Kordzadeh 

Committee Member: Dr. Akhil Mathur 

Title: Robust Representation Learning for Context Recognition on Weakly-supervised Mobile Sensed Data with Covariate-shifts 

Abstract: Context-aware applications adapt their behavior based on the user’s current situation, targeting diverse domains including smart homes, assisted living, fitness tracking, military deployment, and mobile health. Human Context Recognition (HCR), the task of detecting a user’s current situation, including their activity, location, and other semantic information, is a fundamental problem in context-aware applications. Smartphone HCR datasets for supervised machine learning are gathered using one of two study designs: 1) scripted studies, in which users visit pre-planned contexts following a script, yield high-quality data with strong context labels but are unrealistic; 2) in-the-wild, unscripted studies, in which data are gathered as subjects live their lives and context labels are provided periodically, yield realistic datasets but are frequently imbalanced, with missing or wrong labels. Moreover, in-the-wild study designs are more vulnerable to attacks; for instance, adversaries can send modified data samples that mislead the HCR model into making wrong predictions.

This dissertation presents HCR research that utilizes neural networks for HCR representation learning to facilitate robust HCR on single datasets, enhance HCR robustness to distributional shifts between multiple HCR datasets, and mitigate perturbations maliciously introduced by adversaries. DeepContext, a novel neural network proposed for HCR, utilizes joint learning with a parameterized compatibility-based attention mechanism to focus on the most predictive parts of sensor data. Two lab-to-field methods are proposed that learn a robust HCR model from a scripted dataset with strong labels to improve performance on a weakly supervised, in-the-wild dataset with similar context labels. Covariate shifts between the scripted and in-the-wild context datasets present a challenge to such lab-to-field methods. Positive Unlabeled Context Learning (PUCL) uses transductive learning with inaccurate supervision to address erroneous labels, while Triple-DARE uses a domain adaptation approach under incomplete supervision to utilize unlabeled, in-the-wild data. Finally, an adversarial HCR approach that learns a robust representation is proposed. This dissertation focuses on black-box evasion attacks, which generate input samples with minor changes that result in high-confidence misclassifications. We propose RobustHCR, which uses a duality-based method to improve neural network robustness, making it provably resilient to norm-bounded perturbations. Results generated using a rigorous experimental plan are presented.
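For readers curious about the compatibility-based attention idea mentioned in the abstract, the minimal PyTorch sketch below shows one common way such an attention layer can be wired over windowed smartphone sensor data: a learned compatibility score weights each timestep's features before classification. This is an illustrative sketch only, not the dissertation's DeepContext implementation; the sensor channel count, layer sizes, number of context classes, and window length are all assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CompatibilityAttentionHCR(nn.Module):
    """Illustrative HCR classifier with compatibility-based attention (not DeepContext)."""

    def __init__(self, n_channels=6, hidden=64, n_contexts=5):
        super().__init__()
        # 1-D conv encoder turns raw sensor windows into per-timestep features.
        self.encoder = nn.Sequential(
            nn.Conv1d(n_channels, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        # Parameterized compatibility: a learned vector scores how well each local
        # feature (combined with a global summary) supports the prediction.
        self.compat = nn.Linear(hidden, 1, bias=False)
        self.classifier = nn.Linear(hidden, n_contexts)

    def forward(self, x):
        # x: (batch, channels, time), e.g. accelerometer + gyroscope windows.
        local = self.encoder(x)                      # (batch, hidden, time)
        local = local.transpose(1, 2)                # (batch, time, hidden)
        g = local.mean(dim=1, keepdim=True)          # global summary (batch, 1, hidden)
        scores = self.compat(local + g).squeeze(-1)  # compatibility score per timestep
        attn = F.softmax(scores, dim=1)              # attention weights over time
        pooled = (attn.unsqueeze(-1) * local).sum(dim=1)  # attended representation
        return self.classifier(pooled), attn

# Usage on a dummy batch of 6-channel, 250-sample sensor windows (assumed sizes):
model = CompatibilityAttentionHCR()
logits, attn = model(torch.randn(8, 6, 250))
print(logits.shape, attn.shape)  # torch.Size([8, 5]) torch.Size([8, 250])

The returned attention weights indicate which parts of the sensor window the model focused on, which is the interpretability benefit that attention-based HCR architectures aim for.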


DEPARTMENT(S):

Data Science
Contact Person
Kelsey Briggs
