During a three-month internship at NXP in the summer of 2021, I worked on outlier detection with models such as the Isolation Forest and the Mahalanobis distance. I discovered that most anomaly detection algorithms can be viewed as similarity estimators, whether they measure similarity by density, distance, angle, or hyperplane partitioning. This realization sparked my interest in deep learning (DL) algorithms.
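To make the idea concrete, here is a minimal sketch (my own illustration on synthetic data, not code from the internship): both detectors flag the same obvious outliers, one by random hyperplane partitioning, the other by covariance-scaled distance from the mean.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
inliers = rng.normal(size=(500, 3))            # points near the origin
outliers = rng.normal(6.0, 1.0, size=(5, 3))   # points far from the bulk
data = np.vstack([inliers, outliers])

# Isolation Forest: outliers need fewer random axis-aligned splits to isolate.
iso = IsolationForest(random_state=0).fit(data)
iso_labels = iso.predict(data)                 # -1 = outlier, 1 = inlier

# Mahalanobis distance: distance from the mean, scaled by the covariance.
mu = data.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(data, rowvar=False))
diff = data - mu
d = np.sqrt(np.einsum("ij,jk,ik->i", diff, cov_inv, diff))
maha_labels = np.where(d > 3.0, -1, 1)         # flag points beyond ~3 "sigmas"
```

Despite their different mechanics, both methods end up estimating how dissimilar a point is from the rest of the data, which is the similarity-estimator view described above.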
In the summer of 2022, I joined a COVID-19 misinformation detection project, where I compared the performance of pretrained language models with custom-trained networks. The experience showed me that machine learning models can capture patterns of human language, but it also raised my concerns about their trustworthiness: they often excel on one dataset yet fail to generalize to others.
My concerns stem from:
- The complex interactions among hyperparameters, which make tuning challenging.
- The difficulty in controlling and explaining models, which impedes error diagnosis.
- Models potentially focusing on irrelevant input features, which hampers trust in their predictions.
This led me to explore the following research areas:
- Understanding what knowledge neural networks learn: While working with Professors Bonnie Webber and Lori Levin, I came to appreciate the gap between traditional linguistic knowledge and modern language models. I became interested in probing tasks in NLP and in the potential benefits of incorporating linguistic knowledge into models. To expand my linguistic knowledge, I enrolled in two additional NLP courses.
- Exploring neural scene representations: learned representations that let autonomous agents reason about the world from visual observations in computer vision (CV) tasks.
- Investigating causality for ML: Inspired by Judea Pearl’s work, I recognized causality’s power in addressing out-of-distribution problems and improving model interpretability. Causality has shown success in NLP and CV tasks, promoting fairer and more explainable models. To prepare for this research area, I took courses in Gaussian and Bayesian machine learning, and Machine Learning Theory.
As part of my ongoing efforts, I am researching the role of causality in structured prediction models for my bachelor's thesis, and I plan to pursue further studies to delve deeper into these topics.
