Research
I'm interested in interpretability for diagnosing failures of robustness in ML. I research robust scaling: 1) synthesising and curating better datasets; 2) designing sample-efficient, logic-symbolic models; 3) guaranteeing post-hoc fairness and privacy.
On the Generalization of Gradient-based Neural Network Interpretations
Ching Lam Choi, Farzan Farnia
Preprint, 2023 [cite]
Gradient-based XAI fails to generalise from training to test data; regularisation crucially helps.
Universal Adversarial Directions
Ching Lam Choi, Farzan Farnia
Preprint, 2022 [cite]
UAD formulates adversarial examples as a game admitting a pure Nash equilibrium, with better transferability across classifiers.
Self-distillation with Batch Knowledge Ensembling Improves ImageNet Classification
Yixiao Ge, Xiao Zhang, Ching Lam Choi, Ka Chun Cheung, Peipei Zhao, Feng Zhu, Xiaogang Wang, Rui Zhao, Hongsheng Li
Preprint, 2021 [cite]
BAKE distils a network's self-knowledge by propagating sample similarities within each batch.
DivCo: Diverse Conditional Image Synthesis via Contrastive Generative Adversarial Network
Rui Liu, Yixiao Ge, Ching Lam Choi, Hongsheng Li
CVPR, 2021 [cite]
DivCo uses contrastive learning to reduce mode collapse in conditional GANs.