Ching Lam CHOI



I'm an incoming MIT EECS PhD student at CSAIL, co-supervised by Phillip Isola, Antonio Torralba and Stefanie Jegelka.

During my bachelor's at CUHK, I researched with wonderful people: Aaron Courville and Yann Dauphin at Mila; Wieland Brendel at the Max Planck Institute in Tübingen; Serge Belongie at the Pioneer Centre in Copenhagen; and Hongsheng Li (MMLab), Anthony Man-Cho So, Farzan Farnia and Qi Dou at CUHK.


CV  /  Bio [WIP]  /  Coffee-hours  /  Blog [WIP]

profile photo

Research

I'm interested in interpretability for diagnosing failures of robustness in ML. My research focuses on robust scaling: 1) better synthesis and curation of datasets; 2) designing sample-efficient, logic-symbolic models; 3) guaranteeing post-hoc fairness and privacy.

On the Generalization of Gradient-based Neural Network Interpretations
Ching Lam Choi, Farzan Farnia
Preprint, 2023   [cite]

Gradient-based interpretations fail to generalise from training to test data; regularisation crucially helps.

Universal Adversarial Directions
Ching Lam Choi, Farzan Farnia
Preprint, 2022   [cite]

UAD casts adversarial examples as a game that admits a pure Nash equilibrium and achieves better transferability.

Self-distillation with Batch Knowledge Ensembling Improves ImageNet Classification
Yixiao Ge, Xiao Zhang, Ching Lam Choi, Ka Chun Cheung, Peipei Zhao, Feng Zhu, Xiaogang Wang, Rui Zhao, Hongsheng Li
Preprint, 2021   [cite]

BAKE distils a network's self-knowledge by propagating sample similarities within each batch.

DivCo: Diverse Conditional Image Synthesis via Contrastive Generative Adversarial Network
Rui Liu, Yixiao Ge, Ching Lam Choi, Hongsheng Li
CVPR, 2021   [cite]

DivCo uses contrastive learning to reduce mode collapse in GANs.

News, Talks, Events

[Talk 27.05.2024]   "(Un)learning to Explain" at Krueger AI Safety Lab, hosted by David Krueger.

[Talk 22.03.2024]   "Robust Scaling: Trustworthy Data" at CleverHans Lab, hosted by Nicolas Papernot.

[Talk 08.05.2023]   "Robustness Transfer in Distillation" at Brendel & Bethge joint lab meeting, hosted by Wieland Brendel.

[Talk 16.02.2022]   "Federated learning and the Lottery Ticket Hypothesis" at ML Collective lab meeting, hosted by Rosanne Liu.

[Talk 31.07.2020]   "Julia Track Google Code In and Beyond" at JuliaCon 2020.

[Talk 24.07.2020]   "Corona-Net: Fighting COVID-19 With Computer Vision" at EuroPython 2020.

[Talk 13.06.2020]   "Julia – Looks like Python, feels like Lisp, runs like C/Fortran" at Hong Kong Open Source Conference 2020.

[Co-organiser]   Causality and Large Models Workshop @NeurIPS 2024

[Co-organiser]   New in ML Workshop @NeurIPS 2024

[Co-organiser]   New in ML Workshop @NeurIPS 2023

[Co-organiser]   CoSubmitting Summer Workshop @ICLR 2022

[Co-organiser]   Undergraduates in Computer Vision Social @ICCV 2021

[Reviewer]   CVPR 2023, 2024; ICCV 2023; NeurIPS 2023, 2024; ICLR 2024, 2025; ICML 2024; ECCV 2024; ACCV 2024; AAAI 2025

Website design credits to Jon Barron.