Research Interests
My research centers on distributed and federated learning. I focus on understanding how statistical heterogeneity and system-level constraints (including privacy requirements and limited communication) affect algorithm performance. Building on these insights, I develop efficient, practical algorithms for large-scale distributed and federated systems.
Federated Learning
Federated learning enables collaborative model training across multiple devices or organizations while keeping data decentralized and private. Our research focuses on communication-efficient and privacy-preserving algorithms that support large-scale collaboration without compromising performance. We leverage adaptive first-order and zeroth-order optimization to handle limited bandwidth, partial client participation, and heterogeneous data distributions. A minimal sketch of the shared-seed zeroth-order idea appears after the publications below.
- Zhe Li, Bicheng Ying, Zidong Liu, Chaosheng Dong, and Haibo Yang, Achieving Dimension-Free Communication in Federated Learning via Zeroth-Order Optimization, ICLR, 2025.
- Haibo Yang, Xin Zhang, Prashant Khanduri, and Jia Liu, Anarchic Federated Learning, ICML, 2022 (Long Presentation).
- Haibo Yang, Peiwen Qiu, and Jia Liu, Taming Fat-Tailed Noise in Federated Learning, NeurIPS, 2022.
- Haibo Yang, Minghong Fang, and Jia Liu, Achieving Linear Speedup with Partial Worker Participation in Non-IID Federated Learning, ICLR, 2021.
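To make the zeroth-order idea concrete, here is a minimal sketch, assuming each client and the server agree on a pseudorandom seed per round; the names (`zo_client_step`, `server_update`, the toy `quadratic_loss`) are hypothetical, and this illustrates the generic two-point estimator rather than the full method from the ICLR 2025 paper. Because the server can regenerate each perturbation direction from its seed, a client uploads a single scalar instead of a d-dimensional gradient, which is what makes the communication dimension-free.

```python
import numpy as np

def quadratic_loss(w, A, b):
    """Toy local objective: 0.5 * ||A w - b||^2 (stand-in for a client's loss)."""
    r = A @ w - b
    return 0.5 * float(r @ r)

def zo_client_step(w, loss, seed, mu=1e-4):
    """Two-point zeroth-order estimate along a shared-seed direction.

    The server can regenerate `u` from `seed`, so the client only needs
    to transmit the scalar `g`: communication cost is independent of d.
    """
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(w.shape)                  # perturbation direction
    g = (loss(w + mu * u) - loss(w - mu * u)) / (2.0 * mu)
    return g                                          # single scalar to upload

def server_update(w, scalars_and_seeds, lr=0.05):
    """Rebuild each direction from its seed and average the resulting updates."""
    step = np.zeros_like(w)
    for g, seed in scalars_and_seeds:
        u = np.random.default_rng(seed).standard_normal(w.shape)
        step += g * u
    return w - lr * step / len(scalars_and_seeds)

# Toy rounds with 3 clients sharing the same model.
rng = np.random.default_rng(0)
d = 10
w = rng.standard_normal(d)
clients = [(rng.standard_normal((20, d)), rng.standard_normal(20)) for _ in range(3)]

for rnd in range(200):
    msgs = []
    for i, (A, b) in enumerate(clients):
        seed = 1000 * rnd + i                         # seed agreed with the server
        msgs.append((zo_client_step(w, lambda v: quadratic_loss(v, A, b), seed), seed))
    w = server_update(w, msgs)
```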
Multi-Objective Learning
Multi-objective learning focuses on optimizing multiple, often competing, objectives simultaneously. We develop algorithms that balance trade-offs, adapt to conflicting goals, and achieve robust performance across applications such as healthcare, autonomous systems, recommendation engines, and resource-constrained environments. A sketch of a common-descent-direction update appears after the publications below.
- Haibo Yang, Zhuqing Liu, Jia Liu, Chaosheng Dong, and Michinari Momma, Federated Multi-Objective Learning, NeurIPS, 2023.
- Haibo Yang, Zhuqing Liu, Jia Liu, and Chaosheng Dong, Finite-Time Convergence and Sample Complexity of Actor-Critic Multi-Objective Reinforcement Learning, ICML, 2024.
- Mingjing Xu, Peizhong Ju, Jia Liu, and Haibo Yang, PSMGD: Periodic Stochastic Multi-Gradient Descent for Fast Multi-Objective Optimization, AAAI, 2025.
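As an illustration of how a common descent direction can be found, the sketch below uses the classical min-norm (MGDA-style) weighting, which has a closed form for two objectives; the names (`min_norm_weight`, `common_descent_step`) and the toy quadratics are hypothetical, and this is the generic idea rather than the PSMGD or federated algorithms from the papers above.

```python
import numpy as np

def min_norm_weight(g1, g2):
    """Closed-form MGDA weight for two objectives: the lambda in [0, 1]
    minimizing || lambda * g1 + (1 - lambda) * g2 ||^2."""
    diff = g1 - g2
    denom = float(diff @ diff)
    if denom == 0.0:                     # gradients coincide; any weight works
        return 0.5
    lam = float((g2 - g1) @ g2) / denom
    return min(1.0, max(0.0, lam))

def common_descent_step(w, grad_fns, lr=0.1):
    """One multi-gradient step: move along the min-norm convex combination,
    which decreases both losses unless w is already Pareto-stationary."""
    g1, g2 = (g(w) for g in grad_fns)
    lam = min_norm_weight(g1, g2)
    d = lam * g1 + (1.0 - lam) * g2
    return w - lr * d

# Toy example: two quadratics with different minimizers.
grad_f1 = lambda w: w - np.array([1.0, 0.0])   # objective 1 minimized at (1, 0)
grad_f2 = lambda w: w - np.array([0.0, 1.0])   # objective 2 minimized at (0, 1)

w = np.zeros(2)
for _ in range(100):
    w = common_descent_step(w, (grad_f1, grad_f2))
# w approaches a Pareto-stationary point between the two minimizers
```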
Machine Learning Security
This research focuses on securing machine learning systems against malicious or faulty participants that can disrupt training, manipulate models, or degrade performance. We design robust aggregation rules and defense mechanisms that detect, tolerate, or mitigate adversarial behavior while preserving model accuracy, so that distributed and collaborative learning systems remain resilient, trustworthy, and reliable. A sketch of coordinate-wise median aggregation appears after the publications below.
- Minghong Fang, Seyedsina Nabavirazavi, Zhuqing Liu, Wei Sun, Sundararaja Sitharama Iyengar, and Haibo Yang, Do We Really Need to Design New Byzantine-Robust Aggregation Rules?, NDSS, 2025.
- Haibo Yang, Xin Zhang, Minghong Fang, and Jia Liu, Byzantine-Resilient Stochastic Gradient Descent for Distributed Learning: A Lipschitz-Inspired Coordinate-wise Median Approach, IEEE CDC, 2019.
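As a minimal illustration of robust aggregation, the sketch below uses plain coordinate-wise median, the building block that the CDC 2019 paper refines with a Lipschitz-inspired selection rule; the function name and the toy attack values are hypothetical.

```python
import numpy as np

def coordinate_wise_median(worker_grads):
    """Aggregate worker gradients by taking the median of each coordinate
    independently; a minority of arbitrarily corrupted workers cannot pull
    any coordinate past the values reported by the honest majority."""
    return np.median(np.stack(worker_grads), axis=0)

# 5 honest workers report noisy copies of the true gradient;
# 2 Byzantine workers report arbitrary extreme values.
rng = np.random.default_rng(0)
true_grad = np.array([1.0, -2.0, 0.5])
honest = [true_grad + 0.1 * rng.standard_normal(3) for _ in range(5)]
byzantine = [np.full(3, 1e6), np.full(3, -1e6)]

mean_agg = np.mean(np.stack(honest + byzantine), axis=0)   # ruined by attackers
median_agg = coordinate_wise_median(honest + byzantine)    # stays near true_grad
print(mean_agg, median_agg)
```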