-
Linyi Li, Tao Xie, Bo Li
SoK: Certified Robustness for Deep Neural Networks
44th IEEE Symposium on Security and Privacy (SP 2023)
[Full Version]
[Conference Version]
[Slides]
[Code]
[Leaderboard]
[BibTex]
@inproceedings{li2023sok,
author={Linyi Li and Tao Xie and Bo Li},
title = {SoK: Certified Robustness for Deep Neural Networks},
booktitle = {44th {IEEE} Symposium on Security and Privacy, {SP} 2023, San Francisco, CA, USA, 22-26 May 2023},
publisher = {{IEEE}},
year = {2023},
}
Keywords:
certified ML
Summary
A comprehensive systematization of knowledge on DNN certified robustness, covering practical and theoretical implications, findings, main challenges, and future directions, accompanied by an open-source unified platform that evaluates 20+ representative approaches.
-
Linyi Li, Yuhao Zhang, Luyao Ren, Yingfei Xiong, Tao Xie
Reliability Assurance for Deep Neural Network Architectures Against Numerical Defects
45th IEEE/ACM International Conference on Software Engineering (ICSE 2023)
[Full Version]
[Conference Version]
[Slides]
[Code]
[BibTex]
@inproceedings{li2023reliability,
author={Linyi Li and Yuhao Zhang and Luyao Ren and Yingfei Xiong and Tao Xie},
title = {Reliability Assurance for Deep Neural Network Architectures Against Numerical Defects},
booktitle = {45th International Conference on Software Engineering, {ICSE} 2023, Melbourne, Australia, 14-20 May 2023},
publisher = {{IEEE/ACM}},
year = {2023},
}
Keywords:
certified ML
numerical reliability
Summary
An effective and efficient white-box framework for generic DNN architectures, named RANUM, that certifies numerical reliability (e.g., never outputting NaN or INF), generates failure-exhibiting system tests, and suggests fixes; RANUM is the first automated framework for the latter two tasks.
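A minimal illustrative sketch of the interval-style reasoning behind numerical-reliability certification (a toy stand-in with hypothetical helpers, not RANUM itself; see [Code] for the real framework): an operator is certified safe if the input interval reaching it provably avoids the region where it would emit NaN or INF.
import numpy as np

def interval_affine(lo, hi, w, b):
    # Propagate an interval through x -> w*x + b (scalar weight and bias).
    ends = (w * lo + b, w * hi + b)
    return min(ends), max(ends)

def interval_log(lo, hi):
    # log produces NaN for negative inputs and -INF at zero, so the input
    # interval must stay strictly positive to be certified safe.
    if lo <= 0.0:
        return None  # potential numerical defect
    return np.log(lo), np.log(hi)

lo, hi = interval_affine(0.1, 2.0, w=0.5, b=0.2)  # input x in [0.1, 2.0]
out = interval_log(lo, hi)
if out is None:
    print("potential NaN/INF reachable")
else:
    print("certified log-safe, output interval:", out)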
-
Mintong Kang*, Linyi Li*, Maurice Weber, Yang Liu, Ce Zhang, Bo Li
Certifying Some Distributional Fairness with Subpopulation Decomposition
Advances in Neural Information Processing Systems (NeurIPS) 2022
[Full Version]
[Conference Version]
[Code]
[Poster]
[BibTex]
@inproceedings{kang2022certifying,
title = {Certifying Some Distributional Fairness with Subpopulation Decomposition},
author = {Mintong Kang and Linyi Li and Maurice Weber and Yang Liu and Ce Zhang and Bo Li},
booktitle = {Advances in Neural Information Processing Systems 35 (NeurIPS 2022)},
year = {2022}
}
Keywords:
certified ML
fairness
Summary
A practical and scalable certification approach, based on subpopulation decomposition, that bounds the fairness of a given model when the test distribution shifts from the training distribution.
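An illustrative toy sketch of the decomposition idea (all numbers hypothetical; this is not the paper's algorithm): once per-subpopulation loss bounds are known, a bound under a shifted mixture of the same subpopulations follows by reweighting the groups.
import numpy as np

# Hypothetical per-subpopulation loss bounds and mixture weights.
per_group_loss_upper = np.array([0.08, 0.15, 0.30])
train_weights = np.array([0.5, 0.3, 0.2])    # training mixture over groups
shifted_weights = np.array([0.3, 0.3, 0.4])  # a shifted test mixture

# Recombine the per-group bounds with each mixture's weights.
print("bound under training mixture:", float(train_weights @ per_group_loss_upper))
print("bound under shifted mixture: ", float(shifted_weights @ per_group_loss_upper))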
-
Linyi Li, Jiawei Zhang, Tao Xie, Bo Li
Double Sampling Randomized Smoothing
39th International Conference on Machine Learning (ICML 2022)
[Conference Version]
[Full Version]
[Code]
[BibTex]
@inproceedings{li2022double,
title={Double Sampling Randomized Smoothing},
author={Linyi Li and Jiawei Zhang and Tao Xie and Bo Li},
booktitle={39th International Conference on Machine Learning (ICML 2022)},
year={2022},
}
Keywords:
certified ML
Summary
A tighter certification approach for randomized smoothing that, for the first time, circumvents the well-known curse of dimensionality under mild conditions by leveraging statistics from two strategically chosen distributions.
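For context, a sketch of standard single-distribution randomized smoothing certification (Cohen et al., 2019), which DSRS tightens with statistics from a second, strategically chosen distribution; `classify` is a hypothetical base model, and the full algorithm selects the top class from a separate sample:
import numpy as np
from scipy.stats import beta, norm

def certify_l2_radius(classify, x, sigma=0.5, n=1000, alpha=0.001):
    # Monte Carlo estimate of the smoothed classifier's top-class probability.
    noisy = x + sigma * np.random.randn(n, *x.shape)
    votes = np.array([classify(z) for z in noisy])
    top = int(np.bincount(votes).argmax())
    k = int((votes == top).sum())
    # Clopper-Pearson lower confidence bound on P[classify(x + noise) = top].
    p_lower = beta.ppf(alpha, k, n - k + 1)
    if p_lower <= 0.5:
        return top, 0.0  # abstain: nothing certified
    return top, sigma * norm.ppf(p_lower)  # certified L2 radius

classify = lambda z: int(z.sum() > 0)  # toy base classifier
print(certify_l2_radius(classify, np.ones(10)))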
-
Wenda Chu, Linyi Li, Bo Li
TPC: Transformation-Specific Smoothing for Point Cloud Models
39th International Conference on Machine Learning (ICML 2022)
[Full Version]
[Code]
[BibTex]
@inproceedings{chu2022tpc,
title={TPC: Transformation-Specific Smoothing for Point Cloud Models},
author={Wenda Chu and Linyi Li and Bo Li},
booktitle={39th International Conference on Machine Learning (ICML 2022)},
year={2022},
}
Keywords:
certified ML
Summary
By extending the methodology for certifying image classifiers against transformations, we provide state-of-the-art certification algorithms for point cloud models, together with detailed analyses of point cloud transformations.
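A toy sketch of the prediction side of transformation-specific smoothing for point clouds (the rotation axis, noise scale, and `model` are assumptions for illustration; the certification itself is omitted):
import numpy as np

def rotate_z(points, theta):
    # Rotate an (N, 3) point cloud by angle theta around the z-axis.
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return points @ R.T

def smoothed_predict(model, points, sigma=0.1, n=500):
    # Majority vote over predictions on randomly rotated copies of the input.
    angles = sigma * np.random.randn(n)
    votes = np.bincount([model(rotate_z(points, t)) for t in angles])
    return int(votes.argmax())

model = lambda pc: int(pc[:, 0].mean() > 0)  # toy point cloud classifier
cloud = np.random.randn(1024, 3) + np.array([1.0, 0.0, 0.0])
print(smoothed_predict(model, cloud))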
-
Maurice Weber, Linyi Li, Boxin Wang, Zhikuan Zhao, Bo Li, Ce Zhang
Certifying Out-of-Domain Generalization for Blackbox Functions
39th International Conference on Machine Learning (ICML 2022)
[Conference Version]
[Full Version]
[Code]
[BibTex]
@inproceedings{weber2022certifying,
title={Certifying Out-of-Domain Generalization for Blackbox Functions},
author={Maurice Weber and Linyi Li and Boxin Wang and Zhikuan Zhao and Bo Li and Ce Zhang},
booktitle={39th International Conference on Machine Learning (ICML 2022)},
year={2022},
}
Keywords:
certified ML
Summary
A scalable certification algorithm for model generalization under distributional shift that makes no assumptions about the model's architecture, as long as the shift is bounded in Hellinger distance, a type of f-divergence. The core methodology builds on the positive semidefiniteness of the Gramian matrix.
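For reference, the Hellinger distance that bounds the admissible shift, shown here for discrete distributions (a standard definition, not code from the paper):
import numpy as np

def hellinger(p, q):
    # H(p, q) = (1/sqrt(2)) * || sqrt(p) - sqrt(q) ||_2, valued in [0, 1].
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.linalg.norm(np.sqrt(p) - np.sqrt(q)) / np.sqrt(2.0))

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])
print(hellinger(p, q))  # the shift is certifiable if this is within the bound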
-
Fan Wu*, Linyi Li*, Chejian Xu, Huan Zhang, Bhavya Kailkhura, Krishnaram Kenthapadi, Ding Zhao, Bo Li
COPA: Certifying Robust Policies for Offline Reinforcement Learning against Poisoning Attacks
10th International Conference on Learning Representations (ICLR 2022)
[Conference Version]
[Full Version]
[Leaderboard]
[Code]
[BibTex]
@inproceedings{wu2022copa,
title={{COPA}: Certifying Robust Policies for Offline Reinforcement Learning against Poisoning Attacks},
author={Fan Wu and Linyi Li and Chejian Xu and Huan Zhang and Bhavya Kailkhura and Krishnaram Kenthapadi and Ding Zhao and Bo Li},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=psh0oeMSBiF}
}
Keywords:
certified ML
deep reinforcement learning
Summary
The first approach for certifying deep RL robustness against perturbations of the offline training dataset (i.e., poisoning attacks), by aggregating policies trained on partitioned datasets and aggregating across multiple time steps.
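A loose sketch of the partition-and-aggregate ingredient (all names hypothetical; the paper's certification across multiple time steps is omitted): poisoning k records can affect at most k partitions, hence at most k votes.
import numpy as np

def partition(dataset, n_parts):
    # Split records into disjoint partitions by hashing each record.
    parts = [[] for _ in range(n_parts)]
    for record in dataset:
        parts[hash(record) % n_parts].append(record)
    return parts

def aggregated_action(policies, state):
    # Majority vote over per-partition policies; the vote margin bounds how
    # many poisoned records the chosen action tolerates.
    votes = np.bincount([p(state) for p in policies])
    top = int(votes.argmax())
    runner_up = int(np.partition(votes, -2)[-2]) if len(votes) > 1 else 0
    certified_k = max(0, (int(votes[top]) - runner_up - 1) // 2)
    return top, certified_k

parts = partition([("s0", 0, 1.0), ("s1", 2, 0.0), ("s0", 1, 0.5)], n_parts=2)
policies = [lambda s, b=b: (s + b) % 3 for b in range(50)]  # toy policies
print(aggregated_action(policies, state=1))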
-
Zhuolin Yang*, Linyi Li*, Xiaojun Xu, Bhavya Kailkhura, Tao Xie, Bo Li
On the Certified Robustness for Ensemble Models and Beyond
10th International Conference on Learning Representations (ICLR 2022)
[Conference Version]
[Full Version]
[Code]
[BibTex]
@inproceedings{yang2022on,
title={On the Certified Robustness for Ensemble Models and Beyond},
author={Zhuolin Yang and Linyi Li and Xiaojun Xu and Bhavya Kailkhura and Tao Xie and Bo Li},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=tUa4REjGjTf}
}
Keywords:
certified ML
Summary
Based on a curvature bound for randomized-smoothing-based classifiers, we prove that a large confidence margin and gradient diversity together form a sufficient and necessary condition for certifiably robust ensembles. By regularizing these two factors, we achieve SOTA L2 certified robustness.
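A toy sketch of the two regularized quantities (hypothetical toy models; not the paper's training code): each base model's confidence margin, and the cosine similarity between the base models' input gradients, where low similarity means high gradient diversity.
import torch

def margin_and_grad(model, x, y):
    # Confidence margin of the true class over the best other class, and its
    # gradient with respect to the input.
    logits = model(x)
    margin = logits[y] - logits[torch.arange(len(logits)) != y].max()
    (grad,) = torch.autograd.grad(margin, x, retain_graph=True)
    return margin, grad

f1, f2 = torch.nn.Linear(4, 3), torch.nn.Linear(4, 3)  # toy base models
x, y = torch.randn(4, requires_grad=True), 0

m1, g1 = margin_and_grad(f1, x, y)
m2, g2 = margin_and_grad(f2, x, y)
similarity = torch.nn.functional.cosine_similarity(g1, g2, dim=0)
print(m1.item(), m2.item(), similarity.item())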
-
Fan Wu, Linyi Li, Zijian Huang, Yevgeniy Vorobeychik, Ding Zhao, Bo Li
CROP: Certifying Robust Policies for Reinforcement Learning through Functional Smoothing
10th International Conference on Learning Representations (ICLR 2022)
[Conference Version]
[Full Version]
[Leaderboard]
[Code]
[BibTex]
@inproceedings{wu2022crop,
title={{CROP}: Certifying Robust Policies for Reinforcement Learning through Functional Smoothing},
author={Fan Wu and Linyi Li and Zijian Huang and Yevgeniy Vorobeychik and Ding Zhao and Bo Li},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=HOjLHrlZhmx}
}
Keywords:
certified ML
deep reinforcement learning
Summary
The first scalable approach for certifying deep RL robustness against state perturbations, by combining randomized smoothing with a set of trajectory-based search algorithms.
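A toy sketch of per-state policy smoothing (hypothetical names; CROP's trajectory-based search and cumulative-reward certification are omitted): the smoothed policy takes the majority action of the base policy over Gaussian-perturbed observations.
import numpy as np

def smoothed_action(base_policy, state, sigma=0.2, n=1000):
    # Majority vote of the base policy's actions on noisy copies of the state.
    noisy = state + sigma * np.random.randn(n, *state.shape)
    votes = np.bincount([base_policy(s) for s in noisy])
    return int(votes.argmax())

base_policy = lambda s: int(s.mean() > 0)  # toy deterministic policy
print(smoothed_action(base_policy, np.zeros(8) + 0.3))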
-
Zhuolin Yang*, Linyi Li*, Xiaojun Xu*, Shiliang Zuo, Qian Chen, Pan Zhou, Benjamin I. P. Rubinstein, Ce Zhang, Bo Li
TRS: Transferability Reduced Ensemble via Promoting Gradient Diversity and Model Smoothness
Advances in Neural Information Processing Systems (NeurIPS) 2021
[Conference Version]
[Full Version]
[Code]
[BibTex]
@inproceedings{yangli2021trs,
title = {TRS: Transferability Reduced Ensemble via Promoting Gradient Diversity and Model Smoothness},
author = {Zhuolin Yang and Linyi Li and Xiaojun Xu and Shiliang Zuo and Qian Chen and Pan Zhou and Benjamin I. P. Rubinstein and Ce Zhang and Bo Li},
booktitle = {Advances in Neural Information Processing Systems 34 (NeurIPS 2021)},
year = {2021}
}
Keywords:
robust ML
Summary
We prove a guaranteed correlation between model diversity and adversarial transferability given bounded model smoothness, which leads to a strong regularizer that achieves SOTA ensemble robustness against existing strong attacks.
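A loose sketch of a TRS-style regularizer (toy models and weights are hypothetical; see [Code] for the actual implementation): penalize alignment between base models' input gradients, and penalize gradient magnitude as a proxy for model smoothness.
import torch

def trs_regularizer(models, x, y, lam_sim=1.0, lam_smooth=0.1):
    loss_fn = torch.nn.CrossEntropyLoss()
    grads = []
    for m in models:
        # Per-example loss gradients w.r.t. the input, kept differentiable.
        (g,) = torch.autograd.grad(loss_fn(m(x), y), x, create_graph=True)
        grads.append(g.flatten(1))
    # (i) gradient alignment penalty and (ii) gradient magnitude penalty.
    sim = torch.nn.functional.cosine_similarity(grads[0], grads[1], dim=1)
    smooth = sum(g.norm(dim=1).mean() for g in grads)
    return lam_sim * sim.mean() + lam_smooth * smooth

models = [torch.nn.Linear(4, 3), torch.nn.Linear(4, 3)]  # toy base models
x = torch.randn(5, 4, requires_grad=True)
y = torch.randint(0, 3, (5,))
print(trs_regularizer(models, x, y).item())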
-
Linyi Li*, Maurice Weber*, Xiaojun Xu, Luka Rimanic, Bhavya Kailkhura, Tao Xie, Ce Zhang, Bo Li
TSS: Transformation-Specific Smoothing for Robustness Certification
ACM Conference on Computer and Communications Security (CCS) 2021
[Conference Version]
[Full Version]
[Code]
[Slides]
[BibTex]
@inproceedings{li2021tss,
title={TSS: Transformation-Specific Smoothing for Robustness Certification},
author={Linyi Li and Maurice Weber and Xiaojun Xu and Luka Rimanic and Bhavya Kailkhura and Tao Xie and Ce Zhang and Bo Li},
year={2021},
booktitle={ACM Conference on Computer and Communications Security (CCS 2021)}
}
Keywords:
certified ML
Summary
Natural transformations such as rotation and scaling are common in the physical world. We propose the first scalable certification approach against natural transformations, based on randomized smoothing, rigorous Lipschitz analysis, and stratified sampling. For the first time, we certify non-trivial robustness (>30% certified robust accuracy) on the large-scale ImageNet dataset.
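A toy sketch of the prediction side of smoothing over a rotation parameter (toy classifier; TSS's Lipschitz-based interpolation-error bounds and stratified sampling are omitted):
import numpy as np
from scipy.ndimage import rotate

def smoothed_rotation_predict(classify, image, sigma_deg=10.0, n=200):
    # Majority vote over copies of the image rotated by Gaussian angles.
    angles = sigma_deg * np.random.randn(n)
    votes = np.bincount(
        [classify(rotate(image, a, reshape=False)) for a in angles]
    )
    return int(votes.argmax())

classify = lambda img: int(img[:16].mean() > img[16:].mean())  # toy model
image = np.random.rand(32, 32)
print(smoothed_rotation_predict(classify, image))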
-
Linyi Li*, Zexuan Zhong*, Bo Li, Tao Xie
Robustra: Training Provable Robust Neural Networks over Reference Adversarial Space
International Joint Conference on Artificial Intelligence (IJCAI) 2019
[Paper]
[Code]
[BibTex]
@inproceedings{li2019robustra,
title = {Robustra: Training Provable Robust Neural Networks over Reference Adversarial Space},
author = {Li, Linyi and Zhong, Zexuan and Li, Bo and Xie, Tao},
booktitle = {Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI 2019)},
publisher = {International Joint Conferences on Artificial Intelligence Organization},
pages = {4711--4717},
year = {2019},
month = {7},
doi = {10.24963/ijcai.2019/654},
url = {https://doi.org/10.24963/ijcai.2019/654}
}
Keywords:
certified ML
Summary
We propose a training method that achieves higher certified robustness by regularizing only within the reference adversarial space of a jointly trained model, which alleviates the hardness of the optimization.
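A very loose first-order sketch of the joint-training idea (hypothetical names; the paper's actual method restricts a certified training objective to the reference model's adversarial polytope, not an FGSM step): each model is regularized on perturbations crafted against the other, jointly trained model.
import torch

def reference_perturbation(ref_model, x, y, eps=0.1):
    # FGSM direction on the reference model, standing in for a point from
    # the reference model's adversarial space.
    loss = torch.nn.functional.cross_entropy(ref_model(x), y)
    (g,) = torch.autograd.grad(loss, x)
    return eps * g.sign()

model_a = torch.nn.Linear(4, 3)  # toy jointly trained pair
model_b = torch.nn.Linear(4, 3)
x = torch.randn(5, 4, requires_grad=True)
y = torch.randint(0, 3, (5,))

# Regularize model_a on a perturbation from the reference model_b.
delta = reference_perturbation(model_b, x, y)
loss_a = torch.nn.functional.cross_entropy(model_a(x + delta), y)
print(loss_a.item())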