About
I am an Assistant Professor in the Department of Applied Mathematics and Statistics at Colorado School of Mines. I am also affiliated faculty in the Department of Computer Science and a member of the Mines Optimization and Deep Learning (MODL) group. Prior to joining Mines, I was an Assistant Adjunct Professor in the Department of Mathematics at UCLA. I received my PhD in applied mathematics from Emory University in 2019, where I worked under the guidance of Lars Ruthotto. My research interests lie at the intersection of applied mathematics and data science. In particular, I am interested in inverse problems, optimization, optimal control, and deep learning.
Recent News
- 10/10/2024: Our draft A Generalization Bound for a Family of Implicit Networks is out. Thanks to Benjamin Berkels for the collaboration.
- 10/04/2024: Our draft On Logical Extrapolation for Mazes with Recurrent and Implicit Networks is out. Thanks to Brandon Knutson, Amandin Chyba, Michael Ivanitskiy, Jordan Pettyjohn, Cecilia Diniz-Behn, and Daniel McKenzie for the collaboration.
- 09/30/2024: Our draft Mean-Field Control Barrier Functions: A Framework for Real-Time Swarm Control is out. Thanks to Levon Nurbekyan for the collaboration.
- 08/09/2024: Our draft Fast Partial Transforms for Large-Scale Ptychography is out. Thanks to Ricardo Parada and Stanley Osher for the collaboration.
- 08/07/2024: Our draft A hybrid SIAC – data-driven post-processing filter for discontinuities in solutions to numerical PDEs is out. Thanks to Soraya Terrab and Jennifer Ryan for the collaboration.
- 07/19/2024: Our paper Differentiating Through Integer Linear Programs with Quadratic Regularization and Davis-Yin Splitting has been accepted by Transactions on Machine Learning Research. Thanks to Daniel McKenzie and Howard Heaton for the collaboration.
- 06/05/2024: Our draft Laplace Meets Moreau: Smooth Approximation to Infimal Convolutions Using Laplace’s Method is out. Thanks to Ryan Tibshirani, Howard Heaton, and Stanley Osher for the collaboration.
- 05/22/2024: Alex Vidal successfully defended his PhD dissertation and will continue his journey as a Sr. Data Scientist at NerdWallet. Congratulations, Alex!
- 05/17/2024: Our draft Kernel Expansions for High-Dimensional Mean-Field Control with Non-local Interactions is out. Thanks to Alex Vidal, Luis Tenorio, Levon Nurbekyan, and Stanley Osher for the collaboration.
- 04/2024: Our paper Three-Operator Splitting for Learning to Predict Equilibria in Convex Games has been accepted by the SIAM Journal on Mathematics of Data Science. Thanks to Daniel McKenzie, Howard Heaton, Qiuwei Li, Wotao Yin, and Stanley Osher for the collaboration.
- 04/2024: I am honored to receive the 2024 Laney Early Career Alumni Award.
- 12/2023: Our draft Structured World Representations in Maze-Solving Transformers is out and has been accepted by the NeurIPS Workshop on Unifying Representations in Neural Models (UniReps). Thanks to Michael Ivanitskiy, Alex Spies, Tilman Räuker, and everyone else involved for the collaboration.
- 09/2023: Our preprint A Configurable Library for Generating and Manipulating Maze Datasets is out. Thanks to Michael Ivanitskiy and everyone else involved for the collaboration, and to AI Safety Camp for the support.
- 07/2023: NSF-DMS award received! This grant will help fund our work on learning to optimize. Thanks to my colleagues and Mines for the help and support.
- 06/2023: Our paper Explainable AI via Learning to Optimize has been accepted by Scientific Reports. Thanks to Howard Heaton for the collaboration.
- 03/2023: It was a great day to have the Mines Optimization and Deep Learning group together.
- 03/2023: Our paper Taming Hyperparameter Tuning in Continuous Normalizing Flows Using the JKO Scheme has been accepted by Scientific Reports. Thanks to Alexander Vidal, Luis Tenorio, Stanley Osher, and Levon Nurbekyan for their collaboration.
- 03/2023: Our paper A Hamilton-Jacobi-based Proximal Operator has been accepted by the Proceedings of the National Academy of Sciences. Thanks to Howard Heaton and Stan Osher for the collaboration.
- 02/2023: Our draft Faster Predict-and-Optimize with Davis-Yin Splitting is out. Thanks to Daniel McKenzie and Howard Heaton for the collaboration.
- 12/2022: Our SIAM News Article on Learning to Optimize is out. Thanks to Daniel McKenzie and Wotao Yin for the collaboration.
- 11/2022: Our paper A Numerical Algorithm for Inverse Problem from Partial Boundary Measurement Arising from Mean Field Game Problem has been accepted by the journal Inverse Problems. Thanks to Yat Tin Chow, Siting Liu, Levon Nurbekyan, and Stan Osher for the collaboration.
- 11/2022: Our paper Global Solutions to Nonconvex Problems via Evolution of Hamilton-Jacobi PDEs has been accepted by the journal Communications on Applied Mathematics and Computation. Thanks to Howard Heaton and Stanley Osher for the collaboration.
- 07/21/2022: It’s been a great summer working at the 2022 Emory Computational Mathematics for Data Science REU/RET on Model Meets Data. Congratulations to Linghai Liu, Lisa Zhou, and Allen Tong on a successful REU and poster presentation on implicit deep learning and inverse problems.
- 05/2022: Our draft Explainable AI via Learning to Optimize is out. Thanks to Howard Heaton for the collaboration.
- 04/2022: I am humbled to receive the inaugural 2022 MGB-SIAM Early Career Fellowship.
- 04/2022: Our draft A Numerical Algorithm for Inverse Problem from Partial Boundary Measurement Arising from Mean Field Game Problem is out. Thanks to Yat Tin Chow, Siting Liu, Levon Nurbekyan, and Stan Osher for the collaboration.
- 04/2022: Our paper Adaptive Uncertainty-Weighted ADMM for Distributed Optimization has been accepted by the Journal of Applied and Numerical Optimization. Thanks to Jianping Ye and Caleb Wan for the collaboration.
- 02/2022: Our paper Random Features for High-Dimensional Nonlocal Mean-Field Games has been accepted by the Journal of Computational Physics. Thanks to Sudhanshu Agrawal, Wonjun Lee, and Levon Nurbekyan for the collaboration.
Select Publications
- Wu Fung S, Nurbekyan L. Mean-Field Control Barrier Functions: A Framework for Real-Time Swarm Control, arXiv:2409.18945. 2024
- Tibshirani R, Wu Fung S, Heaton H, Osher S. Laplace Meets Moreau: Smooth Approximation to Infimal Convolutions Using Laplace’s Method, arXiv:2406.02003. 2024
- Wu Fung S, Berkels B. A Generalization Bound for a Family of Implicit Networks, arXiv:2410.07427. 2024
- Heaton H, Wu Fung S. Explainable AI via Learning to Optimize, Scientific Reports, 13 (10103). 2023
- Osher S, Heaton H, Wu Fung S. A Hamilton-Jacobi-based Proximal Operator, Proceedings of the National Academy of Sciences. 2023
- Wu Fung S, Heaton H, Li Q, McKenzie D, Osher S, Yin W. JFB: Jacobian-Free Backpropagation for Implicit Networks, AAAI Conference on Artificial Intelligence, 36(6), 6648-6656. 2022
- Onken D, Wu Fung S, Li X, Ruthotto L. OT-Flow: Fast and Accurate Continuous Normalizing Flows via Optimal Transport, AAAI Conference on Artificial Intelligence, 35(10), 9223-9232. 2021
- Ruthotto L, Osher S, Li W, Nurbekyan L, Wu Fung S. A Machine Learning Framework for Solving High-Dimensional Mean Field Game and Mean Field Control Problems, Proceedings of the National Academy of Sciences, 117(17), 9183-9193. 2020