Lee Lab @ UW-Madison

Research focus

  • Building Modular Deep Learning Systems with Pretrained Components (Large Language Models, Diffusion Models, …)
  • ML Fairness
    • Equal Improvability: ICLR’23 (GitHub) [Summary]
    • FairBatch: ICLR’21; its robust variant: NeurIPS’21; its decentralized variant: MLSys-CrossFL 2022
    • Fundamental limits of local fair training in federated learning: ISIT’23
    • Fair training under distribution shift: ICML’23
    • Invited talks: (2023) US-Mexico Workshop on Optimization and its Applications; (2022) KAIST AI International Symposium, IOS’22, UCSB CCDC, USC EE; (2021) UC Berkeley BLISS
  • Coded computation
    • Coded-InvNet (coded computation for deep invertible neural networks): ICML’21
    • Speeding Up Distributed Machine Learning Using Codes: T-IT’18
      • The Joint Communications Society/Information Theory Society Paper Award, 2020
    • Invited talks: (2021) Seoul National University, POSTECH AI

New preprints

  • Vector-Valued Variation Spaces and Width Bounds for DNNs: Insights on Weight Decay Regularization
    Joseph Shenouda, Rahul Parhi, Kangwook Lee, and Robert D. Nowak
    arXiv, 2023
  • DPOK: Reinforcement Learning for Fine-tuning Text-to-Image Diffusion Models
    Ying Fan, Olivia Watkins, Yuqing Du, Hao Liu, Moonkyung Ryu, Craig Boutilier, Pieter Abbeel, Mohammad Ghavamzadeh, Kangwook Lee, and Kimin Lee
    arXiv, 2023
  • A Better Way to Decay: Proximal Gradient Training Algorithms for Neural Nets
    Liu Yang, Jifan Zhang, Joseph Shenouda, Dimitris Papailiopoulos, Kangwook Lee, and Robert D. Nowak
    arXiv, NeurIPS’22 OPT Workshop, 2022
  • Outlier-Robust Group Inference via Gradient Space Clustering
    Yuchen Zeng, Kristjan Greenewald, Kangwook Lee, Justin Solomon, and Mikhail Yurochkin
    arXiv, 2022

News