Lee Lab @ UW Madison
- Active Learning is a Strong Baseline for Data Subset Selection
D. Park, D. Papailiopoulos, K. Lee
NeurIPS HITY Workshop, 2022
- A Better Way to Decay: Proximal Gradient Training Algorithms for Neural Nets
L. Yang, J. Zhang, J. Shenouda, D. Papailiopoulos, K. Lee, and R. Nowak
NeurIPS OPT Workshop, 2022
- Equal Improvability: A New Fairness Notion Considering the Long-Term Impact
O. Guldogan, Y. Zeng, J. Sohn, R. Pedarsani, and K. Lee
- Outlier-Robust Group Inference via Gradient Space Clustering
Y. Zeng, K. Greenewald, K. Lee, J. Solomon, and M. Yurochkin
- [Findings of EMNLP’22] Utilizing Language-Image Pretraining for Efficient and Robust Bilingual Word Alignment. (A short summary)
- [NeurIPS’22] Score-Based Generative Modeling Secretly Minimizes the Wasserstein Distance
- [NeurIPS’22] LIFT: Language-Interfaced Fine-Tuning for Non-Language Machine Learning Tasks. (Github repository) (A short summary)
- [NeurIPS’22] Rare Gems: Finding Lottery Tickets at Initialization
- [ICML’22] GenLabel: Mixup Relabeling using Generative Models
- [ICLR’22] Permutation-Based SGD: Is Random Optimal?
- [ISIT’22] Breaking Fair Binary Classification with Optimal Flipping Attacks
- [NeurIPS’21] Sample Selection for Fair and Robust Training
- [NeurIPS’21] Gradient Inversion with Generative Image Prior
- [ICML’21] Coded-InvNet for Resilient Prediction Serving Systems (long oral)
- [ICML’21] Discrete-Valued Latent Preference Matrix Estimation with Graph Side Information
- [MLSys’21] Accordion: Adaptive Gradient Communication via Critical Learning Regime Identification
- [ICLR’21] FairBatch: Batch Selection for Model Fairness
- (Jul. 2022) Prof. Lee received the UW-Madison ECE Grainger Faculty Scholarship Award.
- (Mar. 2022) Prof. Lee received a 2022 KSEA Young Investigator Grant.