Selected Publications
- ParallelBench: Understanding the Trade-offs of Parallel Decoding in Diffusion LLMs
Wonjun Kang, Kevin Galim, Seunghyuk Oh, Minjae Lee, Yuchen Zeng, Shuibai Zhang, Coleman Hooper, Yuezhou Hu, Hyung Il Koo, Nam Ik Cho, and Kangwook Lee
ICLR 2026 | Summary | Github
- Orak: A Foundational Benchmark for Training and Evaluating LLM Agents on Diverse Video Games
Dongmin Park, Minkyu Kim, Beongjun Choi, Junhyuck Kim, Keon Lee, Jonghyun Lee, Inkyu Park, Byeong-Uk Lee, Jaeyoung Hwang, Jaewoo Ahn, Ameya Sunil Mahabaleshwarkar, Bilal Kartal, Pritam Biswas, Yoshi Suhara, Kangwook Lee, and Jaewoong Cho
ICLR 2026 | Outstanding Paper Award @ EMNLP 2025 Wordplay Workshop | Summary | Github
- Infected Smallville: How Disease Threat Shapes Sociality in LLM Agents
Soyeon Choi, Kangwook Lee, Oliver Sng, and Joshua M. Ackerman
ICML 2025 Workshop | Summary
- VersaPRM: Multi-Domain Process Reward Model via Synthetic Reasoning Data
Thomas Zeng, Shuibai Zhang, Shutong Wu, Christian Classen, Daewon Chae, Ethan Ewer, Minjae Lee, Heeju Kim, Wonjun Kang, Jackson Kunde, Ying Fan, Jungtaek Kim, Hyung Il Koo, Kannan Ramchandran, Dimitris Papailiopoulos, and Kangwook Lee
ICML 2025 (oral) | Summary | Github | HuggingFace
- Everything Everywhere All at Once: LLMs can In-Context Learn Multiple Tasks in Superposition
Zheyang Xiong, Ziyang Cai, John Cooper, Albert Ge, Vasilis Papageorgiou, Zack Sifakis, Angeliki Giannou, Ziqian Lin, Liu Yang, Saurabh Agarwal, Grigorios Chrysos, Samet Oymak, Kangwook Lee, and Dimitris Papailiopoulos
ICML 2025 (spotlight)
- Self-Improving Transformers Overcome Easy-to-Hard and Length Generalization Challenges
Nayoung Lee, Ziyang Cai, Avi Schwarzschild, Kangwook Lee, and Dimitris Papailiopoulos
ICML 2025
- Looped Transformers for Length Generalization
Ying Fan, Yilun Du, Kannan Ramchandran, and Kangwook Lee
ICLR 2025 | Summary | Github
- Rare-to-Frequent: Unlocking Compositional Generation Power of Diffusion Models on Rare Concepts with LLM Guidance
Dongmin Park, Sebin Kim, Taehong Moon, Minkyu Kim, Kangwook Lee, and Jaewoong Cho
ICLR 2025 (spotlight) | Summary | Github
- Dual Operating Modes of In-Context Learning
Ziqian Lin and Kangwook Lee
ICML 2024 | Summary | Github
- The Expressive Power of Low-Rank Adaptation
Yuchen Zeng and Kangwook Lee
ICLR 2024 | Summary | Github
- Teaching Arithmetic to Small Transformers
Nayoung Lee, Kartik Sreenivasan, Jason Lee, Kangwook Lee, and Dimitris Papailiopoulos
ICLR 2024 | Summary | Github
- DPOK: Reinforcement Learning for Fine-tuning Text-to-Image Diffusion Models
Ying Fan, Olivia Watkins, Yuqing Du, Hao Liu, Moonkyung Ryu, Craig Boutilier, Pieter Abbeel, Mohammad Ghavamzadeh, Kangwook Lee, and Kimin Lee
NeurIPS 2023
- Score-based generative modeling secretly minimizes the Wasserstein distance
Dohyun Kwon, Ying Fan, and Kangwook Lee
NeurIPS 2022
- LIFT: Language-Interfaced FineTuning for Non-Language Machine Learning Tasks
Tuan Dinh*, Yuchen Zeng*, Ruisu Zhang, Ziqian Lin, Michael Gira, Shashank Rajput, Jy-yong Sohn, Dimitris Papailiopoulos, and Kangwook Lee
NeurIPS 2022
- Coded-InvNet for Resilient Prediction Serving Systems
Tuan Dinh and Kangwook Lee
ICML 2021 (long oral)
- Speeding Up Distributed Machine Learning Using Codes
Kangwook Lee, Maximilian Lam, Ramtin Pedarsani, Dimitris Papailiopoulos, and Kannan Ramchandran
IEEE Transactions on Information Theory, January 2018 | IEEE ComSoc/IT Society Paper Award, 2020
Preprints
- Fine-Tuning Without Forgetting In-Context Learning: A Theoretical Analysis of Linear Attention Models
Chungpa Lee, Jy-yong Sohn, and Kangwook Lee
arXiv 2026
- In-Context Learning with Hypothesis-Class Guidance
Ziqian Lin, Shubham Kumar Bharti, and Kangwook Lee
arXiv 2025
- LLM-Lasso: A Robust Framework for Domain-Informed Feature Selection and Regularization
Erica Zhang, Ryunosuke Goto, Naomi Sagan, Jurik Mutter, Nick Phillips, Ash Alizadeh, Kangwook Lee, Jose Blanchet, Mert Pilanci, and Robert Tibshirani
arXiv 2025
- ReJump: A Tree-Jump Representation for Analyzing and Improving LLM Reasoning
Yuchen Zeng, Shuibai Zhang, Wonjun Kang, Shutong Wu, Lynnix Zou, Ying Fan, Heeju Kim, Ziqian Lin, Jungtaek Kim, Hyung Il Koo, Dimitris Papailiopoulos, and Kangwook Lee
arXiv 2025 | Summary | Github
- How to Correctly Report LLM-as-a-Judge Evaluations
Chungpa Lee, Thomas Zeng, Jongwon Jeong, Jy-yong Sohn, and Kangwook Lee
arXiv 2025 | Summary | Github
- Multi-Bin Batching for Increasing LLM Inference Throughput
Ozgur Guldogan, Jackson Kunde, Kangwook Lee, and Ramtin Pedarsani
arXiv 2024
- PathProx: A Proximal Gradient Algorithm for Weight Decay Regularized Deep Neural Networks
Liu Yang, Jifan Zhang, Joseph Shenouda, Dimitris Papailiopoulos, Kangwook Lee, and Robert D. Nowak
arXiv 2023
2026
- TAPE: Tool-Guided Adaptive Planning and Constrained Execution in Language Model Agents
Jongwon Jeong, Jungtaek Kim, and Kangwook Lee
ICLR 2026 Workshop on Agentic AI in the Wild | Github
- Draft-based Approximate Inference for LLMs
Kevin Galim, Ethan Ewer, Wonjun Kang, Minjae Lee, Hyung Il Koo, and Kangwook Lee
ICLR 2026 | Summary | Github
- ParallelBench: Understanding the Trade-offs of Parallel Decoding in Diffusion LLMs
Wonjun Kang, Kevin Galim, Seunghyuk Oh, Minjae Lee, Yuchen Zeng, Shuibai Zhang, Coleman Hooper, Yuezhou Hu, Hyung Il Koo, Nam Ik Cho, and Kangwook Lee
ICLR 2026 | Summary | Github
- Orak: A Foundational Benchmark for Training and Evaluating LLM Agents on Diverse Video Games
Dongmin Park, Minkyu Kim, Beongjun Choi, Junhyuck Kim, Keon Lee, Jonghyun Lee, Inkyu Park, Byeong-Uk Lee, Jaeyoung Hwang, Jaewoo Ahn, Ameya Sunil Mahabaleshwarkar, Bilal Kartal, Pritam Biswas, Yoshi Suhara, Kangwook Lee, and Jaewoong Cho
ICLR 2026 | Outstanding Paper Award @ EMNLP 2025 Wordplay Workshop | Summary | Github
- TABED: Test-Time Adaptive Ensemble Drafting for Robust Speculative Decoding in LVLMs
Minjae Lee, Wonjun Kang, Byeongkeun Ahn, Christian Classen, Kevin Galim, Seunghyuk Oh, Minghao Yan, Hyung Il Koo, and Kangwook Lee
EACL 2026 (Findings)
2025
- Orak: A Foundational Benchmark for Training and Evaluating LLM Agents on Diverse Video Games
Dongmin Park, Minkyu Kim, Beongjun Choi, Junhyuck Kim, Keon Lee, Jonghyun Lee, Inkyu Park, Byeong-Uk Lee, Jaeyoung Hwang, Jaewoo Ahn, Ameya Sunil Mahabaleshwarkar, Bilal Kartal, Pritam Biswas, Yoshi Suhara, Kangwook Lee, and Jaewoong Cho
EMNLP 2025 (Wordplay Workshop) | Outstanding Paper Award
- Transformers in the Dark: Navigating unknown search spaces via noisy feedback
Jungtaek Kim, Ziqian Lin, Thomas Zeng, Minjae Lee, Chungpa Lee, Jy-yong Sohn, Hyung Il Koo, and Kangwook Lee
NeurIPS 2025 (WCTD Workshop)
- ENTP: Encoder-only Next Token Prediction
Ethan Ewer, Daewon Chae, Thomas Zeng, Jinkyu Kim, and Kangwook Lee
NeurIPS 2025 (WCTD Workshop, spotlight)
- Infected Smallville: How Disease Threat Shapes Sociality in LLM Agents
Soyeon Choi, Kangwook Lee, Oliver Sng, and Joshua M. Ackerman
ICML 2025 Workshop | Summary
- Improvement-Guided Iterative DPO for Diffusion Models
Ying Fan, Fei Deng, Yang Zhao, Sahil Singla, Rahul Jain, Tingbo Hou, Kangwook Lee, Feng Yang, Deepak Ramachandran, and Qifei Wang
ICML 2025 Workshop
- In-batch Ensemble Drafting: Toward Fast and Robust Speculative Decoding for Multimodal Language Models
Minjae Lee, Wonjun Kang, Byeongkeun Ahn, Christian Classen, Minghao Yan, Hyung Il Koo, and Kangwook Lee
ICLR 2025 (SCOPE Workshop)
- Self-Improving Transformers Overcome Easy-to-Hard and Length Generalization Challenges
Nayoung Lee, Ziyang Cai, Avi Schwarzschild, Kangwook Lee, and Dimitris Papailiopoulos
ICLR 2025 Workshop on Scaling Self-Improving Foundation Models
- Task Vectors in In-Context Learning: Emergence, Formation, and Benefit
Liu Yang, Ziqian Lin, Kangwook Lee, Dimitris Papailiopoulos, and Robert Nowak
COLM 2025
- VersaPRM: Multi-Domain Process Reward Model via Synthetic Reasoning Data
Thomas Zeng, Shuibai Zhang, Shutong Wu, Christian Classen, Daewon Chae, Ethan Ewer, Minjae Lee, Heeju Kim, Wonjun Kang, Jackson Kunde, Ying Fan, Jungtaek Kim, Hyung Il Koo, Kannan Ramchandran, Dimitris Papailiopoulos, and Kangwook Lee
ICML 2025 (oral) | Summary | Github | HuggingFace
- Everything Everywhere All at Once: LLMs can In-Context Learn Multiple Tasks in Superposition
Zheyang Xiong, Ziyang Cai, John Cooper, Albert Ge, Vasilis Papageorgiou, Zack Sifakis, Angeliki Giannou, Ziqian Lin, Liu Yang, Saurabh Agarwal, Grigorios Chrysos, Samet Oymak, Kangwook Lee, and Dimitris Papailiopoulos
ICML 2025 (spotlight)
- Parameter-Efficient Fine-Tuning of State Space Models
Kevin Galim, Wonjun Kang, Yuchen Zeng, Hyung Il Koo, and Kangwook Lee
ICML 2025
- Self-Improving Transformers Overcome Easy-to-Hard and Length Generalization Challenges
Nayoung Lee, Ziyang Cai, Avi Schwarzschild, Kangwook Lee, and Dimitris Papailiopoulos
ICML 2025
- Looped Transformers for Length Generalization
Ying Fan, Yilun Du, Kannan Ramchandran, and Kangwook Lee
ICLR 2025 | Summary | Github
- From Artificial Needles to Real Haystacks: Improving Retrieval Capabilities in LLMs by Finetuning on Synthetic Data
Zheyang Xiong, Vasilis Papageorgiou, Kangwook Lee, and Dimitris Papailiopoulos
ICLR 2025 | Summary | Github
- Rare-to-Frequent: Unlocking Compositional Generation Power of Diffusion Models on Rare Concepts with LLM Guidance
Dongmin Park, Sebin Kim, Taehong Moon, Minkyu Kim, Kangwook Lee, and Jaewoong Cho
ICLR 2025 (spotlight) | Summary | Github
- ENTP: Encoder-only Next Token Prediction
Ethan Ewer, Daewon Chae, Thomas Zeng, Jinkyu Kim, and Kangwook Lee
Transactions on Machine Learning Research (TMLR) 2025
- Improving CLIP Counting Accuracy via Parameter-Efficient Fine-Tuning
Ruisu Zhang, Yicong Chen, and Kangwook Lee
Transactions on Machine Learning Research (TMLR) 2025 | Github
- Buffer-based Gradient Projection for Continual Federated Learning
Shenghong Dai, Jy-yong Sohn, Yicong Chen, S M Iftekharul Alam, Ravikumar Balakrishnan, Suman Banerjee, Nageen Himayat, and Kangwook Lee
Transactions on Machine Learning Research (TMLR) 2025 | Github
2024
- Can MLLMs Perform Text-to-Image In-Context Learning?
Yuchen Zeng*, Wonjun Kang*, Yicong Chen, Hyung Il Koo, and Kangwook Lee
COLM 2024 | Summary | Github
- Dual Operating Modes of In-Context Learning
Ziqian Lin and Kangwook Lee
ICML 2024 | Summary | Github
- Can Mamba Learn How To Learn? A Comparative Study on In-Context Learning Tasks
Jongho Park, Jaeseung Park, Zheyang Xiong, Nayoung Lee, Jaewoong Cho, Samet Oymak, Kangwook Lee, and Dimitris Papailiopoulos
ICML 2024 | Github
- Memorization Capacity for Additive Fine-Tuning with Small ReLU Networks
Jy-yong Sohn, Dohyun Kwon, Seoyeon An, and Kangwook Lee
UAI 2024
- The Expressive Power of Low-Rank Adaptation
Yuchen Zeng and Kangwook Lee
ICLR 2024 | Summary | Github
- Image Clustering Conditioned on Text Criteria
Sehyun Kwon, Jaeseung Park, Minkyu Kim, Jaewoong Cho, Ernest K. Ryu, and Kangwook Lee
ICLR 2024 | Summary | Github
- Teaching Arithmetic to Small Transformers
Nayoung Lee, Kartik Sreenivasan, Jason Lee, Kangwook Lee, and Dimitris Papailiopoulos
ICLR 2024 | Summary | Github
- Looped Transformers are Better at Learning Learning Algorithms
Liu Yang, Kangwook Lee, Robert D Nowak, and Dimitris Papailiopoulos
ICLR 2024 | Summary | Github
- Looped Transformers for Length Generalization
Ying Fan, Yilun Du, Kannan Ramchandran, and Kangwook Lee
NeurIPS 2024 Workshop on MATH-AI
- Transformers Can Learn Meta-skills for Task Generalization in In-Context Learning
Ying Fan, Steve Yadlowsky, Dimitris Papailiopoulos, and Kangwook Lee
NeurIPS 2024 Compositional Learning Workshop
- Parameter-Efficient Fine-Tuning of State Space Models
Kevin Galim*, Wonjun Kang*, Yuchen Zeng*, Hyung Il Koo, and Kangwook Lee
NeurIPS 2024 Workshop on Fine-Tuning in Modern ML (oral)
- Dual Operating Modes of In-Context Learning
Ziqian Lin and Kangwook Lee
ICLR 2024 Workshop on ME-FoMo
- Can Mamba Learn How To Learn? A Comparative Study on In-Context Learning Tasks
Jongho Park, Jaeseung Park, Zheyang Xiong, Nayoung Lee, Jaewoong Cho, Samet Oymak, Kangwook Lee, and Dimitris Papailiopoulos
ICLR 2024 Workshop on ME-FoMo
- Variation Spaces for Multi-Output Neural Networks: Insights on Multi-Task Learning and Network Compression
Joseph Shenouda, Rahul Parhi, Kangwook Lee, and Robert D. Nowak
Journal of Machine Learning Research (JMLR) 2024
- Mini-Batch Optimization of Contrastive Loss
Jaewoong Cho*, Kartik Sreenivasan*, Keon Lee, Kyunghoo Mun, Soheun Yi, Jeong-Gwan Lee, Anna Lee, Jy-yong Sohn, Dimitris Papailiopoulos, and Kangwook Lee
Transactions on Machine Learning Research (TMLR) 2024
- Predictive Pipelined Decoding: A Compute-Latency Trade-off for Exact LLM Decoding
Seongjun Yang, Gibbeum Lee, Jaewoong Cho, Dimitris Papailiopoulos, and Kangwook Lee
Transactions on Machine Learning Research (TMLR) 2024
- Superresolution emulation of large cosmological fields with a 3D conditional diffusion model
Adam Rouhiainen, Michael Gira, Moritz Münchmeyer, Kangwook Lee, and Gary Shiu
Physical Review D 2024
2023
- DPOK: Reinforcement Learning for Fine-tuning Text-to-Image Diffusion Models
Ying Fan, Olivia Watkins, Yuqing Du, Hao Liu, Moonkyung Ryu, Craig Boutilier, Pieter Abbeel, Mohammad Ghavamzadeh, Kangwook Lee, and Kimin Lee
NeurIPS 2023
- Prompted LLMs as Chatbot Modules for Long Open-domain Conversation
Gibbeum Lee, Volker Hartmann, Jongho Park, Dimitris Papailiopoulos, and Kangwook Lee
ACL 2023 (Findings, Short)
- Improving Fair Training under Correlation Shifts
Yuji Roh, Kangwook Lee, Steven Euijong Whang, and Changho Suh
ICML 2023
- Optimizing DDPM Sampling with Shortcut Fine-Tuning
Ying Fan and Kangwook Lee
ICML 2023
- Looped Transformers as Programmable Computers
Angeliki Giannou*, Shashank Rajput*, Jy-yong Sohn, Kangwook Lee, Jason D. Lee, and Dimitris Papailiopoulos
ICML 2023
- Equal Improvability: A New Fairness Notion Considering the Long-Term Impact
Ozgur Guldogan*, Yuchen Zeng*, Jy-yong Sohn, Ramtin Pedarsani, and Kangwook Lee
ICLR 2023
- Federated Learning with Local Fairness Constraints
Yuchen Zeng, Hongxu Chen, and Kangwook Lee
IEEE ISIT 2023
- Online Federated Learning based Object Detection across Autonomous Vehicles in a Virtual World
Shenghong Dai, S M Iftekharul Alam, Ravikumar Balakrishnan, Kangwook Lee, Suman Banerjee, and Nageen Himayat
IEEE CCNC 2023 (Demo)
- FedGP: Buffer-based Gradient Projection for Continual Federated Learning
Shenghong Dai, Bryce Yicong Chen, Jy-yong Sohn, S M Iftekharul Alam, Ravikumar Balakrishnan, Suman Banerjee, Nageen Himayat, and Kangwook Lee
MLSys-FLSys 2023 | Best Paper Award
- Image Clustering Conditioned on Text Criteria
Sehyun Kwon, Jaeseung Park, Minkyu Kim, Jaewoong Cho, Ernest K. Ryu, and Kangwook Lee
NeurIPS 2023 Workshop on R0-FOMO
- Coded Prompts for Large Language Models
Ziqian Lin, Yicong Chen, Yuchen Zeng, and Kangwook Lee
NeurIPS 2023 Workshop on R0-FOMO
- Zero-shot Improvement of Object Counting with CLIP
Ruisu Zhang, Yicong Chen, and Kangwook Lee
NeurIPS 2023 Workshop on R0-FOMO
- The Expressive Power of Low-Rank Adaptation
Yuchen Zeng and Kangwook Lee
NeurIPS 2023 Workshop on OPT
- Outlier-Robust Group Inference via Gradient Space Clustering
Yuchen Zeng, Kristjan Greenewald, Kangwook Lee, Justin Solomon, and Mikhail Yurochkin
NeurIPS 2023 Workshop on DistShift
- Super-Resolution Emulation of Large Cosmological Fields with a 3D Conditional Diffusion Model
Adam Rouhiainen, Michael Gira, Gary Shiu, Kangwook Lee, and Moritz Münchmeyer
NeurIPS 2023 Workshop on ML and the Physical Sciences
- Predictive Pipelined Decoding: A Compute-Latency Trade-off for Exact LLM Decoding
Seongjun Yang, Gibbeum Lee, Jaewoong Cho, Dimitris Papailiopoulos, and Kangwook Lee
ICML 2023 Workshop on ES-FoMo
- Looped Transformers are Better at Learning Learning Algorithms
Liu Yang, Kangwook Lee, Robert D Nowak, and Dimitris Papailiopoulos
ICML 2023 Workshop on ES-FoMo
- A Representer Theorem for Vector-Valued Neural Networks
Joseph Shenouda, Rahul Parhi, Kangwook Lee, and Robert D Nowak
ICML 2023 Workshop on Duality Principles
- Teaching Arithmetic to Small Transformers
Nayoung Lee, Kartik Sreenivasan, Jason Lee, Kangwook Lee, and Dimitris Papailiopoulos
ICML 2023 Workshop on Neural Conversational AI
- Looped Transformers as Programmable Computers
Angeliki Giannou, Shashank Rajput, Jy-yong Sohn, Kangwook Lee, Jason D. Lee, and Dimitris Papailiopoulos
ICLR 2023 Workshop on ME-FoMo
- Mini-Batch Optimization of Contrastive Loss
Kartik Sreenivasan, Keon Lee, Jeong-Gwan Lee, Anna Lee, Jaewoong Cho, Jy-yong Sohn, Dimitris Papailiopoulos, and Kangwook Lee
ICLR 2023 Workshop on ME-FoMo
2022
- Utilizing Language-Image Pretraining for Efficient and Robust Bilingual Word Alignment
Tuan Dinh, Jy-yong Sohn, Shashank Rajput, Tim Ossowski, Yifei Ming, Junjie Hu, Dimitris Papailiopoulos, and Kangwook Lee
EMNLP 2022 (Findings)
- Score-based generative modeling secretly minimizes the Wasserstein distance
Dohyun Kwon, Ying Fan, and Kangwook Lee
NeurIPS 2022
- LIFT: Language-Interfaced FineTuning for Non-Language Machine Learning Tasks
Tuan Dinh*, Yuchen Zeng*, Ruisu Zhang, Ziqian Lin, Michael Gira, Shashank Rajput, Jy-yong Sohn, Dimitris Papailiopoulos, and Kangwook Lee
NeurIPS 2022
- Rare Gems: Finding Lottery Tickets at Initialization
Kartik Sreenivasan, Jy-yong Sohn, Liu Yang, Matthew Grinde, Aliot Nagle, Hongyi Wang, Kangwook Lee, and Dimitris Papailiopoulos
NeurIPS 2022
- GenLabel: Mixup Relabeling using Generative Models
Jy-yong Sohn, Liang Shang, Hongxu Chen, Jaekyun Moon, Dimitris Papailiopoulos, and Kangwook Lee
ICML 2022
- Permutation-Based SGD: Is Random Optimal?
Shashank Rajput, Kangwook Lee, and Dimitris Papailiopoulos
ICLR 2022
- Breaking Fair Binary Classification with Optimal Flipping Attacks
Changhun Jo, Jy-yong Sohn, and Kangwook Lee
IEEE ISIT 2022
- Hierarchical Deep Reinforcement Learning-based Propofol Infusion Assistant Framework in Anesthesia
Won Joon Yun, MyungJae Shin, David Mohaisen, Kangwook Lee, and Joongheon Kim
IEEE Transactions on Neural Networks and Learning Systems 2022
- Addendum and Erratum to “The MDS Queue: Analysing the Latency Performance of Erasure Codes”
Kangwook Lee, Nihar B. Shah, Longbo Huang, and Kannan Ramchandran
IEEE Transactions on Information Theory 2022
- Active Learning is a Strong Baseline for Data Subset Selection
Dongmin Park, Dimitris Papailiopoulos, and Kangwook Lee
NeurIPS 2022 HITY Workshop
- A Better Way to Decay: Proximal Gradient Training Algorithms for Weight Decay
Liu Yang, Jifan Zhang, Joseph Shenouda, Dimitris Papailiopoulos, Kangwook Lee, and Robert D. Nowak
NeurIPS 2022 OPT Workshop
- Super Seeds: Extreme Model Compression by Trading Off Storage with Computation
Nayoung Lee*, Shashank Rajput*, Jy-yong Sohn, Hongyi Wang, Aliot Nagle, Eric P. Xing, Kangwook Lee, and Dimitris Papailiopoulos
ICML 2022 UpML Workshop (oral)
- Improved Input Reprogramming for GAN Conditioning
Tuan Dinh, Daewon Seo, Zhixu Du, Liang Shang, and Kangwook Lee
ICML 2022 UpML Workshop
- Improving Fairness via Federated Learning
Yuchen Zeng, Hongxu Chen, and Kangwook Lee
MLSys-CrossFL 2022
- Dynamic Decentralized Federated Learning
Shenghong Dai, Kangwook Lee, and Suman Banerjee
MLSys-CrossFL 2022
- Debiasing Pre-Trained Language Models via Efficient Fine-tuning
Michael Gira, Ruisu Zhang, and Kangwook Lee
ACL 2022 Workshop on LT-EDI
- Federated Unsupervised Clustering with Generative Models
Jichang Chung, Kangwook Lee, and Kannan Ramchandran
AAAI 2022 FL Workshop
- Improving Fairness via Federated Learning
Yuchen Zeng, Hongxu Chen, and Kangwook Lee
AAAI 2022 FL Workshop
- Deep Neural Networks for High-fidelity Measurement of Multiqubit Circuits
Linipun Phuttitarn, Robert McDermott, Chuan-Hong Liu, Kangwook Lee, Liang Shang, and Daewon Seo
APS March Meeting 2022
- On a bilevel optimization approach to fair classification
Yuchen Zeng, Ziqian Lin, and Kangwook Lee
2022 INFORMS Optimization Society Conference
2021
- Sample Selection for Fair and Robust Training
Yuji Roh, Kangwook Lee, Steven Euijong Whang, and Changho Suh
NeurIPS 2021
- Gradient Inversion with Generative Image Prior
Jinwoo Jeon, Jaechang Kim, Kangwook Lee, Sewoong Oh, and Jungseul Ok
NeurIPS 2021
- Coded-InvNet for Resilient Prediction Serving Systems
Tuan Dinh and Kangwook Lee
ICML 2021 (long oral)
- Discrete-Valued Latent Preference Matrix Estimation with Graph Side Information
Changhun Jo and Kangwook Lee
ICML 2021
- Accordion: Adaptive Gradient Communication via Critical Learning Regime Identification
Saurabh Agarwal, Hongyi Wang, Kangwook Lee, Shivaram Venkataraman, and Dimitris Papailiopoulos
MLSys 2021
- FairBatch: Batch Selection for Model Fairness
Yuji Roh, Kangwook Lee, Steven Euijong Whang, and Changho Suh
ICLR 2021
- Predicting Vehicle Collisions using Data Collected from Video Games
Hoon Kim*, Kangwook Lee*, and Changho Suh
Springer Machine Vision and Applications 2021
- The Roaming Edge and its Applications
Suman Banerjee, Remzi Arpaci-Dusseau, Shenghong Dai, Kassem Fawaz, Mohit Gupta, Kangwook Lee, and Shivaram Venkataraman
ACM GetMobile 2021
- Gradient Inversion with Generative Image Prior
Jinwoo Jeon, Jaechang Kim, Kangwook Lee, Sewoong Oh, and Jungseul Ok
ICML 2021 Federated Learning Workshop
- Empirical Study on the Effective VC Dimension of Low-rank Neural Networks
Daewon Seo, Hongyi Wang, Dimitris Papailiopoulos, and Kangwook Lee
ICML 2021 Overparameterization Workshop
2020
- Attack of the Tails: Yes, You Really Can Backdoor Federated Learning
Hongyi Wang, Kartik Sreenivasan, Shashank Rajput, Harit Vishwakarma, Saurabh Agarwal, Jy-yong Sohn, Kangwook Lee, and Dimitris Papailiopoulos
NeurIPS 2020
- Reprogramming GANs via Input Noise Design
Kangwook Lee, Changho Suh, and Kannan Ramchandran
ECML PKDD 2020
- FR-Train: A mutual information-based approach to fair and robust training
Yuji Roh, Kangwook Lee, Steven Euijong Whang, and Changho Suh
ICML 2020
- GAN-mixup: Augmenting Across Data Manifolds for Improved Robustness
Jy-yong Sohn, Kangwook Lee, Jaekyun Moon, and Dimitris Papailiopoulos
ICML 2020 Workshop on Uncertainty & Robustness in Deep Learning
2019
- Synthesizing Differentially Private Datasets using Random Mixing
Kangwook Lee, Hoon Kim, Kyungmin Lee, Changho Suh, and Kannan Ramchandran
IEEE ISIT 2019
- Crash to Not Crash: Learn to Identify Dangerous Vehicles using a Simulator
Hoon Kim*, Kangwook Lee*, Gyeongjo Hwang, and Changho Suh
AAAI 2019 (long oral)
- SAFFRON: Sparse-Graph Code Framework for Group Testing
Kangwook Lee, Kabir Chandrasekher, Ramtin Pedarsani, and Kannan Ramchandran
IEEE Transactions on Signal Processing 2019
- Community Recovery in Hypergraphs
Kwangjun Ahn*, Kangwook Lee*, and Changho Suh
IEEE Transactions on Information Theory 2019
- Improving Model Robustness via Automatically Incorporating Self-supervision Tasks
Donghwa Kim, Kangwook Lee, and Changho Suh
NeurIPS 2019 MetaLearn Workshop
2018
- Binary Rating Estimation with Graph Side Information
Kwangjun Ahn, Kangwook Lee, Hyunseung Cha, and Changho Suh
NeurIPS 2018
- Simulated+Unsupervised Learning With Adaptive Data Generation and Bidirectional Mappings
Kangwook Lee*, Hoon Kim*, and Changho Suh
ICLR 2018
- On the Joint Recovery of Community Structure and Community Features
Jisang Yoon, Kangwook Lee, and Changho Suh
Allerton 2018
- Hierarchical Coding for Distributed Computing
Hyegyeong Park, Kangwook Lee, Jy-yong Sohn, Changho Suh, and Jaekyun Moon
IEEE ISIT 2018
- Straggler-proofing massive-scale distributed matrix multiplication with d-dimensional product codes
Tavor Baharav, Kangwook Lee, Orhan Ocal, and Kannan Ramchandran
IEEE ISIT 2018
- SGD on Random Mixtures: Private Machine Learning under Data-breach Threats
Kangwook Lee, Kyungmin Lee, Hoon Kim, Changho Suh, and Kannan Ramchandran
SysML 2018
- UberShuffle: Communication-efficient Data Shuffling for SGD via Coding Theory
Jichang Chung, Kangwook Lee, Ramtin Pedarsani, Dimitris Papailiopoulos, and Kannan Ramchandran
SysML 2018
- Speeding Up Distributed Machine Learning Using Codes
Kangwook Lee, Maximilian Lam, Ramtin Pedarsani, Dimitris Papailiopoulos, and Kannan Ramchandran
IEEE Transactions on Information Theory, January 2018 | IEEE ComSoc/IT Society Paper Award, 2020
- Hypergraph Spectral Clustering in the Weighted Stochastic Block Model
Kwangjun Ahn, Kangwook Lee, and Changho Suh
IEEE Journal of Selected Topics in Signal Processing 2018
- SGD on Random Mixtures: Private Machine Learning under Data-breach Threats
Kangwook Lee, Kyungmin Lee, Hoon Kim, Changho Suh, and Kannan Ramchandran
ICLR 2018 Workshop
2017 and earlier
- Matrix Sparsification for Coded Matrix Multiplication
Geewon Suh, Kangwook Lee, and Changho Suh
Allerton 2017
- High-Dimensional Coded Matrix Multiplication
Kangwook Lee, Changho Suh, and Kannan Ramchandran
IEEE ISIT 2017
- Coded Computation for Multicore Setups
Kangwook Lee, Ramtin Pedarsani, Dimitris Papailiopoulos, and Kannan Ramchandran
IEEE ISIT 2017
- Information-theoretic Limits of Subspace Clustering
Kwangjun Ahn, Kangwook Lee, and Changho Suh
IEEE ISIT 2017
- Asynchronous and Noncoherent Neighbor Discovery for the IoT Using Sparse-Graph Codes
Kabir Chandrasekher, Kangwook Lee, Peter Kairouz, Ramtin Pedarsani, and Kannan Ramchandran
IEEE ICC 2017
- UberShuffle
Jichang Chung, Kangwook Lee, Ramtin Pedarsani, Dimitris Papailiopoulos, and Kannan Ramchandran
NeurIPS 2017 ML Systems Workshop
- Crash to not crash: Playing video games to predict vehicle collisions
Kangwook Lee*, Hoon Kim*, and Changho Suh
ICML 2017 Workshop on ML for Autonomous Vehicles
- Large-scale and Interpretable Collaborative Filtering for Educational Data
Kangwook Lee, Jichang Chung, and Changho Suh
KDD 2017 Workshop
- The MDS Queue: Analysing the Latency Performance of Erasure Codes
Kangwook Lee, Nihar Shah, Longbo Huang, and Kannan Ramchandran
IEEE Transactions on Information Theory, May 2017
- On Scheduling Redundant Requests With Cancellation Overheads
Kangwook Lee, Ramtin Pedarsani, and Kannan Ramchandran
IEEE/ACM Transactions on Networking, April 2017
- PhaseCode: Fast and Efficient Compressive Phase Retrieval based on Sparse-Graph-Codes
Ramtin Pedarsani, Dong Yin, Kangwook Lee, and Kannan Ramchandran
IEEE Transactions on Information Theory, June 2017
- Community Recovery in Hypergraphs
Kwangjun Ahn, Kangwook Lee, and Changho Suh
Allerton 2016
- Speeding Up Distributed Machine Learning using Codes
Kangwook Lee, Maximilian Lam, Ramtin Pedarsani, Dimitris Papailiopoulos, and Kannan Ramchandran
IEEE ISIT 2016
- SAFFRON: Sparse-Graph Code Framework for Group Testing
Kangwook Lee, Ramtin Pedarsani, and Kannan Ramchandran
IEEE ISIT 2016
- Learning Analytics: Collaborative Filtering or Regression With Experts?
Kangwook Lee, Jichang Chung, Youngmin Cha, and Changho Suh
NeurIPS 2016 Workshop on ML for Education
- When Do Redundant Requests Reduce Latency?
Nihar Shah, Kangwook Lee, and Kannan Ramchandran
IEEE Transactions on Communications, February 2016
- On Scheduling Redundant Requests with Cancellation Overheads
Kangwook Lee, Ramtin Pedarsani, and Kannan Ramchandran
Allerton 2015
- Sparse Covariance Estimation Based on Sparse-Graph Codes
Ramtin Pedarsani, Kangwook Lee, and Kannan Ramchandran
Allerton 2015
- Fast and Robust Compressive Phase Retrieval with Sparse-Graph Codes
Dong Yin, Kangwook Lee, and Kannan Ramchandran
IEEE ISIT 2015
- Capacity-Approaching PhaseCode for Low-Complexity Compressive Phase Retrieval
Ramtin Pedarsani, Kangwook Lee, and Kannan Ramchandran
IEEE ISIT 2015
- Speeding Up Distributed Machine Learning using Codes
Kangwook Lee, Maximilian Lam, Ramtin Pedarsani, Dimitris Papailiopoulos, and Kannan Ramchandran
NeurIPS 2015 ML Systems Workshop
- PhaseCode: Fast and Efficient Compressive Phase Retrieval based on Sparse-Graph-Codes
Ramtin Pedarsani, Kangwook Lee, and Kannan Ramchandran
Allerton 2014
- The MDS Queue: Analysing the Latency Performance of Erasure Codes
Nihar B. Shah, Kangwook Lee, and Kannan Ramchandran
IEEE ISIT 2014
- When Do Redundant Requests Reduce Latency?
Nihar B. Shah, Kangwook Lee, and Kannan Ramchandran
Allerton 2013
- A VoD System for Massively Scaled, Heterogeneous Environments: Design and Implementation
Kangwook Lee, Lisa Yan, Abhay Parekh, and Kannan Ramchandran
IEEE MASCOTS 2013 | Best Paper Award finalist
- An Optimized Distributed Video-on-Demand Streaming System: Theory and Design
Kangwook Lee, Hao Zhang, Ziyu Shao, Minghua Chen, Abhay Parekh, and Kannan Ramchandran
Allerton 2012
- Codes for a Distributed Caching based Video-On-Demand System
Sameer Pawar, Salim Rouayheb, Hao Zhang, Kangwook Lee, and Kannan Ramchandran
Asilomar 2011
- Experimental evaluation of optimal CSMA
Bruno Nardelli, Jinsung Lee, Kangwook Lee, Yung Yi, Song Chong, Edward Knightly, and Mung Chiang
IEEE INFOCOM 2011
Selected Talks
- (Mar. 2026) NVIDIA GTC 2026 Panel — Charting a Course for the Next Decade of Gaming with AI
- (Dec. 2025) Department Seminar, Seoul National University — AI for Video Games
- (Oct. 2025) AWS Research Day for UW-Madison — Toward More Efficient and Useful LLM Agents
- (July 2025) ICML 2025 Workshop on Tiny-Titans (video) — Towards Principled Design of SLM Agents for Edge Devices
- (May 2025) Department Seminar, Korea University — Generative AI and AI Agents
- (Apr. 2025) UCSC ECE Seminar — Bridging Large Language Models and Classical Machine Learning: From LIFT to LLM-Lasso
- (Mar. 2025) Helmholtz/ELLIS Workshop on Foundation Models in Science, Berlin — Bridging Large Language Models and Classical Machine Learning: From LIFT to LLM-Lasso
- (Mar. 2025) EnCORE Workshop on Theoretical Perspective on LLMs (video) — Beyond Decoder-Only Next Token Prediction
- (Feb. 2025) ECE Grad Seminar, University of Pittsburgh — Beyond Decoder-Only Next Token Prediction
- (Nov. 2024) Seminars on AI Core and Applications, Seoul National University
- (Oct. 2024) 2024 SIAM Conference on Mathematics of Data Science, Atlanta — Dual Operating Modes of In-Context Learning
- (Apr. 2024) Johns Hopkins University CIS/MINDS seminar — Theoretical Exploration of Foundation Model Adaptation Methods
- (Feb. 2024) Foundations of Data Science, UCSD/NSF EnCORE (video) — Theoretical Exploration of Foundation Model Adaptation Methods
- (Dec. 2023) CSP Seminar, University of Michigan (video) — Towards a Theoretical Understanding of Parameter-Efficient Fine-Tuning (and Beyond)
- (Nov. 2023) Efficient ML workshop, Google Research, New York — The Expressive Power of Low-Rank Adaptation (LoRA)
All Talks
2026
- (Mar. 2026) NVIDIA GTC 2026 Panel — Charting a Course for the Next Decade of Gaming with AI
2025
- (Dec. 2025) Department Seminar, Seoul National University — AI for Video Games
- (Oct. 2025) AWS Research Day for UW-Madison — Toward More Efficient and Useful LLM Agents
- (July 2025) SILO Seminar, UW-Madison — Generative Agents in Social Psychology and Video Gaming
- (July 2025) ICML 2025 Workshop on Tiny-Titans (video) — Towards Principled Design of SLM Agents for Edge Devices
- (June 2025) Oh Lab, University of Washington — Dual Operating Modes of In-Context Learning
- (May 2025) Department Seminar, Korea University — Generative AI and AI Agents
- (Apr. 2025) UCSC ECE Seminar — Bridging Large Language Models and Classical Machine Learning: From LIFT to LLM-Lasso
- (Apr. 2025) Invited lecture at Microbiology/Oncology 545, UW Madison — Bridging Large Language Models and Classical Machine Learning: From LIFT to LLM-Lasso
- (Mar. 2025) Helmholtz/ELLIS Workshop on Foundation Models in Science, Berlin — Bridging Large Language Models and Classical Machine Learning: From LIFT to LLM-Lasso
- (Mar. 2025) Invited talk at KFAS — Impacts of AI on Researchers
- (Mar. 2025) EnCORE Workshop on Theoretical Perspective on LLMs (video) — Beyond Decoder-Only Next Token Prediction
- (Feb. 2025) ECE Grad Seminar, University of Pittsburgh — Beyond Decoder-Only Next Token Prediction
2024
- (Nov. 2024) SILO Seminar, UW-Madison — ENTP: Encoder-only Next Token Prediction
- (Nov. 2024) Seminars on AI Core and Applications, Seoul National University
- (Oct. 2024) 2024 SIAM Conference on Mathematics of Data Science, Atlanta — Dual Operating Modes of In-Context Learning
- (Sept. 2024) IFDS Seminar, UW-Madison — ENTP: Encoder-only Next Token Prediction
- (Apr. 2024) Johns Hopkins University CIS/MINDS seminar — Theoretical Exploration of Foundation Model Adaptation Methods
- (Mar. 2024) UW-Madison Machine Learning Lunch Meetings — Dual Operating Modes of In-Context Learning
- (Mar. 2024) 58th CISS @ Princeton University — A Probabilistic Framework for Understanding In-Context Task Learning and Retrieval
- (Feb. 2024) 2024 Information Theory and Applications Workshop, San Diego — The Expressive Power of Low-Rank Adaptation (LoRA)
- (Feb. 2024) Foundations of Data Science, UCSD/NSF EnCORE (video) — Theoretical Exploration of Foundation Model Adaptation Methods
2023
- (Dec. 2023) CSP Seminar, University of Michigan (video) — Towards a Theoretical Understanding of Parameter-Efficient Fine-Tuning (and Beyond)
- (Nov. 2023) Efficient ML workshop, Google Research, New York — The Expressive Power of Low-Rank Adaptation (LoRA)
- (Oct. 2023) Trust Perspectives in ML, Law, and Public Policy, IDEAL, Northwestern University — Demystifying Large Language Models: A Comprehensive Overview
- (Oct. 2023) Symposium in Honor of AI Pioneer Professor Leonard Uhr, UW-Madison — Demystifying Large Language Models: A Comprehensive Overview
- (Oct. 2023) Distinguished Lectures in Microbiology, UW-Madison — GPT: Transforming Science, Engineering, and Beyond
- (Oct. 2023) AI in Imaging and Medicine, UW-Madison — The Potential of Large Language Models in Imaging and Medicine
- (Sept. 2023) ML4MI, UW-Madison — Exploring Generative AI: An Introduction to Large Language Models and Diffusion Models
- (Aug. 2023) Fairness and Ethics in ML Seminars, AmFam — Equal Improvability: A New Fairness Notion Considering the Long-Term Impact
- (July 2023) MADLab monthly meeting — Modular Deep Learning Systems with Pretrained Components
- (June 2023) BarryFest, Madison — Unfolding the Magic of GPT: A Flipped Classroom Ode to Barry
- (June 2023) Innovation in Data Seminar, Early Warning — GPT: Transforming Science, Engineering, and Beyond
- (May 2023) The second annual Wisconsin Digital Symposium — Power and Possibility of Large Language Models (like ChatGPT)
- (May 2023) KSEA Distinguished Guest Series — GPT: Transforming Science, Engineering, and Beyond
- (Apr. 2023) UW-Madison Law School, Law 915 — Recent Advances in Trustworthy ML
- (Mar. 2023) Keynote, Midwest Regional Conference — Revolutionizing Science and Engineering through Language Models
- (Feb. 2023) Keynote, CSL Student Conference, UIUC — Score-based Generative Modeling Secretly Minimizes the Wasserstein Distance
- (Feb. 2023) Information Theory and Applications Workshop, San Diego — Score-based Generative Modeling Secretly Minimizes the Wasserstein Distance
- (Feb. 2023) SILO Seminar, UW-Madison — Theoretical Exploration of Foundation Model Adaptation Methods
- (Jan. 2023) Information Theory and Data Science Workshop, Singapore — Score-based Generative Modeling Secretly Minimizes the Wasserstein Distance
- (Jan. 2023) SILO Seminar, UW-Madison — Score-based Generative Modeling Secretly Minimizes the Wasserstein Distance
- (Jan. 2023) The 12th US-Mexico Workshop on Optimization and its Applications, Huatulco, Mexico — On a Bilevel Optimization Approach to Fair Classification
2022
- (Oct. 2022) KRAFTON Developer Conference 2022 — Deep Learning and Video Games
- (Oct. 2022) American Family Mutual Insurance Company’s Visiting Professor Series — Mixup Relabeling using Generative Models
- (Sept. 2022) KAIST CS Colloquium — Recent Advances in Trustworthy ML
- (Aug. 2022) Samsung Advanced Institute of Technology (SAIT) — LIFT: Language-Interfaced Fine-Tuning for Non-Language Machine Learning Tasks
- (July 2022) Electronic & Information Research Information Center — LIFT: Language-Interfaced Fine-Tuning for Non-Language Machine Learning Tasks
- (Apr. 2022) Electrical Engineering Department, University of Southern California — On a Bilevel Optimization Approach to Fair Classification
- (Apr. 2022) CCDC Seminar Series, UC Santa Barbara — On a Bilevel Optimization Approach to Fair Classification
- (Mar. 2022) EE Colloquium Lecture Series, KAIST — On Trustworthy and Scalable Machine Learning
- (Feb. 2022) BLISS Seminar, UC Berkeley — Improving Fairness via Federated Learning
- (Feb. 2022) MLWiNS Tech Talks, INTEL — Improving Fairness via Federated Learning
- (Jan. 2022) Physics meets Machine Learning Seminar, UW-Madison — A gentle introduction to new ideas in modern ML
2021
- (Dec. 2021) KAIST International Symposium on AI and future Society — Improving Fairness via Federated Learning
- (Nov. 2021) WID All Hands, UW-Madison — On Trustworthy Machine Learning
- (Oct. 2021) AI+Society seminar, UW-Madison — On Trustworthy Machine Learning
- (Oct. 2021) KRAFTON Developer Connect — On Trustworthy Machine Learning
- (Sept. 2021) American Family Insurance Visiting Professor Series — Mixup Relabeling using Generative Models
- (June 2021) KRAFTON — Recent Trends of AI Research
- (June 2021) POSTECH — Information Theory and Coding for Trustworthy and Scalable Machine Learning
- (May 2021) “Shannon meets Turing” Colloquium, Seoul National University — Information Theory and Coding for Trustworthy and Scalable Machine Learning
- (Apr. 2021) IFDS Ethics & Algorithms SIG, UC Santa Cruz — Fairness in AI
- (Mar. 2021) Furiosa.ai — Recent Trends of AI Research
- (Feb. 2021) Korea Information and Communications Society — Fairness in AI
2020
- (Dec. 2020) Machine Learning Ideas, Microsoft Research New England — Fairness in AI
- (Nov. 2020) SILO Seminar, UW-Madison — Fairness in AI
- (Nov. 2020) BLISS Seminar, UC Berkeley — Fairness in AI
- (Oct. 2020) The 11th International Conference on ICT Convergence — Information Theory and Coding for Trustworthy and Scalable Machine Learning
- (May 2020) Air Force Research Laboratory — FR-Train: A mutual information-based approach to fair and robust training
- (Feb. 2020) The Chaos and Complex Systems Seminar, UW-Madison — Information Theory and Coding for Machine Learning at Scale
- (Jan. 2020) SK T-Brain — Information Theory and Coding for Machine Learning at Scale
- (Jan. 2020) Furiosa.ai — Information Theory and Coding for Machine Learning at Scale
2019
- (Oct. 2019) SILO Seminar, UW-Madison — Binary Rating Estimation with Graph Side Information
- (Aug. 2019) Samsung Electronics — Learning with Simulated Data
- (May 2019) The 29th Joint Conference on Communications and Information, Korea — Binary Rating Estimation with Graph Side Information
- (Apr. 2019) Korea Information and Communications Society — Learning with Simulated Data
- (Mar. 2019) ECE, UW-Madison — Information Theory and Coding for Machine Learning at Scale
- (Jan. 2019) Korea Information and Communications Society — Machine Learning (Introduction and Advanced Topics)
2018
- (May 2018) Kakao Brain — Binary Rating Estimation with Graph Side Information
- (Jan. 2018) National Information Society Agency, Daegu — Speeding Up Distributed Machine Learning Using Codes
- (Jan. 2018) DGIST, Daegu — Speeding Up Distributed Machine Learning Using Codes
2017
- (Dec. 2017) Seoul National University — Speeding Up Distributed Machine Learning Using Codes
- (Nov. 2017) UC Berkeley BASiCS Seminar — Binary Rating Estimation with Graph Side Information
- (May 2017) Naver — Speeding Up Distributed Machine Learning Using Codes
- (May 2017) Information Theory and Machine Learning Workshop, KAIST — Speeding Up Distributed Machine Learning Using Codes
2016 and earlier
- (Nov. 2016) National Information Society Agency, Daegu — Machine Learning (Introduction and Advanced Topics)
- (June 2016) Samsung Electronics DMC R&D Center — Speeding Up Distributed Machine Learning Using Codes
- (Feb. 2016) Information Theory and Applications Workshop — Speeding Up Distributed Machine Learning Using Codes
- (Jan. 2016) Seoul National University — Sub-linear Time Algorithms for Sparse Signal Recovery Based on Sparse-graph Codes
- (May 2015) IEEE Communication Theory Workshop — A VoD System for Massively Scaled, Heterogeneous Environments
- (May 2015) University of Seoul — A VoD System for Massively Scaled, Heterogeneous Environments
- (May 2014) KAIST — The MDS Queue: Analysing the Latency Performance of Codes
- (Dec. 2013) DIMACS Workshop on Algorithms for Green Data Storage, Rutgers University — When Do Redundant Requests Reduce Latency?
- (Oct. 2013) IEEE International Conference on Big Data — The MDS Queue: Analysing the Latency Performance of Codes
Lee Lab @ KRAFTON/UW-Madison
I am hiring student interns and postdocs to work directly with me at KRAFTON, and I am also hosting visiting researchers. Location: Seoul/Bay Area.
Jungtaek Kim, postdoc
Nayoung Lee, PhD student (co-advised with Prof. Dimitris Papailiopoulos)
Liu Yang, PhD student (co-advised with Prof. Rob Nowak and Prof. Dimitris Papailiopoulos)
Joseph Shenouda, PhD student (co-advised with Prof. Rob Nowak)
Thomas Zeng, PhD student
Jongwon Jeong, PhD student
Ethan Ewer, undergraduate
Lynnix Zou, undergraduate
Chungpa Lee, visiting researcher (Yonsei University)
Dosung Lee, visiting researcher (Korea University)
Dr. Yuchen Zeng, PhD (2025) => Senior Researcher @ Microsoft Research
Dr. Ziqian Lin, PhD (2025) => Research Scientist @ Google
Dr. Ying Fan, PhD (2025) => Senior Researcher @ Microsoft Research
Dr. Tuan Dinh, PhD (2023) => Postdoc @ UCSF
Dr. Changhun Jo, PhD (2022)
Ruisu Zhang, MS (2024) => Machine Learning Engineer @ WeRide
Andrew Geng, MS (2023) => Research Engineer @ IBM
Liang Shang, MS (2021) => PhD student @ UW Madison
Dr. Jy-yong Sohn, postdoc (2021–2022) => Assistant Professor @ Yonsei University, Korea
Dr. Daewon Seo, postdoc (2020–2021) => Assistant Professor @ DGIST, Korea
Jackson Kunde, undergraduate (2024–2025 Hilldale Fellow) => Machine Learning Engineer @ Ohalo
Bryce Chen, undergraduate (2023–2024 Hilldale Fellow) => PhD student @ University of Washington, Seattle
Michael Gira, undergraduate (2022–2023 Hilldale Fellow) => Software Engineer @ Microsoft
Bokdol Lee, furry collaborator (Philosophy, Math, and Kinesiology)
Awards & Service
Awards and Honors
- Outstanding Paper Award, The 5th Wordplay Workshop @ EMNLP 2025, 2025
- Fusion Fund Distinguished Scholar Network, Inaugural Member, 2025
- NSF CAREER Award, 2024
- Amazon Research Awards, 2024
- Best Paper Award, The Federated Learning Systems (FLSys) Workshop @ MLSys 2023, 2023
- ECE Grainger Faculty Scholarship Award, UW-Madison ECE, 2022
- Young Investigator Grants Award, KSEA, 2022
- The Joint Communications Society/Information Theory Society Paper Award, IEEE, 2020
- The Outstanding Graduate Student Instructor Award, UC Berkeley, 2016
- Best Paper Award Finalist, IEEE MASCOTS 2013, 2013
- KFAS Fellowship, Korea Foundation for Advanced Studies (KFAS), 2010 - 2015
- Highest GPA (4.19/4.30) among all 800+ graduates across all departments, KAIST, 2010
- Korea Talent Award (Presidential Award), KOFAC, 2009
Selected Services
- Area Chair, NeurIPS 2025, 2024, 2023, 2022, 2021
- Area Chair, ICML 2026, 2025, 2024, 2023
- Area Chair, ICLR 2026, 2025
- Area Chair, COLM 2026, 2025, 2024
- Program Committee, MLSys 2026, 2025, 2024, 2023, 2022, 2021, 2020
- Action Editor, Transactions on Machine Learning Research (TMLR), 2026, 2025, 2024, 2023, 2022
Teaching
At UW-Madison
- ECE 901 Advanced Topics in Large Language Models, Fall 2025
- ECE/ISYE 570 Ethics of Data for Engineers, Spring 2025, Spring 2024
- ECE/CS/ME 539 Introduction to Artificial Neural Networks, Fall 2024
- ECE 901 Theory of Deep Learning Algorithms and Architectures, Spring 2023
- ECE/CS 561 Probability and Information Theory in Machine Learning, Fall 2022
- ECE/CS/ME 532 Matrix Methods in Machine Learning, Spring 2022, Fall 2020, Fall 2019
- ECE 204 Data Science & Engineering, Fall 2021
- ECE/CS 761 Mathematical Foundations of Machine Learning, Spring 2021, Spring 2020
At UC Berkeley
- Head GSI (Outstanding GSI Award), EECS 126 Probability and Random Processes, Fall 2015, Fall 2014 | webpage
Background
Academic Appointments
- Associate Professor (with tenure), University of Wisconsin-Madison, 2025.07 – 2026.01
- Assistant Professor, University of Wisconsin-Madison, 2019.08 – 2025.06
- Research Assistant Professor, KAIST, 2018.10 – 2019.06
Mentor: Prof. Changho Suh
- Postdoctoral Fellow, KAIST, 2016.06 – 2018.09
Mentor: Prof. Changho Suh
- Graduate Student Researcher, UC Berkeley, 2010.08 – 2016.05
Education
- Ph.D., University of California, Berkeley, 2010.08 – 2016.05 (Electrical Engineering and Computer Sciences)
Advisor: Prof. Kannan Ramchandran
- M.S., University of California, Berkeley, 2010.08 – 2012.12 (Electrical Engineering and Computer Sciences)
Advisor: Prof. Kannan Ramchandran
- B.S., KAIST, 2006.03 – 2010.05 (Electrical Engineering)
Advisors: Prof. Sae-Young Chung and Prof. Yung Yi
Highest GPA (4.19/4.30) among all 800+ graduates across all departments, 2010
Work Experience
- CAIO, KRAFTON, 2026.02 – present
- CTO, Ludo Robotics, 2026.02 – present
- Head of Deep Learning R&D, KRAFTON, 2022.04 – 2026.01
- Software Engineer Intern, Lytmus Inc., 2013.06 – 2013.09
- Software Engineer Intern, Samsung Electronics, 2009.07
- Software/Hardware Engineer Intern, LG Display, 2008.06 – 2008.08