Talks
(Mar. 2026) NVIDIA GTC 2026 Panel — Charting a Course for the Next Decade of Gaming with AI
(Dec. 2025) Department Seminar, Seoul National University — AI for Video Games
(July 2025) ICML 2025 Workshop on Tiny-Titans (video) — Towards Principled Design of SLM Agents for Edge Devices
(May 2025) Department Seminar, Korea University — Generative AI and AI Agents
(Apr. 2025) UCSC ECE Seminar — From LIFT to LLM-Lasso
(Mar. 2025) Helmholtz/ELLIS Workshop, Berlin — From LIFT to LLM-Lasso for Predictive Modeling
(Mar. 2025) EnCORE Workshop, San Diego (video) — Beyond Decoder-Only Next Token Prediction
(Feb. 2025) ECE Grad Seminar, University of Pittsburgh — Beyond Decoder-Only Next Token Prediction
(Nov. 2024) Seminars on AI Core and Applications, Seoul National University
(Oct. 2024) 2024 SIAM Conference on Mathematics of Data Science — Dual Operating Modes of ICL
(Apr. 2024) Johns Hopkins University CIS/MINDS seminar — Theoretical Exploration of Foundation Model Adaptation Methods
(Mar. 2024) 58th CISS @ Princeton University — A Probabilistic Framework for Understanding In-Context Task Learning and Retrieval
(Feb. 2024) 2024 Information Theory and Applications Workshop — The Expressive Power of Low-Rank Adaptation (LoRA)
(Feb. 2024) Foundations of Data Science, UCSD/NSF EnCORE (video) — Theoretical Exploration of Foundation Model Adaptation Methods
(Dec. 2023) CSP Seminar, University of Michigan (video) — Towards a Theoretical Understanding of Parameter-Efficient Fine-Tuning
(Nov. 2023) Efficient ML workshop, Google Research New York — The Expressive Power of Low-Rank Adaptation
(June 2021) KRAFTON — Recent Trends of AI Research
(June 2021) POSTECH — Information Theory and Coding for Trustworthy and Scalable Machine Learning
(May 2021) “Shannon meets Turing” Colloquium, Seoul National University — Information Theory and Coding for Trustworthy and Scalable Machine Learning
(Apr. 2021) IFDS Ethics & Algorithms SIG, UC Santa Cruz — Fairness in AI
(Mar. 2021) Furiosa.ai — Recent Trends of AI Research
(Feb. 2021) Korea Information and Communications Society — Fairness in AI
(Dec. 2020) Machine Learning Ideas, Microsoft Research New England — Fairness in AI
(Nov. 2020) SILO Seminar, UW-Madison — Fairness in AI
(Nov. 2020) BLISS Seminar, UC Berkeley — Fairness in AI
(Oct. 2020) The 11th International Conference on ICT Convergence — Information Theory and Coding for Trustworthy and Scalable Machine Learning
(May 2020) Air Force Research Laboratory — FR-Train: A mutual information-based approach to fair and robust training
(Feb. 2020) The Chaos and Complex Systems Seminar, UW-Madison — Information Theory and Coding for Machine Learning at Scale
(Jan. 2020) SK T-Brain — Information Theory and Coding for Machine Learning at Scale
(Jan. 2020) Furiosa.ai — Information Theory and Coding for Machine Learning at Scale
(Oct. 2019) SILO Seminar, UW-Madison — Binary Rating Estimation with Graph Side Information
(Aug. 2019) Samsung Electronics — Learning with Simulated Data
(May 2019) The 29th Joint Conference on Communications and Information, Korea — Binary Rating Estimation with Graph Side Information
(Apr. 2019) Korea Information and Communications Society — Learning with Simulated Data
(Mar. 2019) ECE, UW-Madison — Information Theory and Coding for Machine Learning at Scale
(Jan. 2019) Korea Information and Communications Society — Machine Learning (Introduction and Advanced Topics)
(May 2018) Kakao Brain — Binary Rating Estimation with Graph Side Information
(Jan. 2018) National Information Society Agency, Daegu — Speeding Up Distributed Machine Learning Using Codes
(Jan. 2018) DGIST, Daegu — Speeding Up Distributed Machine Learning Using Codes
(Dec. 2017) Seoul National University — Speeding Up Distributed Machine Learning Using Codes
(Nov. 2017) UC Berkeley BASiCS Seminar — Binary Rating Estimation with Graph Side Information
(May 2017) Naver — Speeding Up Distributed Machine Learning Using Codes
(May 2017) Information Theory and Machine Learning Workshop, KAIST — Speeding Up Distributed Machine Learning Using Codes
(Nov. 2016) National Information Society Agency, Daegu — Machine Learning (Introduction and Advanced Topics)
(June 2016) Samsung Electronics DMC R&D Center — Speeding Up Distributed Machine Learning Using Codes
(Feb. 2016) Information Theory and Applications Workshop — Speeding Up Distributed Machine Learning Using Codes
(Jan. 2016) Seoul National University — Sub-linear Time Algorithms for Sparse Signal Recovery Based on Sparse-graph Codes
(May 2015) IEEE Communication Theory Workshop — A VoD System for Massively Scaled, Heterogeneous Environments
(May 2015) University of Seoul — A VoD System for Massively Scaled, Heterogeneous Environments
(May 2014) KAIST — The MDS Queue: Analysing the Latency Performance of Codes
(Oct. 2013) IEEE International Conference on Big Data — The MDS Queue: Analysing the Latency Performance of Codes
(Dec. 2013) DIMACS Workshop on Algorithms for Green Data Storage, Rutgers University — When Do Redundant Requests Reduce Latency?

Publications
Selected Publications

ParallelBench: Understanding the Trade-offs of Parallel Decoding in Diffusion LLMs — Wonjun Kang, Kevin Galim, Seunghyuk Oh, Minjae Lee, Yuchen Zeng, Shuibai Zhang, Coleman Hooper, Yuezhou Hu, Hyung Il Koo, Nam Ik Cho, and Kangwook Lee — ICLR 2026
Orak: A Foundational Benchmark for Training and Evaluating LLM Agents on Diverse Video Games — Dongmin Park, Minkyu Kim, Beongjun Choi, et al., Kangwook Lee, and Jaewoong Cho — ICLR 2026; Outstanding Paper Award @ EMNLP 2025 Wordplay Workshop
VersaPRM: Multi-Domain Process Reward Model via Synthetic Reasoning Data — Thomas Zeng, Shuibai Zhang, et al., and Kangwook Lee — ICML 2025 (oral)
Everything Everywhere All at Once: LLMs can In-Context Learn Multiple Tasks in Superposition — Zheyang Xiong, Ziyang Cai, et al., Kangwook Lee, and Dimitris Papailiopoulos — ICML 2025 (spotlight)
Self-Improving Transformers Overcome Easy-to-Hard and Length Generalization Challenges — Nayoung Lee, Ziyang Cai, Avi Schwarzschild, Kangwook Lee, and Dimitris Papailiopoulos — ICML 2025
Looped Transformers for Length Generalization — Ying Fan, Yilun Du, Kannan Ramchandran, and Kangwook Lee — ICLR 2025
Rare-to-Frequent: Unlocking Compositional Generation Power of Diffusion Models on Rare Concepts with LLM Guidance — Dongmin Park, Sebin Kim, Taehong Moon, Minkyu Kim, Kangwook Lee, and Jaewoong Cho — ICLR 2025 (spotlight)
Dual Operating Modes of In-Context Learning — Ziqian Lin and Kangwook Lee — ICML 2024
The Expressive Power of Low-Rank Adaptation — Yuchen Zeng and Kangwook Lee — ICLR 2024
Teaching Arithmetic to Small Transformers — Nayoung Lee, Kartik Sreenivasan, Jason Lee, Kangwook Lee, and Dimitris Papailiopoulos — ICLR 2024
DPOK: Reinforcement Learning for Fine-tuning Text-to-Image Diffusion Models — Ying Fan, Olivia Watkins, et al., and Kimin Lee — NeurIPS 2023
LIFT: Language-Interfaced FineTuning for Non-Language Machine Learning Tasks — Tuan Dinh*, Yuchen Zeng*, et al., and Kangwook Lee — NeurIPS 2022
Score-based generative modeling secretly minimizes the Wasserstein distance — Dohyun Kwon, Ying Fan, and Kangwook Lee — NeurIPS 2022
Coded-InvNet for Resilient Prediction Serving Systems — Tuan Dinh and Kangwook Lee — ICML 2021 (long oral)
Speeding Up Distributed Machine Learning Using Codes — Kangwook Lee, Maximilian Lam, Ramtin Pedarsani, Dimitris Papailiopoulos, and Kannan Ramchandran — IEEE Transactions on Information Theory, 2018; The Joint Communications Society/Information Theory Society Paper Award, 2020

Preprints

Fine-Tuning Without Forgetting In-Context Learning: A Theoretical Analysis of Linear Attention Models — Chungpa Lee, Jy-yong Sohn, and Kangwook Lee — arXiv 2026
In-Context Learning with Hypothesis-Class Guidance — Ziqian Lin, Shubham Kumar Bharti, and Kangwook Lee — arXiv 2025
LLM-Lasso: A Robust Framework for Domain-Informed Feature Selection and Regularization — Erica Zhang, Ryunosuke Goto, Naomi Sagan, Jurik Mutter, Nick Phillips, Ash Alizadeh, Kangwook Lee, Jose Blanchet, Mert Pilanci, and Robert Tibshirani — arXiv 2025
ReJump: A Tree-Jump Representation for Analyzing and Improving LLM Reasoning — Yuchen Zeng, Shuibai Zhang, Wonjun Kang, Shutong Wu, Lynnix Zou, Ying Fan, Heeju Kim, Ziqian Lin, Jungtaek Kim, Hyung Il Koo, Dimitris Papailiopoulos, and Kangwook Lee — arXiv 2025 | Summary | Github
How to Correctly Report LLM-as-a-Judge Evaluations — Chungpa Lee, Thomas Zeng, Jongwon Jeong, Jy-yong Sohn, and Kangwook Lee — arXiv 2025 | Summary | Github
Multi-Bin Batching for Increasing LLM Inference Throughput — Ozgur Guldogan, Jackson Kunde, Kangwook Lee, and Ramtin Pedarsani — arXiv 2024

2026

TAPE: Tool-Guided Adaptive Planning and Constrained Execution in Language Model Agents — Jongwon Jeong, Jungtaek Kim, and Kangwook Lee — ICLR 2026 Workshop on Agentic AI in the Wild | Github
Draft-based Approximate Inference for LLMs — Kevin Galim, Ethan Ewer, Wonjun Kang, Minjae Lee, Hyung Il Koo, and Kangwook Lee — ICLR 2026 | Summary | Github
ParallelBench: Understanding the Trade-offs of Parallel Decoding in Diffusion LLMs — Wonjun Kang, Kevin Galim, Seunghyuk Oh, Minjae Lee, Yuchen Zeng, Shuibai Zhang, Coleman Hooper, Yuezhou Hu, Hyung Il Koo, Nam Ik Cho, and Kangwook Lee — ICLR 2026 | Summary | Github
Orak: A Foundational Benchmark for Training and Evaluating LLM Agents on Diverse Video Games — Dongmin Park, Minkyu Kim, Beongjun Choi, Junhyuck Kim, Keon Lee, Jonghyun Lee, Inkyu Park, Byeong-Uk Lee, Jaeyoung Hwang, Jaewoo Ahn, Ameya Sunil Mahabaleshwarkar, Bilal Kartal, Pritam Biswas, Yoshi Suhara, Kangwook Lee, and Jaewoong Cho — ICLR 2026; Outstanding Paper Award @ EMNLP 2025 Wordplay Workshop | Summary | Github
TABED: Test-Time Adaptive Ensemble Drafting for Robust Speculative Decoding in LVLMs — Minjae Lee, Wonjun Kang, Byeongkeun Ahn, Christian Classen, Kevin Galim, Seunghyuk Oh, Minghao Yan, Hyung Il Koo, and Kangwook Lee — EACL 2026 (Findings)

2025

Orak: A Foundational Benchmark for Training and Evaluating LLM Agents on Diverse Video Games — Dongmin Park, Minkyu Kim, Beongjun Choi, Junhyuck Kim, Keon Lee, Jonghyun Lee, Inkyu Park, Byeong-Uk Lee, Jaeyoung Hwang, Jaewoo Ahn, Ameya Sunil Mahabaleshwarkar, Bilal Kartal, Pritam Biswas, Yoshi Suhara, Kangwook Lee, and Jaewoong Cho — EMNLP 2025 (Wordplay Workshop); Outstanding Paper Award
Transformers in the Dark: Navigating unknown search spaces via noisy feedback — Jungtaek Kim, Ziqian Lin, Thomas Zeng, Minjae Lee, Chungpa Lee, Jy-yong Sohn, Hyung Il Koo, and Kangwook Lee — NeurIPS 2025 (WCTD Workshop)
ENTP: Encoder-only Next Token Prediction — Ethan Ewer, Daewon Chae, Thomas Zeng, Jinkyu Kim, and Kangwook Lee — NeurIPS 2025 (WCTD Workshop) (spotlight)
Infected Smallville: How Disease Threat Shapes Sociality in LLM Agents — Soyeon Choi, Kangwook Lee, Oliver Sng, and Joshua M. Ackerman — ICML 2025 Workshop | Summary
Improvement-Guided Iterative DPO for Diffusion Models — Ying Fan, Fei Deng, Yang Zhao, Sahil Singla, Rahul Jain, Tingbo Hou, Kangwook Lee, Feng Yang, Deepak Ramachandran, and Qifei Wang — ICML 2025 Workshop
In-batch Ensemble Drafting: Toward Fast and Robust Speculative Decoding for Multimodal Language Models — Minjae Lee, Wonjun Kang, Byeongkeun Ahn, Christian Classen, Minghao Yan, Hyung Il Koo, and Kangwook Lee — ICLR 2025 (SCOPE Workshop)
Self-Improving Transformers Overcome Easy-to-Hard and Length Generalization Challenges — Nayoung Lee, Ziyang Cai, Avi Schwarzschild, Kangwook Lee, and Dimitris Papailiopoulos — ICLR 2025 Workshop on Scaling Self-Improving Foundation Models
Task Vectors in In-Context Learning: Emergence, Formation, and Benefit — Liu Yang, Ziqian Lin, Kangwook Lee, Dimitris Papailiopoulos, and Robert Nowak — COLM 2025
VersaPRM: Multi-Domain Process Reward Model via Synthetic Reasoning Data — Thomas Zeng, Shuibai Zhang, Shutong Wu, Christian Classen, Daewon Chae, Ethan Ewer, Minjae Lee, Heeju Kim, Wonjun Kang, Jackson Kunde, Ying Fan, Jungtaek Kim, Hyung Il Koo, Kannan Ramchandran, Dimitris Papailiopoulos, and Kangwook Lee — ICML 2025 (oral) | Summary | Github | HuggingFace
Everything Everywhere All at Once: LLMs can In-Context Learn Multiple Tasks in Superposition — Zheyang Xiong, Ziyang Cai, John Cooper, Albert Ge, Vasilis Papageorgiou, Zack Sifakis, Angeliki Giannou, Ziqian Lin, Liu Yang, Saurabh Agarwal, Grigorios Chrysos, Samet Oymak, Kangwook Lee, and Dimitris Papailiopoulos — ICML 2025 (spotlight)
Parameter-Efficient Fine-Tuning of State Space Models — Kevin Galim, Wonjun Kang, Yuchen Zeng, Hyung Il Koo, and Kangwook Lee — ICML 2025
Self-Improving Transformers Overcome Easy-to-Hard and Length Generalization Challenges — Nayoung Lee, Ziyang Cai, Avi Schwarzschild, Kangwook Lee, and Dimitris Papailiopoulos — ICML 2025
Looped Transformers for Length Generalization — Ying Fan, Yilun Du, Kannan Ramchandran, and Kangwook Lee — ICLR 2025 | Summary | Github
From Artificial Needles to Real Haystacks: Improving Retrieval Capabilities in LLMs by Finetuning on Synthetic Data — Zheyang Xiong, Vasilis Papageorgiou, Kangwook Lee, and Dimitris Papailiopoulos — ICLR 2025 | Summary | Github
Rare-to-Frequent: Unlocking Compositional Generation Power of Diffusion Models on Rare Concepts with LLM Guidance — Dongmin Park, Sebin Kim, Taehong Moon, Minkyu Kim, Kangwook Lee, and Jaewoong Cho — ICLR 2025 (spotlight) | Summary | Github
ENTP: Encoder-only Next Token Prediction — Ethan Ewer, Daewon Chae, Thomas Zeng, Jinkyu Kim, and Kangwook Lee — Transactions on Machine Learning Research (TMLR) 2025
Improving CLIP Counting Accuracy via Parameter-Efficient Fine-Tuning — Ruisu Zhang, Yicong Chen, and Kangwook Lee — Transactions on Machine Learning Research (TMLR) 2025 | Github
Buffer-based Gradient Projection for Continual Federated Learning — Shenghong Dai, Jy-yong Sohn, Yicong Chen, S M Iftekharul Alam, Ravikumar Balakrishnan, Suman Banerjee, Nageen Himayat, and Kangwook Lee — Transactions on Machine Learning Research (TMLR) 2025 | Github

2024

Can MLLMs Perform Text-to-Image In-Context Learning? — Yuchen Zeng*, Wonjun Kang*, Yicong Chen, Hyung Il Koo, and Kangwook Lee — COLM 2024 | Summary | Github
Dual Operating Modes of In-Context Learning — Ziqian Lin and Kangwook Lee — ICML 2024 | Summary | Github
Can Mamba Learn How To Learn? A Comparative Study on In-Context Learning Tasks — Jongho Park, Jaeseung Park, Zheyang Xiong, Nayoung Lee, Jaewoong Cho, Samet Oymak, Kangwook Lee, and Dimitris Papailiopoulos — ICML 2024 | Github
Memorization Capacity for Additive Fine-Tuning with Small ReLU Networks — Jy-yong Sohn, Dohyun Kwon, Seoyeon An, and Kangwook Lee — UAI 2024
The Expressive Power of Low-Rank Adaptation — Yuchen Zeng and Kangwook Lee — ICLR 2024 | Summary | Github
Image Clustering Conditioned on Text Criteria — Sehyun Kwon, Jaeseung Park, Minkyu Kim, Jaewoong Cho, Ernest K. Ryu, and Kangwook Lee — ICLR 2024 | Summary | Github
Teaching Arithmetic to Small Transformers — Nayoung Lee, Kartik Sreenivasan, Jason Lee, Kangwook Lee, and Dimitris Papailiopoulos — ICLR 2024 | Summary | Github
Looped Transformers are Better at Learning Learning Algorithms — Liu Yang, Kangwook Lee, Robert D Nowak, and Dimitris Papailiopoulos — ICLR 2024 | Summary | Github
Variation Spaces for Multi-Output Neural Networks: Insights on Multi-Task Learning and Network Compression — Joseph Shenouda, Rahul Parhi, Kangwook Lee, and Robert D. Nowak — Journal of Machine Learning Research (JMLR) 2024
Mini-Batch Optimization of Contrastive Loss — Jaewoong Cho*, Kartik Sreenivasan*, Keon Lee, Kyunghoo Mun, Soheun Yi, Jeong-Gwan Lee, Anna Lee, Jy-yong Sohn, Dimitris Papailiopoulos, and Kangwook Lee — Transactions on Machine Learning Research (TMLR) 2024
Predictive Pipelined Decoding: A Compute-Latency Trade-off for Exact LLM Decoding — Seongjun Yang, Gibbeum Lee, Jaewoong Cho, Dimitris Papailiopoulos, and Kangwook Lee — Transactions on Machine Learning Research (TMLR) 2024
Superresolution emulation of large cosmological fields with a 3D conditional diffusion model — Adam Rouhiainen, Michael Gira, Moritz Münchmeyer, Kangwook Lee, and Gary Shiu — Physical Review D 2024

2023

DPOK: Reinforcement Learning for Fine-tuning Text-to-Image Diffusion Models — Ying Fan, Olivia Watkins, Yuqing Du, Hao Liu, Moonkyung Ryu, Craig Boutilier, Pieter Abbeel, Mohammad Ghavamzadeh, Kangwook Lee, and Kimin Lee — NeurIPS 2023
Prompted LLMs as Chatbot Modules for Long Open-domain Conversation — Gibbeum Lee, Volker Hartmann, Jongho Park, Dimitris Papailiopoulos, and Kangwook Lee — ACL 2023 (Findings, Short)
Improving Fair Training under Correlation Shifts — Yuji Roh, Kangwook Lee, Steven Euijong Whang, and Changho Suh — ICML 2023
Optimizing DDPM Sampling with Shortcut Fine-Tuning — Ying Fan and Kangwook Lee — ICML 2023
Looped Transformers as Programmable Computers — Angeliki Giannou*, Shashank Rajput*, Jy-yong Sohn, Kangwook Lee, Jason D. Lee, and Dimitris Papailiopoulos — ICML 2023
Equal Improvability: A New Fairness Notion Considering the Long-Term Impact — Ozgur Guldogan*, Yuchen Zeng*, Jy-yong Sohn, Ramtin Pedarsani, and Kangwook Lee — ICLR 2023
FedGP: Buffer-based Gradient Projection for Continual Federated Learning — Shenghong Dai, Bryce Yicong Chen, Jy-yong Sohn, S M Iftekharul Alam, Ravikumar Balakrishnan, Suman Banerjee, Nageen Himayat, and Kangwook Lee — MLSys-FLSys 2023; Best Paper Award

2022

Utilizing Language-Image Pretraining for Efficient and Robust Bilingual Word Alignment — Tuan Dinh, Jy-yong Sohn, Shashank Rajput, Tim Ossowski, Yifei Ming, Junjie Hu, Dimitris Papailiopoulos, and Kangwook Lee — EMNLP 2022 (Findings)
Score-based generative modeling secretly minimizes the Wasserstein distance — Dohyun Kwon, Ying Fan, and Kangwook Lee — NeurIPS 2022
LIFT: Language-Interfaced FineTuning for Non-Language Machine Learning Tasks — Tuan Dinh*, Yuchen Zeng*, Ruisu Zhang, Ziqian Lin, Michael Gira, Shashank Rajput, Jy-yong Sohn, Dimitris Papailiopoulos, and Kangwook Lee — NeurIPS 2022
Rare Gems: Finding Lottery Tickets at Initialization — Kartik Sreenivasan, Jy-yong Sohn, Liu Yang, Matthew Grinde, Aliot Nagle, Hongyi Wang, Kangwook Lee, and Dimitris Papailiopoulos — NeurIPS 2022
GenLabel: Mixup Relabeling using Generative Models — Jy-yong Sohn, Liang Shang, Hongxu Chen, Jaekyun Moon, Dimitris Papailiopoulos, and Kangwook Lee — ICML 2022
Permutation-Based SGD: Is Random Optimal? — Shashank Rajput, Kangwook Lee, and Dimitris Papailiopoulos — ICLR 2022

2021

Sample Selection for Fair and Robust Training — Yuji Roh, Kangwook Lee, Steven Euijong Whang, and Changho Suh — NeurIPS 2021
Gradient Inversion with Generative Image Prior — Jinwoo Jeon, Jaechang Kim, Kangwook Lee, Sewoong Oh, and Jungseul Ok — NeurIPS 2021
Coded-InvNet for Resilient Prediction Serving Systems — Tuan Dinh and Kangwook Lee — ICML 2021 (long oral)
Discrete-Valued Latent Preference Matrix Estimation with Graph Side Information — Changhun Jo and Kangwook Lee — ICML 2021
Accordion: Adaptive Gradient Communication via Critical Learning Regime Identification — Saurabh Agarwal, Hongyi Wang, Kangwook Lee, Shivaram Venkataraman, and Dimitris Papailiopoulos — MLSys 2021
FairBatch: Batch Selection for Model Fairness — Yuji Roh, Kangwook Lee, Steven Euijong Whang, and Changho Suh — ICLR 2021

2020

Attack of the Tails: Yes, You Really Can Backdoor Federated Learning — Hongyi Wang, Kartik Sreenivasan, Shashank Rajput, Harit Vishwakarma, Saurabh Agarwal, Jy-yong Sohn, Kangwook Lee, and Dimitris Papailiopoulos — NeurIPS 2020
Reprogramming GANs via Input Noise Design — Kangwook Lee, Changho Suh, and Kannan Ramchandran — ECML PKDD 2020
FR-Train: A mutual information-based approach to fair and robust training — Yuji Roh, Kangwook Lee, Steven Euijong Whang, and Changho Suh — ICML 2020

2019

Synthesizing Differentially Private Datasets using Random Mixing — Kangwook Lee, Hoon Kim, Kyungmin Lee, Changho Suh, and Kannan Ramchandran — IEEE ISIT 2019
Crash to Not Crash: Learn to Identify Dangerous Vehicles using a Simulator — Hoon Kim*, Kangwook Lee*, Gyeongjo Hwang, and Changho Suh — AAAI 2019 (long oral)
SAFFRON: Sparse-Graph Code Framework for Group Testing — Kangwook Lee, Kabir Chandrasekher, Ramtin Pedarsani, and Kannan Ramchandran — IEEE Transactions on Signal Processing 2019
Community Recovery in Hypergraphs — Kwangjun Ahn*, Kangwook Lee*, and Changho Suh — IEEE Transactions on Information Theory 2019

2018

Binary Rating Estimation with Graph Side Information — Kwangjun Ahn, Kangwook Lee, Hyunseung Cha, and Changho Suh — NeurIPS 2018
Simulated+Unsupervised Learning With Adaptive Data Generation and Bidirectional Mappings — Kangwook Lee*, Hoon Kim*, and Changho Suh — ICLR 2018
Speeding Up Distributed Machine Learning Using Codes — Kangwook Lee, Maximilian Lam, Ramtin Pedarsani, Dimitris Papailiopoulos, and Kannan Ramchandran — IEEE Transactions on Information Theory, January 2018; The Joint Communications Society/Information Theory Society Paper Award, 2020

2017 and earlier

The MDS Queue: Analysing the Latency Performance of Erasure Codes — Kangwook Lee, Nihar Shah, Longbo Huang, and Kannan Ramchandran — IEEE Transactions on Information Theory, May 2017
On Scheduling Redundant Requests With Cancellation Overheads — Kangwook Lee, Ramtin Pedarsani, and Kannan Ramchandran — IEEE/ACM Transactions on Networking, April 2017
When Do Redundant Requests Reduce Latency? — Nihar Shah, Kangwook Lee, and Kannan Ramchandran — IEEE Transactions on Communications, February 2016
PhaseCode: Fast and Efficient Compressive Phase Retrieval based on Sparse-Graph-Codes — Ramtin Pedarsani, Dong Yin, Kangwook Lee, and Kannan Ramchandran — IEEE Transactions on Information Theory, June 2017
A VoD System for Massively Scaled, Heterogeneous Environments: Design and Implementation — Kangwook Lee, Lisa Yan, Abhay Parekh, and Kannan Ramchandran — IEEE MASCOTS 2013; Best Paper Award finalist

Lee Lab @ UW-Madison

Postdocs
PhD Students
Undergraduate Students: Ethan Ewer (ECE, Math, Violin Performance), Lynnix Zou (ECE, CS)
Visiting Researchers

Alumni — PhD Students
Alumni — MS Students:
Ruisu Zhang (2024) => Machine Learning Engineer @ WeRide
Andrew Geng (2023) => Research Engineer @ IBM
Liang Shang (2021) => PhD student @ UW Madison
Alumni — Postdocs:
Dr. Jy-yong Sohn (3/2021 – 12/2022) => Assistant Professor @ Yonsei University, Korea
Dr. Daewon Seo (1/2020 – 7/2021) => Assistant Professor @ DGIST, Korea
Alumni — Undergraduate Students:
Jackson Kunde (2024-2025 Hilldale Fellow) => Machine Learning Engineer @ Ohalo
Bryce Chen (2023-2024 Hilldale Fellow) => PhD student @ University of Washington, Seattle
Michael Gira (2022-2023 Hilldale Fellow) => Software Engineer @ Microsoft

Furry Collaborators: Bokdol Lee (Philosophy, Math, and Kinesiology). Bokdol is a Maltese from South Korea. With his three PhDs, he takes an interdisciplinary approach to the search for the perfect doggie life. He gets his most creative ideas while taking naps.
Awards & Service

Awards and Honors
Outstanding Paper Award, The 5th Wordplay Workshop @ EMNLP 2025
Fusion Fund Distinguished Scholar Network, Inaugural Member, 2025
NSF CAREER Award, 2024
Amazon Research Awards, 2024
Best Paper Award, The Federated Learning Systems (FLSys) Workshop @ MLSys 2023
ECE Grainger Faculty Scholarship Award, 2022, UW Madison ECE
Young Investigator Grants Award, 2022, KSEA (link)
The Joint Communications Society/Information Theory Society Paper Award, 2020, IEEE (link)
The Outstanding Graduate Student Instructor Award, 2016 (link)
Best Paper Award Finalist, 2013, IEEE MASCOTS
KFAS Fellowship, 2010 - 2015, Korea Foundation for Advanced Studies (KFAS)
Graduated with the highest GPA (4.19/4.30) among all (> 800) students across all departments at KAIST who completed their degrees in 2010 (Presidential Award)
Korea Talent Award, 2009, KOFAC

Selected Services

Conference (Senior Program Committee / Area Chair)
Area Chair, NeurIPS 2025, 2024, 2023, 2022, 2021
Area Chair, ICML 2026, 2025, 2024, 2023
Area Chair, ICLR 2026, 2025
Area Chair, COLM 2026, 2025, 2024
Program Committee, MLSys 2026, 2025, 2024, 2023, 2022, 2021, 2020

Journal (Associate Editor)
Action Editor, Transactions on Machine Learning Research, 2026, 2025, 2024, 2023, 2022

Teaching

At UW Madison
ECE 901 Advanced Topics in Large Language Models, Fall 2025
ECE/ISYE 570 Ethics of Data for Engineers, Spring 2025, Spring 2024
ECE/CS/ME 539 Introduction to Artificial Neural Networks, Fall 2024
ECE 901 Theory of Deep Learning Algorithms and Architectures, Spring 2023
ECE/CS 561 Probability and Information Theory in Machine Learning, Fall 2022
ECE/CS/ME 532 Matrix Methods in Machine Learning, Spring 2022, Fall 2020, Fall 2019
ECE 204 Data Science & Engineering, Fall 2021
ECE/CS 761 Mathematical Foundations of Machine Learning, Spring 2021, Spring 2020

At UC Berkeley
Head GSI (Outstanding GSI Award), EECS 126 Probability and Random Processes, Fall 2015 (webpage)
Head GSI, EECS 126 Probability and Random Processes, Fall 2014 (webpage)

Background

Academic Appointments
Associate Professor, UW Madison, 2025.07 – 2026.01
Assistant Professor, UW Madison, 2019.08 – 2025.06
Research Assistant Professor, KAIST, 2018.10 – 2019.06
Postdoctoral Fellow, KAIST, 2016.06 – 2018.09
Graduate Student Researcher, UC Berkeley, 2010.08 – 2016.05

Education
Ph.D., University of California, Berkeley, 2010.08 – 2016.05 (EECS)
M.S., University of California, Berkeley, 2010.08 – 2012.12 (EECS)
B.S., KAIST, 2006.03 – 2010.05 (Electrical Engineering)

Work Experience
CAIO, KRAFTON, 2026.02 – present
CTO, Ludo Robotics, 2026.02 – present
Head of Deep Learning R&D, KRAFTON, 2022.04 – 2026.01
Software Engineer Intern, Lytmus Inc., 2013.06 – 2013.09
Software Engineer Intern, Samsung Electronics, 2009.07
Software/Hardware Engineer Intern, LG Display, 2008.06 – 2008.08