Minsoo Kim

minsoo2333 [at] hanyang.ac.kr


Hi! I am Minsoo Kim, a Machine Learning Researcher on the MIND team at Apple. I received my Ph.D. from the AI Hardware & Algorithm Lab at Hanyang University, advised by Professor Jungwook Choi. Here is my CV.

My research centers on efficient algorithms for generative language models, with a particular focus on real-world applications of Large Language Models (LLMs) and Large Multimodal Models. It addresses key efficiency challenges in these models, including long-context processing, efficient retrieval mechanisms, and inference acceleration.

News

Apr 2026 1 paper accepted and attending @ ICML 26 🇰🇷
Mar 2026 Joining Apple as an ML Researcher
Sep 2025 1 paper accepted and attending @ NeurIPS 25 🇺🇸
Nov 2024 I am starting as a PhD Intern at Apple

Selected Publications

  1. ICML 26
    EpiCache: Episodic KV Cache Management for Long-Term Conversation on Resource-Constrained Environments
    Minsoo Kim, Arnav Kundu, Han-Byul Kim, Richa Dixit, and Minsik Cho
    Forty-Third International Conference on Machine Learning, Jul 2026
  2. NeurIPS 2025
    InfiniPot-V: Memory-Constrained KV Cache Compression for Streaming Video Understanding
    Minsoo Kim, Kyuhong Shim, Jungwook Choi, and Simyung Chang
    The Thirty-Ninth Annual Conference on Neural Information Processing Systems, Dec 2025
  3. EMNLP 2024
    InfiniPot: Infinite Context Processing on Memory-Constrained LLMs
    Minsoo Kim, Kyuhong Shim, Jungwook Choi, and Simyung Chang
    Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, Nov 2024
  4. ACL 2024
    RA-LoRA: Rank-Adaptive Parameter-Efficient Fine-Tuning for Accurate 2-bit Quantized Large Language Models
    Minsoo Kim, Sihwa Lee, Wonyong Sung, and Jungwook Choi
    Findings of the Association for Computational Linguistics: ACL 2024, Aug 2024
  5. ACL 2024
    Improving Conversational Abilities of Quantized Large Language Models via Direct Preference Alignment
    Janghwan Lee*, Seongmin Park*, Suk-Jin Hong, Minsoo Kim, Du-Seong Chang, and 1 more author (*Co-first authors)
    Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics, Aug 2024
  6. NeurIPS 2023
    Token-Scaled Logit Distillation for Ternary Weight Generative Language Models
    Minsoo Kim, Sihwa Lee, Janghwan Lee, Suk-Jin Hong, Du-Seong Chang, and 2 more authors
    The Thirty-Seventh Annual Conference on Neural Information Processing Systems, Dec 2023
  7. EMNLP 2023
    Enhancing Computation Efficiency in Large Language Models through Weight and Activation Quantization
    Janghwan Lee*, Minsoo Kim*, Seungcheol Baek, Seokjoong Hwang, Wonyong Sung, and 1 more author (*Co-first authors)
    Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (Main Track), Dec 2023
  8. EACL 2023
    Teacher Intervention: Improving Convergence of Quantization Aware Training for Ultra-Low Precision Transformers
    Minsoo Kim, Kyuhong Shim, Seongmin Park, Wonyong Sung, and Jungwook Choi
    Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics (Main Track), May 2023
  9. EMNLP 2022
    Understanding and Improving Knowledge Distillation for Quantization Aware Training of Large Transformer Encoders
    Minsoo Kim, Sihwa Lee, Suk-Jin Hong, Du-Seong Chang, and Jungwook Choi
    Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (Main Track), Dec 2022
  10. DAC 2022
    NN-LUT: Neural Approximation of Non-Linear Operations for Efficient Transformer Inference
    Joonsang Yu, Junki Park, Seongmin Park, Minsoo Kim, Sihwa Lee, and 2 more authors
    Proceedings of the 59th ACM/IEEE Design Automation Conference, Dec 2022