Minsoo Kim

minsoo2333 [at] hanyang.ac.kr


Hi! I am Minsoo Kim, a Ph.D. student in the AI Hardware & Algorithm Lab at Hanyang University, advised by Professor Jungwook Choi, and currently a PhD intern at Qualcomm AI Research.

My research centers on efficient algorithms for generative models, with a particular focus on understanding the effects of quantization and distillation across diverse NLP tasks for efficient LLM inference. Beyond that, I am passionate about advanced techniques for fine-tuning LLMs, aiming to fully unlock the potential of specialized LLMs.

News

May 2024 2 papers (1 main and 1 findings) accepted @ ACL 24 🇹🇭
Mar 2024 I am starting as a PhD intern at Qualcomm AI Research 🔥
Nov 2023 Selected as a winner of the Qualcomm Innovation Fellowship 2023 🏆
Oct 2023 1 paper accepted @ EMNLP 23 🇸🇬 (attending)
1 paper accepted @ NeurIPS 23 🇺🇸 (attending)

Selected Publications

  1. ACL 2024
    RA-LoRA: Rank-Adaptive Parameter-Efficient Fine-Tuning for Accurate 2-bit Quantized Large Language Models
Minsoo Kim, Sihwa Lee, Wonyong Sung, and Jungwook Choi
    In Findings of the Association for Computational Linguistics: ACL 2024, Aug 2024
  2. ACL 2024
    Improving Conversational Abilities of Quantized Large Language Models via Direct Preference Alignment
Janghwan Lee*, Seongmin Park*, Suk-Jin Hong, Minsoo Kim, Du-Seong Chang, and 1 more author
    In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics, Aug 2024
  3. NeurIPS 2023
    Token-Scaled Logit Distillation for Ternary Weight Generative Language Models
    Minsoo Kim, Sihwa Lee, Janghwan Lee, Suk-Jin Hong, Du-Seong Chang, and 2 more authors
In Thirty-seventh Conference on Neural Information Processing Systems, Dec 2023
  4. EMNLP 2023
    Enhancing Computation Efficiency in Large Language Models through Weight and Activation Quantization
    Janghwan Lee*, Minsoo Kim*, Seungcheol Baek, Seokjoong Hwang, Wonyong Sung, and 1 more author
In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (Main Track), Dec 2023 (*co-first authors)
  5. EACL 2023
    Teacher Intervention: Improving Convergence of Quantization Aware Training for Ultra-Low Precision Transformers
    Minsoo Kim, Kyuhong Shim, Seongmin Park, Wonyong Sung, and Jungwook Choi
In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics (Main Track), May 2023
  6. EMNLP 2022
    Understanding and Improving Knowledge Distillation for Quantization Aware Training of Large Transformer Encoders
    Minsoo Kim, Sihwa Lee, Suk-Jin Hong, Du-Seong Chang, and Jungwook Choi
In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (Main Track), Dec 2022
  7. DAC 2022
    NN-LUT: Neural Approximation of Non-Linear Operations for Efficient Transformer Inference
    Joonsang Yu, Junki Park, Seongmin Park, Minsoo Kim, Sihwa Lee, and 2 more authors
In Proceedings of the 59th ACM/IEEE Design Automation Conference, Dec 2022