publications

Publications by category in reverse chronological order. Generated by jekyll-scholar.

2025

  1. Under Review
    InfiniPot-V: Memory-Constrained Streaming Video Understanding via Query-Agnostic KV Cache Compression on Multimodal LLMs
    Minsoo Kim, Kyuhong Shim, Jungwook Choi, and Simyung Chang
    Under Review, 2025
  2. AAAI 2025
    RILQ: Rank-Insensitive LoRA-based Quantization Error Compensation for Boosting 2-bit Large Language Model Accuracy
    Geonho Lee, Janghwan Lee, Sukjin Hong, Minsoo Kim, Euijai Ahn, and 2 more authors
    39th Annual AAAI Conference on Artificial Intelligence, Feb 2025

2024

  1. EMNLP 2024
    InfiniPot: Infinite Context Processing on Memory-Constrained LLMs
    Minsoo Kim, Kyuhong Shim, Jungwook Choi, and Simyung Chang
    Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, Nov 2024
  2. ACL 2024
    RA-LoRA: Rank-Adaptive Parameter-Efficient Fine-Tuning for Accurate 2-bit Quantized Large Language Models
    Minsoo Kim, Sihwa Lee, Wonyong Sung, and Jungwook Choi
    Findings of the Association for Computational Linguistics: ACL 2024, Aug 2024
  3. ACL 2024
    Improving Conversational Abilities of Quantized Large Language Models via Direct Preference Alignment
    Janghwan Lee*, Seongmin Park*, Suk-Jin Hong, Minsoo Kim, Du-Seong Chang, and 1 more author
    Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics, Aug 2024

2023

  1. NeurIPS 2023
    Token-Scaled Logit Distillation for Ternary Weight Generative Language Models
    Minsoo Kim, Sihwa Lee, Janghwan Lee, Suk-Jin Hong, Du-Seong Chang, and 2 more authors
    Thirty-seventh Conference on Neural Information Processing Systems, Dec 2023
  2. EMNLP 2023
    Enhancing Computation Efficiency in Large Language Models through Weight and Activation Quantization
    Janghwan Lee*, Minsoo Kim*, Seungcheol Baek, Seokjoong Hwang, Wonyong Sung, and 1 more author
    Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (Main Track), *Co-first authors, Dec 2023
  3. EACL 2023
    Teacher Intervention: Improving Convergence of Quantization Aware Training for Ultra-Low Precision Transformers
    Minsoo Kim, Kyuhong Shim, Seongmin Park, Wonyong Sung, and Jungwook Choi
    Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics (Main Track), May 2023

2022

  1. EMNLP 2022
    Understanding and Improving Knowledge Distillation for Quantization Aware Training of Large Transformer Encoders
    Minsoo Kim, Sihwa Lee, Suk-Jin Hong, Du-Seong Chang, and Jungwook Choi
    Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (Main Track), Dec 2022
  2. DAC 2022
    NN-LUT: Neural Approximation of Non-Linear Operations for Efficient Transformer Inference
    Joonsang Yu, Junki Park, Seongmin Park, Minsoo Kim, Sihwa Lee, and 2 more authors
    Proceedings of the 59th ACM/IEEE Design Automation Conference, Dec 2022