Qunhong Zeng


Beijing, China

qunhongzeng@gmail.com

I am a second-year graduate student at the School of Computer Science and Technology at Beijing Institute of Technology, advised by Prof. Yuxia Zhang. Prior to this, I received my B.S. degree in Computer Science from the same university in 2023.

My research primarily focuses on AI for Software Engineering (AI4SE). Recently, I have concentrated on reinforcement learning approaches to enhance LLMs’ reasoning capabilities, with the goal of improving their performance on automated SE tasks such as code generation.

I am also interested in MLSys and enjoy building scalable systems. I have had several MLSys internships, and I have recently been actively contributing to the verl community. I believe RL represents a critical pathway toward AGI, and that it is an art of combining research and engineering.

Please feel free to contact me if you’re interested in having a discussion :)

Publications

  1. ToolTrain: Tool-integrated Reinforcement Learning for Repo Deep Search
    Zexiong Ma, Chao Peng, Qunhong Zeng, Pengfei Gao, Yanzhen Zou, and Bing Xie
    2026
  2. Evaluating Generated Commit Messages with Large Language Models
    Qunhong Zeng, Yuxia Zhang, Zexiong Ma, Bo Jiang, Ningyuan Sun, Klaas-Jan Stol, Xingyu Mou, and Hui Liu
    In Proceedings of the IEEE/ACM 48th International Conference on Software Engineering (ICSE), 2026
  3. A First Look at Conventional Commits Classification
    Qunhong Zeng, Yuxia Zhang, Zhiqing Qiu, and Hui Liu
    In Proceedings of the IEEE/ACM 47th International Conference on Software Engineering (ICSE), 2025
  4. COLARE: Commit Classification via Fine-grained Context-aware Representation of Code Changes
    Qunhong Zeng, Yuxia Zhang, Zeyu Sun, Yujie Guo, and Hui Liu
    In 2024 IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER), 2024

Experiences

  1. MiniMax, Top Talent Intern, 2024.8 - now
    • Working on the coding capabilities of foundation models.

  2. Meituan, Beidou Program Intern, 2024.6 - 2025.7
    • Researched multi-turn conversational RL to enhance LLMs’ dialogue capabilities.

  3. ByteDance, MarsCode CodeAI Intern, 2024.9 - 2025.6
    • Conducted research on automated commit message generation and quality evaluation, with findings accepted for publication in the research track of ICSE 2026.

    • Researched RL approaches to enhance LLMs’ reasoning capabilities, including mathematical reasoning, test generation for competitive programming problems, and tool-interactive reasoning. Collaborated with Zexiong Ma, mentored by Bo Jiang.

  4. Oneflow, Framework Intern, 2024.3 - 2024.9
    • Extended the OneFlow deep learning framework’s compatibility beyond CUDA to diverse hardware accelerators (NPU/XPU), implementing Ascend CANN/AscendC-based operators that enabled end-to-end inference and training of ResNet, GPT-2, and Llama models on Ascend chips.

    • Managed framework infrastructure including Docker containerization, CI/CD, cross-platform compilation, and Manylinux-compliant Python/C++ extension packaging.

  5. Momenta, HD-Map Backend Intern, 2022.9 - 2023.1
    • Engineered high-performance Python/C++ extensions for an HD-map algorithm library, delivering a 20x performance improvement.

  6. ByteDance, AI-Lab Intern, 2021.12 - 2022.6
    • Contributed to ByteDance’s machine learning platform development as part of the engineering team.