My name is Junchi Yao (pronounced “JOON-chee YOW”). I am a sophomore studying Information Systems and Information Management at the University of Electronic Science and Technology of China (UESTC). Currently, I am a research intern at Shanghai AI Lab, where I have the privilege of working with Researcher Peng Ye. Before that, I gained valuable research experience as a research intern at King Abdullah University of Science and Technology (KAUST) under the guidance of Prof. Di Wang.

My research focuses on Large Language Models, particularly explainability (XAI), LLM agents, and LLM4Science, spanning the social sciences and physics. My goal is to advance LLM development toward interpretable, robust, and impactful real-world applications.

I am actively seeking research collaborators, whether you are new to research or experienced. Feel free to reach out, or learn more from My CV.

AI Researcher

  • Research focus on LLMs
  • Internships at Shanghai AI Lab
  • Publications at ACL

World Explorer

  • Visited 11 countries worldwide
  • Traveled to 27 provinces in China
  • Extensive hiking experience

News

  • 2025.06: 🎉 One paper is accepted by the Multi-Agent System Workshop at The 42nd International Conference on Machine Learning (ICML 2025).
  • 2025.05: 🎉 Two papers are accepted by The 63rd Annual Meeting of the Association for Computational Linguistics (ACL 2025) Findings. See you in Vienna, Austria!
  • 2025.04: 🎉 One paper is accepted by Scientific Reports (Nature Portfolio).
  • 2025.03: I joined Shanghai AI Lab as a Research Intern under the guidance of Researcher Peng Ye, where I focus on LLM Agents and LLM for Physics.

Publications

Preprint
Pipeline of PHYSICS Dataset

Scaling Physical Reasoning with the PHYSICS Dataset

Shenghe Zheng*, Qianjia Cheng*, Junchi Yao*, Mengsong Wu, Haonan He, Ning Ding, Yu Cheng, Shuyue Hu, Lei Bai, Dongzhan Zhou, Ganqu Cui, Peng Ye
Preprint
ICML 2025 Workshop
WandaPlan's Framework

Is Your LLM-Based Multi-Agent a Reliable Real-World Planner? Exploring Fraud Detection in Travel Planning

Junchi Yao*, Jianhua Xu*, Tianyu Xin*, Ziyi Wang, Shenzhe Zhu, Shu Yang, Di Wang
The 42nd International Conference on Machine Learning (ICML 2025) Multi-Agent System Workshop
ACL 2025 Findings
Repeat Curse Framework

Understanding the Repeat Curse in Large Language Models from a Feature Perspective

Junchi Yao*, Shu Yang*, Jianhua Xu, Lijie Hu, Mengdi Li, Di Wang
In The 63rd Annual Meeting of the Association for Computational Linguistics (ACL 2025) Findings
ACL 2025 Findings
Fraud-R1 Pipeline

Fraud-R1: A Multi-Round Benchmark for Assessing the Robustness of LLM Against Augmented Fraud and Phishing Inducements

Shu Yang*, Shenzhe Zhu*, Zeyu Wu, Keyu Wang, Junchi Yao, Junchao Wu, Lijie Hu, Mengdi Li, Derek F. Wong, Di Wang
In The 63rd Annual Meeting of the Association for Computational Linguistics (ACL 2025) Findings
Nature SR
Social Opinion Prediction Framework

Social opinions prediction utilizes fusing dynamics equation with LLM-based agents

Junchi Yao*, Hongjie Zhang*, Jie Ou, Dingyi Zuo, Zheng Yang, Zhicheng Dong
Scientific Reports (Nature Portfolio)

Education