I am currently a senior undergraduate student in the School of Computer Science and Technology, South China University of Technology. I am honored to be advised by Prof. Yue-Jiao Gong, and fortunate to work closely with Zeyuan Ma and Hongshu Guo. We co-founded the GMC-DRL research team, which aims to further explore Meta-Black-Box Optimization.
I was a SURF fellow in the Computing + Mathematical Sciences (CMS) Department at Caltech, where I was honored to be advised by Prof. Yisong Yue and Dr. Kaiyu Yang and worked on an AI4Math research project.
My research interests include two main directions.
- Formal Theorem Proving (decision making under certainty). Compared to its informal counterpart, formal theorem proving uses proof assistants such as Coq, Isabelle, and Lean. The correctness of formal proofs can be conveniently verified by these proof assistants. Therefore, the decision-making process in formal theorem proving allows no mistakes on the way to the goal.
- Meta-Black-Box Optimization (decision making under uncertainty). Meta-Black-Box Optimization aims to mitigate the labour-intensive development of traditional black-box optimization algorithms through meta-learning (e.g., leveraging learning-based methods to automatically decide the most suitable configuration of a black-box optimization algorithm). The decision-making process in Meta-Black-Box Optimization is under uncertainty due to the intrinsic randomness of black-box optimization.
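To make the meta-level idea concrete, here is a minimal, self-contained sketch (the function names and the (1+1)-ES inner loop are illustrative assumptions, not code from any of the papers below): a "meta" decision maker picks the step-size configuration of a simple black-box optimizer by trying each candidate over a few random restarts.

```python
import random

def sphere(x):
    # Black-box objective: sphere function (minimum 0 at the origin).
    return sum(v * v for v in x)

def one_plus_one_es(f, dim, sigma, steps, seed=0):
    # A (1+1) evolution strategy; the step size sigma is the
    # configuration a meta-level learner would have to choose.
    rng = random.Random(seed)
    x = [rng.uniform(-5, 5) for _ in range(dim)]
    fx = f(x)
    for _ in range(steps):
        y = [xi + rng.gauss(0, sigma) for xi in x]
        fy = f(y)
        if fy < fx:  # accept only improving moves
            x, fx = y, fy
    return fx

def meta_select_sigma(f, dim, candidates, steps=200):
    # Toy "meta" decision: score each candidate configuration by its
    # mean outcome over a few random restarts and keep the best one.
    def score(sigma):
        return sum(one_plus_one_es(f, dim, sigma, steps, seed=s)
                   for s in range(3)) / 3
    return min(candidates, key=score)

best_sigma = meta_select_sigma(sphere, dim=5, candidates=[0.01, 0.1, 1.0])
print(best_sigma)
```

In the actual MetaBBO setting this hand-written selection loop is replaced by a learned policy (e.g., a reinforcement-learning agent), and the noisy restart scores are exactly where the uncertainty comes from.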
🔥 News
- 2024.05: 🎉🎉 SYMBOL will be presented at ICLR 2024 as a poster!
- 2023.12: 🎉🎉 MetaBox will be presented at NeurIPS 2023 as an oral presentation!
📝 Publications
Reasoning in Reasoning: A Hierarchical Framework for Neural Theorem Proving (NeurIPS 2024 Workshop MATH-AI)
Ziyu Ye, Jiacheng Chen, Jonathan Light, Yifei Wang, Jiankai Sun, Mac Schwager, Philip Torr, Guohao Li, Yuxin Chen, Kaiyu Yang, Yisong Yue, Ziniu Hu.
SYMBOL: Generating Flexible Black-Box Optimizers through Symbolic Equation Learning (ICLR 2024)
Jiacheng Chen*, Zeyuan Ma*, Hongshu Guo, Yining Ma, Jie Zhang, Yue-Jiao Gong.
- Unlike previous methods that incrementally auto-configure existing black-box algorithms, SYMBOL directly generates stepwise update rules in the form of symbolic equations to achieve more flexible and interpretable optimization behaviour.
MetaBox: A Benchmark Platform for Meta-Black-Box Optimization with Reinforcement Learning (NeurIPS 2023 Oral)
Zeyuan Ma, Hongshu Guo, Jiacheng Chen, Zhenrui Li, Guojun Peng, Yue-Jiao Gong, Yining Ma, Zhiguang Cao.
- We released MetaBox, a benchmark platform for Meta-Black-Box Optimization. It integrates three different test suites, about 20 baselines covering both traditional black-box methods and Meta-Black-Box methods, and new evaluation metrics tailored for Meta-Black-Box Optimization. The codebase can be found here.
Auto-configuring Exploration-Exploitation Tradeoff in Evolutionary Computation via Deep Reinforcement Learning (GECCO 2024)
Zeyuan Ma*, Jiacheng Chen*, Hongshu Guo, Yining Ma, Yue-Jiao Gong.
- We explore how to trade off exploration and exploitation in black-box optimization through a learning-based method. In this work, we carefully design a transformer-based framework that leverages exploration-exploitation features tailored for the black-box optimization scenario to solve this problem.
Neural Exploratory Landscape Analysis
Zeyuan Ma, Jiacheng Chen, Hongshu Guo, Yue-Jiao Gong.
- We developed a neural-network-based landscape analyser, NeurELA, to replace the feature-extraction component of Meta-Black-Box works, which is usually manually designed. To ensure its generalization ability, we let NeurELA operate in a multi-task setting and train it with neuroevolution.
LLaMoCo: Instruction Tuning of Large Language Models for Optimization Code Generation
Zeyuan Ma, Hongshu Guo, Jiacheng Chen, Guojun Peng, Zhiguang Cao, Yining Ma, Yue-Jiao Gong.
- We propose fine-tuning language models to generate executable code for optimization tasks. In this paper we construct a dataset containing diverse optimization problems and their corresponding algorithms, apply several tricks during training, and finally provide a fine-tuned LM for optimization tasks.
Deep Reinforcement Learning for Dynamic Algorithm Selection: A Proof-of-Principle Study on Differential Evolution (GECCO 2024)
Hongshu Guo, Yining Ma, Zeyuan Ma, Jiacheng Chen, Xinglin Zhang, Zhiguang Cao, Jun Zhang, Yue-Jiao Gong.
🎖 Honors and Awards
- 2024, Caltech Summer Undergraduate Research Fellowship (SURF).
- 2023 - 2024, China National Scholarship.
- 2021 - 2022, China National Scholarship.
📖 Education
- 2021.09 - present, School of Computer Science and Technology, South China University of Technology.
💻 Research Experience
- 2024.06 - 2024.08, SURF, Caltech.
- Thesis: AI for Math.
- Advisor: Prof. Yisong Yue and Dr. Kaiyu Yang.
- 2022.03 - 2023.03, SRP, SCUT.
- Thesis: Meta-Black-Box Optimization.
- Advisor: Prof. Yue-Jiao Gong.