Yingru LI

Ph.D. Candidate

The Chinese University of Hong Kong

Biography

Yingru Li is a Ph.D. candidate at The Chinese University of Hong Kong, Shenzhen (CUHK-Shenzhen), China, advised by Zhi-Quan (Tom) Luo. He received his bachelor's degree in Computer Science (ACM Honors Program) from Huazhong University of Science and Technology, where he was advised by Kun He, and was a visiting research student at Cornell University working with John E. Hopcroft. His Ph.D. research is supported by the SRIBD Ph.D. Fellowship, the Presidential Ph.D. Fellowship, and the Tencent AI Ph.D. Fellowship.

He organized the Reinforcement Learning Seminar at CUHK-Shenzhen from 2019 to 2023.

Now actively seeking postdoctoral and research positions! See my resumé.

Latest Updates

✈️ July 2024: Will deliver an invited long talk at the International Symposium on Mathematical Programming (ISMP) in Montréal. ISMP is the leading triennial conference on mathematical optimization.

🎉May 2024: HyperAgent accepted to ICML!

💻 May 2024: AISTATS, Valencia, Spain. Our paper offers the first prior-dependent analysis of posterior sampling for reinforcement learning (PSRL) under function approximation, helping explain how integrating prior knowledge, such as historical data or pre-trained models (LLMs), improves the efficiency of RL agents.

💻 May 2024: Remote presentation of HyperAgent at ICLR in Vienna, Austria, during the Workshop on Bridging the Gap Between Practice and Theory in Deep Learning. HyperAgent is a significant step toward aligning theoretical foundations with practical deep RL applications.

💻 March 2024: Gave two talks at the INFORMS Optimization Society (IOS) Conference at Rice University: (1) “HyperAgent: A simple, efficient, scalable and provable RL framework for complex environments” and (2) “A Tutorial on Thompson Sampling and Ensemble Sampling”.

🎉 Jan 2024: Our work on HyperAgent received the Best Paper Award at the third Doctoral and Postdoctoral Daoyuan Academic Forum.

✈️ December 2023: NeurIPS, New Orleans. 🚀 My research addresses efficiency challenges in reinforcement learning (RL), encompassing both the theory of high-dimensional probability and practical applications in deep RL [1]. I have developed a novel random projection tool for high-dimensional, sequentially dependent data, a non-trivial martingale extension of the Johnson–Lindenstrauss lemma [2]. 🚀

Interests
  • Sequential Decision-Making & Reinforcement Learning
  • Algorithm Design & Analysis
  • Mathematical Optimization
  • Applied Probability
Education
  • Ph.D. in Computer and Information Engineering, 2018 - present

    The Chinese University of Hong Kong

  • B.Eng. in Computer Science (Honors Program). Outstanding Graduate, 2017

    Huazhong University of Science and Technology, China

Recent Publications

(2024). Optimistic Thompson Sampling for No-Regret Learning in Unknown Games.

(2024). Prior-dependent analysis of posterior sampling reinforcement learning with function approximation. The 27th International Conference on Artificial Intelligence and Statistics (AISTATS).

(2024). Simple, unified analysis of Johnson-Lindenstrauss with applications. The 37th Annual Conference on Learning Theory (COLT) (Submitted).

(2023). Efficient and scalable reinforcement learning via hypermodel. NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World.

(2022). HyperDQN: A Randomized Exploration Method for Deep Reinforcement Learning. International Conference on Learning Representations (ICLR).

Contact

You only live once.