Posts
Language as a Universal Interface for Reinforcement Learning Agents
This post establishes a formal mathematical framework for language agents, deriving fundamental challenges from first principles and providing concrete design guidelines with real-world examples from SWE-Bench.
Yingru LI
Nov 7, 2025
22 min read
Research, Theory, Engineering
Information Bandwidth in Reinforcement Learning
An information-theoretic analysis showing that scalar advantage formulations learn at most log₂(B) bits per episode, while per-timestep advantages preserve the full reward entropy (a worked sketch of the bound follows this entry).
Yingru LI
Last updated on Nov 4, 2025
16 min read
Research, Theory
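A worked sketch of the bound mentioned in the summary above. The symbols here are illustrative rather than taken from the post: B stands for the number of distinct values the scalar advantage can take, T for the episode length, and R₁:T for the per-step rewards.

```latex
% Illustrative notation (not from the post): B = number of distinct values the
% scalar advantage A can take, T = episode length, R_{1:T} = per-step rewards.
\[
  I\bigl(\text{update};\, R_{1:T}\bigr) \;\le\; H(A) \;\le\; \log_2 B
  \quad\text{(one scalar broadcast to all $T$ steps)}
\]
\[
  H\bigl(A_{1:T}\bigr) \;\le\; T \log_2 B
  \quad\text{(per-timestep advantages can carry the full reward entropy)}
\]
```

The point of the comparison is that a single scalar, however it is computed, is bottlenecked by its own entropy, whereas a vector of per-timestep advantages scales that ceiling with the episode length.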
When Speed Kills Stability: Demystifying RL Collapse from the Training-Inference Mismatch
The relentless push for faster inference creates a dangerous training-inference mismatch that silently kills RL with LLMs. We reveal the vicious cycle, which is particularly acute in reasoning and agentic RL, and show that sequence-level importance sampling is the principled solution (a minimal code sketch follows this entry).
Jiacai Liu, Yingru LI, Yuqian Fu, Jiawei Wang, Qian Liu, Yu Shen
Sep 17, 2025
1 min read
Research, Theory
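A minimal sketch, not the authors' implementation, of the sequence-level importance-sampling correction the summary above refers to: log-probabilities of the whole sampled sequence under the training policy and the faster, slightly mismatched inference policy are compared once per sequence rather than token by token. Every function name, argument, and threshold below is illustrative.

```python
import torch


def sequence_level_is_weights(
    train_logprobs: torch.Tensor,      # [batch, seq_len] log pi_train(token | prefix)
    inference_logprobs: torch.Tensor,  # [batch, seq_len] log pi_infer(token | prefix)
    mask: torch.Tensor,                # [batch, seq_len] 1 for generated tokens, 0 for padding
    clip_range: float = 5.0,           # illustrative cap on the summed log-ratio
) -> torch.Tensor:
    """One importance weight per sequence: exp(sum_t [log pi_train - log pi_infer])."""
    log_ratio = ((train_logprobs - inference_logprobs) * mask).sum(dim=-1)
    # Clamp before exponentiating so one long, badly mismatched sequence
    # cannot blow up the gradient estimate.
    log_ratio = log_ratio.clamp(min=-clip_range, max=clip_range)
    return log_ratio.exp()             # [batch]


def corrected_policy_loss(seq_weights, seq_logprob_sum, advantages):
    """REINFORCE-style loss with the sequence-level weight applied as a
    stop-gradient factor on each trajectory's score-function term."""
    return -(seq_weights.detach() * advantages * seq_logprob_sum).mean()
```

The design choice sketched here is the one named in the summary: correcting at the sequence level keeps a single, well-defined behavior distribution per trajectory instead of multiplying many noisy per-token ratios.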
Mathematical Formulations of Rollout Correction Methods
Definitive mathematical formulations for rollout correction methods in VeRL, progressing from REINFORCE to PPO to Decoupled PPO. The formulations handle policy mismatch, temporal lag, replay buffers, and off-policy algorithms with importance-sampling and rejection-sampling techniques (a hedged sketch of such corrections follows this entry).
Yingru LI
Nov 4, 2024
1 min read
Research, Theory, Documentation
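A hedged sketch of the kind of rollout correction the summary describes, progressing from a plain REINFORCE estimate to a PPO-style clipped importance ratio and a rejection-style filter. This is not VeRL's actual API; every name and default value below is illustrative.

```python
import torch


def reinforce_loss(logprobs, advantages, mask):
    """Plain REINFORCE: no correction for which policy generated the rollout."""
    return -(advantages * logprobs * mask).sum() / mask.sum()


def ppo_clipped_loss(new_logprobs, old_logprobs, advantages, mask, clip_eps=0.2):
    """PPO: per-token importance ratio against the rollout (behavior) policy,
    clipped so stale or mismatched rollouts cannot push the update too far."""
    ratio = (new_logprobs - old_logprobs).exp()
    unclipped = ratio * advantages
    clipped = ratio.clamp(1 - clip_eps, 1 + clip_eps) * advantages
    return -(torch.minimum(unclipped, clipped) * mask).sum() / mask.sum()


def rejection_mask(seq_log_ratio, threshold=10.0):
    """Rejection-sampling style filter: drop whole trajectories whose
    sequence-level log importance ratio is too extreme to trust."""
    return (seq_log_ratio.abs() <= threshold).float()
```

Usage follows the progression in the post title: start from `reinforce_loss`, switch to `ppo_clipped_loss` once rollouts lag the training policy, and apply `rejection_mask` to discard trajectories whose importance ratios are beyond repair.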