This paper studies the Bayesian regret of posterior sampling for reinforcement learning (PSRL) and establishes a novel prior-dependent regret bound in the linear mixture MDP setting. The bound scales with the variance of the underlying MDP under the prior distribution, offering a new perspective on randomized exploration.
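As background, PSRL's episodic loop can be sketched as follows. This is a minimal illustrative toy in the tabular setting with a Dirichlet prior over transitions and known rewards, not the paper's linear mixture construction: each episode, sample an MDP from the posterior, solve it, act greedily, and update the posterior with the observed transitions.

```python
import numpy as np

def value_iteration(P, R, horizon):
    """Finite-horizon value iteration.
    P: (S, A, S) transition tensor, R: (S, A) rewards."""
    S, A, _ = P.shape
    V = np.zeros(S)
    policy = np.zeros((horizon, S), dtype=int)
    for h in reversed(range(horizon)):
        Q = R + P @ V            # Q(s, a) = R(s, a) + sum_s' P(s'|s,a) V(s')
        policy[h] = Q.argmax(axis=1)
        V = Q.max(axis=1)
    return policy

def psrl(true_P, R, horizon, episodes, rng):
    """Toy PSRL with a Dirichlet(1) prior over each row of transitions."""
    S, A, _ = true_P.shape
    counts = np.ones((S, A, S))  # Dirichlet posterior pseudo-counts
    total_reward = 0.0
    for _ in range(episodes):
        # Sample one MDP from the current posterior over transitions.
        sampled_P = np.array([[rng.dirichlet(counts[s, a])
                               for a in range(A)] for s in range(S)])
        # Solve the sampled MDP and act greedily under its optimal policy.
        policy = value_iteration(sampled_P, R, horizon)
        s = 0
        for h in range(horizon):
            a = policy[h, s]
            s_next = rng.choice(S, p=true_P[s, a])
            total_reward += R[s, a]
            counts[s, a, s_next] += 1  # posterior update
            s = s_next
    return total_reward
```

The names `psrl` and `value_iteration` are assumptions for this sketch; the paper's analysis concerns the Bayesian regret of this style of algorithm when the transition kernel is a linear mixture of known basis kernels.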