Reinforcement learning often involves sensitive information in states, rewards, and transitions. In this talk, we discuss how differentially private algorithms can prevent an attacker from inferring this information. The talk focuses on continuous RL settings and provides analyses of both privacy and utility, concluding with a discussion of several recent follow-up works.