RL Robustness Analysis

Analyzing the robustness of safe reinforcement learning policies and control agents against environmental deviations in steering, friction, and sensor noise.

Formal Methods (FM) 2024

📄 Read the Paper

Safe reinforcement learning policies are often evaluated under idealized simulation conditions, but real-world deployment exposes them to environmental deviations — variations in steering, friction, and sensor noise that can cause policy failures. This work designs stochastic optimization methods to systematically analyze the robustness of RL policies and control agents against such deviations in cyber-physical systems.
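The paper's exact algorithms aren't reproduced here, but the sketch below gives a minimal flavor of the idea: a cross-entropy-style stochastic search over deviation parameters (steering bias, friction scale, sensor noise) that seeks the settings minimizing a policy's episode return. The toy lane-keeping rollout, the fixed PD "policy", and the parameter bounds are all illustrative assumptions standing in for a real simulator and trained agent.

```python
import numpy as np

# Illustrative stand-in for a real simulator rollout: a fixed
# proportional-derivative "policy" tries to hold lateral position at 0
# while the deviation parameters corrupt actuation and sensing.
def run_episode(steering_bias, friction_scale, sensor_noise_std,
                steps=200, seed=0):
    rng = np.random.default_rng(seed)
    pos, vel, ret = 0.5, 0.0, 0.0
    for _ in range(steps):
        obs = pos + rng.normal(0.0, sensor_noise_std)      # noisy sensor reading
        action = -1.5 * obs - 0.5 * vel                    # fixed PD policy
        accel = friction_scale * (action + steering_bias)  # perturbed actuation
        vel += 0.1 * accel
        pos += 0.1 * vel
        ret -= pos ** 2                                    # penalize lane deviation
    return ret

def find_failures(n_iters=50, pop_size=32, elite_frac=0.25, seed=0):
    """Cross-entropy-style search for deviation parameters that minimize
    the episode return, i.e. that push the policy toward failure."""
    rng = np.random.default_rng(seed)
    # Search distribution over (steering_bias, friction_scale, sensor_noise_std).
    mean = np.array([0.0, 1.0, 0.0])
    std = np.array([0.05, 0.2, 0.05])
    lo = np.array([-0.2, 0.5, 0.0])   # assumed physically plausible bounds
    hi = np.array([0.2, 1.5, 0.2])
    n_elite = max(1, int(pop_size * elite_frac))
    worst_ret, worst_params = np.inf, None
    for _ in range(n_iters):
        pop = rng.normal(mean, std, size=(pop_size, 3)).clip(lo, hi)
        returns = np.array([run_episode(*p) for p in pop])
        elites = pop[np.argsort(returns)[:n_elite]]        # lowest-return samples
        mean, std = elites.mean(axis=0), elites.std(axis=0) + 1e-6
        if returns.min() < worst_ret:
            worst_ret = returns.min()
            worst_params = pop[np.argmin(returns)]
    return worst_ret, worst_params

if __name__ == "__main__":
    ret, params = find_failures()
    print(f"worst return {ret:.2f} at deviations "
          f"(steering_bias, friction_scale, noise_std) = {params}")
```

In practice, the toy rollout would be replaced by the actual cyber-physical simulator and trained policy, with the search surfacing deviation settings under which the policy fails.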

In collaboration with Toyota Motor North America, we evaluate autonomous driving policies under actuation and environmental perturbations and identify failure cases that standard evaluation protocols miss.