Adversarial Robustness of Modular Autonomous Driving Agents for Lane Keeping

dc.contributor.author: Dinashi, Kimia
dc.date.accessioned: 2026-04-27T18:13:12Z
dc.date.available: 2026-04-27T18:13:12Z
dc.date.issued: 2026-04-27
dc.date.submitted: 2026-04-20
dc.description.abstract: Modular autonomous driving (AD) pipelines are widely used because their intermediate representations improve interpretability and facilitate targeted debugging. However, modularity does not necessarily imply robustness: adversarial perturbations can enter at multiple interfaces and propagate to downstream control. This thesis investigates adversarial robustness of a modular deep-learning lane-keeping agent in the CARLA simulator, consisting of a learned lane detection module followed by a learned steering-angle predictor that consumes RGB and lane-mask inputs. We evaluate white-box, digital, ℓ∞-bounded evasion attacks using the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD). Attacks are injected at different points in the pipeline to isolate perception-side (lane) and control-side (steering) vulnerabilities, including a leakage configuration that forwards the adversarial RGB to the steering module. Robustness is assessed using closed-loop safety metrics, attack success rate (ASR) and time-to-failure (TTF), and complemented with offline steering-error analysis to separate numerical sensitivity from compounding vehicle dynamics. Experiments show that the steering predictor is the dominant point of failure: steering-targeted perturbations consistently induce rapid behavioral failures, whereas lane-targeted attacks require substantially larger perturbation budgets to achieve comparable impact. Offline analysis confirms that gradient-aligned perturbations can amplify steering prediction error by orders of magnitude in the baseline model, while random noise of equal magnitude has a negligible effect. Motivated by these findings, we apply adversarial training to the steering module as a targeted defense. The adversarially trained steering predictor is substantially less sensitive to gradient-based attacks and yields consistent improvements in closed-loop safety, demonstrating that module-specific hardening can mitigate the primary failure mechanism in modular lane-keeping systems.
dc.identifier.uri: https://hdl.handle.net/10012/23059
dc.language.iso: en
dc.pending: false
dc.publisher: University of Waterloo
dc.subject: modular autonomous driving
dc.subject: adversarial attacks
dc.subject: FGSM
dc.subject: PGD
dc.subject: CARLA
dc.subject: deep learning
dc.subject: lane keeping
dc.subject: adversarial training
dc.title: Adversarial Robustness of Modular Autonomous Driving Agents for Lane Keeping
dc.type: Master Thesis
uws-etd.degree: Master of Applied Science
uws-etd.degree.department: Systems Design Engineering
uws-etd.degree.discipline: Systems Design Engineering
uws-etd.degree.grantor: University of Waterloo
uws-etd.embargo.terms: 2 years
uws.contributor.advisor: Lashgarian Azad, Nasser
uws.contributor.advisor: Xiong, Pulei
uws.contributor.affiliation1: Faculty of Engineering
uws.peerReviewStatus: Unreviewed
uws.published.city: Waterloo
uws.published.country: Canada
uws.published.province: Ontario
uws.scholarLevel: Graduate
uws.typeOfResource: Text
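
As a concrete illustration of the attack family named in the abstract, the sketch below implements a generic ℓ∞-bounded PGD attack in PyTorch; FGSM is the single-step special case (steps=1, alpha=eps, no random start). This is a hedged sketch, not the thesis code: the model interface, the regression loss on the predicted steering angle, and the budget and step-size defaults are illustrative assumptions.

```python
import torch

def pgd_linf(model, x, y, loss_fn, eps=4/255, alpha=1/255, steps=10, rand_init=True):
    # l-infinity PGD on an input batch x (e.g. RGB frames scaled to [0, 1]).
    # steps=1, alpha=eps, rand_init=False recovers FGSM.
    delta = torch.empty_like(x).uniform_(-eps, eps) if rand_init else torch.zeros_like(x)
    x_adv = (x + delta).clamp(0.0, 1.0).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)  # e.g. MSE against the reference steering angle
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                    # gradient-sign ascent step
            x_adv = torch.max(torch.min(x_adv, x + eps), x - eps)  # project back onto the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0).detach()                 # stay in the valid pixel range
    return x_adv
```

The projection after every step is what keeps the accumulated perturbation inside the stated ℓ∞ budget; repeated FGSM without projection would drift outside it. The targeted defense can be sketched the same way: a Madry-style adversarial training step that regenerates PGD examples against the current weights of the steering predictor and fits on them. Again, the names and the eval/train split here are assumptions for illustration, not the thesis implementation.

```python
def adversarial_training_step(model, optimizer, x, y, loss_fn, eps, alpha, steps):
    # One Madry-style adversarial training step for a steering predictor.
    model.eval()                                   # generate attacks against the current weights
    x_adv = pgd_linf(model, x, y, loss_fn, eps=eps, alpha=alpha, steps=steps)
    model.train()
    optimizer.zero_grad()
    loss = loss_fn(model(x_adv), y)                # fit on the perturbed inputs
    loss.backward()
    optimizer.step()
    return loss.item()
```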

Files

Original bundle

Name: Dinashi_Kimia.pdf
Size: 2.54 MB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 6.4 KB
Format: Item-specific license agreed upon to submission