Simulation-Based Analysis of Q-Learning Enhanced Braitenberg Vehicles for Obstacle Avoidance

Authors

  • Xingran Bu

DOI:

https://doi.org/10.56028/aetr.15.1.1509.2025

Keywords:

Braitenberg vehicle; Q-learning; obstacle avoidance; minimalist robotics.

Abstract

Adaptive navigation on resource-constrained robots remains difficult because strict limits on compute, memory, and latency restrict the use of complex learning architectures. Braitenberg Vehicles (BVs) offer a minimalist control philosophy based on direct sensor–motor couplings, enabling fast, reactive behaviors without centralized planning. However, their fixed control rules often fail in dynamic or noisy environments. This study integrates a discretized Q-learning controller into a MATLAB/Simulink differential-drive BV model equipped with infrared sensors and PWM actuators to preserve BV simplicity while enabling online adaptation. The reward function jointly optimizes collision avoidance, path efficiency, and energy use, and an ε-greedy exploration strategy is annealed over 5,000 training episodes. Experiments in static mazes, dynamic obstacle fields, and noisy sensing conditions (σ = 5%) show that the hybrid BV reduces collision rates by 41–67% and shortens path length by 28% compared with classical BVs. The controller’s <1 kB memory footprint and sub-millisecond decision latency meet the constraints of embedded platforms. Improvements under noise are statistically significant (100 trials, t-tests, p < 0.05). These results demonstrate that model-free reinforcement learning can be seamlessly integrated into reactive BV architectures, yielding robust adaptability without sacrificing computational simplicity, and provide a reproducible MATLAB/Simulink framework for benchmarking lightweight robotic systems in uncertain environments.
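To make the abstract's control scheme concrete, the following is a minimal, illustrative sketch (not the paper's MATLAB/Simulink code) of a tabular Q-learning controller with a linearly annealed ε-greedy policy and a joint reward penalizing collisions, path length, and energy use. The state/action discretization, learning rate, discount factor, and reward weights are assumptions for illustration; only the 5,000-episode annealing horizon comes from the abstract.

```python
import random

# Illustrative tabular Q-learning controller for a Braitenberg-style
# differential-drive robot. States: discretized IR sensor readings
# (4 proximity bins per sensor, two sensors -> 16 states); actions:
# coarse motor commands. All hyperparameters below are assumptions.

N_STATES = 16
ACTIONS = ["forward", "turn_left", "turn_right"]
ALPHA, GAMMA = 0.1, 0.9
EPS_START, EPS_END, EPISODES = 1.0, 0.05, 5000  # annealed over 5,000 episodes

# Q-table: ~16 states x 3 actions of floats, comfortably under 1 kB.
Q = [[0.0] * len(ACTIONS) for _ in range(N_STATES)]

def epsilon(episode):
    """Linearly anneal the exploration rate over training."""
    frac = min(episode / EPISODES, 1.0)
    return EPS_START + frac * (EPS_END - EPS_START)

def select_action(state, episode, rng=random):
    """Epsilon-greedy action selection over the Q-table row."""
    if rng.random() < epsilon(episode):
        return rng.randrange(len(ACTIONS))
    row = Q[state]
    return row.index(max(row))

def reward(collided, step_cost=0.01, energy_cost=0.005):
    """Joint reward: large collision penalty plus small per-step costs
    for path length and energy use (weights are illustrative)."""
    return -1.0 if collided else -(step_cost + energy_cost)

def update(state, action, r, next_state):
    """Standard model-free Q-learning update."""
    best_next = max(Q[next_state])
    Q[state][action] += ALPHA * (r + GAMMA * best_next - Q[state][action])
```

The per-step cost of this controller is one table lookup and a max over three actions, which is consistent with the sub-millisecond decision latency the abstract targets for embedded platforms.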

Published

2025-11-20