Autonomous Navigation and Task Allocation in Unstructured Environments: A Modular Deep Reinforcement Learning Approach

L. Chen¹, M. Kowalski², S. Patel¹
¹Department of Robotics, Tsinghua University, Beijing, China
²Institute of Autonomous Systems, Warsaw University of Technology, Poland

Autonomous Robots (Springer)
Status: Submitted – Under Review (LetPub ID: AUTO-2026-0417)

Abstract
Deploying autonomous robots in unstructured environments, such as disaster zones, dense forests, or planetary surfaces, requires robust navigation and real-time task allocation under uncertainty. This paper presents a novel modular framework that integrates deep reinforcement learning (DRL) with a dynamic graph-based task scheduler. Unlike end-to-end policies, our system separates perception (LiDAR + RGB), local path planning (soft actor-critic, SAC), and global task allocation (Hungarian algorithm within a receding-horizon scheduler). Experiments in simulation (Habitat 2.0, Gazebo) and physical trials (Clearpath Jackal robots) show a 34% improvement in task completion rate and a 41% reduction in collision frequency compared to baseline DRL methods. Ablation studies confirm that the modular design generalizes across unseen obstacle densities. We release the code and simulation environment for reproducibility.

Recent works (e.g., [1,2]) have applied end-to-end DRL to mobile robots, but such policies often fail when task objectives change (e.g., from "go to point A" to "inspect three zones"). Conversely, classical SLAM-plus-planning pipelines are brittle under perceptual aliasing.
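To make the global task-allocation step concrete, the sketch below shows one solve of the Hungarian assignment over a Euclidean travel-cost matrix, as SciPy's `linear_sum_assignment` implements it. This is an illustrative minimal sketch, not the paper's implementation: the function name `allocate_tasks` and the robot/task coordinates are assumptions, and the paper's receding-horizon scheduler would re-solve this assignment at each planning cycle as tasks complete or appear.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def allocate_tasks(robot_xy, task_xy):
    """Assign each robot to at most one task, minimizing total
    Euclidean travel cost via the Hungarian algorithm."""
    # cost[i, j] = straight-line distance from robot i to task j
    cost = np.linalg.norm(robot_xy[:, None, :] - task_xy[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    return dict(zip(rows.tolist(), cols.tolist())), float(cost[rows, cols].sum())

# Hypothetical scenario: two robots, three candidate tasks.
robots = np.array([[0.0, 0.0], [5.0, 5.0]])
tasks = np.array([[0.5, 0.0], [5.0, 4.5], [10.0, 10.0]])
assignment, total_cost = allocate_tasks(robots, tasks)
# Each robot is matched to its nearby task; the distant task is left unassigned.
```

In a receding-horizon loop, this one-shot solve would run whenever the task graph changes, with the local SAC planner executing only the first leg of each assignment before the allocation is recomputed.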