Title
ALARMM_SBO: Accurate large-area Localization and spatial Alignment with Robust Markerless Methods (Research)
Abstract
There are numerous industrial needs for accurate and robust 3D localization in large-scale indoor shopfloors, with ALARMM_SBO focusing on (1) location-sensitive cognitive assistance via Augmented Reality (AR) guidance for production employees, and (2) autonomous mobile robots (AMRs) executing fine-grained tasks such as machine tending. Typical localization approaches rely on local alignment via complex data-driven 6DOF object pose estimation (which is unreliable and time-consuming to scale to large environments), apply outside-in tracking with external devices or dedicated infrastructure (which is expensive), use graphical markers for spatial reference (which can be impractical in large industrial shopfloors), or use visual SLAM. While cost-effective, SLAM-based solutions are prone to drift, especially in shopfloor-wide scenarios, and are not robust to varying operational conditions (e.g., low-light or low-texture environments) nor to dynamic environments, which, in the worst case, can render the localization system entirely unreliable.
ALARMM_SBO will improve the accuracy, reliability and robustness of SLAM-based localization by jointly exploiting (1) graphical factory representations (e.g., a drift-free 3D shopfloor scan or globally aligned CAD models), (2) semantic knowledge about the industrial environment, (3) known dynamic models of machines, and (4) Deep Learning-based IMU localization. Each of these four elements will output specific constraints on the localization optimization process and will be integrated in a smart uncertainty-based constraint selection module to maximize their combined impact on accuracy, reliability and robustness, tailored to the application's needs. This will yield high frame-rate 3D global localization with spatial accuracy within 1 cm in large-area indoor shopfloors when sufficient visual features are present, and graceful degradation towards 5 cm in more challenging low-light and low-texture environments.
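To illustrate the intended uncertainty-based constraint selection, the following minimal Python sketch shows one plausible realization: each source (scan/CAD alignment, semantic landmarks, machine dynamic models, learned IMU odometry) emits a pose constraint with its own covariance, constraints are gated by a chi-square test on their Mahalanobis distance, and the survivors are fused by information-weighted least squares. All names (Constraint, select_constraints, fuse_pose_correction) and the numeric values are hypothetical and not taken from the project; this is a sketch of the general technique, not the project's implementation.

```python
import numpy as np

# Hypothetical constraint record: a residual on a 2D pose (x, y, yaw) with an
# associated covariance, as one of the four constraint sources might report it.
class Constraint:
    def __init__(self, source, residual, covariance):
        self.source = source                      # which subsystem produced the constraint
        self.residual = np.asarray(residual)      # observed-minus-predicted pose error
        self.covariance = np.asarray(covariance)  # source-reported uncertainty

def select_constraints(constraints, chi2_gate=7.815):
    """Keep constraints whose squared Mahalanobis distance passes a chi-square gate
    (7.815 is the 95% quantile for 3 degrees of freedom) -- a simple stand-in for
    uncertainty-based constraint selection."""
    selected = []
    for c in constraints:
        info = np.linalg.inv(c.covariance)        # information matrix = inverse covariance
        d2 = c.residual @ info @ c.residual       # squared Mahalanobis distance
        if d2 <= chi2_gate:
            selected.append(c)
    return selected

def fuse_pose_correction(constraints):
    """Information-weighted least-squares fusion: each surviving constraint votes on a
    pose correction, weighted by its information matrix (more certain -> more weight)."""
    H = np.zeros((3, 3))
    b = np.zeros(3)
    for c in constraints:
        info = np.linalg.inv(c.covariance)
        H += info
        b += info @ c.residual
    return np.linalg.solve(H, b)                  # correction to apply to the SLAM pose

if __name__ == "__main__":
    candidates = [
        Constraint("cad_alignment",     [0.010, -0.005, 0.002], np.diag([1e-4, 1e-4, 1e-5])),
        Constraint("semantic_landmark", [0.015,  0.000, 0.001], np.diag([4e-4, 4e-4, 1e-4])),
        Constraint("imu_network",       [0.400,  0.300, 0.050], np.diag([1e-4, 1e-4, 1e-4])),  # inconsistent outlier
    ]
    kept = select_constraints(candidates)
    print("kept:", [c.source for c in kept])
    print("pose correction:", fuse_pose_correction(kept))
```

In this toy run the overconfident but inconsistent IMU constraint is rejected by the gate, while the scan/CAD and semantic constraints are fused according to their reported uncertainties; the actual project module would additionally tailor the gating and weighting to the application's accuracy and robustness needs.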
Period of project
01 April 2025 - 31 March 2029