Each vehicle features an obstacle detection system, currently based on a LiDAR laser scanner, which detects obstacles 7 centimeters wide or larger at a range of 40-100 meters. The detection algorithm is fail-safe: rather than searching for obstacles, it scans the area in front of the vehicle to confirm it is ‘empty’. The obstacle detection plugs directly into the low-level motion controllers and can interrupt the navigation task. The detection algorithms are able to distinguish true obstacles from ‘ghost’ objects such as rain, snow or leaves falling from trees.
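The fail-safe framing described above can be sketched as follows. This is a minimal illustration, not 2getthere's implementation: the function name, cell representation and inputs are all hypothetical. The key property is that a cell counts as clear only if it was positively confirmed empty, so missing data (an unscanned or occluded cell, or a sensor fault) defaults to "not clear" and would interrupt navigation.

```python
# Hypothetical sketch of a fail-safe free-space check.
# A cell is identified by hypothetical (row, col) grid coordinates.
def path_is_clear(scan_confirmations: dict[tuple[int, int], bool],
                  swept_path_cells: list[tuple[int, int]]) -> bool:
    """Return True only if every cell of the swept path was scanned
    AND confirmed empty; any missing or failed confirmation means
    the path is treated as blocked (fail-safe default)."""
    return all(scan_confirmations.get(cell, False)
               for cell in swept_path_cells)
```

Note the design choice: a conventional detector asks "did I see an obstacle?", so a sensor dropout silently passes; a fail-safe detector asks "did I prove the path empty?", so a dropout stops the vehicle.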
The software for interpreting the raw sensor scans is developed by 2getthere. The sensor features a wide aperture and four scan layers, covering the base hull (the vehicle's swept path) and the extended hull. Both hulls are divided into cells, each scanned several times per second to determine the probability that an object is present. Based on this probability a reaction is computed and translated into an adapted speed profile: the vehicle slows down for obstacles in the extended hull and stops for obstacles in the base hull.
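The cell-probability scheme above can be illustrated with a short sketch. All constants and names here are assumptions for illustration (the source does not give an update rule or thresholds); an exponential blend of repeated scans stands in for whatever filter 2getthere actually uses. Repeated hits on the same cell push its probability over the threshold, while a single transient return (a falling leaf, a raindrop) does not, which is one plausible way the system rejects ‘ghost’ objects.

```python
from dataclasses import dataclass
from enum import Enum

class Hull(Enum):
    BASE = "base"          # vehicle swept path: an obstacle here forces a stop
    EXTENDED = "extended"  # area beyond the swept path: slow down only

class Reaction(Enum):
    NONE = 0
    SLOW_DOWN = 1
    STOP = 2

# Hypothetical tuning constants, not from the source.
OCCUPIED_THRESHOLD = 0.7   # probability above which a cell counts as occupied
ALPHA = 0.5                # weight of the newest scan in the running estimate

@dataclass
class Cell:
    hull: Hull
    probability: float = 0.0  # belief that an object occupies this cell

    def update(self, hit: bool) -> None:
        """Blend the latest scan into the running occupancy probability."""
        measurement = 1.0 if hit else 0.0
        self.probability = (1 - ALPHA) * self.probability + ALPHA * measurement

def compute_reaction(cells: list[Cell]) -> Reaction:
    """Stop for occupied base-hull cells, slow down for extended-hull ones."""
    reaction = Reaction.NONE
    for cell in cells:
        if cell.probability < OCCUPIED_THRESHOLD:
            continue
        if cell.hull is Hull.BASE:
            return Reaction.STOP  # most severe reaction, return immediately
        reaction = Reaction.SLOW_DOWN
    return reaction
```

With these values a cell needs about three consecutive hits (0.5 → 0.75 → 0.875) to cross the threshold, so a one-scan ghost return (0.5) never triggers a reaction, matching the slow-down/stop split between the two hulls described above.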
2getthere is actively developing an enhanced perception system for improved sensing and scene understanding. This includes sensor fusion, combining 3D camera systems, LiDAR, radar and ultrasound sensors. The aims are to improve redundancy and safety, to ensure operation under all adverse weather conditions and, ultimately, to achieve 360-degree awareness. The latter is in anticipation of operation as a Shared Autonomous Vehicle in mixed traffic, allowing the vehicle to anticipate the unexpected by recognizing and understanding the intentions of other road users.