Christoph Seeger

Obstacle Fusion and Scene Interpretation for Autonomous Driving with Occupancy Grids

Autonomous driving is, besides electrification, currently one of the most competitive
areas in the automotive industry. A particularly challenging aspect of autonomous
vehicles is the reliable perception of the driving environment. This dissertation is
concerned with improving the sensor-based perception and classification of the static
driving environment. In particular, it focuses on recognizing road boundaries and
obstacles on the road, which is indispensable for collision-free automated driving.
Moreover, an exact perception of static road infrastructure is essential for accurate
localization in a previously built, highly precise navigation map, which is commonly
used to extend the environment model beyond the limited range of the sensors.
The first contribution is concerned with environment sensors with a narrow vertical
field of view, which frequently fail to detect obstacles with a small vertical extent
from close range. Because an inverse beam sensor model infers free space wherever no
measurement is returned, those obstacles are deleted from the occupancy grid even though
they were observed in past measurements. The approach presented here explicitly models
those errors using multiple hypotheses in an evidential grid mapping framework that
requires neither a classification nor a height estimate of the obstacles. Furthermore,
the grid mapping framework, which usually assumes mutually independent cells, is extended
to incorporate information from neighboring cells. The evaluation in several freeway scenarios and
a challenging scene with a boom barrier shows that the proposed method is superior
to evidential grid mapping with an inverse beam sensor model.
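To make the failure mode concrete, the following minimal sketch shows how standard evidential grid mapping (Dempster-Shafer mass functions over the frame {occupied, free}) erases a previously observed obstacle once an inverse beam sensor model keeps reporting free space. All mass values and the two-element frame are illustrative assumptions, not the dissertation's formulation.

```python
# Minimal sketch of evidential occupancy grid fusion with Dempster's rule.
# Each cell carries a mass triple (m_occ, m_free, m_unknown) summing to 1.

def dempster_combine(m1, m2):
    """Fuse two mass triples (m_occ, m_free, m_unknown) with
    Dempster's rule of combination."""
    o1, f1, u1 = m1
    o2, f2, u2 = m2
    conflict = o1 * f2 + f1 * o2        # mass on contradictory hypotheses
    k = 1.0 - conflict                  # normalization constant
    occ = (o1 * o2 + o1 * u2 + u1 * o2) / k
    free = (f1 * f2 + f1 * u2 + u1 * f2) / k
    return occ, free, 1.0 - occ - free

# Cell belief after an obstacle was observed in earlier scans.
cell = (0.7, 0.1, 0.2)

# The inverse beam sensor model infers free space because the narrow-FoV
# sensor no longer returns a measurement for this cell at close range.
free_measurement = (0.0, 0.6, 0.4)

for scan in range(3):
    cell = dempster_combine(cell, free_measurement)
    print(f"after scan {scan + 1}: m_occ={cell[0]:.2f}, m_free={cell[1]:.2f}")
```

After only a few such updates the free-space mass dominates and the obstacle is effectively deleted, which is exactly the effect the proposed multi-hypothesis extension is designed to prevent.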
The second contribution addresses a common shortcoming of occupancy grid mapping.
Multi-sensor fusion algorithms, such as the Kalman filter, usually perform obstacle
association and gating to refine an obstacle's position when multiple sensors detect
it. However, this strategy is not common in occupancy grid fusion. In this
dissertation, an efficient method to associate obstacles across sensor grids is proposed.
Imprecise sensors are discounted locally in cells where a more accurate sensor detected
the same obstacle and derived free space. The quantitative evaluation against an exact
navigation map shows improved obstacle position accuracy compared to standard
evidential occupancy grid mapping.
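As a rough illustration of this local discounting, the sketch below (reusing dempster_combine from the previous listing) down-weights an imprecise sensor's evidence only in cells where obstacle association indicates that the more accurate sensor has localized the same obstacle elsewhere. The reliability factor alpha and the association flag are hypothetical stand-ins for the dissertation's actual association and gating logic.

```python
def discount(mass, alpha):
    """Shafer discounting: keep a fraction alpha of the committed belief
    and move the remainder to 'unknown'."""
    occ, free, _ = mass
    return alpha * occ, alpha * free, 1.0 - alpha * (occ + free)

def fuse_cell(accurate, imprecise, associated, alpha=0.3):
    """Fuse one cell from two sensor grids. If association found that the
    accurate sensor placed the same obstacle elsewhere (so this cell is
    actually free), the imprecise sensor is discounted locally.
    Both 'associated' and 'alpha' are illustrative assumptions."""
    if associated:
        imprecise = discount(imprecise, alpha)
    return dempster_combine(accurate, imprecise)

# The accurate sensor derived free space here; the imprecise sensor still
# places the (mislocated) obstacle in this cell.
print(fuse_cell((0.05, 0.75, 0.20), (0.6, 0.1, 0.3), associated=True))
```

Without the local discount, the imprecise detection would smear occupied mass into cells the accurate sensor knows to be free, degrading the fused obstacle position.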
Whereas the first two contributions were concerned with multi-sensor fusion approaches
for collision avoidance, the third uses occupancy grids for situation interpretation.
In particular, this work proposes to use occupancy maps to classify the
driving environment into the categories freeway, country or rural road, parking area,
and city street. Identifying the current road type is essential for autonomous driving
systems designed for limited environment types. Inspired by the success of deep learning
approaches in image classification, end-to-end Convolutional Neural Networks are
compared to Support Vector Machines trained on hand-crafted features. Two novel
CNN architectures for occupancy-grid-based situation classification, designed for
embedded applications with limited computing power, are proposed. Furthermore, the
occupancy-grid-based classification is fused with camera-image-based classification,
and the road type probabilities are recursively estimated over time with a Bayes filter.
The evaluation of the approaches on an extensive data set yielded accuracies of up
to 99 %.
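To illustrate the recursive estimation step, the following sketch implements a simple discrete Bayes filter over the four road types, fusing the per-class outputs of the occupancy-grid-based and camera-based classifiers as conditionally independent evidence. The sticky transition model and the likelihood vectors are illustrative assumptions, not values from the dissertation.

```python
import numpy as np

ROAD_TYPES = ["freeway", "rural road", "parking area", "city street"]

# Sticky transition model: the road type rarely changes between frames.
# The 0.95 self-transition probability is an illustrative assumption.
STAY = 0.95
TRANSITION = np.full((4, 4), (1.0 - STAY) / 3)
np.fill_diagonal(TRANSITION, STAY)

def bayes_update(belief, grid_likelihood, camera_likelihood):
    """One recursive step: predict with the transition model, then fuse
    both classifier outputs as independent evidence and normalize."""
    predicted = TRANSITION.T @ belief
    posterior = predicted * grid_likelihood * camera_likelihood
    return posterior / posterior.sum()

belief = np.full(4, 0.25)                        # uniform prior
grid_out = np.array([0.70, 0.15, 0.05, 0.10])    # occupancy-grid CNN output
camera_out = np.array([0.60, 0.20, 0.05, 0.15])  # camera-image CNN output
belief = bayes_update(belief, grid_out, camera_out)
print(dict(zip(ROAD_TYPES, belief.round(3))))
```

Treating the two modalities as independent evidence lets a confident camera classification compensate for an ambiguous grid, while the sticky transition model suppresses single-frame misclassifications.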