Majid Geravand
Objective:
This project aimed to create a sophisticated environment perception system for autonomous vehicles. By integrating advanced sensor fusion, object detection, tracking, and semantic/instance segmentation algorithms, we developed high-definition environment maps crucial for safe and efficient navigation. The focus was on real-time performance, high accuracy, and robustness in challenging driving conditions.
Key Features:
Sensor Fusion: Combining data from multiple sensors (LiDAR, radar, cameras) into a unified, comprehensive perception of the environment (see the fusion sketch after this list).
Object Detection: Accurately identifying and classifying static and dynamic objects within the scene, such as vehicles, pedestrians, cyclists, and road infrastructure.
Object Tracking: Maintaining continuous tracks for detected objects, predicting their trajectories, and estimating their future states (see the tracking sketch after this list).
Semantic/Instance Segmentation: Assigning semantic labels to different parts of the environment and distinguishing individual instances of objects.
High-Definition Map Building: Generating detailed and up-to-date maps of the environment, including road geometry, lane markings, traffic signs, and obstacles (see the map-building sketch after this list).
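As a concrete illustration of the sensor-fusion step, the sketch below projects LiDAR points into a camera image so that point and pixel data can be associated. The intrinsic matrix K, the extrinsic transform T_cam_lidar, and the toy point cloud are placeholder values for illustration only, not the project's actual calibration.

```python
import numpy as np

# Hypothetical pinhole intrinsics and LiDAR-to-camera extrinsics; a real system
# would load these from the vehicle's calibration data.
K = np.array([[720.0,   0.0, 640.0],
              [  0.0, 720.0, 360.0],
              [  0.0,   0.0,   1.0]])
T_cam_lidar = np.eye(4)
T_cam_lidar[:3, :3] = [[0, -1,  0],      # LiDAR x-forward/y-left/z-up ->
                       [0,  0, -1],      # camera x-right/y-down/z-forward
                       [1,  0,  0]]
T_cam_lidar[:3, 3] = [0.1, -0.2, -0.3]   # example lever arm (metres)

def project_lidar_to_image(points_lidar: np.ndarray) -> np.ndarray:
    """Project Nx3 LiDAR points (LiDAR frame) to Mx2 pixel coordinates."""
    n = points_lidar.shape[0]
    homog = np.hstack([points_lidar, np.ones((n, 1))])   # Nx4 homogeneous
    pts_cam = (T_cam_lidar @ homog.T).T[:, :3]           # Nx3 in camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0.0]               # keep points in front of the camera
    pixels = (K @ pts_cam.T).T                           # perspective projection
    return pixels[:, :2] / pixels[:, 2:3]                # normalise by depth

if __name__ == "__main__":
    cloud = np.array([[10.0, 1.0, 0.5], [15.0, -2.0, 0.2]])  # toy LiDAR returns
    print(project_lidar_to_image(cloud))                     # pixel coordinates
```

Once points are expressed in the image frame, they can be matched against camera detections, which is the basis for fusing geometric and appearance information.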
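For the tracking component, a minimal constant-velocity Kalman filter illustrates the predict/update cycle used to maintain an object's state over time. The state layout, the noise matrices Q and R, and the sensor period dt are illustrative assumptions rather than the tuned values of the real system.

```python
import numpy as np

# Minimal constant-velocity Kalman filter for one tracked object in 2D.
# State: [x, y, vx, vy]; measurement: [x, y].
dt = 0.1                                     # assumed sensor period (seconds)
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)   # constant-velocity motion model
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)    # only position is observed
Q = np.eye(4) * 0.01                         # process noise (placeholder)
R = np.eye(2) * 0.5                          # measurement noise (placeholder)

x = np.zeros(4)          # initial state estimate
P = np.eye(4)            # initial state covariance

def step(measurement: np.ndarray) -> np.ndarray:
    """Run one predict/update cycle and return the updated position estimate."""
    global x, P
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    y = measurement - H @ x                  # innovation
    S = H @ P @ H.T + R                      # innovation covariance
    K_gain = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K_gain @ y
    P = (np.eye(4) - K_gain @ H) @ P
    return x[:2]

if __name__ == "__main__":
    for z in ([1.0, 0.5], [1.2, 0.6], [1.4, 0.7]):   # simulated detections
        print(step(np.array(z)))
```

A full tracker would additionally associate detections to tracks (e.g. by distance or IoU) and manage track creation and deletion; the filter above covers only the state-estimation core.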
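For map building, a toy 2D occupancy grid shows one simple way fused observations can be accumulated into a persistent representation of obstacles. The grid resolution, extent, and occupancy threshold are arbitrary example parameters; a production HD map would also encode lane geometry, markings, and signs as described above.

```python
import numpy as np

# Toy 2D occupancy grid: each fused point observation increments a cell, and
# cells seen often enough are reported as occupied.
RESOLUTION = 0.5          # metres per cell (assumed)
GRID_SIZE = 200           # 100 m x 100 m area around the vehicle (assumed)

grid = np.zeros((GRID_SIZE, GRID_SIZE), dtype=np.int32)

def add_observations(points_xy: np.ndarray) -> None:
    """Accumulate Nx2 points (vehicle-centred, metres) into the grid."""
    cells = np.floor(points_xy / RESOLUTION).astype(int) + GRID_SIZE // 2
    valid = np.all((cells >= 0) & (cells < GRID_SIZE), axis=1)
    for cx, cy in cells[valid]:
        grid[cy, cx] += 1

def occupied_cells(threshold: int = 3) -> np.ndarray:
    """Return (row, col) indices of cells observed at least `threshold` times."""
    return np.argwhere(grid >= threshold)

if __name__ == "__main__":
    add_observations(np.array([[5.0, 2.0], [5.1, 2.1], [5.2, 1.9]]))
    print(occupied_cells(threshold=2))
```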