
Commit

Merge pull request #103 from ucsdwcsng/pushkal-dev
Corrections
PushkalM11 authored Dec 27, 2024
2 parents a6926b5 + b65ca0c commit b849f28
Showing 1 changed file with 5 additions and 4 deletions.
9 changes: 5 additions & 4 deletions _posts/2024-12-10-c-shenron.md
@@ -36,10 +36,11 @@ description: # all combinations are possible: (title+text+image, title+image, te
- title: Overview
text: "The advancement of self-driving technology has become a focal point in outdoor robotics, driven by the need for robust and efficient perception systems. This paper addresses the critical role of sensor integration in autonomous vehicles, particularly emphasizing the underutilization of radar compared to cameras and LiDARs. While extensive research has been conducted on the latter two due to the availability of large-scale datasets, radar technology offers unique advantages such as all-weather sensing and occlusion penetration, which are essential for safe autonomous driving. This study presents a novel integration of a realistic radar sensor model within the CARLA simulator, enabling researchers to develop and test navigation algorithms using radar data. Utilizing this radar sensor and showcasing its capabilities in simulation, we demonstrate improved performance in end-to-end driving scenarios. Our findings aim to rekindle interest in radar-based self-driving research and promote the development of algorithms that leverage radar's strengths."
- title: High Level Implementation
text: "The following diagram illustrates a high level overview of our sensor integration into Carla and the evaluation framework for End-to-End Driving."
image: /assets/images/c-shenron/c-shenron-flowchart.png
image_width: 800
text: <p>The <a href="https://github.com/autonomousvision/carla_garage">Transfuser++ model</a> is the state-of-the-art End-to-End driving model that utilizes Camera and LiDAR sensors for perception and path planning. The model is trained on data from an expert driver provided by Carla and it predicts the future waypoints/direction and the velocity of the ego vehicle. We substitute the LiDAR input with our integrated C-Shenron radar sensor and re-train multiple models with varying radar views. In our results, we showcase that using radar sensors have improved the driving score and overall situational awareness of the model, indicating the accuracy of our sensor.<\p>
+text: <p>The following diagram illustrates a high level overview of our sensor integration into CARLA and the evaluation framework for End-to-End Driving.</p>
+
+<a href="/assets/images/c-shenron/c-shenron-flowchart.png"><center><img src="/assets/images/c-shenron/c-shenron-flowchart.png" width="50%" style="float:center" ></center> </a>
+<br>
+<p>The <a href="https://github.com/autonomousvision/carla_garage">Transfuser++ model</a> is the state-of-the-art End-to-End driving model that utilizes Camera and LiDAR sensors for perception and path planning. The model is trained on data from an expert driver provided by CARLA and it predicts the future waypoints/direction and the velocity of the ego vehicle. We substitute the LiDAR input with our integrated C-Shenron radar sensor and re-train multiple models with varying radar views. In our results, we showcase that using radar sensors have improved the driving score and overall situational awareness of the model, indicating the accuracy of our sensor.</p>
- title: Sensor Views
image: /assets/images/c-shenron/c-shenron.png
text: "Comparison of views from Camera, Semantic LiDAR, and Shenron Radar in CARLA simulator."
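The corrected paragraph above describes the key change at the sensor-configuration level: the LiDAR input of Transfuser++ is swapped for the C-Shenron radar and the model is re-trained on the new input. As a minimal sketch only, the Python snippet below shows how such a swap might look in a CARLA-leaderboard-style `sensors()` specification; the `sensor.radar.c_shenron` type string, mounting pose, and parameter names are illustrative assumptions and are not taken from the repository.

```python
# Hypothetical sketch: swapping the LiDAR entry of an end-to-end driving agent's
# sensor suite for a C-Shenron radar view, in the style of a CARLA leaderboard
# sensors() specification. The radar type string and its parameters are assumed
# for illustration; they are not the repository's actual configuration.

USE_RADAR = True  # toggle between the original LiDAR setup and the radar variant


def sensors():
    """Return the ego vehicle's sensor specification list."""
    common = [
        # Front RGB camera kept unchanged in both configurations.
        {"type": "sensor.camera.rgb", "x": 0.7, "y": 0.0, "z": 1.6,
         "roll": 0.0, "pitch": 0.0, "yaw": 0.0,
         "width": 800, "height": 600, "fov": 100, "id": "rgb_front"},
    ]
    if USE_RADAR:
        perception = [
            # Assumed type string and parameters for the integrated C-Shenron radar.
            {"type": "sensor.radar.c_shenron", "x": 1.3, "y": 0.0, "z": 1.0,
             "roll": 0.0, "pitch": 0.0, "yaw": 0.0,
             "horizontal_fov": 120.0, "range": 100.0, "id": "radar_front"},
        ]
    else:
        perception = [
            # LiDAR entry of the original camera+LiDAR setup that the radar replaces.
            {"type": "sensor.lidar.ray_cast", "x": 1.3, "y": 0.0, "z": 2.5,
             "roll": 0.0, "pitch": 0.0, "yaw": 0.0, "id": "lidar"},
        ]
    return common + perception


if __name__ == "__main__":
    for spec in sensors():
        print(spec["id"], "->", spec["type"])
```

Keeping the camera entry fixed and toggling only the perception branch mirrors the experiment described in the diff, where multiple models are re-trained with varying radar views while the rest of the pipeline stays unchanged.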
