- 0_scenarios: Contains the original scenarios that can be replayed to the models.
- 1_mixnet: Logs generated by replaying the scenarios to the model using MixNet.
- 2_indynet: Logs generated by replaying the scenarios to the model using IndyNet.
- MixNet:

  ```
  python tools/evaluation.py --logdir data/evaluation_data/1_mixnet --save-path data/evaluation_data/results/mix_net --MAE_tot
  ```

  Output:

  ```
  ---------- Total MAE (L2-Norm) ----------
  Number of datapoints used for evaluation: 38294
  Overall MAE (L2-Norm) = 4.913 m
  Overall average velocity in the logs: 71.322 m/s
  ```
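The total MAE reported above is the mean Euclidean (L2) distance between predicted and ground-truth positions over all evaluated datapoints. A minimal sketch of such a metric (the function and array names are illustrative, not the repository's actual API):

```python
import numpy as np

def total_mae_l2(pred_xy: np.ndarray, gt_xy: np.ndarray) -> float:
    """Mean L2 distance between predicted and ground-truth 2D positions.

    pred_xy, gt_xy: arrays of shape (N, 2) holding x/y coordinates in meters.
    """
    errors = np.linalg.norm(pred_xy - gt_xy, axis=1)  # per-point L2 error
    return float(errors.mean())

# Toy example: two predictions, off by 3 m and 4 m respectively.
pred = np.array([[3.0, 0.0], [0.0, 4.0]])
gt = np.zeros((2, 2))
print(total_mae_l2(pred, gt))  # mean of [3.0, 4.0] -> 3.5
```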
- IndyNet:

  ```
  python tools/evaluation.py --logdir data/evaluation_data/2_indynet --save-path data/evaluation_data/results/indy_net --MAE_tot
  ```

  Output:

  ```
  ---------- Total MAE (L2-Norm) ----------
  Number of datapoints used for evaluation: 38143
  Overall MAE (L2-Norm) = 5.363 m
  Overall average velocity in the logs: 71.532 m/s
  ```
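For reference, the two overall MAE values above can be compared directly; MixNet's error is roughly 8 % lower than IndyNet's on these logs:

```python
# MAE values taken from the evaluation outputs above (in meters).
mixnet_mae = 4.913
indynet_mae = 5.363

# Relative improvement of MixNet over the IndyNet benchmark.
improvement = (indynet_mae - mixnet_mae) / indynet_mae
print(f"{improvement:.1%}")  # -> 8.4%
```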
To generate line plots of the evaluation results, run:

```
python tools/evaluation_line_plot.py --save-path data/evaluation_data/line_plots
```
Out-of-the-box solution:

```
python tools/input_output_analysis.py
```

Output:

```
Overall MAE: 0.7737981677055359
weights in: [0.4 0.1 0.3 0.2]
weights out: [0.22620717 0.17092304 0.10150512 0.50136465]
```
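The `weights in` / `weights out` vectors above are weight vectors over four base lines, and the predicted path is their convex combination. The sketch below illustrates that mixing step; the function name, array shapes, and the choice of base lines are assumptions for illustration, not the repository's actual API:

```python
import numpy as np

def mix_lines(weights, base_lines):
    """Convex combination of base lines: path = sum_i w_i * line_i.

    weights:    (K,) non-negative values summing to 1.
    base_lines: (K, N, 2) -- K candidate lines, each with N 2D points.
    """
    weights = np.asarray(weights, dtype=float)
    assert np.isclose(weights.sum(), 1.0), "weights must form a convex combination"
    # Contract the K axis: result has shape (N, 2).
    return np.tensordot(weights, np.asarray(base_lines, dtype=float), axes=1)

# Toy example: K=4 straight lines at lateral offsets 0, 1, 2, 3 m.
n_pts = 5
xs = np.linspace(0.0, 10.0, n_pts)
base = np.stack(
    [np.stack([xs, np.full(n_pts, y)], axis=1) for y in (0.0, 1.0, 2.0, 3.0)]
)
path = mix_lines([0.4, 0.1, 0.3, 0.2], base)
print(path[:, 1])  # lateral coordinate: 0.4*0 + 0.1*1 + 0.3*2 + 0.2*3 = 1.3 everywhere
```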
The recommended scenario to evaluate is scenario_02 at predictionID = 760. To compare MixNet with the benchmark model, both replayed logs must be provided as inputs:
- logdir_benchmark: "data/evaluation_data/2_indynet/12_13_21/"
- logdir_mixnet: "data/evaluation_data/1_mixnet/14_23_40/"
Run the following command to visualize the exemplary sample:

```
python tools/visualize_smoothness.py
```