Code for AEMCARL #7
Ah, I see, thanks. So the train command in the Getting Started section does in fact train your AEMCARL network, since the test policy flag is set to 5?
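(For reference, a minimal sketch of what that training call might look like, assuming a CrowdNav-style train.py; only the policy name actenvcarl is confirmed by the test command quoted later in this thread, so the other flags are assumptions and the exact invocation should be taken from the repo's Getting Started section or `python train.py --help`.)

```
# Hypothetical training invocation, assuming a CrowdNav-style train.py;
# flag names other than --policy are assumptions.
python train.py --policy actenvcarl --output_dir data/output
```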
Another question: what test case/sim setup did you use to produce the video in the README of the repo with many agents in crowd_sim? Also, how did you get the extra graphic that shows how many GRU units are in use, etc.? Thanks!
The command "python test.py --policy actenvcarl --test_policy_flag 5 --multi_process self_attention --agent_timestep 0.4 --human_timestep 0.5 --reward_increment 2.0 --position_variance 2.0 --direction_variance 2.0 --model_dir data/output --phase test --visualize --test_case 0", is used to configure the test case. The visualization of the GRU code at the line 777 of "https://github.com/SJWang2015/AEMCARL/blob/main/crowd_sim/envs/crowd_sim_video.py" |
Thanks. This test command only produces 5 agents; the sim in the video has 17, I think, and the motion patterns are different from test case 0. I tried the same command with --human_num 17 added, but the motion pattern is still different from the one shown in the video.
You can change the "human_num" setting in https://github.com/SJWang2015/AEMCARL/blob/202f3f1ce67f24a7b29276496ee0ed4f91683f96/crowd_nav/configs/env.config
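(For illustration, assuming the file follows the usual CrowdNav env.config layout where human_num lives in a [sim] section; the section name is an assumption here, only the human_num key is confirmed above. The change would look roughly like:)

```
[sim]
; increase the simulated crowd to 17 humans
human_num = 17
```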
Dear owners,
As far as I can tell, it doesn't seem like the code for your proposed AEMCARL solution is included in this repo. Would it be possible to add this?
Many thanks,
Sahil