- 16.1. Creating our First Agent with Stable Baselines
- 16.1.1. Evaluating the Trained Agent
- 16.1.2. Storing and Loading the Trained Agent
- 16.1.3. Viewing the Trained Agent
- 16.1.4. Putting it all Together
- 16.2. Multiprocessing with Vectorized Environments
- 16.2.1. SubprocVecEnv
- 16.2.2. DummyVecEnv
- 16.3. Integrating the Custom Environments
- 16.4. Playing Atari Games with DQN and its Variants
- 16.4.1. Implementing DQN Variants
- 16.5. Lunar Lander using A2C
- 16.5.1. Creating a Custom Network
- 16.6. Swinging up a Pendulum using DDPG
- 16.6.1. Viewing the Computational Graph in TensorBoard
- 16.7. Training an Agent to Walk using TRPO
- 16.7.1. Installing MuJoCo Environment
- 16.7.2. Implementing TRPO
- 16.7.3. Recording the Video
- 16.8. Training Cheetah Bot to Run using PPO
- 16.8.1. Making a GIF of a Trained Agent
- 16.8.2. Implementing GAIL
16. Deep Reinforcement Learning with Stable Baselines
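As a quick orientation for Sections 16.1 through 16.1.4, here is a minimal sketch of creating, training, evaluating, saving, loading, and viewing a first agent. It is not the chapter's exact code: it assumes the TensorFlow-based `stable-baselines` package and OpenAI Gym, and DQN on `CartPole-v0` is an illustrative choice of algorithm and environment.

```python
# Minimal sketch (illustrative, not the chapter's exact code): train, evaluate,
# save, load, and watch a first agent with the TensorFlow-based Stable Baselines.
import gym

from stable_baselines import DQN
from stable_baselines.common.evaluation import evaluate_policy

# Create the environment and the agent (CartPole-v0 is an illustrative choice)
env = gym.make('CartPole-v0')
model = DQN('MlpPolicy', env, verbose=1)

# Train the agent
model.learn(total_timesteps=10000)

# Evaluate the trained agent over a few episodes
mean_reward, std_reward = evaluate_policy(model, model.get_env(), n_eval_episodes=10)
print(f"mean reward: {mean_reward:.2f} +/- {std_reward:.2f}")

# Store and reload the trained agent
model.save("dqn_cartpole")
model = DQN.load("dqn_cartpole", env=env)

# View the trained agent for one episode
obs = env.reset()
done = False
while not done:
    action, _states = model.predict(obs)
    obs, reward, done, info = env.step(action)
    env.render()
env.close()
```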
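For Section 16.2, the sketch below shows one common way to train on several environment copies in parallel with `SubprocVecEnv`. Again this is an assumption-laden illustration rather than the chapter's code: A2C on `CartPole-v0` and four workers are arbitrary choices, and swapping `SubprocVecEnv` for `DummyVecEnv` runs the same copies sequentially in the main process, which is often faster for cheap environments.

```python
# Minimal sketch (illustrative, not the chapter's exact code): training A2C
# on several CartPole-v0 copies run in parallel worker processes.
import gym

from stable_baselines import A2C
from stable_baselines.common.vec_env import SubprocVecEnv


def make_env(env_id, rank, seed=0):
    """Return a thunk that builds one seeded copy of the environment."""
    def _init():
        env = gym.make(env_id)
        env.seed(seed + rank)
        return env
    return _init


if __name__ == '__main__':
    # The __main__ guard is required by SubprocVecEnv on platforms that
    # spawn rather than fork subprocesses.
    num_envs = 4  # number of parallel worker processes
    env = SubprocVecEnv([make_env('CartPole-v0', i) for i in range(num_envs)])

    model = A2C('MlpPolicy', env, verbose=1)
    model.learn(total_timesteps=25000)
    env.close()
```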