Code for the NeurIPS 2022 paper "Exploiting Reward Shifting in Value-Based Deep RL"
Common repository for our readings and discussions
Socio-Emotional Reward Design for Intrinsically-Motivated Reinforcement Learning Agents
Code for the ICLR 2024 paper "Reward Design for Justifiable Sequential Decision-Making"
A gymnasium-compatible framework for creating reinforcement learning (RL) environments that solve the optimal power flow (OPF) problem. Contains five OPF benchmark environments for comparable research; a minimal usage sketch follows below.
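Because the framework is described as gymnasium-compatible, its environments can presumably be driven through the standard gymnasium reset/step loop. The sketch below is an assumption-based illustration: the environment ID "OPF-v0" is hypothetical, and the actual registered IDs depend on the framework itself.

```python
import gymnasium as gym

# Hypothetical environment ID; substitute one of the framework's registered OPF environments.
env = gym.make("OPF-v0")

# Standard gymnasium interaction loop: reset, then step until the episode ends.
obs, info = env.reset(seed=0)
terminated, truncated = False, False
total_reward = 0.0

while not (terminated or truncated):
    action = env.action_space.sample()  # random policy as a placeholder
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward

env.close()
print(f"Episode return: {total_reward}")
```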