
RewireNeuron

Code for NeurIPS 2023 paper Rewiring Neurons in Non-Stationary Environments.

Requirements

  • Make sure you have PyTorch and JAX installed with CUDA support.
  • Install SaLinA and Continual World following their instructions. Note that the latter only supports MuJoCo version 2.0.
  • Install additional packages via pip install -r requirements.txt.

Getting started

Simply run run.py with the desired config available in configs:

python run.py -cn=METHOD scenario=SCENARIO OPTIONAL_CONFIGS
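For example, assuming the config for this paper's method is named rewire (matching the Rewire agent listed under agents below), a full training run on the HalfCheetah forgetting scenario would look like:

python run.py -cn=rewire scenario=halfcheetah/forgetting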

Available methods


We present 9 different CRL methods, all built on top of the Soft Actor-Critic (SAC) algorithm. To try one, just add the flag -cn=my_method on the command line. You can find the hyperparameters in configs.

Available scenarios


We integrate 9 CRL scenarios over 3 different Brax domains, plus 2 scenarios from the Continual World domain. To try one, just add the flag scenario=... on the command line (see the example after the list):

  • halfcheetah/forgetting: 8 tasks - 1M samples for each task.
  • halfcheetah/transfer: 8 tasks - 1M samples for each task.
  • halfcheetah/robustness: 8 tasks - 1M samples for each task.
  • halfcheetah/compositionality: 8 tasks - 1M samples for each task.
  • ant/forgetting: 8 tasks - 1M samples for each task.
  • ant/transfer: 8 tasks - 1M samples for each task.
  • ant/robustness: 8 tasks - 1M samples for each task.
  • ant/compositionality: 8 tasks - 1M samples for each task.
  • humanoid/hard: 4 tasks - 2M samples for each task.
  • continual_world/t1-t8: 8 triplets of 3 tasks - 1M samples for each task.
  • continual_world/cw10: 10 tasks - 1M samples for each task.
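For example, reusing the command template above, a run on the 10-task Continual World sequence would be:

python run.py -cn=METHOD scenario=continual_world/cw10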

Repository structure


The core.py file contains the building blocks of this framework. Each experiment consists of running a Framework over a Scenario, i.e., a sequence of train and test Tasks. The models are learning procedures that use CRL agents to interact with the tasks and learn from them through one or multiple algorithms. A rough sketch of how these pieces fit together follows the directory list below.

  • frameworks contains generic learning procedures (e.g. using only one algorithm, or adding a regularization method at the end).
  • scenarios contains CRL scenarios, i.e., sequences of train and test tasks.
  • algorithms contains different RL / CL algorithms (e.g. SAC, or EWC).
  • agents contains CRL agents (e.g. PackNet, CSP, or Rewire).
  • configs contains the config files of benchmarked methods/scenarios.
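As a rough illustration of how these pieces fit together (class and method names below are hypothetical, not the actual core.py API):

class Scenario:
    """A sequence of train Tasks followed by a sequence of test Tasks."""
    def __init__(self, train_tasks, test_tasks):
        self.train_tasks = train_tasks
        self.test_tasks = test_tasks

class Framework:
    """A generic learning procedure: a CRL agent trained by one or more algorithms."""
    def __init__(self, agent, algorithms):
        self.agent = agent            # e.g. PackNet, CSP, or Rewire
        self.algorithms = algorithms  # e.g. SAC, optionally combined with a CL method such as EWC

    def run(self, scenario):
        # Train sequentially on each task, then evaluate on the test tasks.
        for task in scenario.train_tasks:
            for algorithm in self.algorithms:
                algorithm.train(self.agent, task)  # interact with the task and learn from it
        return [task.evaluate(self.agent) for task in scenario.test_tasks]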

Acknowledgement

Our implementation is based on:

  • SaLinA with multiple CRL scenarios and methods already in place.
  • SoftSort for parameter-efficient differentiable sorting.
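For context, here is a minimal PyTorch sketch of the SoftSort relaxation used for differentiable sorting; it is an illustration following the SoftSort formulation (absolute-difference distance, temperature tau), not the code used in this repository:

import torch

def softsort(s: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    """Return an (n, n) row-stochastic relaxation of the permutation matrix
    that sorts the score vector s (shape (n,)) in descending order."""
    s = s.unsqueeze(-1)                                   # (n, 1)
    sorted_s, _ = torch.sort(s, dim=0, descending=True)   # (n, 1) sorted scores
    pairwise = (sorted_s - s.transpose(0, 1)).abs()       # (n, n) pairwise distances
    return torch.softmax(-pairwise / tau, dim=-1)         # row-wise softmax

As tau approaches 0 the output approaches the hard permutation matrix, while larger tau gives a smoother, fully differentiable approximation.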
