
Add SimBa Policy: Simplicity Bias for Scaling Up Parameters in DRL #59

Merged: 15 commits merged into master from feat/simba on Jan 14, 2025

Conversation

@araffin (Owner) commented Nov 1, 2024

Description

https://openreview.net/forum?id=jXLiDKsuDo

https://arxiv.org/abs/2410.09754

Perf report: https://wandb.ai/openrlbenchmark/sbx/reports/Simba-SBX-Perf-Report--VmlldzoxMDM5MjQxOQ
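For reference, the core of the SimBa architecture is a plain residual MLP: a linear embedding, a stack of pre-LayerNorm residual blocks with an inverted-bottleneck feedforward, and a final LayerNorm, with observations normalized using running statistics (RSNorm). A rough flax sketch of that block, based on the paper above (illustrative only, not the exact code added in this PR):

import flax.linen as nn
import jax.numpy as jnp


class SimbaResidualBlock(nn.Module):
    """Pre-LayerNorm residual block with a 4x inverted-bottleneck MLP (sketch)."""

    hidden_dim: int

    @nn.compact
    def __call__(self, x: jnp.ndarray) -> jnp.ndarray:
        out = nn.LayerNorm()(x)
        out = nn.Dense(4 * self.hidden_dim)(out)
        out = nn.relu(out)
        out = nn.Dense(self.hidden_dim)(out)
        return x + out


class SimbaNetwork(nn.Module):
    """Embedding -> residual blocks -> final LayerNorm (sketch)."""

    hidden_dim: int = 128
    n_blocks: int = 1

    @nn.compact
    def __call__(self, obs: jnp.ndarray) -> jnp.ndarray:
        # Observation normalization (RSNorm in the paper) is assumed to happen
        # before this network, e.g. via VecNormalize (see the discussion below).
        x = nn.Dense(self.hidden_dim)(obs)
        for _ in range(self.n_blocks):
            x = SimbaResidualBlock(self.hidden_dim)(x)
        return nn.LayerNorm()(x)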

Motivation and Context

  • I have raised an issue to propose this change (required for new features and bug fixes)

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Documentation (update in the documentation)

Checklist:

  • I've read the CONTRIBUTION guide (required)
  • I have updated the changelog accordingly (required).
  • My change requires a change to the documentation.
  • I have updated the tests accordingly (required for a bug fix or a new feature).
  • I have updated the documentation accordingly.
  • I have reformatted the code using make format (required)
  • I have checked the codestyle using make check-codestyle and make lint (required)
  • I have ensured make pytest and make type both pass. (required)
  • I have checked that the documentation builds using make doc (required)

Note: You can run most of the checks using make commit-checks.

Note: we are using a maximum length of 127 characters per line

@LucasAlegre

Hi @araffin! I was curious to see that you have not implemented RSNorm (at least so far) in this SimBa implementation. From the paper (see Figure 12), RSNorm is critical to the performance of SimBa. I found this very surprising, and I was also wondering why simply using BatchNorm to normalize the inputs does not have the same effect (according to Figure 12, it is much worse).

@araffin (Owner, Author) commented Nov 4, 2024

I was curious to see that you have not implemented RSNorm (at least until now) in this Simba implementation.

because I use VecNormalize:

import optax

# Hyperparameter config (RL Zoo style): SimBa policy with observations normalized by VecNormalize
default_hyperparams = dict(
    n_envs=1,
    n_timesteps=int(5e5),
    policy="SimbaPolicy",
    learning_rate=3e-4,
    # qf_learning_rate=1e-3,
    policy_kwargs={
        "optimizer_class": optax.adamw,
        # "optimizer_kwargs": {"weight_decay": 0.01},
        "net_arch": {"pi": [128], "qf": [256, 256]},
        "n_critics": 2,
    },
    learning_starts=10_000,
    normalize={"norm_obs": True, "norm_reward": False},
)

hyperparams = {}

# Use the same hyperparameters for every environment
for env_id in [
    "HalfCheetah-v4",
    "Humanoid-v4",
    "HalfCheetahBulletEnv-v0",
    "Ant-v4",
    "Hopper-v4",
    "Walker2d-v4",
    "Swimmer-v4",
    "AntBulletEnv-v0",
    "HopperBulletEnv-v0",
    "Walker2DBulletEnv-v0",
    "BipedalWalkerHardcore-v3",
    "Pendulum-v1",
]:
    hyperparams[env_id] = default_hyperparams

So far in my tests, having a second critic was more important. I suspect that the hyperparameters presented in the paper are overfitted to the DMC hard benchmark.

why not simply using BatchNorm

Probably because they would need to use CrossQ for that (CrossQ is what makes BatchNorm work in the critic).
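
For context: VecNormalize keeps running statistics (mean and variance) of the observations and standardizes them before they reach the network, which is essentially the same operation RSNorm performs as the first layer of the network. A minimal NumPy sketch of that shared idea (not the actual VecNormalize or RSNorm code):

import numpy as np


class RunningObsNorm:
    """Running mean/variance observation standardization (sketch)."""

    def __init__(self, shape, eps: float = 1e-8):
        self.mean = np.zeros(shape)
        self.var = np.ones(shape)
        self.count = eps
        self.eps = eps

    def update(self, obs_batch: np.ndarray) -> None:
        # Merge the batch statistics into the running mean/variance
        batch_mean = obs_batch.mean(axis=0)
        batch_var = obs_batch.var(axis=0)
        batch_count = obs_batch.shape[0]
        delta = batch_mean - self.mean
        total = self.count + batch_count
        new_mean = self.mean + delta * batch_count / total
        m2 = self.var * self.count + batch_var * batch_count + delta**2 * self.count * batch_count / total
        self.mean, self.var, self.count = new_mean, m2 / total, total

    def normalize(self, obs: np.ndarray) -> np.ndarray:
        return (obs - self.mean) / np.sqrt(self.var + self.eps)

VecNormalize additionally clips the normalized observations and can normalize rewards, but the running standardization above is the part that overlaps with RSNorm.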

@LucasAlegre

That is interesting. I was also surprised that they removed clipped double Q-learning from SAC, and there is no ablation on that in the paper. At the moment I am using CrossQ + DroQ for a personal project, and I am really curious whether it is worth switching to SimBa. It would be really cool if you could share your findings, thanks! :)

@araffin (Owner, Author) commented Nov 25, 2024

It would be really cool if you could share your findings, thanks! :)

so far, I'm actually quite happy with TQC + Simba (see #60 (comment)), but I need to do a more systematic evaluation soon.
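
For anyone who wants to try the same combination, a hedged sketch of what TQC + SimBa could look like with sbx once this PR is in; the "SimbaPolicy" string and the policy_kwargs mirror the config posted above, and exact names may differ in the merged code:

import optax
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.vec_env import VecNormalize

from sbx import TQC

# Normalize observations outside the network (standing in for RSNorm)
env = VecNormalize(make_vec_env("HalfCheetah-v4"), norm_obs=True, norm_reward=False)

model = TQC(
    "SimbaPolicy",
    env,
    learning_rate=3e-4,
    learning_starts=10_000,
    policy_kwargs={
        "optimizer_class": optax.adamw,
        "net_arch": {"pi": [128], "qf": [256, 256]},
        "n_critics": 2,
    },
    verbose=1,
)
model.learn(total_timesteps=500_000)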

@LucasAlegre

It would be really cool if you could share your findings, thanks! :)

so far, I'm actually quite happy with TQC + Simba (see #60 (comment)), but I need to do a more systematic evaluation soon.

Thanks a lot for the reply! Do you have any insights on how it compares with CrossQ? Or is it possible to combine CrossQ and Simba?

@Jackflyingzzz

Hi @araffin, how does the performance of TQC + SimBa compare to TQC + DroQ with a similar parameter count? Thanks!

@araffin (Owner, Author) commented Dec 11, 2024

Current perf report (early results, only 3 seeds, MuJoCo envs only, PyBullet envs coming later): https://wandb.ai/openrlbenchmark/sbx/reports/Simba-SBX-Perf-Report--VmlldzoxMDM5MjQxOQ

how do you find the performance of TQC simba compare to TQC droQ with similar parameter?

I haven't had much time to investigate that yet.

@araffin araffin changed the title Simba SAC Simba Network Dec 20, 2024
@araffin araffin marked this pull request as ready for review December 20, 2024 14:42
@araffin araffin requested a review from jan1854 December 20, 2024 15:08
@araffin araffin changed the title Simba Network Add SimBa Policy: Simplicity Bias for Scaling Up Parameters in DRL Dec 20, 2024
@araffin araffin merged commit 9cad1d0 into master Jan 14, 2025
4 checks passed
@araffin araffin deleted the feat/simba branch January 14, 2025 13:20