Summary

I'm pointing out that gas submissions should include benchmarks to help sponsors and judges accurately assess the gas savings achieved by the findings in a report. I've seen numerous reports containing findings such as "move checks to the top of functions to save gas", but in practice this often yields minimal savings and, in some cases, merely returns unused gas on revert rather than reducing the cost of successful calls. Additionally, findings such as "refactor code or functions to save gas" and "rearrange variables or structs" often lack practical data (i.e., an exact benchmark of how much gas the refactoring would save) to support their claimed impact. Wardens proposing changes to code structure, storage layout, or function calls for gas optimization should experimentally validate their findings using tools like Foundry or Hardhat to provide concrete gas benchmarks.
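To illustrate, here is a minimal sketch of what such a benchmark could look like with Foundry's forge-std. The contracts and the "check ordering" finding are hypothetical, purely for illustration; running `forge test --gas-report` (or `forge snapshot`) on a pair of tests like this yields the exact numbers a submission could cite:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

import "forge-std/Test.sol";

// Hypothetical "check ordering" finding: the same deposit logic with
// the cheap input check placed after vs. before a storage read.
contract CheckAfterSload {
    uint256 public total;

    function deposit(uint256 amount) external {
        uint256 current = total; // SLOAD happens first
        require(amount > 0, "zero amount");
        total = current + amount;
    }
}

contract CheckBeforeSload {
    uint256 public total;

    function deposit(uint256 amount) external {
        require(amount > 0, "zero amount"); // cheap check moved up
        uint256 current = total;
        total = current + amount;
    }
}

contract GasBenchmarkTest is Test {
    CheckAfterSload internal checkAfter;
    CheckBeforeSload internal checkBefore;

    function setUp() public {
        checkAfter = new CheckAfterSload();
        checkBefore = new CheckBeforeSload();
    }

    // Run `forge test --gas-report` (or `forge snapshot`) and compare
    // the reported gas of the two deposit calls on the happy path.
    function testGas_checkAfterSload() public {
        checkAfter.deposit(1);
    }

    function testGas_checkBeforeSload() public {
        checkBefore.deposit(1);
    }
}
```

On the happy path the two variants cost nearly the same, which is exactly the kind of result a concrete benchmark would surface.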
I agree that for findings like "use bitmaps instead of booleans in mappings (saves 100000)", providing gas benchmarks is impractical, as doing so would necessitate a complete overhaul of the protocol's codebase. However, these types of findings can be effectively identified by automated bots, so no manual work is needed.
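For reference, the bitmap pattern that finding refers to looks roughly like the sketch below (contract names are illustrative); the saving comes from packing 256 flags into one storage slot instead of using a slot per boolean:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

// One storage slot per flag: every first write is a costly
// zero-to-nonzero SSTORE.
contract BoolFlags {
    mapping(uint256 => bool) public claimed;

    function setClaimed(uint256 index) external {
        claimed[index] = true;
    }
}

// Bitmap variant: 256 flags packed per slot, so writes to nearby
// indexes usually update an already-nonzero slot instead.
contract BitmapFlags {
    mapping(uint256 => uint256) private claimedBitmap;

    function setClaimed(uint256 index) external {
        claimedBitmap[index >> 8] |= uint256(1) << (index & 0xff);
    }

    function isClaimed(uint256 index) external view returns (bool) {
        return (claimedBitmap[index >> 8] & (uint256(1) << (index & 0xff))) != 0;
    }
}
```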
Recommendation
We recommend mandating the inclusion of gas benchmarks in gas submissions. This would encourage wardens to experimentally validate their findings with code, leading to more unique and valuable insights for sponsors.
(Please correct me if I'm wrong.)
Thank you