Hi ProteinGym team!

We would like to submit our model, Kermut, to the supervised substitution benchmark leaderboards, where it achieves state-of-the-art performance across splits.
All code required to run the model has been added to `proteingym/baselines/kermut`, and model details have been added to `proteingym/constants.json`. We have included a README with detailed instructions on how to access either the raw or the preprocessed data (recommended) and train/evaluate the model. Our `data` directory matches that of ProteinGym and should work seamlessly if scripts are called from the ProteinGym root directory. Aggregated results across assays can be found in `kermut/results/summary`, while the per-variant predictions can be accessed via the README.
To reproduce results, the preprocessed data (e.g., precomputed embeddings) can be downloaded and unzipped as described, after which either `example_scripts/benchmark_single_dataset.sh` or `example_scripts/benchmark_all_datasets.sh` can be run. We have a script to merge all individual results (`src/results/merge_score_files.py`), the output of which can be used directly with `performance_DMS_supervised_benchmarks.py`. Note that our environment has additional dependencies, all of which are listed in `kermut/environment.yaml`.
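The steps above might be sketched as follows (a rough outline only: the data-download location, script arguments, and exact script paths are given in the Kermut README, and the invocations below are assumptions based on the description above):

```shell
# Sketch of the reproduction pipeline, run from the ProteinGym root.
# Download and unzip the preprocessed data first, as described in the README.

# Create the environment with Kermut's additional dependencies:
conda env create -f kermut/environment.yaml

# 1. Score a single assay (or use benchmark_all_datasets.sh for the full set):
bash example_scripts/benchmark_single_dataset.sh

# 2. Merge the per-assay score files into a single results file:
python src/results/merge_score_files.py

# 3. Evaluate the merged scores with the standard ProteinGym script:
python performance_DMS_supervised_benchmarks.py
```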
The results in `results/summary` are from the updated CV splits (see issue), while results for the old splits, which can be directly compared to the current leaderboard, can be found in `results/summary_old`.
Thank you for your efforts!
Peter