```
conda create --name myenv --file conda_env/attn_tf22_py37.txt
```
or install TensorFlow directly:
```
pip install tensorflow-gpu==2.2.0
# or
conda install tensorflow-gpu==2.2.0
```
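
After setting up the environment, a quick sanity check such as the following (an illustrative snippet, not part of this repository) confirms that the expected TensorFlow version is installed and that a GPU is visible:

```python
# Illustrative environment check -- not part of this repository.
import tensorflow as tf

# The code targets TensorFlow 2.2.0.
print("TensorFlow version:", tf.__version__)

# An empty list here means training would fall back to the CPU.
print("Visible GPUs:", tf.config.list_physical_devices("GPU"))
```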
- To learn the corresponding attention weights (see the illustrative sketch further below):
  ```
  python META_train --option attn_layer
  ```
- To learn the corresponding final-layer weights:
  ```
  python META_train --option last_layer
  ```
- To evaluate trained models on the test sets and generate intermediate results, such as probabilities:
  ```
  python META_eval --option eval --exp <num> --task <task code>
  ```
- To plot the final results:
  ```
  python META_eval --option plot --exp <num> --task <task code>
  ```
`<num>` is one of `1`, `2`, or `3`. `<task code>` is either `EXP` or `retrain`, where `EXP` corresponds to models trained with the attention layer and `retrain` corresponds to training the last layer without attention.
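
The actual model definition lives in `top_down_attention/keras_custom/`; as a rough illustration of what the `attn_layer` option trains, the sketch below inserts a trainable channel-wise gain on top of a frozen pretrained backbone and trains only that gain plus a classifier head. The backbone choice (VGG16), the layer placement, and all names here are assumptions for illustration, not the repository's code.

```python
# Illustrative sketch only -- not the repository's actual model definition.
import tensorflow as tf
from tensorflow.keras import layers, Model

class ChannelGain(layers.Layer):
    """Trainable per-channel multiplicative gain (a simple stand-in for an attention layer)."""
    def build(self, input_shape):
        n_channels = int(input_shape[-1])
        # One non-negative gain per feature-map channel, initialised to 1 (identity).
        self.gain = self.add_weight(
            name="gain",
            shape=(n_channels,),
            initializer="ones",
            constraint=tf.keras.constraints.NonNeg(),
        )
        super().build(input_shape)

    def call(self, x):
        # Broadcast the per-channel gain over the spatial dimensions.
        return x * self.gain

# Frozen pretrained backbone; only the gain layer and the classifier head train here.
backbone = tf.keras.applications.VGG16(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3)
)
backbone.trainable = False

inputs = tf.keras.Input(shape=(224, 224, 3))
features = backbone(inputs)
attended = ChannelGain()(features)
pooled = layers.GlobalAveragePooling2D()(attended)
outputs = layers.Dense(1000, activation="softmax")(pooled)

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy")
```

In this sketch, the `last_layer` / `retrain` setting would correspond to dropping the gain layer and training only the final `Dense` head.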
- Model definition, custom data generator, and fitting function can be found in `top_down_attention/keras_custom/` (a minimal generator sketch follows this list)
- Model training code can be found in `top_down_attention/TRAIN/`
- Model evaluation and results-plotting code can be found in `top_down_attention/EVAL/`
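
The contents of `keras_custom/` are specific to this repository; as a rough picture of what a custom Keras data generator looks like, here is a minimal `tf.keras.utils.Sequence` sketch (the class name and arguments are assumptions for illustration, not this repository's API):

```python
# Illustrative sketch of a custom Keras data generator -- not the repository's implementation.
import numpy as np
import tensorflow as tf

class SimpleImageSequence(tf.keras.utils.Sequence):
    """Yields (images, labels) batches that can be passed directly to model.fit()."""

    def __init__(self, images, labels, batch_size=32):
        self.images = images          # e.g. an array of shape (N, 224, 224, 3)
        self.labels = labels          # e.g. one-hot labels of shape (N, num_classes)
        self.batch_size = batch_size

    def __len__(self):
        # Number of batches per epoch.
        return int(np.ceil(len(self.images) / self.batch_size))

    def __getitem__(self, idx):
        batch = slice(idx * self.batch_size, (idx + 1) * self.batch_size)
        return self.images[batch], self.labels[batch]

# Usage: model.fit(SimpleImageSequence(train_x, train_y), epochs=5)
```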
```
@article{Luo2021TheNetworks,
  title   = {{The costs and benefits of goal-directed attention in deep convolutional neural networks}},
  author  = {Luo, Xiaoliang and Roads, Brett D. and Love, Bradley C.},
  journal = {Computational Brain {\&} Behavior},
  year    = {2021},
  month   = {2},
  pages   = {1--18},
  url     = {https://doi.org/10.1007/s42113-021-00098-y},
  doi     = {10.1007/s42113-021-00098-y},
}
```