
# Paramixer

## About

The PyTorch implementation of Paramixer from the paper *Paramixer: Parameterizing Mixing Links in Sparse Factors Works Better than Dot-Product Self-Attention* (CVPR 2022).

## Citation

```bibtex
@inproceedings{9878955,
  title     = {Paramixer: Parameterizing Mixing Links in Sparse Factors Works Better than Dot-Product Self-Attention},
  author    = {Yu, Tong and Khalitov, Ruslan and Cheng, Lei and Yang, Zhirong},
  booktitle = {2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2022},
  pages     = {681--690}
}
```

## Datasets

  1. LRA: https://mega.nz/file/tBdAyCwA#AvMIYJrkLset-Xb9ruA7fK04zZ_Jx2p7rdwrVVaTckE

## Training Steps

1. Create a data folder:

   ```shell
   mkdir data
   ```

2. Download the compressed dataset archive:

   ```shell
   wget $URL
   ```

3. Decompress the archive and move its contents into the data folder:

   ```shell
   unzip $dataset.zip
   mv $dataset ./data/$dataset
   ```

4. Run the main file:

   ```shell
   python $dataset_main.py --task="$task"
   ```
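The data-preparation steps above can be dry-run end to end without the real download. The sketch below uses a hypothetical dataset name (`demo_dataset`) and fabricates a tiny archive locally in place of `wget $URL`; it also uses Python's `zipfile` module as a portable stand-in for `unzip`:

```shell
set -e
workdir=$(mktemp -d)          # sandbox so nothing touches the real repo
cd "$workdir"
dataset=demo_dataset          # hypothetical name; substitute the real one
mkdir data

# Stand-in for `wget $URL`: build a tiny archive locally
mkdir "$dataset"
echo "sample" > "$dataset/train.src"
python3 -m zipfile -c "$dataset.zip" "$dataset"
rm -r "$dataset"

# Steps 3: decompress the archive and move the contents into data/
python3 -m zipfile -e "$dataset.zip" .
mv "$dataset" "data/$dataset"
ls "data/$dataset"
```

With the actual LRA archive, replace the fabricated-archive block with the `wget` download and keep the extract-and-move commands unchanged.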

## Requirements

To install requirements:

```shell
pip3 install -r requirements.txt
```