A comprehensive collection of IQA papers, datasets, and code. We also provide PyTorch implementations of mainstream metrics in IQA-PyTorch.
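
For readers who mainly want scores, a minimal usage sketch of IQA-PyTorch (the `pyiqa` package) is below. The image paths are placeholders and the API is summarized from memory, so check the repository documentation for the exact, current interface.

```python
# Minimal sketch of scoring images with IQA-PyTorch (pyiqa); paths are placeholders.
import torch
import pyiqa

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# List the metric names currently implemented (both NR and FR).
print(pyiqa.list_models())

# No-reference example: MUSIQ only needs the distorted image.
nr_metric = pyiqa.create_metric("musiq", device=device)
nr_score = nr_metric("distorted.jpg")  # accepts an image path or an (N, 3, H, W) tensor in [0, 1]

# Full-reference example: LPIPS compares a distorted image against its reference.
fr_metric = pyiqa.create_metric("lpips", device=device)
fr_score = fr_metric("distorted.jpg", "reference.jpg")

# Some metrics are "lower is better" (e.g. LPIPS); this flag tells you which convention applies.
print(fr_metric.lower_better)
```
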
Related Resources:
- Awesome Image Aesthetic Assessment and Cropping. A curated list of resources including papers, datasets, and relevant links to aesthetic evaluation and cropping.
All IQA types unified in a single model
- [ECCV 2024] PromptIQA: Boosting the Performance and Generalization for No-Reference Image Quality Assessment via Prompts, Chen et al. Bibtex
- [ICML 2024] Q-Align: Teaching LMMs for Visual Scoring via Discrete Text-Defined Levels, Wu et al. Github | Bibtex
Human-readable IQA, mostly with large language models
- [TPAMI 2024] Q-Bench+: A Benchmark for Multi-modal Foundation Models on Low-level Vision from Single Images to Pairs, Zhang et al. Github | Bibtex
- [ACM MM2024] Q-Ground: Image Quality Grounding with Large Multi-modality Models, Chen et al. Bibtex | Github
- [Arxiv 2024] VisualCritic: Making LMMs Perceive Visual Quality Like Humans, Huang et al. Bibtex
- [ECCV 2024] A Comprehensive Study of Multimodal Large Language Models for Image Quality Assessment, Wu et al. Github | Bibtex
- [ECCV 2024] Towards Open-ended Visual Quality Comparison, Wu et al. Github | Bibtex
- [ECCV 2024] Depicting Beyond Scores: Advancing Image Quality Assessment through Multi-modal Language Models, You et al. Project | Bibtex
- [CVPR 2024] Q-Instruct: Improving Low-level Visual Abilities for Multi-modality Foundation Models, Wu et al. Github | Bibtex
- [ICLR 2024] Q-Bench: A Benchmark for General-Purpose Foundation Models on Low-level Vision, Wu et al. Github | Bibtex
- [ICCV 2023] TIFA: Text-to-Image Faithfulness Evaluation with Question Answering, Hu et al. Github | Bibtex | Project
- [NeurIPS 2023] ImageReward: Learning and Evaluating Human Preferences for Text-to-image Generation, Xu et al. Github | Bibtex
- [ICCV2023] Better Aligning Text-to-Image Models with Human Preference, Wu et al. Github | Github(HPSv2) | Bibtex
- [NeurIPS 2023] Pick-a-Pic: An Open Dataset of User Preferences for Text-to-Image Generation, Kirstain et al. Github | Bibtex
- [TCSVT2023] A Fine-grained Subjective Perception & Alignment Database for AI Generated Image Quality Assessment, Li et al. Github | Bibtex
- [Arxiv 2024] Q-Mamba: On First Exploration of Vision Mamba for Image Quality Assessment, Guan et al. Bibtex
- [Arxiv 2024] Adaptive Image Quality Assessment via Teaching Large Multimodal Model to Compare, Zhu et al. Bibtex
- [Arxiv 2024] Quality-aware Image-Text Alignment for Real-World Image Quality Assessment, Agnolucci et al. Github | Bibtex
- [CVPR 2024] Bridging the Synthetic-to-Authentic Gap: Distortion-Guided Unsupervised Domain Adaptation for Blind Image Quality Assessment, Li et al. Bibtex
- [CVPR 2024] Boosting Image Quality Assessment through Efficient Transformer Adaptation with Local Feature Enhancement, Xu et al. Bibtex
- [WACV2024] ARNIQA: Learning Distortion Manifold for Image Quality Assessment, Agnolucci et al. Github | Bibtex
- [TIP2023] TOPIQ: A Top-down Approach from Semantics to Distortions for Image Quality Assessment, Chen et al. Github | Bibtex
- [ICCV2023] Test Time Adaptation for Blind Image Quality Assessment, Roy et al. Github | Bibtex
- [CVPR2023] Re-IQA: Unsupervised Learning for Image Quality Assessment in the Wild, Saha et al. Bibtex | Github
- [CVPR2023] Blind Image Quality Assessment via Vision-Language Correspondence: A Multitask Learning Perspective, Zhang et al. Github | Bibtex
- [CVPR2023] Quality-aware Pre-trained Models for Blind Image Quality Assessment, Zhao et al. Bibtex
- [AAAI2023] Exploring CLIP for Assessing the Look and Feel of Images, Wang et al. Github | Bibtex
- [AAAI2023] Data-Efficient Image Quality Assessment with Attention-Panel Decoder, Qin et al. Github | Bibtex
- [TPAMI2022] Continual Learning for Blind Image Quality Assessment, Zhang et al. Github | Bibtex
- [TIP2022] No-Reference Image Quality Assessment by Hallucinating Pristine Features, Chen et al. Github | Bibtex
- [TIP2022] VCRNet: Visual Compensation Restoration Network for No-Reference Image Quality Assessment, Pan et al. Github | Bibtex
- [TMM2022] GraphIQA: Learning Distortion Graph Representations for Blind Image Quality Assessment, Sun et al. Github | Bibtex
- [CVPR2021] Troubleshooting Blind Image Quality Models in the Wild, Wang et al. Github | Bibtex
| Paper Link | Method | Type | Published | Code | Keywords |
|---|---|---|---|---|---|
| arXiv | MANIQA | NR | CVPRW2022 | Official | Transformer, multi-dimension attention, dual branch |
| arXiv | TReS | NR | WACV2022 | Official | Transformer, relative ranking, self-consistency |
| | KonIQ++ | NR | BMVC2021 | Official | Multi-task with distortion prediction |
| arXiv | MUSIQ | NR | ICCV2021 | Official / Pytorch | Multi-scale, transformer, Aspect Ratio Preserved (ARP) resizing |
| arXiv | CKDN | NR | ICCV2021 | Official | Degraded reference, Conditional knowledge distillation (related to HIQA) |
| | HyperIQA | NR | CVPR2020 | Official | Content-aware hyper network |
| arXiv | Meta-IQA | NR | CVPR2020 | Official | Meta-learning |
| arXiv | GIQA | NR | ECCV2020 | Official | Generated image |
| arXiv | PI | NR | 2018 PIRM Challenge | Project | 1/2 * (NIQE + (10 - NRQM)); see the sketch after this table |
| arXiv | HIQA | NR | CVPR2018 | Project | Hallucinated reference |
| arXiv | BPSQM | NR | CVPR2018 | | Pixel-wise quality map |
| arXiv | RankIQA | NR | ICCV2017 | Github | Pretrain on synthetically ranked data |
| | CNNIQA | NR | CVPR2014 | PyTorch | First CNN-based NR-IQA |
| arXiv | UNIQUE | NR | TIP2021 | Github | Combine synthetic and authentic image pairs |
| arXiv | DBCNN | NR | TCSVT2020 | Official | Two branches for synthetic and authentic distortions |
| | SFA | NR | TMM2019 | Official | Aggregate ResNet50 features of multiple cropped patches |
| pdf/arXiv | PQR | NR/Aesthetic | TIP2019 | Official1/Official2 | Unify different types of aesthetic labels |
| arXiv | WaDIQaM (deepIQA) | NR/FR | TIP2018 | PyTorch | Weighted average of patch qualities, shared FR/NR models |
| | NIMA | NR | TIP2018 | PyTorch/Tensorflow | Squared EMD loss; see the sketch after this table |
| | MEON | NR | TIP2017 | | Multi-task: distortion learning and quality prediction |
| arXiv | dipIQ | NR | TIP2017 | download | Similar to RankIQA |
| arXiv | NRQM (Ma) | NR | CVIU2017 | Project | Traditional, Super resolution |
| arXiv | FRIQUEE | NR | JoV2017 | Official | Authentically Distorted, Bag of Features |
| IEEE | HOSA | NR | TIP2016 | Matlab download | Traditional |
| | ILNIQE | NR | TIP2015 | Official | Traditional |
| | BRISQUE | NR | TIP2012 | Official | Traditional |
| | BLIINDS-II | NR | TIP2012 | Official | |
| | CORNIA | NR | CVPR2012 | Matlab download | Codebook Representation |
| | NIQE | NR | SPL2012 | Official | Traditional |
| | DIIVINE | NR | TIP2011 | Official | |
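
Two of the keyword entries above are simple enough to spell out. First, the Perceptual Index (PI) from the 2018 PIRM challenge is not a learned model but a fixed combination of NIQE and NRQM (Ma). The sketch below only performs that arithmetic and assumes the two component scores were already computed elsewhere (e.g. with the `niqe` and `nrqm` metrics in IQA-PyTorch).

```python
# Sketch of the Perceptual Index: PI = 1/2 * (NIQE + (10 - NRQM)).
def perceptual_index(niqe_score: float, nrqm_score: float) -> float:
    """Lower PI is better (NIQE: lower is better, NRQM: higher is better)."""
    return 0.5 * (niqe_score + (10.0 - nrqm_score))

print(perceptual_index(niqe_score=4.2, nrqm_score=6.5))  # 3.85 (made-up scores)
```

Second, NIMA's "squared EMD loss" compares a predicted distribution over score bins against the ground-truth rating histogram via their cumulative distributions. The sketch below is a common r = 2 formulation, not necessarily the authors' exact code.

```python
# Hedged sketch of a squared EMD (Earth Mover's Distance) loss over score distributions, r = 2.
import torch

def squared_emd_loss(pred_dist: torch.Tensor, target_dist: torch.Tensor) -> torch.Tensor:
    """pred_dist, target_dist: (N, num_bins) probability distributions over the score bins."""
    cdf_pred = torch.cumsum(pred_dist, dim=1)
    cdf_target = torch.cumsum(target_dist, dim=1)
    # Mean squared difference of the CDFs per sample, square root, then average over the batch.
    return torch.sqrt(torch.mean((cdf_pred - cdf_target) ** 2, dim=1)).mean()
```
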
- [ECCV2022] Shift-tolerant Perceptual Similarity Metric, Ghildyal et al. Github | Bibtex
- [BMVC2022] Content-Diverse Comparisons improve IQA, Thong et al. Bibtex
- [ACM MM2022] Quality Assessment of Image Super-Resolution: Balancing Deterministic and Statistical Fidelity, Zhou et al. Github | Bibtex
| Paper Link | Method | Type | Published | Code | Keywords |
|---|---|---|---|---|---|
| arXiv | AHIQ | FR | CVPR2022 NTIRE workshop | Official | Attention, Transformer |
| arXiv | JSPL | FR | CVPR2022 | Official | Semi-supervised and positive-unlabeled (PU) learning |
| arXiv | CVRKD | NAR | AAAI2022 | Official | Non-aligned content reference, knowledge distillation |
| arXiv | IQT | FR | CVPRW2021 | PyTorch | Transformer |
| arXiv | A-DISTS | FR | ACMM2021 | Official | |
| arXiv | DISTS | FR | TPAMI2021 | Official | |
| arXiv | LPIPS | FR | CVPR2018 | Project | Perceptual similarity, Pairwise Preference |
| arXiv | PieAPP | FR | CVPR2018 | Project | Perceptual similarity, Pairwise Preference |
| arXiv | WaDIQaM | NR/FR | TIP2018 | Official | |
| arXiv | JND-SalCAR | FR | TCSVT2020 | | JND (Just-Noticeable-Difference) |
| | QADS | FR | TIP2019 | Project | Super-resolution |
| | FSIM | FR | TIP2011 | Project | Traditional |
| | VIF/IFC | FR | TIP2006 | Project | Traditional |
| | MS-SSIM | FR | | Project | Traditional |
| | SSIM | FR | TIP2004 | Project | Traditional |
| | PSNR | FR | | | Traditional |
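
As a reference point for the traditional metrics at the bottom of this table, the PSNR entry reduces to a one-line formula, PSNR = 10 * log10(MAX^2 / MSE). A minimal sketch for 8-bit images is below; SSIM, MS-SSIM, and the learned metrics are better taken from an existing implementation such as IQA-PyTorch.

```python
# Minimal PSNR sketch for two aligned images of the same shape, assuming an 8-bit range (MAX = 255).
import numpy as np

def psnr(reference: np.ndarray, distorted: np.ndarray, max_val: float = 255.0) -> float:
    mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_val ** 2) / mse)
```
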
- [ACMMM 2024] AesExpert: Towards Multi-modality Foundation Model for Image Aesthetics Perception, Huang et al. Project | Github | Bibtex
- [Arxiv 2024] AesBench: An Expert Benchmark for Multimodal Large Language Models on Image Aesthetics Perception, Huang et al. Github | Bibtex
- [ECCV2024] Scaling Up Personalized Aesthetic Assessment via Task Vector Arithmetic, Yun et al. Bibtex | Project
- [CVPR2023] VILA: Learning Image Aesthetics from User Comments with Vision-Language Pretraining, Ke et al. Bibtex
- [CVPR2023] Towards Artistic Image Aesthetics Assessment: a Large-scale Dataset and a New Method, Yi et al. Github | Bibtex
- [ECCV 2024] Multiscale Sliced Wasserstein Distances as Perceptual Color Difference Measures, He et al. Github | Bibtex
- [CVPR 2023] Learning a Deep Color Difference Metric for Photographic Images, Chen et al. Github | Bibtex
- [CVPR2024] DSL-FIQA: Assessing Facial Image Quality via Dual-Set Degradation Learning and Landmark-Guided Transformer, Chen et al. Bibtex | Project
- [NeurIPS 2023] Assessor360: Multi-sequence Network for Blind Omnidirectional Image Quality Assessment, Wu et al. Bibtex | Github
- [Arxiv 2024] ESIQA: Perceptual Quality Assessment of Vision-Pro-based Egocentric Spatial Images, Zhu et al. Bibtex | Github
- [Arxiv 2024] Guardians of Image Quality: Benchmarking Defenses Against Adversarial Attacks on Image Quality Metrics, Gushchin et al. Bibtex | Github | Project
- [NeurIPS 2022] Perceptual Attacks of No-Reference Image Quality Models with Human-in-the-Loop, Zhang et al. Bibtex | Github
| Paper Link | Method | Published | Code | Keywords |
|---|---|---|---|---|
| arXiv | NiNLoss | ACMM2020 | Official | Norm-in-Norm Loss |
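
The Norm-in-Norm idea is to normalize both the predicted scores and the subjective scores within a batch (subtract the mean, divide by a norm) before taking a norm of their difference, which makes training insensitive to the scale and offset of the raw predictions. The sketch below is a simplified variant with L2 norms in both stages; it is not the authors' exact formulation, so see the official code for the full loss and its exponent hyperparameters.

```python
# Hedged sketch of a Norm-in-Norm style loss (simplified: L2 norm in both stages).
import torch

def norm_in_norm_loss(pred: torch.Tensor, mos: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """pred, mos: 1-D tensors of predicted and subjective scores for one batch."""
    def normalize(x: torch.Tensor) -> torch.Tensor:
        x = x - x.mean()                # remove offset (centering)
        return x / (x.norm(p=2) + eps)  # remove scale (L2 normalization)

    # Distance between the two normalized score vectors.
    return torch.norm(normalize(pred) - normalize(mos), p=2)

# Usage with dummy scores:
loss = norm_in_norm_loss(torch.randn(8, requires_grad=True), torch.rand(8) * 5)
loss.backward()
```
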
| Paper Link | Dataset Name | Type | Published | Website | Images | Annotations |
|---|---|---|---|---|---|---|
| arXiv | UHD-IQA | NR | ECCVW2024 | Project | 6k (~3840x2160) | 20 ratings per image |
| arXiv | PaQ-2-PiQ | NR | CVPR2020 | Official github | 40k, 120k patches | 4M |
| CVF | SPAQ | NR | CVPR2020 | Official github | 11k (smartphone) | |
| arXiv | KonIQ-10k | NR | TIP2020 | Project | 10k from YFCC100M | 1.2M |
| arXiv | AADB | NR/Aesthetic | ECCV2016 | Official github | 10k (8500/500/1000 split) | 11 attributes |
| arXiv | CLIVE | NR | TIP2016 | Project | 1200 | 350k |
| | AVA | NR/Aesthetic | CVPR2012 | Github/Project | 250k (60 categories) | |
| arXiv | PIPAL | FR | ECCV2020 | Project | 250 | 1.13M |
| arXiv | KADIS-700k | FR | arXiv | Project | 140k pristine / 700k distorted | |
| IEEE | KADID-10k | FR | QoMEX2019 | Project | 81 | 10k distortions, 30 ratings (DCRs) per image |
| | Waterloo-Exp | FR | TIP2017 | Project | 4744 | 94k distortions |
| | MDID | FR | PR2017 | --- | 20 | 1600 distortions |
| | TID2013 | FR | SP2015 | Project | 25 | 3000 distortions |
| | LIVEMD | FR | ACSSC2012 | Project | 15 pristine images | two successive distortions |
| | CSIQ | FR | Journal of Electronic Imaging 2010 | --- | 30 | 866 distortions |
| | TID2008 | FR | 2009 | Project | 25 | 1700 distortions |
| | LIVE IQA | FR | TIP2006 | Project | 29 | 780 synthetic distortions |
| link | IVC | FR | 2005 | --- | 10 | 185 distortions |
| Paper Link | Dataset Name | Type | Published | Website | Images | Annotations |
|---|---|---|---|---|---|---|
| arXiv | BAPPS (LPIPS) | FR | CVPR2018 | Project | 187.7k | 484k |
| arXiv | PieAPP | FR | CVPR2018 | Project | 200 images | 2.3M |