pfevaluator: A library for evaluating performance metrics of Pareto fronts in multiple/many objective optimization problems
"Knowledge is power, sharing it is the premise of progress in life. It seems like a burden to someone, but it is the only way to achieve immortality." --- Thieu Nguyen
- Python (>= 3.6)
- Numpy (>= 1.18.1)
- pygmo (>= 2.13.0)
Install the current PyPI release:
pip install pfevaluator
Or install the development version from GitHub:
pip install git+https://github.com/thieu1995/pfevaluator
- GD: Generational Distance (one common formulation is sketched after this list)
- IGD: Inverted Generational Distance
- MPFE: Maximum Pareto Front Error
- HV: Hyper Volume (computed via an external library)
- HAR: Hyper Area Ratio (computed via an external library)
- UD: Uniform Distribution
- S: Spacing
- STE: Spacing To Extend
- NDC: Number of Distinct Choices (Not Implemented Yet)
- RNI: Ratio of Non-dominated Individuals
- ER: Error Ratio
- ONVG: Overall Non-dominated Vector Generation
- PDI: Pareto Dominance Indicator (Not Implemented Yet)
- MS: Maximum Spread
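For intuition, here is one common textbook formulation of Generational Distance: the root of the summed squared distances from each obtained solution to its nearest reference point, divided by the number of obtained solutions. This is a standard definition from the literature, not necessarily pfevaluator's exact implementation:

import numpy as np

def generational_distance(front, reference_front):
    ## Pairwise Euclidean distances between each obtained solution (rows of
    ## front) and each reference solution (rows of reference_front).
    distances = np.linalg.norm(front[:, None, :] - reference_front[None, :, :], axis=2)
    nearest = distances.min(axis=1)  # distance to the closest reference point
    ## Common formulation: GD = (sum of d_i^2)^(1/2) / n
    return np.sqrt(np.sum(nearest ** 2)) / len(front)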
+ front: this module contains the Metric class for evaluating all possible solutions (the population of obtained fronts).
+ pfront (Pareto front): this module contains the Metric class for evaluating the obtained front from each test case.
+ tpfront (True Pareto front): this module contains the Metric class for evaluating the obtained front against the
True Pareto front (Reference front). This means you need to pass the Reference front to this class.
+ The True Pareto front (Reference front) can be obtained in two ways:
1) You provide it (if you know the True Pareto front of your problem).
2) It is calculated from all fronts obtained across all test cases:
+ Assume you have N1 algorithms to test.
+ Each algorithm gives you an obtained front per trial.
+ Each algorithm is run for N2 independent trials --> number of obtained fronts: N1 * N2.
+ Pass all N1 * N2 fronts to our function to calculate the non-dominated solutions (the Reference front,
also called the Approximate Pareto front or True Pareto front), as shown in the sketch below.
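A minimal sketch of that workflow, assuming each trial's front is a 2-D numpy array of shape (n_solutions, n_objectives) and that matrix_fitness is the row-wise stack of all fronts (the random fronts are placeholders for your algorithms' real outputs):

import numpy as np
import pfevaluator

## Suppose N1 = 3 algorithms and N2 = 5 trials each --> 15 fronts in total.
all_fronts = []
for algorithm in range(3):
    for trial in range(5):
        front = np.random.rand(20, 2)  # placeholder: the front this trial returned
        all_fronts.append(front)

## Stack every solution from every front into one fitness matrix,
## then extract its non-dominated subset as the Reference front.
matrix_fitness = np.concatenate(all_fronts, axis=0)
reference_front = pfevaluator.find_reference_front(matrix_fitness)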
import pfevaluator
## Some available performance metrics for evaluating each type of Pareto front.
pfront_metrics = ["UD", "NDC"]
tpfront_metrics = ["ER", "ONVG", "MS", "GD", "IGD", "MPFE", "S", "STE"]
volume_metrics = ["HV", "HAR"]
pm = pfevaluator.metric_pfront(obtained_front, pfront_metrics) # Evaluate for each algorithm in each trial
tm = pfevaluator.metric_tpfront(obtained_front, reference_front, tpfront_metrics)  # Same as above
vm = pfevaluator.metric_volume(obtained_front, reference_front, volume_metrics, None, all_fronts=matrix_fitness)
## obtained_front: the front you found in each test case (each trial of each algorithm).
## reference_front (True Pareto front): the True Pareto front of your problem.
## If you don't know your True Pareto front, follow the steps above to get it from the population of obtained fronts,
## using this function: reference_front = pfevaluator.find_reference_front(matrix_fitness)
## matrix_fitness contains all of your fronts from all test cases.
## The result is a dict, such as: pm = { "UD": 0.2, "NDC": 0.1 }
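Putting the pieces together, a self-contained sketch with placeholder data (the random arrays stand in for real algorithm outputs; the function calls mirror the snippet above):

import numpy as np
import pfevaluator

matrix_fitness = np.random.rand(100, 2)  # placeholder: all solutions from all trials
obtained_front = np.random.rand(20, 2)   # placeholder: the front from one trial

reference_front = pfevaluator.find_reference_front(matrix_fitness)

pm = pfevaluator.metric_pfront(obtained_front, ["UD"])
tm = pfevaluator.metric_tpfront(obtained_front, reference_front, ["GD", "IGD", "MS"])
vm = pfevaluator.metric_volume(obtained_front, reference_front, ["HV", "HAR"], None, all_fronts=matrix_fitness)
print(pm, tm, vm)  # each is a dict, e.g. {"UD": 0.2}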
- The full example is in the file: examples/full.py
- Official source code repo: https://github.com/thieu1995/pfevaluator
- Download releases: https://pypi.org/project/pfevaluator/
- Issue tracker: https://github.com/thieu1995/pfevaluator/issues
- Change log: https://github.com/thieu1995/pfevaluator/blob/master/ChangeLog.md
This project is also related to my other projects, "meta-heuristics" and "neural-network"; check them here
- If you use pfevaluator in your project, please cite my works:
@article{nguyen2019efficient,
  title={Efficient Time-Series Forecasting Using Neural Network and Opposition-Based Coral Reefs Optimization},
  author={Nguyen, Thieu and Nguyen, Tu and Nguyen, Binh Minh and Nguyen, Giang},
  journal={International Journal of Computational Intelligence Systems},
  volume={12},
  number={2},
  pages={1144--1161},
  year={2019},
  publisher={Atlantis Press}
}
- Yen, G. G., & He, Z. (2013). Performance metric ensemble for multiobjective evolutionary algorithms. IEEE Transactions on Evolutionary Computation, 18(1), 131-144.
- Panagant, N., Pholdee, N., Bureerat, S., Yildiz, A. R., & Mirjalili, S. (2021). A Comparative Study of Recent Multi-objective Metaheuristics for Solving Constrained Truss Optimisation Problems. Archives of Computational Methods in Engineering, 1-17.
- Knowles, J., & Corne, D. (2002, May). On metrics for comparing nondominated sets. In Proceedings of the 2002 Congress on Evolutionary Computation. CEC'02 (Cat. No. 02TH8600) (Vol. 1, pp. 711-716). IEEE.
- Guerreiro, A. P., Fonseca, C. M., & Paquete, L. (2020). The hypervolume indicator: Problems and algorithms. arXiv preprint arXiv:2005.00515.