[COLING22] An End-to-End Library for Evaluating Natural Language Generation
The implementation for the EMNLP 2023 paper "Beyond Factuality: A Comprehensive Evaluation of Large Language Models as Knowledge Generators"
PyTorch code for the ACL 2022 paper "RoMe: A Robust Metric for Evaluating Natural Language Generation": https://aclanthology.org/2022.acl-long.387/
💵 Code for "Less is More for Long Document Summary Evaluation by LLMs" (Wu, Iso et al.; EACL 2024)