Skip to content
New issue

Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.

By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.

Already on GitHub? Sign in to your account

Metadata correction for 2024.findings-emnlp.604 #4173

Open
mjpost opened this issue Dec 18, 2024 · 3 comments
Open

Metadata correction for 2024.findings-emnlp.604 #4173

mjpost opened this issue Dec 18, 2024 · 3 comments
Assignees
Labels
correction for corrections submitted to the anthology metadata Correction to metadata

Comments

@mjpost
Copy link
Member

mjpost commented Dec 18, 2024

{
  "anthology_id": "2024.findings-emnlp.604",
  "title": "<fixed-case>I</fixed-case>n2<fixed-case>C</fixed-case>ore: Leveraging Influence Functions for Coreset Selection in Instruction Finetuning of Large Language Models",
  "authors": [
    {
      "first": "Ayrton",
      "last": "San Joaquin",
      "id": "ayrton-san-joaquin",
      "affiliation": ""
    },
    {
      "first": "Bin",
      "last": "Wang",
      "id": "bin-wang",
      "affiliation": ""
    },
    {
      "first": "Zhengyuan",
      "last": "Liu",
      "id": "zhengyuan-liu",
      "affiliation": ""
    },
    {
      "first": "Nicholas",
      "last": "Asher",
      "id": "nicholas-asher",
      "affiliation": ""
    },
    {
      "first": "Brian",
      "last": "Lim",
      "id": "brian-lim",
      "affiliation": ""
    },
    {
      "first": "Philippe",
      "last": "Muller",
      "id": "philippe-muller",
      "affiliation": ""
    },
    {
      "first": "Nancy F.",
      "last": "Chen",
      "id": "nancy-chen",
      "affiliation": ""
    }
  ],
  "abstract": "Despite advancements, fine-tuning Large Language Models (LLMs) remains costly due to the extensive parameter count and substantial data requirements for model generalization. Accessibility to computing resources remains a barrier for the open-source community. To address this challenge, we propose the In2Core algorithm, which selects a coreset by analyzing the correlation between training and evaluation samples with a trained model. Notably, we assess the model’s internal gradients to estimate this relationship, aiming to rank the contribution of each training point. To enhance efficiency, we propose an optimization to compute influence functions with a reduced number of layers while achieving similar accuracy. By applying our algorithm to instruction fine-tuning data of LLMs, we can achieve similar performance with just 50% of the training data. Meantime, using influence functions to analyze model coverage to certain testing samples could provide a reliable and interpretable signal on the training set’s coverage of those test points."
}
@mjpost mjpost added correction for corrections submitted to the anthology metadata Correction to metadata labels Dec 18, 2024
Copy link

Found ACL Anthology entry:

📄 Paper: https://aclanthology.org/2024.findings-emnlp.604

Thumbnail

Copy link

LLM Validation Failed

Details:

  • The proposed title edit is incorrect. The correct title from the PDF is: 'In2Core: Leveraging Influence Functions for Coreset Selection in Instruction Finetuning of Large Language Models'. The proposed title has a minor difference in casing with 'I' instead of 'In' and 'C' instead of 'Core'.
  • The proposed list of authors is missing one author. The correct list of authors from the PDF is: 'Ayrton San Joaquin, Bin Wang, Zhengyuan Liu, Nicholas Asher, Brian Lim, Philippe Muller, Nancy F. Chen, and David A. Smith'. The proposed list does not include 'David A. Smith'.
  • The order of authors in the proposed list is correct, but since one author is missing, the overall list is invalid.

@mjpost
Copy link
Member Author

mjpost commented Dec 20, 2024

LLM (gpt-4o-mini) hallucinated an author on this paper!

Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment
Labels
correction for corrections submitted to the anthology metadata Correction to metadata
Projects
None yet
Development

No branches or pull requests

2 participants