Our input data (including the novel, question, and options) is open-sourced on the [🤗 Huggingface]() platform. Participants who wish to evaluate their models should first download the data through Huggingface. You may either run the generative subtask with only the novel and question, or run the multichoice subtask by inputting the novel, question, and options. Warning: the input data are for internal evaluation use only. Please do not spread the input data publicly online. The competition hosts are not responsible for any violation of novel copyright caused by participants spreading the input data publicly online.
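
For reference, the sketch below shows one way to load the data with the Hugging Face `datasets` library and assemble prompts for the two subtasks. The dataset identifier, split name, and field names here are assumptions made for illustration; consult the dataset card on Huggingface for the actual schema.

```python
# A minimal sketch, assuming a hypothetical dataset id and schema -- check the
# dataset card on Huggingface for the real identifiers before running this.
from datasets import load_dataset

dataset = load_dataset("NovelQA/NovelQA", split="test")  # id and split are assumptions

for example in dataset:
    # Generative subtask: prompt the model with the novel and the question only.
    gen_prompt = f"{example['book']}\n\nQuestion: {example['question']}"

    # Multichoice subtask: additionally include the options.
    options = "\n".join(example["options"])  # assumed to be a list of strings
    mc_prompt = f"{gen_prompt}\n\nOptions:\n{options}"
```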

After inputting the data and obtaining the model output, you are expected to submit your model output to the [⚖️ Codabench](https://www.codabench.org/competitions/2295/) platform for evaluation. This procedure preserves the confidentiality of the gold answers. The Codabench platform automatically evaluates your results and generates an accuracy score, typically within 5 minutes. If your submission fails, or your score is obviously above average, you may email us the results so that we can run the evaluation for you manually. For details about the Codabench platform and the evaluation procedure, see the instructions on our Codabench page.
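
The exact submission file name and layout are specified on the Codabench competition page; as a minimal sketch, assuming a simple question-id-to-answer mapping, packaging predictions for upload might look like this:

```python
# A minimal sketch, assuming a JSON mapping from question ids to predicted
# answers -- the required format is defined on the Codabench page.
import json

predictions = {"Q0001": "B", "Q0002": "D"}  # hypothetical ids and answers

with open("submission.json", "w", encoding="utf-8") as f:
    json.dump(predictions, f, ensure_ascii=False, indent=2)
```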

If you evaluate your results through Codabench, you are further expected to submit your accuracy score to us through the [🗳️ Google Form](https://docs.google.com/forms/d/e/1FAIpQLSdGneRm_Cna6sigDaugGEToVDjlAR0cogAI105fZa4dvILbnA/viewform?usp=sf_link) so that we can update it on our [🏆 Leaderboard](https://novelqa.github.io/). Our leaderboard presents the top 7 models on the two subtasks separately.

# 📜 License
