wimlds/wnut_ner_evaluation

---------------------------------------------------------------------
- WNUT Named Entity Recognition in Twitter Shared task
---------------------------------------------------------------------
- Organizers: Marie-Catherine de Marneffe, Young-Bum Kim, Alan Ritter
---------------------------------------------------------------------

1) Data
- Training and development data is in the "data" directory.  This is represented using the CoNLL data format (http://www.cnts.ua.ac.be/conll2002/ner/); a small sample of the layout is sketched below.
- There are two datasets corresponding to two separate evaluations: one where the task is to predict fine-grained types and the other in which no type information is to be predicted.
  a) "train" and "dev" are annotated with 10 fine-grained NER categories: person, geo-location, company, facility, product, music artist, movie, sports team, tv show and other.
  b) "train_notype" and "dev_notype" entities are annotated without any type information.

2) Baseline System
- For convenience we have provided a baseline script based on crfsuite (http://www.chokkan.org/software/crfsuite/).
  The baseline includes gazetteer features generated from the files in the "lexicon" directory.  To run the baseline system, simply install
  crfsuite on your local machine and run the script "./baseline_gazet_ortho.sh".
  To install crfsuite on OS X using Homebrew (http://brew.sh/) and then run the baseline, you can do:
  > brew tap homebrew/science
  > brew install crfsuite
  > ./baseline_gazet_ortho.sh

- You can switch between the 10-type and no-type data by adjusting the "TRAIN_DATA" and "TEST_DATA" variables in "baseline_gazet_ortho.sh".
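
  For example, to train and score on the untyped data you would point both variables at the "_notype" files.  Only the
  variable names come from the description above; the paths below are an assumption, so check the script for its real defaults:

    # inside baseline_gazet_ortho.sh (paths assumed for illustration)
    TRAIN_DATA=data/train_notype
    TEST_DATA=data/dev_notype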

Here are the results (the trailing count on each line is the number of entities predicted for that type):

10 Entity Types:
accuracy:  96.06%; precision:  53.91%; recall:  36.80%; FB1:  43.74
          company: precision:  70.00%; recall:  51.22%; FB1:  59.15  30
         facility: precision:  37.50%; recall:  30.00%; FB1:  33.33  16
          geo-loc: precision:  64.29%; recall:  46.55%; FB1:  54.00  42
            movie: precision:  20.00%; recall:  33.33%; FB1:  25.00  5
      musicartist: precision:  25.00%; recall:   8.33%; FB1:  12.50  4
            other: precision:  33.33%; recall:  13.11%; FB1:  18.82  24
           person: precision:  61.46%; recall:  50.43%; FB1:  55.40  96
          product: precision:  36.36%; recall:  22.22%; FB1:  27.59  11
       sportsteam: precision:  27.27%; recall:  16.67%; FB1:  20.69  11
           tvshow: precision:  25.00%; recall:  12.50%; FB1:  16.67  4

No Types:
accuracy:  96.64%; precision:  60.36%; recall:  47.47%; FB1:  53.14
                 : precision:  60.36%; recall:  47.47%; FB1:  53.14  280

3) Evaluation
- Evaluation is performed using the CoNLL evaluation script (http://www.cnts.ua.ac.be/conll2002/ner/bin/conlleval.txt).
- The baseline script prints out results using the CoNLL evaluation script.
- At the beginning of the evaluation period we will release test data without annotations.  Participating teams will submit their predictions in CoNLL format;
  we will score the results against the gold annotations using the CoNLL evaluation script and release the predictions, precision, recall, F1 and rankings (by F1 score) for all participating teams.
  We will release the gold test annotations after the evaluation period.
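
- If you want to score your own development-set output the same way, note that conlleval reads from standard input and expects
  each token line to end with the gold tag followed by the predicted tag, with blank lines between tweets.  A minimal invocation,
  assuming you have saved the evaluation script as "conlleval.txt" and your merged gold-plus-predicted output as
  "dev_predictions.conll" (both file names are placeholders), would be:
  > perl conlleval.txt < dev_predictions.conll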
