From a7195d7e5ac44766beb87e3dbf789fe5ff61511d Mon Sep 17 00:00:00 2001
From: Heather Turner
Date: Tue, 6 Oct 2020 13:39:37 +0100
Subject: [PATCH] update docs, bump version 0.3.0

---
 DESCRIPTION | 84 +-
 NEWS.md | 366 +-
 R/PlackettLuce-package.R | 86 +-
 R/preflib.R | 394 +-
 README.Rmd | 354 +-
 README.md | 503 +-
 appveyor.yml | 110 +-
 cran-comments.md | 21 +-
 docs/404.html | 162 +
 docs/CONDUCT.html | 314 +-
 docs/articles/Overview.html | 2503 +++---
 .../empty-anchor.js | 15 +
 .../bootstrap-3.3.5/css/bootstrap-theme.css | 587 ++
 .../css/bootstrap-theme.css.map | 1 +
 .../css/bootstrap-theme.min.css | 5 +
 .../bootstrap-3.3.5/css/bootstrap.css | 6799 +++++++++++++++++
 .../bootstrap-3.3.5/css/bootstrap.css.map | 1 +
 .../bootstrap-3.3.5/css/bootstrap.min.css | 5 +
 .../bootstrap-3.3.5/css/cerulean.min.css | 11 +
 .../bootstrap-3.3.5/css/cosmo.min.css | 30 +
 .../bootstrap-3.3.5/css/darkly.min.css | 30 +
 .../bootstrap-3.3.5/css/flatly.min.css | 30 +
 .../bootstrap-3.3.5/css/fonts/Lato.ttf | Bin 0 -> 81980 bytes
 .../bootstrap-3.3.5/css/fonts/LatoBold.ttf | Bin 0 -> 82368 bytes
 .../bootstrap-3.3.5/css/fonts/LatoItalic.ttf | Bin 0 -> 81332 bytes
 .../bootstrap-3.3.5/css/fonts/NewsCycle.ttf | Bin 0 -> 28716 bytes
 .../css/fonts/NewsCycleBold.ttf | Bin 0 -> 28724 bytes
 .../bootstrap-3.3.5/css/fonts/OpenSans.ttf | Bin 0 -> 34112 bytes
 .../css/fonts/OpenSansBold.ttf | Bin 0 -> 35760 bytes
 .../css/fonts/OpenSansBoldItalic.ttf | Bin 0 -> 33064 bytes
 .../css/fonts/OpenSansItalic.ttf | Bin 0 -> 32808 bytes
 .../css/fonts/OpenSansLight.ttf | Bin 0 -> 35340 bytes
 .../css/fonts/OpenSansLightItalic.ttf | Bin 0 -> 32680 bytes
 .../bootstrap-3.3.5/css/fonts/Raleway.ttf | Bin 0 -> 63796 bytes
 .../bootstrap-3.3.5/css/fonts/RalewayBold.ttf | Bin 0 -> 62224 bytes
 .../bootstrap-3.3.5/css/fonts/Roboto.ttf | Bin 0 -> 32652 bytes
 .../bootstrap-3.3.5/css/fonts/RobotoBold.ttf | Bin 0 -> 32500 bytes
 .../bootstrap-3.3.5/css/fonts/RobotoLight.ttf | Bin 0 -> 32664 bytes
 .../css/fonts/RobotoMedium.ttf | Bin 0 -> 32580 bytes
 .../css/fonts/SourceSansPro.ttf | Bin 0 -> 35064 bytes
 .../css/fonts/SourceSansProBold.ttf | Bin 0 -> 34908 bytes
 .../css/fonts/SourceSansProItalic.ttf | Bin 0 -> 33864 bytes
 .../css/fonts/SourceSansProLight.ttf | Bin 0 -> 35368 bytes
 .../bootstrap-3.3.5/css/fonts/Ubuntu.ttf | Bin 0 -> 73608 bytes
 .../bootstrap-3.3.5/css/journal.min.css | 24 +
 .../bootstrap-3.3.5/css/lumen.min.css | 37 +
 .../bootstrap-3.3.5/css/paper.min.css | 36 +
 .../bootstrap-3.3.5/css/readable.min.css | 24 +
 .../bootstrap-3.3.5/css/sandstone.min.css | 24 +
 .../bootstrap-3.3.5/css/simplex.min.css | 24 +
 .../bootstrap-3.3.5/css/spacelab.min.css | 36 +
 .../bootstrap-3.3.5/css/united.min.css | 18 +
 .../bootstrap-3.3.5/css/yeti.min.css | 50 +
 .../fonts/glyphicons-halflings-regular.eot | Bin 0 -> 20127 bytes
 .../fonts/glyphicons-halflings-regular.svg | 288 +
 .../fonts/glyphicons-halflings-regular.ttf | Bin 0 -> 45404 bytes
 .../fonts/glyphicons-halflings-regular.woff | Bin 0 -> 23424 bytes
 .../fonts/glyphicons-halflings-regular.woff2 | Bin 0 -> 18028 bytes
 .../bootstrap-3.3.5/js/bootstrap.js | 2363 ++++++
 .../bootstrap-3.3.5/js/bootstrap.min.js | 7 +
 .../Overview_files/bootstrap-3.3.5/js/npm.js | 13 +
 .../bootstrap-3.3.5/shim/html5shiv.min.js | 7 +
 .../bootstrap-3.3.5/shim/respond.min.js | 8 +
 .../figure-html/always-loses-1.png | Bin 16377 -> 4458 bytes
 .../figure-html/nascar-qv-1.png | Bin 45130 -> 26423 bytes
 .../figure-html/plot-pltree-1.png | Bin 36807 -> 20418 bytes
 .../figure-html/pudding-qv-1.png | Bin 16258 -> 13429 bytes
 .../jquery-1.11.3/jquery.min.js | 5 +
 .../lightable-0.0.1/lightable.css | 234 +
 .../navigation-1.1/codefolding.js | 59 +
 .../navigation-1.1/sourceembed.js | 12 +
 .../Overview_files/navigation-1.1/tabsets.js | 141 +
 docs/articles/index.html | 301 +-
 docs/authors.html | 326 +-
 docs/bootstrap-toc.css | 60 +
 docs/bootstrap-toc.js | 159 +
 docs/index.html | 481 +-
 docs/news/index.html | 873 +--
 docs/pkgdown.css | 165 +-
 docs/pkgdown.js | 15 +-
 docs/pkgdown.yml | 13 +-
 docs/reference/PlackettLuce-deprecated.html | 340 +-
 docs/reference/PlackettLuce-package.html | 393 +-
 docs/reference/PlackettLuce.html | 1067 +--
 docs/reference/Rplot001.png | Bin 0 -> 1011 bytes
 docs/reference/Rplot002.png | Bin 0 -> 27186 bytes
 docs/reference/adjacency.html | 436 +-
 docs/reference/aggregate.html | 523 +-
 docs/reference/beans.html | 535 +-
 docs/reference/choices.html | 453 +-
 docs/reference/complete.html | 374 +-
 docs/reference/connectivity.html | 475 +-
 docs/reference/decode.html | 438 +-
 docs/reference/figures/always-loses-1.png | Bin 16604 -> 1880 bytes
 docs/reference/figures/qv-1.png | Bin 15586 -> 4410 bytes
 docs/reference/fitted.PlackettLuce.html | 462 +-
 docs/reference/group.html | 564 +-
 docs/reference/index.html | 688 +-
 docs/reference/itempar.PlackettLuce.html | 475 +-
 docs/reference/nascar.html | 428 +-
 docs/reference/plfit.html | 506 +-
 docs/reference/pltree-1.png | Bin 34781 -> 73051 bytes
 docs/reference/pltree-2.png | Bin 34654 -> 72254 bytes
 docs/reference/pltree-summaries.html | 547 +-
 docs/reference/pltree.html | 552 +-
 docs/reference/preflib.html | 546 +-
 docs/reference/pudding.html | 432 +-
 docs/reference/qvcalc.PlackettLuce-1.png | Bin 15388 -> 20755 bytes
 docs/reference/qvcalc.PlackettLuce.html | 524 +-
 docs/reference/rankings.html | 811 +-
 docs/reference/reexports.html | 352 +-
 docs/reference/simulate.PlackettLuce.html | 537 +-
 docs/reference/summaries.html | 451 +-
 inst/PlackettLuce0/PlackettLuce0.R | 941 +--
 inst/PlackettLuce0/coef0.R | 68 +-
 inst/PlackettLuce0/igraph0.R | 96 +-
 inst/Reference_Implementations/vcov_hessian.R | 342 +-
 man/PlackettLuce-package.Rd | 2 +-
 man/PlackettLuce.Rd | 61 +-
 man/adjacency.Rd | 4 +-
 man/figures/always-loses-1.png | Bin 16604 -> 1880 bytes
 man/figures/qv-1.png | Bin 15586 -> 4410 bytes
 man/fitted.PlackettLuce.Rd | 4 +-
 man/nascar.Rd | 3 +-
 man/preflib.Rd | 4 +-
 man/qvcalc.PlackettLuce.Rd | 10 +-
 man/simulate.PlackettLuce.Rd | 4 +-
 127 files changed, 22291 insertions(+), 10371 deletions(-)
 create mode 100644 docs/404.html
 create mode 100644 docs/articles/Overview_files/accessible-code-block-0.0.1/empty-anchor.js
 create mode 100644 docs/articles/Overview_files/bootstrap-3.3.5/css/bootstrap-theme.css
 create mode 100644 docs/articles/Overview_files/bootstrap-3.3.5/css/bootstrap-theme.css.map
 create mode 100644 docs/articles/Overview_files/bootstrap-3.3.5/css/bootstrap-theme.min.css
 create mode 100644 docs/articles/Overview_files/bootstrap-3.3.5/css/bootstrap.css
 create mode 100644 docs/articles/Overview_files/bootstrap-3.3.5/css/bootstrap.css.map
 create mode 100644 docs/articles/Overview_files/bootstrap-3.3.5/css/bootstrap.min.css
 create mode 100644 docs/articles/Overview_files/bootstrap-3.3.5/css/cerulean.min.css
 create mode 100644 docs/articles/Overview_files/bootstrap-3.3.5/css/cosmo.min.css
 create mode 100644 docs/articles/Overview_files/bootstrap-3.3.5/css/darkly.min.css
 create mode 100644 docs/articles/Overview_files/bootstrap-3.3.5/css/flatly.min.css
 create mode 100644 docs/articles/Overview_files/bootstrap-3.3.5/css/fonts/Lato.ttf
 create mode 100644 docs/articles/Overview_files/bootstrap-3.3.5/css/fonts/LatoBold.ttf
 create mode 100644 docs/articles/Overview_files/bootstrap-3.3.5/css/fonts/LatoItalic.ttf
 create mode 100644 docs/articles/Overview_files/bootstrap-3.3.5/css/fonts/NewsCycle.ttf
 create mode 100644 docs/articles/Overview_files/bootstrap-3.3.5/css/fonts/NewsCycleBold.ttf
 create mode 100644 docs/articles/Overview_files/bootstrap-3.3.5/css/fonts/OpenSans.ttf
 create mode 100644 docs/articles/Overview_files/bootstrap-3.3.5/css/fonts/OpenSansBold.ttf
 create mode 100644 docs/articles/Overview_files/bootstrap-3.3.5/css/fonts/OpenSansBoldItalic.ttf
 create mode 100644 docs/articles/Overview_files/bootstrap-3.3.5/css/fonts/OpenSansItalic.ttf
 create mode 100644 docs/articles/Overview_files/bootstrap-3.3.5/css/fonts/OpenSansLight.ttf
 create mode 100644 docs/articles/Overview_files/bootstrap-3.3.5/css/fonts/OpenSansLightItalic.ttf
 create mode 100644 docs/articles/Overview_files/bootstrap-3.3.5/css/fonts/Raleway.ttf
 create mode 100644 docs/articles/Overview_files/bootstrap-3.3.5/css/fonts/RalewayBold.ttf
 create mode 100644 docs/articles/Overview_files/bootstrap-3.3.5/css/fonts/Roboto.ttf
 create mode 100644 docs/articles/Overview_files/bootstrap-3.3.5/css/fonts/RobotoBold.ttf
 create mode 100644 docs/articles/Overview_files/bootstrap-3.3.5/css/fonts/RobotoLight.ttf
 create mode 100644 docs/articles/Overview_files/bootstrap-3.3.5/css/fonts/RobotoMedium.ttf
 create mode 100644 docs/articles/Overview_files/bootstrap-3.3.5/css/fonts/SourceSansPro.ttf
 create mode 100644 docs/articles/Overview_files/bootstrap-3.3.5/css/fonts/SourceSansProBold.ttf
 create mode 100644 docs/articles/Overview_files/bootstrap-3.3.5/css/fonts/SourceSansProItalic.ttf
 create mode 100644 docs/articles/Overview_files/bootstrap-3.3.5/css/fonts/SourceSansProLight.ttf
 create mode 100644 docs/articles/Overview_files/bootstrap-3.3.5/css/fonts/Ubuntu.ttf
 create mode 100644 docs/articles/Overview_files/bootstrap-3.3.5/css/journal.min.css
 create mode 100644 docs/articles/Overview_files/bootstrap-3.3.5/css/lumen.min.css
 create mode 100644 docs/articles/Overview_files/bootstrap-3.3.5/css/paper.min.css
 create mode 100644 docs/articles/Overview_files/bootstrap-3.3.5/css/readable.min.css
 create mode 100644 docs/articles/Overview_files/bootstrap-3.3.5/css/sandstone.min.css
 create mode 100644 docs/articles/Overview_files/bootstrap-3.3.5/css/simplex.min.css
 create mode 100644 docs/articles/Overview_files/bootstrap-3.3.5/css/spacelab.min.css
 create mode 100644 docs/articles/Overview_files/bootstrap-3.3.5/css/united.min.css
 create mode 100644 docs/articles/Overview_files/bootstrap-3.3.5/css/yeti.min.css
 create mode 100644 docs/articles/Overview_files/bootstrap-3.3.5/fonts/glyphicons-halflings-regular.eot
 create mode 100644 docs/articles/Overview_files/bootstrap-3.3.5/fonts/glyphicons-halflings-regular.svg
 create mode 100644 docs/articles/Overview_files/bootstrap-3.3.5/fonts/glyphicons-halflings-regular.ttf
 create mode 100644 docs/articles/Overview_files/bootstrap-3.3.5/fonts/glyphicons-halflings-regular.woff
 create mode 100644 docs/articles/Overview_files/bootstrap-3.3.5/fonts/glyphicons-halflings-regular.woff2
 create mode 100644 docs/articles/Overview_files/bootstrap-3.3.5/js/bootstrap.js
 create mode 100644 docs/articles/Overview_files/bootstrap-3.3.5/js/bootstrap.min.js
 create mode 100644 docs/articles/Overview_files/bootstrap-3.3.5/js/npm.js
 create mode 100644 docs/articles/Overview_files/bootstrap-3.3.5/shim/html5shiv.min.js
 create mode 100644 docs/articles/Overview_files/bootstrap-3.3.5/shim/respond.min.js
 create mode 100644 docs/articles/Overview_files/jquery-1.11.3/jquery.min.js
 create mode 100644 docs/articles/Overview_files/lightable-0.0.1/lightable.css
 create mode 100644 docs/articles/Overview_files/navigation-1.1/codefolding.js
 create mode 100644 docs/articles/Overview_files/navigation-1.1/sourceembed.js
 create mode 100644 docs/articles/Overview_files/navigation-1.1/tabsets.js
 create mode 100644 docs/bootstrap-toc.css
 create mode 100644 docs/bootstrap-toc.js
 create mode 100644 docs/reference/Rplot001.png
 create mode 100644 docs/reference/Rplot002.png

diff --git a/DESCRIPTION b/DESCRIPTION
index 557f757..0f9ce6b 100644
--- a/DESCRIPTION
+++ b/DESCRIPTION
@@ -1,42 +1,42 @@
-Package: PlackettLuce
-Type: Package
-Title: Plackett-Luce Models for Rankings
-Version: 0.2-9.9000
-Authors@R: c(person("Heather", "Turner",
-    email = "ht@heatherturner.net", role = c("aut", "cre"),
-    comment = c(ORCID = "0000-0002-1256-3375")),
-    person("Ioannis", "Kosmidis", role = "aut",
-    comment = c(ORCID = "0000-0003-1556-0302")),
-    person("David", "Firth", role = "aut",
-    comment = c(ORCID = "0000-0003-0302-2312")),
-    person("Jacob", "van Etten", role = "ctb",
-    comment = c(ORCID = "0000-0001-7554-2558")))
-URL: https://hturner.github.io/PlackettLuce/
-BugReports: https://github.com/hturner/PlackettLuce/issues
-Description: Functions to prepare rankings data and fit the Plackett-Luce model
-    jointly attributed to Plackett (1975) and Luce
-    (1959, ISBN:0486441369). The standard Plackett-Luce model is generalized
-    to accommodate ties of any order in the ranking. Partial rankings, in which
-    only a subset of items are ranked in each ranking, are also accommodated in
-    the implementation. Disconnected/weakly connected networks implied by the
-    rankings may be handled by adding pseudo-rankings with a hypothetical item.
-    Optionally, a multivariate normal prior may be set on the log-worth
-    parameters and ranker reliabilities may be incorporated as proposed by
-    Raman and Joachims (2014) . Maximum a
-    posteriori estimation is used when priors are set. Methods are provided to
-    estimate standard errors or quasi-standard errors for inference as well as
-    to fit Plackett-Luce trees. See the package website or vignette for further
-    details.
-License: GPL-3
-Encoding: UTF-8
-LazyData: true
-Depends: R (>= 2.10)
-Imports: Matrix, igraph, methods, partykit, psychotools,
-    psychotree, RSpectra, qvcalc, sandwich, stats
-Suggests: BiocStyle, BayesMallows, BradleyTerry2,
-    PLMIX, ROlogit, StatRank, bookdown, covr, hyper2, kableExtra, knitr,
-    lbfgs, gnm, pmr, rmarkdown, testthat
-RoxygenNote: 7.1.1
-Roxygen: list(markdown = TRUE)
-VignetteBuilder: knitr
-Language: en-GB
+Package: PlackettLuce
+Type: Package
+Title: Plackett-Luce Models for Rankings
+Version: 0.3.0
+Authors@R: c(person("Heather", "Turner",
+    email = "ht@heatherturner.net", role = c("aut", "cre"),
+    comment = c(ORCID = "0000-0002-1256-3375")),
+    person("Ioannis", "Kosmidis", role = "aut",
+    comment = c(ORCID = "0000-0003-1556-0302")),
+    person("David", "Firth", role = "aut",
+    comment = c(ORCID = "0000-0003-0302-2312")),
+    person("Jacob", "van Etten", role = "ctb",
+    comment = c(ORCID = "0000-0001-7554-2558")))
+URL: https://hturner.github.io/PlackettLuce/
+BugReports: https://github.com/hturner/PlackettLuce/issues
+Description: Functions to prepare rankings data and fit the Plackett-Luce model
+    jointly attributed to Plackett (1975) and Luce
+    (1959, ISBN:0486441369). The standard Plackett-Luce model is generalized
+    to accommodate ties of any order in the ranking. Partial rankings, in which
+    only a subset of items are ranked in each ranking, are also accommodated in
+    the implementation. Disconnected/weakly connected networks implied by the
+    rankings may be handled by adding pseudo-rankings with a hypothetical item.
+    Optionally, a multivariate normal prior may be set on the log-worth
+    parameters and ranker reliabilities may be incorporated as proposed by
+    Raman and Joachims (2014) . Maximum a
+    posteriori estimation is used when priors are set. Methods are provided to
+    estimate standard errors or quasi-standard errors for inference as well as
+    to fit Plackett-Luce trees. See the package website or vignette for further
+    details.
+License: GPL-3
+Encoding: UTF-8
+LazyData: true
+Depends: R (>= 2.10)
+Imports: Matrix, igraph, methods, partykit, psychotools,
+    psychotree, RSpectra, qvcalc, sandwich, stats
+Suggests: BiocStyle, BayesMallows, BradleyTerry2,
+    PLMIX, ROlogit, StatRank, bookdown, covr, hyper2, kableExtra, knitr,
+    lbfgs, gnm, pmr, rmarkdown, testthat
+RoxygenNote: 7.1.1
+Roxygen: list(markdown = TRUE)
+VignetteBuilder: knitr
+Language: en-GB
diff --git a/NEWS.md b/NEWS.md
index 64eb8cc..90975c3 100644
--- a/NEWS.md
+++ b/NEWS.md
@@ -1,183 +1,183 @@
-# PlackettLuce 0.3-0
-
-* Now correctly handles cases where intermediate tie orders are not observed, fixing corresponding tie parameters to zero (#42).
-
-# PlackettLuce 0.2-9
-
-* `vcov.PlackettLuce()` works again for `ref = NULL` (bug introduced with vcov method in version 0.2-4)
-* avoid dependency on R >= 3.6.0 by providing alternatives to `asplit()`
-* `read.soi()` and `read.toi()` now handle incomplete rankings with very irregular lengths correctly.
-* `read.*()` functions for Preflib formats now give a meaningful error when the file or URL does not exist, and a warning if the file is corrupt.
-* `as.rankings` with `input = "orderings"` now checks coded values can be matched to item names, if provided.
-* `PlackettLuce()` now works with `nspeudo > 0` when there are no observed paired comparisons.
-* `?PlackettLuce` now gives advice on analysing data with higher order ties.
-
-# PlackettLuce 0.2-8
-
-* Fix bug in `as.rankings.matrix()` introduced in version 0.2-7.
-* Import `eigs` from RSpectra vs rARPACK.
-
-# PlackettLuce 0.2-7
-
-## New Features
-
-* New `"aggregated_rankings"` object to store aggregated rankings with the corresponding frequencies. Objects of class `"rankings"` can be aggregated via the `aggregate` method; alternatively `rankings()` and `as.rankings()` will create an `"aggregated_rankings"` object when `aggregate = TRUE`. `as.rankings()` also handles pre-aggregated data, accepting frequencies via the `freq` argument.
-* New `freq()` function to extract frequencies from aggregated rankings.
-* `as.rankings()` can now create a `"grouped_rankings"` object, if a grouping index is passed via the `index` argument.
-* New `as.matrix()` methods for rankings and aggregated rankings to extract the underlying matrix of rankings, with frequencies in the final column if relevant. This means rankings can be saved easily with `write.table()`.
-* New `complete()` and `decode()` functions to help pre-process orderings before converting to rankings, `complete()` infers the item(s) in r'th rank given the items in the other (r - 1) ranks. `decode()` converts coded (partial) orderings to orderings of the items in each ordering.
-* New `read.soi()`, `read.toc()` and `read.toi()` to read the corresponding PrefLib file formats (for data types "Strict Orders - Incomplete List", "Orders with Ties - Complete List" and "Orders with Ties - Incomplete List"). An `as.aggregated_rankings()` method is provided to convert the data frame of aggregated orderings to an `"aggregated_rankings"` object.
-
-## Improvements
-
-* `pltree()` now respects `na.action` and will pad predictions and fitted values for `na.action = "na.exclude"` if the rankings are missing for a whole group or one of the model covariates has a missing value.
-* `PlackettLuce()` now has an `na.action` argument for handling of missing rankings.
-* `fitted()` and `choices()` now return data frames, with list columns as necessary.
-
-## Changes in behaviour
-
-* `rankings()` now sets redundant/inconsistent ranks to `NA` rather than dropping them. This does not affect the final ranking, unless it is completely `NA`.
-* The frequencies column in the data frame returned by `read.soc()` is now named `Freq` rather than `n`.
-* The `"item"` attribute of the data frame returned by `read.soc()` is now named `"items"`.
-* The `labels` argument in `as.rankings()` has been deprecated and replaced by `items`.
-* `grouped_ranking()` has been deprecated and replaced by `group()`.
-* The redundant columns in the `nascar` data have been dropped.
-
-# PlackettLuce 0.2-6
-
-* Avoid using `isFALSE()` for compatibility with R < 3.5.
-* Don't test number of iterations when comparing models on grouped and ungrouped rankings.
-
-# PlackettLuce 0.2-5
-
-* Higher tolerance in tests of `vcov()` for CRAN Windows test machine.
-
-# PlackettLuce 0.2-4
-
-## New Features
-
-* `PlackettLuce()` now supports MAP estimation with a multivariate normal prior on log-worths and/or a gamma prior on ranker adherence.
-* `PlackettLuce()` now returns log-likelihood and degrees of freedom for the null model (where all outcomes, including ties, have equal probability).
-* There is now a `vcov` method for Plackett-Luce trees.
-
-## Changes in Behaviour
-
-* `itempar.PlackettLuce()` now always returns a matrix, even for a single node tree.
-
-## Bug Fixes
-
-* `pltree()` or `PlackettLuce()` with grouped rankings now work correctly with weights.
-
-# PlackettLuce 0.2-3
-
-## Improvements
-
-* Print methods for `"PlackettLuce"` and `"summary.PlacketLuce"` objects now respect `options("width")`.
-
-## Changes in Behaviour
-
-* `fitted` always returns `n` which is now weighted count of rankings (previously only returned unweighted count with argument `aggregate = TRUE`).
-
-## Bug fixes
-
-* Correct vcov for weighted rankings of more than two items.
-* Enable `AIC.pltree` to work on `"pltree"` object with one node.
-
-# PlackettLuce 0.2-2
-
-## New features
-
-* Add `AIC.pltree` to enable computation of AIC on new observations (e.g. data held out in cross-validation).
-* Add `fitted.pltree` to return combined fitted probabilities for each choice within each ranking, for each node in a Plackett-Luce tree.
-
-## Bug fixes
-
-* `vcov.PlackettLuce` now works for models with non-integer weights (fixes #25).
-* `plot.pltree` now works for `worth = TRUE` with psychotree version 0.15-2 (currently pre-release on https://r-forge.r-project.org/R/?group_id=330)
-* `PlackettLuce` and `plfit` now work when `start` argument is set.
-* `itempar.PlackettLuce` now works with `alias = FALSE`
-
-# PlackettLuce 0.2-1
-
-## New features
-
-* Add **pkgdown** site.
-* Add content to README (fixes #5).
-* Add `plot.PlackettLuce` method so that plotting works for a saved
-`"PlackettLuce"` object
-
-## Improvements
-
-* Improved vignette, particularly example based on `beans` data (which has been
-updated).
-* Improved help files particularly `?PlackettLuce` and new
-`package?PlackettLuce`. (Fixes #14 and #21).
-
-## Changes in behaviour
-
-* `maxit` defaults to 500 in `PlackettLuce`.
-* Steffensen acceleration only applied in iterations where it will increase the
-log-likelihood (still only attempted once iterations have reached a solution
-that is "close enough" as specified by `steffensen` argument).
-
-## Bug fixes
-
-* `coef.pltree()` now respects `log = TRUE` argument (fixes #19).
-* Fix bug causes lack of convergence with iterative scaling plus
-pseudo-rankings.
-* `[.grouped_rankings]` now works for replicated indices.
-
-# PlackettLuce 0.2-0
-
-## New Features
-
-* Add vignette.
-* Add data sets `pudding`, `nascar` and `beans`.
-* Add `pltree()` function for use with `partykit::mob()`. Requires new
-objects of type `"grouped_rankings"` that add a grouping index to a `"rankings"`
-object and store other derived objects used by `PlackettLuce`. Methods to print,
-plot and predict from Plackett-Luce tree are provided.
-* Add `connectivity()` function to check connectivity of a network given
-adjacency matrix. New `adjacency()` function computes adjacency matrix without
-creating edgelist, so remove `as.edgelist` generic and method for
-`"PlackettLuce" objects.
-* Add `as.data.frame` methods so that rankings and grouped rankings can be added
-to model frames.
-* Add `format` methods for rankings and grouped_rankings, for pretty printing.
-* Add `[` methods for rankings and grouped_rankings, to create valid rankings
-from selected rankings and/or items.
-* Add method argument to offer choices of iterative scaling (default), or
-direct maximisation of the likelihood via BFGS or L-BFGS.
-* Add `itempar` method for "PlackettLuce" objects to obtain different
-parameterizations of the worth parameters.
-* Add `read.soc` function to read Strict Orders - Complete List (.soc) files
-from http://www.preflib.org.
-
-## Changes in behaviour
-
-Old behaviour should be reproducible with arguments
-
-    npseudo = 0, steffensen = 0, start = c(rep(1/N, N), rep(0.1, D))
-
-where `N` is number of items and `D` is maximum order of ties.
-
-* Implement pseudo-data approach - now used by default.
-* Improve starting values for ability parameters
-* Add Steffensen acceleration to iterative scaling algorithm
-* Dropped `ref` argument from `PlackettLuce`; should be specified instead when
-calling `coef`, `summary`, `vcov` or `itempar`.
-* `qvcalc` generic now imported from **qvcalc**
-
-## Improvements
-
-* Refactor code to speed up model fitting and computation of fitted values and
-vcov.
-* Implement ranking weights and starting values in `PlackettLuce`.
-* Add package tests
-* Add `log` argument to `coef` so that worth parameters (probability of coming
-first in strict ranking of all items) can be obtained easily.
-
-
-# PlackettLuce 0.1-0
-
-* GitHub-only release of prototype package.
+# PlackettLuce 0.3.0
+
+* Now correctly handles cases where intermediate tie orders are not observed, fixing corresponding tie parameters to zero (#42).
+
+# PlackettLuce 0.2-9
+
+* `vcov.PlackettLuce()` works again for `ref = NULL` (bug introduced with vcov method in version 0.2-4)
+* avoid dependency on R >= 3.6.0 by providing alternatives to `asplit()`
+* `read.soi()` and `read.toi()` now handle incomplete rankings with very irregular lengths correctly.
+* `read.*()` functions for PrefLib formats now give a meaningful error when the file or URL does not exist, and a warning if the file is corrupt.
+* `as.rankings` with `input = "orderings"` now checks coded values can be matched to item names, if provided.
+* `PlackettLuce()` now works with `npseudo > 0` when there are no observed paired comparisons.
+* `?PlackettLuce` now gives advice on analysing data with higher order ties.
+
+# PlackettLuce 0.2-8
+
+* Fix bug in `as.rankings.matrix()` introduced in version 0.2-7.
+* Import `eigs` from RSpectra vs rARPACK.
+
+# PlackettLuce 0.2-7
+
+## New Features
+
+* New `"aggregated_rankings"` object to store aggregated rankings with the corresponding frequencies. Objects of class `"rankings"` can be aggregated via the `aggregate` method; alternatively `rankings()` and `as.rankings()` will create an `"aggregated_rankings"` object when `aggregate = TRUE`. `as.rankings()` also handles pre-aggregated data, accepting frequencies via the `freq` argument.
+* New `freq()` function to extract frequencies from aggregated rankings.
+* `as.rankings()` can now create a `"grouped_rankings"` object, if a grouping index is passed via the `index` argument.
+* New `as.matrix()` methods for rankings and aggregated rankings to extract the underlying matrix of rankings, with frequencies in the final column if relevant. This means rankings can be saved easily with `write.table()`.
+* New `complete()` and `decode()` functions to help pre-process orderings before converting to rankings. `complete()` infers the item(s) in r'th rank given the items in the other (r - 1) ranks. `decode()` converts coded (partial) orderings to orderings of the items in each ordering.
+* New `read.soi()`, `read.toc()` and `read.toi()` to read the corresponding PrefLib file formats (for data types "Strict Orders - Incomplete List", "Orders with Ties - Complete List" and "Orders with Ties - Incomplete List"). An `as.aggregated_rankings()` method is provided to convert the data frame of aggregated orderings to an `"aggregated_rankings"` object.
+
+## Improvements
+
+* `pltree()` now respects `na.action` and will pad predictions and fitted values for `na.action = "na.exclude"` if the rankings are missing for a whole group or one of the model covariates has a missing value.
+* `PlackettLuce()` now has an `na.action` argument for handling of missing rankings.
+* `fitted()` and `choices()` now return data frames, with list columns as necessary.
+
+## Changes in behaviour
+
+* `rankings()` now sets redundant/inconsistent ranks to `NA` rather than dropping them. This does not affect the final ranking, unless it is completely `NA`.
+* The frequencies column in the data frame returned by `read.soc()` is now named `Freq` rather than `n`.
+* The `"item"` attribute of the data frame returned by `read.soc()` is now named `"items"`.
+* The `labels` argument in `as.rankings()` has been deprecated and replaced by `items`.
+* `grouped_ranking()` has been deprecated and replaced by `group()`.
+* The redundant columns in the `nascar` data have been dropped.
+
+# PlackettLuce 0.2-6
+
+* Avoid using `isFALSE()` for compatibility with R < 3.5.
+* Don't test number of iterations when comparing models on grouped and ungrouped rankings.
+
+# PlackettLuce 0.2-5
+
+* Higher tolerance in tests of `vcov()` for CRAN Windows test machine.
+
+# PlackettLuce 0.2-4
+
+## New Features
+
+* `PlackettLuce()` now supports MAP estimation with a multivariate normal prior on log-worths and/or a gamma prior on ranker adherence.
+* `PlackettLuce()` now returns log-likelihood and degrees of freedom for the null model (where all outcomes, including ties, have equal probability).
+* There is now a `vcov` method for Plackett-Luce trees.
+
+## Changes in Behaviour
+
+* `itempar.PlackettLuce()` now always returns a matrix, even for a single node tree.
+
+## Bug Fixes
+
+* `pltree()` or `PlackettLuce()` with grouped rankings now work correctly with weights.
+
+# PlackettLuce 0.2-3
+
+## Improvements
+
+* Print methods for `"PlackettLuce"` and `"summary.PlackettLuce"` objects now respect `options("width")`.
+
+## Changes in Behaviour
+
+* `fitted` always returns `n` which is now the weighted count of rankings (previously only returned unweighted count with argument `aggregate = TRUE`).
+
+## Bug fixes
+
+* Correct vcov for weighted rankings of more than two items.
+* Enable `AIC.pltree` to work on `"pltree"` object with one node.
+
+# PlackettLuce 0.2-2
+
+## New features
+
+* Add `AIC.pltree` to enable computation of AIC on new observations (e.g. data held out in cross-validation).
+* Add `fitted.pltree` to return combined fitted probabilities for each choice within each ranking, for each node in a Plackett-Luce tree.
+
+## Bug fixes
+
+* `vcov.PlackettLuce` now works for models with non-integer weights (fixes #25).
+* `plot.pltree` now works for `worth = TRUE` with psychotree version 0.15-2 (currently pre-release on https://r-forge.r-project.org/R/?group_id=330)
+* `PlackettLuce` and `plfit` now work when `start` argument is set.
+* `itempar.PlackettLuce` now works with `alias = FALSE`
+
+# PlackettLuce 0.2-1
+
+## New features
+
+* Add **pkgdown** site.
+* Add content to README (fixes #5).
+* Add `plot.PlackettLuce` method so that plotting works for a saved
+`"PlackettLuce"` object
+
+## Improvements
+
+* Improved vignette, particularly example based on `beans` data (which has been
+updated).
+* Improved help files, particularly `?PlackettLuce` and new
+`package?PlackettLuce`. (Fixes #14 and #21).
+
+## Changes in behaviour
+
+* `maxit` defaults to 500 in `PlackettLuce`.
+* Steffensen acceleration only applied in iterations where it will increase the
+log-likelihood (still only attempted once iterations have reached a solution
+that is "close enough" as specified by `steffensen` argument).
+
+## Bug fixes
+
+* `coef.pltree()` now respects `log = TRUE` argument (fixes #19).
+* Fix bug causing lack of convergence with iterative scaling plus
+pseudo-rankings.
+* `[.grouped_rankings]` now works for replicated indices.
+
+# PlackettLuce 0.2-0
+
+## New Features
+
+* Add vignette.
+* Add data sets `pudding`, `nascar` and `beans`.
+* Add `pltree()` function for use with `partykit::mob()`. Requires new
+objects of type `"grouped_rankings"` that add a grouping index to a `"rankings"`
+object and store other derived objects used by `PlackettLuce`. Methods to print,
+plot and predict from Plackett-Luce tree are provided.
+* Add `connectivity()` function to check connectivity of a network given
+adjacency matrix. New `adjacency()` function computes adjacency matrix without
+creating edgelist, so remove `as.edgelist` generic and method for
+`"PlackettLuce"` objects.
+* Add `as.data.frame` methods so that rankings and grouped rankings can be added
+to model frames.
+* Add `format` methods for rankings and grouped_rankings, for pretty printing.
+* Add `[` methods for rankings and grouped_rankings, to create valid rankings
+from selected rankings and/or items.
+* Add method argument to offer choices of iterative scaling (default), or
+direct maximisation of the likelihood via BFGS or L-BFGS.
+* Add `itempar` method for "PlackettLuce" objects to obtain different
+parameterizations of the worth parameters.
+* Add `read.soc` function to read Strict Orders - Complete List (.soc) files
+from https://www.preflib.org.
+
+## Changes in behaviour
+
+Old behaviour should be reproducible with arguments
+
+    npseudo = 0, steffensen = 0, start = c(rep(1/N, N), rep(0.1, D))
+
+where `N` is number of items and `D` is maximum order of ties.
+
+* Implement pseudo-data approach - now used by default.
+* Improve starting values for ability parameters
+* Add Steffensen acceleration to iterative scaling algorithm
+* Dropped `ref` argument from `PlackettLuce`; should be specified instead when
+calling `coef`, `summary`, `vcov` or `itempar`.
+* `qvcalc` generic now imported from **qvcalc**
+
+## Improvements
+
+* Refactor code to speed up model fitting and computation of fitted values and
+vcov.
+* Implement ranking weights and starting values in `PlackettLuce`.
+* Add package tests
+* Add `log` argument to `coef` so that worth parameters (probability of coming
+first in strict ranking of all items) can be obtained easily.
+
+
+# PlackettLuce 0.1-0
+
+* GitHub-only release of prototype package.
diff --git a/R/PlackettLuce-package.R b/R/PlackettLuce-package.R index 580d444..f789346 100644 --- a/R/PlackettLuce-package.R +++ b/R/PlackettLuce-package.R @@ -1,43 +1,43 @@ -#' Plackett-Luce Models for Rankings -#' -#' Plackett-Luce provides functions to prepare rankings data in order to fit -#' the Plackett-Luce model or Plackett-Luce trees. The implementation can handle -#' ties, sub-rankings and rankings that imply disconnected or weakly connected -#' preference networks. Methods are provided for summary and inference. -#' -#' The main function in the package is the model-fitting function -#' \code{\link{PlackettLuce}} and the help file for that function provides -#' details of the Plackett-Luce model, which is extended here to accommodate -#' ties. -#' -#' Rankings data must be passed to \code{PlackettLuce} in a specific form, see -#' \code{\link{rankings}} for more details. Other functions for handling -#' rankings include \code{choices} to express the rankings as -#' choices from alternatives; \code{adjacency} to create an adjacency matrix of -#' wins and losses implied by the rankings and \code{connectivity} to check the -#' connectivity of the underlying preference network. -#' -#' Several methods are available to inspect fitted Plackett-Luce models, help -#' files are available for less common methods or where arguments may be -#' specified: \code{\link{coef}}, \code{deviance}, \code{\link{fitted}}, -#' \code{\link{itempar}}, \code{logLik}, \code{print}, -#' \code{\link{qvcalc}}, \code{\link{summary}}, \code{\link{vcov}}. -#' -#' PlackettLuce also provides the function \code{pltree} to fit a Plackett-Luce -#' tree i.e. a tree that partitions the rankings by covariate values, -#' identifying subgroups with different sets of worth parameters for the items. -#' In this case \code{\link{group}} must be used to prepare the data. -#' -#' Several data sets are provided in the package: \code{\link{beans}}, -#' \code{\link{nascar}}, \code{\link{pudding}}. 
The help files for these give -#' further illustration of preparing rankings data for modelling. The -#' \code{\link{read.soc}} function enables further example data sets of -#' "Strict Orders - Complete List" format (i.e. complete rankings with no ties) -#' to be downloaded from \href{http://www.preflib.org/}{PrefLib}. -#' -#' A full explanation of the methods with illustrations using the package data -#' sets is given in the vignette, -#' \code{vignette("Overview", package = "PlackettLuce")}. -#' @docType package -#' @name PlackettLuce-package -NULL +#' Plackett-Luce Models for Rankings +#' +#' Plackett-Luce provides functions to prepare rankings data in order to fit +#' the Plackett-Luce model or Plackett-Luce trees. The implementation can handle +#' ties, sub-rankings and rankings that imply disconnected or weakly connected +#' preference networks. Methods are provided for summary and inference. +#' +#' The main function in the package is the model-fitting function +#' \code{\link{PlackettLuce}} and the help file for that function provides +#' details of the Plackett-Luce model, which is extended here to accommodate +#' ties. +#' +#' Rankings data must be passed to \code{PlackettLuce} in a specific form, see +#' \code{\link{rankings}} for more details. Other functions for handling +#' rankings include \code{choices} to express the rankings as +#' choices from alternatives; \code{adjacency} to create an adjacency matrix of +#' wins and losses implied by the rankings and \code{connectivity} to check the +#' connectivity of the underlying preference network. +#' +#' Several methods are available to inspect fitted Plackett-Luce models, help +#' files are available for less common methods or where arguments may be +#' specified: \code{\link{coef}}, \code{deviance}, \code{\link{fitted}}, +#' \code{\link{itempar}}, \code{logLik}, \code{print}, +#' \code{\link{qvcalc}}, \code{\link{summary}}, \code{\link{vcov}}. 
+#' +#' PlackettLuce also provides the function \code{pltree} to fit a Plackett-Luce +#' tree i.e. a tree that partitions the rankings by covariate values, +#' identifying subgroups with different sets of worth parameters for the items. +#' In this case \code{\link{group}} must be used to prepare the data. +#' +#' Several data sets are provided in the package: \code{\link{beans}}, +#' \code{\link{nascar}}, \code{\link{pudding}}. The help files for these give +#' further illustration of preparing rankings data for modelling. The +#' \code{\link{read.soc}} function enables further example data sets of +#' "Strict Orders - Complete List" format (i.e. complete rankings with no ties) +#' to be downloaded from \href{https://www.preflib.org/}{PrefLib}. +#' +#' A full explanation of the methods with illustrations using the package data +#' sets is given in the vignette, +#' \code{vignette("Overview", package = "PlackettLuce")}. +#' @docType package +#' @name PlackettLuce-package +NULL diff --git a/R/preflib.R b/R/preflib.R index 3c9eb9c..198bacb 100644 --- a/R/preflib.R +++ b/R/preflib.R @@ -1,197 +1,197 @@ -#' Read Preflib Election Data Files -#' -#' Read orderings from `.soc`, `.soi`, `.toc` or `.toi` file types storing -#' election data as defined by -#' \href{http://www.preflib.org/}{\{PrefLib\}: A Library for Preferences}. -#' -#' The file types supported are -#' \describe{ -#' \item{.soc}{Strict Orders - Complete List} -#' \item{.soi}{Strict Orders - Incomplete List} -#' \item{.toc}{Orders with Ties - Complete List} -#' \item{.toi}{Orders with Ties - Incomplete List} -#' } -#' Note that the file types do not distinguish between types of incomplete -#' orderings, i.e. whether they are a complete ranking of a subset of items -#' (as supported by [PlackettLuce()]) or top-\eqn{n} rankings of \eqn{n} items -#' from the full set of items (not currently supported by [PlackettLuce()]). 
-#' -#' The numerically coded orderings and their frequencies are read into a -#' data frame, storing the item names as an attribute. The -#' `as.aggregated_rankings` method converts these to an -#' [`"aggregated_rankings"`][aggregate.rankings] object with the items labelled -#' by the item names. -#' -#' A Preflib file may be corrupt, in the sense that the ordered items do not -#' match the named items. In this case, the file can be read is as a data -#' frame (with a warning) using the corresponding `read.*` function, but -#' `as.aggregated_rankings` will throw an error. -#' @return A data frame of class `"preflib"` with first column \code{Freq}, -#' giving the frequency of the ranking in that row, and remaining columns -#' \code{Rank 1}, \ldots, \code{Rank p} giving the items ranked from first to -#' last place in that ranking. Ties are represented by vector elements in list -#' columns. The data frame has an attribute \code{"items"} giving the labels -#' corresponding to each item number. -#' -#' @param file An election data file, conventionally with extension `.soc`, -#' `.soi`, `.toc` or `.toi` according to data type. -#' @param x An object of class `"preflib"`. -#' @param ... Additional arguments passed to [as.rankings()]: `freq`, -#' `input` or `items` will be ignored with a warning as they are set -#' automatically. -#' @note The Netflix and cities datasets used in the examples are from -#' Caragiannis et al (2017) and Bennet and Lanning (2007) respectively. These -#' data sets require a citation for re-use. -#' @references -#' Mattei, N. and Walsh, T. (2013) PrefLib: A Library of Preference Data. -#' \emph{Proceedings of Third International Conference on Algorithmic Decision -#' Theory (ADT 2013)}. Lecture Notes in Artificial Intelligence, Springer. -#' -#' Caragiannis, I., Chatzigeorgiou, X, Krimpas, G. A., and Voudouris, A. A. -#' (2017) Optimizing positional scoring rules for rank aggregation. 
-#' In \emph{Proceedings of the 31st AAAI Conference on Artificial Intelligence}. -#' -#' Bennett, J. and Lanning, S. (2007) The Netflix Prize. -#' \emph{Proceedings of The KDD Cup and Workshops}. -#' -#' @examples -#' -#' # can take a little while depending on speed of internet connection -#' -#' \dontrun{ -#' # url for preflib data in the "Election Data" category -#' preflib <- "http://www.preflib.org/data/election/" -#' -#' # strict complete orderings of four films on Netflix -#' netflix <- read.soc(file.path(preflib, "netflix/ED-00004-00000101.soc")) -#' head(netflix) -#' attr(netflix, "items") -#' -#' head(as.rankings(netflix)) -#' -#' # strict incomplete orderings of 6 random cities from 36 in total -#' cities <- read.soi(file.path(preflib, "cities/ED-00034-00000001.soi")) -#' -#' # strict incomplete orderings of drivers in the 1961 F1 races -#' # 8 races with 17 to 34 drivers in each -#' f1 <- read.soi(file.path(preflib, "f1/ED-00010-00000001.soi")) -#' -#' # complete orderings with ties of 30 skaters -#' skaters <- read.toc(file.path(preflib, "skate/ED-00006-00000001.toc")) -#' -#' # incomplete orderings with ties of 10 sushi items from 100 total -#' # orderings were derived from numeric ratings -#' sushi <- read.toi(file.path(preflib, "sushi/ED-00014-00000003.toi")) -#' } -#' @importFrom utils read.csv -#' @name preflib -NULL - -read.items <- function(file){ # read one line to find number of items - test <- tryCatch(file(file, "rt"), silent = TRUE, - warning = function(w) w, error = function(e) e) - if (!inherits(test, "connection")){ - stop(test$message, call. 
= FALSE) - } else close(test) - p <- as.integer(read.csv(file, nrows = 1L, header = FALSE)) - # get items - items <- read.csv(file, skip = 1L, nrows = p, header = FALSE, - stringsAsFactors = FALSE, strip.white = TRUE)[,2L] - names(items) <- seq_len(p) - items -} - -#' @importFrom utils count.fields -read.strict <- function(file, incomplete = FALSE){ - items <- read.items(file) - # count maximum number of ranks - if (incomplete){ - r <- max(count.fields(file, sep = ",")) - 1L - } else r <- length(items) - # read frequencies and ordered items - nm <- c("Freq", paste("Rank", seq_len(r))) - obs <- read.csv(file, skip = length(items) + 2L, header = FALSE, - col.names = nm, check.names = FALSE) - preflib(obs, items) -} - -#' @importFrom utils count.fields -read.ties <- function(file, incomplete = FALSE){ - items <- read.items(file) - r <- length(items) - skip <- r + 2L - input <- chartr("{}", "''", readLines(file, encoding = "UTF-8")) - # count maximum number of ranks for incomplete rankings - if (incomplete){ - r <- max(count.fields(textConnection(input), quote = "'", - sep = ",", skip = skip)) - 1L - } - # read counts and ordered items - nm <- c("Freq", paste("Rank", seq_len(r))) - obs <- read.csv(text = input, skip = skip, header = FALSE, quote = "'", - col.names = nm, check.names = FALSE, - na.strings = "", stringsAsFactors = FALSE) - # split up ties (don't use array list as dim attribute kep on MacOS) - rank_class <- vapply(obs, is.character, logical(1)) - for (i in which(rank_class)){ - obs[[i]] <- lapply(strsplit( obs[[i]], ","), as.numeric) - } - preflib(obs, items) -} - -preflib <- function(obs, items){ - obs_items <- sort(unique(unlist(obs[, -1]))) - unnamed <- setdiff(as.character(obs_items), names(items)) - n <- length(unnamed) - if (n) { - warning("Corrupt file. 
Items with no name:\n", - paste(unnamed[seq(min(n, 10L))], ", ..."[n > 10L], - collapse = ", ")) - } - structure(obs, items = items, class = c("preflib", class(obs))) -} - -#' @rdname preflib -#' @export -read.soc <- function(file){ - read.strict(file, incomplete = FALSE) -} - -#' @rdname preflib -#' @export -read.soi <- function(file){ - # unused ranks will be NA - read.strict(file, incomplete = TRUE) -} - -#' @rdname preflib -#' @export -read.toc <- function(file){ - read.ties(file, incomplete = FALSE) -} - -#' @rdname preflib -#' @export -read.toi <- function(file){ - # unused ranks will be NA - read.ties(file, incomplete = TRUE) -} - -#' @rdname preflib -#' @method as.aggregated_rankings preflib -#' @export -as.aggregated_rankings.preflib <- function(x, ...){ - nc <- ncol(x) - if (identical(colnames(x), c("Freq", paste("Rank", seq(nc - 1))))){ - dots <- match.call(as.aggregated_rankings.preflib, - expand.dots = FALSE)[["..."]] - ignore <- names(dots) %in% c("freq", "input", "items") - if (any(ignore)) - warning("`freq`, `input` and `items` are set automatically for ", - "items of class \"preflib\"") - dots <- dots[setdiff(names(dots), c("freq", "input", "items"))] - do.call(as.rankings.matrix, - c(list(as.matrix(x[, -1]), freq = x[, 1], - input = "orderings", items = attr(x, "items")), dots)) - } else stop("`x` is not a valid \"preflib\" object") -} +#' Read Preflib Election Data Files +#' +#' Read orderings from `.soc`, `.soi`, `.toc` or `.toi` file types storing +#' election data as defined by +#' \href{https://www.preflib.org/}{\{PrefLib\}: A Library for Preferences}. +#' +#' The file types supported are +#' \describe{ +#' \item{.soc}{Strict Orders - Complete List} +#' \item{.soi}{Strict Orders - Incomplete List} +#' \item{.toc}{Orders with Ties - Complete List} +#' \item{.toi}{Orders with Ties - Incomplete List} +#' } +#' Note that the file types do not distinguish between types of incomplete +#' orderings, i.e. 
whether they are a complete ranking of a subset of items +#' (as supported by [PlackettLuce()]) or top-\eqn{n} rankings of \eqn{n} items +#' from the full set of items (not currently supported by [PlackettLuce()]). +#' +#' The numerically coded orderings and their frequencies are read into a +#' data frame, storing the item names as an attribute. The +#' `as.aggregated_rankings` method converts these to an +#' [`"aggregated_rankings"`][aggregate.rankings] object with the items labelled +#' by the item names. +#' +#' A Preflib file may be corrupt, in the sense that the ordered items do not +#' match the named items. In this case, the file can be read in as a data +#' frame (with a warning) using the corresponding `read.*` function, but +#' `as.aggregated_rankings` will throw an error. +#' @return A data frame of class `"preflib"` with first column \code{Freq}, +#' giving the frequency of the ranking in that row, and remaining columns +#' \code{Rank 1}, \ldots, \code{Rank p} giving the items ranked from first to +#' last place in that ranking. Ties are represented by vector elements in list +#' columns. The data frame has an attribute \code{"items"} giving the labels +#' corresponding to each item number. +#' +#' @param file An election data file, conventionally with extension `.soc`, +#' `.soi`, `.toc` or `.toi` according to data type. +#' @param x An object of class `"preflib"`. +#' @param ... Additional arguments passed to [as.rankings()]: `freq`, +#' `input` or `items` will be ignored with a warning as they are set +#' automatically. +#' @note The Netflix and cities datasets used in the examples are from +#' Caragiannis et al (2017) and Bennett and Lanning (2007) respectively. These +#' data sets require a citation for re-use. +#' @references +#' Mattei, N. and Walsh, T. (2013) PrefLib: A Library of Preference Data. +#' \emph{Proceedings of Third International Conference on Algorithmic Decision +#' Theory (ADT 2013)}.
Lecture Notes in Artificial Intelligence, Springer. +#' +#' Caragiannis, I., Chatzigeorgiou, X, Krimpas, G. A., and Voudouris, A. A. +#' (2017) Optimizing positional scoring rules for rank aggregation. +#' In \emph{Proceedings of the 31st AAAI Conference on Artificial Intelligence}. +#' +#' Bennett, J. and Lanning, S. (2007) The Netflix Prize. +#' \emph{Proceedings of The KDD Cup and Workshops}. +#' +#' @examples +#' +#' # can take a little while depending on speed of internet connection +#' +#' \dontrun{ +#' # url for preflib data in the "Election Data" category +#' preflib <- "https://www.preflib.org/data/election/" +#' +#' # strict complete orderings of four films on Netflix +#' netflix <- read.soc(file.path(preflib, "netflix/ED-00004-00000101.soc")) +#' head(netflix) +#' attr(netflix, "items") +#' +#' head(as.rankings(netflix)) +#' +#' # strict incomplete orderings of 6 random cities from 36 in total +#' cities <- read.soi(file.path(preflib, "cities/ED-00034-00000001.soi")) +#' +#' # strict incomplete orderings of drivers in the 1961 F1 races +#' # 8 races with 17 to 34 drivers in each +#' f1 <- read.soi(file.path(preflib, "f1/ED-00010-00000001.soi")) +#' +#' # complete orderings with ties of 30 skaters +#' skaters <- read.toc(file.path(preflib, "skate/ED-00006-00000001.toc")) +#' +#' # incomplete orderings with ties of 10 sushi items from 100 total +#' # orderings were derived from numeric ratings +#' sushi <- read.toi(file.path(preflib, "sushi/ED-00014-00000003.toi")) +#' } +#' @importFrom utils read.csv +#' @name preflib +NULL + +read.items <- function(file){ # read one line to find number of items + test <- tryCatch(file(file, "rt"), silent = TRUE, + warning = function(w) w, error = function(e) e) + if (!inherits(test, "connection")){ + stop(test$message, call. 
= FALSE) + } else close(test) + p <- as.integer(read.csv(file, nrows = 1L, header = FALSE)) + # get items + items <- read.csv(file, skip = 1L, nrows = p, header = FALSE, + stringsAsFactors = FALSE, strip.white = TRUE)[,2L] + names(items) <- seq_len(p) + items +} + +#' @importFrom utils count.fields +read.strict <- function(file, incomplete = FALSE){ + items <- read.items(file) + # count maximum number of ranks + if (incomplete){ + r <- max(count.fields(file, sep = ",")) - 1L + } else r <- length(items) + # read frequencies and ordered items + nm <- c("Freq", paste("Rank", seq_len(r))) + obs <- read.csv(file, skip = length(items) + 2L, header = FALSE, + col.names = nm, check.names = FALSE) + preflib(obs, items) +} + +#' @importFrom utils count.fields +read.ties <- function(file, incomplete = FALSE){ + items <- read.items(file) + r <- length(items) + skip <- r + 2L + input <- chartr("{}", "''", readLines(file, encoding = "UTF-8")) + # count maximum number of ranks for incomplete rankings + if (incomplete){ + r <- max(count.fields(textConnection(input), quote = "'", + sep = ",", skip = skip)) - 1L + } + # read counts and ordered items + nm <- c("Freq", paste("Rank", seq_len(r))) + obs <- read.csv(text = input, skip = skip, header = FALSE, quote = "'", + col.names = nm, check.names = FALSE, + na.strings = "", stringsAsFactors = FALSE) + # split up ties (don't use array list as dim attribute kept on MacOS) + rank_class <- vapply(obs, is.character, logical(1)) + for (i in which(rank_class)){ + obs[[i]] <- lapply(strsplit(obs[[i]], ","), as.numeric) + } + preflib(obs, items) +} + +preflib <- function(obs, items){ + obs_items <- sort(unique(unlist(obs[, -1]))) + unnamed <- setdiff(as.character(obs_items), names(items)) + n <- length(unnamed) + if (n) { + warning("Corrupt file.
Items with no name:\n", + paste(unnamed[seq(min(n, 10L))], ", ..."[n > 10L], + collapse = ", ")) + } + structure(obs, items = items, class = c("preflib", class(obs))) +} + +#' @rdname preflib +#' @export +read.soc <- function(file){ + read.strict(file, incomplete = FALSE) +} + +#' @rdname preflib +#' @export +read.soi <- function(file){ + # unused ranks will be NA + read.strict(file, incomplete = TRUE) +} + +#' @rdname preflib +#' @export +read.toc <- function(file){ + read.ties(file, incomplete = FALSE) +} + +#' @rdname preflib +#' @export +read.toi <- function(file){ + # unused ranks will be NA + read.ties(file, incomplete = TRUE) +} + +#' @rdname preflib +#' @method as.aggregated_rankings preflib +#' @export +as.aggregated_rankings.preflib <- function(x, ...){ + nc <- ncol(x) + if (identical(colnames(x), c("Freq", paste("Rank", seq(nc - 1))))){ + dots <- match.call(as.aggregated_rankings.preflib, + expand.dots = FALSE)[["..."]] + ignore <- names(dots) %in% c("freq", "input", "items") + if (any(ignore)) + warning("`freq`, `input` and `items` are set automatically for ", + "items of class \"preflib\"") + dots <- dots[setdiff(names(dots), c("freq", "input", "items"))] + do.call(as.rankings.matrix, + c(list(as.matrix(x[, -1]), freq = x[, 1], + input = "orderings", items = attr(x, "items")), dots)) + } else stop("`x` is not a valid \"preflib\" object") +} diff --git a/README.Rmd b/README.Rmd index 8394ffc..ea91432 100644 --- a/README.Rmd +++ b/README.Rmd @@ -1,177 +1,177 @@ ---- -output: github_document -bibliography: ["readme.bib"] ---- - -```{r rmd-setup, include = FALSE} -knitr::opts_chunk$set(fig.path = "man/figures/") -``` - -# PlackettLuce - -[![CRAN_Status_Badge](https://www.r-pkg.org/badges/version/PlackettLuce)](https://cran.r-project.org/package=PlackettLuce) -[![Travis-CI Build Status](https://travis-ci.org/hturner/PlackettLuce.svg?branch=master)](https://travis-ci.org/hturner/PlackettLuce) -[![AppVeyor Build 
Status](https://ci.appveyor.com/api/projects/status/github/hturner/PlackettLuce?branch=master&svg=true)](https://ci.appveyor.com/project/hturner/PlackettLuce) -[![Coverage Status](https://img.shields.io/codecov/c/github/hturner/PlackettLuce/master.svg)](https://codecov.io/github/hturner/PlackettLuce?branch=master) - -Package website: https://hturner.github.io/PlackettLuce/. - -## Overview - -The **PlackettLuce** package implements a generalization of the model jointly -attributed to @Plackett1975 and @Luce1959 for modelling rankings data. -Examples of rankings data might be the finishing order of competitors in a race, -or the preference of consumers over a set of competing products. - -The output of the model is an estimated **worth** for each item that appears in -the rankings. The parameters are generally presented on the log scale for -inference. - -The implementation of the Plackett-Luce model in **PlackettLuce**: - - - Accommodates ties (of any order) in the rankings, e.g. - bananas $\succ$ {apples, oranges} $\succ$ pears. - - Accommodates sub-rankings, e.g. pears $\succ$ apples, when the full set of items is {apples, bananas, oranges, pears}. - - Handles disconnected or weakly connected networks implied by the rankings, e.g. where one item always loses as in figure below. This is achieved by adding pseudo-rankings with a -hypothetical or ghost item. - -```{r always-loses, message = FALSE, echo = FALSE, fig.width = 3.5, fig.height = 3.5} -library(PlackettLuce) -library(igraph) -R <- matrix(c(1, 2, 0, 0, - 2, 0, 1, 0, - 1, 0, 0, 2, - 2, 1, 0, 0, - 0, 1, 2, 0), byrow = TRUE, ncol = 4, - dimnames = list(NULL, LETTERS[1:4])) -R <- as.rankings(R) -A <- adjacency(R) -net <- graph_from_adjacency_matrix(A) -plot(net, edge.arrow.size = 0.5, vertex.size = 30) -``` -
- -In addition the package provides methods for - - - Obtaining quasi-standard errors, that don't depend on the constraints applied - to the worth parameters for identifiability. - - Fitting Plackett-Luce trees, i.e. a tree that partitions the rankings by - covariate values, such as consumer attributes or racing conditions, identifying - subgroups with different sets of worth parameters for the items. - -## Installation - -The package may be installed from CRAN via - -```{r, eval = FALSE} -install.packages("PlackettLuce") -``` - -The development version can be installed via - -```{r, eval = FALSE} -# install.packages("devtools") -devtools::install_github("hturner/PlackettLuce") -``` - -## Usage - -The [Netflix Prize](https://www.netflixprize.com/) was a competition devised by -Netflix to improve the accuracy of its recommendation system. To facilitate -this they released ratings about movies from the users of the system that have -been transformed to preference data and are available from -[PrefLib](http://www.preflib.org/data/election/netflix/), [@Bennett2007]. Each data set -comprises rankings of a set of 3 or 4 movies selected at random. Here we -consider rankings for just one set of movies to illustrate the functionality of -**PlackettLuce**. - -The data can be read in using the `read.soc` function in **PlackettLuce** -```{r} -library(PlackettLuce) -preflib <- "http://www.preflib.org/data/election/" -netflix <- read.soc(file.path(preflib, "netflix/ED-00004-00000138.soc")) -head(netflix, 2) -``` -Each row corresponds to a unique ordering of the four movies in this data set. -The number of Netflix users that assigned that ordering is given in the first -column, followed by the four movies in preference order. So for example, 68 -users ranked movie 2 first, followed by movie 1, then movie 4 and finally movie -3. 
- -`PlackettLuce`, the model-fitting function in **PlackettLuce** requires that -the data are provided in the form of *rankings* rather than *orderings*, i.e. -the rankings are expressed by giving the rank for each item, rather than -ordering the items. We can create a `"rankings"` object from a set of orderings -as follows -```{r} -R <- as.rankings(netflix[,-1], input = "orderings", - items = attr(netflix, "items")) -R[1:3, as.rankings = FALSE] -``` -Note that `read.soc` saved the names of the movies in the `"items"` attribute of -`netflix`, so we have used these to label the items. Subsetting the -rankings object `R` with `as.rankings = FALSE`, returns the underlying matrix of -rankings corresponding to the subset. So for example, in the first ranking the -second movie (Beverly Hills Cop) is ranked number 1, followed by the first movie -(Mean Girls) with rank 2, followed by the fourth movie (Mission: Impossible II) -and finally the third movie (The Mummy Returns), giving the same ordering as in -the original data. - -Various methods are provided for `"rankings"` objects, in particular if we -subset the rankings without `as.rankings = FALSE`, the result is again a -`"rankings"` object and the corresponding print method is used: - -```{r} -R[1:3] -print(R[1:3], width = 60) -``` - -The rankings can now be passed to `PlackettLuce` to fit the Plackett-Luce model. -The counts of each ranking provided in the downloaded data are used as weights -when fitting the model. -```{r} -mod <- PlackettLuce(R, weights = netflix$Freq) -coef(mod, log = FALSE) -``` -Calling `coef` with `log = FALSE` gives the worth parameters, constrained to -sum to one. These parameters represent the probability that each movie is ranked -first. 
- -For inference these parameters are converted to the log scale, by default -setting the first parameter to zero so that the standard errors are estimable: -```{r} -summary(mod) -``` -In this way, Mean Girls is treated as the reference movie, the positive -parameter for Beverly Hills Cop shows this was more popular among the users, -while the negative parameters for the other two movies show these were less -popular. - -Comparisons between different pairs of movies can be made visually by plotting -the log-worth parameters with comparison intervals based on quasi standard -errors. -```{r qv, fig.width = 9} -qv <- qvcalc(mod) -plot(qv, ylab = "Worth (log)", main = NULL) -``` - -If the intervals overlap there is no significant difference. So we can see that -Beverly Hills Cop is significantly more popular than the other three movies, -Mean Girls is significant more popular than The Mummy Returns or -Mission: Impossible II, but there was no significant difference in users' -preference for these last two movies. - -## Going Further - -The core functionality of **PlackettLuce** is illustrated in the package -vignette, along with details of the model used in the package and a comparison -to other packages. The vignette can be found on the [package website](https://hturner.github.io/PlackettLuce/) or from within R once the -package has been installed, e.g. via - - vignette("Overview", package = "PlackettLuce") - -## Code of Conduct - -Please note that this project is released with a [Contributor Code of Conduct](https://github.com/hturner/PlackettLuce/blob/master/CONDUCT.md). By participating in this project you agree to abide by its terms. 
- -## References +--- +output: github_document +bibliography: ["readme.bib"] +--- + +```{r rmd-setup, include = FALSE} +knitr::opts_chunk$set(fig.path = "man/figures/") +``` + +# PlackettLuce + +[![CRAN_Status_Badge](https://www.r-pkg.org/badges/version/PlackettLuce)](https://cran.r-project.org/package=PlackettLuce) +[![Travis-CI Build Status](https://travis-ci.org/hturner/PlackettLuce.svg?branch=master)](https://travis-ci.org/hturner/PlackettLuce) +[![AppVeyor Build Status](https://ci.appveyor.com/api/projects/status/github/hturner/PlackettLuce?branch=master&svg=true)](https://ci.appveyor.com/project/hturner/PlackettLuce) +[![Coverage Status](https://img.shields.io/codecov/c/github/hturner/PlackettLuce/master.svg)](https://codecov.io/github/hturner/PlackettLuce?branch=master) + +Package website: https://hturner.github.io/PlackettLuce/. + +## Overview + +The **PlackettLuce** package implements a generalization of the model jointly +attributed to @Plackett1975 and @Luce1959 for modelling rankings data. +Examples of rankings data might be the finishing order of competitors in a race, +or the preference of consumers over a set of competing products. + +The output of the model is an estimated **worth** for each item that appears in +the rankings. The parameters are generally presented on the log scale for +inference. + +The implementation of the Plackett-Luce model in **PlackettLuce**: + + - Accommodates ties (of any order) in the rankings, e.g. + bananas $\succ$ {apples, oranges} $\succ$ pears. + - Accommodates sub-rankings, e.g. pears $\succ$ apples, when the full set of items is {apples, bananas, oranges, pears}. + - Handles disconnected or weakly connected networks implied by the rankings, e.g. where one item always loses as in figure below. This is achieved by adding pseudo-rankings with a +hypothetical or ghost item. 
+ +```{r always-loses, message = FALSE, echo = FALSE, fig.width = 3.5, fig.height = 3.5} +library(PlackettLuce) +library(igraph) +R <- matrix(c(1, 2, 0, 0, + 2, 0, 1, 0, + 1, 0, 0, 2, + 2, 1, 0, 0, + 0, 1, 2, 0), byrow = TRUE, ncol = 4, + dimnames = list(NULL, LETTERS[1:4])) +R <- as.rankings(R) +A <- adjacency(R) +net <- graph_from_adjacency_matrix(A) +plot(net, edge.arrow.size = 0.5, vertex.size = 30) +``` +
+ +In addition the package provides methods for + + - Obtaining quasi-standard errors, that don't depend on the constraints applied + to the worth parameters for identifiability. + - Fitting Plackett-Luce trees, i.e. a tree that partitions the rankings by + covariate values, such as consumer attributes or racing conditions, identifying + subgroups with different sets of worth parameters for the items. + +## Installation + +The package may be installed from CRAN via + +```{r, eval = FALSE} +install.packages("PlackettLuce") +``` + +The development version can be installed via + +```{r, eval = FALSE} +# install.packages("devtools") +devtools::install_github("hturner/PlackettLuce") +``` + +## Usage + +The [Netflix Prize](https://www.netflixprize.com/) was a competition devised by +Netflix to improve the accuracy of its recommendation system. To facilitate +this they released ratings about movies from the users of the system that have +been transformed to preference data and are available from +[PrefLib](https://www.preflib.org/data/election/netflix/), [@Bennett2007]. Each data set +comprises rankings of a set of 3 or 4 movies selected at random. Here we +consider rankings for just one set of movies to illustrate the functionality of +**PlackettLuce**. + +The data can be read in using the `read.soc` function in **PlackettLuce** +```{r} +library(PlackettLuce) +preflib <- "https://www.preflib.org/data/election/" +netflix <- read.soc(file.path(preflib, "netflix/ED-00004-00000138.soc")) +head(netflix, 2) +``` +Each row corresponds to a unique ordering of the four movies in this data set. +The number of Netflix users that assigned that ordering is given in the first +column, followed by the four movies in preference order. So for example, 68 +users ranked movie 2 first, followed by movie 1, then movie 4 and finally movie +3. 
+ +`PlackettLuce`, the model-fitting function in **PlackettLuce**, requires that +the data are provided in the form of *rankings* rather than *orderings*, i.e. +the rankings are expressed by giving the rank for each item, rather than +ordering the items. We can create a `"rankings"` object from a set of orderings +as follows: +```{r} +R <- as.rankings(netflix[,-1], input = "orderings", + items = attr(netflix, "items")) +R[1:3, as.rankings = FALSE] +``` +Note that `read.soc` saved the names of the movies in the `"items"` attribute of +`netflix`, so we have used these to label the items. Subsetting the +rankings object `R` with `as.rankings = FALSE` returns the underlying matrix of +rankings corresponding to the subset. So for example, in the first ranking the +second movie (Beverly Hills Cop) is ranked number 1, followed by the first movie +(Mean Girls) with rank 2, followed by the fourth movie (Mission: Impossible II) +and finally the third movie (The Mummy Returns), giving the same ordering as in +the original data. + +Various methods are provided for `"rankings"` objects. In particular, if we +subset the rankings without `as.rankings = FALSE`, the result is again a +`"rankings"` object and the corresponding print method is used: + +```{r} +R[1:3] +print(R[1:3], width = 60) +``` + +The rankings can now be passed to `PlackettLuce` to fit the Plackett-Luce model. +The counts of each ranking provided in the downloaded data are used as weights +when fitting the model. +```{r} +mod <- PlackettLuce(R, weights = netflix$Freq) +coef(mod, log = FALSE) +``` +Calling `coef` with `log = FALSE` gives the worth parameters, constrained to +sum to one. These parameters represent the probability that each movie is ranked +first.
+
+For inference these parameters are converted to the log scale, by default
+setting the first parameter to zero so that the standard errors are estimable:
+```{r}
+summary(mod)
+```
+In this way, Mean Girls is treated as the reference movie: the positive
+parameter for Beverly Hills Cop shows it was more popular among the users,
+while the negative parameters for the other two movies show these were less
+popular.
+
+Comparisons between different pairs of movies can be made visually by plotting
+the log-worth parameters with comparison intervals based on quasi-standard
+errors.
+```{r qv, fig.width = 9}
+qv <- qvcalc(mod)
+plot(qv, ylab = "Worth (log)", main = NULL)
+```
+
+If the intervals overlap, there is no significant difference. So we can see that
+Beverly Hills Cop is significantly more popular than the other three movies,
+Mean Girls is significantly more popular than The Mummy Returns or
+Mission: Impossible II, but there was no significant difference in users'
+preference for these last two movies.
+
+## Going Further
+
+The core functionality of **PlackettLuce** is illustrated in the package
+vignette, along with details of the model used in the package and a comparison
+to other packages. The vignette can be found on the [package website](https://hturner.github.io/PlackettLuce/) or from within R once the
+package has been installed, e.g. via
+
+    vignette("Overview", package = "PlackettLuce")
+
+## Code of Conduct
+
+Please note that this project is released with a [Contributor Code of Conduct](https://github.com/hturner/PlackettLuce/blob/master/CONDUCT.md). By participating in this project you agree to abide by its terms. 
+ +## References diff --git a/README.md b/README.md index ddb321c..3599fc0 100644 --- a/README.md +++ b/README.md @@ -1,251 +1,252 @@ - -# PlackettLuce - -[![CRAN\_Status\_Badge](https://www.r-pkg.org/badges/version/PlackettLuce)](https://cran.r-project.org/package=PlackettLuce) -[![Travis-CI Build -Status](https://travis-ci.org/hturner/PlackettLuce.svg?branch=master)](https://travis-ci.org/hturner/PlackettLuce) -[![AppVeyor Build -Status](https://ci.appveyor.com/api/projects/status/github/hturner/PlackettLuce?branch=master&svg=true)](https://ci.appveyor.com/project/hturner/PlackettLuce) -[![Coverage -Status](https://img.shields.io/codecov/c/github/hturner/PlackettLuce/master.svg)](https://codecov.io/github/hturner/PlackettLuce?branch=master) - -Package website: . - -## Overview - -The **PlackettLuce** package implements a generalization of the model -jointly attributed to Plackett (1975) and Luce (1959) for modelling -rankings data. Examples of rankings data might be the finishing order of -competitors in a race, or the preference of consumers over a set of -competing products. - -The output of the model is an estimated **worth** for each item that -appears in the rankings. The parameters are generally presented on the -log scale for inference. - -The implementation of the Plackett-Luce model in **PlackettLuce**: - - - Accommodates ties (of any order) in the rankings, e.g. bananas - \(\succ\) {apples, oranges} \(\succ\) pears. - - Accommodates sub-rankings, e.g. pears \(\succ\) apples, when the - full set of items is {apples, bananas, oranges, pears}. - - Handles disconnected or weakly connected networks implied by the - rankings, e.g. where one item always loses as in figure below. This - is achieved by adding pseudo-rankings with a hypothetical or ghost - item. - -![](man/figures/always-loses-1.png)
- -In addition the package provides methods for - - - Obtaining quasi-standard errors, that don’t depend on the - constraints applied to the worth parameters for identifiability. - - Fitting Plackett-Luce trees, i.e. a tree that partitions the - rankings by covariate values, such as consumer attributes or racing - conditions, identifying subgroups with different sets of worth - parameters for the items. - -## Installation - -The package may be installed from CRAN via - -``` r -install.packages("PlackettLuce") -``` - -The development version can be installed via - -``` r -# install.packages("devtools") -devtools::install_github("hturner/PlackettLuce") -``` - -## Usage - -The [Netflix Prize](https://www.netflixprize.com/) was a competition -devised by Netflix to improve the accuracy of its recommendation system. -To facilitate this they released ratings about movies from the users of -the system that have been transformed to preference data and are -available from [PrefLib](http://www.preflib.org/data/election/netflix/), -(Bennett and Lanning 2007). Each data set comprises rankings of a set of -3 or 4 movies selected at random. Here we consider rankings for just one -set of movies to illustrate the functionality of **PlackettLuce**. - -The data can be read in using the `read.soc` function in -**PlackettLuce** - -``` r -library(PlackettLuce) -preflib <- "http://www.preflib.org/data/election/" -netflix <- read.soc(file.path(preflib, "netflix/ED-00004-00000138.soc")) -head(netflix, 2) -``` - - ## Freq Rank 1 Rank 2 Rank 3 Rank 4 - ## 1 68 2 1 4 3 - ## 2 53 1 2 4 3 - -Each row corresponds to a unique ordering of the four movies in this -data set. The number of Netflix users that assigned that ordering is -given in the first column, followed by the four movies in preference -order. So for example, 68 users ranked movie 2 first, followed by movie -1, then movie 4 and finally movie 3. 
- -`PlackettLuce`, the model-fitting function in **PlackettLuce** requires -that the data are provided in the form of *rankings* rather than -*orderings*, i.e. the rankings are expressed by giving the rank for each -item, rather than ordering the items. We can create a `"rankings"` -object from a set of orderings as follows - -``` r -R <- as.rankings(netflix[,-1], input = "orderings", - items = attr(netflix, "items")) -R[1:3, as.rankings = FALSE] -``` - - ## Mean Girls Beverly Hills Cop The Mummy Returns Mission: Impossible II - ## 1 2 1 4 3 - ## 2 1 2 4 3 - ## 3 2 1 3 4 - -Note that `read.soc` saved the names of the movies in the `"items"` -attribute of `netflix`, so we have used these to label the items. -Subsetting the rankings object `R` with `as.rankings = FALSE`, returns -the underlying matrix of rankings corresponding to the subset. So for -example, in the first ranking the second movie (Beverly Hills Cop) is -ranked number 1, followed by the first movie (Mean Girls) with rank 2, -followed by the fourth movie (Mission: Impossible II) and finally the -third movie (The Mummy Returns), giving the same ordering as in the -original data. - -Various methods are provided for `"rankings"` objects, in particular if -we subset the rankings without `as.rankings = FALSE`, the result is -again a `"rankings"` object and the corresponding print method is used: - -``` r -R[1:3] -``` - - ## 1 - ## "Beverly Hills Cop > Mean Girls > Mis ..." - ## 2 - ## "Mean Girls > Beverly Hills Cop > Mis ..." - ## 3 - ## "Beverly Hills Cop > Mean Girls > The ..." - -``` r -print(R[1:3], width = 60) -``` - - ## 1 - ## "Beverly Hills Cop > Mean Girls > Mission: Impossible II ..." - ## 2 - ## "Mean Girls > Beverly Hills Cop > Mission: Impossible II ..." - ## 3 - ## "Beverly Hills Cop > Mean Girls > The Mummy Returns > Mis ..." - -The rankings can now be passed to `PlackettLuce` to fit the -Plackett-Luce model. 
The counts of each ranking provided in the -downloaded data are used as weights when fitting the model. - -``` r -mod <- PlackettLuce(R, weights = netflix$Freq) -coef(mod, log = FALSE) -``` - - ## Mean Girls Beverly Hills Cop The Mummy Returns - ## 0.2306285 0.4510655 0.1684719 - ## Mission: Impossible II - ## 0.1498342 - -Calling `coef` with `log = FALSE` gives the worth parameters, -constrained to sum to one. These parameters represent the probability -that each movie is ranked first. - -For inference these parameters are converted to the log scale, by -default setting the first parameter to zero so that the standard errors -are estimable: - -``` r -summary(mod) -``` - - ## Call: PlackettLuce(rankings = R, weights = netflix$Freq) - ## - ## Coefficients: - ## Estimate Std. Error z value Pr(>|z|) - ## Mean Girls 0.00000 NA NA NA - ## Beverly Hills Cop 0.67080 0.07472 8.978 < 2e-16 *** - ## The Mummy Returns -0.31404 0.07593 -4.136 3.53e-05 *** - ## Mission: Impossible II -0.43128 0.07489 -5.759 8.47e-09 *** - ## --- - ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 - ## - ## Residual deviance: 3493.5 on 3525 degrees of freedom - ## AIC: 3499.5 - ## Number of iterations: 7 - -In this way, Mean Girls is treated as the reference movie, the positive -parameter for Beverly Hills Cop shows this was more popular among the -users, while the negative parameters for the other two movies show these -were less popular. - -Comparisons between different pairs of movies can be made visually by -plotting the log-worth parameters with comparison intervals based on -quasi standard errors. - -``` r -qv <- qvcalc(mod) -plot(qv, ylab = "Worth (log)", main = NULL) -``` - -![](man/figures/qv-1.png) - -If the intervals overlap there is no significant difference. 
So we can -see that Beverly Hills Cop is significantly more popular than the other -three movies, Mean Girls is significant more popular than The Mummy -Returns or Mission: Impossible II, but there was no significant -difference in users’ preference for these last two movies. - -## Going Further - -The core functionality of **PlackettLuce** is illustrated in the package -vignette, along with details of the model used in the package and a -comparison to other packages. The vignette can be found on the [package -website](https://hturner.github.io/PlackettLuce/) or from within R once -the package has been installed, e.g. via - - vignette("Overview", package = "PlackettLuce") - -## Code of Conduct - -Please note that this project is released with a [Contributor Code of -Conduct](https://github.com/hturner/PlackettLuce/blob/master/CONDUCT.md). -By participating in this project you agree to abide by its terms. - -## References - -
- -
- -Bennett, J., and S. Lanning. 2007. “The Netflix Prize.” In *Proceedings -of the KDD Cup Workshop 2007*, 3–6. ACM. - -
- -
- -Luce, R. Duncan. 1959. *Individual Choice Behavior: A Theoretical -Analysis*. New York: Wiley. - -
- -
- -Plackett, Robert L. 1975. “The Analysis of Permutations.” *Appl. -Statist* 24 (2): 193–202. . - -
- -
+ +# PlackettLuce + +[![CRAN\_Status\_Badge](https://www.r-pkg.org/badges/version/PlackettLuce)](https://cran.r-project.org/package=PlackettLuce) +[![Travis-CI Build +Status](https://travis-ci.org/hturner/PlackettLuce.svg?branch=master)](https://travis-ci.org/hturner/PlackettLuce) +[![AppVeyor Build +Status](https://ci.appveyor.com/api/projects/status/github/hturner/PlackettLuce?branch=master&svg=true)](https://ci.appveyor.com/project/hturner/PlackettLuce) +[![Coverage +Status](https://img.shields.io/codecov/c/github/hturner/PlackettLuce/master.svg)](https://codecov.io/github/hturner/PlackettLuce?branch=master) + +Package website: . + +## Overview + +The **PlackettLuce** package implements a generalization of the model +jointly attributed to Plackett (1975) and Luce (1959) for modelling +rankings data. Examples of rankings data might be the finishing order of +competitors in a race, or the preference of consumers over a set of +competing products. + +The output of the model is an estimated **worth** for each item that +appears in the rankings. The parameters are generally presented on the +log scale for inference. + +The implementation of the Plackett-Luce model in **PlackettLuce**: + + - Accommodates ties (of any order) in the rankings, e.g. bananas + \(\succ\) {apples, oranges} \(\succ\) pears. + - Accommodates sub-rankings, e.g. pears \(\succ\) apples, when the + full set of items is {apples, bananas, oranges, pears}. + - Handles disconnected or weakly connected networks implied by the + rankings, e.g. where one item always loses as in figure below. This + is achieved by adding pseudo-rankings with a hypothetical or ghost + item. + +![](man/figures/always-loses-1.png)
+
+In addition, the package provides methods for
+
+  - Obtaining quasi-standard errors, which do not depend on the
+    constraints applied to the worth parameters for identifiability.
+  - Fitting Plackett-Luce trees, i.e. a tree that partitions the
+    rankings by covariate values, such as consumer attributes or racing
+    conditions, identifying subgroups with different sets of worth
+    parameters for the items.
+
+## Installation
+
+The package may be installed from CRAN via
+
+``` r
+install.packages("PlackettLuce")
+```
+
+The development version can be installed via
+
+``` r
+# install.packages("devtools")
+devtools::install_github("hturner/PlackettLuce")
+```
+
+## Usage
+
+The [Netflix Prize](https://www.netflixprize.com/) was a competition
+devised by Netflix to improve the accuracy of its recommendation system.
+To facilitate this, they released users' ratings of movies, which have
+been transformed to preference data and are available from
+[PrefLib](https://www.preflib.org/data/election/netflix/) (Bennett and
+Lanning 2007). Each data set comprises rankings of a set of 3 or 4
+movies selected at random. Here we consider rankings for just one set of
+movies to illustrate the functionality of **PlackettLuce**.
+
+The data can be read in using the `read.soc` function in
+**PlackettLuce**
+
+``` r
+library(PlackettLuce)
+preflib <- "https://www.preflib.org/data/election/"
+netflix <- read.soc(file.path(preflib, "netflix/ED-00004-00000138.soc"))
+head(netflix, 2)
+```
+
+    ##   Freq Rank 1 Rank 2 Rank 3 Rank 4
+    ## 1   68      2      1      4      3
+    ## 2   53      1      2      4      3
+
+Each row corresponds to a unique ordering of the four movies in this
+data set. The number of Netflix users that assigned that ordering is
+given in the first column, followed by the four movies in preference
+order. So for example, 68 users ranked movie 2 first, followed by movie
+1, then movie 4 and finally movie 3. 
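+
+As a quick sanity check (this snippet is illustrative and not part of the
+original workflow), an ordering can be converted to ranks by hand with base
+R: the rank of each movie is the position at which it appears in the
+ordering.
+
+``` r
+ordering <- c(2, 1, 4, 3)             # 1st place: movie 2, then movies 1, 4, 3
+match(seq_along(ordering), ordering)  # rank of movies 1-4: 2 1 4 3
+```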
+
+`PlackettLuce`, the model-fitting function in **PlackettLuce**, requires
+that the data are provided in the form of *rankings* rather than
+*orderings*, i.e. the rankings are expressed by giving the rank for
+each item, rather than ordering the items. We can create a `"rankings"`
+object from a set of orderings as follows:
+
+``` r
+R <- as.rankings(netflix[,-1], input = "orderings",
+                 items = attr(netflix, "items"))
+R[1:3, as.rankings = FALSE]
+```
+
+    ##   Mean Girls Beverly Hills Cop The Mummy Returns Mission: Impossible II
+    ## 1          2                 1                 4                      3
+    ## 2          1                 2                 4                      3
+    ## 3          2                 1                 3                      4
+
+Note that `read.soc` saved the names of the movies in the `"items"`
+attribute of `netflix`, so we have used these to label the items.
+Subsetting the rankings object `R` with `as.rankings = FALSE` returns
+the underlying matrix of rankings corresponding to the subset. So for
+example, in the first ranking the second movie (Beverly Hills Cop) is
+ranked number 1, followed by the first movie (Mean Girls) with rank 2,
+followed by the fourth movie (Mission: Impossible II) and finally the
+third movie (The Mummy Returns), giving the same ordering as in the
+original data.
+
+Various methods are provided for `"rankings"` objects; in particular, if
+we subset the rankings without `as.rankings = FALSE`, the result is
+again a `"rankings"` object and the corresponding print method is used:
+
+``` r
+R[1:3]
+```
+
+    ## 1
+    ## "Beverly Hills Cop > Mean Girls > Mis ..."
+    ## 2
+    ## "Mean Girls > Beverly Hills Cop > Mis ..."
+    ## 3
+    ## "Beverly Hills Cop > Mean Girls > The ..."
+
+``` r
+print(R[1:3], width = 60)
+```
+
+    ## 1
+    ## "Beverly Hills Cop > Mean Girls > Mission: Impossible II ..."
+    ## 2
+    ## "Mean Girls > Beverly Hills Cop > Mission: Impossible II ..."
+    ## 3
+    ## "Beverly Hills Cop > Mean Girls > The Mummy Returns > Mis ..."
+
+The rankings can now be passed to `PlackettLuce` to fit the
+Plackett-Luce model. 
The counts of each ranking provided in the +downloaded data are used as weights when fitting the model. + +``` r +mod <- PlackettLuce(R, weights = netflix$Freq) +coef(mod, log = FALSE) +``` + + ## Mean Girls Beverly Hills Cop The Mummy Returns + ## 0.2306285 0.4510655 0.1684719 + ## Mission: Impossible II + ## 0.1498342 + +Calling `coef` with `log = FALSE` gives the worth parameters, +constrained to sum to one. These parameters represent the probability +that each movie is ranked first. + +For inference these parameters are converted to the log scale, by +default setting the first parameter to zero so that the standard errors +are estimable: + +``` r +summary(mod) +``` + + ## Call: PlackettLuce(rankings = R, weights = netflix$Freq) + ## + ## Coefficients: + ## Estimate Std. Error z value Pr(>|z|) + ## Mean Girls 0.00000 NA NA NA + ## Beverly Hills Cop 0.67080 0.07472 8.978 < 2e-16 *** + ## The Mummy Returns -0.31404 0.07593 -4.136 3.53e-05 *** + ## Mission: Impossible II -0.43128 0.07489 -5.759 8.47e-09 *** + ## --- + ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 + ## + ## Residual deviance: 3493.5 on 3525 degrees of freedom + ## AIC: 3499.5 + ## Number of iterations: 7 + +In this way, Mean Girls is treated as the reference movie, the positive +parameter for Beverly Hills Cop shows this was more popular among the +users, while the negative parameters for the other two movies show these +were less popular. + +Comparisons between different pairs of movies can be made visually by +plotting the log-worth parameters with comparison intervals based on +quasi standard errors. + +``` r +qv <- qvcalc(mod) +plot(qv, ylab = "Worth (log)", main = NULL) +``` + +![](man/figures/qv-1.png) + +If the intervals overlap there is no significant difference. 
So we can
+see that Beverly Hills Cop is significantly more popular than the other
+three movies, Mean Girls is significantly more popular than The Mummy
+Returns or Mission: Impossible II, but there was no significant
+difference in users’ preference for these last two movies.
+
+## Going Further
+
+The core functionality of **PlackettLuce** is illustrated in the package
+vignette, along with details of the model used in the package and a
+comparison to other packages. The vignette can be found on the [package
+website](https://hturner.github.io/PlackettLuce/) or from within R once
+the package has been installed, e.g. via
+
+    vignette("Overview", package = "PlackettLuce")
+
+## Code of Conduct
+
+Please note that this project is released with a [Contributor Code of
+Conduct](https://github.com/hturner/PlackettLuce/blob/master/CONDUCT.md).
+By participating in this project you agree to abide by its terms.
+
+## References
+
+
+ +
+ +Bennett, J., and S. Lanning. 2007. “The Netflix Prize.” In *Proceedings +of the KDD Cup Workshop 2007*, 3–6. ACM. + +
+ +
+ +Luce, R. Duncan. 1959. *Individual Choice Behavior: A Theoretical +Analysis*. New York: Wiley. + +
+ +
+ +Plackett, Robert L. 1975. “The Analysis of Permutations.” *Appl. +Statist* 24 (2): 193–202. . + +
+ +
diff --git a/appveyor.yml b/appveyor.yml index f8fdafc..1522a07 100644 --- a/appveyor.yml +++ b/appveyor.yml @@ -1,55 +1,55 @@ -# DO NOT CHANGE the "init" and "install" sections below - -# Download script file from GitHub -init: - ps: | - $ErrorActionPreference = "Stop" - Invoke-WebRequest https://raw.github.com/krlmlr/r-appveyor/master/scripts/appveyor-tool.ps1 -OutFile "..\appveyor-tool.ps1" - Import-Module '..\appveyor-tool.ps1' - -install: - ps: Bootstrap - -cache: - - C:\RLibrary - -environment: - BIOC_USE_DEVEL: FALSE - USE_RTOOLS: TRUE - # env vars that may need to be set, at least temporarily, from time to time - # see https://github.com/krlmlr/r-appveyor#readme for details - # USE_RTOOLS: true - # R_REMOTES_STANDALONE: true - -# Adapt as necessary starting from here - -build_script: - # Installs all packages in the DESCRIPTION file. - - travis-tool.sh install_bioc_deps - - travis-tool.sh install_deps - -test_script: - - travis-tool.sh run_tests - -on_failure: - - 7z a failure.zip *.Rcheck\* - - appveyor PushArtifact failure.zip - -artifacts: - - path: '*.Rcheck\**\*.log' - name: Logs - - - path: '*.Rcheck\**\*.out' - name: Logs - - - path: '*.Rcheck\**\*.fail' - name: Logs - - - path: '*.Rcheck\**\*.Rout' - name: Logs - - - path: '\*_*.tar.gz' - name: Bits - - - path: '\*_*.zip' - name: Bits +# DO NOT CHANGE the "init" and "install" sections below + +# Download script file from GitHub +init: + ps: | + $ErrorActionPreference = "Stop" + Invoke-WebRequest https://raw.github.com/krlmlr/r-appveyor/master/scripts/appveyor-tool.ps1 -OutFile "..\appveyor-tool.ps1" + Import-Module '..\appveyor-tool.ps1' + +install: + ps: Bootstrap + +cache: + - C:\RLibrary + +environment: + BIOC_USE_DEVEL: FALSE + USE_RTOOLS: TRUE + # env vars that may need to be set, at least temporarily, from time to time + # see https://github.com/krlmlr/r-appveyor#readme for details + # USE_RTOOLS: true + # R_REMOTES_STANDALONE: true + +# Adapt as necessary starting from here + +build_script: + 
# Installs all packages in the DESCRIPTION file. + - travis-tool.sh install_bioc_deps + - travis-tool.sh install_deps + +test_script: + - travis-tool.sh run_tests + +on_failure: + - 7z a failure.zip *.Rcheck\* + - appveyor PushArtifact failure.zip + +artifacts: + - path: '*.Rcheck\**\*.log' + name: Logs + + - path: '*.Rcheck\**\*.out' + name: Logs + + - path: '*.Rcheck\**\*.fail' + name: Logs + + - path: '*.Rcheck\**\*.Rout' + name: Logs + + - path: '\*_*.tar.gz' + name: Bits + + - path: '\*_*.zip' + name: Bits diff --git a/cran-comments.md b/cran-comments.md index 02ee792..e5a6d53 100644 --- a/cran-comments.md +++ b/cran-comments.md @@ -1,26 +1,17 @@ ## Comments -This submission: - - uses alternative to base::asplit in R < 3.6.0 (to fix errors on CRAN) - - fixes bug in vcov method that causes NA values for a particular parameterization. - -Some other minor fixes and improvements have been made at the same time. +This submission fixes the warnings on Windows in the CRAN checks. ## Test environments -1. (Local) Ubuntu 18.04.2 LTS, R 3.6.1 -2. (R-hub) Fedora Linux, R-devel, clang, gfortran -3. (R-hub) Ubuntu Linux 16.04 LTS, R-release, GCC -4. (R-hub) Windows Server 2008 R2 SP1, R-devel, 32/64 bit -5. (R-hub) macOS 10.11 El Capitan, R-release -6. (Win-builder) Windows Server 2008 (64-bit), R-devel -7. (Win-builder) Windows Server 2008 (64-bit), R-oldrelease +1. (Local) Ubuntu 20.04.1 LTS, R 4.0.2 +2. (Local) Windows 10, R 4.0.2 +3. (R-hub) Windows Server 2008 R2 SP1, R-devel, 32/64 bit +4. (Win-builder) Windows Server 2008 (64-bit), R-devel ### Check results -Checks 3, 4, 6 & 7 return a note regarding URLs/DOIs. This is a false alarm: all the links redirect to valid pages on jstor.org. - -Check 7 gives an additional note: "no visible global function definition for 'asplit'". However, the code conditions on R version, so asplit is only called if R >= 3.6.0. 
This conditional execution is validated by the fact that the examples and tests do not fail on this installation, which has R-3.5.3. +Checks return a note regarding URLs/DOIs. This is a false alarm: all the links redirect to valid pages on jstor.org. ## revdepcheck results diff --git a/docs/404.html b/docs/404.html new file mode 100644 index 0000000..1f36d1d --- /dev/null +++ b/docs/404.html @@ -0,0 +1,162 @@ + + + + + + + + +Page not found (404) • PlackettLuce + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + + + +
+ +
+
+ + +Content not found. Please use links in the navbar. + +
+ + + +
+ + + +
+ + +
+

Site built with pkgdown 1.6.1.

+
+ +
+
+ + + + + + + + diff --git a/docs/CONDUCT.html b/docs/CONDUCT.html index 3981e8b..993d281 100644 --- a/docs/CONDUCT.html +++ b/docs/CONDUCT.html @@ -1,144 +1,170 @@ - - - - - - - - -Contributor Code of Conduct • PlackettLuce - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-
- - - -
- -
-
- - -
- -

As contributors and maintainers of this project, we pledge to respect all people who contribute through reporting issues, posting feature requests, updating documentation, submitting pull requests or patches, and other activities.

-

We are committed to making participation in this project a harassment-free experience for everyone, regardless of level of experience, gender, gender identity and expression, sexual orientation, disability, personal appearance, body size, race, ethnicity, age, or religion.

-

Examples of unacceptable behavior by participants include the use of sexual language or imagery, derogatory comments or personal attacks, trolling, public or private harassment, insults, or other unprofessional conduct.

-

Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct. Project maintainers who do not follow the Code of Conduct may be removed from the project team.

-

Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by opening an issue or contacting one or more of the project maintainers.

-

This Code of Conduct is adapted from the Contributor Covenant, version 1.0.0, available at https://contributor-covenant.org/version/1/0/0/

-
- -
- -
- - -
- - -
-

Site built with pkgdown 1.3.0.

-
-
-
- - - - - - + + + + + + + + +Contributor Code of Conduct • PlackettLuce + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + + + +
+ +
+
+ + +
+ +

As contributors and maintainers of this project, we pledge to respect all people who contribute through reporting issues, posting feature requests, updating documentation, submitting pull requests or patches, and other activities.

+

We are committed to making participation in this project a harassment-free experience for everyone, regardless of level of experience, gender, gender identity and expression, sexual orientation, disability, personal appearance, body size, race, ethnicity, age, or religion.

+

Examples of unacceptable behavior by participants include the use of sexual language or imagery, derogatory comments or personal attacks, trolling, public or private harassment, insults, or other unprofessional conduct.

+

Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct. Project maintainers who do not follow the Code of Conduct may be removed from the project team.

+

Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by opening an issue or contacting one or more of the project maintainers.

+

This Code of Conduct is adapted from the Contributor Covenant, version 1.0.0, available at https://contributor-covenant.org/version/1/0/0/

+
+ +
+ + + +
+ + + +
+ + +
+

Site built with pkgdown 1.6.1.

+
+ +
+
+ + + + + + + + diff --git a/docs/articles/Overview.html b/docs/articles/Overview.html index d21824a..6c092ad 100644 --- a/docs/articles/Overview.html +++ b/docs/articles/Overview.html @@ -6,1327 +6,1368 @@ Introduction to PlackettLuce • PlackettLuce - - - + + + + + - - - - -
-
+ + +
+
+ - - -
-

Abstract

- The PlackettLuce package implements a generalization of the model jointly - attributed to Plackett (1975) and Luce (1959) for modelling rankings data. The - generalization accommodates both ties (of any order) and partial rankings - (complete rankings of only some items). By default, the implementation adds a set of - pseudo-rankings with a hypothetical item, ensuring that the network of wins - and losses is always strongly connected, i.e. all items are connected to every - other item by both a path of wins and a path of losses. This means that the - worth of each item is always estimable with finite standard error. It also has - a regularization effect, shrinking the estimated parameters towards equal item worth. In addition to - standard methods for model summary, PlackettLuce provides a method to - estimate quasi-standard errors for the item parameters, so that comparison - intervals can be derived even when a reference item is set. Finally the - package provides a method for model-based partitioning using ranking-specific covariates, enabling the - identification of subgroups where items have been ranked differently. -
- -
+
+ + +
+

Abstract

+

The PlackettLuce package implements a generalization of the model jointly + attributed to Plackett (1975) and Luce (1959) for modelling rankings data. The + generalization accommodates both ties (of any order) and partial rankings + (complete rankings of only some items). By default, the implementation adds a set of + pseudo-rankings with a hypothetical item, ensuring that the network of wins + and losses is always strongly connected, i.e. all items are connected to every + other item by both a path of wins and a path of losses. This means that the + worth of each item is always estimable with finite standard error. It also has + a regularization effect, shrinking the estimated parameters towards equal item worth. In addition to + standard methods for model summary, PlackettLuce provides a method to + estimate quasi-standard errors for the item parameters, so that comparison + intervals can be derived even when a reference item is set. Finally the + package provides a method for model-based partitioning using ranking-specific covariates, enabling the + identification of subgroups where items have been ranked differently.

+
+ +

-1 Introduction

-

Rankings data, in which each observation is an ordering of a set of items, -arises in a range of applications, for example sports tournaments and consumer -studies. A classic model for such data is the Plackett-Luce model. This model -depends on Luce’s axiom of choice (Luce 1959, 1977) which states that the odds of -choosing an item over another do not depend on the set of items from which the -choice is made. Suppose we have a set of \(J\) items

-

\[S = \{i_1, i_2, \ldots, i_J\}.\]

-

Then under Luce’s axiom, the probability of selecting some item \(j\) -from \(S\) is given by

-

\[P(j | S) = \frac{\alpha_{j}}{\sum_{i \in S} \alpha_i}\]

-

where \(\alpha_i\) represents the worth of item \(i\). Viewing a ranking of \(J\) -items as a sequence of choices — first choosing the top-ranked item from all -items, then choosing the second-ranked item from the remaining items and so -on — it follows that the probability of the ranking -\({i_1 \succ \ldots \succ i_J}\) is

-

\[\prod_{j=1}^J \frac{\alpha_{i_j}}{\sum_{i \in A_j} \alpha_i}\]

-

where \(A_j\) is the set of alternatives \(\{i_j, i_{j + 1}, \ldots, i_J\}\) from -which item \(i_j\) is chosen. The above model is also derived in Plackett (1975), -hence the name Plackett-Luce model.

-

The PlackettLuce package implements a novel extension of the Plackett-Luce -model that accommodates tied rankings, which may be applied to either full or -partial rankings. Pseudo-rankings are utilised to obtain estimates in cases -where the maximum likelihood estimates do not exist, or do not have finite -standard errors. Methods are provided to obtain different parameterizations with -corresponding standard errors or quasi-standard errors (that are independent of -parameter constraints). There is also a method to work with the psychotree -package to fit Plackett-Luce trees.

-
1 Introduction

Rankings data, in which each observation is an ordering of a set of items, arises in a range of applications, for example sports tournaments and consumer studies. A classic model for such data is the Plackett-Luce model. This model depends on Luce's axiom of choice (Luce 1959, 1977), which states that the odds of choosing one item over another do not depend on the set of items from which the choice is made. Suppose we have a set of \(J\) items

\[S = \{i_1, i_2, \ldots, i_J\}.\]

Then under Luce's axiom, the probability of selecting some item \(j\) from \(S\) is given by

\[P(j | S) = \frac{\alpha_{j}}{\sum_{i \in S} \alpha_i}\]

where \(\alpha_i\) represents the worth of item \(i\). Viewing a ranking of \(J\) items as a sequence of choices — first choosing the top-ranked item from all items, then choosing the second-ranked item from the remaining items and so on — it follows that the probability of the ranking \({i_1 \succ \ldots \succ i_J}\) is

\[\prod_{j=1}^J \frac{\alpha_{i_j}}{\sum_{i \in A_j} \alpha_i}\]

where \(A_j\) is the set of alternatives \(\{i_j, i_{j + 1}, \ldots, i_J\}\) from which item \(i_j\) is chosen. The above model is also derived in Plackett (1975), hence the name Plackett-Luce model.

The PlackettLuce package implements a novel extension of the Plackett-Luce model that accommodates tied rankings, which may be applied to either full or partial rankings. Pseudo-rankings are utilised to obtain estimates in cases where the maximum likelihood estimates do not exist, or do not have finite standard errors. Methods are provided to obtain different parameterizations with corresponding standard errors or quasi-standard errors (that are independent of parameter constraints). There is also a method to work with the psychotree package to fit Plackett-Luce trees.
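The sequential-choice factorisation above translates directly into code. Below is a minimal base R sketch (the function `plackett_luce_prob` is our own illustrative helper, not part of the PlackettLuce package):

```r
# Probability of a complete ranking under the Plackett-Luce model:
# at each stage the next item is chosen from the remaining alternatives
# with probability proportional to its worth.
plackett_luce_prob <- function(ordering, alpha) {
  prob <- 1
  remaining <- ordering
  for (item in ordering) {
    prob <- prob * alpha[item] / sum(alpha[remaining])
    remaining <- setdiff(remaining, item)
  }
  prob
}

alpha <- c(0.5, 0.3, 0.2)             # worths of items 1, 2, 3
plackett_luce_prob(c(1, 2, 3), alpha)
```

With equal worths, every ordering of three items has probability 1/6, and the probabilities of all \(J!\) orderings sum to one.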

1.1 Comparison with other packages

Even though the Plackett-Luce model is a well-established method for analysing rankings, the software available to fit the model is limited. By considering each choice in the ranking as a multinomial observation, with one item observed out of a possible set, the "Poisson trick" (see, for example, Baker 1994) can be applied to express the model as a log-linear model, where the response is the count (one or zero) of each possible outcome within each choice. In principle, the model can then be fitted using standard software for generalized linear models. However, there are a number of difficulties with this. Firstly, dummy variables must be set up to represent the presence or absence of each item in each choice, and a factor must be created to identify each choice, which is a non-standard task. Secondly, the factor identifying each choice will have many levels: greater than the number of rankings, for rankings of more than two objects. Thus there are many parameters to estimate, and a standard function such as glm will be slow to fit the model, or may even fail because the corresponding model matrix is too large to fit in memory. This issue can be circumvented by using the gnm function from gnm, which provides an eliminate argument to efficiently estimate the effects of such a factor. Even then, model-fitting may be relatively slow, given the expansion in the number of observations when rankings are converted to counts. For example, the ranking {item 3 \(\prec\) item 1 \(\prec\) item 2} expands to two choices with five counts altogether:

##      choice item 1 item 2 item 3 count
## [1,]      1      1      0      0     0
## [2,]      1      0      1      0     0
## [3,]      1      0      0      1     1
## [4,]      2      1      0      0     1
## [5,]      2      0      1      0     0

It is possible to aggregate observations of the same choice from the same set of alternatives, but the number of combinations increases quickly with the number of items.
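For illustration, the expansion shown above can be scripted in a few lines. This is a sketch only (the helper name `expand_ranking` is ours, not from any package), reproducing the counts for the ranking in which item 3 is placed first, then item 1, then item 2:

```r
# Expand an ordering (best to worst) into the long format used by the
# "Poisson trick": one row per available item per choice, with a count
# of 1 for the item selected in that choice. The final choice is
# deterministic, so it contributes no rows.
expand_ranking <- function(ordering, n_items) {
  rows <- NULL
  remaining <- ordering
  for (j in seq_len(length(ordering) - 1)) {
    for (item in sort(remaining)) {
      dummies <- as.integer(seq_len(n_items) == item)  # presence/absence
      rows <- rbind(rows,
                    c(j, dummies, as.integer(item == ordering[j])))
    }
    remaining <- setdiff(remaining, ordering[j])
  }
  colnames(rows) <- c("choice", paste("item", seq_len(n_items)), "count")
  rows
}

# The ranking from the text: item 3, then item 1, then item 2
expand_ranking(c(3, 1, 2), n_items = 3)
```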

Given the issues with applying general methods, custom algorithms and software have been developed. One approach is to use Hunter's (2004) minorization-maximization (MM) algorithm to maximize the likelihood, which is equivalent to an iterative scaling algorithm; this algorithm is used by the StatRank package. Alternatively, the likelihood of the observed data under the Plackett-Luce model can be maximised directly using a generic optimisation method such as the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm, as is done by the pmr and hyper2 packages. Finally, Bayesian methods can be used either to maximize the posterior distribution via an Expectation-Maximization (EM) algorithm or to simulate the posterior distribution using Markov chain Monte Carlo (MCMC) techniques, both of which are provided by PLMIX. PlackettLuce offers both iterative scaling and generic optimization using either BFGS or a limited-memory variant (L-BFGS) via the lbfgs package.

Even some of these specialized implementations can scale poorly with the number of items and/or the number of rankings, as shown by the example timings in Table 1.2. Specifically, pmr::pl becomes impractical to use with a moderate number of items (~10), while the functions from hyper2 and StatRank take much longer to run with a large number (1000s) of unique rankings. PlackettLuce copes well with these moderately-sized data sets, though it is not quite as fast as PLMIX when both the number of items and the number of unique rankings are large.

Table 1.1: Features of example data sets from PrefLib (Mattei and Walsh 2013). The Netflix data are from Bennett and Lanning (2007).

| Data set | Rankings | Unique rankings | Items |
|----------|----------|-----------------|-------|
| Netflix  | 1256     | 24              | 4     |
| T-shirt  | 30       | 30              | 11    |
| Sushi    | 5000     | 4926            | 10    |

Table 1.2: Timings in seconds for fitting the Plackett-Luce model to data sets summarised in Table 1.1 using different packages. See Appendix 5.1 for details and code.

| Data set | PlackettLuce | hyper2 | PLMIX | pmr   | StatRank |
|----------|--------------|--------|-------|-------|----------|
| Netflix  | 0.019        | 0.070  | 0.371 | 0.357 | 0.405    |
| T-shirt  | 0.020        | 0.109  | 0.007 | a     | 7.228    |
| Sushi    | 1.4          | 69.527 | 0.12  | a     | 12.56    |

a Function fails to complete.


As the number of items increases, it is typically more common to observe partial rankings than complete rankings. Partial rankings can be of two types: sub-rankings, where only a subset of items are ranked each time, and incomplete rankings, where the top \(n\) items are selected and the remaining items are unranked, but implicitly ranked lower than the top \(n\). PlackettLuce handles sub-rankings only, while PLMIX handles incomplete rankings only and hyper2 can handle both types. StatRank seems to support partial rankings, but the extent of this support is not clear. The timings in Table 1.3 for fitting the Plackett-Luce model on the NASCAR data from Hunter (2004) illustrate that PlackettLuce is more efficient than hyper2 for modelling sub-rankings of a relatively large number of items.

Table 1.3: Timings in seconds for fitting the Plackett-Luce model to the NASCAR data from Hunter (2004) using different packages. All rankings are unique.

| Rankings | Items | Items per ranking | PlackettLuce | hyper2 |
|----------|-------|-------------------|--------------|--------|
| 36       | 83    | 42-43             | 0.129        | 29.447 |

The first three columns describe the NASCAR data; the last two give the time elapsed in seconds.


PlackettLuce is the only package out of those based on maximum likelihood estimation with the functionality to compute standard errors for the item parameters, and thereby the facility to conduct inference about these parameters. PLMIX allows for inference based on the posterior distribution. In some cases, when the network of wins and losses is disconnected or weakly connected, the maximum likelihood estimate does not exist, or has infinite standard error; such issues are handled in PlackettLuce by utilising pseudo-rankings. This is similar to incorporating prior information as in the Bayesian approach.

PlackettLuce is also the only package that can accommodate tied rankings, through a novel extension of the Plackett-Luce model. On the other hand, hyper2 is currently the only package that can handle rankings of combinations of items, for example team rankings in sports. PLMIX offers the facility to model heterogeneous populations of subjects that have different sets of worth parameters via mixture models. This is similar in spirit to the model-based partitioning offered by PlackettLuce, except that here the sub-populations are defined by binary splits on subject attributes. A summary of the features of the various packages for Plackett-Luce models is given in Table 1.4.

Table 1.4: Features of packages for fitting the Plackett-Luce model.

| Feature               | PlackettLuce | hyper2 | pmr | StatRank | PLMIX    |
|-----------------------|--------------|--------|-----|----------|----------|
| Inference             | Frequentist  | No     | No  | No       | Bayesian |
| Disconnected networks | Yes          | No     | No  | No       | Yes      |
| Ties                  | Yes          | No     | No  | No       | No       |
| Teams                 | No           | Yes    | No  | No       | No       |
| Heterogeneous case    | Trees        | No     | No  | No       | Mixtures |

2 Methods

2.1 Extended Plackett-Luce model

The PlackettLuce package permits rankings of the form

\[R = \{C_1, C_2, \ldots, C_J\}\]

where the items in set \(C_1\) are ranked higher than (better than) the items in \(C_2\), and so on. If there are multiple objects in set \(C_j\), these items are tied in the ranking. For a set \(S\), let

\[f(S) = \delta_{|S|} \left(\prod_{i \in S} \alpha_i \right)^\frac{1}{|S|}\]

where \(|S|\) is the cardinality of the set \(S\), \(\delta_n\) is a parameter representing the prevalence of ties of order \(n\) (with \(\delta_1 \equiv 1\)), and \(\alpha_i\) is a parameter representing the worth of item \(i\). Then under an extension of the Plackett-Luce model allowing ties up to order \(D\), the probability of the ranking \(R\) is given by

\[\begin{equation}
\prod_{j = 1}^J \frac{f(C_j)}{
\sum_{k = 1}^{\min(D_j, D)} \sum_{S \in {A_j \choose k}} f(S)}
\tag{2.1}
\end{equation}\]

where \(D_j\) is the cardinality of \(A_j\), the set of alternatives from which \(C_j\) is chosen, and \(A_j \choose k\) is all the possible choices of \(k\) items from \(A_j\). The value of \(D\) can be set to the maximum number of tied items observed in the data, so that \(\delta_n = 0\) for \(n > D\).

When the worth parameters are constrained to sum to one, they represent the probability that the corresponding item comes first in a ranking of all items, given that first place is not tied.

The 2-way tie prevalence parameter \(\delta_2\) is interpretable via the probability that two given items of equal worth tie for first place, given that first place is not a 3-way or higher tie. Specifically, that probability is \(\delta_2/(2 + \delta_2)\).

The 3-way and higher tie-prevalence parameters are interpretable similarly, in terms of tie probabilities among equal-worth items.

When intermediate tie orders are not observed (e.g. ties of order 2 and order 4 are observed, but no ties of order 3), the maximum likelihood estimate of the corresponding tie prevalence parameters is zero, so these parameters are excluded from the model.
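Equation (2.1) can be checked numerically on a toy example. The following sketch (function names are ours) computes \(f(S)\) and the probability of a single, possibly tied, choice; over an exhaustive set of outcomes the probabilities sum to one, and for two items of equal worth the tie probability reduces to \(\delta_2/(2 + \delta_2)\) as stated above:

```r
# f(S): geometric mean of the worths in S, scaled by the tie prevalence
# delta_{|S|}; delta is a vector (delta_1, ..., delta_D) with delta[1] = 1
f <- function(S, alpha, delta) {
  delta[length(S)] * prod(alpha[S])^(1 / length(S))
}

# One factor of Equation (2.1): probability that the (possibly tied)
# set C is chosen from the alternatives A
choice_prob <- function(C, A, alpha, delta) {
  denom <- 0
  for (k in seq_len(min(length(A), length(delta)))) {
    sets <- combn(A, k, simplify = FALSE)  # all subsets of A of size k
    denom <- denom + sum(vapply(sets, f, numeric(1),
                                alpha = alpha, delta = delta))
  }
  f(C, alpha, delta) / denom
}

alpha <- c(0.5, 0.3)  # worths of items 1 and 2
delta <- c(1, 0.4)    # delta_1 = 1, delta_2 = 0.4 (so D = 2)

# For two alternatives the outcomes {1 wins, 2 wins, tie} are exhaustive
p <- c(choice_prob(1, 1:2, alpha, delta),
       choice_prob(2, 1:2, alpha, delta),
       choice_prob(1:2, 1:2, alpha, delta))
sum(p)  # 1
```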

2.1.1 Pudding example (with ties)

When each ranking contains only two items, the model in Equation (2.1) reduces to the extended Bradley-Terry model proposed by Davidson (1970) for paired comparisons with ties. The pudding data set, available in PlackettLuce, provides the data from Example 2 of that paper, in which respondents were asked to test two brands of chocolate pudding from a total of six brands. For each pair of brands \(i\) and \(j\), the data set gives the frequencies that brand \(i\) was preferred (\(w_{ij}\)), that brand \(j\) was preferred (\(w_{ji}\)) and that the brands were tied (\(t_{ij}\)).

library(PlackettLuce)
head(pudding)

##   i j r_ij w_ij w_ji t_ij
## 1 1 2   57   19   22   16
## 2 1 3   47   16   19   12
## 3 2 3   48   19   19   10
## 4 1 4   54   18   23   13
## 5 2 4   51   23   19    9
## 6 3 4   54   19   20   15

PlackettLuce, the model-fitting function in PlackettLuce, requires data in the form of rankings, with the rank (1st, 2nd, 3rd, \(\ldots\)) for each item. In this case it is more straightforward to define the orderings (winner, loser) initially, corresponding to the wins for item \(i\), the wins for item \(j\) and the ties:

i_wins <- data.frame(Winner = pudding$i, Loser = pudding$j)
j_wins <- data.frame(Winner = pudding$j, Loser = pudding$i)
if (getRversion() < "3.6.0"){
  n <- nrow(pudding)
  ties <- data.frame(Winner = array(split(pudding[c("i", "j")], 1:n), n),
                     Loser = rep(NA, 15))
} else {
  ties <- data.frame(Winner = asplit(pudding[c("i", "j")], 1),
                     Loser = rep(NA, 15))
}
head(ties, 2)

##   Winner Loser
## 1   1, 2    NA
## 2   1, 3    NA

In the last case, we split the i and j columns of pudding by row, using the base R function asplit, if available. For each pair, this gives a vector of items that we can specify as the winner, while the loser is missing.

Now the as.rankings() function from PlackettLuce can be used to convert the combined orderings to an object of class "rankings".

R <- as.rankings(rbind(i_wins, j_wins, ties),
                 input = "orderings")
head(R, 2)

## [1] "1 > 2" "1 > 3"

tail(R, 2)

## [1] "4 = 6" "5 = 6"

The print method displays the rankings in a readable form; however, the underlying data structure stores the rankings in the form of a matrix:

head(unclass(R), 2)

##      1 2 3 4 5 6
## [1,] 1 2 0 0 0 0
## [2,] 1 0 2 0 0 0

The six columns represent the pudding brands. In each row, 0 represents an unranked brand (not in the comparison), 1 represents the brand(s) ranked in first place and 2 represents the brand in second place, if applicable.

To specify the full set of rankings, we need the frequency of each ranking, which will be specified to the model-fitting function as a weight vector:

w <- unlist(pudding[c("w_ij", "w_ji", "t_ij")])

Now we can fit the model with PlackettLuce, passing the rankings object and the weight vector as arguments. Setting npseudo = 0 means that standard maximum likelihood estimation is performed and maxit = 7 limits the number of iterations to obtain the same worth parameters as Davidson (1970):

mod <- PlackettLuce(R, weights = w, npseudo = 0, maxit = 7)

## Warning in PlackettLuce(R, weights = w, npseudo = 0, maxit = 7): Iterations
## have not converged.

coef(mod, log = FALSE)

##         1         2         3         4         5         6      tie2 
## 0.1388005 0.1729985 0.1617420 0.1653930 0.1586805 0.2023855 0.7468147

Note here that we have specified log = FALSE in order to report the estimates in the parameterization of Equation (2.1). In the next section we discuss why it is more appropriate to use the log scale for inference.
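To make the tie parameter tangible, we can apply the \(\delta_2/(2 + \delta_2)\) interpretation from Section 2.1 to the fitted value above:

```r
# Fitted tie prevalence, from coef(mod, log = FALSE) above
delta2 <- 0.7468147
delta2 / (2 + delta2)  # ~0.27
```

So for two brands of equal worth, the estimated probability that they tie (rather than one being preferred) is about 0.27.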


2.2 Inference

A standard way to report model parameter estimates is to present them along with their corresponding standard errors. This is an indication of the estimates' precision; however, implicitly it invites comparison with zero. Such comparison is made explicit in many summary methods for models in R, with the addition of partial t or Z tests testing the null hypothesis that the parameter is equal to zero, given the other parameters in the model. However, this hypothesis is generally not of interest for the worth parameters in a Plackett-Luce model: we expect most items to have some worth; the question is whether the items differ in their worth. In addition, a Z test based on asymptotic normality of the maximum likelihood estimate will not be appropriate for worth parameters near zero or one, since it does not take account of the fact that the parameters cannot be outside of these limits.


On the log scale, however, there are no bounds on the parameters and we can set a reference level to provide meaningful comparisons. By default, the summary method for "PlackettLuce" objects sets the first item (the first element of colnames(R)) as the reference:

summary(mod)

## Call: PlackettLuce(rankings = R, npseudo = 0, weights = w, maxit = 7)
## 
## Coefficients:
##      Estimate Std. Error z value Pr(>|z|)    
## 1      0.0000         NA      NA       NA    
## 2      0.2202     0.1872   1.176 0.239429    
## 3      0.1530     0.1935   0.790 0.429271    
## 4      0.1753     0.1882   0.931 0.351683    
## 5      0.1339     0.1927   0.695 0.487298    
## 6      0.3771     0.1924   1.960 0.049983 *  
## tie2  -0.2919     0.0825  -3.539 0.000402 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual deviance:  1619.4 on 1484 degrees of freedom
## AIC:  1631.4 
## Number of iterations: 7

None of the Z tests for the item parameters provides significant evidence against the null hypothesis of no difference from the worth of item 1, which is consistent with the test for equal preferences presented in Davidson (1970). The tie parameter is also shown on the log scale here, but it is an integral part of the model rather than a parameter of interest for inference, and its scale is not as relevant as that of the worth parameters.


The reference level for the item parameters can be changed via the ref argument; for example, setting it to NULL sets the mean worth as the reference:

summary(mod, ref = NULL)

## Call: PlackettLuce(rankings = R, npseudo = 0, weights = w, maxit = 7)
## 
## Coefficients:
##       Estimate Std. Error z value Pr(>|z|)    
## 1    -0.176581   0.121949  -1.448 0.147619    
## 2     0.043664   0.121818   0.358 0.720019    
## 3    -0.023617   0.126823  -0.186 0.852274    
## 4    -0.001295   0.122003  -0.011 0.991533    
## 5    -0.042726   0.127054  -0.336 0.736657    
## 6     0.200555   0.126594   1.584 0.113140    
## tie2 -0.291938   0.082499  -3.539 0.000402 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual deviance:  1619.4 on 1484 degrees of freedom
## AIC:  1631.4 
## Number of iterations: 7

As can be seen from the output above, the standard errors of the item parameters change with the reference level. Therefore, in cases where there is no natural reference (for example, in comparisons of own brand versus competitor's brands), inference can depend on an arbitrary choice. This problem can be handled through the use of quasi standard errors, which remain constant for a given item regardless of the reference. In addition, quasi standard errors are defined for the reference item, so even in cases where there is a natural reference, the uncertainty around the worth of that item can still be represented.


Quasi standard errors for the item parameters are implemented via a method for the qvcalc function from the qvcalc package:

qv <- qvcalc(mod)
summary(qv)

## Model call:  PlackettLuce(rankings = R, npseudo = 0, weights = w, maxit = 7) 
##        estimate        SE   quasiSE   quasiVar
##     1 0.0000000 0.0000000 0.1328950 0.01766108
##     2 0.2202447 0.1872168 0.1327373 0.01761919
##     3 0.1529644 0.1935181 0.1395740 0.01948091
##     4 0.1752864 0.1882110 0.1330240 0.01769538
##     5 0.1338550 0.1927043 0.1399253 0.01957908
##     6 0.3771362 0.1924059 0.1392047 0.01937796
## Worst relative errors in SEs of simple contrasts (%):  -0.8 0.8 
## Worst relative errors over *all* contrasts (%):  -1.7 1.7
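The quasi-variances allow the standard error of any simple contrast of log-worths to be approximated as \(\sqrt{\text{quasiVar}_i + \text{quasiVar}_j}\). As a quick check against the output above, using the reported values for items 1 and 2 (the conventional SE for item 2 is already the SE of its contrast with reference item 1):

```r
q1 <- 0.01766108           # quasiVar for item 1
q2 <- 0.01761919           # quasiVar for item 2
se_quasi <- sqrt(q1 + q2)  # approximate SE of the contrast
se_exact <- 0.1872168      # conventional SE for item 2 (ref = item 1)
100 * (se_quasi - se_exact) / se_exact  # relative error well under 1%
```

The discrepancy is within the worst relative error of ±0.8% for simple contrasts reported in the output.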

Again by default, the first item is taken as the reference, but this may be changed via a ref argument. The plot method for the returned object visualizes the item parameters (log-worth parameters) along with comparison intervals: item parameters for which the comparison intervals do not cross are significantly different:

plot(qv, xlab = "Brand of pudding", ylab = "Worth (log)", main = NULL)
2.2 Inference

A standard way to report model parameter estimates is alongside their corresponding standard errors, an indication of each estimate's precision. However, this implicitly invites comparison with zero. Such a comparison is made explicit in many summary methods for models in R, through partial t or Z tests of the null hypothesis that a parameter is equal to zero, given the other parameters in the model. This hypothesis is generally not of interest for the worth parameters in a Plackett-Luce model: we expect most items to have some worth; the question is whether the items differ in their worth. In addition, a Z test based on asymptotic normality of the maximum likelihood estimate is not appropriate for worth parameters near zero or one, since it does not take account of the fact that the parameters cannot lie outside these limits.

On the log scale, however, there are no bounds on the parameters and we can set a reference level to provide meaningful comparisons. By default, the summary method for "PlackettLuce" objects sets the first item (the first element of colnames(R)) as the reference:
summary(mod)
## Call: PlackettLuce(rankings = R, npseudo = 0, weights = w, maxit = 7)
## 
## Coefficients:
##      Estimate Std. Error z value Pr(>|z|)    
## 1      0.0000         NA      NA       NA    
## 2      0.2202     0.1872   1.176 0.239429    
## 3      0.1530     0.1935   0.790 0.429271    
## 4      0.1753     0.1882   0.931 0.351683    
## 5      0.1339     0.1927   0.695 0.487298    
## 6      0.3771     0.1924   1.960 0.049983 *  
## tie2  -0.2919     0.0825  -3.539 0.000402 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual deviance:  1619.4 on 1484 degrees of freedom
## AIC:  1631.4 
## Number of iterations: 7

None of the Z tests for the item parameters provides significant evidence against the null hypothesis of no difference from the worth of item 1, which is consistent with the test for equal preferences presented in Davidson (1970). The tie parameter is also shown on the log scale here, but it is an integral part of the model rather than a parameter of interest for inference, and its scale is not as relevant as that of the worth parameters.

The reference level for the item parameters can be changed via the ref argument; for example, setting ref = NULL uses the mean worth as the reference:
summary(mod, ref = NULL)
## Call: PlackettLuce(rankings = R, npseudo = 0, weights = w, maxit = 7)
## 
## Coefficients:
##       Estimate Std. Error z value Pr(>|z|)    
## 1    -0.176581   0.121949  -1.448 0.147619    
## 2     0.043664   0.121818   0.358 0.720019    
## 3    -0.023617   0.126823  -0.186 0.852274    
## 4    -0.001295   0.122003  -0.011 0.991533    
## 5    -0.042726   0.127054  -0.336 0.736657    
## 6     0.200555   0.126594   1.584 0.113140    
## tie2 -0.291938   0.082499  -3.539 0.000402 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual deviance:  1619.4 on 1484 degrees of freedom
## AIC:  1631.4 
## Number of iterations: 7

As can be seen from the output above, the standard errors of the item parameters change with the reference level. Therefore, in cases where there is no natural reference (for example, in comparisons of one's own brand versus competitors' brands), inference can depend on an arbitrary choice. This problem can be handled through quasi-standard errors, which remain constant for a given item regardless of the reference. In addition, quasi-standard errors are defined for the reference item, so even where there is a natural reference, the uncertainty around the worth of that item can still be represented.

Quasi-standard errors for the item parameters are implemented via a method for the qvcalc function from the qvcalc package:
qv <- qvcalc(mod)
summary(qv)
## Model call:  PlackettLuce(rankings = R, npseudo = 0, weights = w, maxit = 7) 
##        estimate        SE   quasiSE   quasiVar
##     1 0.0000000 0.0000000 0.1328950 0.01766108
##     2 0.2202447 0.1872168 0.1327373 0.01761919
##     3 0.1529644 0.1935181 0.1395740 0.01948091
##     4 0.1752864 0.1882110 0.1330240 0.01769538
##     5 0.1338550 0.1927043 0.1399253 0.01957908
##     6 0.3771362 0.1924059 0.1392047 0.01937796
## Worst relative errors in SEs of simple contrasts (%):  -0.8 0.8 
## Worst relative errors over *all* contrasts (%):  -1.7 1.7

Again, the first item is taken as the reference by default, but this may be changed via a ref argument. The plot method for the returned object visualizes the item parameters (log-worth parameters) along with comparison intervals: item parameters whose comparison intervals do not overlap are significantly different:

plot(qv, xlab = "Brand of pudding", ylab = "Worth (log)", main = NULL)
Figure 2.1: Worth of brands of chocolate pudding. Intervals based on quasi-standard errors.

The quasi-variances allow comparisons that are approximately correct for every possible contrast among the parameters. The routine error report in the last two lines printed by summary(qv) shows that, in this example, the approximation error is very small: the error in the standard error of any simple contrast among the parameters is less than 0.8%.
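As a sketch of how the quasi-variances are used, the approximate standard error of any simple contrast is obtained by summing the two items' quasi-variances and taking the square root (the quasiVar values below are taken from the summary(qv) output above):

```r
# Approximate SE of the log-worth contrast between items 2 and 6,
# using the quasi-variances reported by summary(qv)
se_2_vs_6 <- sqrt(0.01761919 + 0.01937796)
se_2_vs_6  # approximately 0.192
```

This is the quantity that the comparison intervals in the plot are built from.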

2.3 Disconnected networks

The wins and losses between items can be represented as a directed network. For example, consider the following set of paired comparisons:
R <- matrix(c(1, 2, 0, 0,
              2, 0, 1, 0,
              1, 0, 0, 2,
              2, 1, 0, 0,
              0, 1, 2, 0), byrow = TRUE, ncol = 4,
            dimnames = list(NULL, LETTERS[1:4]))
R <- as.rankings(R)

Note that even though the data were specified as a rankings matrix, we have used as.rankings() to create a formal rankings object (input is set to "rankings" by default). The as.rankings() function checks that the rankings are specified as dense rankings, i.e. consecutive integers with no rank skipped for tied items, recoding as necessary; sets rankings with only one item to NA, since these are uninformative; and adds column names if necessary.
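As a small illustration of the recoding (a made-up ranking, not part of the data above), a ranking that skips a rank is converted to dense form:

```r
# Ranks (1, 1, 3) skip rank 2, so as.rankings() recodes them
# to the dense form (1, 1, 2): the first two items tie for first
as.rankings(matrix(c(1, 1, 3), nrow = 1))
```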

The adjacency function from PlackettLuce can be used to convert these rankings to an adjacency matrix where element \((i, j)\) is the number of times item \(i\) is ranked higher than item \(j\):
A <- adjacency(R)
A
##   A B C D
## A 0 1 0 1
## B 1 0 1 0
## C 1 0 0 0
## D 0 0 0 0
## attr(,"class")
## [1] "adjacency" "matrix"
Using functions from igraph, we can visualise the corresponding network:
library(igraph)
net <- graph_from_adjacency_matrix(A)
plot(net, edge.arrow.size = 0.5, vertex.size = 30)
Figure 2.2: Network representation of toy rankings.

A sufficient condition for the worth parameters (on the log scale) to have finite maximum likelihood estimates (MLEs) and standard errors is that the network is strongly connected, i.e. there is a path of wins and a path of losses between each pair of items. In the example above, A, B and C are strongly connected. For example, C directly loses against B and, although C never directly beats B, it does beat A, and A in turn beats B, so C indirectly beats B. Similar paths of wins and losses can be found for all pairs of A, B and C. On the other hand, D is only observed to lose, so the MLE of its log-worth would be \(-\infty\), with infinite standard error.
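Since we already have the network as an igraph object, strong connectivity can also be checked directly with igraph's is_connected() (a cross-check, separate from the PlackettLuce tools):

```r
# Strong connectivity requires a directed path in both directions
# between every pair of vertices; D breaks this, as it never wins
is_connected(net, mode = "strong")
```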

If one item always wins, the MLE of its log-worth would be \(+\infty\) with infinite standard error. And if there are clusters of items that are strongly connected with each other, but disconnected from other clusters, or connected to them only by wins or only by losses (weakly connected), then the maximum likelihood estimates are undefined, because either there is no information on the relative worth of the clusters or one cluster is infinitely worse than the other.

The connectivity of the network can be checked with the connectivity function from PlackettLuce:

connectivity(A)
## $membership
## A B C D 
## 1 1 1 2 
## 
## $csize
## [1] 3 1
## 
## $no
## [1] 2

If the network is not strongly connected, information on the clusters within the network is returned. In this case, a model could be estimated by excluding item D:
R2 <- R[, -4]
R2
## [1] "A > B" "C > A" NA      "B > A" "B > C"
mod <- PlackettLuce(R2, npseudo = 0)
summary(mod)
## Call: PlackettLuce(rankings = R2, npseudo = 0)
## 
## Coefficients:
##   Estimate Std. Error z value Pr(>|z|)
## A   0.0000         NA      NA       NA
## B   0.8392     1.3596   0.617    0.537
## C   0.4196     1.5973   0.263    0.793
## 
## Residual deviance:  5.1356 on 2 degrees of freedom
## AIC:  9.1356 
## Number of iterations: 3

Note that since R is a rankings object, the rankings are automatically updated when items are dropped; in this case, the paired comparison with item D is set to NA.

By default, however, PlackettLuce provides a way to handle disconnected or weakly connected networks, through the addition of pseudo-rankings. This works by adding a win and a loss between each item and a hypothetical or ghost item with fixed worth. This makes the network strongly connected, so all the worth parameters are estimable. It also has an interpretation as a Bayesian prior, in particular an exchangeable prior in which all items have equal worth.

The npseudo argument defines the number of wins and losses with the ghost item that are added for each real item. Setting npseudo = 0 means that no pseudo-rankings are added, so PlackettLuce will return the standard MLE if the network is strongly connected and throw an error otherwise. The larger npseudo is, the stronger the influence of the prior. By default, npseudo is set to 0.5, so each pseudo-ranking is weighted by 0.5. This is enough to connect the network, but is a weak prior. In this toy example, the item parameters change quite considerably:
mod2 <- PlackettLuce(R)
coef(mod2)
##          A          B          C          D 
##  0.0000000  0.5184185  0.1354707 -1.1537565

This is because there are only 5 rankings, so there is not much information in the data. In more realistic examples, the default prior will have a weak shrinkage effect, shrinking the items’ worth parameters towards \(1/N\), where \(N\) is the number of items.
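To see the role of the prior weight, we can (as an illustrative sketch on this toy data) refit with a heavier weight on the pseudo-rankings; a stronger prior pulls the estimates further towards equal worth:

```r
# Heavier prior: each pseudo-ranking weighted by 2 rather than the
# default 0.5, shrinking the log-worth estimates towards zero
coef(PlackettLuce(R, npseudo = 2))
```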

For a practical example, we consider the NASCAR data from Hunter (2004). This collects the results of the 36 races in the 2002 NASCAR season in the United States. Each race involves 43 drivers out of a total of 87. The nascar data provided by PlackettLuce records the results as an ordering of the drivers in each race:
data(nascar)
nascar[1:2, ]
##      rank1 rank2 rank3 rank4 rank5 rank6 rank7 rank8 rank9 rank10 rank11 rank12
## [1,]    83    18    20    48    53    51    67    72    32     42      2     31
## [2,]    52    72     4    82    60    31    32    66     3     44      2     48
##      rank13 rank14 rank15 rank16 rank17 rank18 rank19 rank20 rank21 rank22
## [1,]     62     13     37      6     60     66     33     77     56     63
## [2,]     83     67     41     77     33     61     45     38     51     14
##      rank23 rank24 rank25 rank26 rank27 rank28 rank29 rank30 rank31 rank32
## [1,]     55     70     14     43     71     35     12     44     79      3
## [2,]     42     62     35     12     25     37     34      6     18     79
##      rank33 rank34 rank35 rank36 rank37 rank38 rank39 rank40 rank41 rank42
## [1,]     52      4      9     45     41     61     34     39     49     15
## [2,]     39     59     43     55     49     56      9     53      7     13
##      rank43
## [1,]     82
## [2,]     71

For example, in the first race, driver 83 came first, followed by driver 18, and so on. The names corresponding to the driver IDs are available as an attribute of nascar; we can provide these names when converting the orderings to rankings via the items argument:
R <- as.rankings(nascar, input = "orderings", items = attr(nascar, "drivers"))
R[1:2]
## [1] "Ward Burton > Elliott Sadler > Geoff ..."
## [2] "Matt Kenseth > Sterling Marlin > Bob ..."

Maximum likelihood estimation cannot be used in this example, because four drivers placed last in every race they entered. Hunter (2004) therefore dropped these four drivers to fit the Plackett-Luce model, which we can reproduce as follows:
keep <- seq_len(83)
R2 <- R[, keep]
mod <- PlackettLuce(R2, npseudo = 0)

In order to demonstrate the correspondence with the results from Hunter (2004), we order the item parameters by the drivers’ average rank:
avRank <- apply(R, 2, function(x) mean(x[x > 0]))
coefs <- round(coef(mod)[order(avRank[keep])], 2)
head(coefs, 3)

##     PJ Jones Scott Pruett  Mark Martin 
##         4.15         3.62         2.08

tail(coefs, 3)

##  Dave Marcis Dick Trickle    Joe Varde 
##         0.03        -0.31        -0.15

Now we fit the Plackett-Luce model to the full data, using the default pseudo-rankings method.

mod2 <- PlackettLuce(R)

For items that were in the previous model, we see that the log-worth parameters generally shrink towards zero:

coefs2 <- round(coef(mod2), 2)
coefs2[names(coefs)[1:3]]

##     PJ Jones Scott Pruett  Mark Martin 
##         3.20         2.77         1.91

coefs2[names(coefs)[81:83]]

##  Dave Marcis Dick Trickle    Joe Varde 
##         0.02        -0.38        -0.12

The new items have relatively large negative log-worth:

coefs2[84:87]

## Andy Hillenburg  Gary Bradberry  Jason Hedlesky   Randy Renfrow 
##           -2.17           -1.74           -1.59           -1.77

Nonetheless, the estimates are finite and have finite standard errors:

coef(summary(mod2))[84:87, ]

##                  Estimate Std. Error    z value  Pr(>|z|)
## Andy Hillenburg -2.171065   1.812994 -1.1975028 0.2311106
## Gary Bradberry  -1.744754   1.855365 -0.9403828 0.3470212
## Jason Hedlesky  -1.590764   1.881708 -0.8453828 0.3978972
## Randy Renfrow   -1.768629   1.904871 -0.9284767 0.3531604

Note that the reference here is simply the driver that comes first alphabetically: A. Cameron. We can plot the quasi-variances for a better comparison:

qv <- qvcalc(mod2)
qv$qvframe <- qv$qvframe[order(coef(mod2)), ]
plot(qv, xlab = NULL, ylab = "Ability (log)", main = NULL,
     xaxt = "n", xlim = c(3, 85))
axis(1, at = seq_len(87), labels = rownames(qv$qvframe), las = 2, cex.axis = 0.6)
Figure 2.3: Ability of drivers based on NASCAR 2002 season. Intervals based on quasi-standard errors.

As in the previous example, we can use summary(qv) to see a report on the accuracy of the quasi-variance approximation. In this example the approximation error, across the standard errors of all 3741 possible simple contrasts (contrasts between pairs of the 87 driver-specific parameters), ranges between -0.7% and +6.7%. This is still remarkably accurate, and means that the plot of comparison intervals is a good visual guide to the uncertainty about drivers’ relative abilities. The results of summary(qv) are not shown here, as they would occupy too much space.
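The count of 3741 simple contrasts is just the number of ways to choose a pair from the 87 drivers:

```r
choose(87, 2)  # number of pairwise contrasts among 87 drivers
## [1] 3741
```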

Although the pseudo-rankings are only necessary when the network is incomplete, the default behaviour is to use them always (with a weight of 0.5), because the small shrinkage effect that the pseudo-data delivers typically reduces both the bias and the variance of the estimators of the worth parameters.

3 Plackett-Luce Trees

A Plackett-Luce model that assumes the same worth parameters across all rankings may sometimes be an over-simplification. For example, if rankings are made by different judges, the worth parameters may vary between judges with different characteristics. Model-based partitioning provides an automatic way to determine subgroups of rankings with significantly different sets of worth parameters, based on ranking-specific covariates. A Plackett-Luce tree is constructed via the following steps:

  1. Fit a Plackett-Luce model to the full data.
  2. Assess the stability of the worth parameters with respect to each available covariate.
  3. If there is significant instability, split the full data by the covariate with the strongest instability and use the cut-point with the highest improvement in model fit.
  4. Repeat steps 1-3 until there are no more significant instabilities, or a split produces a sub-group below a given size threshold.


This is an extension of Bradley-Terry trees, implemented in the R package -psychotree and described in more detail by Strobl, Wickelmaier, and Zeileis (2011).

-

To illustrate this approach, we consider data from a trial of different varieties of bean in Nicaragua, run by Bioversity International (Van Etten et al. 2016). Farmers were asked to grow three experimental varieties of bean in one of the growing seasons: Primera (May - August), Postrera (September - October) or Apante (November - January). At the end of the season, they were asked which variety they thought was best and which they thought was worst, giving a ranking of the three varieties. In addition, they were asked to compare each trial variety to the standard local variety and say whether it was better or worse.

The data are provided as the dataset beans in PlackettLuce. The data require some preparation to collate the rankings, which is detailed in Appendix 5.2. The same code is provided in the examples section of the help file of beans:

example("beans", package = "PlackettLuce")

The result is a rankings object R with all rankings of the three experimental varieties and the outcome of their comparison with the local variety.

In order to fit a Plackett-Luce tree, we need to create a "grouped_rankings" object that defines how the rankings map to the covariate values. In this case we wish to group by each record in the original data set, so we group by an index that identifies the record number for each of the four rankings from each farmer (one ranking of order three, plus three pairwise rankings with the local variety):
n <- nrow(beans)
G <- group(R, index = rep(seq_len(n), 4))
format(head(G, 2), width = 50)

##                                                                   1 
##  "PM2 Don Rey > SJC 730-79 > BRT 103-182, Local > BRT 103-182, ..." 
##                                                                   2 
## "INTA Centro Sur > INTA Sequia > INTA Rojo, Local > INTA Rojo, ..."

For each record in the original data, we have three covariates: season, the season-year the beans were planted; year, the year of planting; and maxTN, the maximum night-time temperature during the vegetative cycle. The following code fits a Plackett-Luce tree with up to three nodes and at least 5% of the records in each node:
beans$year <- factor(beans$year)
tree <- pltree(G ~ ., data = beans[c("season", "year", "maxTN")],
               minsize = 0.05*n, maxdepth = 3)
tree

The algorithm identifies three nodes, with the first split defined by high night-time temperatures, and the second splitting the single Primera season from the others. So for early planting in regions where the night-time temperatures were not too high, INTA Rojo (7) was most preferred, closely followed by the local variety. During the regular growing seasons (Postrera and Apante) in regions where the night-time temperatures were not too high, the local variety was most preferred, closely followed by INTA Sequia (8). Finally, in regions where the maximum night-time temperature was high, INTA Sequia (8) was most preferred, closely followed by BRT 103-182 (2) and INTA Centro Sur (3). A plot method is provided to visualise the tree:

plot(tree, names = FALSE, abbreviate = 2)
+

This is an extension of Bradley-Terry trees, implemented in the R package +psychotree and described in more detail by Strobl, Wickelmaier, and Zeileis (2011).


To illustrate this approach, we consider data from a trial of different varieties of bean in Nicaragua, run by Bioversity International (Van Etten et al. 2016). Farmers were asked to grow three experimental varieties of bean in one of the growing seasons, Primera (May - August), Postrera (September - October) or Apante (November - January). At the end of the season, they were asked which variety they thought was best and which they thought was worst, to give a ranking of the three varieties. In addition, they were asked to compare each trial variety to the standard local variety and say whether it was better or worse.


The data are provided as the dataset beans in PlackettLuce. The data require some preparation to collate the rankings, which is detailed in Appendix 5.2. The same code is provided in the examples section of the help file for beans:

example("beans", package = "PlackettLuce")

The result is a rankings object R with all rankings of the three experimental varieties and the outcomes of their comparisons with the local variety.


In order to fit a Plackett-Luce tree, we need to create a "grouped_rankings" object, which defines how the rankings map to the covariate values. In this case we wish to group by each record in the original data set, so we group by an index that identifies the record number for each of the four rankings from each farmer (one ranking of order three plus three pairwise rankings with the local variety):

n <- nrow(beans)
G <- group(R, index = rep(seq_len(n), 4))
format(head(G, 2), width = 50)
##                                                                   1 
##  "PM2 Don Rey > SJC 730-79 > BRT 103-182, Local > BRT 103-182, ..." 
##                                                                   2 
## "INTA Centro Sur > INTA Sequia > INTA Rojo, Local > INTA Rojo, ..."
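To see why rep(seq_len(n), 4) is the right index, note that the rankings are stacked in four blocks of n (the order-three rankings, then the three sets of pairwise comparisons), so ranking j within each block belongs to record j. A minimal base-R sketch with a hypothetical three-record data set:

```r
# hypothetical mini example: n_toy = 3 records, four rankings per record
n_toy <- 3
idx <- rep(seq_len(n_toy), 4)
idx
# within each of the four stacked blocks, ranking j belongs to record j,
# so the rankings for record 1 sit at one position per block:
which(idx == 1)
```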

For each record in the original data, we have three covariates: season, the season-year the beans were planted; year, the year of planting; and maxTN, the maximum temperature at night during the vegetative cycle. The following code fits a Plackett-Luce tree with up to three nodes and at least 5% of the records in each node:

beans$year <- factor(beans$year)
tree <- pltree(G ~ ., data = beans[c("season", "year", "maxTN")],
               minsize = 0.05*n, maxdepth = 3)
tree

The algorithm identifies three nodes, with the first split defined by high night-time temperatures and the second splitting the single Primera season from the others. So for early planting in regions where the night-time temperatures were not too high, INTA Rojo (7) was most preferred, closely followed by the local variety. During the regular growing seasons (Postrera and Apante) in regions where the night-time temperatures were not too high, the local variety was most preferred, closely followed by INTA Sequia (8). Finally, in regions where the maximum night-time temperature was high, INTA Sequia (8) was most preferred, closely followed by BRT 103-182 (2) and INTA Centro Sur (3). A plot method is provided to visualise the tree:

plot(tree, names = FALSE, abbreviate = 2)
Figure 3.1: Worth parameters for the ten trial varieties and the local variety for each node in the Plackett-Luce tree. Varieties are 1: ALS 0532-6, 2: BRT 103-182, 3: INTA Centro Sur, 4: INTA Ferroso, 5: INTA Matagalpa, 6: INTA Precoz, 7: INTA Rojo, 8: INTA Sequia, 9: Local, 10: PM2 Don Rey, 11: SJC 730-79.


4 Discussion

PlackettLuce is a feature-rich package for the handling of ranking data. The package provides methods for importing, handling and visualising partial ranking data, and for estimation and inference with generalizations of the Plackett-Luce model that can handle partial rankings of items and ties of arbitrary order. Disconnected item networks are handled by appropriately augmenting the data with pseudo-rankings for a hypothetical item. The package also allows for the construction of generalized Plackett-Luce trees to account for heterogeneity in the item worth parameters due to ranking-specific covariates.


Current work involves support for online estimation from streams of partial rankings and formally accounting for spatio-temporal heterogeneity in worth parameters.


5 Appendix

5.1 Timings

Data for the package comparison in Table 1.2 were downloaded from PrefLib (Mattei and Walsh 2013) using the read.soc function provided in PlackettLuce to read in files with the “Strict Orders - Complete List” format.

library(PlackettLuce)
# read in example data sets
preflib <- "https://www.preflib.org/data/election/"
netflix <- read.soc(file.path(preflib, "netflix/ED-00004-00000101.soc"))
tshirt <- read.soc(file.path(preflib, "shirt/ED-00012-00000001.soc"))
sushi <- read.soc(file.path(preflib, "sushi/ED-00014-00000001.soc"))
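The essence of this format is a plain text file: a count of items, one line per item, a summary line, then one line per distinct ordering prefixed by its frequency. A hypothetical base-R sketch of parsing such a layout (the exact layout here is an assumption based on the legacy PrefLib format; real files should be read with read.soc):

```r
# A toy "Strict Orders - Complete List" file, inlined as lines of text.
# The layout is an assumption for illustration, not the exact PrefLib spec.
soc <- c("3",                  # number of items
         "1,Apple",            # item id, item name
         "2,Banana",
         "3,Cherry",
         "5,5,2",              # voters, sum of counts, distinct orders
         "3,1,2,3",            # 3 voters ranked item 1 > 2 > 3
         "2,3,2,1")            # 2 voters ranked item 3 > 2 > 1
n_items <- as.integer(soc[1])
body <- soc[-seq_len(n_items + 2)]   # drop the header lines
parts <- strsplit(body, ",")
Freq <- vapply(parts, function(x) as.integer(x[1]), integer(1))
orders <- t(vapply(parts, function(x) as.integer(x[-1]), integer(n_items)))
cbind(Freq, orders)   # one row per distinct ordering, with its count
```

The Freq column here plays the same role as dat$Freq in the wrappers below, weighting each distinct ordering by how many voters reported it.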

A wrapper was defined for each function in the comparison, to prepare the rankings and run each function with reasonable defaults. The Plackett-Luce model was fitted to aggregated rankings where possible (for PlackettLuce, hyper2, and pmr). Arguments were set to obtain the maximum likelihood estimate, with the default convergence criteria. The default iterative scaling algorithm was used for PlackettLuce.

pl <- function(dat, ...){
    # convert ordered items to ranking
    R <- as.rankings(dat[,-1], "ordering")
    # fit without adding pseudo-rankings, weight rankings by count
    PlackettLuce(R, npseudo = 0, weights = dat$Freq)
}
hyper2 <- function(dat, ...){
    requireNamespace("hyper2")
    # create likelihood object based on ordered items and counts
    H <- hyper2::hyper2(pnames = paste0("p", seq_len(ncol(dat) - 1)))
    for (i in seq_len(nrow(dat))){
        x <- dat[i, -1][dat[i, -1] > 0]
        H <- H + hyper2::order_likelihood(x, times = dat[i, 1])
    }
    # find parameters to maximise likelihood
    p <- hyper2::maxp(H)
    structure(p, loglik = hyper2::loglik(H, p[-length(p)]))
}
plmix <- function(dat, ...){
    requireNamespace("PLMIX")
    # disaggregate data (no functionality for weights or counts)
    r <- rep(seq_len(nrow(dat)), dat$Freq)
    # maximum a posteriori estimate, with non-informative prior
    # K items in each ranking, single component distribution
    # default starting values do not always work so specify as uniform
    K <- ncol(dat) - 1
    PLMIX::mapPLMIX(as.matrix(dat[r, -1]), K = K, G = 1,
                    init = list(p = rep.int(1/K, K)), plot_objective = FALSE)
}
pmr <- function(dat, ...){
    requireNamespace("pmr")
    # convert ordered items to ranking
    R <- as.rankings(dat[,-1], "ordering")
    # create data frame with counts as required by pl
    X <- as.data.frame(unclass(R))
    X$Freq <- dat$Freq
    capture.output(res <- pmr::pl(X))
    res
}
statrank <- function(dat, iter){
    requireNamespace("StatRank")
    # disaggregate data (no functionality for weights or counts)
    r <- rep(seq_len(nrow(dat)), dat$Freq)
    capture.output(res <- StatRank::Estimation.PL.MLE(as.matrix(dat[r, -1]),
                                                      iter = iter))
    res
}

When recording timings, the number of iterations for StatRank was set so that the log-likelihood on exit was equal to the log-likelihood returned by the other functions, to a relative tolerance of 1e-6.

timings <- function(dat, iter = NULL,
                    fun = c("pl", "hyper2", "plmix", "pmr", "statrank")){
    res <- list()
    for (nm in c("pl", "hyper2", "plmix", "pmr", "statrank")){
        if (nm %in% fun){
            res[[nm]] <- suppressWarnings(
                system.time(do.call(nm, list(dat, iter)))[["elapsed"]])
        } else res[[nm]] <- NA
    }
    res
}
netflix_timings <- timings(netflix, 6)
tshirt_timings <- timings(tshirt, 341,
                          fun = c("pl", "hyper2", "plmix", "statrank"))
sushi_timings <- timings(sushi, 5,
                         fun = c("pl", "hyper2", "plmix", "statrank"))

5.2 beans data preparation

First we handle the best and worst rankings. These give the variety the farmer thought was best or worst, coded as A, B or C for the first, second or third variety assigned to the farmer, respectively.

data(beans)
head(beans[c("best", "worst")], 2)
##   best worst
## 1    C     A
## 2    B     A

We fill in the missing item using the complete function from PlackettLuce:

beans$middle <- complete(beans[c("best", "worst")],
                         items = c("A", "B", "C"))
head(beans[c("best", "middle", "worst")], 2)
##   best middle worst
## 1    C      B     A
## 2    B      C     A
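The operation complete performs here can be mimicked in base R: the middle item is whatever code is left after removing the best and worst. A sketch using only base R (not the package function), with hypothetical codes matching the output above:

```r
# best and worst codes for two farmers (values from the output above)
b <- c("C", "B")   # best
w <- c("A", "A")   # worst
# the middle item is the one code not used by best or worst
m <- mapply(function(best, worst) setdiff(c("A", "B", "C"), c(best, worst)),
            b, w)
unname(m)
```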

This gives an ordering of the three varieties the farmer was given. The names of these varieties are stored in separate columns:

head(beans[c("variety_a", "variety_b", "variety_c")], 2)
##     variety_a       variety_b   variety_c
## 1 BRT 103-182      SJC 730-79 PM2 Don Rey
## 2   INTA Rojo INTA Centro Sur INTA Sequia

We can use the decode function from PlackettLuce to decode the orderings, replacing the coded values with the actual varieties:

order3 <- decode(beans[c("best", "middle", "worst")],
                 items = beans[c("variety_a", "variety_b", "variety_c")],
                 code = c("A", "B", "C"))
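The decoding is essentially a named-vector lookup done row by row: the letter codes index into that row's variety names. A base-R sketch for the first record (values taken from the output above; this illustrates the idea, not the package internals):

```r
# varieties assigned to the first farmer, named by their code
vars <- c(A = "BRT 103-182", B = "SJC 730-79", C = "PM2 Don Rey")
# best, middle, worst codes for that farmer
codes <- c("C", "B", "A")
# indexing by name decodes the letters into variety names
vars[codes]
```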

The pairwise comparisons with the local variety are stored in another set of columns:

head(beans[c("var_a", "var_b", "var_c")], 2)
##   var_a  var_b  var_c
## 1 Worse  Worse Better
## 2 Worse Better Better

To convert these data to orderings, we first create vectors of the trial variety and the outcome in each paired comparison:

trial_variety <- unlist(beans[c("variety_a", "variety_b", "variety_c")])
outcome <- unlist(beans[c("var_a", "var_b", "var_c")])

We can then derive the winner and loser in each comparison:

order2 <- data.frame(Winner = ifelse(outcome == "Worse",
                                     "Local", trial_variety),
                     Loser = ifelse(outcome == "Worse",
                                    trial_variety, "Local"),
                     stringsAsFactors = FALSE, row.names = NULL)
head(order2, 2)
##   Winner       Loser
## 1  Local BRT 103-182
## 2  Local   INTA Rojo

Finally, we convert each set of orderings to rankings and combine them:

R <- rbind(as.rankings(order3, input = "ordering"),
           as.rankings(order2, input = "ordering"))
head(R)
## [1] "PM2 Don Rey > SJC 730-79 > BRT 103-182"  
## [2] "INTA Centro Sur > INTA Sequia > INTA ..."
## [3] "INTA Ferroso > INTA Matagalpa > BRT  ..."
## [4] "INTA Rojo > INTA Centro Sur > ALS 0532-6"
## [5] "PM2 Don Rey > INTA Sequia > SJC 730-79"  
## [6] "ALS 0532-6 > INTA Matagalpa > INTA Rojo"
tail(R)
## [1] "INTA Sequia > Local"    "INTA Sequia > Local"    "BRT 103-182 > Local"   
## [4] "Local > INTA Matagalpa" "Local > INTA Rojo"      "Local > SJC 730-79"

References


Baker, Stuart G. 1994. “The multinomial-Poisson transformation.” Journal of the Royal Statistical Society. Series D (The Statistician) 43 (4): 495–504. https://doi.org/10.2307/2348134.


Bennett, J., and S. Lanning. 2007. “The Netflix Prize.” In Proceedings of the KDD Cup Workshop 2007, 3–6. ACM.


Davidson, Roger R. 1970. “On extending the Bradley-Terry model to accommodate ties in paired comparison experiments.” Journal of the American Statistical Association 65 (329): 317–28. https://doi.org/10.1080/01621459.1970.10481082.


Hunter, David R. 2004. “MM algorithms for generalized Bradley-Terry models.” Annals of Statistics 32 (1): 384–406. https://doi.org/10.1214/aos/1079120141.


Luce, R. Duncan. 1959. Individual Choice Behavior: A Theoretical Analysis. New York: Wiley.


———. 1977. “The choice axiom after twenty years.” Journal of Mathematical Psychology 15 (3): 215–33. https://doi.org/10.1016/0022-2496(77)90032-3.


Mattei, Nicholas, and Toby Walsh. 2013. “PrefLib: A Library of Preference Data.” In Proceedings of the 3rd International Conference on Algorithmic Decision Theory (ADT 2013). Lecture Notes in Artificial Intelligence. Springer. http://preflib.org.


Plackett, Robert L. 1975. “The Analysis of Permutations.” Applied Statistics 24 (2): 193–202. https://doi.org/10.2307/2346567.


Strobl, Carolin, Florian Wickelmaier, and Achim Zeileis. 2011. “Accounting for Individual Differences in Bradley-Terry Models by Means of Recursive Partitioning.” Journal of Educational and Behavioral Statistics 36 (2): 135–53. Los Angeles, CA: SAGE Publications. https://doi.org/10.3102/1076998609359791.


Van Etten, Jacob, Eskender Beza, Lluís Calderer, Kees van Duijvendijk, Carlo Fadda, Basazen Fantahun, Yosef Gebrehawaryat Kidane, et al. 2016. “First experiences with a novel farmer citizen science approach: crowdsourcing participatory variety selection through on-farm triadic comparisons of technologies (tricot).” Experimental Agriculture, 1–22. https://doi.org/10.1017/S0014479716000739.
