From 7b26369279b58a2e8aa81a33426266f6d245dbd8 Mon Sep 17 00:00:00 2001
From: strengejacke
Date: Thu, 9 Jan 2025 12:25:33 +0000
Subject: [PATCH] Deploying to gh-pages from @ easystats/performance@5ea050b5fe252a870aa12e8b77ec7ac2335b185e 🚀
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
 CONTRIBUTING.html                             |  2 +-
 articles/check_outliers.html                  | 36 +++++++--------
 articles/simulate_residuals.html              |  2 +-
 ...aGLdTylUAMQXC89YmC2DPNWubEbVmQiArmlw.woff2 | Bin 0 -> 11840 bytes
 ...n66aGLdTylUAMQXC89YmC2DPNWubEbVmUiAo.woff2 | Bin 0 -> 20612 bytes
 ...aGLdTylUAMQXC89YmC2DPNWubEbVmXiArmlw.woff2 | Bin 0 -> 9644 bytes
 ...aGLdTylUAMQXC89YmC2DPNWubEbVmYiArmlw.woff2 | Bin 0 -> 3676 bytes
 ...aGLdTylUAMQXC89YmC2DPNWubEbVmZiArmlw.woff2 | Bin 0 -> 16848 bytes
 ...aGLdTylUAMQXC89YmC2DPNWubEbVmaiArmlw.woff2 | Bin 0 -> 13740 bytes
 ...aGLdTylUAMQXC89YmC2DPNWubEbVmbiArmlw.woff2 | Bin 0 -> 7856 bytes
 ...aGLdTylUAMQXC89YmC2DPNWubEbVn6iArmlw.woff2 | Bin 0 -> 10576 bytes
 ...aGLdTylUAMQXC89YmC2DPNWubEbVnoiArmlw.woff2 | Bin 0 -> 19660 bytes
 .../KFOmCnqEu92Fr1Mu4WxKOzY.woff2             | Bin 7096 -> 0 bytes
 deps/Roboto-0.4.9/KFOmCnqEu92Fr1Mu4mxK.woff2  | Bin 18536 -> 0 bytes
 .../KFOmCnqEu92Fr1Mu5mxKOzY.woff2             | Bin 9852 -> 0 bytes
 .../KFOmCnqEu92Fr1Mu72xKOzY.woff2             | Bin 15336 -> 0 bytes
 .../KFOmCnqEu92Fr1Mu7GxKOzY.woff2             | Bin 12456 -> 0 bytes
 .../KFOmCnqEu92Fr1Mu7WxKOzY.woff2             | Bin 5796 -> 0 bytes
 .../KFOmCnqEu92Fr1Mu7mxKOzY.woff2             | Bin 1496 -> 0 bytes
 deps/Roboto-0.4.9/font.css                    | 41 +++++++++++++++---
 pkgdown.yml                                   |  4 +-
 reference/r2_bayes.html                       |  6 +--
 search.json                                   |  2 +-
 23 files changed, 60 insertions(+), 33 deletions(-)
 create mode 100644 deps/Roboto-0.4.9/KFOMCnqEu92Fr1ME7kSn66aGLdTylUAMQXC89YmC2DPNWubEbVmQiArmlw.woff2
 create mode 100644 deps/Roboto-0.4.9/KFOMCnqEu92Fr1ME7kSn66aGLdTylUAMQXC89YmC2DPNWubEbVmUiAo.woff2
 create mode 100644 deps/Roboto-0.4.9/KFOMCnqEu92Fr1ME7kSn66aGLdTylUAMQXC89YmC2DPNWubEbVmXiArmlw.woff2
 create mode 100644 deps/Roboto-0.4.9/KFOMCnqEu92Fr1ME7kSn66aGLdTylUAMQXC89YmC2DPNWubEbVmYiArmlw.woff2
 create mode 100644 deps/Roboto-0.4.9/KFOMCnqEu92Fr1ME7kSn66aGLdTylUAMQXC89YmC2DPNWubEbVmZiArmlw.woff2
 create mode 100644 deps/Roboto-0.4.9/KFOMCnqEu92Fr1ME7kSn66aGLdTylUAMQXC89YmC2DPNWubEbVmaiArmlw.woff2
 create mode 100644 deps/Roboto-0.4.9/KFOMCnqEu92Fr1ME7kSn66aGLdTylUAMQXC89YmC2DPNWubEbVmbiArmlw.woff2
 create mode 100644 deps/Roboto-0.4.9/KFOMCnqEu92Fr1ME7kSn66aGLdTylUAMQXC89YmC2DPNWubEbVn6iArmlw.woff2
 create mode 100644 deps/Roboto-0.4.9/KFOMCnqEu92Fr1ME7kSn66aGLdTylUAMQXC89YmC2DPNWubEbVnoiArmlw.woff2
 delete mode 100644 deps/Roboto-0.4.9/KFOmCnqEu92Fr1Mu4WxKOzY.woff2
 delete mode 100644 deps/Roboto-0.4.9/KFOmCnqEu92Fr1Mu4mxK.woff2
 delete mode 100644 deps/Roboto-0.4.9/KFOmCnqEu92Fr1Mu5mxKOzY.woff2
 delete mode 100644 deps/Roboto-0.4.9/KFOmCnqEu92Fr1Mu72xKOzY.woff2
 delete mode 100644 deps/Roboto-0.4.9/KFOmCnqEu92Fr1Mu7GxKOzY.woff2
 delete mode 100644 deps/Roboto-0.4.9/KFOmCnqEu92Fr1Mu7WxKOzY.woff2
 delete mode 100644 deps/Roboto-0.4.9/KFOmCnqEu92Fr1Mu7mxKOzY.woff2

diff --git a/CONTRIBUTING.html b/CONTRIBUTING.html
index d3846c756..8cdfc2b9f 100644
--- a/CONTRIBUTING.html
+++ b/CONTRIBUTING.html
@@ -68,7 +68,7 @@

Filing an issue
Pull requests

diff --git a/articles/check_outliers.html b/articles/check_outliers.html
index adef5faab..1d815ff2c 100644
--- a/articles/check_outliers.html
+++ b/articles/check_outliers.html
@@ -427,32 +427,32 @@

Table 1Summary of Statistical Outlier Detection Methods Recommendations

This hunk only rewrites table-cell markup (the paired "- - + +" attribute lines), so the removed and the added rows carry identical text. Deduplicated, the table content is:

Statistical Test | Diagnosis Method | Recommended Threshold | Function Usage
Supported regression model | Model-based: Cook (or Pareto for Bayesian models) | qf(0.5, ncol(x), nrow(x) - ncol(x)) (or 0.7 for Pareto) | check_outliers(model, method = "cook")
Structural Equation Modeling (or other unsupported model) | Multivariate: Minimum Covariance Determinant (MCD) | qchisq(p = 1 - 0.001, df = ncol(x)) | check_outliers(data, method = "mcd")
Simple test with few variables (t test, correlation, etc.) | Univariate: robust z scores (MAD) | qnorm(p = 1 - 0.001 / 2), ~ 3.29 | check_outliers(data, method = "zscore_robust")

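The thresholds in Table 1 are ordinary R expressions, so they can be paired directly with `check_outliers()` from the performance package. A minimal sketch of the three rows; the `lm()` model, the `mtcars` data, and the chosen variables are purely illustrative:

```r
library(performance)

# Row 1: model-based outliers for a supported regression model (Cook's
# distance); the default cutoff is qf(0.5, ncol(x), nrow(x) - ncol(x)).
model <- lm(mpg ~ wt + hp, data = mtcars)
check_outliers(model, method = "cook")

# Row 2: multivariate outliers on raw data via the Minimum Covariance
# Determinant, thresholded at qchisq(p = 1 - 0.001, df = ncol(x)).
data <- mtcars[, c("mpg", "wt", "hp")]
check_outliers(data, method = "mcd")

# Row 3: univariate robust z scores (MAD-based),
# flagged when |z| > qnorm(p = 1 - 0.001 / 2), i.e. about 3.29.
check_outliers(data, method = "zscore_robust")
```

Each call returns an object whose `print()` method lists the flagged rows, so the same data can be screened under all three rows of the table.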
diff --git a/articles/simulate_residuals.html b/articles/simulate_residuals.html
index eb81e3884..e5a9eccf2 100644
--- a/articles/simulate_residuals.html
+++ b/articles/simulate_residuals.html
@@ -199,7 +199,7 @@
 #> $ simulatedResponse : num [1:644, 1:250] 0 0 0 3 0 4 3 0 5 1 ...
 #> ..- attr(*, "dimnames")=List of 2
 #> $ scaledResiduals : num [1:644] 0.155 0.731 0.448 0.498 0.437 ...
-#> $ time : 'proc_time' Named num [1:5] 0.092 0.02 0.112 0 0
+#> $ time : 'proc_time' Named num [1:5] 0.094 0.021 0.113 0 0
 #> ..- attr(*, "names")= chr [1:5] "user.self" "sys.self" "elapsed" "user.child" ...
 #> $ randomState :List of 4
 #> - attr(*, "class")= chr [1:3] "performance_simres" "see_performance_simres" "DHARMa"
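The `str()` output in this hunk is a DHARMa simulation object wrapped by performance (note the `"performance_simres"`/`"DHARMa"` class vector); only the `$time` stamp differs between deploys because the simulation is re-run. A minimal sketch of producing such an object, assuming a recent performance with `simulate_residuals()` and the DHARMa package installed; the Poisson model on `InsectSprays` is purely illustrative:

```r
library(performance)

# Simulate scaled (quantile) residuals for a non-Gaussian model; the
# result carries the simulatedResponse matrix and scaledResiduals vector
# seen in the str() output above.
model <- glm(count ~ spray, data = InsectSprays, family = poisson())
simres <- simulate_residuals(model, iterations = 250)

# Uniformity test on the scaled residuals.
check_residuals(simres)
```

Because the residuals are simulated, wall-clock fields such as `$time` will differ on every run, which is exactly the one-line change this hunk records.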
diff --git a/deps/Roboto-0.4.9/KFOMCnqEu92Fr1ME7kSn66aGLdTylUAMQXC89YmC2DPNWubEbVmQiArmlw.woff2 b/deps/Roboto-0.4.9/KFOMCnqEu92Fr1ME7kSn66aGLdTylUAMQXC89YmC2DPNWubEbVmQiArmlw.woff2
new file mode 100644
index 0000000000000000000000000000000000000000..b0ed6d697b45af5d168e2095a7f9d42ab5037983
GIT binary patch
[literal 11840: base85-encoded woff2 font data omitted]

diff --git a/deps/Roboto-0.4.9/KFOMCnqEu92Fr1ME7kSn66aGLdTylUAMQXC89YmC2DPNWubEbVmXiArmlw.woff2 b/deps/Roboto-0.4.9/KFOMCnqEu92Fr1ME7kSn66aGLdTylUAMQXC89YmC2DPNWubEbVmXiArmlw.woff2
new file mode 100644
index 0000000000000000000000000000000000000000..de646f8d4fbfcc314053710c299fcc60716999b0
GIT binary patch
[literal 9644: base85-encoded woff2 font data omitted]

Examplesr2_bayes(model) #> # Bayesian R2 with Compatibility Interval #> -#> Conditional R2: 0.826 (95% CI [0.757, 0.855]) +#> Conditional R2: 0.826 (95% CI [0.759, 0.855]) model <- suppressWarnings(brms::brm( Petal.Length ~ Petal.Width + (1 | Species), @@ -207,8 +207,8 @@

Examplesr2_bayes(model) #> # Bayesian R2 with Compatibility Interval #> -#> Conditional R2: 0.955 (95% CI [0.951, 0.957]) -#> Marginal R2: 0.382 (95% CI [0.173, 0.597]) +#> Conditional R2: 0.954 (95% CI [0.951, 0.957]) +#> Marginal R2: 0.383 (95% CI [0.169, 0.615]) # } diff --git a/search.json b/search.json index 58119b660..866c8b52a 100644 --- a/search.json +++ b/search.json @@ -1 +1 @@ -[{"path":[]},{"path":"https://easystats.github.io/performance/CODE_OF_CONDUCT.html","id":"our-pledge","dir":"","previous_headings":"","what":"Our Pledge","title":"Contributor Covenant Code of Conduct","text":"members, contributors, leaders pledge make participation community harassment-free experience everyone, regardless age, body size, visible invisible disability, ethnicity, sex characteristics, gender identity expression, level experience, education, socio-economic status, nationality, personal appearance, race, caste, color, religion, sexual identity orientation. pledge act interact ways contribute open, welcoming, diverse, inclusive, healthy community.","code":""},{"path":"https://easystats.github.io/performance/CODE_OF_CONDUCT.html","id":"our-standards","dir":"","previous_headings":"","what":"Our Standards","title":"Contributor Covenant Code of Conduct","text":"Examples behavior contributes positive environment community include: Demonstrating empathy kindness toward people respectful differing opinions, viewpoints, experiences Giving gracefully accepting constructive feedback Accepting responsibility apologizing affected mistakes, learning experience Focusing best just us individuals, overall community Examples unacceptable behavior include: use sexualized language imagery, sexual attention advances kind Trolling, insulting derogatory comments, personal political attacks Public private harassment Publishing others’ private information, physical email address, without explicit permission conduct reasonably considered inappropriate professional 
setting","code":""},{"path":"https://easystats.github.io/performance/CODE_OF_CONDUCT.html","id":"enforcement-responsibilities","dir":"","previous_headings":"","what":"Enforcement Responsibilities","title":"Contributor Covenant Code of Conduct","text":"Community leaders responsible clarifying enforcing standards acceptable behavior take appropriate fair corrective action response behavior deem inappropriate, threatening, offensive, harmful. Community leaders right responsibility remove, edit, reject comments, commits, code, wiki edits, issues, contributions aligned Code Conduct, communicate reasons moderation decisions appropriate.","code":""},{"path":"https://easystats.github.io/performance/CODE_OF_CONDUCT.html","id":"scope","dir":"","previous_headings":"","what":"Scope","title":"Contributor Covenant Code of Conduct","text":"Code Conduct applies within community spaces, also applies individual officially representing community public spaces. Examples representing community include using official e-mail address, posting via official social media account, acting appointed representative online offline event.","code":""},{"path":"https://easystats.github.io/performance/CODE_OF_CONDUCT.html","id":"enforcement","dir":"","previous_headings":"","what":"Enforcement","title":"Contributor Covenant Code of Conduct","text":"Instances abusive, harassing, otherwise unacceptable behavior may reported community leaders responsible enforcement d.luedecke@uke.de. complaints reviewed investigated promptly fairly. 
community leaders obligated respect privacy security reporter incident.","code":""},{"path":"https://easystats.github.io/performance/CODE_OF_CONDUCT.html","id":"enforcement-guidelines","dir":"","previous_headings":"","what":"Enforcement Guidelines","title":"Contributor Covenant Code of Conduct","text":"Community leaders follow Community Impact Guidelines determining consequences action deem violation Code Conduct:","code":""},{"path":"https://easystats.github.io/performance/CODE_OF_CONDUCT.html","id":"id_1-correction","dir":"","previous_headings":"Enforcement Guidelines","what":"1. Correction","title":"Contributor Covenant Code of Conduct","text":"Community Impact: Use inappropriate language behavior deemed unprofessional unwelcome community. Consequence: private, written warning community leaders, providing clarity around nature violation explanation behavior inappropriate. public apology may requested.","code":""},{"path":"https://easystats.github.io/performance/CODE_OF_CONDUCT.html","id":"id_2-warning","dir":"","previous_headings":"Enforcement Guidelines","what":"2. Warning","title":"Contributor Covenant Code of Conduct","text":"Community Impact: violation single incident series actions. Consequence: warning consequences continued behavior. interaction people involved, including unsolicited interaction enforcing Code Conduct, specified period time. includes avoiding interactions community spaces well external channels like social media. Violating terms may lead temporary permanent ban.","code":""},{"path":"https://easystats.github.io/performance/CODE_OF_CONDUCT.html","id":"id_3-temporary-ban","dir":"","previous_headings":"Enforcement Guidelines","what":"3. Temporary Ban","title":"Contributor Covenant Code of Conduct","text":"Community Impact: serious violation community standards, including sustained inappropriate behavior. Consequence: temporary ban sort interaction public communication community specified period time. 
public private interaction people involved, including unsolicited interaction enforcing Code Conduct, allowed period. Violating terms may lead permanent ban.","code":""},{"path":"https://easystats.github.io/performance/CODE_OF_CONDUCT.html","id":"id_4-permanent-ban","dir":"","previous_headings":"Enforcement Guidelines","what":"4. Permanent Ban","title":"Contributor Covenant Code of Conduct","text":"Community Impact: Demonstrating pattern violation community standards, including sustained inappropriate behavior, harassment individual, aggression toward disparagement classes individuals. Consequence: permanent ban sort public interaction within community.","code":""},{"path":"https://easystats.github.io/performance/CODE_OF_CONDUCT.html","id":"attribution","dir":"","previous_headings":"","what":"Attribution","title":"Contributor Covenant Code of Conduct","text":"Code Conduct adapted Contributor Covenant, version 2.1, available https://www.contributor-covenant.org/version/2/1/code_of_conduct.html. Community Impact Guidelines inspired [Mozilla’s code conduct enforcement ladder][https://github.com/mozilla/inclusion]. answers common questions code conduct, see FAQ https://www.contributor-covenant.org/faq. Translations available https://www.contributor-covenant.org/translations.","code":""},{"path":"https://easystats.github.io/performance/CONTRIBUTING.html","id":null,"dir":"","previous_headings":"","what":"Contributing to performance","title":"Contributing to performance","text":"outlines propose change performance.","code":""},{"path":"https://easystats.github.io/performance/CONTRIBUTING.html","id":"fixing-typos","dir":"","previous_headings":"","what":"Fixing typos","title":"Contributing to performance","text":"Small typos grammatical errors documentation may edited directly using GitHub web interface, long changes made source file. want fix typos documentation, please edit related .R file R/ folder. 
edit .Rd file man/.","code":""},{"path":"https://easystats.github.io/performance/CONTRIBUTING.html","id":"filing-an-issue","dir":"","previous_headings":"","what":"Filing an issue","title":"Contributing to performance","text":"easiest way propose change new feature file issue. ’ve found bug, may also create associated issue. possible, try illustrate proposal bug minimal reproducible example.","code":""},{"path":"https://easystats.github.io/performance/CONTRIBUTING.html","id":"pull-requests","dir":"","previous_headings":"","what":"Pull requests","title":"Contributing to performance","text":"Please create Git branch pull request (PR). contributed code roughly follow R style guide, particular easystats convention code-style. performance uses roxygen2, Markdown syntax, documentation. performance uses testthat. Adding tests PR makes easier merge PR code base. PR user-visible change, may add bullet top NEWS.md describing changes made. may optionally add GitHub username, links relevant issue(s)/PR(s).","code":""},{"path":"https://easystats.github.io/performance/CONTRIBUTING.html","id":"code-of-conduct","dir":"","previous_headings":"","what":"Code of Conduct","title":"Contributing to performance","text":"Please note project released Contributor Code Conduct. participating project agree abide terms.","code":""},{"path":"https://easystats.github.io/performance/SUPPORT.html","id":null,"dir":"","previous_headings":"","what":"Getting help with {performance}","title":"Getting help with {performance}","text":"Thanks using performance. filing issue, places explore pieces put together make process smooth possible. Start making minimal reproducible example using reprex package. haven’t heard used reprex , ’re treat! Seriously, reprex make R-question-asking endeavors easier (pretty insane ROI five ten minutes ’ll take learn ’s ). additional reprex pointers, check Get help! resource used tidyverse team. Armed reprex, next step figure ask: ’s question: start StackOverflow. 
people answer questions. ’s bug: ’re right place, file issue. ’re sure: let’s discuss try figure ! problem bug feature request, can easily return report . opening new issue, sure search issues pull requests make sure bug hasn’t reported /already fixed development version. default, search pre-populated :issue :open. can edit qualifiers (e.g. :pr, :closed) needed. example, ’d simply remove :open search issues repo, open closed. Thanks help!","code":""},{"path":"https://easystats.github.io/performance/articles/check_model.html","id":"make-sure-your-model-inference-is-accurate","dir":"Articles","previous_headings":"","what":"Make sure your model inference is accurate!","title":"Checking model assumption - linear models","text":"Model diagnostics crucial, parameter estimation, p-values confidence interval depend correct model assumptions well data. model assumptions violated, estimates can statistically significant “even effect study null” (Gelman/Greenland 2019). several problems associated model diagnostics. Different types models require different checks. instance, normally distributed residuals assumed apply linear regression, appropriate assumption logistic regression. Furthermore, recommended carry visual inspections, .e. generate inspect called diagnostic plots model assumptions - formal statistical tests often strict warn violation model assumptions, although everything fine within certain tolerance range. diagnostic plots interpreted? violations detected, fix ? vignette introduces check_model() function performance package, shows use function different types models resulting diagnostic plots interpreted. Furthermore, recommendations given address possible violations model assumptions. 
plots seen can also generated dedicated functions, e.g.: Posterior predictive checks: check_predictions() Homogeneity variance: check_heteroskedasticity() Normality residuals: check_normality() Multicollinearity: check_collinearity() Influential observations: check_outliers() Binned residuals: binned_residuals() Check overdispersion: check_overdispersion()","code":""},{"path":"https://easystats.github.io/performance/articles/check_model.html","id":"linear-models-are-all-assumptions-for-linear-models-met","dir":"Articles","previous_headings":"","what":"Linear models: Are all assumptions for linear models met?","title":"Checking model assumption - linear models","text":"start simple example linear model. go details diagnostic plots, let’s first look summary table. nothing suspicious far. Now let’s start model diagnostics. use check_model() function, provides overview important appropriate diagnostic plots model investigation. Now let’s take closer look plot. , ask check_model() return single plot check, instead arranging grid. can using panel argument. 
returns list ggplot plots.","code":"data(iris) m1 <- lm(Sepal.Width ~ Species + Petal.Length + Petal.Width, data = iris) library(parameters) model_parameters(m1) #> Parameter | Coefficient | SE | 95% CI | t(145) | p #> ---------------------------------------------------------------------------- #> (Intercept) | 3.05 | 0.09 | [ 2.86, 3.23] | 32.52 | < .001 #> Species [versicolor] | -1.76 | 0.18 | [-2.12, -1.41] | -9.83 | < .001 #> Species [virginica] | -2.20 | 0.27 | [-2.72, -1.67] | -8.28 | < .001 #> Petal Length | 0.15 | 0.06 | [ 0.03, 0.28] | 2.38 | 0.018 #> Petal Width | 0.62 | 0.14 | [ 0.35, 0.89] | 4.57 | < .001 library(performance) check_model(m1) # return a list of single plots diagnostic_plots <- plot(check_model(m1, panel = FALSE))"},{"path":"https://easystats.github.io/performance/articles/check_model.html","id":"posterior-predictive-checks","dir":"Articles","previous_headings":"Linear models: Are all assumptions for linear models met?","what":"Posterior predictive checks","title":"Checking model assumption - linear models","text":"first plot based check_predictions(). Posterior predictive checks can used “look systematic discrepancies real simulated data” (Gelman et al. 2014, p. 169). helps see whether type model (distributional family) fits well data (Gelman Hill, 2007, p. 158). blue lines simulated data based model, model true distributional assumptions met. green line represents actual observed data response variable. plot looks good, thus assume violations model assumptions . Next, different example. use Poisson-distributed outcome linear model, expect deviation distributional assumption linear model. can see, green line plot deviates visibly blue lines. 
may indicate linear model appropriate, since capture distributional nature response variable properly.","code":"# posterior predicive checks diagnostic_plots[[1]] set.seed(99) d <- iris d$skewed <- rpois(150, 1) m2 <- lm(skewed ~ Species + Petal.Length + Petal.Width, data = d) out <- check_predictions(m2) plot(out)"},{"path":"https://easystats.github.io/performance/articles/check_model.html","id":"how-to-fix-this","dir":"Articles","previous_headings":"Linear models: Are all assumptions for linear models met? > Posterior predictive checks","what":"How to fix this?","title":"Checking model assumption - linear models","text":"best way, serious concerns model fit well data, use different type (family) regression models. example, obvious better use Poisson regression.","code":""},{"path":"https://easystats.github.io/performance/articles/check_model.html","id":"plots-for-discrete-outcomes","dir":"Articles","previous_headings":"Linear models: Are all assumptions for linear models met? > Posterior predictive checks","what":"Plots for discrete outcomes","title":"Checking model assumption - linear models","text":"discrete integer outcomes (like logistic Poisson regression), density plots always best choice, look somewhat “wiggly” around actual values dependent variables. case, use type argument plot() method change plot-style. 
Available options type = \"discrete_dots\" (dots observed replicated outcomes), type = \"discrete_interval\" (dots observed, error bars replicated outcomes) type = \"discrete_both\" (dots error bars).","code":"set.seed(99) d <- iris d$skewed <- rpois(150, 1) m3 <- glm( skewed ~ Species + Petal.Length + Petal.Width, family = poisson(), data = d ) out <- check_predictions(m3) plot(out, type = \"discrete_both\")"},{"path":"https://easystats.github.io/performance/articles/check_model.html","id":"linearity","dir":"Articles","previous_headings":"Linear models: Are all assumptions for linear models met?","what":"Linearity","title":"Checking model assumption - linear models","text":"plot helps check assumption linear relationship. shows whether predictors may non-linear relationship outcome, case reference line may roughly indicate relationship. straight horizontal line indicates model specification seems ok. Now different example, simulate data quadratic relationship one predictors outcome.","code":"# linearity diagnostic_plots[[2]] set.seed(1234) x <- rnorm(200) z <- rnorm(200) # quadratic relationship y <- 2 * x + x^2 + 4 * z + rnorm(200) d <- data.frame(x, y, z) m <- lm(y ~ x + z, data = d) out <- plot(check_model(m, panel = FALSE)) # linearity plot out[[2]]"},{"path":"https://easystats.github.io/performance/articles/check_model.html","id":"how-to-fix-this-1","dir":"Articles","previous_headings":"Linear models: Are all assumptions for linear models met? > Linearity","what":"How to fix this?","title":"Checking model assumption - linear models","text":"green reference line roughly flat horizontal, rather - like example - U-shaped, may indicate predictors probably better modeled quadratic term. Transforming response variable might another solution linearity assumptions met. caution needed interpreting plots. Although plots helpful check model assumptions, necessarily indicate -called “lack fit”, e.g. missed non-linear relationships interactions. 
Thus, always recommended also look effect plots, including partial residuals.","code":"# model quadratic term m <- lm(y ~ x + I(x^2) + z, data = d) out <- plot(check_model(m, panel = FALSE)) # linearity plot out[[2]]"},{"path":"https://easystats.github.io/performance/articles/check_model.html","id":"homogeneity-of-variance---detecting-heteroscedasticity","dir":"Articles","previous_headings":"Linear models: Are all assumptions for linear models met?","what":"Homogeneity of variance - detecting heteroscedasticity","title":"Checking model assumption - linear models","text":"plot helps check assumption equal (constant) variance, .e. homoscedasticity. meet assumption, variance residuals across different values predictors similar notably increase decrease. Hence, desired pattern dots spread equally roughly straight, horizontal line show apparent deviation. Usually, can easily inspected plotting residuals fitted values, possibly adding trend lines plot. horizontal parallel, everything ok. spread dot increases (decreases) across x-axis, model may suffer heteroscedasticity. example model, see model indeed violates assumption homoscedasticity. diagnostic plot used check_model() look different? check_model() plots square-root absolute values residuals. makes visual inspection slightly easier, one line needs judged. roughly flat horizontal green reference line indicates homoscedasticity. steeper slope line indicates model suffers heteroscedasticity.","code":"library(ggplot2) d <- data.frame( x = fitted(m1), y = residuals(m1), grp = as.factor(residuals(m1) >= 0) ) ggplot(d, aes(x, y, colour = grp)) + geom_point() + geom_smooth(method = \"lm\", se = FALSE) # homoscedasticiy - homogeneity of variance diagnostic_plots[[3]]"},{"path":"https://easystats.github.io/performance/articles/check_model.html","id":"how-to-fix-this-2","dir":"Articles","previous_headings":"Linear models: Are all assumptions for linear models met? 
> Homogeneity of variance - detecting heteroscedasticity","what":"How to fix this?","title":"Checking model assumption - linear models","text":"several ways address heteroscedasticity. Calculating heteroscedasticity-consistent standard errors accounts larger variation, better reflecting increased uncertainty. can easily done using parameters package, e.g. parameters::model_parameters(m1, vcov = \"HC3\"). detailed vignette robust standard errors can found . heteroscedasticity can modeled directly, e.g. using package glmmTMB dispersion formula, estimate dispersion parameter account heteroscedasticity (see Brooks et al. 2017). Transforming response variable, instance, taking log(), may also help avoid issues heteroscedasticity. Weighting observations another remedy heteroscedasticity, particular method weighted least squares.","code":""},{"path":"https://easystats.github.io/performance/articles/check_model.html","id":"influential-observations---outliers","dir":"Articles","previous_headings":"Linear models: Are all assumptions for linear models met?","what":"Influential observations - outliers","title":"Checking model assumption - linear models","text":"Outliers can defined particularly influential observations, plot helps detecting outliers. Cook’s distance (Cook 1977, Cook & Weisberg 1982) used define outliers, .e. point plot falls outside Cook’s distance (dashed lines) considered influential observation. example, everything looks well.","code":"# influential observations - outliers diagnostic_plots[[4]]"},{"path":"https://easystats.github.io/performance/articles/check_model.html","id":"how-to-fix-this-3","dir":"Articles","previous_headings":"Linear models: Are all assumptions for linear models met? > Influential observations - outliers","what":"How to fix this?","title":"Checking model assumption - linear models","text":"Dealing outliers straightforward, recommended automatically discard observation marked “outlier”. 
Rather, domain knowledge must involved decision whether keep omit influential observation. helpful heuristic distinguish error outliers, interesting outliers, random outliers (Leys et al. 2019). Error outliers likely due human error corrected data analysis. Interesting outliers due technical error may theoretical interest; might thus relevant investigate even though removed current analysis interest. Random outliers assumed due chance alone belong correct distribution , therefore, retained.","code":""},{"path":"https://easystats.github.io/performance/articles/check_model.html","id":"multicollinearity","dir":"Articles","previous_headings":"Linear models: Are all assumptions for linear models met?","what":"Multicollinearity","title":"Checking model assumption - linear models","text":"plot checks potential collinearity among predictors. nutshell multicollinearity means know effect one predictor, value knowing predictor rather low. Multicollinearity might arise third, unobserved variable causal effect two predictors associated outcome. cases, actual relationship matters association unobserved variable outcome. Multicollinearity confused raw strong correlation predictors. matters association one predictor variables, conditional variables model. multicollinearity problem, model seems suggest predictors question don’t seems reliably associated outcome (low estimates, high standard errors), although predictors actually strongly associated outcome, .e. indeed might strong effect (McElreath 2020, chapter 6.1). variance inflation factor (VIF) indicates magnitude multicollinearity model terms. thresholds low, moderate high collinearity VIF values less 5, 5 10 larger 10, respectively (James et al. 2013). Note thresholds, although commonly used, also criticized high. Zuur et al. (2010) suggest using lower values, e.g. VIF 3 larger may already longer considered “low”. 
model clearly suffers multicollinearity, predictors high VIF values.","code":"# multicollinearity diagnostic_plots[[5]]"},{"path":"https://easystats.github.io/performance/articles/check_model.html","id":"how-to-fix-this-4","dir":"Articles","previous_headings":"Linear models: Are all assumptions for linear models met? > Multicollinearity","what":"How to fix this?","title":"Checking model assumption - linear models","text":"Usually, predictors () high VIF values removed model fix multicollinearity. caution needed interaction terms. interaction terms included model, high VIF values expected. portion multicollinearity among component terms interaction also called “inessential ill-conditioning”, leads inflated VIF values typically seen models interaction terms (Francoeur 2013). cases, try centering involved interaction terms, can reduce multicollinearity (Kim Jung 2024), re-fit model without interaction terms check model collinearity among predictors.","code":""},{"path":"https://easystats.github.io/performance/articles/check_model.html","id":"normality-of-residuals","dir":"Articles","previous_headings":"Linear models: Are all assumptions for linear models met?","what":"Normality of residuals","title":"Checking model assumption - linear models","text":"linear regression, residuals normally distributed. can checked using -called Q-Q plots (quantile-quantile plot) compare shapes distributions. plot shows quantiles studentized residuals versus fitted values. Usually, dots fall along green reference line. deviation (mostly tails), indicates model doesn’t predict outcome well range shows larger deviations reference line. cases, inferential statistics like p-value coverage confidence intervals can inaccurate. example, see data points ok, except observations tails. Whether action needed fix can also depend results remaining diagnostic plots. 
plots indicate violation assumptions, deviation normality, particularly tails, can less critical.","code":"# normally distributed residuals diagnostic_plots[[6]]"},{"path":"https://easystats.github.io/performance/articles/check_model.html","id":"how-to-fix-this-5","dir":"Articles","previous_headings":"Linear models: Are all assumptions for linear models met? > Normality of residuals","what":"How to fix this?","title":"Checking model assumption - linear models","text":"remedies fix non-normality residuals, according Pek et al. 2018. large sample sizes, assumption normality can relaxed due central limit theorem - action needed. Calculating heteroscedasticity-consistent standard errors can help. See section Homogeneity variance details. Bootstrapping another alternative resolve issues non-normally residuals. , can easily done using parameters package, e.g. parameters::model_parameters(m1, bootstrap = TRUE) parameters::bootstrap_parameters().","code":""},{"path":"https://easystats.github.io/performance/articles/check_model.html","id":"references","dir":"Articles","previous_headings":"","what":"References","title":"Checking model assumption - linear models","text":"Brooks , Kristensen K, Benthem KJ van, Magnusson , Berg CW, Nielsen , et al. glmmTMB Balances Speed Flexibility Among Packages Zero-inflated Generalized Linear Mixed Modeling. R Journal. 2017;9: 378-400. Cook RD. Detection influential observation linear regression. Technometrics. 1977;19(1): 15-18. Cook RD Weisberg S. Residuals Influence Regression. London: Chapman Hall, 1982. Francoeur RB. Sequential Residual Centering Resolve Low Sensitivity Moderated Regression? Simulations Cancer Symptom Clusters. Open Journal Statistics. 2013:03(06), 24-44. Gelman , Carlin JB, Stern HS, Dunson DB, Vehtari , Rubin DB. Bayesian data analysis. (Third edition). CRC Press, 2014 Gelman , Greenland S. confidence intervals better termed “uncertainty intervals”? BMJ. 2019;l5381. doi:10.1136/bmj.l5381 Gelman , Hill J. 
Data analysis using regression multilevel/hierarchical models. Cambridge; New York. Cambridge University Press, 2007 James, G., Witten, D., Hastie, T., Tibshirani, R. (eds.).introduction statistical learning: applications R. New York: Springer, 2013 Kim, Y., & Jung, G. (2024). Understanding linear interaction analysis causal graphs. British Journal Mathematical Statistical Psychology, 00, 1–14. Leys C, Delacre M, Mora YL, Lakens D, Ley C. Classify, Detect, Manage Univariate Multivariate Outliers, Emphasis Pre-Registration. International Review Social Psychology, 2019 McElreath, R. Statistical rethinking: Bayesian course examples R Stan. 2nd edition. Chapman Hall/CRC, 2020 Pek J, Wong O, Wong ACM. Address Non-normality: Taxonomy Approaches, Reviewed, Illustrated. Front Psychol (2018) 9:2104. doi: 10.3389/fpsyg.2018.02104 Zuur AF, Ieno EN, Elphick CS. protocol data exploration avoid common statistical problems: Data exploration. Methods Ecology Evolution (2010) 1:3-14.","code":""},{"path":"https://easystats.github.io/performance/articles/check_model_practical.html","id":"fit-the-initial-model","dir":"Articles","previous_headings":"","what":"Fit the initial model","title":"How to arrive at the best model fit","text":"start generalized mixed effects model, using Poisson distribution. First, let us look summary model. see lot statistically significant estimates . matter, philosophy follow, conclusions draw statistical models inaccurate modeling assumptions poor fit situation. Hence, checking model fit essential. performance, can conduct comprehensive visual inspection model fit using check_model(). won’t go details plots , can find information created diagnostic plots dedicated vignette. now, want focus posterior predictive checks, dispersion zero-inflation well Q-Q plot (uniformity residuals). Note unlike plot(), base R function create diagnostic plots, check_model() relies simulated residuals Q-Q plot, accurate non-Gaussian models. 
See vignette documentation simulate_residuals() details. plot suggests may issues overdispersion /zero-inflation. can check problems using check_overdispersion() check_zeroinflation(), perform statistical tests (based simulated residuals). tests can additionally used beyond visual inspection. can see, model seems suffer overdispersion zero-inflation.","code":"library(performance) model1 <- glmmTMB::glmmTMB( count ~ mined + spp + (1 | site), family = poisson, data = glmmTMB::Salamanders ) library(parameters) model_parameters(model1) #> # Fixed Effects #> #> Parameter | Log-Mean | SE | 95% CI | z | p #> --------------------------------------------------------------- #> (Intercept) | -1.62 | 0.24 | [-2.10, -1.15] | -6.76 | < .001 #> mined [no] | 2.26 | 0.28 | [ 1.72, 2.81] | 8.08 | < .001 #> spp [PR] | -1.39 | 0.22 | [-1.81, -0.96] | -6.44 | < .001 #> spp [DM] | 0.23 | 0.13 | [-0.02, 0.48] | 1.79 | 0.074 #> spp [EC-A] | -0.77 | 0.17 | [-1.11, -0.43] | -4.50 | < .001 #> spp [EC-L] | 0.62 | 0.12 | [ 0.39, 0.86] | 5.21 | < .001 #> spp [DES-L] | 0.68 | 0.12 | [ 0.45, 0.91] | 5.75 | < .001 #> spp [DF] | 0.08 | 0.13 | [-0.18, 0.34] | 0.60 | 0.549 #> #> # Random Effects #> #> Parameter | Coefficient | 95% CI #> ------------------------------------------------- #> SD (Intercept: site) | 0.58 | [0.38, 0.87] #> #> Uncertainty intervals (equal-tailed) and p-values (two-tailed) computed #> using a Wald z-distribution approximation. #> #> The model has a log- or logit-link. Consider using `exponentiate = #> TRUE` to interpret coefficients as ratios. check_model(model1, size_dot = 1.2) #> `check_outliers()` does not yet support models of class `glmmTMB`. check_overdispersion(model1) #> # Overdispersion test #> #> dispersion ratio = 2.324 #> Pearson's Chi-Squared = 1475.875 #> p-value = < 0.001 #> Overdispersion detected. 
check_zeroinflation(model1) #> # Check for zero-inflation #> #> Observed zeros: 387 #> Predicted zeros: 311 #> Ratio: 0.80 #> Model is underfitting zeros (probable zero-inflation)."},{"path":"https://easystats.github.io/performance/articles/check_model_practical.html","id":"first-attempt-at-improving-the-model-fit","dir":"Articles","previous_headings":"","what":"First attempt at improving the model fit","title":"How to arrive at the best model fit","text":"can try improve model fit fitting model zero-inflation component: Looking plots, zero-inflation seems addressed properly (see especially posterior predictive checks uniformity residuals, Q-Q plot). However, overdispersion still present. can check problems using check_overdispersion() check_zeroinflation() . Indeed, overdispersion still present.","code":"model2 <- glmmTMB::glmmTMB( count ~ mined + spp + (1 | site), ziformula = ~ mined + spp, family = poisson, data = glmmTMB::Salamanders ) check_model(model2, size_dot = 1.2) #> `check_outliers()` does not yet support models of class `glmmTMB`. check_overdispersion(model2) #> # Overdispersion test #> #> dispersion ratio = 1.679 #> p-value = 0.008 #> Overdispersion detected. check_zeroinflation(model2) #> # Check for zero-inflation #> #> Observed zeros: 387 #> Predicted zeros: 387 #> Ratio: 1.00 #> Model seems ok, ratio of observed and predicted zeros is within the #> tolerance range (p > .999)."},{"path":"https://easystats.github.io/performance/articles/check_model_practical.html","id":"second-attempt-at-improving-the-model-fit","dir":"Articles","previous_headings":"","what":"Second attempt at improving the model fit","title":"How to arrive at the best model fit","text":"can try address issue fitting negative binomial model instead using Poisson distribution. Now see plot showing misspecified dispersion zero-inflation suggests overdispersion better addressed . 
Let us check :","code":"model3 <- glmmTMB::glmmTMB( count ~ mined + spp + (1 | site), ziformula = ~ mined + spp, family = glmmTMB::nbinom1, data = glmmTMB::Salamanders ) check_model(model3, size_dot = 1.2) #> `check_outliers()` does not yet support models of class `glmmTMB`. check_overdispersion(model3) #> # Overdispersion test #> #> dispersion ratio = 1.081 #> p-value = 0.54 #> No overdispersion detected. check_zeroinflation(model3) #> # Check for zero-inflation #> #> Observed zeros: 387 #> Predicted zeros: 389 #> Ratio: 1.00 #> Model seems ok, ratio of observed and predicted zeros is within the #> tolerance range (p > .999)."},{"path":"https://easystats.github.io/performance/articles/check_model_practical.html","id":"comparing-model-fit-indices","dir":"Articles","previous_headings":"","what":"Comparing model fit indices","title":"How to arrive at the best model fit","text":"different model fit indices can used compare models. purpose, rely Akaike Information Criterion (AIC), corrected Akaike Information Criterion (AICc), Bayesian Information Criterion (BIC), Proper Scoring Rules. can compare models using compare_performance() plot(). weighted AIC BIC range 0 1, indicating better model fit closer value 1. AICc corrected version AIC small sample sizes. Proper Scoring Rules range -Inf 0, higher values (.e. closer 0) indicating better model fit. 
results suggest indeed third model best fit.","code":"result <- compare_performance( model1, model2, model3, metrics = c(\"AIC\", \"AICc\", \"BIC\", \"SCORE\") ) result #> # Comparison of Model Performance Indices #> #> Name | Model | AIC (weights) | AICc (weights) | BIC (weights) | Score_log | Score_spherical #> ------------------------------------------------------------------------------------------------- #> model1 | glmmTMB | 1962.8 (<.001) | 1963.1 (<.001) | 2003.0 (<.001) | -1.457 | 0.032 #> model2 | glmmTMB | 1785.5 (<.001) | 1786.5 (<.001) | 1861.4 (<.001) | -1.328 | 0.032 #> model3 | glmmTMB | 1653.7 (>.999) | 1654.8 (>.999) | 1734.1 (>.999) | -1.275 | 0.032 plot(result)"},{"path":"https://easystats.github.io/performance/articles/check_model_practical.html","id":"statistical-tests-for-model-comparison","dir":"Articles","previous_headings":"","what":"Statistical tests for model comparison","title":"How to arrive at the best model fit","text":"can also perform statistical tests determine model best fit using test_performance() anova(). test_performance() automatically selects appropriate test based model family. can also call different tests, like test_likelihoodratio(), test_bf(), test_wald() test_vuong() directly. see, first, test_performance() used Bayes factor (based BIC comparison) compare models. second, second third model seem significantly better first model. Now compare second third model see Bayes factor likelihood ratio test suggest third model significantly better second model. mean inference? Obviously, although might found best fitting model, coefficients zero-inflation component model look rather spurious. high coefficients . still might find better distributional family model, try nbinom2 now. 
Based results, might even go model4.","code":"test_performance(model1, model2, model3) #> Name | Model | BF #> ------------------------- #> model1 | glmmTMB | #> model2 | glmmTMB | > 1000 #> model3 | glmmTMB | > 1000 #> Models were detected as nested (in terms of fixed parameters) and are compared in sequential order. test_performance(model2, model3) #> Name | Model | BF #> ------------------------- #> model2 | glmmTMB | #> model3 | glmmTMB | > 1000 #> Models were detected as nested (in terms of fixed parameters) and are compared in sequential order. test_likelihoodratio(model2, model3) #> # Likelihood-Ratio-Test (LRT) for Model Comparison (ML-estimator) #> #> Name | Model | df | df_diff | Chi2 | p #> ------------------------------------------------- #> model2 | glmmTMB | 17 | | | #> model3 | glmmTMB | 18 | 1 | 133.83 | < .001 model_parameters(model3) #> # Fixed Effects (Count Model) #> #> Parameter | Log-Mean | SE | 95% CI | z | p #> --------------------------------------------------------------- #> (Intercept) | -0.75 | 0.34 | [-1.40, -0.09] | -2.23 | 0.026 #> mined [no] | 1.56 | 0.33 | [ 0.92, 2.20] | 4.78 | < .001 #> spp [PR] | -1.57 | 0.30 | [-2.16, -0.97] | -5.15 | < .001 #> spp [DM] | 0.07 | 0.20 | [-0.32, 0.46] | 0.34 | 0.735 #> spp [EC-A] | -0.93 | 0.27 | [-1.45, -0.41] | -3.51 | < .001 #> spp [EC-L] | 0.31 | 0.20 | [-0.07, 0.69] | 1.59 | 0.111 #> spp [DES-L] | 0.41 | 0.19 | [ 0.04, 0.79] | 2.19 | 0.028 #> spp [DF] | -0.12 | 0.20 | [-0.51, 0.28] | -0.57 | 0.568 #> #> # Fixed Effects (Zero-Inflation Component) #> #> Parameter | Log-Odds | SE | 95% CI | z | p #> ------------------------------------------------------------------------------- #> (Intercept) | 2.28 | 1.12 | [ 0.08, 4.47] | 2.04 | 0.042 #> mined [no] | -21.36 | 4655.41 | [ -9145.81, 9103.08] | -4.59e-03 | 0.996 #> spp [PR] | -24.37 | 92198.78 | [ -1.81e+05, 1.81e+05] | -2.64e-04 | > .999 #> spp [DM] | -3.63 | 2.01 | [ -7.57, 0.31] | -1.80 | 0.071 #> spp [EC-A] | -2.79 | 1.95 | [ -6.61, 1.03] | 
-1.43 | 0.152 #> spp [EC-L] | -2.84 | 1.41 | [ -5.59, -0.08] | -2.02 | 0.044 #> spp [DES-L] | -3.56 | 1.78 | [ -7.04, -0.07] | -2.00 | 0.045 #> spp [DF] | -20.55 | 4284.59 | [ -8418.20, 8377.09] | -4.80e-03 | 0.996 #> #> # Dispersion #> #> Parameter | Coefficient | 95% CI #> ---------------------------------------- #> (Intercept) | 2.02 | [1.54, 2.67] #> #> # Random Effects Variances #> #> Parameter | Coefficient | 95% CI #> ------------------------------------------------- #> SD (Intercept: site) | 0.46 | [0.27, 0.76] #> #> Uncertainty intervals (equal-tailed) and p-values (two-tailed) computed #> using a Wald z-distribution approximation. model4 <- glmmTMB::glmmTMB( count ~ mined + spp + (1 | site), ziformula = ~ mined + spp, family = glmmTMB::nbinom2, data = glmmTMB::Salamanders ) check_model(model4, size_dot = 1.2) #> `check_outliers()` does not yet support models of class `glmmTMB`. check_overdispersion(model4) #> # Overdispersion test #> #> dispersion ratio = 0.958 #> p-value = 0.93 #> No overdispersion detected. check_zeroinflation(model4) #> # Check for zero-inflation #> #> Observed zeros: 387 #> Predicted zeros: 386 #> Ratio: 1.00 #> Model seems ok, ratio of observed and predicted zeros is within the #> tolerance range (p = 0.952). test_likelihoodratio(model3, model4) #> Some of the nested models seem to be identical and probably only vary in #> their random effects. 
#> # Likelihood-Ratio-Test (LRT) for Model Comparison (ML-estimator) #> #> Name | Model | df | df_diff | Chi2 | p #> ------------------------------------------------ #> model3 | glmmTMB | 18 | | | #> model4 | glmmTMB | 18 | 0 | 16.64 | < .001 model_parameters(model4) #> # Fixed Effects (Count Model) #> #> Parameter | Log-Mean | SE | 95% CI | z | p #> -------------------------------------------------------------- #> (Intercept) | -0.61 | 0.41 | [-1.40, 0.18] | -1.51 | 0.132 #> mined [no] | 1.43 | 0.37 | [ 0.71, 2.15] | 3.90 | < .001 #> spp [PR] | -0.96 | 0.64 | [-2.23, 0.30] | -1.50 | 0.134 #> spp [DM] | 0.17 | 0.24 | [-0.29, 0.63] | 0.73 | 0.468 #> spp [EC-A] | -0.39 | 0.34 | [-1.06, 0.28] | -1.13 | 0.258 #> spp [EC-L] | 0.49 | 0.24 | [ 0.02, 0.96] | 2.05 | 0.041 #> spp [DES-L] | 0.59 | 0.23 | [ 0.14, 1.04] | 2.59 | 0.010 #> spp [DF] | -0.11 | 0.24 | [-0.59, 0.36] | -0.46 | 0.642 #> #> # Fixed Effects (Zero-Inflation Component) #> #> Parameter | Log-Odds | SE | 95% CI | z | p #> --------------------------------------------------------------- #> (Intercept) | 0.91 | 0.63 | [-0.32, 2.14] | 1.45 | 0.147 #> mined [no] | -2.56 | 0.60 | [-3.75, -1.38] | -4.24 | < .001 #> spp [PR] | 1.16 | 1.33 | [-1.45, 3.78] | 0.87 | 0.384 #> spp [DM] | -0.94 | 0.80 | [-2.51, 0.63] | -1.17 | 0.241 #> spp [EC-A] | 1.04 | 0.71 | [-0.36, 2.44] | 1.46 | 0.144 #> spp [EC-L] | -0.56 | 0.73 | [-1.99, 0.86] | -0.77 | 0.439 #> spp [DES-L] | -0.89 | 0.75 | [-2.37, 0.58] | -1.19 | 0.236 #> spp [DF] | -2.54 | 2.18 | [-6.82, 1.74] | -1.16 | 0.244 #> #> # Dispersion #> #> Parameter | Coefficient | 95% CI #> ---------------------------------------- #> (Intercept) | 1.51 | [0.93, 2.46] #> #> # Random Effects Variances #> #> Parameter | Coefficient | 95% CI #> ------------------------------------------------- #> SD (Intercept: site) | 0.38 | [0.17, 0.87] #> #> Uncertainty intervals (equal-tailed) and p-values (two-tailed) computed #> using a Wald z-distribution 
approximation."},{"path":"https://easystats.github.io/performance/articles/check_model_practical.html","id":"conclusion","dir":"Articles","previous_headings":"","what":"Conclusion","title":"How to arrive at the best model fit","text":"Statistics is hard. It is not just about fitting a model, but also about checking the model fit and improving the model. This also requires domain knowledge, to consider whether all relevant predictors are included in the model (and whether all included predictors are relevant!). The performance package provides a comprehensive set of tools to help with this task. We have demonstrated how to use these tools to check the fit of a model, to detect misspecification, and to improve the model. We have also shown how to compare model fit indices and perform statistical tests to determine which model fits best. We hope this vignette is helpful in guiding you through this process.","code":""},{"path":"https://easystats.github.io/performance/articles/check_outliers.html","id":"reuse-of-this-material","dir":"Articles","previous_headings":"","what":"Reuse of this Material","title":"Checking outliers with *performance*","text":"Note: This vignette is an extended write-up of a Behavior Research Methods paper. This educational module can be freely reused for teaching purposes as long as the original BRM paper is cited. The raw code file, which can be adapted to other rmarkdown formats for teaching purposes, can also be accessed. To contribute and improve this content directly, please submit a Pull Request to the {performance} package GitHub repository, following the usual contributing guidelines: https://easystats.github.io/performance/CONTRIBUTING.html. To report issues or problems with this module, or to seek support, please open an issue: https://github.com/easystats/performance/issues. Reference: Thériault, R., Ben-Shachar, M. S., Patil, I., Lüdecke, D., Wiernik, B. M., & Makowski, D. (2024). Check your outliers! An introduction to identifying statistical outliers in R with easystats. Behavior Research Methods, 1-11.
https://doi.org/10.3758/s13428-024-02356-w","code":""},{"path":"https://easystats.github.io/performance/articles/check_outliers.html","id":"summary","dir":"Articles","previous_headings":"","what":"Summary","title":"Checking outliers with *performance*","text":"Beyond the challenge of keeping up to date with current best practices regarding the diagnosis and treatment of outliers, an additional difficulty arises concerning the mathematical implementation of the recommended methods. In this vignette, we provide an overview of the current recommendations and best practices, and demonstrate how they can easily and conveniently be implemented in the R statistical computing software, using the {performance} package of the easystats ecosystem. We cover univariate, multivariate, and model-based statistical outlier detection methods, their recommended threshold, standard output, and plotting methods. We conclude with recommendations on the handling of outliers: the different theoretical types of outliers, whether to exclude or winsorize them, and the importance of transparency.","code":""},{"path":"https://easystats.github.io/performance/articles/check_outliers.html","id":"statement-of-need","dir":"Articles","previous_headings":"","what":"Statement of Need","title":"Checking outliers with *performance*","text":"Real-life data often contain observations that can be considered abnormal when compared to the main population. Their cause—whether they belong to a different distribution (originating from a different generative process) or are simply extreme cases, statistically rare but not impossible—can be hard to assess, and the boundaries of “abnormal” are difficult to define. Nonetheless, the improper handling of outliers can substantially affect statistical model estimations, biasing effect estimations and weakening the models’ predictive performance. It is thus essential to address this problem in a thoughtful manner. Yet, despite the existence of established recommendations and guidelines, many researchers still do not treat outliers in a consistent manner, or do so using inappropriate strategies (Simmons, Nelson, and Simonsohn 2011; Leys et al. 2013). One possible reason is that researchers are not aware of the existing recommendations, or do not know how to implement them using their analysis software.
In this paper, we show how to follow current best practices for automatic and reproducible statistical outlier detection (SOD) using R and the {performance} package (Lüdecke et al. 2021), which is part of the easystats ecosystem of packages that build an R framework for easy statistical modeling, visualization, and reporting (Lüdecke et al. [2019] 2023). Installation instructions can be found on GitHub or its website, and its list of dependencies on CRAN. The instructional materials that follow are aimed at an audience of researchers who want to follow good practices, and are appropriate for advanced undergraduate students, graduate students, professors, and professionals having to deal with the nuances of outlier treatment.","code":""},{"path":"https://easystats.github.io/performance/articles/check_outliers.html","id":"identifying-outliers","dir":"Articles","previous_headings":"","what":"Identifying Outliers","title":"Checking outliers with *performance*","text":"Although many researchers attempt to identify outliers with measures based on the mean (e.g., z scores), those methods are problematic because the mean and standard deviation are not robust to the influence of outliers, and those methods also assume normally distributed data (i.e., a Gaussian distribution). Therefore, current guidelines recommend using robust methods to identify outliers, such as those relying on the median as opposed to the mean (Leys et al. 2019, 2013, 2018). Nonetheless, which exact outlier method to use depends on many factors. In some cases, eye-gauging odd observations can be an appropriate solution, though many researchers will favour algorithmic solutions to detect potential outliers, for example, based on a continuous value expressing how much an observation stands out from the others. One of the factors to consider when selecting an algorithmic outlier detection method is the statistical test of interest. When using a regression model, relevant information can be found by identifying observations that do not fit well with the model. This approach, known as model-based outliers detection (as outliers are extracted after the statistical model has been fit), can be contrasted with distribution-based outliers detection, which is based on the distance between an observation and the “center” of its population. Various quantification strategies of such distance exist for the latter, both univariate (involving only one variable at a time) or multivariate (involving multiple variables).
When no method is readily available to detect model-based outliers, such as for structural equation modelling (SEM), looking for multivariate outliers may be of relevance. For simple tests (t tests or correlations) that compare values of the same variable, it can be appropriate to check for univariate outliers. However, univariate methods can give false positives since t tests and correlations are, ultimately, also models/multivariable statistics. They are in this sense more limited, but we show them nonetheless for educational purposes. Importantly, whatever approach researchers choose remains a subjective decision, whose usage (and rationale) must be transparently documented and reproducible (Leys et al. 2019). Researchers should commit (ideally in a preregistration) to an outlier treatment method before collecting the data. They should then report in the paper their decisions and the details of their methods, as well as any deviation from their original plan. These transparency practices can help reduce false positives due to excessive researchers’ degrees of freedom (i.e., choice flexibility throughout the analysis). In the following section, we will go through each of the mentioned methods and provide examples of how to implement them in R.","code":""},{"path":"https://easystats.github.io/performance/articles/check_outliers.html","id":"univariate-outliers","dir":"Articles","previous_headings":"Identifying Outliers","what":"Univariate Outliers","title":"Checking outliers with *performance*","text":"Researchers frequently attempt to identify outliers using measures of deviation from the center of a variable’s distribution. One popular procedure is the z score transformation, which computes the distance in standard deviation (SD) from the mean. However, as mentioned earlier, this popular method is not robust. Therefore, for univariate outliers, it is recommended to use the median along with the Median Absolute Deviation (MAD), which are more robust than the interquartile range or the mean and standard deviation (Leys et al. 2019, 2013). Researchers can identify outliers based on robust (i.e., MAD-based) z scores using the check_outliers() function of the {performance} package, specifying method = \"zscore_robust\".1 Although Leys et al. (2013) suggest a default threshold of 2.5 and Leys et al.
(2019) a threshold of 3, {performance} uses by default a less conservative threshold of ~3.29.2 That is, data points will be flagged as outliers if they go beyond +/- ~3.29 MAD. Users can adjust this threshold using the threshold argument. We provide example code below using the mtcars dataset, which was extracted from the 1974 Motor Trend US magazine. The dataset contains fuel consumption and 10 characteristics of automobile design and performance for 32 different car models (see ?mtcars for details). We chose this dataset because it is accessible from base R and familiar to many R users. We might want to conduct specific statistical analyses on this data set, say, t tests or structural equation modelling, but first, we want to check for outliers that may influence those test results. Because the automobile names are stored as column names in mtcars, we first convert them to an ID column to benefit from the check_outliers() ID argument. Furthermore, we only really need a couple of columns for this demonstration, so we choose the first four (mpg = Miles/(US) gallon; cyl = Number of cylinders; disp = Displacement; hp = Gross horsepower). Finally, because there are no outliers in this dataset, we add two artificial outliers before running the function. We can see that check_outliers() with the robust z score method detected two outliers: cases 33 and 34, the observations we added ourselves. They were flagged for two variables specifically: mpg (Miles/(US) gallon) and cyl (Number of cylinders), and the output provides the exact z score for those variables. We describe how to deal with such cases in more detail later in the paper, but should we want to exclude the detected outliers from the main dataset, we can extract the row numbers using which() on the output object, which can then be used for indexing: check_outliers() output objects possess a plot() method, meaning that it is also possible to visualize the outliers using the generic plot() function on the resulting outlier object after loading the {see} package. Visual depiction of outliers using the robust z-score method. The distance represents an aggregate score for the variables mpg, cyl, disp, and hp. Other univariate methods are available, such as using the interquartile range (IQR), or based on different intervals, such as the Highest Density Interval (HDI) and the Bias Corrected and Accelerated Interval (BCI).
These methods are documented and described on the function’s help page.","code":"library(performance) # Create some artificial outliers and an ID column data <- rbind(mtcars[1:4], 42, 55) data <- cbind(car = row.names(data), data) outliers <- check_outliers(data, method = \"zscore_robust\", ID = \"car\") outliers > 2 outliers detected: cases 33, 34. > - Based on the following method and threshold: zscore_robust (3.291). > - For variables: mpg, cyl, disp, hp. > > ----------------------------------------------------------------------------- > > The following observations were considered outliers for two or more > variables by at least one of the selected methods: > > Row car n_Zscore_robust > 1 33 33 2 > 2 34 34 2 > > ----------------------------------------------------------------------------- > Outliers per variable (zscore_robust): > > $mpg > Row car Distance_Zscore_robust > 33 33 33 3.7 > 34 34 34 5.8 > > $cyl > Row car Distance_Zscore_robust > 33 33 33 12 > 34 34 34 17 which(outliers) > [1] 33 34 data_clean <- data[-which(outliers), ] library(see) plot(outliers)"},{"path":"https://easystats.github.io/performance/articles/check_outliers.html","id":"multivariate-outliers","dir":"Articles","previous_headings":"Identifying Outliers","what":"Multivariate Outliers","title":"Checking outliers with *performance*","text":"Univariate outliers can be useful when the focus is on a particular variable, for instance reaction time, where extreme values might be indicative of inattention or non-task-related behavior3. However, in many scenarios, the variables of a data set are not independent, and an abnormal observation will impact multiple dimensions. Take, for instance, a participant giving random answers to a questionnaire. In this case, computing the z score for each of the questions might not lead to satisfactory results. Instead, one might want to look at these variables together. One common approach is to compute multivariate distance metrics such as the Mahalanobis distance. Although the Mahalanobis distance is very popular, just like the regular z scores method, it is not robust and is heavily influenced by the outliers themselves.
Therefore, for multivariate outliers, it is recommended to use the Minimum Covariance Determinant, a robust version of the Mahalanobis distance (MCD, Leys et al. 2018, 2019). In {performance}’s check_outliers(), one can use this approach with method = \"mcd\".4 Here, we detected two multivariate outliers (i.e., when looking at all variables of our dataset together). Visual depiction of outliers using the Minimum Covariance Determinant (MCD) method, a robust version of the Mahalanobis distance. The distance represents the MCD scores for the variables mpg, cyl, disp, and hp. Other multivariate methods are available, such as another type of robust Mahalanobis distance that in this case relies on an orthogonalized Gnanadesikan-Kettenring pairwise estimator (Gnanadesikan and Kettenring 1972). These methods are documented and described on the function’s help page.","code":"outliers <- check_outliers(data, method = \"mcd\", verbose = FALSE) outliers > 2 outliers detected: cases 33, 34. > - Based on the following method and threshold: mcd (20). > - For variables: mpg, cyl, disp, hp. plot(outliers)"},{"path":"https://easystats.github.io/performance/articles/check_outliers.html","id":"model-based-outliers","dir":"Articles","previous_headings":"Identifying Outliers","what":"Model-Based Outliers","title":"Checking outliers with *performance*","text":"Working with regression models creates the possibility of using model-based SOD methods. These methods rely on the concept of leverage, that is, how much influence a given observation can have on the model estimates. If few observations have a relatively strong leverage/influence on the model, one can suspect that the model’s estimates are biased by those observations, in which case flagging them as outliers could prove helpful (see the next section, “Handling Outliers”). In {performance}, two such model-based SOD methods are currently available: Cook’s distance, for regular regression models, and Pareto, for Bayesian models.
, check_outliers() can applied directly regression model objects, simply specifying method = \"cook\" (method = \"pareto\" Bayesian models).5 Currently, lm models supported (exception glmmTMB, lmrob, glmrob models), long supported underlying functions stats::cooks.distance() (loo::pareto_k_values()) insight::get_data() (full list 225 models currently supported insight package, see https://easystats.github.io/insight/#list--supported-models--class). Also note although check_outliers() supports pipe operators (|> %>%), support tidymodels time. show demo . Visual depiction outliers based Cook’s distance (leverage standardized residuals), based fitted model. Using model-based outlier detection method, identified two outliers. Table 1 summarizes methods use cases, threshold. recommended thresholds default thresholds.","code":"model <- lm(disp ~ mpg * hp, data = data) outliers <- check_outliers(model, method = \"cook\") outliers > 2 outliers detected: cases 31, 34. > - Based on the following method and threshold: cook (0.806). > - For variable: (Whole model). plot(outliers)"},{"path":"https://easystats.github.io/performance/articles/check_outliers.html","id":"table-1","dir":"Articles","previous_headings":"Identifying Outliers > Model-Based Outliers","what":"Table 1","title":"Checking outliers with *performance*","text":"Summary Statistical Outlier Detection Methods Recommendations Statistical Test Diagnosis Method Recommended Threshold Function Usage Supported regression model Model-based: Cook (Pareto Bayesian models) qf(0.5, ncol(x), nrow(x) - ncol(x)) (0.7 Pareto) check_outliers(model, method = “cook”) Structural Equation Modeling (unsupported model) Multivariate: Minimum Covariance Determinant (MCD) qchisq(p = 1 - 0.001, df = ncol(x)) check_outliers(data, method = “mcd”) Simple test variables (t test, correlation, etc.) 
Univariate: robust z scores (MAD) qnorm(p = 1 - 0.001 / 2), ~ 3.29 check_outliers(data, method = “zscore_robust”)","code":""},{"path":"https://easystats.github.io/performance/articles/check_outliers.html","id":"cooks-distance-vs--mcd","dir":"Articles","previous_headings":"Identifying Outliers","what":"Cook’s Distance vs. MCD","title":"Checking outliers with *performance*","text":"Leys et al. (2018) report a preference for the MCD method over Cook’s distance. This is because Cook’s distance removes one observation at a time and checks its corresponding influence on the model each time (Cook 1977), and flags any observation that has a large influence. In the view of these authors, when there are several outliers, the process of removing a single outlier at a time is problematic, as the model remains “contaminated” or influenced by other possible outliers, rendering this method suboptimal in the presence of multiple outliers. However, distribution-based approaches are not a silver bullet either, and there are cases where the usage of methods agnostic to the theoretical and statistical models of interest might be problematic. For example, a very tall person would be expected to also be much heavier than average, yet could still fit the expected association between height and weight (i.e., be in line with the model weight ~ height). In contrast, using multivariate outlier detection methods may flag this person as an outlier—being unusual on two variables, height and weight—even though the pattern fits perfectly with our predictions. In the example below, we plot the raw data and see two possible outliers. The first one falls along the regression line, and is therefore in line with our hypothesis. The second one clearly diverges from the regression line, and therefore we can conclude that this outlier may have a disproportionate influence on our model. Scatter plot of height and weight, with two extreme observations: one model-consistent (top-right) and one model-inconsistent (i.e., an outlier; bottom-right). Using either the z-score or MCD methods, the model-consistent observation is incorrectly flagged as an outlier or influential observation. In contrast, the model-based detection method displays the desired behaviour: it correctly flags the person who is very tall but very light, without flagging the person who is very tall and very heavy. The leverage method (Cook’s distance) correctly distinguishes the true outlier from the model-consistent extreme observation, based on the fitted model.
Finally, unusual observations happen naturally: extreme observations are expected even when drawn from a normal distribution. While statistical models can integrate this “expectation”, multivariate outlier methods might be too conservative, flagging too many observations despite them belonging to the right generative process. For these reasons, we believe that model-based methods are still preferable to the MCD when using supported regression models. Additionally, if the presence of multiple outliers is a significant concern, regression methods that are robust to outliers should be considered—like t regression or quantile regression—as they render the precise identification of outliers less critical (McElreath 2020).","code":"data <- women[rep(seq_len(nrow(women)), each = 100), ] data <- rbind(data, c(100, 258), c(100, 200)) model <- lm(weight ~ height, data) rempsyc::nice_scatter(data, \"height\", \"weight\") outliers <- check_outliers(model, method = c(\"zscore_robust\", \"mcd\"), verbose = FALSE) which(outliers) > [1] 1501 1502 outliers <- check_outliers(model, method = \"cook\") which(outliers) > [1] 1502 plot(outliers)"},{"path":"https://easystats.github.io/performance/articles/check_outliers.html","id":"composite-outlier-score","dir":"Articles","previous_headings":"Identifying Outliers","what":"Composite Outlier Score","title":"Checking outliers with *performance*","text":"The {performance} package also offers an alternative, consensus-based approach that combines several methods, based on the assumption that different methods provide different angles of looking at a given problem. By applying a variety of methods, one can hope to “triangulate” the true outliers (those consistently flagged by multiple methods) and thus attempt to minimize false positives. In practice, this approach computes a composite outlier score, formed of the average of the binary (0 or 1) classification results of each method. It represents the probability that each observation is classified as an outlier by at least one method. The default decision rule classifies rows with composite outlier scores superior or equal to 0.5 as outlier observations (i.e., classified as outliers by at least half of the methods).
In {performance}’s check_outliers(), one can use this approach by including all desired methods in the corresponding argument. Outliers (counts or per variables) for individual methods can then be obtained through attributes. For example: An example sentence for reporting the usage of the composite method could be: Based on a composite outlier score (see the ‘check_outliers()’ function in the ‘performance’ R package; Lüdecke et al. 2021) obtained via the joint application of multiple outliers detection algorithms ((a) median absolute deviation (MAD)-based robust z scores, Leys et al. 2013; (b) Mahalanobis minimum covariance determinant (MCD), Leys et al. 2019; and (c) Cook’s distance, Cook 1977), we excluded two participants classified as outliers by at least half of the methods used.","code":"outliers <- check_outliers(model, method = c(\"zscore_robust\", \"mcd\", \"cook\"), verbose = FALSE) which(outliers) > [1] 1501 1502 attributes(outliers)$outlier_var$zscore_robust > $weight > Row Distance_Zscore_robust > 1501 1501 6.9 > 1502 1502 3.7 > > $height > Row Distance_Zscore_robust > 1501 1501 5.9 > 1502 1502 5.9"},{"path":"https://easystats.github.io/performance/articles/check_outliers.html","id":"handling-outliers","dir":"Articles","previous_headings":"","what":"Handling Outliers","title":"Checking outliers with *performance*","text":"The previous section demonstrated how to identify outliers using the check_outliers() function of the {performance} package. But what should be done with the outliers once identified? Although it is common to automatically discard any observation marked as “an outlier”, as if it might infect the rest of the data with some statistical ailment, we believe the use of SOD methods is but one step in the get-to-know-your-data pipeline; the researcher or analyst’s domain knowledge must be involved in the decision of how to deal with observations marked as outliers by means of SOD. Indeed, automatic tools can help detect outliers, but they are nowhere near perfect. Although they can be useful to flag suspect data, they can produce misses and false alarms, and cannot replace the human eye and the proper vigilance of the researcher.
To this end, when manually inspecting data for outliers, it can be helpful to think of outliers as belonging to different types, or categories, which can help one decide what to do with a given outlier.","code":""},{"path":"https://easystats.github.io/performance/articles/check_outliers.html","id":"error-interesting-and-random-outliers","dir":"Articles","previous_headings":"Handling Outliers","what":"Error, Interesting, and Random Outliers","title":"Checking outliers with *performance*","text":"Leys et al. (2019) distinguish between error outliers, interesting outliers, and random outliers. Error outliers are likely due to human error and should be corrected before data analysis, or outright removed since they are invalid observations. Interesting outliers are not due to technical error and may be of theoretical interest; it might thus be relevant to investigate them further, even though they should be removed from the current analysis of interest. Random outliers are assumed to be due to chance alone and to belong to the correct distribution, and should, therefore, be retained. It is recommended to keep observations which are expected to be part of the distribution of interest, even if they are outliers (Leys et al. 2019). However, if the suspected outliers are assumed to belong to an alternative distribution, or if these observations have a large impact on the results and call into question their robustness, especially if significance is conditional on their inclusion, they should be removed. One should also keep in mind that there might be error outliers that are not detected by statistical tools, but should nonetheless be found and removed. For example, if we are studying the effects of X on Y among teenagers and we have one observation from a 20-year-old, this observation might not be a statistical outlier, but it is an outlier in the context of our research and should be discarded. We can call these observations undetected error outliers, in the sense that although they do not statistically stand out, they do not belong to the theoretical or empirical distribution of interest (e.g., teenagers).
In this way, one should not blindly rely on statistical outlier detection methods; due diligence in investigating undetected error outliers relative to the specific research question is also essential for valid inferences.","code":""},{"path":"https://easystats.github.io/performance/articles/check_outliers.html","id":"winsorization","dir":"Articles","previous_headings":"Handling Outliers","what":"Winsorization","title":"Checking outliers with *performance*","text":"Removing outliers can in this case be a valid strategy, and ideally one would report the results both with and without outliers to see the extent of their impact on the results. This approach, however, can reduce statistical power. Therefore, we propose a recoding approach, namely, winsorization: bringing outliers back within acceptable limits (e.g., 3 MADs, Tukey and McLaughlin 1963). However, if possible, it is recommended to collect enough data so that, even after removing outliers, there is still sufficient statistical power without having to resort to winsorization (Leys et al. 2019). The easystats ecosystem makes it easy to incorporate this step into your workflow through the winsorize() function of {datawizard}, a lightweight R package to facilitate data wrangling and statistical transformations (Patil et al. 2022). This procedure will bring back univariate outliers within the limits of ‘acceptable’ values, based either on the percentile, the z score, or its robust alternative based on the MAD.","code":"data[1501:1502, ] # See outliers rows > height weight > 1501 100 258 > 1502 100 200 # Winsorizing using the MAD library(datawizard) winsorized_data <- winsorize(data, method = \"zscore\", robust = TRUE, threshold = 3) # Values > +/- MAD have been winsorized winsorized_data[1501:1502, ] > height weight > 1501 83 188 > 1502 83 188"},{"path":"https://easystats.github.io/performance/articles/check_outliers.html","id":"the-importance-of-transparency","dir":"Articles","previous_headings":"Handling Outliers","what":"The Importance of Transparency","title":"Checking outliers with *performance*","text":"A critical part of sound outlier treatment is that, regardless of which SOD method is used, it should be reported in a reproducible manner.
Ideally, the handling of outliers should be specified a priori with as much detail as possible, and preregistered, to limit the researchers’ degrees of freedom and therefore the risks of false positives (Leys et al. 2019). This is especially true given that interesting outliers and random outliers are oftentimes hard to distinguish in practice. Thus, researchers should always prioritize transparency and report the following information: (a) how many outliers were identified (including the percentage); (b) according to which method and criteria, (c) using which function of which R package (if applicable), and (d) how they were handled (excluded or winsorized, and if the latter, using which threshold). If possible, (e) the corresponding code script, along with the data, should be shared on a public repository like the Open Science Framework (OSF), so that the exclusion criteria can be reproduced precisely.","code":""},{"path":"https://easystats.github.io/performance/articles/check_outliers.html","id":"conclusion","dir":"Articles","previous_headings":"","what":"Conclusion","title":"Checking outliers with *performance*","text":"In this vignette, we showed how to investigate outliers using the check_outliers() function of the {performance} package while following current good practices. However, best practice for outlier treatment does not stop at using appropriate statistical algorithms; it also entails respecting existing recommendations, such as preregistration, reproducibility, consistency, transparency, and justification. Ideally, one should additionally also report the package, function, and threshold used (linking to the full code where possible). We hope that this paper and the accompanying check_outlier() function of easystats will help researchers engage in good research practices by providing a smooth outlier detection experience.","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/articles/compare.html","id":"comparing-vs--testing","dir":"Articles","previous_headings":"","what":"Comparing vs. Testing","title":"Compare, Test, and Select Models","text":"Let’s imagine that we are interested in explaining the variability in Sepal.Length using 3 different predictors. 
For that, we can build 3 linear models.","code":"model1 <- lm(Sepal.Length ~ Petal.Length, data = iris) model2 <- lm(Sepal.Length ~ Petal.Width, data = iris) model3 <- lm(Sepal.Length ~ Sepal.Width, data = iris)"},{"path":"https://easystats.github.io/performance/articles/compare.html","id":"comparing-indices-of-model-performance","dir":"Articles","previous_headings":"Comparing vs. Testing","what":"Comparing Indices of Model Performance","title":"Compare, Test, and Select Models","text":"The eponymous function of the package, performance(), can be used to compute different indices of performance (an umbrella term for indices of fit). To compare indices of model performance across multiple models, one can obtain a useful table to compare these indices at a glance using the compare_performance() function. Comparison of Model Performance Indices If you remember your stats lessons, when comparing different model fits, you would like to choose a model that has a high R^2 value (a measure of how much variance is explained by the predictors), low AIC and BIC values, and a low root mean squared error (RMSE). Based on these criteria, we can immediately see that model1 has the best fit. If you don’t like looking at tables, you can also plot them using the plotting method supported by the see package: For more, see: https://easystats.github.io/see/articles/performance.html","code":"library(performance) library(insight) # we will use `print_md` function to display a well-formatted table result <- performance(model1) print_md(result) result <- compare_performance(model1, model2, model3) print_md(result) library(see) plot(compare_performance(model1, model2, model3))"},{"path":"https://easystats.github.io/performance/articles/compare.html","id":"testing-models","dir":"Articles","previous_headings":"Comparing vs. Testing","what":"Testing Models","title":"Compare, Test, and Select Models","text":"While comparing these indices is often useful, making a decision (for instance, which model to keep or drop) can often be hard, as the indices can give conflicting suggestions. Additionally, it is sometimes unclear which index to favour in a given context. 
This is one reason why tests are useful, as they facilitate decisions via (infamous) “significance” indices, like p-values (in a frequentist framework) or Bayes Factors (in a Bayesian framework). Each model is compared to model1. However, these tests also have strong limitations and shortcomings, and should not be used as the only criterion or rule! You can find more information about these tests below.","code":"result <- test_performance(model1, model2, model3) print_md(result)"},{"path":"https://easystats.github.io/performance/articles/compare.html","id":"experimenting","dir":"Articles","previous_headings":"Comparing vs. Testing","what":"Experimenting","title":"Compare, Test, and Select Models","text":"Although we have shown examples with simple linear models, we highly encourage you to try these functions out with models of your choosing. For example, these functions also work with mixed-effects regression models, Bayesian regression models, etc. To demonstrate this, we will run Bayesian versions of the linear regression models we just compared: Comparison of Model Performance Indices Note that, since these are Bayesian regression models, the function automatically picked appropriate indices to compare! If you are unfamiliar with them, you can explore them further. Now it’s your turn to play! :)","code":"library(rstanarm) model1 <- stan_glm(Sepal.Length ~ Petal.Length, data = iris, refresh = 0) model2 <- stan_glm(Sepal.Length ~ Petal.Width, data = iris, refresh = 0) model3 <- stan_glm(Sepal.Length ~ Sepal.Width, data = iris, refresh = 0) result <- compare_performance(model1, model2, model3) print_md(result)"},{"path":"https://easystats.github.io/performance/articles/r2.html","id":"what-is-the-r2","dir":"Articles","previous_headings":"","what":"What is the R2?","title":"R-squared (R2)","text":"The coefficient of determination, denoted R^2 and pronounced “R squared”, typically corresponds to the proportion of variance in the dependent variable (the response) that is explained (i.e., predicted) by the independent variables (the predictors). 
It is an “absolute” index of goodness-of-fit, ranging from 0 to 1 (and often expressed as a percentage), and can be used for model performance assessment or models comparison.","code":""},{"path":"https://easystats.github.io/performance/articles/r2.html","id":"different-types-of-r2","dir":"Articles","previous_headings":"","what":"Different types of R2","title":"R-squared (R2)","text":"As models become more complex, the computation of an R^2 becomes increasingly less straightforward. Currently, depending on the context of the regression model object, one can choose from the following measures supported in performance: Bayesian R^2 Cox & Snell’s R^2 Efron’s R^2 Kullback-Leibler R^2 LOO-adjusted R^2 McFadden’s R^2 McKelvey & Zavoinas R^2 Nagelkerke’s R^2 Nakagawa’s R^2 for mixed models Somers’ D_{xy} rank correlation for binary outcomes Tjur’s R^2 - coefficient of determination (D) Xu’ R^2 (Omega-squared) R^2 for models with zero-inflation (to be completed). To begin, let’s first load the package.","code":"library(performance)"},{"path":"https://easystats.github.io/performance/articles/r2.html","id":"r2-for-lm","dir":"Articles","previous_headings":"","what":"R2 for lm","title":"R-squared (R2)","text":"","code":"m_lm <- lm(wt ~ am * cyl, data = mtcars) r2(m_lm) > # R2 for Linear Regression > R2: 0.724 > adj. R2: 0.694"},{"path":"https://easystats.github.io/performance/articles/r2.html","id":"r2-for-glm","dir":"Articles","previous_headings":"","what":"R2 for glm","title":"R-squared (R2)","text":"In the context of a generalized linear model (e.g., a logistic model where the outcome is binary), R^2 doesn’t measure the percentage of “explained variance”, as this concept doesn’t apply here. However, the R^2s that have been adapted for GLMs have retained the name of “R2”, mostly because of their similar properties (their range, sensitivity, and interpretation as the amount of explanatory power).","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/articles/r2.html","id":"marginal-vs--conditional-r2","dir":"Articles","previous_headings":"R2 for Mixed Models","what":"Marginal vs. 
Conditional R2","title":"R-squared (R2)","text":"For mixed models, performance will return two different R^2s: the conditional R^2 and the marginal R^2. The marginal R^2 considers only the variance of the fixed effects (without the random effects), while the conditional R^2 takes both the fixed and random effects into account (i.e., the total model). Note that the r2 functions return only the R^2 values. We encourage users to instead always use the model_performance function to get a more comprehensive set of indices of model fit. But, for the current vignette, we would like to exclusively focus on this family of functions and talk about this measure.","code":"library(lme4) # defining a linear mixed-effects model model <- lmer(Petal.Length ~ Petal.Width + (1 | Species), data = iris) r2(model) > # R2 for Mixed Models > > Conditional R2: 0.933 > Marginal R2: 0.303 model_performance(model) > # Indices of model performance > > AIC | AICc | BIC | R2 (cond.) | R2 (marg.) | ICC | RMSE | Sigma > ----------------------------------------------------------------------------- > 159.036 | 159.312 | 171.079 | 0.933 | 0.303 | 0.904 | 0.373 | 0.378"},{"path":"https://easystats.github.io/performance/articles/r2.html","id":"r2-for-bayesian-models","dir":"Articles","previous_headings":"","what":"R2 for Bayesian Models","title":"R-squared (R2)","text":"As discussed above, for mixed-effects models, there are two components associated with the R^2.","code":"library(rstanarm) model <- stan_glm(mpg ~ wt + cyl, data = mtcars, refresh = 0) r2(model) > # Bayesian R2 with Compatibility Interval > > Conditional R2: 0.816 (95% CI [0.704, 0.897]) # defining a Bayesian mixed-effects model model <- stan_lmer(Petal.Length ~ Petal.Width + (1 | Species), data = iris, refresh = 0) r2(model) > # Bayesian R2 with Compatibility Interval > > Conditional R2: 0.954 (95% CI [0.950, 0.957]) > Marginal R2: 0.405 (95% CI [0.186, 0.625])"},{"path":"https://easystats.github.io/performance/articles/r2.html","id":"comparing-change-in-r2-using-cohens-f","dir":"Articles","previous_headings":"","what":"Comparing change in R2 using Cohen’s f","title":"R-squared (R2)","text":"Cohen’s f (of ANOVA fame) can be used as a measure of 
effect size in the context of sequential multiple regression (i.e., for nested models). That is, when comparing two models, we can examine the ratio of the increase in R^2 to the unexplained variance: f^{2}=\\frac{R_{AB}^{2}-R_{A}^{2}}{1-R_{AB}^{2}} If you want to know more about these indices, you can check out the details and references in the functions that compute them.","code":"library(effectsize) data(hardlyworking) m1 <- lm(salary ~ xtra_hours, data = hardlyworking) m2 <- lm(salary ~ xtra_hours + n_comps + seniority, data = hardlyworking) cohens_f_squared(m1, model2 = m2) > Cohen's f2 (partial) | 95% CI | R2_delta > --------------------------------------------- > 1.19 | [0.99, Inf] | 0.17 > > - One-sided CIs: upper bound fixed at [Inf]."},{"path":"https://easystats.github.io/performance/articles/r2.html","id":"interpretation","dir":"Articles","previous_headings":"","what":"Interpretation","title":"R-squared (R2)","text":"If you want to know how to interpret these R^2 values, see these interpretation guidelines.","code":""},{"path":"https://easystats.github.io/performance/authors.html","id":null,"dir":"","previous_headings":"","what":"Authors","title":"Authors and Citation","text":"Daniel Lüdecke. Author, maintainer. Dominique Makowski. Author, contributor. Mattan S. Ben-Shachar. Author, contributor. Indrajeet Patil. Author, contributor. Philip Waggoner. Author, contributor. Brenton M. Wiernik. Author, contributor. Rémi Thériault. Author, contributor. Vincent Arel-Bundock. Contributor. Martin Jullum. Reviewer. gjo11. Reviewer. Etienne Bacher. Contributor. Joseph Luchman. Contributor.","code":""},{"path":"https://easystats.github.io/performance/authors.html","id":"citation","dir":"","previous_headings":"","what":"Citation","title":"Authors and Citation","text":"Lüdecke et al., (2021). performance: An R Package for Assessment, Comparison and Testing of Statistical Models. Journal of Open Source Software, 6(60), 3139. https://doi.org/10.21105/joss.03139","code":"@Article{, title = {{performance}: An {R} Package for Assessment, Comparison and Testing of Statistical Models}, author = {Daniel Lüdecke and Mattan S. 
Ben-Shachar and Indrajeet Patil and Philip Waggoner and Dominique Makowski}, year = {2021}, journal = {Journal of Open Source Software}, volume = {6}, number = {60}, pages = {3139}, doi = {10.21105/joss.03139}, }"},{"path":"https://easystats.github.io/performance/index.html","id":"performance-","dir":"","previous_headings":"","what":"Assessment of Regression Models Performance","title":"Assessment of Regression Models Performance","text":"Test if your model is a good model! A crucial aspect when building regression models is to evaluate the quality of model fit. It is important to investigate how well models fit the data and which fit indices to report. Functions to create diagnostic plots or to compute fit measures do exist, however, they are mostly spread over different packages. There is no unique and consistent approach to assess the model quality for different kinds of models. The primary goal of the performance package is to fill this gap and to provide utilities for computing indices of model quality and goodness of fit. These include measures like r-squared (R2), root mean squared error (RMSE) or intraclass correlation coefficient (ICC), but also functions to check (mixed) models for overdispersion, zero-inflation, convergence or singularity.","code":""},{"path":"https://easystats.github.io/performance/index.html","id":"installation","dir":"","previous_headings":"","what":"Installation","title":"Assessment of Regression Models Performance","text":"The performance package is available on CRAN, while its latest development version is available on R-universe (rOpenSci). Once you have downloaded the package, you can load it using: Tip Instead of library(performance), use library(easystats). This will make all features of the easystats-ecosystem available. To stay updated, use easystats::install_latest().","code":"library(\"performance\")"},{"path":"https://easystats.github.io/performance/index.html","id":"citation","dir":"","previous_headings":"","what":"Citation","title":"Assessment of Regression Models Performance","text":"To cite performance in publications use:","code":"citation(\"performance\") #> To cite package 'performance' in publications use: #> #> Lüdecke et al., (2021). 
performance: An R Package for Assessment, Comparison and #> Testing of Statistical Models. Journal of Open Source Software, 6(60), 3139. #> https://doi.org/10.21105/joss.03139 #> #> A BibTeX entry for LaTeX users is #> #> @Article{, #> title = {{performance}: An {R} Package for Assessment, Comparison and Testing of Statistical Models}, #> author = {Daniel Lüdecke and Mattan S. Ben-Shachar and Indrajeet Patil and Philip Waggoner and Dominique Makowski}, #> year = {2021}, #> journal = {Journal of Open Source Software}, #> volume = {6}, #> number = {60}, #> pages = {3139}, #> doi = {10.21105/joss.03139}, #> }"},{"path":"https://easystats.github.io/performance/index.html","id":"documentation","dir":"","previous_headings":"","what":"Documentation","title":"Assessment of Regression Models Performance","text":"There is a nice introduction into the package on youtube.","code":""},{"path":[]},{"path":[]},{"path":"https://easystats.github.io/performance/index.html","id":"r-squared","dir":"","previous_headings":"The performance workflow > Assessing model quality","what":"R-squared","title":"Assessment of Regression Models Performance","text":"performance has a generic r2() function, which computes the r-squared for many different models, including mixed effects and Bayesian regression models. r2() returns a list containing values related to the most “appropriate” r-squared for the given model. The different R-squared measures can also be accessed directly via functions like r2_bayes(), r2_coxsnell() or r2_nagelkerke() (see a full list of functions below). For mixed models, the conditional and marginal R-squared are returned. The marginal R-squared considers only the variance of the fixed effects and indicates how much of the model’s variance is explained by the fixed effects part only. The conditional R-squared takes both the fixed and random effects into account and indicates how much of the model’s variance is explained by the “complete” model. For frequentist mixed models, r2() (resp. 
r2_nakagawa()) computes the mean random effect variances, thus r2() is also appropriate for mixed models with more complex random effects structures, like random slopes or nested random effects (Johnson 2014; Nakagawa, Johnson, and Schielzeth 2017).","code":"model <- lm(mpg ~ wt + cyl, data = mtcars) r2(model) #> # R2 for Linear Regression #> R2: 0.830 #> adj. R2: 0.819 model <- glm(am ~ wt + cyl, data = mtcars, family = binomial) r2(model) #> # R2 for Logistic Regression #> Tjur's R2: 0.705 library(MASS) data(housing) model <- polr(Sat ~ Infl + Type + Cont, weights = Freq, data = housing) r2(model) #> Nagelkerke's R2: 0.108 set.seed(123) library(rstanarm) model <- stan_glmer( Petal.Length ~ Petal.Width + (1 | Species), data = iris, cores = 4 ) r2(model) #> # Bayesian R2 with Compatibility Interval #> #> Conditional R2: 0.954 (95% CI [0.951, 0.957]) #> Marginal R2: 0.414 (95% CI [0.204, 0.644]) library(lme4) model <- lmer(Reaction ~ Days + (1 + Days | Subject), data = sleepstudy) r2(model) #> # R2 for Mixed Models #> #> Conditional R2: 0.799 #> Marginal R2: 0.279"},{"path":"https://easystats.github.io/performance/index.html","id":"intraclass-correlation-coefficient-icc","dir":"","previous_headings":"The performance workflow > Assessing model quality","what":"Intraclass Correlation Coefficient (ICC)","title":"Assessment of Regression Models Performance","text":"Similar to R-squared, the ICC provides information on the explained variance and can be interpreted as “the proportion of the variance explained by the grouping structure in the population” (Hox 2010). icc() calculates the ICC for various mixed model objects, including stanreg models. 
…and models of class brmsfit.","code":"library(lme4) model <- lmer(Reaction ~ Days + (1 + Days | Subject), data = sleepstudy) icc(model) #> # Intraclass Correlation Coefficient #> #> Adjusted ICC: 0.722 #> Unadjusted ICC: 0.521 library(brms) set.seed(123) model <- brm(mpg ~ wt + (1 | cyl) + (1 + wt | gear), data = mtcars) icc(model) #> # Intraclass Correlation Coefficient #> #> Adjusted ICC: 0.941 #> Unadjusted ICC: 0.779"},{"path":[]},{"path":"https://easystats.github.io/performance/index.html","id":"check-for-overdispersion","dir":"","previous_headings":"The performance workflow > Model diagnostics","what":"Check for overdispersion","title":"Assessment of Regression Models Performance","text":"Overdispersion occurs when the observed variance in the data is higher than the expected variance from the model assumption (for Poisson, variance roughly equals the mean of an outcome). check_overdispersion() checks if a count model (including mixed models) is overdispersed or not. Overdispersion can be fixed by either modelling the dispersion parameter (not possible in all packages), or by choosing a different distributional family (like Quasi-Poisson, or negative binomial, see (Gelman and Hill 2007)).","code":"library(glmmTMB) data(Salamanders) model <- glm(count ~ spp + mined, family = poisson, data = Salamanders) check_overdispersion(model) #> # Overdispersion test #> #> dispersion ratio = 2.946 #> Pearson's Chi-Squared = 1873.710 #> p-value = < 0.001"},{"path":"https://easystats.github.io/performance/index.html","id":"check-for-zero-inflation","dir":"","previous_headings":"The performance workflow > Model diagnostics","what":"Check for zero-inflation","title":"Assessment of Regression Models Performance","text":"Zero-inflation (in (Quasi-)Poisson models) is indicated when the amount of observed zeros is larger than the amount of predicted zeros, so the model is underfitting zeros. In such cases, it is recommended to use negative binomial or zero-inflated models. 
Use check_zeroinflation() to check if zero-inflation is present in the fitted model.","code":"model <- glm(count ~ spp + mined, family = poisson, data = Salamanders) check_zeroinflation(model) #> # Check for zero-inflation #> #> Observed zeros: 387 #> Predicted zeros: 298 #> Ratio: 0.77"},{"path":"https://easystats.github.io/performance/index.html","id":"check-for-singular-model-fits","dir":"","previous_headings":"The performance workflow > Model diagnostics","what":"Check for singular model fits","title":"Assessment of Regression Models Performance","text":"A “singular” model fit means that some dimensions of the variance-covariance matrix have been estimated as exactly zero. This often occurs for mixed models with overly complex random effects structures. check_singularity() checks mixed models (of class lme, merMod, glmmTMB or MixMod) for singularity, and returns TRUE if the model fit is singular. Remedies to cure issues with singular fits can be found here.","code":"library(lme4) data(sleepstudy) # prepare data set.seed(123) sleepstudy$mygrp <- sample(1:5, size = 180, replace = TRUE) sleepstudy$mysubgrp <- NA for (i in 1:5) { filter_group <- sleepstudy$mygrp == i sleepstudy$mysubgrp[filter_group] <- sample(1:30, size = sum(filter_group), replace = TRUE) } # fit strange model model <- lmer( Reaction ~ Days + (1 | mygrp / mysubgrp) + (1 | Subject), data = sleepstudy ) check_singularity(model) #> [1] TRUE"},{"path":"https://easystats.github.io/performance/index.html","id":"check-for-heteroskedasticity","dir":"","previous_headings":"The performance workflow > Model diagnostics","what":"Check for heteroskedasticity","title":"Assessment of Regression Models Performance","text":"Linear models assume constant error variance (homoskedasticity). 
The check_heteroscedasticity() function assesses if this assumption has been violated:","code":"data(cars) model <- lm(dist ~ speed, data = cars) check_heteroscedasticity(model) #> Warning: Heteroscedasticity (non-constant error variance) detected (p = 0.031)."},{"path":"https://easystats.github.io/performance/index.html","id":"comprehensive-visualization-of-model-checks","dir":"","previous_headings":"The performance workflow > Model diagnostics","what":"Comprehensive visualization of model checks","title":"Assessment of Regression Models Performance","text":"performance provides many functions to check model assumptions, like check_collinearity(), check_normality() or check_heteroscedasticity(). To get a comprehensive check, use check_model().","code":"# defining a model model <- lm(mpg ~ wt + am + gear + vs * cyl, data = mtcars) # checking model assumptions check_model(model)"},{"path":"https://easystats.github.io/performance/index.html","id":"model-performance-summaries","dir":"","previous_headings":"The performance workflow","what":"Model performance summaries","title":"Assessment of Regression Models Performance","text":"model_performance() computes indices of model performance for regression models. Depending on the model object, typical indices might be r-squared, AIC, BIC, RMSE, ICC or LOOIC.","code":""},{"path":"https://easystats.github.io/performance/index.html","id":"linear-model","dir":"","previous_headings":"The performance workflow > Model performance summaries","what":"Linear model","title":"Assessment of Regression Models Performance","text":"","code":"m1 <- lm(mpg ~ wt + cyl, data = mtcars) model_performance(m1) #> # Indices of model performance #> #> AIC | AICc | BIC | R2 | R2 (adj.) 
| RMSE | Sigma #> --------------------------------------------------------------- #> 156.010 | 157.492 | 161.873 | 0.830 | 0.819 | 2.444 | 2.568"},{"path":"https://easystats.github.io/performance/index.html","id":"logistic-regression","dir":"","previous_headings":"The performance workflow > Model performance summaries","what":"Logistic regression","title":"Assessment of Regression Models Performance","text":"","code":"m2 <- glm(vs ~ wt + mpg, data = mtcars, family = \"binomial\") model_performance(m2) #> # Indices of model performance #> #> AIC | AICc | BIC | Tjur's R2 | RMSE | Sigma | Log_loss | Score_log | Score_spherical | PCP #> ----------------------------------------------------------------------------------------------------- #> 31.298 | 32.155 | 35.695 | 0.478 | 0.359 | 1.000 | 0.395 | -14.903 | 0.095 | 0.743"},{"path":"https://easystats.github.io/performance/index.html","id":"linear-mixed-model","dir":"","previous_headings":"The performance workflow > Model performance summaries","what":"Linear mixed model","title":"Assessment of Regression Models Performance","text":"","code":"library(lme4) m3 <- lmer(Reaction ~ Days + (1 + Days | Subject), data = sleepstudy) model_performance(m3) #> # Indices of model performance #> #> AIC | AICc | BIC | R2 (cond.) | R2 (marg.) 
| ICC | RMSE | Sigma #> ---------------------------------------------------------------------------------- #> 1755.628 | 1756.114 | 1774.786 | 0.799 | 0.279 | 0.722 | 23.438 | 25.592"},{"path":"https://easystats.github.io/performance/index.html","id":"models-comparison","dir":"","previous_headings":"The performance workflow","what":"Models comparison","title":"Assessment of Regression Models Performance","text":"The compare_performance() function can be used to compare the performance and quality of several models (including models of different types).","code":"counts <- c(18, 17, 15, 20, 10, 20, 25, 13, 12) outcome <- gl(3, 1, 9) treatment <- gl(3, 3) m4 <- glm(counts ~ outcome + treatment, family = poisson()) compare_performance(m1, m2, m3, m4, verbose = FALSE) #> # Comparison of Model Performance Indices #> #> Name | Model | AIC (weights) | AICc (weights) | BIC (weights) | RMSE | Sigma | Score_log | Score_spherical | R2 | R2 (adj.) | Tjur's R2 | Log_loss | PCP | R2 (cond.) | R2 (marg.) | ICC | Nagelkerke's R2 #> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ #> m1 | lm | 156.0 (<.001) | 157.5 (<.001) | 161.9 (<.001) | 2.444 | 2.568 | | | 0.830 | 0.819 | | | | | | | #> m2 | glm | 31.3 (>.999) | 32.2 (>.999) | 35.7 (>.999) | 0.359 | 1.000 | -14.903 | 0.095 | | | 0.478 | 0.395 | 0.743 | | | | #> m3 | lmerMod | 1764.0 (<.001) | 1764.5 (<.001) | 1783.1 (<.001) | 23.438 | 25.592 | | | | | | | | 0.799 | 0.279 | 0.722 | #> m4 | glm | 56.8 (<.001) | 76.8 (<.001) | 57.7 (<.001) | 3.043 | 1.000 | -2.598 | 0.324 | | | | | | | | | 0.657"},{"path":"https://easystats.github.io/performance/index.html","id":"general-index-of-model-performance","dir":"","previous_headings":"The performance workflow > Models comparison","what":"General index of model performance","title":"Assessment of Regression Models Performance","text":"One can also 
easily compute a composite index of model performance and sort the models from the best one to the worst.","code":"compare_performance(m1, m2, m3, m4, rank = TRUE, verbose = FALSE) #> # Comparison of Model Performance Indices #> #> Name | Model | RMSE | Sigma | AIC weights | AICc weights | BIC weights | Performance-Score #> ----------------------------------------------------------------------------------------------- #> m2 | glm | 0.359 | 1.000 | 1.000 | 1.000 | 1.000 | 100.00% #> m4 | glm | 3.043 | 1.000 | 2.96e-06 | 2.06e-10 | 1.63e-05 | 37.67% #> m1 | lm | 2.444 | 2.568 | 8.30e-28 | 6.07e-28 | 3.99e-28 | 36.92% #> m3 | lmerMod | 23.438 | 25.592 | 0.00e+00 | 0.00e+00 | 0.00e+00 | 0.00%"},{"path":"https://easystats.github.io/performance/index.html","id":"visualisation-of-indices-of-models-performance","dir":"","previous_headings":"The performance workflow > Models comparison","what":"Visualisation of indices of models’ performance","title":"Assessment of Regression Models Performance","text":"Finally, we provide a convenient visualisation (the see package must be installed).","code":"plot(compare_performance(m1, m2, m4, rank = TRUE, verbose = FALSE))"},{"path":"https://easystats.github.io/performance/index.html","id":"testing-models","dir":"","previous_headings":"The performance workflow","what":"Testing models","title":"Assessment of Regression Models Performance","text":"test_performance() (and test_bf, its Bayesian sister) carries out the most relevant and appropriate tests based on the input (for instance, whether the models are nested or not).","code":"set.seed(123) data(iris) lm1 <- lm(Sepal.Length ~ Species, data = iris) lm2 <- lm(Sepal.Length ~ Species + Petal.Length, data = iris) lm3 <- lm(Sepal.Length ~ Species * Sepal.Width, data = iris) lm4 <- lm(Sepal.Length ~ Species * Sepal.Width + Petal.Length + Petal.Width, data = iris) test_performance(lm1, lm2, lm3, lm4) #> Name | Model | BF | Omega2 | p (Omega2) | LR | p (LR) #> ------------------------------------------------------------ #> lm1 | lm | | | | | #> lm2 | lm | > 1000 | 0.69 | < .001 | 
-6.25 | < .001 #> lm3 | lm | > 1000 | 0.36 | < .001 | -3.44 | < .001 #> lm4 | lm | > 1000 | 0.73 | < .001 | -7.77 | < .001 #> Each model is compared to lm1. test_bf(lm1, lm2, lm3, lm4) #> Bayes Factors for Model Comparison #> #> Model BF #> [lm2] Species + Petal.Length 3.45e+26 #> [lm3] Species * Sepal.Width 4.69e+07 #> [lm4] Species * Sepal.Width + Petal.Length + Petal.Width 7.58e+29 #> #> * Against Denominator: [lm1] Species #> * Bayes Factor Type: BIC approximation"},{"path":"https://easystats.github.io/performance/index.html","id":"plotting-functions","dir":"","previous_headings":"The performance workflow","what":"Plotting Functions","title":"Assessment of Regression Models Performance","text":"Plotting functions are available in the see package.","code":""},{"path":"https://easystats.github.io/performance/index.html","id":"code-of-conduct","dir":"","previous_headings":"","what":"Code of Conduct","title":"Assessment of Regression Models Performance","text":"Please note that the performance project is released with a Contributor Code of Conduct. By contributing to this project, you agree to abide by its terms.","code":""},{"path":"https://easystats.github.io/performance/index.html","id":"contributing","dir":"","previous_headings":"","what":"Contributing","title":"Assessment of Regression Models Performance","text":"We are happy to receive bug reports, suggestions, questions, and (most of all) contributions to fix problems and add features. 
Please follow the contributing guidelines mentioned here: https://easystats.github.io/performance/CONTRIBUTING.html","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/reference/binned_residuals.html","id":null,"dir":"Reference","previous_headings":"","what":"Binned residuals for binomial logistic regression — binned_residuals","title":"Binned residuals for binomial logistic regression — binned_residuals","text":"Check the model quality of binomial logistic regression models.","code":""},{"path":"https://easystats.github.io/performance/reference/binned_residuals.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Binned residuals for binomial logistic regression — binned_residuals","text":"","code":"binned_residuals( model, term = NULL, n_bins = NULL, show_dots = NULL, ci = 0.95, ci_type = c(\"exact\", \"gaussian\", \"boot\"), residuals = c(\"deviance\", \"pearson\", \"response\"), iterations = 1000, verbose = TRUE, ... )"},{"path":"https://easystats.github.io/performance/reference/binned_residuals.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Binned residuals for binomial logistic regression — binned_residuals","text":"model A glm-object with a binomial-family. term Name of the independent variable from x. If NULL, the average residuals for the categories of term are plotted; else, the average residuals for the estimated probabilities of the response are plotted. n_bins Numeric, the number of bins to divide the data. If n_bins = NULL, the square root of the number of observations is taken. show_dots Logical, if TRUE, will show data points in the plot. Set to FALSE for models with many observations, if generating the plot is too time-consuming. By default, show_dots = NULL. In this case, binned_residuals() tries to guess whether performance will be poor due to a very large model and thus automatically shows or hides dots. ci Numeric, the confidence level for the error bounds. ci_type Character, the type of error bounds to calculate. Can be \"exact\" (default), \"gaussian\" or \"boot\". \"exact\" calculates the error bounds based on the exact binomial distribution, using binom.test(). 
\"gaussian\" uses Gaussian approximation, \"boot\" uses simple bootstrap method, confidence intervals calculated based quantiles bootstrap distribution. residuals Character, type residuals calculate. Can \"deviance\" (default), \"pearson\" \"response\". recommended use \"response\" models residuals available. iterations Integer, number iterations use bootstrap method. used ci_type = \"boot\". verbose Toggle warnings messages. ... Currently used.","code":""},{"path":"https://easystats.github.io/performance/reference/binned_residuals.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Binned residuals for binomial logistic regression — binned_residuals","text":"data frame representing data mapped accompanying plot. case residuals inside error bounds, points black. residuals outside error bounds (indicated grey-shaded area), blue points indicate residuals OK, red points indicate model - -fitting relevant range estimated probabilities.","code":""},{"path":"https://easystats.github.io/performance/reference/binned_residuals.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Binned residuals for binomial logistic regression — binned_residuals","text":"Binned residual plots achieved \"dividing data categories (bins) based fitted values, plotting average residual versus average fitted value bin.\" (Gelman, Hill 2007: 97). model true, one expect 95% residuals fall inside error bounds. term NULL, one can compare residuals relation specific model predictor. may helpful check term fit better transformed, e.g. rising falling pattern residuals along x-axis signal consider taking logarithm predictor (cf. Gelman Hill 2007, pp. 
97-98).","code":""},{"path":"https://easystats.github.io/performance/reference/binned_residuals.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Binned residuals for binomial logistic regression — binned_residuals","text":"binned_residuals() returns data frame, however, print() method returns short summary result. data frame used plotting. plot() method, turn, creates ggplot-object.","code":""},{"path":"https://easystats.github.io/performance/reference/binned_residuals.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Binned residuals for binomial logistic regression — binned_residuals","text":"Gelman, ., Hill, J. (2007). Data analysis using regression multilevel/hierarchical models. Cambridge; New York: Cambridge University Press.","code":""},{"path":"https://easystats.github.io/performance/reference/binned_residuals.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Binned residuals for binomial logistic regression — binned_residuals","text":"","code":"model <- glm(vs ~ wt + mpg, data = mtcars, family = \"binomial\") result <- binned_residuals(model) result #> Warning: Probably bad model fit. Only about 50% of the residuals are inside the error bounds. 
#> # look at the data frame as.data.frame(result) #> xbar ybar n x.lo x.hi se CI_low #> conf_int 0.03786483 -0.26905395 5 0.01744776 0.06917366 0.07079661 -0.5299658 #> conf_int1 0.09514191 -0.44334345 5 0.07087498 0.15160143 0.06530245 -0.7042553 #> conf_int2 0.25910531 0.03762945 6 0.17159955 0.35374001 1.02017708 -0.3293456 #> conf_int3 0.47954643 -0.19916717 5 0.38363314 0.54063600 1.16107852 -0.5994783 #> conf_int4 0.71108931 0.81563262 5 0.57299903 0.89141359 0.19814385 0.5547207 #> conf_int5 0.97119262 -0.23399465 6 0.91147360 0.99815623 0.77513642 -0.5525066 #> CI_high group #> conf_int -0.008142076 no #> conf_int1 -0.182431572 no #> conf_int2 0.404604465 yes #> conf_int3 0.201143953 yes #> conf_int4 1.076544495 no #> conf_int5 0.084517267 yes # \\donttest{ # plot if (require(\"see\")) { plot(result, show_dots = TRUE) } #> Loading required package: see # }"},{"path":"https://easystats.github.io/performance/reference/check_autocorrelation.html","id":null,"dir":"Reference","previous_headings":"","what":"Check model for independence of residuals. — check_autocorrelation","title":"Check model for independence of residuals. — check_autocorrelation","text":"Check model independence residuals, .e. autocorrelation error terms.","code":""},{"path":"https://easystats.github.io/performance/reference/check_autocorrelation.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Check model for independence of residuals. — check_autocorrelation","text":"","code":"check_autocorrelation(x, ...) # Default S3 method check_autocorrelation(x, nsim = 1000, ...)"},{"path":"https://easystats.github.io/performance/reference/check_autocorrelation.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Check model for independence of residuals. — check_autocorrelation","text":"x model object. ... Currently used. 
nsim Number simulations Durbin-Watson-Test.","code":""},{"path":"https://easystats.github.io/performance/reference/check_autocorrelation.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Check model for independence of residuals. — check_autocorrelation","text":"Invisibly returns p-value test statistics. p-value < 0.05 indicates autocorrelated residuals.","code":""},{"path":"https://easystats.github.io/performance/reference/check_autocorrelation.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Check model for independence of residuals. — check_autocorrelation","text":"Performs Durbin-Watson-Test check autocorrelated residuals. case autocorrelation, robust standard errors return accurate results estimates, maybe mixed model error term cluster groups used.","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/reference/check_autocorrelation.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Check model for independence of residuals. — check_autocorrelation","text":"","code":"m <- lm(mpg ~ wt + cyl + gear + disp, data = mtcars) check_autocorrelation(m) #> OK: Residuals appear to be independent and not autocorrelated (p = 0.306)."},{"path":"https://easystats.github.io/performance/reference/check_clusterstructure.html","id":null,"dir":"Reference","previous_headings":"","what":"Check suitability of data for clustering — check_clusterstructure","title":"Check suitability of data for clustering — check_clusterstructure","text":"checks whether data appropriate clustering using Hopkins' H statistic given data. value Hopkins statistic close 0 (0.5), can reject null hypothesis conclude dataset significantly clusterable. value H lower 0.25 indicates clustering tendency 90% confidence level. visual assessment cluster tendency (VAT) approach (Bezdek Hathaway, 2002) consists investigating heatmap ordered dissimilarity matrix. 
Following , one can potentially detect clustering tendency counting number square shaped blocks along diagonal.","code":""},{"path":"https://easystats.github.io/performance/reference/check_clusterstructure.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Check suitability of data for clustering — check_clusterstructure","text":"","code":"check_clusterstructure(x, standardize = TRUE, distance = \"euclidean\", ...)"},{"path":"https://easystats.github.io/performance/reference/check_clusterstructure.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Check suitability of data for clustering — check_clusterstructure","text":"x data frame. standardize Standardize data frame clustering (default). distance Distance method used. methods \"euclidean\" (default) exploratory context clustering tendency. See stats::dist() list available methods. ... Arguments passed methods.","code":""},{"path":"https://easystats.github.io/performance/reference/check_clusterstructure.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Check suitability of data for clustering — check_clusterstructure","text":"H statistic (numeric)","code":""},{"path":"https://easystats.github.io/performance/reference/check_clusterstructure.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Check suitability of data for clustering — check_clusterstructure","text":"Lawson, R. G., & Jurs, P. C. (1990). New index clustering tendency application chemical problems. Journal chemical information computer sciences, 30(1), 36-41. Bezdek, J. C., & Hathaway, R. J. (2002, May). VAT: tool visual assessment (cluster) tendency. Proceedings 2002 International Joint Conference Neural Networks. IJCNN02 (3), 2225-2230. 
IEEE.","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/reference/check_clusterstructure.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Check suitability of data for clustering — check_clusterstructure","text":"","code":"# \\donttest{ library(performance) check_clusterstructure(iris[, 1:4]) #> # Clustering tendency #> #> The dataset is suitable for clustering (Hopkins' H = 0.20). #> plot(check_clusterstructure(iris[, 1:4])) # }"},{"path":"https://easystats.github.io/performance/reference/check_collinearity.html","id":null,"dir":"Reference","previous_headings":"","what":"Check for multicollinearity of model terms — check_collinearity","title":"Check for multicollinearity of model terms — check_collinearity","text":"check_collinearity() checks regression models multicollinearity calculating variance inflation factor (VIF). multicollinearity() alias check_collinearity(). check_concurvity() wrapper around mgcv::concurvity(), can considered collinearity check smooth terms GAMs. Confidence intervals VIF tolerance based Marcoulides et al. (2019, Appendix B).","code":""},{"path":"https://easystats.github.io/performance/reference/check_collinearity.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Check for multicollinearity of model terms — check_collinearity","text":"","code":"check_collinearity(x, ...) multicollinearity(x, ...) # Default S3 method check_collinearity(x, ci = 0.95, verbose = TRUE, ...) # S3 method for class 'glmmTMB' check_collinearity( x, component = c(\"all\", \"conditional\", \"count\", \"zi\", \"zero_inflated\"), ci = 0.95, verbose = TRUE, ... 
) check_concurvity(x, ...)"},{"path":"https://easystats.github.io/performance/reference/check_collinearity.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Check for multicollinearity of model terms — check_collinearity","text":"x model object (least respond vcov(), possible, also model.matrix() - however, also work without model.matrix()). ... Currently used. ci Confidence Interval (CI) level VIF tolerance values. verbose Toggle warnings messages. component models zero-inflation component, multicollinearity can checked conditional model (count component, component = \"conditional\" component = \"count\"), zero-inflation component (component = \"zero_inflated\" component = \"zi\") components (component = \"\"). Following model-classes currently supported: hurdle, zeroinfl, zerocount, MixMod glmmTMB.","code":""},{"path":"https://easystats.github.io/performance/reference/check_collinearity.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Check for multicollinearity of model terms — check_collinearity","text":"data frame information name model term, variance inflation factor associated confidence intervals, factor standard error increased due possible correlation terms, tolerance values (including confidence intervals), tolerance = 1/vif.","code":""},{"path":"https://easystats.github.io/performance/reference/check_collinearity.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Check for multicollinearity of model terms — check_collinearity","text":"code compute confidence intervals VIF tolerance values adapted Appendix B Marcoulides et al. paper. Thus, credits go authors original algorithm. 
also plot()-method implemented see-package.","code":""},{"path":"https://easystats.github.io/performance/reference/check_collinearity.html","id":"multicollinearity","dir":"Reference","previous_headings":"","what":"Multicollinearity","title":"Check for multicollinearity of model terms — check_collinearity","text":"Multicollinearity confused raw strong correlation predictors. matters association one predictor variables, conditional variables model. nutshell, multicollinearity means know effect one predictor, value knowing predictor rather low. Thus, one predictors help much terms better understanding model predicting outcome. consequence, multicollinearity problem, model seems suggest predictors question seems reliably associated outcome (low estimates, high standard errors), although predictors actually strongly associated outcome, .e. indeed might strong effect (McElreath 2020, chapter 6.1). Multicollinearity might arise third, unobserved variable causal effect two predictors associated outcome. cases, actual relationship matters association unobserved variable outcome. Remember: \"Pairwise correlations problem. conditional associations - correlations - matter.\" (McElreath 2020, p. 169)","code":""},{"path":"https://easystats.github.io/performance/reference/check_collinearity.html","id":"interpretation-of-the-variance-inflation-factor","dir":"Reference","previous_headings":"","what":"Interpretation of the Variance Inflation Factor","title":"Check for multicollinearity of model terms — check_collinearity","text":"variance inflation factor measure analyze magnitude multicollinearity model terms. VIF less 5 indicates low correlation predictor predictors. value 5 10 indicates moderate correlation, VIF values larger 10 sign high, tolerable correlation model predictors (James et al. 2013). Increased SE column output indicates much larger standard error due association predictors conditional remaining variables model. 
Note thresholds, although commonly used, also criticized high. Zuur et al. (2010) suggest using lower values, e.g. VIF 3 larger may already longer considered \"low\".","code":""},{"path":"https://easystats.github.io/performance/reference/check_collinearity.html","id":"multicollinearity-and-interaction-terms","dir":"Reference","previous_headings":"","what":"Multicollinearity and Interaction Terms","title":"Check for multicollinearity of model terms — check_collinearity","text":"interaction terms included model, high VIF values expected. portion multicollinearity among component terms interaction also called \"inessential ill-conditioning\", leads inflated VIF values typically seen models interaction terms (Francoeur 2013). Centering interaction terms can resolve issue (Kim Jung 2024).","code":""},{"path":"https://easystats.github.io/performance/reference/check_collinearity.html","id":"multicollinearity-and-polynomial-terms","dir":"Reference","previous_headings":"","what":"Multicollinearity and Polynomial Terms","title":"Check for multicollinearity of model terms — check_collinearity","text":"Polynomial transformations considered single term thus VIFs calculated .","code":""},{"path":"https://easystats.github.io/performance/reference/check_collinearity.html","id":"concurvity-for-smooth-terms-in-generalized-additive-models","dir":"Reference","previous_headings":"","what":"Concurvity for Smooth Terms in Generalized Additive Models","title":"Check for multicollinearity of model terms — check_collinearity","text":"check_concurvity() wrapper around mgcv::concurvity(), can considered collinearity check smooth terms GAMs.\"Concurvity occurs smooth term model approximated one smooth terms model.\" (see ?mgcv::concurvity). check_concurvity() returns column named VIF, \"worst\" measure. mgcv::concurvity() range 0 1, VIF value 1 / (1 - worst), make interpretation comparable classical VIF values, .e. 1 indicates problems, higher values indicate increasing lack identifiability. 
VIF proportion column equals \"estimate\" column mgcv::concurvity(), ranging 0 (problem) 1 (total lack identifiability).","code":""},{"path":"https://easystats.github.io/performance/reference/check_collinearity.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Check for multicollinearity of model terms — check_collinearity","text":"Francoeur, R. B. (2013). Sequential Residual Centering Resolve Low Sensitivity Moderated Regression? Simulations Cancer Symptom Clusters. Open Journal Statistics, 03(06), 24-44. James, G., Witten, D., Hastie, T., Tibshirani, R. (eds.). (2013). introduction statistical learning: applications R. New York: Springer. Kim, Y., & Jung, G. (2024). Understanding linear interaction analysis causal graphs. British Journal Mathematical Statistical Psychology, 00, 1–14. Marcoulides, K. M., Raykov, T. (2019). Evaluation Variance Inflation Factors Regression Models Using Latent Variable Modeling Methods. Educational Psychological Measurement, 79(5), 874–882. McElreath, R. (2020). Statistical rethinking: Bayesian course examples R Stan. 2nd edition. Chapman Hall/CRC. Vanhove, J. (2019). Collinearity disease needs curing. webpage Zuur AF, Ieno EN, Elphick CS. protocol data exploration avoid common statistical problems: Data exploration. 
Methods Ecology Evolution (2010) 1:3–14.","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/reference/check_collinearity.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Check for multicollinearity of model terms — check_collinearity","text":"","code":"m <- lm(mpg ~ wt + cyl + gear + disp, data = mtcars) check_collinearity(m) #> # Check for Multicollinearity #> #> Low Correlation #> #> Term VIF VIF 95% CI Increased SE Tolerance Tolerance 95% CI #> gear 1.53 [1.19, 2.51] 1.24 0.65 [0.40, 0.84] #> #> Moderate Correlation #> #> Term VIF VIF 95% CI Increased SE Tolerance Tolerance 95% CI #> wt 5.05 [3.21, 8.41] 2.25 0.20 [0.12, 0.31] #> cyl 5.41 [3.42, 9.04] 2.33 0.18 [0.11, 0.29] #> disp 9.97 [6.08, 16.85] 3.16 0.10 [0.06, 0.16] # plot results x <- check_collinearity(m) plot(x)"},{"path":"https://easystats.github.io/performance/reference/check_convergence.html","id":null,"dir":"Reference","previous_headings":"","what":"Convergence test for mixed effects models — check_convergence","title":"Convergence test for mixed effects models — check_convergence","text":"check_convergence() provides alternative convergence test merMod-objects.","code":""},{"path":"https://easystats.github.io/performance/reference/check_convergence.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Convergence test for mixed effects models — check_convergence","text":"","code":"check_convergence(x, tolerance = 0.001, ...)"},{"path":"https://easystats.github.io/performance/reference/check_convergence.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Convergence test for mixed effects models — check_convergence","text":"x merMod glmmTMB-object. tolerance Indicates value convergence result accepted. smaller tolerance , stricter test . ... 
Currently used.","code":""},{"path":"https://easystats.github.io/performance/reference/check_convergence.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Convergence test for mixed effects models — check_convergence","text":"TRUE convergence fine FALSE convergence suspicious. Additionally, convergence value returned attribute.","code":""},{"path":"https://easystats.github.io/performance/reference/check_convergence.html","id":"convergence-and-log-likelihood","dir":"Reference","previous_headings":"","what":"Convergence and log-likelihood","title":"Convergence test for mixed effects models — check_convergence","text":"Convergence problems typically arise model converged solution log-likelihood true maximum. may result unreliable overly complex (non-estimable) estimates standard errors.","code":""},{"path":"https://easystats.github.io/performance/reference/check_convergence.html","id":"inspect-model-convergence","dir":"Reference","previous_headings":"","what":"Inspect model convergence","title":"Convergence test for mixed effects models — check_convergence","text":"lme4 performs convergence-check (see ?lme4::convergence), however, discussed suggested one lme4-authors comment, check can strict. check_convergence() thus provides alternative convergence test merMod-objects.","code":""},{"path":"https://easystats.github.io/performance/reference/check_convergence.html","id":"resolving-convergence-issues","dir":"Reference","previous_headings":"","what":"Resolving convergence issues","title":"Convergence test for mixed effects models — check_convergence","text":"Convergence issues easy diagnose. help page ?lme4::convergence provides current advice resolve convergence issues. Another clue might large parameter values, e.g. estimates (scale linear predictor) larger 10 (non-identity link) generalized linear model might indicate complete separation. Complete separation can addressed regularization, e.g. 
penalized regression Bayesian regression appropriate priors fixed effects.","code":""},{"path":"https://easystats.github.io/performance/reference/check_convergence.html","id":"convergence-versus-singularity","dir":"Reference","previous_headings":"","what":"Convergence versus Singularity","title":"Convergence test for mixed effects models — check_convergence","text":"Note different meaning singularity convergence: singularity indicates issue \"true\" best estimate, .e. whether maximum likelihood estimation variance-covariance matrix random effects positive definite semi-definite. Convergence question whether can assume numerical optimization worked correctly .","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/reference/check_convergence.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Convergence test for mixed effects models — check_convergence","text":"","code":"data(cbpp, package = \"lme4\") set.seed(1) cbpp$x <- rnorm(nrow(cbpp)) cbpp$x2 <- runif(nrow(cbpp)) model <- lme4::glmer( cbind(incidence, size - incidence) ~ period + x + x2 + (1 + x | herd), data = cbpp, family = binomial() ) check_convergence(model) #> [1] TRUE #> attr(,\"gradient\") #> [1] 0.0002803063 # \\donttest{ model <- suppressWarnings(glmmTMB::glmmTMB( Sepal.Length ~ poly(Petal.Width, 4) * poly(Petal.Length, 4) + (1 + poly(Petal.Width, 4) | Species), data = iris )) check_convergence(model) #> [1] FALSE # }"},{"path":"https://easystats.github.io/performance/reference/check_dag.html","id":null,"dir":"Reference","previous_headings":"","what":"Check correct model adjustment for identifying causal effects — check_dag","title":"Check correct model adjustment for identifying causal effects — check_dag","text":"purpose check_dag() build, check visualize model based directed acyclic graphs (DAG). 
function checks model correctly adjusted identifying specific relationships variables, especially directed (maybe also \"causal\") effects given exposures outcome. case incorrect adjustments, function suggests minimal required variables adjusted (sometimes also called \"controlled \"), .e. variables least need included model. Depending goal analysis, still possible add variables model just minimally required adjustment sets. check_dag() convenient wrapper around ggdag::dagify(), dagitty::adjustmentSets() dagitty::adjustedNodes() check correct adjustment sets. returns dagitty object can visualized plot(). .dag() small convenient function return dagitty-string, can used online-tool dagitty-website.","code":""},{"path":"https://easystats.github.io/performance/reference/check_dag.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Check correct model adjustment for identifying causal effects — check_dag","text":"","code":"check_dag( ..., outcome = NULL, exposure = NULL, adjusted = NULL, latent = NULL, effect = c(\"all\", \"total\", \"direct\"), coords = NULL ) as.dag(x, ...)"},{"path":"https://easystats.github.io/performance/reference/check_dag.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Check correct model adjustment for identifying causal effects — check_dag","text":"... One formulas, converted dagitty syntax. First element may also model object. model objects provided, formula used first formula, independent variables used adjusted argument. See 'Details' 'Examples'. outcome Name dependent variable (outcome), character string formula. Must valid name formulas provided .... set, first dependent variable formulas used. exposure Name exposure variable (character string formula), direct total causal effect outcome checked. Must valid name formulas provided .... set, first independent variable formulas used. adjusted character vector formula names variables adjusted model, e.g. 
adjusted = c(\"x1\", \"x2\") adjusted = ~ x1 + x2. model object provided ..., values adjusted overwritten model's independent variables. latent character vector names latent variables model. effect Character string, indicating effect check. Can \"\" (default), \"total\", \"direct\". coords Coordinates variables plotting DAG. coordinates can provided three different ways: list two elements, x y, named vectors numerics. names correspond variable names DAG, values x y indicate x/y coordinates plot. list elements correspond variables DAG. element numeric vector length two x- y-coordinate. data frame three columns: x, y name (contains variable names). See 'Examples'. x object class check_dag, returned check_dag().","code":""},{"path":"https://easystats.github.io/performance/reference/check_dag.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Check correct model adjustment for identifying causal effects — check_dag","text":"object class check_dag, can visualized plot(). returned object also inherits class dagitty thus can used functions ggdag dagitty packages.","code":""},{"path":"https://easystats.github.io/performance/reference/check_dag.html","id":"specifying-the-dag-formulas","dir":"Reference","previous_headings":"","what":"Specifying the DAG formulas","title":"Check correct model adjustment for identifying causal effects — check_dag","text":"formulas following syntax: One-directed paths: left-hand-side name variables causal effects point (direction arrows, dagitty-language). right-hand-side variables causal effects assumed come . example, formula Y ~ X1 + X2, paths directed X1 X2 Y assumed. Bi-directed paths: Use ~~ indicate bi-directed paths. example, Y ~~ X indicates path Y X bi-directed, arrow points directions. 
Bi-directed paths often indicate unmeasured cause, unmeasured confounding, two involved variables.","code":""},{"path":"https://easystats.github.io/performance/reference/check_dag.html","id":"minimally-required-adjustments","dir":"Reference","previous_headings":"","what":"Minimally required adjustments","title":"Check correct model adjustment for identifying causal effects — check_dag","text":"function checks model correctly adjusted identifying direct total effects exposure outcome. model correctly specified, adjustment needed estimate direct effect. model correctly specified, function suggests minimally required variables adjusted . function distinguishes direct total effects, checks model correctly adjusted . model cyclic, function stops suggests remove cycles model. Note sometimes necessary try different combinations suggested adjustments, check_dag() can always detect whether least one several variables required, whether adjustments done listed variables. can useful copy dagitty-code (using .dag(), prints dagitty-string console) dagitty-website play around different adjustments.","code":""},{"path":"https://easystats.github.io/performance/reference/check_dag.html","id":"direct-and-total-effects","dir":"Reference","previous_headings":"","what":"Direct and total effects","title":"Check correct model adjustment for identifying causal effects — check_dag","text":"direct effect exposure outcome effect mediated variable model. total effect sum direct indirect effects. 
function checks model correctly adjusted identifying direct total effects exposure outcome.","code":""},{"path":"https://easystats.github.io/performance/reference/check_dag.html","id":"why-are-dags-important-the-table-fallacy","dir":"Reference","previous_headings":"","what":"Why are DAGs important - the Table 2 fallacy","title":"Check correct model adjustment for identifying causal effects — check_dag","text":"Correctly thinking identifying relationships variables important comes reporting coefficients regression models mutually adjust \"confounders\" include covariates. Different coefficients might different interpretations, depending relationship variables model. Sometimes, regression coefficient represents direct effect exposure outcome, sometimes must interpreted total effect, due involvement mediating effects. problem also called \"Table 2 fallacy\" (Westreich Greenland 2013). DAG helps visualizing thereby focusing relationships variables regression model detect missing adjustments -adjustment.","code":""},{"path":"https://easystats.github.io/performance/reference/check_dag.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Check correct model adjustment for identifying causal effects — check_dag","text":"Rohrer, J. M. (2018). Thinking clearly correlations causation: Graphical causal models observational data. Advances Methods Practices Psychological Science, 1(1), 27–42. doi:10.1177/2515245917745629 Westreich, D., & Greenland, S. (2013). Table 2 Fallacy: Presenting Interpreting Confounder Modifier Coefficients. American Journal Epidemiology, 177(4), 292–298. 
doi:10.1093/aje/kws412","code":""},{"path":"https://easystats.github.io/performance/reference/check_dag.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Check correct model adjustment for identifying causal effects — check_dag","text":"","code":"# no adjustment needed check_dag( y ~ x + b, outcome = \"y\", exposure = \"x\" ) #> # Check for correct adjustment sets #> - Outcome: y #> - Exposure: x #> #> Identification of direct and total effects #> #> Model is correctly specified. #> No adjustment needed to estimate the direct and total effect of `x` on `y`. #> # incorrect adjustment dag <- check_dag( y ~ x + b + c, x ~ b, outcome = \"y\", exposure = \"x\" ) dag #> # Check for correct adjustment sets #> - Outcome: y #> - Exposure: x #> #> Identification of direct and total effects #> #> Incorrectly adjusted! #> To estimate the direct and total effect, at least adjust for `b`. Currently, the model does not adjust for any variables. #> plot(dag) # After adjusting for `b`, the model is correctly specified dag <- check_dag( y ~ x + b + c, x ~ b, outcome = \"y\", exposure = \"x\", adjusted = \"b\" ) dag #> # Check for correct adjustment sets #> - Outcome: y #> - Exposure: x #> - Adjustment: b #> #> Identification of direct and total effects #> #> Model is correctly specified. #> All minimal sufficient adjustments to estimate the direct and total effect were done. #> # using formula interface for arguments \"outcome\", \"exposure\" and \"adjusted\" check_dag( y ~ x + b + c, x ~ b, outcome = ~y, exposure = ~x, adjusted = ~ b + c ) #> # Check for correct adjustment sets #> - Outcome: y #> - Exposure: x #> - Adjustments: b and c #> #> Identification of direct and total effects #> #> Model is correctly specified. #> All minimal sufficient adjustments to estimate the direct and total effect were done. 
#> # if not provided, \"outcome\" is taken from first formula, same for \"exposure\" # thus, we can simplify the above expression to check_dag( y ~ x + b + c, x ~ b, adjusted = ~ b + c ) #> # Check for correct adjustment sets #> - Outcome: y #> - Exposure: x #> - Adjustments: b and c #> #> Identification of direct and total effects #> #> Model is correctly specified. #> All minimal sufficient adjustments to estimate the direct and total effect were done. #> # use specific layout for the DAG dag <- check_dag( score ~ exp + b + c, exp ~ b, outcome = \"score\", exposure = \"exp\", coords = list( # x-coordinates for all nodes x = c(score = 5, exp = 4, b = 3, c = 3), # y-coordinates for all nodes y = c(score = 3, exp = 3, b = 2, c = 4) ) ) plot(dag) # alternative way of providing the coordinates dag <- check_dag( score ~ exp + b + c, exp ~ b, outcome = \"score\", exposure = \"exp\", coords = list( # x/y coordinates for each node score = c(5, 3), exp = c(4, 3), b = c(3, 2), c = c(3, 4) ) ) plot(dag) # Objects returned by `check_dag()` can be used with \"ggdag\" or \"dagitty\" ggdag::ggdag_status(dag) # Using a model object to extract information about outcome, # exposure and adjusted variables data(mtcars) m <- lm(mpg ~ wt + gear + disp + cyl, data = mtcars) dag <- check_dag( m, wt ~ disp + cyl, wt ~ am ) dag #> # Check for correct adjustment sets #> - Outcome: mpg #> - Exposure: wt #> - Adjustments: cyl, disp and gear #> #> Identification of direct and total effects #> #> Model is correctly specified. #> All minimal sufficient adjustments to estimate the direct and total effect were done. 
#> plot(dag)"},{"path":"https://easystats.github.io/performance/reference/check_distribution.html","id":null,"dir":"Reference","previous_headings":"","what":"Classify the distribution of a model-family using machine learning — check_distribution","title":"Classify the distribution of a model-family using machine learning — check_distribution","text":"Choosing right distributional family regression models essential get accurate estimates standard errors. function may help check models' distributional family see model-family probably reconsidered. Since difficult exactly predict correct model family, consider function somewhat experimental.","code":""},{"path":"https://easystats.github.io/performance/reference/check_distribution.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Classify the distribution of a model-family using machine learning — check_distribution","text":"","code":"check_distribution(model)"},{"path":"https://easystats.github.io/performance/reference/check_distribution.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Classify the distribution of a model-family using machine learning — check_distribution","text":"model Typically, model (response residuals()). May also numeric vector.","code":""},{"path":"https://easystats.github.io/performance/reference/check_distribution.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Classify the distribution of a model-family using machine learning — check_distribution","text":"function uses internal random forest model classify distribution model-family. Currently, following distributions trained (.e. 
results check_distribution() may one following): \"bernoulli\", \"beta\", \"beta-binomial\", \"binomial\", \"cauchy\", \"chi\", \"exponential\", \"F\", \"gamma\", \"half-cauchy\", \"inverse-gamma\", \"lognormal\", \"normal\", \"negative binomial\", \"negative binomial (zero-inflated)\", \"pareto\", \"poisson\", \"poisson (zero-inflated)\", \"tweedie\", \"uniform\" \"weibull\". Note similarity certain distributions according shape, skewness, etc. Thus, predicted distribution may not perfectly representing distributional family underlying fitted model, response value. plot() method, shows probabilities predicted distributions, however, probability greater zero.","code":""},{"path":"https://easystats.github.io/performance/reference/check_distribution.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Classify the distribution of a model-family using machine learning — check_distribution","text":"function somewhat experimental might improved future releases. final decision model-family also based theoretical aspects information data model. also plot()-method implemented see-package.","code":""},{"path":"https://easystats.github.io/performance/reference/check_distribution.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Classify the distribution of a model-family using machine learning — check_distribution","text":"","code":"data(sleepstudy, package = \"lme4\") model <<- lme4::lmer(Reaction ~ Days + (Days | Subject), sleepstudy) check_distribution(model) #> # Distribution of Model Family #> #> Predicted Distribution of Residuals #> #> Distribution Probability #> cauchy 91% #> gamma 6% #> neg. binomial (zero-infl.)
3% #> #> Predicted Distribution of Response #> #> Distribution Probability #> lognormal 66% #> gamma 34% plot(check_distribution(model))"},{"path":"https://easystats.github.io/performance/reference/check_factorstructure.html","id":null,"dir":"Reference","previous_headings":"","what":"Check suitability of data for Factor Analysis (FA) with Bartlett's Test of Sphericity and KMO — check_factorstructure","title":"Check suitability of data for Factor Analysis (FA) with Bartlett's Test of Sphericity and KMO — check_factorstructure","text":"checks whether data appropriate Factor Analysis (FA) running Bartlett's Test Sphericity Kaiser, Meyer, Olkin (KMO) Measure Sampling Adequacy (MSA). See details information interpretation meaning test.","code":""},{"path":"https://easystats.github.io/performance/reference/check_factorstructure.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Check suitability of data for Factor Analysis (FA) with Bartlett's Test of Sphericity and KMO — check_factorstructure","text":"","code":"check_factorstructure(x, n = NULL, ...) check_kmo(x, n = NULL, ...) check_sphericity_bartlett(x, n = NULL, ...)"},{"path":"https://easystats.github.io/performance/reference/check_factorstructure.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Check suitability of data for Factor Analysis (FA) with Bartlett's Test of Sphericity and KMO — check_factorstructure","text":"x data frame correlation matrix. latter passed, n must provided. n correlation matrix passed, number observations must specified. ... 
Arguments passed methods.","code":""},{"path":"https://easystats.github.io/performance/reference/check_factorstructure.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Check suitability of data for Factor Analysis (FA) with Bartlett's Test of Sphericity and KMO — check_factorstructure","text":"list lists indices related sphericity KMO.","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/reference/check_factorstructure.html","id":"bartlett-s-test-of-sphericity","dir":"Reference","previous_headings":"","what":"Bartlett's Test of Sphericity","title":"Check suitability of data for Factor Analysis (FA) with Bartlett's Test of Sphericity and KMO — check_factorstructure","text":"Bartlett's (1951) test sphericity tests whether matrix (correlations) significantly different identity matrix (filled 0). tests whether correlation coefficients 0. test computes probability correlation matrix significant correlations among least variables dataset, prerequisite factor analysis work. often suggested check whether Bartlett’s test sphericity significant starting factor analysis, one needs remember test testing pretty extreme scenario (correlations non-significant). sample size increases, test tends always significant, makes not particularly useful informative well-powered studies.","code":""},{"path":"https://easystats.github.io/performance/reference/check_factorstructure.html","id":"kaiser-meyer-olkin-kmo-","dir":"Reference","previous_headings":"","what":"Kaiser, Meyer, Olkin (KMO)","title":"Check suitability of data for Factor Analysis (FA) with Bartlett's Test of Sphericity and KMO — check_factorstructure","text":"(Measure Sampling Adequacy (MSA) Factor Analysis.) Kaiser (1970) introduced Measure Sampling Adequacy (MSA), later modified Kaiser Rice (1974). Kaiser-Meyer-Olkin (KMO) statistic, can vary 0 1, indicates degree variable set predicted without error variables.
value 0 indicates sum partial correlations large relative sum correlations, indicating factor analysis likely inappropriate. KMO value close 1 indicates sum partial correlations not large relative sum correlations factor analysis yield distinct reliable factors. means patterns correlations relatively compact, factor analysis yield distinct reliable factors. Values smaller 0.5 suggest either collect data rethink variables include. Kaiser (1974) suggested KMO > .9 marvelous, .80s, meritorious, .70s, middling, .60s, mediocre, .50s, miserable, less .5, unacceptable. Hair et al. (2006) suggest accepting value > 0.5. Values 0.5 0.7 mediocre, values 0.7 0.8 good. Variables individual KMO values 0.5 considered exclusion analysis (note need re-compute KMO indices dependent whole dataset).","code":""},{"path":"https://easystats.github.io/performance/reference/check_factorstructure.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Check suitability of data for Factor Analysis (FA) with Bartlett's Test of Sphericity and KMO — check_factorstructure","text":"function wrapper around KMO cortest.bartlett() functions psych package (Revelle, 2016). Revelle, W. (2016). How To: Use psych package Factor Analysis data reduction. Bartlett, M. S. (1951). effect standardization Chi-square approximation factor analysis. Biometrika, 38(3/4), 337-344. Kaiser, H. F. (1970). second generation little jiffy. Psychometrika, 35(4), 401-415. Kaiser, H. F., & Rice, J. (1974). Little jiffy, mark IV. Educational psychological measurement, 34(1), 111-117. Kaiser, H. F. (1974). index factorial simplicity.
Psychometrika, 39(1), 31-36.","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/reference/check_factorstructure.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Check suitability of data for Factor Analysis (FA) with Bartlett's Test of Sphericity and KMO — check_factorstructure","text":"","code":"library(performance) check_factorstructure(mtcars) #> # Is the data suitable for Factor Analysis? #> #> #> - Sphericity: Bartlett's test of sphericity suggests that there is sufficient significant correlation in the data for factor analysis (Chisq(55) = 408.01, p < .001). #> - KMO: The Kaiser, Meyer, Olkin (KMO) overall measure of sampling adequacy suggests that data seems appropriate for factor analysis (KMO = 0.83). The individual KMO scores are: mpg (0.93), cyl (0.90), disp (0.76), hp (0.84), drat (0.95), wt (0.74), qsec (0.74), vs (0.91), am (0.88), gear (0.85), carb (0.62). # One can also pass a correlation matrix r <- cor(mtcars) check_factorstructure(r, n = nrow(mtcars)) #> # Is the data suitable for Factor Analysis? #> #> #> - Sphericity: Bartlett's test of sphericity suggests that there is sufficient significant correlation in the data for factor analysis (Chisq(55) = 408.01, p < .001). #> - KMO: The Kaiser, Meyer, Olkin (KMO) overall measure of sampling adequacy suggests that data seems appropriate for factor analysis (KMO = 0.83). The individual KMO scores are: mpg (0.93), cyl (0.90), disp (0.76), hp (0.84), drat (0.95), wt (0.74), qsec (0.74), vs (0.91), am (0.88), gear (0.85), carb (0.62)."},{"path":"https://easystats.github.io/performance/reference/check_heterogeneity_bias.html","id":null,"dir":"Reference","previous_headings":"","what":"Check model predictor for heterogeneity bias — check_heterogeneity_bias","title":"Check model predictor for heterogeneity bias — check_heterogeneity_bias","text":"check_heterogeneity_bias() checks model predictors variables may cause heterogeneity bias, .e. 
variables within- /-effect (Bell Jones, 2015).","code":""},{"path":"https://easystats.github.io/performance/reference/check_heterogeneity_bias.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Check model predictor for heterogeneity bias — check_heterogeneity_bias","text":"","code":"check_heterogeneity_bias(x, select = NULL, by = NULL, nested = FALSE)"},{"path":"https://easystats.github.io/performance/reference/check_heterogeneity_bias.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Check model predictor for heterogeneity bias — check_heterogeneity_bias","text":"x data frame mixed model object. select Character vector (formula) names variables select checked. x mixed model object, argument ignored. by Character vector (formula) name variable indicates group- cluster-ID. cross-classified nested designs, by can also identify two variables group- cluster-IDs. data nested treated, set nested = TRUE. Else, by defines two variables nested = FALSE, cross-classified design assumed. x model object, argument ignored. nested designs, by can: character vector name variable indicates levels, ordered highest level lowest (e.g. by = c(\"L4\", \"L3\", \"L2\")). character vector variable names format by = \"L4/L3/L2\", levels separated /. See also section De-meaning cross-classified designs De-meaning nested designs. nested Logical, TRUE, data treated nested. FALSE, data treated cross-classified. Only applies if by contains more than one variable.","code":""},{"path":"https://easystats.github.io/performance/reference/check_heterogeneity_bias.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Check model predictor for heterogeneity bias — check_heterogeneity_bias","text":"Bell A., Jones K. 2015. Explaining Fixed Effects: Random Effects Modeling Time-Series Cross-Sectional Panel Data.
Political Science Research Methods, 3(1), 133–153.","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/reference/check_heterogeneity_bias.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Check model predictor for heterogeneity bias — check_heterogeneity_bias","text":"","code":"data(iris) iris$ID <- sample(1:4, nrow(iris), replace = TRUE) # fake-ID check_heterogeneity_bias(iris, select = c(\"Sepal.Length\", \"Petal.Length\"), by = \"ID\") #> Possible heterogeneity bias due to following predictors: Sepal.Length, Petal.Length"},{"path":"https://easystats.github.io/performance/reference/check_heteroscedasticity.html","id":null,"dir":"Reference","previous_headings":"","what":"Check model for (non-)constant error variance — check_heteroscedasticity","title":"Check model for (non-)constant error variance — check_heteroscedasticity","text":"Significance testing linear regression models assumes model errors (residuals) constant variance. assumption violated p-values model no longer reliable.","code":""},{"path":"https://easystats.github.io/performance/reference/check_heteroscedasticity.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Check model for (non-)constant error variance — check_heteroscedasticity","text":"","code":"check_heteroscedasticity(x, ...) check_heteroskedasticity(x, ...)"},{"path":"https://easystats.github.io/performance/reference/check_heteroscedasticity.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Check model for (non-)constant error variance — check_heteroscedasticity","text":"x model object. ... Currently not used.","code":""},{"path":"https://easystats.github.io/performance/reference/check_heteroscedasticity.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Check model for (non-)constant error variance — check_heteroscedasticity","text":"p-value test statistics.
p-value < 0.05 indicates non-constant variance (heteroskedasticity).","code":""},{"path":"https://easystats.github.io/performance/reference/check_heteroscedasticity.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Check model for (non-)constant error variance — check_heteroscedasticity","text":"test hypothesis (non-)constant error also called Breusch-Pagan test (1979).","code":""},{"path":"https://easystats.github.io/performance/reference/check_heteroscedasticity.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Check model for (non-)constant error variance — check_heteroscedasticity","text":"also plot()-method implemented see-package.","code":""},{"path":"https://easystats.github.io/performance/reference/check_heteroscedasticity.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Check model for (non-)constant error variance — check_heteroscedasticity","text":"Breusch, T. S., Pagan, A. R. (1979) simple test heteroscedasticity random coefficient variation. Econometrica 47, 1287-1294.","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/reference/check_heteroscedasticity.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Check model for (non-)constant error variance — check_heteroscedasticity","text":"","code":"m <<- lm(mpg ~ wt + cyl + gear + disp, data = mtcars) check_heteroscedasticity(m) #> Warning: Heteroscedasticity (non-constant error variance) detected (p = 0.042).
#> # plot results if (require(\"see\")) { x <- check_heteroscedasticity(m) plot(x) }"},{"path":"https://easystats.github.io/performance/reference/check_homogeneity.html","id":null,"dir":"Reference","previous_headings":"","what":"Check model for homogeneity of variances — check_homogeneity","title":"Check model for homogeneity of variances — check_homogeneity","text":"Check model homogeneity variances groups described independent variables model.","code":""},{"path":"https://easystats.github.io/performance/reference/check_homogeneity.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Check model for homogeneity of variances — check_homogeneity","text":"","code":"check_homogeneity(x, method = c(\"bartlett\", \"fligner\", \"levene\", \"auto\"), ...) # S3 method for class 'afex_aov' check_homogeneity(x, method = \"levene\", ...)"},{"path":"https://easystats.github.io/performance/reference/check_homogeneity.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Check model for homogeneity of variances — check_homogeneity","text":"x linear model ANOVA object. method Name method (underlying test) performed check homogeneity variances. May either \"levene\" Levene's Test Homogeneity Variance, \"bartlett\" Bartlett test (assuming normal distributed samples groups), \"fligner\" Fligner-Killeen test (rank-based, non-parametric test), \"auto\". latter case, Bartlett test used model response normal distributed, else Fligner-Killeen test used. ... Arguments passed car::leveneTest().","code":""},{"path":"https://easystats.github.io/performance/reference/check_homogeneity.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Check model for homogeneity of variances — check_homogeneity","text":"Invisibly returns p-value test statistics. 
p-value < 0.05 indicates significant difference variance groups.","code":""},{"path":"https://easystats.github.io/performance/reference/check_homogeneity.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Check model for homogeneity of variances — check_homogeneity","text":"also plot()-method implemented see-package.","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/reference/check_homogeneity.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Check model for homogeneity of variances — check_homogeneity","text":"","code":"model <<- lm(len ~ supp + dose, data = ToothGrowth) check_homogeneity(model) #> OK: There is not clear evidence for different variances across groups (Bartlett Test, p = 0.226). #> # plot results if (require(\"see\")) { result <- check_homogeneity(model) plot(result) }"},{"path":"https://easystats.github.io/performance/reference/check_itemscale.html","id":null,"dir":"Reference","previous_headings":"","what":"Describe Properties of Item Scales — check_itemscale","title":"Describe Properties of Item Scales — check_itemscale","text":"Compute various measures internal consistencies applied (sub)scales, items extracted using parameters::principal_components().","code":""},{"path":"https://easystats.github.io/performance/reference/check_itemscale.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Describe Properties of Item Scales — check_itemscale","text":"","code":"check_itemscale(x, factor_index = NULL)"},{"path":"https://easystats.github.io/performance/reference/check_itemscale.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Describe Properties of Item Scales — check_itemscale","text":"x object class parameters_pca, returned parameters::principal_components(), data frame. factor_index x data frame, factor_index must specified. 
must numeric vector length number columns x, element index factor respective column x.","code":""},{"path":"https://easystats.github.io/performance/reference/check_itemscale.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Describe Properties of Item Scales — check_itemscale","text":"list data frames, related measures internal consistencies subscale.","code":""},{"path":"https://easystats.github.io/performance/reference/check_itemscale.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Describe Properties of Item Scales — check_itemscale","text":"check_itemscale() calculates various measures internal consistencies, Cronbach's alpha, item difficulty discrimination etc. subscales built several items. Subscales retrieved results parameters::principal_components(), .e. based many components extracted PCA, check_itemscale() retrieves variables belong component calculates mentioned measures.","code":""},{"path":"https://easystats.github.io/performance/reference/check_itemscale.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Describe Properties of Item Scales — check_itemscale","text":"Item difficulty range 0.2 0.8. Ideal value p+(1-p)/2 (mostly 0.5 0.8). See item_difficulty() details. item discrimination, acceptable values 0.20 higher; closer 1.00 better. See item_reliability() details. case total Cronbach's alpha value acceptable cut-off 0.7 (mostly index items), mean inter-item-correlation alternative measure indicate acceptability. Satisfactory range lies 0.2 0.4. See also item_intercor().","code":""},{"path":"https://easystats.github.io/performance/reference/check_itemscale.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Describe Properties of Item Scales — check_itemscale","text":"Briggs SR, Cheek JM (1986) role factor analysis development evaluation personality scales. Journal Personality, 54(1), 106-148.
doi: 10.1111/j.1467-6494.1986.tb00391.x","code":""},{"path":"https://easystats.github.io/performance/reference/check_itemscale.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Describe Properties of Item Scales — check_itemscale","text":"","code":"# data generation from '?prcomp', slightly modified C <- chol(S <- toeplitz(0.9^(0:15))) set.seed(17) X <- matrix(rnorm(1600), 100, 16) Z <- X %*% C pca <- parameters::principal_components( as.data.frame(Z), rotation = \"varimax\", n = 3 ) pca #> # Rotated loadings from Principal Component Analysis (varimax-rotation) #> #> Variable | RC3 | RC1 | RC2 | Complexity | Uniqueness | MSA #> -------------------------------------------------------------- #> V1 | 0.85 | 0.17 | 0.20 | 1.20 | 0.21 | 0.90 #> V2 | 0.89 | 0.25 | 0.22 | 1.28 | 0.11 | 0.90 #> V3 | 0.91 | 0.26 | 0.17 | 1.23 | 0.07 | 0.89 #> V4 | 0.88 | 0.33 | 0.13 | 1.33 | 0.10 | 0.91 #> V5 | 0.82 | 0.41 | 0.14 | 1.55 | 0.14 | 0.94 #> V6 | 0.68 | 0.59 | 0.18 | 2.12 | 0.15 | 0.92 #> V7 | 0.57 | 0.74 | 0.20 | 2.04 | 0.09 | 0.93 #> V8 | 0.44 | 0.81 | 0.20 | 1.67 | 0.11 | 0.95 #> V9 | 0.33 | 0.84 | 0.32 | 1.61 | 0.09 | 0.93 #> V10 | 0.29 | 0.85 | 0.33 | 1.55 | 0.09 | 0.92 #> V11 | 0.30 | 0.79 | 0.42 | 1.86 | 0.11 | 0.92 #> V12 | 0.27 | 0.68 | 0.57 | 2.28 | 0.15 | 0.90 #> V13 | 0.20 | 0.55 | 0.71 | 2.06 | 0.15 | 0.90 #> V14 | 0.21 | 0.36 | 0.86 | 1.48 | 0.09 | 0.91 #> V15 | 0.20 | 0.23 | 0.91 | 1.23 | 0.08 | 0.88 #> V16 | 0.11 | 0.15 | 0.90 | 1.09 | 0.15 | 0.87 #> #> The 3 principal components (varimax rotation) accounted for 88.19% of the total variance of the original data (RC3 = 32.81%, RC1 = 31.24%, RC2 = 24.14%). 
#> check_itemscale(pca) #> # Description of (Sub-)Scales #> Component 1 #> #> Item | Missings | Mean | SD | Skewness | Difficulty | Discrimination | alpha if deleted #> ------------------------------------------------------------------------------------------ #> V1 | 0 | -0.02 | 1.06 | -0.49 | -0.01 | 0.80 | 0.96 #> V2 | 0 | -0.05 | 1.05 | -0.29 | -0.02 | 0.90 | 0.95 #> V3 | 0 | 0.00 | 1.10 | -0.77 | 0.00 | 0.94 | 0.95 #> V4 | 0 | 0.00 | 1.10 | -0.82 | 0.00 | 0.92 | 0.95 #> V5 | 0 | -0.07 | 1.09 | -0.29 | -0.02 | 0.90 | 0.95 #> V6 | 0 | -0.04 | 1.13 | -0.27 | -0.01 | 0.83 | 0.96 #> #> Mean inter-item-correlation = 0.813 Cronbach's alpha = 0.963 #> #> Component 2 #> #> Item | Missings | Mean | SD | Skewness | Difficulty | Discrimination | alpha if deleted #> ------------------------------------------------------------------------------------------ #> V7 | 0 | -0.01 | 1.07 | 0.01 | 0.00 | 0.87 | 0.97 #> V8 | 0 | 0.02 | 0.96 | 0.23 | 0.01 | 0.89 | 0.96 #> V9 | 0 | 0.04 | 0.98 | 0.37 | 0.01 | 0.93 | 0.96 #> V10 | 0 | 0.08 | 1.00 | 0.18 | 0.02 | 0.93 | 0.96 #> V11 | 0 | 0.02 | 1.03 | 0.18 | 0.01 | 0.92 | 0.96 #> V12 | 0 | 0.00 | 1.04 | 0.27 | 0.00 | 0.84 | 0.97 #> #> Mean inter-item-correlation = 0.840 Cronbach's alpha = 0.969 #> #> Component 3 #> #> Item | Missings | Mean | SD | Skewness | Difficulty | Discrimination | alpha if deleted #> ------------------------------------------------------------------------------------------ #> V13 | 0 | 0.04 | 0.95 | 0.10 | 0.01 | 0.81 | 0.95 #> V14 | 0 | -0.02 | 0.96 | 0.24 | -0.01 | 0.93 | 0.91 #> V15 | 0 | -0.03 | 0.94 | 0.41 | -0.01 | 0.92 | 0.91 #> V16 | 0 | 0.03 | 0.96 | 0.28 | 0.01 | 0.82 | 0.94 #> #> Mean inter-item-correlation = 0.811 Cronbach's alpha = 0.945 # as data frame check_itemscale( as.data.frame(Z), factor_index = parameters::closest_component(pca) ) #> # Description of (Sub-)Scales #> Component 1 #> #> Item | Missings | Mean | SD | Skewness | Difficulty | Discrimination | alpha if deleted #> 
------------------------------------------------------------------------------------------ #> V1 | 0 | -0.02 | 1.06 | -0.49 | -0.01 | 0.80 | 0.96 #> V2 | 0 | -0.05 | 1.05 | -0.29 | -0.02 | 0.90 | 0.95 #> V3 | 0 | 0.00 | 1.10 | -0.77 | 0.00 | 0.94 | 0.95 #> V4 | 0 | 0.00 | 1.10 | -0.82 | 0.00 | 0.92 | 0.95 #> V5 | 0 | -0.07 | 1.09 | -0.29 | -0.02 | 0.90 | 0.95 #> V6 | 0 | -0.04 | 1.13 | -0.27 | -0.01 | 0.83 | 0.96 #> #> Mean inter-item-correlation = 0.813 Cronbach's alpha = 0.963 #> #> Component 2 #> #> Item | Missings | Mean | SD | Skewness | Difficulty | Discrimination | alpha if deleted #> ------------------------------------------------------------------------------------------ #> V7 | 0 | -0.01 | 1.07 | 0.01 | 0.00 | 0.87 | 0.97 #> V8 | 0 | 0.02 | 0.96 | 0.23 | 0.01 | 0.89 | 0.96 #> V9 | 0 | 0.04 | 0.98 | 0.37 | 0.01 | 0.93 | 0.96 #> V10 | 0 | 0.08 | 1.00 | 0.18 | 0.02 | 0.93 | 0.96 #> V11 | 0 | 0.02 | 1.03 | 0.18 | 0.01 | 0.92 | 0.96 #> V12 | 0 | 0.00 | 1.04 | 0.27 | 0.00 | 0.84 | 0.97 #> #> Mean inter-item-correlation = 0.840 Cronbach's alpha = 0.969 #> #> Component 3 #> #> Item | Missings | Mean | SD | Skewness | Difficulty | Discrimination | alpha if deleted #> ------------------------------------------------------------------------------------------ #> V13 | 0 | 0.04 | 0.95 | 0.10 | 0.01 | 0.81 | 0.95 #> V14 | 0 | -0.02 | 0.96 | 0.24 | -0.01 | 0.93 | 0.91 #> V15 | 0 | -0.03 | 0.94 | 0.41 | -0.01 | 0.92 | 0.91 #> V16 | 0 | 0.03 | 0.96 | 0.28 | 0.01 | 0.82 | 0.94 #> #> Mean inter-item-correlation = 0.811 Cronbach's alpha = 0.945"},{"path":"https://easystats.github.io/performance/reference/check_model.html","id":null,"dir":"Reference","previous_headings":"","what":"Visual check of model assumptions — check_model","title":"Visual check of model assumptions — check_model","text":"Visual check various model assumptions (normality residuals, normality random effects, linear relationship, homogeneity variance, multicollinearity). 
If check_model() does not work as expected, try setting verbose = TRUE get hints possible problems.","code":""},{"path":"https://easystats.github.io/performance/reference/check_model.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Visual check of model assumptions — check_model","text":"","code":"check_model(x, ...) # Default S3 method check_model( x, panel = TRUE, check = \"all\", detrend = TRUE, bandwidth = \"nrd\", type = \"density\", residual_type = NULL, show_dots = NULL, size_dot = 2, size_line = 0.8, size_title = 12, size_axis_title = base_size, base_size = 10, alpha = 0.2, alpha_dot = 0.8, colors = c(\"#3aaf85\", \"#1b6ca8\", \"#cd201f\"), theme = \"see::theme_lucid\", verbose = FALSE, ... )"},{"path":"https://easystats.github.io/performance/reference/check_model.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Visual check of model assumptions — check_model","text":"x model object. ... Arguments passed individual check functions, especially check_predictions() binned_residuals(). panel Logical, TRUE, plots arranged panels; else, single plots diagnostic returned. check Character vector, indicating checks performed plotted. May one \"all\", \"vif\", \"qq\", \"normality\", \"linearity\", \"ncv\", \"homogeneity\", \"outliers\", \"reqq\", \"pp_check\", \"binned_residuals\" \"overdispersion\". Note not all checks apply all types models (see 'Details'). \"reqq\" QQ-plot random effects available mixed models. \"ncv\" alias \"linearity\", checks non-constant variance, .e. heteroscedasticity, well linear relationship. default, possible checks performed plotted. detrend Logical. Q-Q/P-P plots detrended? Defaults TRUE linear models residual_type = \"normal\". Defaults FALSE QQ plots based simulated residuals (.e. residual_type = \"simulated\"). bandwidth character string indicating smoothing bandwidth used.
Unlike stats::density(), used \"nrd0\" default, default used \"nrd\" (seems give plausible results non-Gaussian models). problems plotting occur, try change different value. type Plot type posterior predictive checks plot. Can \"density\", \"discrete_dots\", \"discrete_interval\" \"discrete_both\" (discrete_* options appropriate models discrete - binary, integer ordinal etc. - outcomes). residual_type Character, indicating type residuals used. non-Gaussian models, default \"simulated\", uses simulated residuals. based simulate_residuals() thus uses DHARMa package return randomized quantile residuals. Gaussian models, default \"normal\", uses default residuals model. Setting residual_type = \"normal\" non-Gaussian models use half-normal Q-Q plot absolute value standardized deviance residuals. show_dots Logical, TRUE, show data points plot. Set FALSE models many observations, generating plot time-consuming. default, show_dots = NULL. case check_model() tries guess whether performance poor due large model thus automatically shows hides dots. size_dot, size_line Size line dot-geoms. base_size, size_title, size_axis_title Base font size axis plot titles. alpha, alpha_dot alpha level confidence bands dot-geoms. Scalar 0 1. colors Character vector color codes (hex-format). Must length 3. First color usually used reference lines, second color dots, third color outliers extreme values. theme String, indicating name plot-theme. Must format \"package::theme_name\" (e.g. \"ggplot2::theme_minimal\"). 
verbose FALSE (default), suppress warning messages.","code":""},{"path":"https://easystats.github.io/performance/reference/check_model.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Visual check of model assumptions — check_model","text":"data frame used plotting.","code":""},{"path":"https://easystats.github.io/performance/reference/check_model.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Visual check of model assumptions — check_model","text":"Bayesian models packages rstanarm brms, models \"converted\" frequentist counterpart, using bayestestR::bayesian_as_frequentist. advanced model-check Bayesian models implemented later stage. See also related vignette.","code":""},{"path":"https://easystats.github.io/performance/reference/check_model.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Visual check of model assumptions — check_model","text":"function just prepares data plotting. create plots, see needs installed. Furthermore, function suppresses possible warnings. case observe suspicious plots, please refer dedicated functions (like check_collinearity(), check_normality() etc.) get informative messages warnings.","code":""},{"path":"https://easystats.github.io/performance/reference/check_model.html","id":"posterior-predictive-checks","dir":"Reference","previous_headings":"","what":"Posterior Predictive Checks","title":"Visual check of model assumptions — check_model","text":"Posterior predictive checks can used look systematic discrepancies real simulated data. helps see whether type model (distributional family) fits well data. 
See check_predictions() for further details.","code":""},{"path":"https://easystats.github.io/performance/reference/check_model.html","id":"linearity-assumption","dir":"Reference","previous_headings":"","what":"Linearity Assumption","title":"Visual check of model assumptions — check_model","text":"The plot Linearity checks the assumption of linear relationship. However, the spread of dots also indicates possible heteroscedasticity (i.e. non-constant variance, hence the alias \"ncv\" for this plot); thus, it shows if residuals have non-linear patterns. This plot helps to see whether predictors may have a non-linear relationship with the outcome, in which case the reference line may roughly indicate that relationship. A straight and horizontal line indicates that the model specification seems to be ok. If, for instance, the line were U-shaped, some of the predictors should probably be modeled with a quadratic term. See check_heteroscedasticity() for further details. Some caution is needed when interpreting these plots. Although they are helpful to check model assumptions, they do not necessarily indicate so-called \"lack of fit\", e.g. missed non-linear relationships or interactions. Thus, it is always recommended to also look at effect plots, including partial residuals.","code":""},{"path":"https://easystats.github.io/performance/reference/check_model.html","id":"homogeneity-of-variance","dir":"Reference","previous_headings":"","what":"Homogeneity of Variance","title":"Visual check of model assumptions — check_model","text":"This plot checks the assumption of equal variance (homoscedasticity). The desired pattern is that dots spread equally above and below a straight, horizontal line and show no apparent deviation.","code":""},{"path":"https://easystats.github.io/performance/reference/check_model.html","id":"influential-observations","dir":"Reference","previous_headings":"","what":"Influential Observations","title":"Visual check of model assumptions — check_model","text":"This plot is used to identify influential observations. Points that fall outside Cook’s distance (the dashed lines) are considered influential observations. 
See check_outliers() for further details.","code":""},{"path":"https://easystats.github.io/performance/reference/check_model.html","id":"multicollinearity","dir":"Reference","previous_headings":"","what":"Multicollinearity","title":"Visual check of model assumptions — check_model","text":"This plot checks for potential collinearity among predictors. In a nutshell, multicollinearity means that once you know the effect of one predictor, the value of knowing the other predictor is rather low. Multicollinearity might arise when a third, unobserved variable has a causal effect on each of the two predictors that are associated with the outcome. In such cases, the actual relationship that matters would be the association between the unobserved variable and the outcome. See check_collinearity() for further details.","code":""},{"path":"https://easystats.github.io/performance/reference/check_model.html","id":"normality-of-residuals","dir":"Reference","previous_headings":"","what":"Normality of Residuals","title":"Visual check of model assumptions — check_model","text":"This plot is used to determine if the residuals of the regression model are normally distributed. Usually, dots should fall along the line. If there is some deviation (mostly at the tails), this indicates that the model does not predict the outcome well for the range that shows larger deviations from the line. For generalized linear models and when residual_type = \"normal\", a half-normal Q-Q plot of the absolute value of the standardized deviance residuals is shown; however, the interpretation of the plot remains the same. See check_normality() for further details. Usually, for generalized linear (mixed) models, a test for uniformity of residuals based on simulated residuals is conducted instead (see next section).","code":""},{"path":"https://easystats.github.io/performance/reference/check_model.html","id":"uniformity-of-residuals","dir":"Reference","previous_headings":"","what":"Uniformity of Residuals","title":"Visual check of model assumptions — check_model","text":"For non-Gaussian models, when residual_type = \"simulated\" (the default for generalized linear (mixed) models), residuals are not expected to be normally distributed. In this case, the created Q-Q plot checks the uniformity of residuals. The interpretation of this plot is the same as for the (normal) Q-Q plot. 
See simulate_residuals() and check_residuals() for further details.","code":""},{"path":"https://easystats.github.io/performance/reference/check_model.html","id":"overdispersion","dir":"Reference","previous_headings":"","what":"Overdispersion","title":"Visual check of model assumptions — check_model","text":"For count models, an overdispersion plot is shown. Overdispersion occurs when the observed variance is higher than the variance of the theoretical model. For Poisson models, variance increases with the mean and, therefore, variance usually (roughly) equals the mean value. If the variance is much higher, the data are \"overdispersed\". See check_overdispersion() for further details.","code":""},{"path":"https://easystats.github.io/performance/reference/check_model.html","id":"binned-residuals","dir":"Reference","previous_headings":"","what":"Binned Residuals","title":"Visual check of model assumptions — check_model","text":"For models from binomial families, a binned residuals plot is shown. Binned residual plots are achieved by cutting the data into bins and then plotting the average residual versus the average fitted value for each bin. If the model were true, one would expect about 95% of the residuals to fall inside the error bounds. See binned_residuals() for further details.","code":""},{"path":"https://easystats.github.io/performance/reference/check_model.html","id":"residuals-for-generalized-linear-models","dir":"Reference","previous_headings":"","what":"Residuals for (Generalized) Linear Models","title":"Visual check of model assumptions — check_model","text":"Plots that check the homogeneity of variance use standardized Pearson's residuals for generalized linear models, and standardized residuals for linear models. The plots for the normality of residuals (with overlaid normal curve) and for the linearity assumption use the default residuals for lm and glm (which are deviance residuals for glm). 
The Q-Q plots use simulated residuals (see simulate_residuals()) for non-Gaussian models, and standardized residuals for linear models.","code":""},{"path":"https://easystats.github.io/performance/reference/check_model.html","id":"troubleshooting","dir":"Reference","previous_headings":"","what":"Troubleshooting","title":"Visual check of model assumptions — check_model","text":"For models with many observations, or for more complex models in general, generating the plot might become very slow. One reason might be that the underlying graphic engine becomes slow for plotting many data points. In such cases, setting the argument show_dots = FALSE might help. Furthermore, look at the check argument and see if some of the model checks can be skipped, which also increases performance. If check_model() doesn't work as expected, try setting verbose = TRUE to get hints about possible problems.","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/reference/check_model.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Visual check of model assumptions — check_model","text":"","code":"# \\donttest{ m <- lm(mpg ~ wt + cyl + gear + disp, data = mtcars) check_model(m) data(sleepstudy, package = \"lme4\") m <- lme4::lmer(Reaction ~ Days + (Days | Subject), sleepstudy) check_model(m, panel = FALSE) # }"},{"path":"https://easystats.github.io/performance/reference/check_multimodal.html","id":null,"dir":"Reference","previous_headings":"","what":"Check if a distribution is unimodal or multimodal — check_multimodal","title":"Check if a distribution is unimodal or multimodal — check_multimodal","text":"For univariate distributions (one-dimensional vectors), this function performs an Ameijeiras-Alonso et al. (2018) excess mass test. For multivariate distributions (data frames), it uses mixture modelling. However, it seems that it always returns a significant result (suggesting that the distribution is multimodal). 
A better method might be needed here.","code":""},{"path":"https://easystats.github.io/performance/reference/check_multimodal.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Check if a distribution is unimodal or multimodal — check_multimodal","text":"","code":"check_multimodal(x, ...)"},{"path":"https://easystats.github.io/performance/reference/check_multimodal.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Check if a distribution is unimodal or multimodal — check_multimodal","text":"x A numeric vector or data frame. ... Arguments passed to or from other methods.","code":""},{"path":"https://easystats.github.io/performance/reference/check_multimodal.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Check if a distribution is unimodal or multimodal — check_multimodal","text":"Ameijeiras-Alonso, J., Crujeiras, R. M., and Rodríguez-Casal, A. (2019). Mode testing, critical bandwidth and excess mass. Test, 28(3), 900-919.","code":""},{"path":"https://easystats.github.io/performance/reference/check_multimodal.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Check if a distribution is unimodal or multimodal — check_multimodal","text":"","code":"# \\donttest{ # Univariate x <- rnorm(1000) check_multimodal(x) #> # Is the variable multimodal? #> #> The Ameijeiras-Alonso et al. (2018) excess mass test suggests that the #> hypothesis of a multimodal distribution cannot be rejected (excess mass #> = 0.02, p = 0.262). #> x <- c(rnorm(1000), rnorm(1000, 2)) check_multimodal(x) #> # Is the variable multimodal? #> #> The Ameijeiras-Alonso et al. (2018) excess mass test suggests that the #> distribution is significantly multimodal (excess mass = 0.02, p = #> 0.040). #> # Multivariate m <- data.frame( x = rnorm(200), y = rbeta(200, 2, 1) ) plot(m$x, m$y) check_multimodal(m) #> # Is the data multimodal? 
#> #> The parametric mixture modelling test suggests that the multivariate #> distribution is significantly multimodal (Chi2(8) = 25.13, p = 0.001). #> m <- data.frame( x = c(rnorm(100), rnorm(100, 4)), y = c(rbeta(100, 2, 1), rbeta(100, 1, 4)) ) plot(m$x, m$y) check_multimodal(m) #> # Is the data multimodal? #> #> The parametric mixture modelling test suggests that the multivariate #> distribution is significantly multimodal (Chi2(11) = 78.42, p < .001). #> # }"},{"path":"https://easystats.github.io/performance/reference/check_normality.html","id":null,"dir":"Reference","previous_headings":"","what":"Check model for (non-)normality of residuals. — check_normality","title":"Check model for (non-)normality of residuals. — check_normality","text":"Check a model for (non-)normality of residuals.","code":""},{"path":"https://easystats.github.io/performance/reference/check_normality.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Check model for (non-)normality of residuals. — check_normality","text":"","code":"check_normality(x, ...) # S3 method for class 'merMod' check_normality(x, effects = c(\"fixed\", \"random\"), ...)"},{"path":"https://easystats.github.io/performance/reference/check_normality.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Check model for (non-)normality of residuals. — check_normality","text":"x A model object. ... Currently not used. effects Should normality for residuals (\"fixed\") or random effects (\"random\") be tested? Only applies to mixed-effects models. May be abbreviated.","code":""},{"path":"https://easystats.github.io/performance/reference/check_normality.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Check model for (non-)normality of residuals. — check_normality","text":"The p-value of the test statistic. 
A p-value < 0.05 indicates a significant deviation from a normal distribution.","code":""},{"path":"https://easystats.github.io/performance/reference/check_normality.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Check model for (non-)normality of residuals. — check_normality","text":"check_normality() calls stats::shapiro.test and checks the standardized residuals (or studentized residuals for mixed models) for normal distribution. Note that this formal test almost always yields significant results for the distribution of residuals, so visual inspection (e.g. Q-Q plots) is preferable. For generalized linear models, no formal statistical test is carried out. Rather, there is only a plot() method for GLMs. This plot shows a half-normal Q-Q plot of the absolute value of the standardized deviance residuals (in line with changes in plot.lm() for R 4.3+).","code":""},{"path":"https://easystats.github.io/performance/reference/check_normality.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Check model for (non-)normality of residuals. — check_normality","text":"For mixed-effects models, studentized residuals, not standardized residuals, are used for the test. There is also a plot()-method implemented in the see-package.","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/reference/check_normality.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Check model for (non-)normality of residuals. — check_normality","text":"","code":"m <<- lm(mpg ~ wt + cyl + gear + disp, data = mtcars) check_normality(m) #> OK: residuals appear as normally distributed (p = 0.230). 
#> # plot results x <- check_normality(m) plot(x) # \donttest{ # QQ-plot plot(check_normality(m), type = \"qq\") # PP-plot plot(check_normality(m), type = \"pp\") # }"},{"path":"https://easystats.github.io/performance/reference/check_outliers.html","id":null,"dir":"Reference","previous_headings":"","what":"Outliers detection (check for influential observations) — check_outliers","title":"Outliers detection (check for influential observations) — check_outliers","text":"Checks for and locates influential observations (i.e., \"outliers\") via several distance and/or clustering methods. If several methods are selected, the returned \"Outlier\" vector is a composite outlier score, made of the average of the binary (0 or 1) results of each method. It represents the probability that each observation is classified as an outlier by at least one method. The decision rule used by default is to classify as outliers observations whose composite outlier score is superior or equal to 0.5 (i.e., that were classified as outliers by at least half of the methods). See the Details section below for a description of the methods.","code":""},{"path":"https://easystats.github.io/performance/reference/check_outliers.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Outliers detection (check for influential observations) — check_outliers","text":"","code":"check_outliers(x, ...) # Default S3 method check_outliers( x, method = c(\"cook\", \"pareto\"), threshold = NULL, ID = NULL, verbose = TRUE, ... ) # S3 method for class 'numeric' check_outliers(x, method = \"zscore_robust\", threshold = NULL, ...) # S3 method for class 'data.frame' check_outliers(x, method = \"mahalanobis\", threshold = NULL, ID = NULL, ...) # S3 method for class 'performance_simres' check_outliers( x, type = \"default\", iterations = 100, alternative = \"two.sided\", ... 
)"},{"path":"https://easystats.github.io/performance/reference/check_outliers.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Outliers detection (check for influential observations) — check_outliers","text":"x A model, a data.frame, a performance_simres simulate_residuals() or a DHARMa object. ... When method = \"ics\", further arguments in ... are passed down to ICSOutlier::ics.outlier(). When method = \"mahalanobis\", they are passed down to stats::mahalanobis(). percentage_central can be specified when method = \"mcd\". For objects of class performance_simres or DHARMa, further arguments are passed down to DHARMa::testOutliers(). method The outlier detection method(s). Can be \"all\" or some of \"cook\", \"pareto\", \"zscore\", \"zscore_robust\", \"iqr\", \"ci\", \"eti\", \"hdi\", \"bci\", \"mahalanobis\", \"mahalanobis_robust\", \"mcd\", \"ics\", \"optics\" or \"lof\". threshold A list containing the threshold values for each method (e.g. list('mahalanobis' = 7, 'cook' = 1)), above which an observation is considered an outlier. If NULL, default values are used (see 'Details'). If a numeric value is given, it is used as the threshold for any method run. ID Optional, to report an ID column along with the row number. verbose Toggle warnings. type Type of method to test for outliers. Can be one of \"default\", \"binomial\" or \"bootstrap\". Only applies when x is an object returned by simulate_residuals() or of class DHARMa. See 'Details' in ?DHARMa::testOutliers for a detailed description of the types. iterations Number of simulations to run. alternative A character string specifying the alternative hypothesis.","code":""},{"path":"https://easystats.github.io/performance/reference/check_outliers.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Outliers detection (check for influential observations) — check_outliers","text":"A logical vector of the detected outliers with a nice printing method: a check (message) on whether outliers were detected or not. The information on the distance measure and whether or not an observation is considered an outlier can be recovered with the as.data.frame function. 
Note that the function will (silently) return a vector of FALSE for non-supported data types such as character strings.","code":""},{"path":"https://easystats.github.io/performance/reference/check_outliers.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Outliers detection (check for influential observations) — check_outliers","text":"Outliers can be defined as particularly influential observations. Most methods rely on the computation of some distance metric, and the observations greater than a certain threshold are considered outliers. Importantly, outlier detection methods are meant to provide information for the researcher to consider, rather than to be an automatized procedure whose mindless application is a substitute for thinking. An example sentence for reporting the usage of the composite method could be: \"Based on the composite outlier score (see the 'check_outliers' function in the 'performance' R package; Lüdecke et al., 2021) obtained via the joint application of multiple outlier detection algorithms (Z-scores, Iglewicz, 1993; Interquartile range (IQR); Mahalanobis distance, Cabana, 2019; Robust Mahalanobis distance, Gnanadesikan and Kettenring, 1972; Minimum Covariance Determinant, Leys et al., 2018; Invariant Coordinate Selection, Archimbaud et al., 2018; OPTICS, Ankerst et al., 1999; Isolation Forest, Liu et al. 2008; and Local Outlier Factor, Breunig et al., 2000), we excluded n participants that were classified as outliers by at least half of the methods used.\"","code":""},{"path":"https://easystats.github.io/performance/reference/check_outliers.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Outliers detection (check for influential observations) — check_outliers","text":"There is also a plot()-method implemented in the see-package. 
Please note that the range of the distance-values along the y-axis is re-scaled to range from 0 to 1.","code":""},{"path":"https://easystats.github.io/performance/reference/check_outliers.html","id":"model-specific-methods","dir":"Reference","previous_headings":"","what":"Model-specific methods","title":"Outliers detection (check for influential observations) — check_outliers","text":"Cook's Distance: Among outlier detection methods, Cook's distance and leverage are less common than the basic Mahalanobis distance, but still used. Cook's distance estimates the variations in regression coefficients after removing each observation, one by one (Cook, 1977). Since Cook's distance is in the metric of an F distribution with p and n-p degrees of freedom, the median point of the quantile distribution can be used as a cut-off (Bollen, 1985). A common approximation or heuristic is to use 4 divided by the number of observations, which usually corresponds to a lower threshold (i.e., more outliers are detected). This only works for frequentist models. For Bayesian models, see pareto. Pareto: The reliability and approximate convergence of Bayesian models can be assessed using the estimates for the shape parameter k of the generalized Pareto distribution. If the estimated tail shape parameter k exceeds 0.5, the user should be warned, although in practice the authors of the loo package observed good performance for values of k up to 0.7 (which is the default threshold used by performance).","code":""},{"path":"https://easystats.github.io/performance/reference/check_outliers.html","id":"univariate-methods","dir":"Reference","previous_headings":"","what":"Univariate methods","title":"Outliers detection (check for influential observations) — check_outliers","text":"Z-scores (\"zscore\", \"zscore_robust\"): The Z-score, or standard score, is a way of describing a data point as deviance from a central value, in terms of standard deviations from the mean (\"zscore\") or, as is the case here by default (\"zscore_robust\"; Iglewicz, 1993), in terms of Median Absolute Deviation (MAD) from the median (which are robust measures of dispersion and centrality). The default threshold to classify outliers is 1.959 (threshold = list(\"zscore\" = 1.959)), corresponding to the 2.5% (qnorm(0.975)) most extreme observations (assuming the data is normally distributed). 
Importantly, the Z-score method is univariate: it is computed column by column. If a data frame is passed, the Z-score is calculated for each variable separately, and the maximum (absolute) Z-score is kept for each observation. Thus, all observations that are extreme on at least one variable might be detected as outliers, so this method is not suited for high-dimensional data (many columns), returning too liberal results (detecting many outliers). IQR (\"iqr\"): Using the IQR (interquartile range) is a robust method developed by John Tukey, which often appears in box-and-whisker plots (e.g., ggplot2::geom_boxplot). The interquartile range is the range between the first and the third quartiles. Tukey considered as outliers any data point that fell outside of either 1.5 times (the default threshold here is 1.7) the IQR below the first or above the third quartile. Similar to the Z-score method, this is a univariate method for outlier detection, returning outliers detected for at least one column, and might thus not be suited for high-dimensional data. The distance score for the IQR is the absolute deviation from the median of the upper and lower IQR thresholds. Then, this value is divided by the IQR threshold, to “standardize” it and facilitate interpretation. CI (\"ci\", \"eti\", \"hdi\", \"bci\"): Another univariate method is to compute, for each variable, some sort of \"confidence\" interval and consider as outliers values lying beyond the edges of that interval. By default, \"ci\" computes the Equal-Tailed Interval (\"eti\"), but other types of intervals are available, such as the Highest Density Interval (\"hdi\") or the Bias Corrected and Accelerated Interval (\"bci\"). The default threshold is 0.95, considering as outliers all observations that are outside the 95% CI for any of the variables. See bayestestR::ci() for more details about the intervals. The distance score for the CI methods is the absolute deviation from the median of the upper and lower CI thresholds. 
Then, this value is divided by the difference between the upper and lower CI bounds divided by two, to “standardize” it and facilitate interpretation.","code":""},{"path":"https://easystats.github.io/performance/reference/check_outliers.html","id":"multivariate-methods","dir":"Reference","previous_headings":"","what":"Multivariate methods","title":"Outliers detection (check for influential observations) — check_outliers","text":"Mahalanobis Distance: Mahalanobis distance (Mahalanobis, 1930) is often used for multivariate outlier detection, as this distance takes into account the shape of the observations. The default threshold is often arbitrarily set to some deviation (in terms of SD or MAD) from the mean (or median) of the Mahalanobis distance. However, as the Mahalanobis distance can be approximated by a chi-squared distribution (Rousseeuw and Van Zomeren, 1990), we can use the alpha quantile of the chi-square distribution with k degrees of freedom (k being the number of columns). By default, the alpha threshold is set to 0.025 (corresponding to the 2.5% most extreme observations; Cabana, 2019). This criterion is a natural extension of the median plus or minus a coefficient times the MAD method (Leys et al., 2013). Robust Mahalanobis Distance: A robust version of Mahalanobis distance using an Orthogonalized Gnanadesikan-Kettenring pairwise estimator (Gnanadesikan and Kettenring, 1972). Requires the bigutilsr package. See the bigutilsr::dist_ogk() function. Minimum Covariance Determinant (MCD): Another robust version of Mahalanobis. Leys et al. (2018) argue that Mahalanobis Distance is not a robust way to determine outliers, as it uses the means and covariances of all the data - including the outliers - to determine individual difference scores. The Minimum Covariance Determinant calculates the mean and covariance matrix based on the most central subset of the data (by default, 66%) and is deemed a more robust method of identifying and removing outliers than the regular Mahalanobis distance. This method has a percentage_central argument that allows specifying the breakdown point (0.75, the default, is recommended by Leys et al. 2018, but a commonly used alternative is 0.50). Invariant Coordinate Selection (ICS): Outliers are detected using ICS, which by default uses an alpha threshold of 0.025 (corresponding to the 2.5% most extreme observations) as a cut-off value for outlier classification. 
Refer to the help-file of ICSOutlier::ics.outlier() to get more details about this procedure. Note that method = \"ics\" requires both ICS and ICSOutlier to be installed, and that it takes some time to compute the results. You can speed up computation time by using parallel computing. Set the number of cores to use with options(mc.cores = 4) (for example). OPTICS: The Ordering Points To Identify the Clustering Structure (OPTICS) algorithm (Ankerst et al., 1999) uses similar concepts to DBSCAN (an unsupervised clustering technique that can be used for outlier detection). The threshold argument is passed as minPts, which corresponds to the minimum size of a cluster. By default, this size is set to 2 times the number of columns (Sander et al., 1998). Compared to the other techniques, which will always detect several outliers (as these are usually defined as a percentage of extreme values), this algorithm functions in a different manner and won't always detect outliers. Note that method = \"optics\" requires the dbscan package to be installed, and that it takes some time to compute the results. Additionally, optics_xi (default 0.05) is passed to the dbscan::extractXi() function to further refine the cluster selection. Local Outlier Factor: Based on the K nearest neighbors algorithm, LOF compares the local density of a point to the local densities of its neighbors instead of computing a distance from the center (Breunig et al., 2000). Points that have a substantially lower density than their neighbors are considered outliers. A LOF score of approximately 1 indicates that density around the point is comparable to that of its neighbors. Scores significantly larger than 1 indicate outliers. The default threshold of 0.025 will classify as outliers the observations located beyond qnorm(1-0.025) * SD of the log-transformed LOF distance. Requires the dbscan package.","code":""},{"path":"https://easystats.github.io/performance/reference/check_outliers.html","id":"methods-for-simulated-residuals","dir":"Reference","previous_headings":"","what":"Methods for simulated residuals","title":"Outliers detection (check for influential observations) — check_outliers","text":"The approach for detecting outliers based on simulated residuals differs from the traditional methods and may not be detecting outliers as expected. Literally, this approach compares observed to simulated values. 
However, we do not know the deviation of the observed data from the model expectation, and thus the term \"outlier\" should be taken with a grain of salt. It refers to \"simulation outliers\". Basically, the comparison tests whether an observed data point is outside the simulated range. It is strongly recommended to read the related documentation in the DHARMa package, e.g. ?DHARMa::testOutliers.","code":""},{"path":"https://easystats.github.io/performance/reference/check_outliers.html","id":"threshold-specification","dir":"Reference","previous_headings":"","what":"Threshold specification","title":"Outliers detection (check for influential observations) — check_outliers","text":"Default thresholds are currently specified as follows:","code":"list( zscore = stats::qnorm(p = 1 - 0.001 / 2), zscore_robust = stats::qnorm(p = 1 - 0.001 / 2), iqr = 1.7, ci = 1 - 0.001, eti = 1 - 0.001, hdi = 1 - 0.001, bci = 1 - 0.001, cook = stats::qf(0.5, ncol(x), nrow(x) - ncol(x)), pareto = 0.7, mahalanobis = stats::qchisq(p = 1 - 0.001, df = ncol(x)), mahalanobis_robust = stats::qchisq(p = 1 - 0.001, df = ncol(x)), mcd = stats::qchisq(p = 1 - 0.001, df = ncol(x)), ics = 0.001, optics = 2 * ncol(x), optics_xi = 0.05, lof = 0.001 )"},{"path":"https://easystats.github.io/performance/reference/check_outliers.html","id":"meta-analysis-models","dir":"Reference","previous_headings":"","what":"Meta-analysis models","title":"Outliers detection (check for influential observations) — check_outliers","text":"For meta-analysis models (e.g. objects of class rma from the metafor package or metagen from package meta), studies are defined as outliers when their confidence interval lies outside the confidence interval of the pooled effect.","code":""},{"path":"https://easystats.github.io/performance/reference/check_outliers.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Outliers detection (check for influential observations) — check_outliers","text":"Archimbaud, A., Nordhausen, K., and Ruiz-Gazen, A. (2018). ICS for multivariate outlier detection with application to quality control. 
Computational Statistics & Data Analysis, 128, 184-199. doi:10.1016/j.csda.2018.06.011 Gnanadesikan, R., and Kettenring, J. R. (1972). Robust estimates, residuals, and outlier detection with multiresponse data. Biometrics, 81-124. Bollen, K. A., and Jackman, R. W. (1985). Regression diagnostics: An expository treatment of outliers and influential cases. Sociological Methods & Research, 13(4), 510-542. Cabana, E., Lillo, R. E., and Laniado, H. (2019). Multivariate outlier detection based on a robust Mahalanobis distance with shrinkage estimators. arXiv preprint arXiv:1904.02596. Cook, R. D. (1977). Detection of influential observation in linear regression. Technometrics, 19(1), 15-18. Iglewicz, B., and Hoaglin, D. C. (1993). How to detect and handle outliers (Vol. 16). Asq Press. Leys, C., Klein, O., Dominicy, Y., and Ley, C. (2018). Detecting multivariate outliers: Use a robust variant of Mahalanobis distance. Journal of Experimental Social Psychology, 74, 150-156. Liu, F. T., Ting, K. M., and Zhou, Z. H. (2008, December). Isolation forest. In 2008 Eighth IEEE International Conference on Data Mining (pp. 413-422). IEEE. Lüdecke, D., Ben-Shachar, M. S., Patil, I., Waggoner, P., and Makowski, D. (2021). performance: An R package for assessment, comparison and testing of statistical models. Journal of Open Source Software, 6(60), 3139. doi:10.21105/joss.03139 Thériault, R., Ben-Shachar, M. S., Patil, I., Lüdecke, D., Wiernik, B. M., and Makowski, D. (2023). Check your outliers! An introduction to identifying statistical outliers in R with easystats. Behavior Research Methods, 1-11. doi:10.3758/s13428-024-02356-w Rousseeuw, P. J., and Van Zomeren, B. C. (1990). Unmasking multivariate outliers and leverage points. 
Journal American Statistical association, 85(411), 633-639.","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/reference/check_outliers.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Outliers detection (check for influential observations) — check_outliers","text":"","code":"data <- mtcars # Size nrow(data) = 32 # For single variables ------------------------------------------------------ outliers_list <- check_outliers(data$mpg) # Find outliers outliers_list # Show the row index of the outliers #> OK: No outliers detected. #> - Based on the following method and threshold: zscore_robust (3.291). #> - For variable: data$mpg #> #> as.numeric(outliers_list) # The object is a binary vector... #> [1] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 filtered_data <- data[!outliers_list, ] # And can be used to filter a data frame nrow(filtered_data) # New size, 28 (4 outliers removed) #> [1] 32 # Find all observations beyond +/- 2 SD check_outliers(data$mpg, method = \"zscore\", threshold = 2) #> 2 outliers detected: cases 18, 20. #> - Based on the following method and threshold: zscore (2). #> - For variable: data$mpg. #> #> ----------------------------------------------------------------------------- #> Outliers per variable (zscore): #> #> $`data$mpg` #> Row Distance_Zscore #> 18 18 2.042389 #> 20 20 2.291272 #> # For dataframes ------------------------------------------------------ check_outliers(data) # It works the same way on data frames #> OK: No outliers detected. #> - Based on the following method and threshold: mahalanobis (31.264). #> - For variables: mpg, cyl, disp, hp, drat, wt, qsec, vs, am, gear, carb #> #> # You can also use multiple methods at once outliers_list <- check_outliers(data, method = c( \"mahalanobis\", \"iqr\", \"zscore\" )) outliers_list #> OK: No outliers detected. 
#> - Based on the following methods and thresholds: mahalanobis (3.291), iqr (2), zscore (31.264). #> - For variables: mpg, cyl, disp, hp, drat, wt, qsec, vs, am, gear, carb #> #> # Using `as.data.frame()`, we can access more details! outliers_info <- as.data.frame(outliers_list) head(outliers_info) #> Row Distance_Zscore Outlier_Zscore Distance_IQR Outlier_IQR #> 1 1 1.189901 0 0.4208483 0 #> 2 2 1.189901 0 0.2941176 0 #> 3 3 1.224858 0 0.5882353 0 #> 4 4 1.122152 0 0.5882353 0 #> 5 5 1.043081 0 0.3915954 0 #> 6 6 1.564608 0 0.6809025 0 #> Distance_Mahalanobis Outlier_Mahalanobis Outlier #> 1 8.946673 0 0 #> 2 8.287933 0 0 #> 3 8.937150 0 0 #> 4 6.096726 0 0 #> 5 5.429061 0 0 #> 6 8.877558 0 0 outliers_info$Outlier # Including the probability of being an outlier #> [1] 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000 #> [8] 0.0000000 0.3333333 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000 #> [15] 0.0000000 0.3333333 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000 #> [22] 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000 #> [29] 0.0000000 0.0000000 0.3333333 0.0000000 # And we can be more stringent in our outliers removal process filtered_data <- data[outliers_info$Outlier < 0.1, ] # We can run the function stratified by groups using `{datawizard}` package: group_iris <- datawizard::data_group(iris, \"Species\") check_outliers(group_iris) #> OK: No outliers detected. #> - Based on the following method and threshold: mahalanobis (20). #> - For variables: Sepal.Length, Sepal.Width, Petal.Length, Petal.Width #> #> # nolint start # nolint end # \\donttest{ # You can also run all the methods check_outliers(data, method = \"all\", verbose = FALSE) #> Package `parallel` is installed, but `check_outliers()` will run on a #> single core. #> To use multiple cores, set `options(mc.cores = 4)` (for example). #> 3 outliers detected: cases 9, 29, 31. 
#> - Based on the following methods and thresholds: zscore_robust (3.291), #> iqr (2), ci (1), cook (1), pareto (0.7), mahalanobis (31.264), #> mahalanobis_robust (31.264), mcd (31.264), ics (0.001), optics (22), lof #> (0.05), optics_xi (0.001). #> - For variables: mpg, cyl, disp, hp, drat, wt, qsec, vs, am, gear, carb. #> Note: Outliers were classified as such by at least half of the selected methods. #> #> ----------------------------------------------------------------------------- #> #> The following observations were considered outliers for two or more #> variables by at least one of the selected methods: #> #> Row n_Zscore_robust n_IQR n_ci n_Mahalanobis_robust n_MCD #> 1 3 2 0 0 0 0 #> 2 9 2 1 1 (Multivariate) (Multivariate) #> 3 18 2 0 0 0 0 #> 4 19 2 0 2 0 (Multivariate) #> 5 20 2 0 2 0 0 #> 6 26 2 0 0 0 0 #> 7 28 2 0 1 (Multivariate) (Multivariate) #> 8 31 2 2 2 (Multivariate) (Multivariate) #> 9 32 2 0 0 0 0 #> 10 8 1 0 0 0 (Multivariate) #> 11 21 1 0 0 (Multivariate) 0 #> 12 27 1 0 0 (Multivariate) (Multivariate) #> 13 29 1 0 1 (Multivariate) (Multivariate) #> 14 30 1 0 0 0 (Multivariate) #> 15 7 0 0 0 (Multivariate) 0 #> 16 24 0 0 0 (Multivariate) 0 #> n_ICS #> 1 0 #> 2 (Multivariate) #> 3 0 #> 4 0 #> 5 0 #> 6 0 #> 7 0 #> 8 0 #> 9 0 #> 10 0 #> 11 0 #> 12 0 #> 13 (Multivariate) #> 14 0 #> 15 0 #> 16 0 # For statistical models --------------------------------------------- # select mpg, disp and hp (continuous) mt1 <- mtcars[, c(1, 3, 4)] # create some fake outliers and attach outliers to main df mt2 <- rbind(mt1, data.frame( mpg = c(37, 40), disp = c(300, 400), hp = c(110, 120) )) # fit model with outliers model <- lm(disp ~ mpg + hp, data = mt2) outliers_list <- check_outliers(model) plot(outliers_list) insight::get_data(model)[outliers_list, ] # Show outliers data #> disp mpg hp #> Maserati Bora 301 15 335 #> 2 400 40 120 # 
}"},{"path":"https://easystats.github.io/performance/reference/check_overdispersion.html","id":null,"dir":"Reference","previous_headings":"","what":"Check overdispersion (and underdispersion) of GL(M)M's — check_overdispersion","title":"Check overdispersion (and underdispersion) of GL(M)M's — check_overdispersion","text":"check_overdispersion() checks generalized linear (mixed) models overdispersion (underdispersion).","code":""},{"path":"https://easystats.github.io/performance/reference/check_overdispersion.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Check overdispersion (and underdispersion) of GL(M)M's — check_overdispersion","text":"","code":"check_overdispersion(x, ...) # S3 method for class 'performance_simres' check_overdispersion(x, alternative = c(\"two.sided\", \"less\", \"greater\"), ...)"},{"path":"https://easystats.github.io/performance/reference/check_overdispersion.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Check overdispersion (and underdispersion) of GL(M)M's — check_overdispersion","text":"x Fitted model class merMod, glmmTMB, glm, glm.nb (package MASS), object returned simulate_residuals(). ... Arguments passed simulate_residuals(). applies models zero-inflation component, models class glmmTMB nbinom1 nbinom2 family. 
alternative character string specifying alternative hypothesis.","code":""},{"path":"https://easystats.github.io/performance/reference/check_overdispersion.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Check overdispersion (and underdispersion) of GL(M)M's — check_overdispersion","text":"list results overdispersion test, like chi-squared statistics, p-value dispersion ratio.","code":""},{"path":"https://easystats.github.io/performance/reference/check_overdispersion.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Check overdispersion (and underdispersion) of GL(M)M's — check_overdispersion","text":"Overdispersion occurs observed variance higher variance theoretical model. Poisson models, variance increases mean , therefore, variance usually (roughly) equals mean value. variance much higher, data \"overdispersed\". less common case underdispersion, variance much lower mean.","code":""},{"path":"https://easystats.github.io/performance/reference/check_overdispersion.html","id":"interpretation-of-the-dispersion-ratio","dir":"Reference","previous_headings":"","what":"Interpretation of the Dispersion Ratio","title":"Check overdispersion (and underdispersion) of GL(M)M's — check_overdispersion","text":"dispersion ratio close one, Poisson model fits well data. Dispersion ratios larger one indicate overdispersion, thus negative binomial model similar might fit better data. Dispersion ratios much smaller one indicate underdispersion. 
p-value < .05 indicates either overdispersion underdispersion (the first being more common).","code":""},{"path":"https://easystats.github.io/performance/reference/check_overdispersion.html","id":"overdispersion-in-poisson-models","dir":"Reference","previous_headings":"","what":"Overdispersion in Poisson Models","title":"Check overdispersion (and underdispersion) of GL(M)M's — check_overdispersion","text":"Poisson models, overdispersion test based code Gelman Hill (2007), page 115.","code":""},{"path":"https://easystats.github.io/performance/reference/check_overdispersion.html","id":"overdispersion-in-negative-binomial-or-zero-inflated-models","dir":"Reference","previous_headings":"","what":"Overdispersion in Negative Binomial or Zero-Inflated Models","title":"Check overdispersion (and underdispersion) of GL(M)M's — check_overdispersion","text":"negative binomial (mixed) models models zero-inflation component, overdispersion test based simulated residuals (see simulate_residuals()).","code":""},{"path":"https://easystats.github.io/performance/reference/check_overdispersion.html","id":"overdispersion-in-mixed-models","dir":"Reference","previous_headings":"","what":"Overdispersion in Mixed Models","title":"Check overdispersion (and underdispersion) of GL(M)M's — check_overdispersion","text":"merMod- glmmTMB-objects, check_overdispersion() based code GLMM FAQ, section can deal overdispersion GLMMs?. Note function returns approximate estimate overdispersion parameter.
Using approach inaccurate zero-inflated negative binomial mixed models (fitted glmmTMB), thus, cases, overdispersion test based simulate_residuals() (identical check_overdispersion(simulate_residuals(model))).","code":""},{"path":"https://easystats.github.io/performance/reference/check_overdispersion.html","id":"how-to-fix-overdispersion","dir":"Reference","previous_headings":"","what":"How to fix Overdispersion","title":"Check overdispersion (and underdispersion) of GL(M)M's — check_overdispersion","text":"Overdispersion can fixed either modeling dispersion parameter, choosing different distributional family (like Quasi-Poisson, negative binomial, see Gelman Hill (2007), pages 115-116).","code":""},{"path":"https://easystats.github.io/performance/reference/check_overdispersion.html","id":"tests-based-on-simulated-residuals","dir":"Reference","previous_headings":"","what":"Tests based on simulated residuals","title":"Check overdispersion (and underdispersion) of GL(M)M's — check_overdispersion","text":"certain models, resp. model certain families, tests based simulated residuals (see simulate_residuals()). usually accurate testing models traditionally used Pearson residuals. However, simulating complex models, mixed models models zero-inflation, several important considerations. Arguments specified ... passed simulate_residuals(), relies DHARMa::simulateResiduals() (therefore, arguments ... passed DHARMa). defaults DHARMa set conservative option works models. However, many cases, help advises use different settings particular situations particular models. 
recommended read 'Details' ?DHARMa::simulateResiduals closely understand implications simulation process arguments modified get accurate results.","code":""},{"path":"https://easystats.github.io/performance/reference/check_overdispersion.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Check overdispersion (and underdispersion) of GL(M)M's — check_overdispersion","text":"Bolker B et al. (2017): GLMM FAQ. Gelman, A., Hill, J. (2007). Data analysis using regression and multilevel/hierarchical models. Cambridge; New York: Cambridge University Press.","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/reference/check_overdispersion.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Check overdispersion (and underdispersion) of GL(M)M's — check_overdispersion","text":"","code":"data(Salamanders, package = \"glmmTMB\") m <- glm(count ~ spp + mined, family = poisson, data = Salamanders) check_overdispersion(m) #> # Overdispersion test #> #> dispersion ratio = 2.946 #> Pearson's Chi-Squared = 1873.710 #> p-value = < 0.001 #> #> Overdispersion detected."},{"path":"https://easystats.github.io/performance/reference/check_predictions.html","id":null,"dir":"Reference","previous_headings":"","what":"Posterior predictive checks — check_predictions","title":"Posterior predictive checks — check_predictions","text":"Posterior predictive checks mean \"simulating replicated data fitted model comparing observed data\" (Gelman Hill, 2007, p. 158). Posterior predictive checks can used \"look systematic discrepancies real simulated data\" (Gelman et al. 2014, p. 169). performance provides posterior predictive check methods variety frequentist models (e.g., lm, merMod, glmmTMB, ...). Bayesian models, model passed bayesplot::pp_check().
check_predictions() does not work as expected, try setting verbose = TRUE get hints possible problems.","code":""},{"path":"https://easystats.github.io/performance/reference/check_predictions.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Posterior predictive checks — check_predictions","text":"","code":"check_predictions(object, ...) # Default S3 method check_predictions( object, iterations = 50, check_range = FALSE, re_formula = NULL, bandwidth = \"nrd\", type = \"density\", verbose = TRUE, ... )"},{"path":"https://easystats.github.io/performance/reference/check_predictions.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Posterior predictive checks — check_predictions","text":"object statistical model. ... Passed simulate(). iterations number draws simulate/bootstrap. check_range Logical, TRUE, includes plot minimum value original response minimum values replicated responses, maximum value. plot helps judging whether variation original data captured model (Gelman et al. 2020, p. 163). minimum maximum values y inside range related minimum maximum values yrep. re_formula Formula containing group-level effects (random effects) considered simulated data. NULL (default), condition on all random effects. NA ~0, condition on no random effects. See simulate() lme4. bandwidth character string indicating smoothing bandwidth used. Unlike stats::density(), which uses \"nrd0\" as default, the default used here is \"nrd\" (seems give plausible results non-Gaussian models). problems plotting occur, try change different value. type Plot type posterior predictive checks plot. Can \"density\", \"discrete_dots\", \"discrete_interval\" \"discrete_both\" (discrete_* options appropriate models discrete - binary, integer ordinal etc. - outcomes).
verbose Toggle warnings.","code":""},{"path":"https://easystats.github.io/performance/reference/check_predictions.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Posterior predictive checks — check_predictions","text":"data frame simulated responses original response vector.","code":""},{"path":"https://easystats.github.io/performance/reference/check_predictions.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Posterior predictive checks — check_predictions","text":"example posterior predictive checks can also used model comparison Figure 6 Gabry et al. 2019. model shown right panel (b) can simulate new data similar observed outcome model left panel (a). Thus, model (b) likely preferred model (a).","code":""},{"path":"https://easystats.github.io/performance/reference/check_predictions.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Posterior predictive checks — check_predictions","text":"Every model object simulate()-method work check_predictions(). R 3.6.0 higher, bayesplot (package imports bayesplot rstanarm brms) loaded, pp_check() also available alias check_predictions(). check_predictions() does not work as expected, try setting verbose = TRUE get hints possible problems.","code":""},{"path":"https://easystats.github.io/performance/reference/check_predictions.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Posterior predictive checks — check_predictions","text":"Gabry, J., Simpson, D., Vehtari, A., Betancourt, M., Gelman, A. (2019). Visualization in Bayesian workflow. Journal of the Royal Statistical Society: Series A (Statistics in Society), 182(2), 389–402. https://doi.org/10.1111/rssa.12378 Gelman, A., Hill, J. (2007). Data analysis using regression and multilevel/hierarchical models. Cambridge; New York: Cambridge University Press. Gelman, A., Carlin, J. B., Stern, H. S., Dunson, D. B., Vehtari, A., Rubin, D. B. (2014).
Bayesian data analysis. (Third edition). CRC Press. Gelman, A., Hill, J., Vehtari, A. (2020). Regression and Other Stories. Cambridge University Press.","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/reference/check_predictions.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Posterior predictive checks — check_predictions","text":"","code":"# linear model model <- lm(mpg ~ disp, data = mtcars) check_predictions(model) # discrete/integer outcome set.seed(99) d <- iris d$skewed <- rpois(150, 1) model <- glm( skewed ~ Species + Petal.Length + Petal.Width, family = poisson(), data = d ) check_predictions(model, type = \"discrete_both\")"},{"path":"https://easystats.github.io/performance/reference/check_residuals.html","id":null,"dir":"Reference","previous_headings":"","what":"Check uniformity of simulated residuals — check_residuals","title":"Check uniformity of simulated residuals — check_residuals","text":"check_residuals() checks generalized linear (mixed) models uniformity randomized quantile residuals, can used identify typical model misspecification problems, over-/underdispersion, zero-inflation, residual spatial temporal autocorrelation.","code":""},{"path":"https://easystats.github.io/performance/reference/check_residuals.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Check uniformity of simulated residuals — check_residuals","text":"","code":"check_residuals(x, ...) # Default S3 method check_residuals(x, alternative = c(\"two.sided\", \"less\", \"greater\"), ...)"},{"path":"https://easystats.github.io/performance/reference/check_residuals.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Check uniformity of simulated residuals — check_residuals","text":"x object returned simulate_residuals() DHARMa::simulateResiduals(). ... Passed stats::ks.test(). alternative character string specifying alternative hypothesis.
See stats::ks.test() details.","code":""},{"path":"https://easystats.github.io/performance/reference/check_residuals.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Check uniformity of simulated residuals — check_residuals","text":"p-value test statistics.","code":""},{"path":"https://easystats.github.io/performance/reference/check_residuals.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Check uniformity of simulated residuals — check_residuals","text":"Uniformity residuals checked using Kolmogorov-Smirnov test. plot() method visualize distribution residuals. test uniformity basically tests extent observed values deviate model expectations (.e. simulated values). sense, check_residuals() function similar goals like check_predictions().","code":""},{"path":"https://easystats.github.io/performance/reference/check_residuals.html","id":"tests-based-on-simulated-residuals","dir":"Reference","previous_headings":"","what":"Tests based on simulated residuals","title":"Check uniformity of simulated residuals — check_residuals","text":"certain models, resp. model certain families, tests like check_zeroinflation() check_overdispersion() based simulated residuals. usually accurate tests traditionally used Pearson residuals. However, simulating complex models, mixed models models zero-inflation, several important considerations. simulate_residuals() relies DHARMa::simulateResiduals(), additional arguments specified ... passed function. defaults DHARMa set conservative option works models. However, many cases, help advises use different settings particular situations particular models. 
recommended read 'Details' ?DHARMa::simulateResiduals closely understand implications simulation process arguments modified get accurate results.","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/reference/check_residuals.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Check uniformity of simulated residuals — check_residuals","text":"","code":"dat <- DHARMa::createData(sampleSize = 100, overdispersion = 0.5, family = poisson()) m <- glm(observedResponse ~ Environment1, family = poisson(), data = dat) res <- simulate_residuals(m) check_residuals(res) #> Warning: Non-uniformity of simulated residuals detected (p = 0.021). #>"},{"path":"https://easystats.github.io/performance/reference/check_singularity.html","id":null,"dir":"Reference","previous_headings":"","what":"Check mixed models for boundary fits — check_singularity","title":"Check mixed models for boundary fits — check_singularity","text":"Check mixed models boundary fits.","code":""},{"path":"https://easystats.github.io/performance/reference/check_singularity.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Check mixed models for boundary fits — check_singularity","text":"","code":"check_singularity(x, tolerance = 1e-05, ...)"},{"path":"https://easystats.github.io/performance/reference/check_singularity.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Check mixed models for boundary fits — check_singularity","text":"x mixed model. tolerance Indicates value convergence result accepted. larger tolerance, stricter test. ...
Currently not used.","code":""},{"path":"https://easystats.github.io/performance/reference/check_singularity.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Check mixed models for boundary fits — check_singularity","text":"TRUE model fit singular.","code":""},{"path":"https://easystats.github.io/performance/reference/check_singularity.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Check mixed models for boundary fits — check_singularity","text":"model \"singular\", means dimensions variance-covariance matrix estimated exactly zero. often occurs mixed models complex random effects structures. \"singular models statistically well defined (theoretically sensible true maximum likelihood estimate correspond singular fit), real concerns (1) singular fits correspond overfitted models may poor power; (2) chances numerical problems mis-convergence higher singular models (e.g. may computationally difficult compute profile confidence intervals models); (3) standard inferential procedures Wald statistics likelihood ratio tests may inappropriate.\" (lme4 Reference Manual) gold-standard deal singularity random-effects specification choose. Beside using fully Bayesian methods (informative priors), proposals frequentist framework: avoid fitting overly complex models, variance-covariance matrices can estimated precisely enough (Matuschek et al. 2017) use form model selection choose model balances predictive accuracy overfitting/Type I error (Bates et al. 2015, Matuschek et al. 2017) \"keep it maximal\", i.e. fit complex model consistent experimental design, removing terms required allow non-singular fit (Barr et al. 2013) since version 1.1.9, glmmTMB package allows use priors frequentist framework, too. One recommendation use Gamma prior (Chung et al. 2013). mean may vary 1 large values (like 1e8), shape parameter set value 2.5. can update() model specified prior.
glmmTMB, code look like: Large values mean parameter Gamma prior have no large impact random effects variances terms \"bias\". Thus, if 1 does not fix singular fit, can safely try larger values. Note different meaning singularity convergence: singularity indicates issue \"true\" best estimate, i.e. whether maximum likelihood estimation variance-covariance matrix random effects positive definite semi-definite. Convergence question whether can assume numerical optimization worked correctly.","code":"# \"model\" is an object of class glmmTMB prior <- data.frame( prior = \"gamma(1, 2.5)\", # mean can be 1, but even 1e8 class = \"ranef\" # for random effects ) model_with_priors <- update(model, priors = prior)"},{"path":"https://easystats.github.io/performance/reference/check_singularity.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Check mixed models for boundary fits — check_singularity","text":"Bates D, Kliegl R, Vasishth S, Baayen H. Parsimonious Mixed Models. arXiv:1506.04967, June 2015. Barr DJ, Levy R, Scheepers C, Tily HJ. Random effects structure for confirmatory hypothesis testing: Keep it maximal. Journal of Memory and Language, 68(3):255-278, April 2013. Chung Y, Rabe-Hesketh S, Dorie V, Gelman A, Liu J. 2013. \"A Nondegenerate Penalized Likelihood Estimator for Variance Parameters in Multilevel Models.\" Psychometrika 78 (4): 685–709. doi:10.1007/s11336-013-9328-2 Matuschek H, Kliegl R, Vasishth S, Baayen H, Bates D. Balancing Type I error and power in linear mixed models. Journal of Memory and Language, 94:305-315, 2017.
lme4 Reference Manual, https://cran.r-project.org/package=lme4","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/reference/check_singularity.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Check mixed models for boundary fits — check_singularity","text":"","code":"data(sleepstudy, package = \"lme4\") set.seed(123) sleepstudy$mygrp <- sample(1:5, size = 180, replace = TRUE) sleepstudy$mysubgrp <- NA for (i in 1:5) { filter_group <- sleepstudy$mygrp == i sleepstudy$mysubgrp[filter_group] <- sample(1:30, size = sum(filter_group), replace = TRUE) } model <- lme4::lmer( Reaction ~ Days + (1 | mygrp / mysubgrp) + (1 | Subject), data = sleepstudy ) #> boundary (singular) fit: see help('isSingular') check_singularity(model) #> [1] TRUE # \\dontrun{ # Fixing singularity issues using priors in glmmTMB # Example taken from `vignette(\"priors\", package = \"glmmTMB\")` dat <- readRDS(system.file( \"vignette_data\", \"gophertortoise.rds\", package = \"glmmTMB\" )) model <- glmmTMB::glmmTMB( shells ~ prev + offset(log(Area)) + factor(year) + (1 | Site), family = poisson, data = dat ) # singular fit check_singularity(model) #> [1] TRUE # impose Gamma prior on random effects parameters prior <- data.frame( prior = \"gamma(1, 2.5)\", # mean can be 1, but even 1e8 class = \"ranef\" # for random effects ) model_with_priors <- update(model, priors = prior) # no singular fit check_singularity(model_with_priors) #> [1] FALSE # }"},{"path":"https://easystats.github.io/performance/reference/check_sphericity.html","id":null,"dir":"Reference","previous_headings":"","what":"Check model for violation of sphericity — check_sphericity","title":"Check model for violation of sphericity — check_sphericity","text":"Check model violation sphericity. 
Bartlett's Test Sphericity (used correlation matrices factor analyses), see check_sphericity_bartlett.","code":""},{"path":"https://easystats.github.io/performance/reference/check_sphericity.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Check model for violation of sphericity — check_sphericity","text":"","code":"check_sphericity(x, ...)"},{"path":"https://easystats.github.io/performance/reference/check_sphericity.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Check model for violation of sphericity — check_sphericity","text":"x model object. ... Arguments passed car::Anova.","code":""},{"path":"https://easystats.github.io/performance/reference/check_sphericity.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Check model for violation of sphericity — check_sphericity","text":"Invisibly returns p-values test statistics. p-value < 0.05 indicates violation sphericity.","code":""},{"path":"https://easystats.github.io/performance/reference/check_sphericity.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Check model for violation of sphericity — check_sphericity","text":"","code":"data(Soils, package = \"carData\") soils.mod <- lm( cbind(pH, N, Dens, P, Ca, Mg, K, Na, Conduc) ~ Block + Contour * Depth, data = Soils ) check_sphericity(Manova(soils.mod)) #> OK: Data seems to be spherical (p > .999). #>"},{"path":"https://easystats.github.io/performance/reference/check_symmetry.html","id":null,"dir":"Reference","previous_headings":"","what":"Check distribution symmetry — check_symmetry","title":"Check distribution symmetry — check_symmetry","text":"Uses Hotelling Solomons test symmetry testing standardized nonparametric skew (\\(\\frac{(Mean - Median)}{SD}\\)) different 0. 
underlying assumption Wilcoxon signed-rank test.","code":""},{"path":"https://easystats.github.io/performance/reference/check_symmetry.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Check distribution symmetry — check_symmetry","text":"","code":"check_symmetry(x, ...)"},{"path":"https://easystats.github.io/performance/reference/check_symmetry.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Check distribution symmetry — check_symmetry","text":"x Model numeric vector. ... Not used.","code":""},{"path":"https://easystats.github.io/performance/reference/check_symmetry.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Check distribution symmetry — check_symmetry","text":"","code":"V <- suppressWarnings(wilcox.test(mtcars$mpg)) check_symmetry(V) #> OK: Data appears symmetrical (p = 0.119). #>"},{"path":"https://easystats.github.io/performance/reference/check_zeroinflation.html","id":null,"dir":"Reference","previous_headings":"","what":"Check for zero-inflation in count models — check_zeroinflation","title":"Check for zero-inflation in count models — check_zeroinflation","text":"check_zeroinflation() checks whether count models over- or underfitting zeros outcome.","code":""},{"path":"https://easystats.github.io/performance/reference/check_zeroinflation.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Check for zero-inflation in count models — check_zeroinflation","text":"","code":"check_zeroinflation(x, ...) # Default S3 method check_zeroinflation(x, tolerance = 0.05, ...) # S3 method for class 'performance_simres' check_zeroinflation( x, tolerance = 0.1, alternative = c(\"two.sided\", \"less\", \"greater\"), ...
)"},{"path":"https://easystats.github.io/performance/reference/check_zeroinflation.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Check for zero-inflation in count models — check_zeroinflation","text":"x Fitted model class merMod, glmmTMB, glm, glm.nb (package MASS). ... Arguments passed simulate_residuals(). applies models zero-inflation component, models class glmmTMB nbinom1 nbinom2 family. tolerance tolerance ratio observed predicted zeros considered over- or underfitting zeros. ratio 1 +/- tolerance considered OK, ratio beyond threshold indicate over- or underfitting. alternative character string specifying alternative hypothesis.","code":""},{"path":"https://easystats.github.io/performance/reference/check_zeroinflation.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Check for zero-inflation in count models — check_zeroinflation","text":"list information amount predicted observed zeros outcome, well ratio two values.","code":""},{"path":"https://easystats.github.io/performance/reference/check_zeroinflation.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Check for zero-inflation in count models — check_zeroinflation","text":"amount observed zeros larger amount predicted zeros, model underfitting zeros, indicates zero-inflation data. cases, recommended use negative binomial zero-inflated models. case negative binomial models, models zero-inflation component, hurdle models, results check_zeroinflation() based simulate_residuals(), i.e. check_zeroinflation(simulate_residuals(model)) internally called necessary.","code":""},{"path":"https://easystats.github.io/performance/reference/check_zeroinflation.html","id":"tests-based-on-simulated-residuals","dir":"Reference","previous_headings":"","what":"Tests based on simulated residuals","title":"Check for zero-inflation in count models — check_zeroinflation","text":"certain models, resp.
model certain families, tests based simulated residuals (see simulate_residuals()). usually accurate testing models traditionally used Pearson residuals. However, simulating complex models, mixed models models zero-inflation, several important considerations. Arguments specified ... passed simulate_residuals(), relies DHARMa::simulateResiduals() (therefore, arguments ... passed DHARMa). defaults DHARMa set conservative option works models. However, many cases, help advises use different settings particular situations particular models. recommended read 'Details' ?DHARMa::simulateResiduals closely understand implications simulation process arguments modified get accurate results.","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/reference/check_zeroinflation.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Check for zero-inflation in count models — check_zeroinflation","text":"","code":"data(Salamanders, package = \"glmmTMB\") m <- glm(count ~ spp + mined, family = poisson, data = Salamanders) check_zeroinflation(m) #> # Check for zero-inflation #> #> Observed zeros: 387 #> Predicted zeros: 298 #> Ratio: 0.77 #> #> Model is underfitting zeros (probable zero-inflation). 
# for models with zero-inflation component, it's better to carry out # the check for zero-inflation using simulated residuals m <- glmmTMB::glmmTMB( count ~ spp + mined, ziformula = ~ mined + spp, family = poisson, data = Salamanders ) res <- simulate_residuals(m) check_zeroinflation(res) #> # Check for zero-inflation #> #> Observed zeros: 387 #> Predicted zeros: 387 #> Ratio: 1.00 #> #> Model seems ok, ratio of observed and predicted zeros is within the #> tolerance range (p > .999)."},{"path":"https://easystats.github.io/performance/reference/classify_distribution.html","id":null,"dir":"Reference","previous_headings":"","what":"Classify the distribution of a model-family using machine learning — classify_distribution","title":"Classify the distribution of a model-family using machine learning — classify_distribution","text":"Classify distribution model-family using machine learning","code":""},{"path":"https://easystats.github.io/performance/reference/classify_distribution.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Classify the distribution of a model-family using machine learning — classify_distribution","text":"trained model classify distributions, used check_distribution() function.","code":""},{"path":"https://easystats.github.io/performance/reference/compare_performance.html","id":null,"dir":"Reference","previous_headings":"","what":"Compare performance of different models — compare_performance","title":"Compare performance of different models — compare_performance","text":"compare_performance() computes indices model performance different models hence allows comparison indices across models.","code":""},{"path":"https://easystats.github.io/performance/reference/compare_performance.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Compare performance of different models — compare_performance","text":"","code":"compare_performance( ..., metrics = \"all\", rank = FALSE, 
estimator = \"ML\", verbose = TRUE )"},{"path":"https://easystats.github.io/performance/reference/compare_performance.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Compare performance of different models — compare_performance","text":"... Multiple model objects (also different classes). metrics Can \"\", \"common\" character vector metrics computed. See related documentation() object's class details. rank Logical, TRUE, models ranked according 'best' overall model performance. See 'Details'. estimator linear models. Corresponds different estimators standard deviation errors. estimator = \"ML\" (default, except performance_aic() model object class lmerMod), scaling done n (biased ML estimator), equivalent using AIC(logLik()). Setting \"REML\" give results AIC(logLik(..., REML = TRUE)). verbose Toggle warnings.","code":""},{"path":"https://easystats.github.io/performance/reference/compare_performance.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Compare performance of different models — compare_performance","text":"data frame one row per model one column per \"index\" (see metrics).","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/reference/compare_performance.html","id":"model-weights","dir":"Reference","previous_headings":"","what":"Model Weights","title":"Compare performance of different models — compare_performance","text":"information criteria (IC) requested metrics (.e., \"\", \"common\", \"AIC\", \"AICc\", \"BIC\", \"WAIC\", \"LOOIC\"), model weights based criteria also computed. IC except LOOIC, weights computed w = exp(-0.5 * delta_ic) / sum(exp(-0.5 * delta_ic)), delta_ic difference model's IC value smallest IC value model set (Burnham Anderson, 2002). 
For LOOIC, weights are computed as \"stacking weights\" using loo::stacking_weights().","code":""},{"path":"https://easystats.github.io/performance/reference/compare_performance.html","id":"ranking-models","dir":"Reference","previous_headings":"","what":"Ranking Models","title":"Compare performance of different models — compare_performance","text":"If rank = TRUE, a new column Performance_Score is returned. This score ranges from 0% to 100%, with higher values indicating better model performance. Note that the score values do not necessarily sum up to 100%. Rather, the calculation is based on normalizing all indices (i.e. rescaling them to a range from 0 to 1), and taking the mean value of all indices for each model. This is a rather quick heuristic, but might be helpful as an exploratory index. In particular when models are of different types (e.g. mixed models, classical linear models, logistic regression, ...), not all indices are computed for each model. In case an index cannot be calculated for a specific model type, this model gets an NA value. All indices that have any NAs are excluded from calculating the performance score. There is a plot()-method for compare_performance(), which creates a \"spiderweb\" plot, where the different indices are normalized and larger values indicate better model performance. Hence, points closer to the center indicate worse fit indices (see online-documentation for details).","code":""},{"path":"https://easystats.github.io/performance/reference/compare_performance.html","id":"reml-versus-ml-estimator","dir":"Reference","previous_headings":"","what":"REML versus ML estimator","title":"Compare performance of different models — compare_performance","text":"By default, estimator = \"ML\", which means that values from information criteria (AIC, AICc, BIC) for specific model classes (like models from lme4) are based on the ML-estimator, while the default behaviour of AIC() for such classes is setting REML = TRUE. This default is intentional, because comparing information criteria based on REML fits is usually not valid (it might be useful, though, if all models share the same fixed effects - however, this is usually not the case for nested models, which is a prerequisite for the LRT). Set estimator = \"REML\" to explicitly return the same (AIC/...)
values defaults AIC.merMod().","code":""},{"path":"https://easystats.github.io/performance/reference/compare_performance.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Compare performance of different models — compare_performance","text":"also plot()-method implemented see-package.","code":""},{"path":"https://easystats.github.io/performance/reference/compare_performance.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Compare performance of different models — compare_performance","text":"Burnham, K. P., Anderson, D. R. (2002). Model selection multimodel inference: practical information-theoretic approach (2nd ed.). Springer-Verlag. doi:10.1007/b97636","code":""},{"path":"https://easystats.github.io/performance/reference/compare_performance.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Compare performance of different models — compare_performance","text":"","code":"data(iris) lm1 <- lm(Sepal.Length ~ Species, data = iris) lm2 <- lm(Sepal.Length ~ Species + Petal.Length, data = iris) lm3 <- lm(Sepal.Length ~ Species * Petal.Length, data = iris) compare_performance(lm1, lm2, lm3) #> # Comparison of Model Performance Indices #> #> Name | Model | AIC (weights) | AICc (weights) | BIC (weights) | R2 #> --------------------------------------------------------------------- #> lm1 | lm | 231.5 (<.001) | 231.7 (<.001) | 243.5 (<.001) | 0.619 #> lm2 | lm | 106.2 (0.566) | 106.6 (0.611) | 121.3 (0.964) | 0.837 #> lm3 | lm | 106.8 (0.434) | 107.6 (0.389) | 127.8 (0.036) | 0.840 #> #> Name | R2 (adj.) | RMSE | Sigma #> -------------------------------- #> lm1 | 0.614 | 0.510 | 0.515 #> lm2 | 0.833 | 0.333 | 0.338 #> lm3 | 0.835 | 0.330 | 0.336 compare_performance(lm1, lm2, lm3, rank = TRUE) #> # Comparison of Model Performance Indices #> #> Name | Model | R2 | R2 (adj.) 
| RMSE | Sigma | AIC weights | AICc weights #> ----------------------------------------------------------------------------- #> lm2 | lm | 0.837 | 0.833 | 0.333 | 0.338 | 0.566 | 0.611 #> lm3 | lm | 0.840 | 0.835 | 0.330 | 0.336 | 0.434 | 0.389 #> lm1 | lm | 0.619 | 0.614 | 0.510 | 0.515 | 3.65e-28 | 4.23e-28 #> #> Name | BIC weights | Performance-Score #> -------------------------------------- #> lm2 | 0.964 | 99.23% #> lm3 | 0.036 | 77.70% #> lm1 | 2.80e-27 | 0.00% m1 <- lm(mpg ~ wt + cyl, data = mtcars) m2 <- glm(vs ~ wt + mpg, data = mtcars, family = \"binomial\") m3 <- lme4::lmer(Petal.Length ~ Sepal.Length + (1 | Species), data = iris) compare_performance(m1, m2, m3) #> When comparing models, please note that probably not all models were fit #> from same data. #> # Comparison of Model Performance Indices #> #> Name | Model | AIC (weights) | AICc (weights) | BIC (weights) | RMSE | Sigma #> ------------------------------------------------------------------------------- #> m1 | lm | 156.0 (<.001) | 157.5 (<.001) | 161.9 (<.001) | 2.444 | 2.568 #> m2 | glm | 31.3 (>.999) | 32.2 (>.999) | 35.7 (>.999) | 0.359 | 1.000 #> m3 | lmerMod | 74.6 (<.001) | 74.9 (<.001) | 86.7 (<.001) | 0.279 | 0.283 #> #> Name | R2 | R2 (adj.) | Tjur's R2 | Log_loss | Score_log | Score_spherical #> ----------------------------------------------------------------------------- #> m1 | 0.830 | 0.819 | | | | #> m2 | | | 0.478 | 0.395 | -14.903 | 0.095 #> m3 | | | | | | #> #> Name | PCP | R2 (cond.) | R2 (marg.) 
| ICC #> ---------------------------------------------- #> m1 | | | | #> m2 | 0.743 | | | #> m3 | | 0.972 | 0.096 | 0.969"},{"path":"https://easystats.github.io/performance/reference/cronbachs_alpha.html","id":null,"dir":"Reference","previous_headings":"","what":"Cronbach's Alpha for Items or Scales — cronbachs_alpha","title":"Cronbach's Alpha for Items or Scales — cronbachs_alpha","text":"Compute various measures internal consistencies tests item-scales questionnaires.","code":""},{"path":"https://easystats.github.io/performance/reference/cronbachs_alpha.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Cronbach's Alpha for Items or Scales — cronbachs_alpha","text":"","code":"cronbachs_alpha(x, ...)"},{"path":"https://easystats.github.io/performance/reference/cronbachs_alpha.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Cronbach's Alpha for Items or Scales — cronbachs_alpha","text":"x matrix data frame. ... Currently used.","code":""},{"path":"https://easystats.github.io/performance/reference/cronbachs_alpha.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Cronbach's Alpha for Items or Scales — cronbachs_alpha","text":"Cronbach's Alpha value x.","code":""},{"path":"https://easystats.github.io/performance/reference/cronbachs_alpha.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Cronbach's Alpha for Items or Scales — cronbachs_alpha","text":"Cronbach's Alpha value x. 
A value closer to 1 indicates greater internal consistency, where usually the following rule of thumb is applied to interpret the results: α < 0.5 is unacceptable, 0.5 < α < 0.6 is poor, 0.6 < α < 0.7 is questionable, 0.7 < α < 0.8 is acceptable, and everything > 0.8 is good or excellent.","code":""},{"path":"https://easystats.github.io/performance/reference/cronbachs_alpha.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Cronbach's Alpha for Items or Scales — cronbachs_alpha","text":"Bland, J. M., and Altman, D. G. Statistics notes: Cronbach's alpha. BMJ 1997;314:572. 10.1136/bmj.314.7080.572","code":""},{"path":"https://easystats.github.io/performance/reference/cronbachs_alpha.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Cronbach's Alpha for Items or Scales — cronbachs_alpha","text":"","code":"data(mtcars) x <- mtcars[, c(\"cyl\", \"gear\", \"carb\", \"hp\")] cronbachs_alpha(x) #> [1] 0.09463206"},{"path":"https://easystats.github.io/performance/reference/display.performance_model.html","id":null,"dir":"Reference","previous_headings":"","what":"Print tables in different output formats — display.performance_model","title":"Print tables in different output formats — display.performance_model","text":"Prints tables (i.e. data frames) in different output formats. print_md() is an alias for display(format = \"markdown\").","code":""},{"path":"https://easystats.github.io/performance/reference/display.performance_model.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Print tables in different output formats — display.performance_model","text":"","code":"# S3 method for class 'performance_model' display(object, format = \"markdown\", digits = 2, caption = NULL, ...) # S3 method for class 'performance_model' print_md( x, digits = 2, caption = \"Indices of model performance\", layout = \"horizontal\", ...
) # S3 method for class 'compare_performance' print_md( x, digits = 2, caption = \"Comparison of Model Performance Indices\", layout = \"horizontal\", ... )"},{"path":"https://easystats.github.io/performance/reference/display.performance_model.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Print tables in different output formats — display.performance_model","text":"object, x object returned model_performance() compare_performance(). summary. format String, indicating output format. Currently, \"markdown\" supported. digits Number decimal places. caption Table caption string. NULL, table caption printed. ... Currently used. layout Table layout (can either \"horizontal\" \"vertical\").","code":""},{"path":"https://easystats.github.io/performance/reference/display.performance_model.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Print tables in different output formats — display.performance_model","text":"character vector. format = \"markdown\", return value character vector markdown-table format.","code":""},{"path":"https://easystats.github.io/performance/reference/display.performance_model.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Print tables in different output formats — display.performance_model","text":"display() useful table-output functions, usually printed formatted text-table console, formatted pretty table-rendering markdown documents, knitted rmarkdown PDF Word files. See vignette examples.","code":""},{"path":"https://easystats.github.io/performance/reference/display.performance_model.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Print tables in different output formats — display.performance_model","text":"","code":"model <- lm(mpg ~ wt + cyl, data = mtcars) mp <- model_performance(model) display(mp) #> #> #> |AIC | AICc | BIC | R2 | R2 (adj.) 
| RMSE | Sigma | #> |:------|:------:|:------:|:----:|:---------:|:----:|:-----:| #> |156.01 | 157.49 | 161.87 | 0.83 | 0.82 | 2.44 | 2.57 |"},{"path":"https://easystats.github.io/performance/reference/icc.html","id":null,"dir":"Reference","previous_headings":"","what":"Intraclass Correlation Coefficient (ICC) — icc","title":"Intraclass Correlation Coefficient (ICC) — icc","text":"function calculates intraclass-correlation coefficient (ICC) - sometimes also called variance partition coefficient (VPC) repeatability - mixed effects models. ICC can calculated models supported insight::get_variance(). models fitted brms-package, icc() might fail due large variety models families supported brms-package. cases, alternative ICC variance_decomposition(), based posterior predictive distribution (see 'Details').","code":""},{"path":"https://easystats.github.io/performance/reference/icc.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Intraclass Correlation Coefficient (ICC) — icc","text":"","code":"icc( model, by_group = FALSE, tolerance = 1e-05, ci = NULL, iterations = 100, ci_method = NULL, null_model = NULL, approximation = \"lognormal\", model_component = NULL, verbose = TRUE, ... ) variance_decomposition(model, re_formula = NULL, robust = TRUE, ci = 0.95, ...)"},{"path":"https://easystats.github.io/performance/reference/icc.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Intraclass Correlation Coefficient (ICC) — icc","text":"model (Bayesian) mixed effects model. by_group Logical, TRUE, icc() returns variance components random-effects level (multiple levels). See 'Details'. tolerance Tolerance singularity check random effects, decide whether compute random effect variances . Indicates value convergence result accepted. larger tolerance , stricter test . See performance::check_singularity(). ci Confidence resp. credible interval level. 
icc(), r2(), rmse(), confidence intervals based bootstrapped samples ICC, R2 RMSE value. See iterations. iterations Number bootstrap-replicates computing confidence intervals ICC, R2, RMSE etc. ci_method Character string, indicating bootstrap-method. NULL (default), case lme4::bootMer() used bootstrapped confidence intervals. However, bootstrapped intervals calculated way, try ci_method = \"boot\", falls back boot::boot(). may successfully return bootstrapped confidence intervals, bootstrapped samples may appropriate multilevel structure model. also option ci_method = \"analytical\", tries calculate analytical confidence assuming chi-squared distribution. However, intervals rather inaccurate often narrow. recommended calculate bootstrapped confidence intervals mixed models. null_model Optional, null model compute random effect variances, passed insight::get_variance(). Usually required calculation r-squared ICC fails null_model specified. calculating null model takes longer already fit null model, can pass , , speed process. approximation Character string, indicating approximation method distribution-specific (observation level, residual) variance. applies non-Gaussian models. Can \"lognormal\" (default), \"delta\" \"trigamma\". binomial models, default theoretical distribution specific variance, however, can also \"observation_level\". See Nakagawa et al. 2017, particular supplement 2, details. model_component models can zero-inflation component, specify component variances returned. NULL \"full\" (default), conditional zero-inflation component taken account. \"conditional\", conditional component considered. verbose Toggle warnings messages. ... Arguments passed lme4::bootMer() boot::boot() bootstrapped ICC, R2, RMSE etc.; variance_decomposition(), arguments passed brms::posterior_predict(). re_formula Formula containing group-level effects considered prediction. NULL (default), include group-level effects. 
Else, instance nested models, name specific group-level effect calculate variance decomposition group-level. See 'Details' ?brms::posterior_predict. robust Logical, TRUE, median instead mean used calculate central tendency variances.","code":""},{"path":"https://easystats.github.io/performance/reference/icc.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Intraclass Correlation Coefficient (ICC) — icc","text":"list two values, adjusted ICC unadjusted ICC. variance_decomposition(), list two values, decomposed ICC well credible intervals ICC.","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/reference/icc.html","id":"interpretation","dir":"Reference","previous_headings":"","what":"Interpretation","title":"Intraclass Correlation Coefficient (ICC) — icc","text":"ICC can interpreted \"proportion variance explained grouping structure population\". grouping structure entails measurements organized groups (e.g., test scores school can grouped classroom multiple classrooms classroom administered test) ICC indexes strongly measurements group resemble . index goes 0, grouping conveys information, 1, observations group identical (Gelman Hill, 2007, p. 258). word, ICC - sometimes conceptualized measurement repeatability - \"can also interpreted expected correlation two randomly drawn units group\" (Hox 2010: 15), although definition might apply mixed models complex random effects structures. 
The ICC can help determine whether the mixed model is even necessary: an ICC of zero (or very close to zero) means the observations within clusters are no more similar than observations from different clusters, and setting it up as a random factor might not be necessary.","code":""},{"path":"https://easystats.github.io/performance/reference/icc.html","id":"difference-with-r-","dir":"Reference","previous_headings":"","what":"Difference with R2","title":"Intraclass Correlation Coefficient (ICC) — icc","text":"While the coefficient of determination R2 (that can be computed with r2()) quantifies the proportion of variance explained by a statistical model, its definition in the context of mixed models is more complex (hence, different methods to compute a proxy exist). The ICC is related to R2 because both are ratios of variance components. More precisely, R2 is the proportion of the explained variance (of the full model), while the ICC is the proportion of explained variance that can be attributed to the random effects. In simple cases, the ICC corresponds to the difference between the conditional R2 and the marginal R2 (see r2_nakagawa()).","code":""},{"path":"https://easystats.github.io/performance/reference/icc.html","id":"calculation","dir":"Reference","previous_headings":"","what":"Calculation","title":"Intraclass Correlation Coefficient (ICC) — icc","text":"The ICC is calculated by dividing the random effect variance, σ2i, by the total variance, i.e. the sum of the random effect variance and the residual variance, σ2ε.","code":""},{"path":"https://easystats.github.io/performance/reference/icc.html","id":"adjusted-and-unadjusted-icc","dir":"Reference","previous_headings":"","what":"Adjusted and unadjusted ICC","title":"Intraclass Correlation Coefficient (ICC) — icc","text":"icc() calculates an adjusted and an unadjusted ICC, which both take all sources of uncertainty (i.e. of all random effects) into account. While the adjusted ICC only relates to the random effects, the unadjusted ICC also takes the fixed effects variances into account, more precisely, the fixed effects variance is added to the denominator of the formula to calculate the ICC (see Nakagawa et al. 2017). Typically, the adjusted ICC is of interest when the analysis of random effects is of interest.
icc() returns meaningful ICC also complex random effects structures, like models random slopes nested design (two levels) applicable models distributions Gaussian. details computation variances, see ?insight::get_variance.","code":""},{"path":"https://easystats.github.io/performance/reference/icc.html","id":"icc-for-unconditional-and-conditional-models","dir":"Reference","previous_headings":"","what":"ICC for unconditional and conditional models","title":"Intraclass Correlation Coefficient (ICC) — icc","text":"Usually, ICC calculated null model (\"unconditional model\"). However, according Raudenbush Bryk (2002) Rabe-Hesketh Skrondal (2012) also feasible compute ICC full models covariates (\"conditional models\") compare much, e.g., level-2 variable explains portion variation grouping structure (random intercept).","code":""},{"path":"https://easystats.github.io/performance/reference/icc.html","id":"icc-for-specific-group-levels","dir":"Reference","previous_headings":"","what":"ICC for specific group-levels","title":"Intraclass Correlation Coefficient (ICC) — icc","text":"proportion variance specific levels related overall model can computed setting by_group = TRUE. reported ICC variance (random effect) group compared total variance model. mixed models simple random intercept, identical classical (adjusted) ICC.","code":""},{"path":"https://easystats.github.io/performance/reference/icc.html","id":"variance-decomposition-for-brms-models","dir":"Reference","previous_headings":"","what":"Variance decomposition for brms-models","title":"Intraclass Correlation Coefficient (ICC) — icc","text":"model class brmsfit, icc() might fail due large variety models families supported brms package. cases, variance_decomposition() alternative ICC measure. function calculates variance decomposition based posterior predictive distribution. 
case, first, draws posterior predictive distribution conditioned group-level terms (posterior_predict(..., re_formula = NA)) calculated well draws distribution conditioned random effects (default, unless specified else re_formula) taken. , second, variances draws calculated. \"ICC\" ratio two variances. recommended way analyse random-effect-variances non-Gaussian models. possible compare variances across models, also specifying different group-level terms via re_formula-argument. Sometimes, variance posterior predictive distribution large, variance ratio output makes sense, e.g. negative. cases, might help use robust = TRUE.","code":""},{"path":"https://easystats.github.io/performance/reference/icc.html","id":"supported-models-and-model-families","dir":"Reference","previous_headings":"","what":"Supported models and model families","title":"Intraclass Correlation Coefficient (ICC) — icc","text":"single variance components required calculate marginal conditional r-squared values calculated using insight::get_variance() function. results validated solutions provided Nakagawa et al. (2017), particular examples shown Supplement 2 paper. model families validated results MuMIn package. means r-squared values returned r2_nakagawa() accurate reliable following mixed models model families: Bernoulli (logistic) regression Binomial regression (binary outcomes) Poisson Quasi-Poisson regression Negative binomial regression (including nbinom1, nbinom2 nbinom12 families) Gaussian regression (linear models) Gamma regression Tweedie regression Beta regression Ordered beta regression Following model families yet validated, work: Zero-inflated hurdle models Beta-binomial regression Compound Poisson regression Generalized Poisson regression Log-normal regression Skew-normal regression Extracting variance components models zero-inflation part straightforward, definitely clear distribution-specific variance calculated. 
Therefore, it is recommended to carefully inspect the results, and probably validate them against other models, e.g. Bayesian models (although results may be only roughly comparable). Log-normal regressions (e.g. the lognormal() family in glmmTMB or gaussian(\"log\")) often have a very low fixed effects variance (if they were calculated as suggested by Nakagawa et al. 2017). This results in very low ICC or r-squared values, which may not be meaningful.","code":""},{"path":"https://easystats.github.io/performance/reference/icc.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Intraclass Correlation Coefficient (ICC) — icc","text":"Hox, J. J. (2010). Multilevel analysis: techniques and applications (2nd ed). New York: Routledge. Nakagawa, S., Johnson, P. C. D., and Schielzeth, H. (2017). The coefficient of determination R2 and intra-class correlation coefficient from generalized linear mixed-effects models revisited and expanded. Journal of The Royal Society Interface, 14(134), 20170213. Rabe-Hesketh, S., and Skrondal, A. (2012). Multilevel and longitudinal modeling using Stata (3rd ed). College Station, Tex: Stata Press Publication. Raudenbush, S. W., and Bryk, A. S. (2002). Hierarchical linear models: applications and data analysis methods (2nd ed).
Thousand Oaks: Sage Publications.","code":""},{"path":"https://easystats.github.io/performance/reference/icc.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Intraclass Correlation Coefficient (ICC) — icc","text":"","code":"model <- lme4::lmer(Sepal.Length ~ Petal.Length + (1 | Species), data = iris) icc(model) #> # Intraclass Correlation Coefficient #> #> Adjusted ICC: 0.910 #> Unadjusted ICC: 0.311 # ICC for specific group-levels data(sleepstudy, package = \"lme4\") set.seed(12345) sleepstudy$grp <- sample(1:5, size = 180, replace = TRUE) sleepstudy$subgrp <- NA for (i in 1:5) { filter_group <- sleepstudy$grp == i sleepstudy$subgrp[filter_group] <- sample(1:30, size = sum(filter_group), replace = TRUE) } model <- lme4::lmer( Reaction ~ Days + (1 | grp / subgrp) + (1 | Subject), data = sleepstudy ) icc(model, by_group = TRUE) #> # ICC by Group #> #> Group | ICC #> ------------------ #> subgrp:grp | 0.017 #> Subject | 0.589 #> grp | 0.001"},{"path":"https://easystats.github.io/performance/reference/item_difficulty.html","id":null,"dir":"Reference","previous_headings":"","what":"Difficulty of Questionnaire Items — item_difficulty","title":"Difficulty of Questionnaire Items — item_difficulty","text":"Compute various measures internal consistencies tests item-scales questionnaires.","code":""},{"path":"https://easystats.github.io/performance/reference/item_difficulty.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Difficulty of Questionnaire Items — item_difficulty","text":"","code":"item_difficulty(x, maximum_value = NULL)"},{"path":"https://easystats.github.io/performance/reference/item_difficulty.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Difficulty of Questionnaire Items — item_difficulty","text":"x Depending function, x may matrix returned cor()-function, data frame items (e.g. test questionnaire). 
maximum_value Numeric value, indicating the maximum value of an item. If NULL (default), the maximum is taken from the maximum value of all columns in x (assuming that the maximum value at least appears once in the data). If NA, each item's maximum value is taken as maximum. If the required maximum value is not present in the data, specify the theoretical maximum using maximum_value.","code":""},{"path":"https://easystats.github.io/performance/reference/item_difficulty.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Difficulty of Questionnaire Items — item_difficulty","text":"A data frame with three columns: the name(s) of the item(s), the item difficulties for each item, and the ideal item difficulty.","code":""},{"path":"https://easystats.github.io/performance/reference/item_difficulty.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Difficulty of Questionnaire Items — item_difficulty","text":"Item difficulty of an item is defined as the quotient of the sum of actually achieved scores for this item and the maximum achievable score. This function calculates the item difficulty, which should range between 0.2 and 0.8. Lower values are a signal for more difficult items, while higher values close to one are a sign for easier items. The ideal value for item difficulty is p + (1 - p) / 2, where p = 1 / max(x). In most cases, the ideal item difficulty lies between 0.5 and 0.8.","code":""},{"path":"https://easystats.github.io/performance/reference/item_difficulty.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Difficulty of Questionnaire Items — item_difficulty","text":"Bortz, J., and Döring, N. (2006). Quantitative Methoden der Datenerhebung. In J. Bortz and N. Döring, Forschungsmethoden und Evaluation. Springer: Berlin, Heidelberg: 137–293 Kelava A, Moosbrugger H (2020). Deskriptivstatistische Itemanalyse und Testwertbestimmung. In: Moosbrugger H, Kelava A, editors. Testtheorie und Fragebogenkonstruktion.
Berlin, Heidelberg: Springer, 143–158","code":""},{"path":"https://easystats.github.io/performance/reference/item_difficulty.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Difficulty of Questionnaire Items — item_difficulty","text":"","code":"data(mtcars) x <- mtcars[, c(\"cyl\", \"gear\", \"carb\", \"hp\")] item_difficulty(x) #> Item Difficulty #> #> Item | Difficulty | Ideal #> ------------------------- #> cyl | 0.02 | 0.50 #> gear | 0.01 | 0.50 #> carb | 0.01 | 0.50 #> hp | 0.44 | 0.50"},{"path":"https://easystats.github.io/performance/reference/item_discrimination.html","id":null,"dir":"Reference","previous_headings":"","what":"Discrimination of Questionnaire Items — item_discrimination","title":"Discrimination of Questionnaire Items — item_discrimination","text":"Compute various measures internal consistencies tests item-scales questionnaires.","code":""},{"path":"https://easystats.github.io/performance/reference/item_discrimination.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Discrimination of Questionnaire Items — item_discrimination","text":"","code":"item_discrimination(x, standardize = FALSE)"},{"path":"https://easystats.github.io/performance/reference/item_discrimination.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Discrimination of Questionnaire Items — item_discrimination","text":"x matrix data frame. standardize Logical, TRUE, data frame's vectors standardized. 
Recommended when the variables have different measures / scales.","code":""},{"path":"https://easystats.github.io/performance/reference/item_discrimination.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Discrimination of Questionnaire Items — item_discrimination","text":"A data frame with the item discrimination (corrected item-total correlations) for each item of the scale.","code":""},{"path":"https://easystats.github.io/performance/reference/item_discrimination.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Discrimination of Questionnaire Items — item_discrimination","text":"This function calculates the item discriminations (corrected item-total correlations for each item of x with the remaining items) for each item of a scale. The absolute value of the item discrimination indices should be above 0.2. An index between 0.2 and 0.4 is considered as \"fair\", while a satisfactory index ranges from 0.4 to 0.7. Items with low discrimination indices are often ambiguously worded and should be examined. Items with negative indices should be examined to determine why a negative value was obtained (e.g. reversed answer categories regarding positive and negative poles).","code":""},{"path":"https://easystats.github.io/performance/reference/item_discrimination.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Discrimination of Questionnaire Items — item_discrimination","text":"Kelava A, Moosbrugger H (2020). Deskriptivstatistische Itemanalyse und Testwertbestimmung. In: Moosbrugger H, Kelava A, editors. Testtheorie und Fragebogenkonstruktion.
Berlin, Heidelberg: Springer, 143–158","code":""},{"path":"https://easystats.github.io/performance/reference/item_discrimination.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Discrimination of Questionnaire Items — item_discrimination","text":"","code":"data(mtcars) x <- mtcars[, c(\"cyl\", \"gear\", \"carb\", \"hp\")] item_discrimination(x) #> Item Discrimination #> #> Item | Discrimination #> --------------------- #> cyl | 0.83 #> gear | -0.13 #> carb | 0.75 #> hp | 0.88"},{"path":"https://easystats.github.io/performance/reference/item_intercor.html","id":null,"dir":"Reference","previous_headings":"","what":"Mean Inter-Item-Correlation — item_intercor","title":"Mean Inter-Item-Correlation — item_intercor","text":"Compute various measures internal consistencies tests item-scales questionnaires.","code":""},{"path":"https://easystats.github.io/performance/reference/item_intercor.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Mean Inter-Item-Correlation — item_intercor","text":"","code":"item_intercor(x, method = c(\"pearson\", \"spearman\", \"kendall\"))"},{"path":"https://easystats.github.io/performance/reference/item_intercor.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Mean Inter-Item-Correlation — item_intercor","text":"x matrix returned cor()-function, data frame items (e.g. test questionnaire). method Correlation computation method. May one \"pearson\" (default), \"spearman\" \"kendall\". 
may use initial letter .","code":""},{"path":"https://easystats.github.io/performance/reference/item_intercor.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Mean Inter-Item-Correlation — item_intercor","text":"mean inter-item-correlation value x.","code":""},{"path":"https://easystats.github.io/performance/reference/item_intercor.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Mean Inter-Item-Correlation — item_intercor","text":"function calculates mean inter-item-correlation, .e. correlation matrix x computed (unless x already matrix returned cor() function) mean sum items' correlation values returned. Requires either data frame computed cor() object. \"Ideally, average inter-item correlation set items 0.20 0.40, suggesting items reasonably homogeneous, contain sufficiently unique variance isomorphic . values lower 0.20, items may representative content domain. values higher 0.40, items may capturing small bandwidth construct.\" (Piedmont 2014)","code":""},{"path":"https://easystats.github.io/performance/reference/item_intercor.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Mean Inter-Item-Correlation — item_intercor","text":"Piedmont RL. 2014. Inter-item Correlations. : Michalos AC (eds) Encyclopedia Quality Life Well-Research. Dordrecht: Springer, 3303-3304. 
doi:10.1007/978-94-007-0753-5_1493","code":""},{"path":"https://easystats.github.io/performance/reference/item_intercor.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Mean Inter-Item-Correlation — item_intercor","text":"","code":"data(mtcars) x <- mtcars[, c(\"cyl\", \"gear\", \"carb\", \"hp\")] item_intercor(x) #> [1] 0.294155"},{"path":"https://easystats.github.io/performance/reference/item_reliability.html","id":null,"dir":"Reference","previous_headings":"","what":"Reliability Test for Items or Scales — item_reliability","title":"Reliability Test for Items or Scales — item_reliability","text":"Compute various measures internal consistencies tests item-scales questionnaires.","code":""},{"path":"https://easystats.github.io/performance/reference/item_reliability.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Reliability Test for Items or Scales — item_reliability","text":"","code":"item_reliability(x, standardize = FALSE, digits = 3)"},{"path":"https://easystats.github.io/performance/reference/item_reliability.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Reliability Test for Items or Scales — item_reliability","text":"x matrix data frame. standardize Logical, TRUE, data frame's vectors standardized. Recommended variables different measures / scales. 
digits Amount digits returned values.","code":""},{"path":"https://easystats.github.io/performance/reference/item_reliability.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Reliability Test for Items or Scales — item_reliability","text":"data frame corrected item-total correlations (item discrimination, column item_discrimination) Cronbach's Alpha (item deleted, column alpha_if_deleted) item scale, NULL data frame less columns.","code":""},{"path":"https://easystats.github.io/performance/reference/item_reliability.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Reliability Test for Items or Scales — item_reliability","text":"function calculates item discriminations (corrected item-total correlations item x remaining items) Cronbach's alpha item, deleted scale. absolute value item discrimination indices 0.2. index 0.2 0.4 considered \"fair\", index 0.4 (-0.4) \"good\". range satisfactory values 0.4 0.7. Items low discrimination indices often ambiguously worded examined. Items negative indices examined determine negative value obtained (e.g. 
reversed answer categories regarding positive negative poles).","code":""},{"path":"https://easystats.github.io/performance/reference/item_reliability.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Reliability Test for Items or Scales — item_reliability","text":"","code":"data(mtcars) x <- mtcars[, c(\"cyl\", \"gear\", \"carb\", \"hp\")] item_reliability(x) #> term alpha_if_deleted item_discrimination #> 1 cyl 0.048 0.826 #> 2 gear 0.110 -0.127 #> 3 carb 0.058 0.751 #> 4 hp 0.411 0.881"},{"path":"https://easystats.github.io/performance/reference/item_split_half.html","id":null,"dir":"Reference","previous_headings":"","what":"Split-Half Reliability — item_split_half","title":"Split-Half Reliability — item_split_half","text":"Compute various measures internal consistencies tests item-scales questionnaires.","code":""},{"path":"https://easystats.github.io/performance/reference/item_split_half.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Split-Half Reliability — item_split_half","text":"","code":"item_split_half(x, digits = 3)"},{"path":"https://easystats.github.io/performance/reference/item_split_half.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Split-Half Reliability — item_split_half","text":"x matrix data frame. 
digits Amount digits returned values.","code":""},{"path":"https://easystats.github.io/performance/reference/item_split_half.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Split-Half Reliability — item_split_half","text":"list two elements: split-half reliability splithalf Spearman-Brown corrected split-half reliability spearmanbrown.","code":""},{"path":"https://easystats.github.io/performance/reference/item_split_half.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Split-Half Reliability — item_split_half","text":"function calculates split-half reliability items x, including Spearman-Brown adjustment. Splitting done selecting odd versus even columns x. value closer 1 indicates greater internal consistency.","code":""},{"path":"https://easystats.github.io/performance/reference/item_split_half.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Split-Half Reliability — item_split_half","text":"Spearman C. 1910. Correlation calculated faulty data. British Journal Psychology (3): 271-295. doi:10.1111/j.2044-8295.1910.tb00206.x Brown W. 1910. experimental results correlation mental abilities. British Journal Psychology (3): 296-322. doi:10.1111/j.2044-8295.1910.tb00207.x","code":""},{"path":"https://easystats.github.io/performance/reference/item_split_half.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Split-Half Reliability — item_split_half","text":"","code":"data(mtcars) x <- mtcars[, c(\"cyl\", \"gear\", \"carb\", \"hp\")] item_split_half(x) #> $splithalf #> [1] 0.9070215 #> #> $spearmanbrown #> [1] 0.9512441 #>"},{"path":"https://easystats.github.io/performance/reference/looic.html","id":null,"dir":"Reference","previous_headings":"","what":"LOO-related Indices for Bayesian regressions. — looic","title":"LOO-related Indices for Bayesian regressions. 
— looic","text":"Compute LOOIC (leave-one-cross-validation (LOO) information criterion) ELPD (expected log predictive density) Bayesian regressions. LOOIC ELPD, smaller larger values respectively indicative better fit.","code":""},{"path":"https://easystats.github.io/performance/reference/looic.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"LOO-related Indices for Bayesian regressions. — looic","text":"","code":"looic(model, verbose = TRUE)"},{"path":"https://easystats.github.io/performance/reference/looic.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"LOO-related Indices for Bayesian regressions. — looic","text":"model Bayesian regression model. verbose Toggle warnings.","code":""},{"path":"https://easystats.github.io/performance/reference/looic.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"LOO-related Indices for Bayesian regressions. — looic","text":"list four elements, ELPD, LOOIC standard errors.","code":""},{"path":"https://easystats.github.io/performance/reference/looic.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"LOO-related Indices for Bayesian regressions. 
— looic","text":"","code":"# \\donttest{ model <- suppressWarnings(rstanarm::stan_glm( mpg ~ wt + cyl, data = mtcars, chains = 1, iter = 500, refresh = 0 )) looic(model) #> # LOOIC and ELPD with Standard Error #> #> LOOIC: 155.90 [8.79] #> ELPD: -77.95 [4.39] # }"},{"path":"https://easystats.github.io/performance/reference/model_performance.html","id":null,"dir":"Reference","previous_headings":"","what":"Model Performance — model_performance","title":"Model Performance — model_performance","text":"See documentation object's class: Frequentist Regressions Instrumental Variables Regressions Mixed models Bayesian models CFA / SEM lavaan models Meta-analysis models","code":""},{"path":"https://easystats.github.io/performance/reference/model_performance.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Model Performance — model_performance","text":"","code":"model_performance(model, ...) performance(model, ...)"},{"path":"https://easystats.github.io/performance/reference/model_performance.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Model Performance — model_performance","text":"model Statistical model. ... Arguments passed methods, resp. compare_performance(), one multiple model objects (also different classes).","code":""},{"path":"https://easystats.github.io/performance/reference/model_performance.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Model Performance — model_performance","text":"data frame (one row) one column per \"index\" (see metrics).","code":""},{"path":"https://easystats.github.io/performance/reference/model_performance.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Model Performance — model_performance","text":"model_performance() correctly detects transformed response returns \"corrected\" AIC BIC value original scale. 
get back original scale, likelihood model multiplied Jacobian/derivative transformation.","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/reference/model_performance.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Model Performance — model_performance","text":"","code":"model <- lm(mpg ~ wt + cyl, data = mtcars) model_performance(model) #> # Indices of model performance #> #> AIC | AICc | BIC | R2 | R2 (adj.) | RMSE | Sigma #> --------------------------------------------------------------- #> 156.010 | 157.492 | 161.873 | 0.830 | 0.819 | 2.444 | 2.568 model <- glm(vs ~ wt + mpg, data = mtcars, family = \"binomial\") model_performance(model) #> # Indices of model performance #> #> AIC | AICc | BIC | Tjur's R2 | RMSE | Sigma | Log_loss | Score_log #> --------------------------------------------------------------------------- #> 31.298 | 32.155 | 35.695 | 0.478 | 0.359 | 1.000 | 0.395 | -14.903 #> #> AIC | Score_spherical | PCP #> -------------------------------- #> 31.298 | 0.095 | 0.743"},{"path":"https://easystats.github.io/performance/reference/model_performance.ivreg.html","id":null,"dir":"Reference","previous_headings":"","what":"Performance of instrumental variable regression models — model_performance.ivreg","title":"Performance of instrumental variable regression models — model_performance.ivreg","text":"Performance instrumental variable regression models","code":""},{"path":"https://easystats.github.io/performance/reference/model_performance.ivreg.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Performance of instrumental variable regression models — model_performance.ivreg","text":"","code":"# S3 method for class 'ivreg' model_performance(model, metrics = \"all\", verbose = TRUE, 
...)"},{"path":"https://easystats.github.io/performance/reference/model_performance.ivreg.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Performance of instrumental variable regression models — model_performance.ivreg","text":"model model. metrics Can \"\", \"common\" character vector metrics computed (c(\"AIC\", \"AICc\", \"BIC\", \"R2\", \"RMSE\", \"SIGMA\", \"Sargan\", \"Wu_Hausman\", \"weak_instruments\")). \"common\" compute AIC, BIC, R2 RMSE. verbose Toggle warnings. ... Arguments passed methods.","code":""},{"path":"https://easystats.github.io/performance/reference/model_performance.ivreg.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Performance of instrumental variable regression models — model_performance.ivreg","text":"model_performance() correctly detects transformed response returns \"corrected\" AIC BIC value original scale. get back original scale, likelihood model multiplied Jacobian/derivative transformation.","code":""},{"path":"https://easystats.github.io/performance/reference/model_performance.kmeans.html","id":null,"dir":"Reference","previous_headings":"","what":"Model summary for k-means clustering — model_performance.kmeans","title":"Model summary for k-means clustering — model_performance.kmeans","text":"Model summary k-means clustering","code":""},{"path":"https://easystats.github.io/performance/reference/model_performance.kmeans.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Model summary for k-means clustering — model_performance.kmeans","text":"","code":"# S3 method for class 'kmeans' model_performance(model, verbose = TRUE, ...)"},{"path":"https://easystats.github.io/performance/reference/model_performance.kmeans.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Model summary for k-means clustering — model_performance.kmeans","text":"model Object type kmeans. 
verbose Toggle warnings. ... Arguments passed methods.","code":""},{"path":"https://easystats.github.io/performance/reference/model_performance.kmeans.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Model summary for k-means clustering — model_performance.kmeans","text":"","code":"# a 2-dimensional example x <- rbind( matrix(rnorm(100, sd = 0.3), ncol = 2), matrix(rnorm(100, mean = 1, sd = 0.3), ncol = 2) ) colnames(x) <- c(\"x\", \"y\") model <- kmeans(x, 2) model_performance(model) #> # Indices of model performance #> #> Sum_Squares_Total | Sum_Squares_Within | Sum_Squares_Between | Iterations #> ------------------------------------------------------------------------- #> 71.530 | 16.523 | 55.007 | 1.000"},{"path":"https://easystats.github.io/performance/reference/model_performance.lavaan.html","id":null,"dir":"Reference","previous_headings":"","what":"Performance of lavaan SEM / CFA Models — model_performance.lavaan","title":"Performance of lavaan SEM / CFA Models — model_performance.lavaan","text":"Compute indices model performance SEM CFA models lavaan package.","code":""},{"path":"https://easystats.github.io/performance/reference/model_performance.lavaan.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Performance of lavaan SEM / CFA Models — model_performance.lavaan","text":"","code":"# S3 method for class 'lavaan' model_performance(model, metrics = \"all\", verbose = TRUE, ...)"},{"path":"https://easystats.github.io/performance/reference/model_performance.lavaan.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Performance of lavaan SEM / CFA Models — model_performance.lavaan","text":"model lavaan model. 
metrics Can \"\" character vector metrics computed (\"Chi2\", \"Chi2_df\", \"p_Chi2\", \"Baseline\", \"Baseline_df\", \"p_Baseline\", \"GFI\", \"AGFI\", \"NFI\", \"NNFI\", \"CFI\", \"RMSEA\", \"RMSEA_CI_low\", \"RMSEA_CI_high\", \"p_RMSEA\", \"RMR\", \"SRMR\", \"RFI\", \"PNFI\", \"IFI\", \"RNI\", \"Loglikelihood\", \"AIC\", \"BIC\", \"BIC_adjusted\". verbose Toggle warnings. ... Arguments passed methods.","code":""},{"path":"https://easystats.github.io/performance/reference/model_performance.lavaan.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Performance of lavaan SEM / CFA Models — model_performance.lavaan","text":"data frame (one row) one column per \"index\" (see metrics).","code":""},{"path":"https://easystats.github.io/performance/reference/model_performance.lavaan.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Performance of lavaan SEM / CFA Models — model_performance.lavaan","text":"See documentation ?lavaan::fitmeasures.","code":""},{"path":"https://easystats.github.io/performance/reference/model_performance.lavaan.html","id":"indices-of-fit","dir":"Reference","previous_headings":"","what":"Indices of fit","title":"Performance of lavaan SEM / CFA Models — model_performance.lavaan","text":"Chisq: model Chi-squared assesses overall fit discrepancy sample fitted covariance matrices. p-value > .05 (.e., hypothesis perfect fit rejected). However, quite sensitive sample size. GFI/AGFI: (Adjusted) Goodness Fit proportion variance accounted estimated population covariance. Analogous R2. GFI AGFI > .95 > .90, respectively. NFI/NNFI/TLI: (Non) Normed Fit Index. NFI 0.95, indicates model interest improves fit 95\\ null model. NNFI (also called Tucker Lewis index; TLI) preferable smaller samples. > .90 (Byrne, 1994) > .95 (Schumacker Lomax, 2004). CFI: Comparative Fit Index revised form NFI. sensitive sample size (Fan, Thompson, Wang, 1999). 
Compares fit target model fit independent, null, model. > .90. RMSEA: Root Mean Square Error Approximation parsimony-adjusted index. Values closer 0 represent good fit. < .08 < .05. p-value printed tests hypothesis RMSEA less equal .05 (cutoff sometimes used good fit), thus significant. RMR/SRMR: (Standardized) Root Mean Square Residual represents square-root difference residuals sample covariance matrix hypothesized model. RMR can sometimes hard interpret, better use SRMR. < .08. RFI: Relative Fit Index, also known RHO1, guaranteed vary 0 1. However, RFI close 1 indicates good fit. IFI: Incremental Fit Index (IFI) adjusts Normed Fit Index (NFI) sample size degrees freedom (Bollen's, 1989). 0.90 good fit, index can exceed 1. PNFI: Parsimony-Adjusted Measures Index. commonly agreed-upon cutoff value acceptable model index. > 0.50.","code":""},{"path":"https://easystats.github.io/performance/reference/model_performance.lavaan.html","id":"what-to-report","dir":"Reference","previous_headings":"","what":"What to report","title":"Performance of lavaan SEM / CFA Models — model_performance.lavaan","text":"Kline (2015) suggests minimum following indices reported: model chi-square, RMSEA, CFI SRMR.","code":""},{"path":"https://easystats.github.io/performance/reference/model_performance.lavaan.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Performance of lavaan SEM / CFA Models — model_performance.lavaan","text":"Byrne, B. M. (1994). Structural equation modeling EQS EQS/Windows. Thousand Oaks, CA: Sage Publications. Tucker, L. R., Lewis, C. (1973). reliability coefficient maximum likelihood factor analysis. Psychometrika, 38, 1-10. Schumacker, R. E., Lomax, R. G. (2004). beginner's guide structural equation modeling, Second edition. Mahwah, NJ: Lawrence Erlbaum Associates. Fan, X., B. Thompson, L. Wang (1999). Effects sample size, estimation method, model specification structural equation modeling fit indexes. 
Structural Equation Modeling, 6, 56-83. Kline, R. B. (2015). Principles practice structural equation modeling. Guilford publications.","code":""},{"path":"https://easystats.github.io/performance/reference/model_performance.lavaan.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Performance of lavaan SEM / CFA Models — model_performance.lavaan","text":"","code":"# Confirmatory Factor Analysis (CFA) --------- data(HolzingerSwineford1939, package = \"lavaan\") structure <- \" visual =~ x1 + x2 + x3 textual =~ x4 + x5 + x6 speed =~ x7 + x8 + x9 \" model <- lavaan::cfa(structure, data = HolzingerSwineford1939) model_performance(model) #> # Indices of model performance #> #> Chi2(24) | p (Chi2) | Baseline(36) | p (Baseline) | GFI | AGFI | NFI #> ------------------------------------------------------------------------- #> 85.306 | < .001 | 918.852 | < .001 | 0.943 | 0.894 | 0.907 #> #> Chi2(24) | NNFI | CFI | RMSEA | RMSEA CI | p (RMSEA) | RMR | SRMR #> --------------------------------------------------------------------------- #> 85.306 | 0.896 | 0.931 | 0.092 | [0.07, 0.11] | < .001 | 0.082 | 0.065 #> #> Chi2(24) | RFI | PNFI | IFI | RNI | Loglikelihood | AIC | BIC | BIC_adjusted #> --------------------------------------------------------------------------------------------- #> 85.306 | 0.861 | 0.605 | 0.931 | 0.931 | -3737.745 | 7517.490 | 7595.339 | 7528.739"},{"path":"https://easystats.github.io/performance/reference/model_performance.lm.html","id":null,"dir":"Reference","previous_headings":"","what":"Performance of Regression Models — model_performance.lm","title":"Performance of Regression Models — model_performance.lm","text":"Compute indices model performance regression models.","code":""},{"path":"https://easystats.github.io/performance/reference/model_performance.lm.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Performance of Regression Models — 
model_performance.lm","text":"","code":"# S3 method for class 'lm' model_performance(model, metrics = \"all\", verbose = TRUE, ...)"},{"path":"https://easystats.github.io/performance/reference/model_performance.lm.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Performance of Regression Models — model_performance.lm","text":"model model. metrics Can \"\", \"common\" character vector metrics computed (one \"AIC\", \"AICc\", \"BIC\", \"R2\", \"R2_adj\", \"RMSE\", \"SIGMA\", \"LOGLOSS\", \"PCP\", \"SCORE\"). \"common\" compute AIC, BIC, R2 RMSE. verbose Toggle warnings. ... Arguments passed methods.","code":""},{"path":"https://easystats.github.io/performance/reference/model_performance.lm.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Performance of Regression Models — model_performance.lm","text":"data frame (one row) one column per \"index\" (see metrics).","code":""},{"path":"https://easystats.github.io/performance/reference/model_performance.lm.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Performance of Regression Models — model_performance.lm","text":"Depending model, following indices computed: AIC: Akaike's Information Criterion, see ?stats::AIC AICc: Second-order (small sample) AIC correction small sample sizes BIC: Bayesian Information Criterion, see ?stats::BIC R2: r-squared value, see r2() R2_adj: adjusted r-squared, see r2() RMSE: root mean squared error, see performance_rmse() SIGMA: residual standard deviation, see insight::get_sigma() LOGLOSS: Log-loss, see performance_logloss() SCORE_LOG: score logarithmic proper scoring rule, see performance_score() SCORE_SPHERICAL: score spherical proper scoring rule, see performance_score() PCP: percentage correct predictions, see performance_pcp() model_performance() correctly detects transformed response returns \"corrected\" AIC BIC value original scale. 
get back original scale, likelihood model multiplied Jacobian/derivative transformation.","code":""},{"path":"https://easystats.github.io/performance/reference/model_performance.lm.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Performance of Regression Models — model_performance.lm","text":"","code":"model <- lm(mpg ~ wt + cyl, data = mtcars) model_performance(model) #> # Indices of model performance #> #> AIC | AICc | BIC | R2 | R2 (adj.) | RMSE | Sigma #> --------------------------------------------------------------- #> 156.010 | 157.492 | 161.873 | 0.830 | 0.819 | 2.444 | 2.568 model <- glm(vs ~ wt + mpg, data = mtcars, family = \"binomial\") model_performance(model) #> # Indices of model performance #> #> AIC | AICc | BIC | Tjur's R2 | RMSE | Sigma | Log_loss | Score_log #> --------------------------------------------------------------------------- #> 31.298 | 32.155 | 35.695 | 0.478 | 0.359 | 1.000 | 0.395 | -14.903 #> #> AIC | Score_spherical | PCP #> -------------------------------- #> 31.298 | 0.095 | 0.743"},{"path":"https://easystats.github.io/performance/reference/model_performance.merMod.html","id":null,"dir":"Reference","previous_headings":"","what":"Performance of Mixed Models — model_performance.merMod","title":"Performance of Mixed Models — model_performance.merMod","text":"Compute indices model performance mixed models.","code":""},{"path":"https://easystats.github.io/performance/reference/model_performance.merMod.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Performance of Mixed Models — model_performance.merMod","text":"","code":"# S3 method for class 'merMod' model_performance( model, metrics = \"all\", estimator = \"REML\", verbose = TRUE, ... 
)"},{"path":"https://easystats.github.io/performance/reference/model_performance.merMod.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Performance of Mixed Models — model_performance.merMod","text":"model mixed effects model. metrics Can \"\", \"common\" character vector metrics computed (c(\"AIC\", \"AICc\", \"BIC\", \"R2\", \"ICC\", \"RMSE\", \"SIGMA\", \"LOGLOSS\", \"SCORE\")). \"common\" compute AIC, BIC, R2, ICC RMSE. estimator linear models. Corresponds different estimators standard deviation errors. estimator = \"ML\" (default, except performance_aic() model object class lmerMod), scaling done n (biased ML estimator), equivalent using AIC(logLik()). Setting \"REML\" give results AIC(logLik(..., REML = TRUE)). verbose Toggle warnings messages. ... Arguments passed methods.","code":""},{"path":"https://easystats.github.io/performance/reference/model_performance.merMod.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Performance of Mixed Models — model_performance.merMod","text":"data frame (one row) one column per \"index\" (see metrics).","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/reference/model_performance.merMod.html","id":"intraclass-correlation-coefficient-icc-","dir":"Reference","previous_headings":"","what":"Intraclass Correlation Coefficient (ICC)","title":"Performance of Mixed Models — model_performance.merMod","text":"method returns adjusted ICC , typically interest judging variance attributed random effects part model (see also icc()).","code":""},{"path":"https://easystats.github.io/performance/reference/model_performance.merMod.html","id":"reml-versus-ml-estimator","dir":"Reference","previous_headings":"","what":"REML versus ML estimator","title":"Performance of Mixed Models — model_performance.merMod","text":"default behaviour model_performance() computing AIC BIC linear mixed model package lme4 AIC() BIC() (.e. estimator = \"REML\"). 
However, model comparison using compare_performance() sets estimator = \"ML\" default, comparing information criteria based REML fits usually valid (unless models fixed effects). Thus, make sure set correct estimator-value looking fit-indices comparing model fits.","code":""},{"path":"https://easystats.github.io/performance/reference/model_performance.merMod.html","id":"other-performance-indices","dir":"Reference","previous_headings":"","what":"Other performance indices","title":"Performance of Mixed Models — model_performance.merMod","text":"Furthermore, see 'Details' model_performance.lm() details returned indices.","code":""},{"path":"https://easystats.github.io/performance/reference/model_performance.merMod.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Performance of Mixed Models — model_performance.merMod","text":"","code":"model <- lme4::lmer(Petal.Length ~ Sepal.Length + (1 | Species), data = iris) model_performance(model) #> # Indices of model performance #> #> AIC | AICc | BIC | R2 (cond.) | R2 (marg.) | ICC | RMSE | Sigma #> -------------------------------------------------------------------------- #> 77.320 | 77.595 | 89.362 | 0.972 | 0.096 | 0.969 | 0.279 | 0.283"},{"path":"https://easystats.github.io/performance/reference/model_performance.rma.html","id":null,"dir":"Reference","previous_headings":"","what":"Performance of Meta-Analysis Models — model_performance.rma","title":"Performance of Meta-Analysis Models — model_performance.rma","text":"Compute indices model performance meta-analysis model metafor package.","code":""},{"path":"https://easystats.github.io/performance/reference/model_performance.rma.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Performance of Meta-Analysis Models — model_performance.rma","text":"","code":"# S3 method for class 'rma' model_performance( model, metrics = \"all\", estimator = \"ML\", verbose = TRUE, ... 
)"},{"path":"https://easystats.github.io/performance/reference/model_performance.rma.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Performance of Meta-Analysis Models — model_performance.rma","text":"model rma object returned metafor::rma(). metrics Can \"\" character vector metrics computed (c(\"AIC\", \"BIC\", \"I2\", \"H2\", \"TAU2\", \"R2\", \"CochransQ\", \"QE\", \"Omnibus\", \"QM\")). estimator linear models. Corresponds different estimators standard deviation errors. estimator = \"ML\" (default, except performance_aic() model object class lmerMod), scaling done n (biased ML estimator), equivalent using AIC(logLik()). Setting \"REML\" give results AIC(logLik(..., REML = TRUE)). verbose Toggle warnings. ... Arguments passed methods.","code":""},{"path":"https://easystats.github.io/performance/reference/model_performance.rma.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Performance of Meta-Analysis Models — model_performance.rma","text":"data frame (one row) one column per \"index\" (see metrics).","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/reference/model_performance.rma.html","id":"indices-of-fit","dir":"Reference","previous_headings":"","what":"Indices of fit","title":"Performance of Meta-Analysis Models — model_performance.rma","text":"AIC Akaike's Information Criterion, see ?stats::AIC BIC Bayesian Information Criterion, see ?stats::BIC I2: random effects model, I2 estimates (percent) much total variability effect size estimates can attributed heterogeneity among true effects. mixed-effects model, I2 estimates much unaccounted variability can attributed residual heterogeneity. H2: random-effects model, H2 estimates ratio total amount variability effect size estimates amount sampling variability. mixed-effects model, H2 estimates ratio unaccounted variability effect size estimates amount sampling variability. 
TAU2: amount (residual) heterogeneity random mixed effects model. CochransQ (QE): Test (residual) Heterogeneity. Without moderators model, simply Cochran's Q-test. Omnibus (QM): Omnibus test parameters. R2: Pseudo-R2-statistic, indicates amount heterogeneity accounted moderators included fixed-effects model. See documentation ?metafor::fitstats.","code":""},{"path":"https://easystats.github.io/performance/reference/model_performance.rma.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Performance of Meta-Analysis Models — model_performance.rma","text":"","code":"data(dat.bcg, package = \"metadat\") dat <- metafor::escalc( measure = \"RR\", ai = tpos, bi = tneg, ci = cpos, di = cneg, data = dat.bcg ) model <- metafor::rma(yi, vi, data = dat, method = \"REML\") model_performance(model) #> # Indices of model performance #> #> AIC | BIC | I2 | H2 | TAU2 | CochransQ | p (CochransQ) | df #> ------------------------------------------------------------------------- #> 29.376 | 30.345 | 0.922 | 12.856 | 0.313 | 152.233 | < .001 | 12 #> #> AIC | Omnibus | p (Omnibus) #> ------------------------------ #> 29.376 | 15.796 | < .001"},{"path":"https://easystats.github.io/performance/reference/model_performance.stanreg.html","id":null,"dir":"Reference","previous_headings":"","what":"Performance of Bayesian Models — model_performance.stanreg","title":"Performance of Bayesian Models — model_performance.stanreg","text":"Compute indices model performance (general) linear models.","code":""},{"path":"https://easystats.github.io/performance/reference/model_performance.stanreg.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Performance of Bayesian Models — model_performance.stanreg","text":"","code":"# S3 method for class 'stanreg' model_performance(model, metrics = \"all\", verbose = TRUE, ...) 
# S3 method for class 'BFBayesFactor' model_performance( model, metrics = \"all\", verbose = TRUE, average = FALSE, prior_odds = NULL, ... )"},{"path":"https://easystats.github.io/performance/reference/model_performance.stanreg.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Performance of Bayesian Models — model_performance.stanreg","text":"model Object class stanreg brmsfit. metrics Can \"all\", \"common\" character vector metrics computed (c(\"LOOIC\", \"WAIC\", \"R2\", \"R2_adj\", \"RMSE\", \"SIGMA\", \"LOGLOSS\", \"SCORE\")). \"common\" compute LOOIC, WAIC, R2 RMSE. verbose Toggle warnings. ... Arguments passed methods. average Compute model-averaged index? See bayestestR::weighted_posteriors(). prior_odds Optional vector prior odds models compared first model (denominator, BFBayesFactor objects). data.frames, used basis weighting.","code":""},{"path":"https://easystats.github.io/performance/reference/model_performance.stanreg.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Performance of Bayesian Models — model_performance.stanreg","text":"data frame (one row) one column per \"index\" (see metrics).","code":""},{"path":"https://easystats.github.io/performance/reference/model_performance.stanreg.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Performance of Bayesian Models — model_performance.stanreg","text":"Depending model, following indices computed: ELPD: expected log predictive density. Larger ELPD values mean better fit. See looic(). LOOIC: leave-one-out cross-validation (LOO) information criterion. Lower LOOIC values mean better fit. See looic(). WAIC: widely applicable information criterion. Lower WAIC values mean better fit. See ?loo::waic. R2: r-squared value, see r2_bayes(). R2_adjusted: LOO-adjusted r-squared, see r2_loo(). RMSE: root mean squared error, see performance_rmse(). 
SIGMA: residual standard deviation, see insight::get_sigma(). LOGLOSS: Log-loss, see performance_logloss(). SCORE_LOG: score logarithmic proper scoring rule, see performance_score(). SCORE_SPHERICAL: score spherical proper scoring rule, see performance_score(). PCP: percentage correct predictions, see performance_pcp().","code":""},{"path":"https://easystats.github.io/performance/reference/model_performance.stanreg.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Performance of Bayesian Models — model_performance.stanreg","text":"Gelman, ., Goodrich, B., Gabry, J., Vehtari, . (2018). R-squared Bayesian regression models. American Statistician, 1-6.","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/reference/model_performance.stanreg.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Performance of Bayesian Models — model_performance.stanreg","text":"","code":"# \\donttest{ model <- suppressWarnings(rstanarm::stan_glm( mpg ~ wt + cyl, data = mtcars, chains = 1, iter = 500, refresh = 0 )) model_performance(model) #> # Indices of model performance #> #> ELPD | ELPD_SE | LOOIC | LOOIC_SE | WAIC | R2 | R2 (adj.) | RMSE | Sigma #> ------------------------------------------------------------------------------------ #> -78.243 | 4.270 | 156.486 | 8.539 | 156.468 | 0.814 | 0.798 | 2.445 | 2.660 model <- suppressWarnings(rstanarm::stan_glmer( mpg ~ wt + cyl + (1 | gear), data = mtcars, chains = 1, iter = 500, refresh = 0 )) model_performance(model) #> # Indices of model performance #> #> ELPD | ELPD_SE | LOOIC | LOOIC_SE | WAIC | R2 | R2 (marg.) #> --------------------------------------------------------------------- #> -79.362 | 4.741 | 158.723 | 9.482 | 158.664 | 0.820 | 0.823 #> #> ELPD | R2 (adj.) 
| R2_adjusted_marginal | ICC | RMSE | Sigma #> ------------------------------------------------------------------ #> -79.362 | 0.788 | 0.788 | 0.184 | 2.442 | 2.594 # }"},{"path":"https://easystats.github.io/performance/reference/performance-package.html","id":null,"dir":"Reference","previous_headings":"","what":"performance: An R Package for Assessment, Comparison and Testing of Statistical Models — performance-package","title":"performance: An R Package for Assessment, Comparison and Testing of Statistical Models — performance-package","text":"crucial aspect building regression models evaluate quality model fit. important investigate well models fit data fit indices report. Functions create diagnostic plots compute fit measures exist, however, mostly spread different packages. no unique consistent approach assess model quality different kind models. primary goal performance package fill gap provide utilities computing indices model quality goodness fit. include measures like r-squared (R2), root mean squared error (RMSE) intraclass correlation coefficient (ICC), also functions check (mixed) models overdispersion, zero-inflation, convergence singularity. References: Lüdecke et al. (2021) doi:10.21105/joss.03139","code":""},{"path":"https://easystats.github.io/performance/reference/performance-package.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"performance: An R Package for Assessment, Comparison and Testing of Statistical Models — performance-package","text":"performance-package","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/reference/performance-package.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"performance: An R Package for Assessment, Comparison and Testing of Statistical Models — performance-package","text":"Maintainer: Daniel Lüdecke d.luedecke@uke.de (ORCID) Authors: Dominique Makowski dom.makowski@gmail.com (ORCID) [contributor] Mattan S. 
Ben-Shachar matanshm@post.bgu.ac.il (ORCID) [contributor] Indrajeet Patil patilindrajeet.science@gmail.com (ORCID) [contributor] Philip Waggoner philip.waggoner@gmail.com (ORCID) [contributor] Brenton M. Wiernik brenton@wiernik.org (ORCID) [contributor] Rémi Thériault remi.theriault@mail.mcgill.ca (ORCID) [contributor] contributors: Vincent Arel-Bundock vincent.arel-bundock@umontreal.ca (ORCID) [contributor] Martin Jullum [reviewer] gjo11 [reviewer] Etienne Bacher etienne.bacher@protonmail.com (ORCID) [contributor] Joseph Luchman (ORCID) [contributor]","code":""},{"path":"https://easystats.github.io/performance/reference/performance_accuracy.html","id":null,"dir":"Reference","previous_headings":"","what":"Accuracy of predictions from model fit — performance_accuracy","title":"Accuracy of predictions from model fit — performance_accuracy","text":"function calculates predictive accuracy linear logistic regression models.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_accuracy.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Accuracy of predictions from model fit — performance_accuracy","text":"","code":"performance_accuracy( model, method = c(\"cv\", \"boot\"), k = 5, n = 1000, ci = 0.95, verbose = TRUE )"},{"path":"https://easystats.github.io/performance/reference/performance_accuracy.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Accuracy of predictions from model fit — performance_accuracy","text":"model linear logistic regression model. mixed-effects model also accepted. method Character string, indicating whether cross-validation (method = \"cv\") bootstrapping (method = \"boot\") used compute accuracy values. k number folds k-fold cross-validation. n Number bootstrap-samples. ci level confidence interval. 
verbose Toggle warnings.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_accuracy.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Accuracy of predictions from model fit — performance_accuracy","text":"list three values: Accuracy model predictions, .e. proportion accurately predicted values model, standard error, SE, Method used compute accuracy.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_accuracy.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Accuracy of predictions from model fit — performance_accuracy","text":"linear models, accuracy correlation coefficient actual predicted value outcome. logistic regression models, accuracy corresponds AUC-value, calculated bayestestR::auc()-function. accuracy mean value multiple correlation resp. AUC-values, either computed cross-validation non-parametric bootstrapping (see argument method). standard error standard deviation computed correlation resp. 
AUC-values.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_accuracy.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Accuracy of predictions from model fit — performance_accuracy","text":"","code":"model <- lm(mpg ~ wt + cyl, data = mtcars) performance_accuracy(model) #> # Accuracy of Model Predictions #> #> Accuracy (95% CI): 95.79% [92.14%, 99.11%] #> Method: Correlation between observed and predicted model <- glm(vs ~ wt + mpg, data = mtcars, family = \"binomial\") performance_accuracy(model) #> # Accuracy of Model Predictions #> #> Accuracy (95% CI): 87.56% [78.00%, 100.00%] #> Method: Area under Curve"},{"path":"https://easystats.github.io/performance/reference/performance_aicc.html","id":null,"dir":"Reference","previous_headings":"","what":"Compute the AIC or second-order AIC — performance_aicc","title":"Compute the AIC or second-order AIC — performance_aicc","text":"Compute AIC second-order Akaike's information criterion (AICc). performance_aic() small wrapper returns AIC, however, models transformed response variable, performance_aic() returns corrected AIC value (see 'Examples'). generic function also works models AIC method (like Tweedie models). performance_aicc() returns second-order (\"small sample\") AIC incorporates correction small sample sizes.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_aicc.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Compute the AIC or second-order AIC — performance_aicc","text":"","code":"performance_aicc(x, ...) performance_aic(x, ...) # Default S3 method performance_aic(x, estimator = \"ML\", verbose = TRUE, ...) 
# S3 method for class 'lmerMod' performance_aic(x, estimator = \"REML\", verbose = TRUE, ...)"},{"path":"https://easystats.github.io/performance/reference/performance_aicc.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Compute the AIC or second-order AIC — performance_aicc","text":"x model object. ... Currently not used. estimator linear models. Corresponds different estimators standard deviation errors. estimator = \"ML\" (default, except performance_aic() model object class lmerMod), scaling done n (biased ML estimator), equivalent using AIC(logLik()). Setting \"REML\" give results AIC(logLik(..., REML = TRUE)). verbose Toggle warnings.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_aicc.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Compute the AIC or second-order AIC — performance_aicc","text":"Numeric, AIC AICc value.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_aicc.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Compute the AIC or second-order AIC — performance_aicc","text":"performance_aic() correctly detects transformed response , unlike stats::AIC(), returns \"corrected\" AIC value original scale. get back original scale, likelihood model multiplied Jacobian/derivative transformation. case possible return corrected AIC value, warning given corrected log-likelihood value computed.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_aicc.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Compute the AIC or second-order AIC — performance_aicc","text":"Akaike, H. (1973) Information theory extension maximum likelihood principle. : Second International Symposium Information Theory, pp. 267-281. Petrov, B.N., Csaki, F., Eds, Akademiai Kiado, Budapest. Hurvich, C. M., Tsai, C.-L. 
(1991) Bias corrected AIC criterion underfitted regression time series models. Biometrika 78, 499–509.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_aicc.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Compute the AIC or second-order AIC — performance_aicc","text":"","code":"m <- lm(mpg ~ wt + cyl + gear + disp, data = mtcars) AIC(m) #> [1] 159.1051 performance_aicc(m) #> [1] 162.4651 # correct AIC for models with transformed response variable data(\"mtcars\") mtcars$mpg <- floor(mtcars$mpg) model <- lm(log(mpg) ~ factor(cyl), mtcars) # wrong AIC, not corrected for log-transformation AIC(model) #> [1] -19.67061 # performance_aic() correctly detects transformed response and # returns corrected AIC performance_aic(model) #> [1] 168.2152 # \\dontrun{ # there are a few exceptions where the corrected log-likelihood values # cannot be returned. The following example gives a warning. model <- lm(1 / mpg ~ factor(cyl), mtcars) performance_aic(model) #> Warning: Could not compute corrected log-likelihood for models with transformed #> response. Log-likelihood value is probably inaccurate. #> [1] -196.3387 # }"},{"path":"https://easystats.github.io/performance/reference/performance_cv.html","id":null,"dir":"Reference","previous_headings":"","what":"Cross-validated model performance — performance_cv","title":"Cross-validated model performance — performance_cv","text":"function cross-validates regression models user-supplied new sample using holdout (train-test), k-fold, leave-one-out cross-validation.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_cv.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Cross-validated model performance — performance_cv","text":"","code":"performance_cv( model, data = NULL, method = c(\"holdout\", \"k_fold\", \"loo\"), metrics = \"all\", prop = 0.3, k = 5, stack = TRUE, verbose = TRUE, ... 
)"},{"path":"https://easystats.github.io/performance/reference/performance_cv.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Cross-validated model performance — performance_cv","text":"model regression model. data Optional. data frame containing variables model used cross-validation sample. method Character string, indicating cross-validation method use: whether holdout (\"holdout\", aka train-test), k-fold (\"k_fold\"), leave-one-out (\"loo\"). data supplied, argument ignored. metrics Can \"all\", \"common\" character vector metrics computed (c(\"ELPD\", \"Deviance\", \"MSE\", \"RMSE\", \"R2\")). \"common\" compute R2 RMSE. prop method = \"holdout\", proportion sample hold test sample? k method = \"k_fold\", number folds use. stack Logical. method = \"k_fold\", performance computed stacking residuals holdout fold calculating metric stacked data (TRUE, default) performance computed calculating metrics within holdout fold averaging performance across fold (FALSE)? verbose Toggle warnings. ... Not used.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_cv.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Cross-validated model performance — performance_cv","text":"data frame columns metric requested, well k method = \"holdout\" Method used cross-validation. 
method = \"holdout\" stack = TRUE, standard error (standard deviation across holdout folds) metric also included.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_cv.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Cross-validated model performance — performance_cv","text":"","code":"model <- lm(mpg ~ wt + cyl, data = mtcars) performance_cv(model) #> # Cross-validation performance (30% holdout method) #> #> MSE | RMSE | R2 #> ----------------- #> 6.3 | 2.5 | 0.75"},{"path":"https://easystats.github.io/performance/reference/performance_hosmer.html","id":null,"dir":"Reference","previous_headings":"","what":"Hosmer-Lemeshow goodness-of-fit test — performance_hosmer","title":"Hosmer-Lemeshow goodness-of-fit test — performance_hosmer","text":"Check model quality logistic regression models.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_hosmer.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Hosmer-Lemeshow goodness-of-fit test — performance_hosmer","text":"","code":"performance_hosmer(model, n_bins = 10)"},{"path":"https://easystats.github.io/performance/reference/performance_hosmer.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Hosmer-Lemeshow goodness-of-fit test — performance_hosmer","text":"model glm-object binomial-family. 
n_bins Numeric, number bins divide data.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_hosmer.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Hosmer-Lemeshow goodness-of-fit test — performance_hosmer","text":"object class hoslem_test following values: chisq, Hosmer-Lemeshow chi-squared statistic; df, degrees freedom p.value p-value goodness--fit test.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_hosmer.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Hosmer-Lemeshow goodness-of-fit test — performance_hosmer","text":"well-fitting model shows no significant difference model observed data, .e. reported p-value greater 0.05.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_hosmer.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Hosmer-Lemeshow goodness-of-fit test — performance_hosmer","text":"Hosmer, D. W., Lemeshow, S. (2000). Applied Logistic Regression. Hoboken, NJ, USA: John Wiley Sons, Inc. 
doi:10.1002/0471722146","code":""},{"path":"https://easystats.github.io/performance/reference/performance_hosmer.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Hosmer-Lemeshow goodness-of-fit test — performance_hosmer","text":"","code":"model <- glm(vs ~ wt + mpg, data = mtcars, family = \"binomial\") performance_hosmer(model) #> # Hosmer-Lemeshow Goodness-of-Fit Test #> #> Chi-squared: 5.137 #> df: 8 #> p-value: 0.743 #> #> Summary: model seems to fit well."},{"path":"https://easystats.github.io/performance/reference/performance_logloss.html","id":null,"dir":"Reference","previous_headings":"","what":"Log Loss — performance_logloss","title":"Log Loss — performance_logloss","text":"Compute log loss models binary outcome.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_logloss.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Log Loss — performance_logloss","text":"","code":"performance_logloss(model, verbose = TRUE, ...)"},{"path":"https://easystats.github.io/performance/reference/performance_logloss.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Log Loss — performance_logloss","text":"model Model binary outcome. verbose Toggle warnings. ... Currently not used.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_logloss.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Log Loss — performance_logloss","text":"Numeric, log loss model.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_logloss.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Log Loss — performance_logloss","text":"Logistic regression models predict probability outcome \"success\" \"failure\" (1 0 etc.). performance_logloss() evaluates good bad predicted probabilities . 
High values indicate bad predictions, low values indicate good predictions. lower log-loss, better model predicts outcome.","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/reference/performance_logloss.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Log Loss — performance_logloss","text":"","code":"data(mtcars) m <- glm(formula = vs ~ hp + wt, family = binomial, data = mtcars) performance_logloss(m) #> [1] 0.2517054"},{"path":"https://easystats.github.io/performance/reference/performance_mae.html","id":null,"dir":"Reference","previous_headings":"","what":"Mean Absolute Error of Models — performance_mae","title":"Mean Absolute Error of Models — performance_mae","text":"Compute mean absolute error models.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_mae.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Mean Absolute Error of Models — performance_mae","text":"","code":"performance_mae(model, ...) mae(model, ...)"},{"path":"https://easystats.github.io/performance/reference/performance_mae.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Mean Absolute Error of Models — performance_mae","text":"model model. ... 
Arguments passed lme4::bootMer() boot::boot() bootstrapped ICC, R2, RMSE etc.; variance_decomposition(), arguments passed brms::posterior_predict().","code":""},{"path":"https://easystats.github.io/performance/reference/performance_mae.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Mean Absolute Error of Models — performance_mae","text":"Numeric, mean absolute error model.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_mae.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Mean Absolute Error of Models — performance_mae","text":"","code":"data(mtcars) m <- lm(mpg ~ hp + gear, data = mtcars) performance_mae(m) #> [1] 2.545822"},{"path":"https://easystats.github.io/performance/reference/performance_mse.html","id":null,"dir":"Reference","previous_headings":"","what":"Mean Square Error of Linear Models — performance_mse","title":"Mean Square Error of Linear Models — performance_mse","text":"Compute mean square error linear models.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_mse.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Mean Square Error of Linear Models — performance_mse","text":"","code":"performance_mse(model, ...) mse(model, ...)"},{"path":"https://easystats.github.io/performance/reference/performance_mse.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Mean Square Error of Linear Models — performance_mse","text":"model model. ... 
Arguments passed lme4::bootMer() boot::boot() bootstrapped ICC, R2, RMSE etc.; variance_decomposition(), arguments passed brms::posterior_predict().","code":""},{"path":"https://easystats.github.io/performance/reference/performance_mse.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Mean Square Error of Linear Models — performance_mse","text":"Numeric, mean square error model.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_mse.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Mean Square Error of Linear Models — performance_mse","text":"mean square error mean sum squared residuals, .e. measures average squares errors. Less technically speaking, mean square error can considered variance residuals, .e. variation outcome model explain. Lower values (closer zero) indicate better fit.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_mse.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Mean Square Error of Linear Models — performance_mse","text":"","code":"data(mtcars) m <- lm(mpg ~ hp + gear, data = mtcars) performance_mse(m) #> [1] 8.752858"},{"path":"https://easystats.github.io/performance/reference/performance_pcp.html","id":null,"dir":"Reference","previous_headings":"","what":"Percentage of Correct Predictions — performance_pcp","title":"Percentage of Correct Predictions — performance_pcp","text":"Percentage correct predictions (PCP) models binary outcome.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_pcp.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Percentage of Correct Predictions — performance_pcp","text":"","code":"performance_pcp(model, ci = 0.95, method = \"Herron\", verbose = 
TRUE)"},{"path":"https://easystats.github.io/performance/reference/performance_pcp.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Percentage of Correct Predictions — performance_pcp","text":"model Model binary outcome. ci level confidence interval. method Name method calculate PCP (see 'Details'). Default \"Herron\". May abbreviated. verbose Toggle warnings.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_pcp.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Percentage of Correct Predictions — performance_pcp","text":"list several elements: percentage correct predictions full null model, confidence intervals, well chi-squared p-value Likelihood-Ratio-Test full null model.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_pcp.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Percentage of Correct Predictions — performance_pcp","text":"method = \"Gelman-Hill\" (\"gelman_hill\") computes PCP based proposal Gelman Hill 2007, 99, defined proportion cases deterministic prediction wrong, .e. proportion predicted probability 0.5, although y=0 (vice versa) (see also Herron 1999, 90). method = \"Herron\" (\"herron\") computes modified version PCP (Herron 1999, 90-92), sum predicted probabilities, y=1, plus sum 1 - predicted probabilities, y=0, divided number observations. approach said accurate. PCP ranges 0 1, values closer 1 mean model predicts outcome better models PCP closer 0. general, PCP 0.5 (.e. 50%). Furthermore, PCP full model considerably null model's PCP. 
likelihood-ratio test indicates whether model significantly better fit null-model (cases, p < 0.05).","code":""},{"path":"https://easystats.github.io/performance/reference/performance_pcp.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Percentage of Correct Predictions — performance_pcp","text":"Herron, M. (1999). Postestimation Uncertainty Limited Dependent Variable Models. Political Analysis, 8, 83–98. Gelman, ., Hill, J. (2007). Data analysis using regression multilevel/hierarchical models. Cambridge; New York: Cambridge University Press, 99.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_pcp.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Percentage of Correct Predictions — performance_pcp","text":"","code":"data(mtcars) m <- glm(formula = vs ~ hp + wt, family = binomial, data = mtcars) performance_pcp(m) #> # Percentage of Correct Predictions from Logistic Regression Model #> #> Full model: 83.75% [70.96% - 96.53%] #> Null model: 50.78% [33.46% - 68.10%] #> #> # Likelihood-Ratio-Test #> #> Chi-squared: 27.751 #> df: 2.000 #> p-value: 0.000 #> performance_pcp(m, method = \"Gelman-Hill\") #> # Percentage of Correct Predictions from Logistic Regression Model #> #> Full model: 87.50% [76.04% - 98.96%] #> Null model: 56.25% [39.06% - 73.44%] #> #> # Likelihood-Ratio-Test #> #> Chi-squared: 27.751 #> df: 2.000 #> p-value: 0.000 #>"},{"path":"https://easystats.github.io/performance/reference/performance_rmse.html","id":null,"dir":"Reference","previous_headings":"","what":"Root Mean Squared Error — performance_rmse","title":"Root Mean Squared Error — performance_rmse","text":"Compute root mean squared error (mixed effects) models, including Bayesian regression 
models.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_rmse.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Root Mean Squared Error — performance_rmse","text":"","code":"performance_rmse( model, normalized = FALSE, ci = NULL, iterations = 100, ci_method = NULL, verbose = TRUE, ... ) rmse( model, normalized = FALSE, ci = NULL, iterations = 100, ci_method = NULL, verbose = TRUE, ... )"},{"path":"https://easystats.github.io/performance/reference/performance_rmse.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Root Mean Squared Error — performance_rmse","text":"model model. normalized Logical, use TRUE normalized rmse returned. ci Confidence resp. credible interval level. icc(), r2(), rmse(), confidence intervals based bootstrapped samples ICC, R2 RMSE value. See iterations. iterations Number bootstrap-replicates computing confidence intervals ICC, R2, RMSE etc. ci_method Character string, indicating bootstrap-method. NULL (default), case lme4::bootMer() used bootstrapped confidence intervals. However, bootstrapped intervals calculated way, try ci_method = \"boot\", falls back boot::boot(). may successfully return bootstrapped confidence intervals, bootstrapped samples may appropriate multilevel structure model. also option ci_method = \"analytical\", tries calculate analytical confidence assuming chi-squared distribution. However, intervals rather inaccurate often narrow. recommended calculate bootstrapped confidence intervals mixed models. verbose Toggle warnings messages. ... 
Arguments passed lme4::bootMer() boot::boot() bootstrapped ICC, R2, RMSE etc.; variance_decomposition(), arguments passed brms::posterior_predict().","code":""},{"path":"https://easystats.github.io/performance/reference/performance_rmse.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Root Mean Squared Error — performance_rmse","text":"Numeric, root mean squared error.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_rmse.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Root Mean Squared Error — performance_rmse","text":"RMSE square root variance residuals indicates absolute fit model data (difference observed data model's predicted values). can interpreted standard deviation unexplained variance, units response variable. Lower values indicate better model fit. normalized RMSE proportion RMSE related range response variable. Hence, lower values indicate less residual variance.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_rmse.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Root Mean Squared Error — performance_rmse","text":"","code":"data(Orthodont, package = \"nlme\") m <- nlme::lme(distance ~ age, data = Orthodont) # RMSE performance_rmse(m, normalized = FALSE) #> [1] 1.086327 # normalized RMSE performance_rmse(m, normalized = TRUE) #> [1] 0.07242178"},{"path":"https://easystats.github.io/performance/reference/performance_roc.html","id":null,"dir":"Reference","previous_headings":"","what":"Simple ROC curve — performance_roc","title":"Simple ROC curve — performance_roc","text":"function calculates simple ROC curves x/y coordinates based response predictions binomial model. 
returns area curve (AUC) percentage, corresponds probability randomly chosen observation \"condition 1\" correctly classified model higher probability \"condition 1\" randomly chosen \"condition 2\" observation. Applying .data.frame() output returns data frame containing following: Sensitivity (actually corresponds 1 - Specificity): False Positive Rate. Sensitivity: True Positive Rate, proportion correctly classified \"condition 1\" observations.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_roc.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Simple ROC curve — performance_roc","text":"","code":"performance_roc(x, ..., predictions, new_data)"},{"path":"https://easystats.github.io/performance/reference/performance_roc.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Simple ROC curve — performance_roc","text":"x numeric vector, representing outcome (0/1), model binomial outcome. ... One models binomial outcome. case, new_data ignored. predictions x numeric, numeric vector length x, representing actual predicted values. new_data x model, data frame passed predict() newdata-argument. 
NULL, ROC full model calculated.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_roc.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Simple ROC curve — performance_roc","text":"data frame three columns, x/y-coordinate pairs ROC curve (Sensitivity Specificity), column model name.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_roc.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Simple ROC curve — performance_roc","text":"also plot()-method implemented see-package.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_roc.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Simple ROC curve — performance_roc","text":"","code":"library(bayestestR) data(iris) set.seed(123) iris$y <- rbinom(nrow(iris), size = 1, .3) folds <- sample(nrow(iris), size = nrow(iris) / 8, replace = FALSE) test_data <- iris[folds, ] train_data <- iris[-folds, ] model <- glm(y ~ Sepal.Length + Sepal.Width, data = train_data, family = \"binomial\") as.data.frame(performance_roc(model, new_data = test_data)) #> Sensitivity Specificity Model #> 1 0.0000000 0.00000000 Model 1 #> 2 0.1428571 0.00000000 Model 1 #> 3 0.1428571 0.09090909 Model 1 #> 4 0.1428571 0.18181818 Model 1 #> 5 0.1428571 0.27272727 Model 1 #> 6 0.1428571 0.36363636 Model 1 #> 7 0.2857143 0.36363636 Model 1 #> 8 0.2857143 0.45454545 Model 1 #> 9 0.2857143 0.54545455 Model 1 #> 10 0.2857143 0.63636364 Model 1 #> 11 0.2857143 0.72727273 Model 1 #> 12 0.4285714 0.72727273 Model 1 #> 13 0.5714286 0.72727273 Model 1 #> 14 0.5714286 0.81818182 Model 1 #> 15 0.7142857 0.81818182 Model 1 #> 16 0.8571429 0.81818182 Model 1 #> 17 0.8571429 0.90909091 Model 1 #> 18 1.0000000 0.90909091 Model 1 #> 19 1.0000000 1.00000000 Model 1 #> 20 1.0000000 1.00000000 Model 1 as.numeric(performance_roc(model)) #> [1] 0.540825 roc <- 
performance_roc(model, new_data = test_data) area_under_curve(roc$Specificity, roc$Sensitivity) #> [1] 0.3766234 if (interactive()) { m1 <- glm(y ~ Sepal.Length + Sepal.Width, data = iris, family = \"binomial\") m2 <- glm(y ~ Sepal.Length + Petal.Width, data = iris, family = \"binomial\") m3 <- glm(y ~ Sepal.Length + Species, data = iris, family = \"binomial\") performance_roc(m1, m2, m3) # if you have `see` package installed, you can also plot comparison of # ROC curves for different models if (require(\"see\")) plot(performance_roc(m1, m2, m3)) }"},{"path":"https://easystats.github.io/performance/reference/performance_rse.html","id":null,"dir":"Reference","previous_headings":"","what":"Residual Standard Error for Linear Models — performance_rse","title":"Residual Standard Error for Linear Models — performance_rse","text":"Compute residual standard error linear models.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_rse.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Residual Standard Error for Linear Models — performance_rse","text":"","code":"performance_rse(model)"},{"path":"https://easystats.github.io/performance/reference/performance_rse.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Residual Standard Error for Linear Models — performance_rse","text":"model model.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_rse.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Residual Standard Error for Linear Models — performance_rse","text":"Numeric, residual standard error model.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_rse.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Residual Standard Error for Linear Models — performance_rse","text":"residual standard error square root residual sum squares divided 
residual degrees freedom.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_rse.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Residual Standard Error for Linear Models — performance_rse","text":"","code":"data(mtcars) m <- lm(mpg ~ hp + gear, data = mtcars) performance_rse(m) #> [1] 3.107785"},{"path":"https://easystats.github.io/performance/reference/performance_score.html","id":null,"dir":"Reference","previous_headings":"","what":"Proper Scoring Rules — performance_score","title":"Proper Scoring Rules — performance_score","text":"Calculates logarithmic, quadratic/Brier spherical score model binary count outcome.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_score.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Proper Scoring Rules — performance_score","text":"","code":"performance_score(model, verbose = TRUE, ...)"},{"path":"https://easystats.github.io/performance/reference/performance_score.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Proper Scoring Rules — performance_score","text":"model Model binary count outcome. verbose Toggle warnings. ... Arguments functions, usually used internally.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_score.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Proper Scoring Rules — performance_score","text":"list three elements, logarithmic, quadratic/Brier spherical score.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_score.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Proper Scoring Rules — performance_score","text":"Proper scoring rules can used evaluate quality model predictions model fit. performance_score() calculates logarithmic, quadratic/Brier spherical scoring rules. 
spherical rule takes values interval [0, 1], values closer 1 indicating accurate model, logarithmic rule interval [-Inf, 0], values closer 0 indicating accurate model. stan_lmer() stan_glmer() models, predicted values based posterior_predict(), instead predict(). Thus, results may differ expected non-Bayesian counterparts lme4.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_score.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Proper Scoring Rules — performance_score","text":"Code partially based GLMMadaptive::scoring_rules().","code":""},{"path":"https://easystats.github.io/performance/reference/performance_score.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Proper Scoring Rules — performance_score","text":"Carvalho, A. (2016). overview applications proper scoring rules. Decision Analysis 13, 223–242. doi:10.1287/deca.2016.0337","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/reference/performance_score.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Proper Scoring Rules — performance_score","text":"","code":"## Dobson (1990) Page 93: Randomized Controlled Trial : counts <- c(18, 17, 15, 20, 10, 20, 25, 13, 12) outcome <- gl(3, 1, 9) treatment <- gl(3, 3) model <- glm(counts ~ outcome + treatment, family = poisson()) performance_score(model) #> # Proper Scoring Rules #> #> logarithmic: -2.5979 #> quadratic: 0.2095 #> spherical: 0.3238 # \donttest{ data(Salamanders, package = \"glmmTMB\") model <- glmmTMB::glmmTMB( count ~ spp + mined + (1 | site), zi = ~ spp + mined, family = nbinom2(), data = Salamanders ) performance_score(model) #> # Proper Scoring Rules #> #> logarithmic: -1.3275 #> quadratic: 262.1651 #> spherical: 0.0316 # }"},{"path":"https://easystats.github.io/performance/reference/r2.html","id":null,"dir":"Reference","previous_headings":"","what":"Compute the model's R2 
— r2","title":"Compute the model's R2 — r2","text":"Calculate R2, also known coefficient determination, value different model objects. Depending model, R2, pseudo-R2, marginal / adjusted R2 values returned.","code":""},{"path":"https://easystats.github.io/performance/reference/r2.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Compute the model's R2 — r2","text":"","code":"r2(model, ...) # Default S3 method r2(model, ci = NULL, verbose = TRUE, ...) # S3 method for class 'mlm' r2(model, multivariate = TRUE, ...) # S3 method for class 'merMod' r2(model, ci = NULL, tolerance = 1e-05, ...)"},{"path":"https://easystats.github.io/performance/reference/r2.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Compute the model's R2 — r2","text":"model statistical model. ... Arguments passed related r2-methods. ci Confidence interval level, scalar. NULL (default), confidence intervals R2 calculated. verbose Logical. details R2 CI methods given (TRUE) (FALSE)? multivariate Logical. multiple R2 values reported separated response (FALSE) single R2 reported combined across responses computed r2_mlm (TRUE). tolerance Tolerance singularity check random effects, decide whether compute random effect variances conditional r-squared . Indicates value convergence result accepted. r2_nakagawa() returns warning, stating random effect variances computed (thus, conditional r-squared NA), decrease tolerance-level. See also check_singularity().","code":""},{"path":"https://easystats.github.io/performance/reference/r2.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Compute the model's R2 — r2","text":"Returns list containing values related appropriate R2 given model (NULL R2 extracted). 
See list : Logistic models: Tjur's R2 General linear models: Nagelkerke's R2 Multinomial Logit: McFadden's R2 Models zero-inflation: R2 zero-inflated models Mixed models: Nakagawa's R2 Bayesian models: R2 bayes","code":""},{"path":"https://easystats.github.io/performance/reference/r2.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Compute the model's R2 — r2","text":"r2()-method defined given model class, r2() tries return \"generic\" r-squared value, calculated following: 1-sum((y-y_hat)^2)/sum((y-y_bar)^2)","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/reference/r2.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Compute the model's R2 — r2","text":"","code":"# Pseudo r-squared for GLM model <- glm(vs ~ wt + mpg, data = mtcars, family = \"binomial\") r2(model) #> # R2 for Logistic Regression #> Tjur's R2: 0.478 # r-squared including confidence intervals model <- lm(mpg ~ wt + hp, data = mtcars) r2(model, ci = 0.95) #> R2: 0.827 [0.654, 0.906] #> adj. R2: 0.815 [0.632, 0.899] model <- lme4::lmer(Sepal.Length ~ Petal.Length + (1 | Species), data = iris) r2(model) #> # R2 for Mixed Models #> #> Conditional R2: 0.969 #> Marginal R2: 0.658"},{"path":"https://easystats.github.io/performance/reference/r2_bayes.html","id":null,"dir":"Reference","previous_headings":"","what":"Bayesian R2 — r2_bayes","title":"Bayesian R2 — r2_bayes","text":"Compute R2 Bayesian models. mixed models (including random part), additionally computes R2 related fixed effects (marginal R2). r2_bayes() returns single R2 value, r2_posterior() returns posterior sample Bayesian R2 values.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_bayes.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Bayesian R2 — r2_bayes","text":"","code":"r2_bayes(model, robust = TRUE, ci = 0.95, verbose = TRUE, ...) r2_posterior(model, ...) 
# S3 method for class 'brmsfit' r2_posterior(model, verbose = TRUE, ...) # S3 method for class 'stanreg' r2_posterior(model, verbose = TRUE, ...) # S3 method for class 'BFBayesFactor' r2_posterior(model, average = FALSE, prior_odds = NULL, verbose = TRUE, ...)"},{"path":"https://easystats.github.io/performance/reference/r2_bayes.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Bayesian R2 — r2_bayes","text":"model Bayesian regression model (brms, rstanarm, BayesFactor, etc). robust Logical, TRUE, median instead mean used calculate central tendency variances. ci Value vector probability CI (0 1) estimated. verbose Toggle warnings. ... Arguments passed r2_posterior(). average Compute model-averaged index? See bayestestR::weighted_posteriors(). prior_odds Optional vector prior odds models compared first model (denominator, BFBayesFactor objects). data.frames, used basis weighting.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_bayes.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Bayesian R2 — r2_bayes","text":"list Bayesian R2 value. mixed models, list Bayesian R2 value marginal Bayesian R2 value. standard errors credible intervals R2 values saved attributes.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_bayes.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Bayesian R2 — r2_bayes","text":"r2_bayes() returns \"unadjusted\" R2 value. See r2_loo() calculate LOO-adjusted R2, comes conceptually closer adjusted R2 measure. mixed models, conditional marginal R2 returned. marginal R2 considers variance fixed effects, conditional R2 takes fixed random effects account. Technically, since r2_bayes() relies rstantools::bayes_R2(), \"marginal\" R2 calls bayes_R2(re.form = NA), \"conditional\" R2 calls bayes_R2(re.form = NULL). 
re.form argument passed rstantools::posterior_epred(), internally called bayes_R2(). Note \"marginal\" \"conditional\", refer wording suggested Nakagawa et al. 2017. Thus, use term \"marginal\" sense random effects integrated , \"ignored\". r2_posterior() actual workhorse r2_bayes() returns posterior sample Bayesian R2 values.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_bayes.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Bayesian R2 — r2_bayes","text":"Gelman, ., Goodrich, B., Gabry, J., Vehtari, . (2018). R-squared Bayesian regression models. American Statistician, 1–6. doi:10.1080/00031305.2018.1549100 Nakagawa, S., Johnson, P. C. D., Schielzeth, H. (2017). coefficient determination R2 intra-class correlation coefficient generalized linear mixed-effects models revisited expanded. Journal Royal Society Interface, 14(134), 20170213.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_bayes.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Bayesian R2 — r2_bayes","text":"","code":"library(performance) # \\donttest{ model <- suppressWarnings(rstanarm::stan_glm( mpg ~ wt + cyl, data = mtcars, chains = 1, iter = 500, refresh = 0, show_messages = FALSE )) r2_bayes(model) #> # Bayesian R2 with Compatibility Interval #> #> Conditional R2: 0.811 (95% CI [0.681, 0.884]) model <- suppressWarnings(rstanarm::stan_lmer( Petal.Length ~ Petal.Width + (1 | Species), data = iris, chains = 1, iter = 500, refresh = 0 )) r2_bayes(model) #> # Bayesian R2 with Compatibility Interval #> #> Conditional R2: 0.954 (95% CI [0.951, 0.957]) #> Marginal R2: 0.387 (95% CI [0.174, 0.611]) # } # \\donttest{ model <- suppressWarnings(brms::brm( mpg ~ wt + cyl, data = mtcars, silent = 2, refresh = 0 )) r2_bayes(model) #> # Bayesian R2 with Compatibility Interval #> #> Conditional R2: 0.826 (95% CI [0.757, 0.855]) model <- suppressWarnings(brms::brm( 
Petal.Length ~ Petal.Width + (1 | Species), data = iris, silent = 2, refresh = 0 )) r2_bayes(model) #> # Bayesian R2 with Compatibility Interval #> #> Conditional R2: 0.955 (95% CI [0.951, 0.957]) #> Marginal R2: 0.382 (95% CI [0.173, 0.597]) # }"},{"path":"https://easystats.github.io/performance/reference/r2_coxsnell.html","id":null,"dir":"Reference","previous_headings":"","what":"Cox & Snell's R2 — r2_coxsnell","title":"Cox & Snell's R2 — r2_coxsnell","text":"Calculates pseudo-R2 value based proposal Cox & Snell (1989).","code":""},{"path":"https://easystats.github.io/performance/reference/r2_coxsnell.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Cox & Snell's R2 — r2_coxsnell","text":"","code":"r2_coxsnell(model, ...)"},{"path":"https://easystats.github.io/performance/reference/r2_coxsnell.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Cox & Snell's R2 — r2_coxsnell","text":"model Model binary outcome. ... Currently used.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_coxsnell.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Cox & Snell's R2 — r2_coxsnell","text":"named vector R2 value.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_coxsnell.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Cox & Snell's R2 — r2_coxsnell","text":"index proposed Cox Snell (1989, pp. 208-9) , apparently independently, Magee (1990); suggested earlier binary response models Maddala (1983). However, index achieves maximum less 1 discrete models (.e. 
models whose likelihood product probabilities) maximum 1, instead densities, can become infinite (Nagelkerke, 1991).","code":""},{"path":"https://easystats.github.io/performance/reference/r2_coxsnell.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Cox & Snell's R2 — r2_coxsnell","text":"Cox, D. R., Snell, E. J. (1989). Analysis binary data (Vol. 32). Monographs Statistics Applied Probability. Magee, L. (1990). R 2 measures based Wald likelihood ratio joint significance tests. American Statistician, 44(3), 250-253. Maddala, G. S. (1986). Limited-dependent qualitative variables econometrics (. 3). Cambridge university press. Nagelkerke, N. J. (1991). note general definition coefficient determination. Biometrika, 78(3), 691-692.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_coxsnell.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Cox & Snell's R2 — r2_coxsnell","text":"","code":"model <- glm(vs ~ wt + mpg, data = mtcars, family = \"binomial\") r2_coxsnell(model) #> Cox & Snell's R2 #> 0.4401407"},{"path":"https://easystats.github.io/performance/reference/r2_efron.html","id":null,"dir":"Reference","previous_headings":"","what":"Efron's R2 — r2_efron","title":"Efron's R2 — r2_efron","text":"Calculates Efron's pseudo R2.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_efron.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Efron's R2 — r2_efron","text":"","code":"r2_efron(model)"},{"path":"https://easystats.github.io/performance/reference/r2_efron.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Efron's R2 — r2_efron","text":"model Generalized linear model.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_efron.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Efron's R2 — 
r2_efron","text":"R2 value.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_efron.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Efron's R2 — r2_efron","text":"Efron's R2 calculated taking sum squared model residuals, divided total variability dependent variable. R2 equals squared correlation predicted values actual values, however, note model residuals generalized linear models generally comparable OLS.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_efron.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Efron's R2 — r2_efron","text":"Efron, B. (1978). Regression ANOVA zero-one data: Measures residual variation. Journal American Statistical Association, 73, 113-121.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_efron.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Efron's R2 — r2_efron","text":"","code":"## Dobson (1990) Page 93: Randomized Controlled Trial: counts <- c(18, 17, 15, 20, 10, 20, 25, 13, 12) # outcome <- gl(3, 1, 9) treatment <- gl(3, 3) model <- glm(counts ~ outcome + treatment, family = poisson()) r2_efron(model) #> [1] 0.5265152"},{"path":"https://easystats.github.io/performance/reference/r2_ferrari.html","id":null,"dir":"Reference","previous_headings":"","what":"Ferrari's and Cribari-Neto's R2 — r2_ferrari","title":"Ferrari's and Cribari-Neto's R2 — r2_ferrari","text":"Calculates Ferrari's Cribari-Neto's pseudo R2 (beta-regression models).","code":""},{"path":"https://easystats.github.io/performance/reference/r2_ferrari.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Ferrari's and Cribari-Neto's R2 — r2_ferrari","text":"","code":"r2_ferrari(model, ...) 
# Default S3 method r2_ferrari(model, correct_bounds = FALSE, ...)"},{"path":"https://easystats.github.io/performance/reference/r2_ferrari.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Ferrari's and Cribari-Neto's R2 — r2_ferrari","text":"model Generalized linear, particular beta-regression model. ... Currently used. correct_bounds Logical, whether correct bounds response variable avoid 0 1. TRUE, response variable normalized \"compressed\", .e. zeros ones excluded.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_ferrari.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Ferrari's and Cribari-Neto's R2 — r2_ferrari","text":"list pseudo R2 value.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_ferrari.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Ferrari's and Cribari-Neto's R2 — r2_ferrari","text":"Ferrari, S., Cribari-Neto, F. (2004). Beta Regression Modelling Rates Proportions. Journal Applied Statistics, 31(7), 799–815. 
doi:10.1080/0266476042000214501","code":""},{"path":"https://easystats.github.io/performance/reference/r2_ferrari.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Ferrari's and Cribari-Neto's R2 — r2_ferrari","text":"","code":"data(\"GasolineYield\", package = \"betareg\") model <- betareg::betareg(yield ~ batch + temp, data = GasolineYield) r2_ferrari(model) #> # R2 for Generalized Linear Regression #> Ferrari's R2: 0.962"},{"path":"https://easystats.github.io/performance/reference/r2_kullback.html","id":null,"dir":"Reference","previous_headings":"","what":"Kullback-Leibler R2 — r2_kullback","title":"Kullback-Leibler R2 — r2_kullback","text":"Calculates Kullback-Leibler-divergence-based R2 generalized linear models.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_kullback.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Kullback-Leibler R2 — r2_kullback","text":"","code":"r2_kullback(model, ...) # S3 method for class 'glm' r2_kullback(model, adjust = TRUE, ...)"},{"path":"https://easystats.github.io/performance/reference/r2_kullback.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Kullback-Leibler R2 — r2_kullback","text":"model generalized linear model. ... Additional arguments. Currently used. adjust Logical, TRUE (default), adjusted R2 value returned.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_kullback.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Kullback-Leibler R2 — r2_kullback","text":"named vector R2 value.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_kullback.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Kullback-Leibler R2 — r2_kullback","text":"Cameron, . C. Windmeijer, . G. (1997) R-squared measure goodness fit common nonlinear regression models. 
Journal Econometrics, 77: 329-342.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_kullback.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Kullback-Leibler R2 — r2_kullback","text":"","code":"model <- glm(vs ~ wt + mpg, data = mtcars, family = \"binomial\") r2_kullback(model) #> Kullback-Leibler R2 #> 0.3834362"},{"path":"https://easystats.github.io/performance/reference/r2_loo.html","id":null,"dir":"Reference","previous_headings":"","what":"LOO-adjusted R2 — r2_loo","title":"LOO-adjusted R2 — r2_loo","text":"Compute LOO-adjusted R2.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_loo.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"LOO-adjusted R2 — r2_loo","text":"","code":"r2_loo(model, robust = TRUE, ci = 0.95, verbose = TRUE, ...) r2_loo_posterior(model, ...) # S3 method for class 'brmsfit' r2_loo_posterior(model, verbose = TRUE, ...) # S3 method for class 'stanreg' r2_loo_posterior(model, verbose = TRUE, ...)"},{"path":"https://easystats.github.io/performance/reference/r2_loo.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"LOO-adjusted R2 — r2_loo","text":"model Bayesian regression model (brms, rstanarm, BayesFactor, etc). robust Logical, TRUE, median instead mean used calculate central tendency variances. ci Value vector probability CI (0 1) estimated. verbose Toggle warnings. ... Arguments passed r2_posterior().","code":""},{"path":"https://easystats.github.io/performance/reference/r2_loo.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"LOO-adjusted R2 — r2_loo","text":"list Bayesian R2 value. mixed models, list Bayesian R2 value marginal Bayesian R2 value. standard errors credible intervals R2 values saved attributes. list LOO-adjusted R2 value. 
standard errors credible intervals R2 values saved attributes.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_loo.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"LOO-adjusted R2 — r2_loo","text":"r2_loo() returns \"adjusted\" R2 value computed using leave-one-out-adjusted posterior distribution. conceptually similar adjusted/unbiased R2 estimate classical regression modeling. See r2_bayes() \"unadjusted\" R2. Mixed models currently fully supported. r2_loo_posterior() actual workhorse r2_loo() returns posterior sample LOO-adjusted Bayesian R2 values.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_loo.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"LOO-adjusted R2 — r2_loo","text":"","code":"model <- suppressWarnings(rstanarm::stan_glm( mpg ~ wt + cyl, data = mtcars, chains = 1, iter = 500, refresh = 0, show_messages = FALSE )) r2_loo(model) #> # LOO-adjusted R2 with Compatibility Interval #> #> Conditional R2: 0.794 (95% CI [0.687, 0.879])"},{"path":"https://easystats.github.io/performance/reference/r2_mcfadden.html","id":null,"dir":"Reference","previous_headings":"","what":"McFadden's R2 — r2_mcfadden","title":"McFadden's R2 — r2_mcfadden","text":"Calculates McFadden's pseudo R2.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_mcfadden.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"McFadden's R2 — r2_mcfadden","text":"","code":"r2_mcfadden(model, ...)"},{"path":"https://easystats.github.io/performance/reference/r2_mcfadden.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"McFadden's R2 — r2_mcfadden","text":"model Generalized linear multinomial logit (mlogit) model. ... 
Currently used.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_mcfadden.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"McFadden's R2 — r2_mcfadden","text":"models, list McFadden's R2 adjusted McFadden's R2 value. models, McFadden's R2 available.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_mcfadden.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"McFadden's R2 — r2_mcfadden","text":"McFadden, D. (1987). Regression-based specification tests multinomial logit model. Journal econometrics, 34(1-2), 63-82. McFadden, D. (1973). Conditional logit analysis qualitative choice behavior.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_mcfadden.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"McFadden's R2 — r2_mcfadden","text":"","code":"if (require(\"mlogit\")) { data(\"Fishing\", package = \"mlogit\") Fish <- mlogit.data(Fishing, varying = c(2:9), shape = \"wide\", choice = \"mode\") model <- mlogit(mode ~ price + catch, data = Fish) r2_mcfadden(model) } #> Loading required package: mlogit #> Loading required package: dfidx #> #> Attaching package: ‘dfidx’ #> The following object is masked from ‘package:MASS’: #> #> select #> The following object is masked from ‘package:stats’: #> #> filter #> McFadden's R2 #> 0.17823"},{"path":"https://easystats.github.io/performance/reference/r2_mckelvey.html","id":null,"dir":"Reference","previous_headings":"","what":"McKelvey & Zavoinas R2 — r2_mckelvey","title":"McKelvey & Zavoinas R2 — r2_mckelvey","text":"Calculates McKelvey Zavoinas pseudo R2.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_mckelvey.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"McKelvey & Zavoinas R2 — 
r2_mckelvey","text":"","code":"r2_mckelvey(model)"},{"path":"https://easystats.github.io/performance/reference/r2_mckelvey.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"McKelvey & Zavoinas R2 — r2_mckelvey","text":"model Generalized linear model.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_mckelvey.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"McKelvey & Zavoinas R2 — r2_mckelvey","text":"R2 value.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_mckelvey.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"McKelvey & Zavoinas R2 — r2_mckelvey","text":"McKelvey Zavoinas R2 based explained variance, variance predicted response divided sum variance predicted response residual variance. binomial models, residual variance either pi^2/3 logit-link 1 probit-link. poisson-models, residual variance based log-normal approximation, similar distribution-specific variance described ?insight::get_variance.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_mckelvey.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"McKelvey & Zavoinas R2 — r2_mckelvey","text":"McKelvey, R., Zavoina, W. (1975), \"Statistical Model Analysis Ordinal Level Dependent Variables\", Journal Mathematical Sociology 4, S. 
103–120.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_mckelvey.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"McKelvey & Zavoinas R2 — r2_mckelvey","text":"","code":"## Dobson (1990) Page 93: Randomized Controlled Trial: counts <- c(18, 17, 15, 20, 10, 20, 25, 13, 12) outcome <- gl(3, 1, 9) treatment <- gl(3, 3) model <- glm(counts ~ outcome + treatment, family = poisson()) r2_mckelvey(model) #> McKelvey's R2 #> 0.3776292"},{"path":"https://easystats.github.io/performance/reference/r2_mlm.html","id":null,"dir":"Reference","previous_headings":"","what":"Multivariate R2 — r2_mlm","title":"Multivariate R2 — r2_mlm","text":"Calculates two multivariate R2 values multivariate linear regression.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_mlm.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Multivariate R2 — r2_mlm","text":"","code":"r2_mlm(model, ...)"},{"path":"https://easystats.github.io/performance/reference/r2_mlm.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Multivariate R2 — r2_mlm","text":"model Multivariate linear regression model. ... Currently used.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_mlm.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Multivariate R2 — r2_mlm","text":"named vector R2 values.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_mlm.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Multivariate R2 — r2_mlm","text":"two indexes returned summarize model fit set predictors given system responses. compared default r2 index multivariate linear models, indexes returned function provide single fit value collapsed across responses. 
two returned indexes proposed Van den Burg Lewis (1988) extension metrics proposed Cramer Nicewander (1979). numerous indexes proposed across two papers, two metrics, \\(R_{xy}\\) \\(P_{xy}\\), recommended use Azen Budescu (2006). multivariate linear regression \\(p\\) predictors \\(q\\) responses \\(p > q\\), \\(R_{xy}\\) index computed: $$R_{xy} = 1 - \\prod_{i=1}^p (1 - \\rho_i^2)$$ \\(\\rho\\) canonical variate canonical correlation predictors responses. metric symmetric value change roles variables predictors responses swapped. \\(P_{xy}\\) computed: $$P_{xy} = \\frac{q - trace(\\bf{S}_{\\bf{YY}}^{-1}\\bf{S}_{\\bf{YY|X}})}{q}$$ \\(\\bf{S}_{\\bf{YY}}\\) matrix response covariances \\(\\bf{S}_{\\bf{YY|X}}\\) matrix residual covariances given predictors. metric asymmetric can change depending variables considered predictors versus responses.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_mlm.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Multivariate R2 — r2_mlm","text":"Azen, R., & Budescu, D. V. (2006). Comparing predictors multivariate regression models: extension dominance analysis. Journal Educational Behavioral Statistics, 31(2), 157-180. Cramer, E. M., & Nicewander, W. A. (1979). symmetric, invariant measures multivariate association. Psychometrika, 44, 43-54. Van den Burg, W., & Lewis, C. (1988). properties two measures multivariate association. 
Psychometrika, 53, 109-122.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_mlm.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Multivariate R2 — r2_mlm","text":"Joseph Luchman","code":""},{"path":"https://easystats.github.io/performance/reference/r2_mlm.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Multivariate R2 — r2_mlm","text":"","code":"model <- lm(cbind(qsec, drat) ~ wt + mpg + cyl, data = mtcars) r2_mlm(model) #> Symmetric Rxy Asymmetric Pxy #> 0.8573111 0.5517522 model_swap <- lm(cbind(wt, mpg, cyl) ~ qsec + drat, data = mtcars) r2_mlm(model_swap) #> Symmetric Rxy Asymmetric Pxy #> 0.8573111 0.3678348"},{"path":"https://easystats.github.io/performance/reference/r2_nagelkerke.html","id":null,"dir":"Reference","previous_headings":"","what":"Nagelkerke's R2 — r2_nagelkerke","title":"Nagelkerke's R2 — r2_nagelkerke","text":"Calculate Nagelkerke's pseudo-R2.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_nagelkerke.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Nagelkerke's R2 — r2_nagelkerke","text":"","code":"r2_nagelkerke(model, ...)"},{"path":"https://easystats.github.io/performance/reference/r2_nagelkerke.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Nagelkerke's R2 — r2_nagelkerke","text":"model generalized linear model, including cumulative links resp. multinomial models. ... 
Currently used.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_nagelkerke.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Nagelkerke's R2 — r2_nagelkerke","text":"named vector R2 value.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_nagelkerke.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Nagelkerke's R2 — r2_nagelkerke","text":"Nagelkerke, N. J. (1991). note general definition coefficient determination. Biometrika, 78(3), 691-692.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_nagelkerke.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Nagelkerke's R2 — r2_nagelkerke","text":"","code":"model <- glm(vs ~ wt + mpg, data = mtcars, family = \"binomial\") r2_nagelkerke(model) #> Nagelkerke's R2 #> 0.5899593"},{"path":"https://easystats.github.io/performance/reference/r2_nakagawa.html","id":null,"dir":"Reference","previous_headings":"","what":"Nakagawa's R2 for mixed models — r2_nakagawa","title":"Nakagawa's R2 for mixed models — r2_nakagawa","text":"Compute marginal conditional r-squared value mixed effects models complex random effects structures.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_nakagawa.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Nakagawa's R2 for mixed models — r2_nakagawa","text":"","code":"r2_nakagawa( model, by_group = FALSE, tolerance = 1e-08, ci = NULL, iterations = 100, ci_method = NULL, null_model = NULL, approximation = \"lognormal\", model_component = NULL, verbose = TRUE, ... )"},{"path":"https://easystats.github.io/performance/reference/r2_nakagawa.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Nakagawa's R2 for mixed models — r2_nakagawa","text":"model mixed effects model. 
by_group Logical, TRUE, returns explained variance different levels (multiple levels). essentially similar variance reduction approach Hox (2010), pp. 69-78. tolerance Tolerance singularity check random effects, decide whether compute random effect variances conditional r-squared . Indicates value convergence result accepted. r2_nakagawa() returns warning, stating random effect variances computed (thus, conditional r-squared NA), decrease tolerance-level. See also check_singularity(). ci Confidence resp. credible interval level. icc(), r2(), rmse(), confidence intervals based bootstrapped samples ICC, R2 RMSE value. See iterations. iterations Number bootstrap-replicates computing confidence intervals ICC, R2, RMSE etc. ci_method Character string, indicating bootstrap-method. NULL (default), case lme4::bootMer() used bootstrapped confidence intervals. However, bootstrapped intervals calculated way, try ci_method = \"boot\", falls back boot::boot(). may successfully return bootstrapped confidence intervals, bootstrapped samples may appropriate multilevel structure model. also option ci_method = \"analytical\", tries calculate analytical confidence assuming chi-squared distribution. However, intervals rather inaccurate often narrow. recommended calculate bootstrapped confidence intervals mixed models. null_model Optional, null model compute random effect variances, passed insight::get_variance(). Usually required calculation r-squared ICC fails null_model specified. calculating null model takes longer already fit null model, can pass , , speed process. approximation Character string, indicating approximation method distribution-specific (observation level, residual) variance. applies non-Gaussian models. Can \"lognormal\" (default), \"delta\" \"trigamma\". binomial models, default theoretical distribution specific variance, however, can also \"observation_level\". See Nakagawa et al. 2017, particular supplement 2, details. 
model_component models can zero-inflation component, specify component variances returned. NULL \"full\" (default), conditional zero-inflation component taken account. \"conditional\", conditional component considered. verbose Toggle warnings messages. ... Arguments passed lme4::bootMer() boot::boot() bootstrapped ICC, R2, RMSE etc.; variance_decomposition(), arguments passed brms::posterior_predict().","code":""},{"path":"https://easystats.github.io/performance/reference/r2_nakagawa.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Nakagawa's R2 for mixed models — r2_nakagawa","text":"list conditional marginal R2 values.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_nakagawa.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Nakagawa's R2 for mixed models — r2_nakagawa","text":"Marginal conditional r-squared values mixed models calculated based Nakagawa et al. (2017). details computation variances, see insight::get_variance(). random effect variances actually mean random effect variances, thus r-squared value also appropriate mixed models random slopes nested random effects (see Johnson, 2014). Conditional R2: takes fixed random effects account. Marginal R2: considers variance fixed effects. contribution random effects can deduced subtracting marginal R2 conditional R2 computing icc().","code":""},{"path":"https://easystats.github.io/performance/reference/r2_nakagawa.html","id":"supported-models-and-model-families","dir":"Reference","previous_headings":"","what":"Supported models and model families","title":"Nakagawa's R2 for mixed models — r2_nakagawa","text":"single variance components required calculate marginal conditional r-squared values calculated using insight::get_variance() function. results validated solutions provided Nakagawa et al. (2017), particular examples shown Supplement 2 paper. model families validated results MuMIn package. 
means r-squared values returned r2_nakagawa() accurate reliable following mixed models model families: Bernoulli (logistic) regression Binomial regression (binary outcomes) Poisson Quasi-Poisson regression Negative binomial regression (including nbinom1, nbinom2 nbinom12 families) Gaussian regression (linear models) Gamma regression Tweedie regression Beta regression Ordered beta regression Following model families yet validated, work: Zero-inflated hurdle models Beta-binomial regression Compound Poisson regression Generalized Poisson regression Log-normal regression Skew-normal regression Extracting variance components models zero-inflation part straightforward, definitely clear distribution-specific variance calculated. Therefore, recommended carefully inspect results, probably validate models, e.g. Bayesian models (although results may roughly comparable). Log-normal regressions (e.g. lognormal() family glmmTMB gaussian(\"log\")) often low fixed effects variance (calculated suggested Nakagawa et al. 2017). results low ICC r-squared values, may meaningful.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_nakagawa.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Nakagawa's R2 for mixed models — r2_nakagawa","text":"Hox, J. J. (2010). Multilevel analysis: techniques applications (2nd ed). New York: Routledge. Johnson, P. C. D. (2014). Extension Nakagawa Schielzeth’s R2 GLMM random slopes models. Methods Ecology Evolution, 5(9), 944–946. doi:10.1111/2041-210X.12225 Nakagawa, S., Schielzeth, H. (2013). general simple method obtaining R2 generalized linear mixed-effects models. Methods Ecology Evolution, 4(2), 133–142. doi:10.1111/j.2041-210x.2012.00261.x Nakagawa, S., Johnson, P. C. D., Schielzeth, H. (2017). coefficient determination R2 intra-class correlation coefficient generalized linear mixed-effects models revisited expanded. 
Journal Royal Society Interface, 14(134), 20170213.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_nakagawa.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Nakagawa's R2 for mixed models — r2_nakagawa","text":"","code":"model <- lme4::lmer(Sepal.Length ~ Petal.Length + (1 | Species), data = iris) r2_nakagawa(model) #> # R2 for Mixed Models #> #> Conditional R2: 0.969 #> Marginal R2: 0.658 r2_nakagawa(model, by_group = TRUE) #> # Explained Variance by Level #> #> Level | R2 #> ---------------- #> Level 1 | 0.569 #> Species | -0.853 #>"},{"path":"https://easystats.github.io/performance/reference/r2_somers.html","id":null,"dir":"Reference","previous_headings":"","what":"Somers' Dxy rank correlation for binary outcomes — r2_somers","title":"Somers' Dxy rank correlation for binary outcomes — r2_somers","text":"Calculates Somers' Dxy rank correlation logistic regression models.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_somers.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Somers' Dxy rank correlation for binary outcomes — r2_somers","text":"","code":"r2_somers(model)"},{"path":"https://easystats.github.io/performance/reference/r2_somers.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Somers' Dxy rank correlation for binary outcomes — r2_somers","text":"model logistic regression model.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_somers.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Somers' Dxy rank correlation for binary outcomes — r2_somers","text":"named vector R2 value.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_somers.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Somers' Dxy rank correlation for binary outcomes — 
r2_somers","text":"Somers, R. H. (1962). new asymmetric measure association ordinal variables. American Sociological Review. 27 (6).","code":""},{"path":"https://easystats.github.io/performance/reference/r2_somers.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Somers' Dxy rank correlation for binary outcomes — r2_somers","text":"","code":"# \\donttest{ if (require(\"correlation\") && require(\"Hmisc\")) { model <- glm(vs ~ wt + mpg, data = mtcars, family = \"binomial\") r2_somers(model) } #> Loading required package: correlation #> Loading required package: Hmisc #> #> Attaching package: ‘Hmisc’ #> The following object is masked from ‘package:psych’: #> #> describe #> The following objects are masked from ‘package:ggdag’: #> #> label, label<- #> The following objects are masked from ‘package:base’: #> #> format.pval, units #> Somers' Dxy #> 0.8253968 # }"},{"path":"https://easystats.github.io/performance/reference/r2_tjur.html","id":null,"dir":"Reference","previous_headings":"","what":"Tjur's R2 - coefficient of determination (D) — r2_tjur","title":"Tjur's R2 - coefficient of determination (D) — r2_tjur","text":"method calculates Coefficient Discrimination D (also known Tjur's R2; Tjur, 2009) generalized linear (mixed) models binary outcomes. alternative pseudo-R2 values like Nagelkerke's R2 Cox-Snell R2. Coefficient Discrimination D can read like (pseudo-)R2 value.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_tjur.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Tjur's R2 - coefficient of determination (D) — r2_tjur","text":"","code":"r2_tjur(model, ...)"},{"path":"https://easystats.github.io/performance/reference/r2_tjur.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Tjur's R2 - coefficient of determination (D) — r2_tjur","text":"model Binomial Model. ... 
Arguments functions, usually used internally.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_tjur.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Tjur's R2 - coefficient of determination (D) — r2_tjur","text":"named vector R2 value.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_tjur.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Tjur's R2 - coefficient of determination (D) — r2_tjur","text":"Tjur, T. (2009). Coefficients determination logistic regression models - new proposal: coefficient discrimination. American Statistician, 63(4), 366-372.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_tjur.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Tjur's R2 - coefficient of determination (D) — r2_tjur","text":"","code":"model <- glm(vs ~ wt + mpg, data = mtcars, family = \"binomial\") r2_tjur(model) #> Tjur's R2 #> 0.4776926"},{"path":"https://easystats.github.io/performance/reference/r2_xu.html","id":null,"dir":"Reference","previous_headings":"","what":"Xu's R2 (Omega-squared) — r2_xu","title":"Xu's R2 (Omega-squared) — r2_xu","text":"Calculates Xu's Omega-squared value, simple R2 equivalent linear mixed models.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_xu.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Xu's R2 (Omega-squared) — r2_xu","text":"","code":"r2_xu(model)"},{"path":"https://easystats.github.io/performance/reference/r2_xu.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Xu's R2 (Omega-squared) — r2_xu","text":"model linear (mixed) model.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_xu.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Xu's R2 (Omega-squared) — r2_xu","text":"R2 value.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_xu.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Xu's R2 (Omega-squared) — r2_xu","text":"r2_xu() crude measure explained variance linear (mixed) effects models, originally denoted Ω2.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_xu.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Xu's R2 (Omega-squared) — r2_xu","text":"Xu, R. (2003). Measuring explained variation linear mixed effects models. Statistics Medicine, 22(22), 3527–3541. doi:10.1002/sim.1572","code":""},{"path":"https://easystats.github.io/performance/reference/r2_xu.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Xu's R2 (Omega-squared) — r2_xu","text":"","code":"model <- lm(Sepal.Length ~ Petal.Length + Species, data = iris) r2_xu(model) #> Xu's R2 #> 0.8367238"},{"path":"https://easystats.github.io/performance/reference/r2_zeroinflated.html","id":null,"dir":"Reference","previous_headings":"","what":"R2 for models with zero-inflation — r2_zeroinflated","title":"R2 for models with zero-inflation — r2_zeroinflated","text":"Calculates R2 models zero-inflation component, including mixed effects models.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_zeroinflated.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"R2 for models with zero-inflation — r2_zeroinflated","text":"","code":"r2_zeroinflated(model, method = c(\"default\", \"correlation\"))"},{"path":"https://easystats.github.io/performance/reference/r2_zeroinflated.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"R2 for models with zero-inflation — r2_zeroinflated","text":"model model. method Indicates method calculate R2. See 'Details'. 
May abbreviated.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_zeroinflated.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"R2 for models with zero-inflation — r2_zeroinflated","text":"default-method, list R2 adjusted R2 values. method = \"correlation\", named numeric vector correlation-based R2 value.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_zeroinflated.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"R2 for models with zero-inflation — r2_zeroinflated","text":"default-method calculates R2 value based residual variance divided total variance. method = \"correlation\", R2 correlation-based measure, rather crude. simply computes squared correlation model's actual predicted response.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_zeroinflated.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"R2 for models with zero-inflation — r2_zeroinflated","text":"","code":"# \\donttest{ if (require(\"pscl\")) { data(bioChemists) model <- zeroinfl( art ~ fem + mar + kid5 + ment | kid5 + phd, data = bioChemists ) r2_zeroinflated(model) } #> Loading required package: pscl #> Classes and Methods for R originally developed in the #> Political Science Computational Laboratory #> Department of Political Science #> Stanford University (2002-2015), #> by and under the direction of Simon Jackman. #> hurdle and zeroinfl functions by Achim Zeileis. #> # R2 for Zero-Inflated and Hurdle Regression #> R2: 0.180 #> adj. R2: 0.175 # }"},{"path":"https://easystats.github.io/performance/reference/reexports.html","id":null,"dir":"Reference","previous_headings":"","what":"Objects exported from other packages — reexports","title":"Objects exported from other packages — reexports","text":"objects imported packages. Follow links see documentation. 
insight display, print_html, print_md","code":""},{"path":"https://easystats.github.io/performance/reference/simulate_residuals.html","id":null,"dir":"Reference","previous_headings":"","what":"Simulate randomized quantile residuals from a model — simulate_residuals","title":"Simulate randomized quantile residuals from a model — simulate_residuals","text":"Returns simulated residuals model. useful checking uniformity residuals, particular non-Gaussian models, residuals expected normally distributed.","code":""},{"path":"https://easystats.github.io/performance/reference/simulate_residuals.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Simulate randomized quantile residuals from a model — simulate_residuals","text":"","code":"simulate_residuals(x, iterations = 250, ...) # S3 method for class 'performance_simres' residuals(object, quantile_function = NULL, outlier_values = NULL, ...)"},{"path":"https://easystats.github.io/performance/reference/simulate_residuals.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Simulate randomized quantile residuals from a model — simulate_residuals","text":"x model object. iterations Number simulations run. ... Arguments passed DHARMa::simulateResiduals(). object performance_simres object, returned simulate_residuals(). quantile_function function apply residuals. NULL, residuals returned . NULL, residuals passed function. useful returning normally distributed residuals, example: residuals(x, quantile_function = qnorm). outlier_values vector length 2, specifying values replace -Inf Inf , respectively.","code":""},{"path":"https://easystats.github.io/performance/reference/simulate_residuals.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Simulate randomized quantile residuals from a model — simulate_residuals","text":"Simulated residuals, can processed check_residuals(). 
returned object class DHARMa performance_simres.","code":""},{"path":"https://easystats.github.io/performance/reference/simulate_residuals.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Simulate randomized quantile residuals from a model — simulate_residuals","text":"function small wrapper around DHARMa::simulateResiduals(). basically sets plot = FALSE adds additional class attribute (\"performance_sim_res\"), allows using DHARMa object plotting functions see package. See also vignette(\"DHARMa\"). plot() method visualize distribution residuals.","code":""},{"path":"https://easystats.github.io/performance/reference/simulate_residuals.html","id":"tests-based-on-simulated-residuals","dir":"Reference","previous_headings":"","what":"Tests based on simulated residuals","title":"Simulate randomized quantile residuals from a model — simulate_residuals","text":"certain models, resp. model certain families, tests like check_zeroinflation() check_overdispersion() based simulated residuals. usually accurate tests traditionally used Pearson residuals. However, simulating complex models, mixed models models zero-inflation, several important considerations. simulate_residuals() relies DHARMa::simulateResiduals(), additional arguments specified ... passed function. defaults DHARMa set conservative option works models. However, many cases, help advises use different settings particular situations particular models. recommended read 'Details' ?DHARMa::simulateResiduals closely understand implications simulation process arguments modified get accurate results.","code":""},{"path":"https://easystats.github.io/performance/reference/simulate_residuals.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Simulate randomized quantile residuals from a model — simulate_residuals","text":"Hartig, F., & Lohse, L. (2022). 
DHARMa: Residual Diagnostics Hierarchical (Multi-Level / Mixed) Regression Models (Version 0.4.5). Retrieved https://CRAN.R-project.org/package=DHARMa Dunn, P. K., & Smyth, G. K. (1996). Randomized Quantile Residuals. Journal Computational Graphical Statistics, 5(3), 236. doi:10.2307/1390802","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/reference/simulate_residuals.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Simulate randomized quantile residuals from a model — simulate_residuals","text":"","code":"m <- lm(mpg ~ wt + cyl + gear + disp, data = mtcars) simulate_residuals(m) #> Simulated residuals from a model of class `lm` based on 250 simulations. #> Use `check_residuals()` to check uniformity of residuals or #> `residuals()` to extract simulated residuals. It is recommended to refer #> to `?DHARMa::simulateResiudals` and `vignette(\"DHARMa\")` for more #> information about different settings in particular situations or for #> particular models. # extract residuals head(residuals(simulate_residuals(m))) #> [1] 0.356 0.448 0.096 0.568 0.668 0.204"},{"path":"https://easystats.github.io/performance/reference/test_performance.html","id":null,"dir":"Reference","previous_headings":"","what":"Test if models are different — test_bf","title":"Test if models are different — test_bf","text":"Testing whether models \"different\" terms accuracy explanatory power delicate often complex procedure, many limitations prerequisites. Moreover, many tests exist, coming interpretation, set strengths weaknesses. test_performance() function runs relevant appropriate tests based type input (instance, whether models nested ). However, still requires user understand tests order prevent misinterpretation. 
See Details section information regarding different tests interpretation.","code":""},{"path":"https://easystats.github.io/performance/reference/test_performance.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Test if models are different — test_bf","text":"","code":"test_bf(...) # Default S3 method test_bf(..., reference = 1, text_length = NULL) test_likelihoodratio(..., estimator = \"ML\", verbose = TRUE) test_lrt(..., estimator = \"ML\", verbose = TRUE) test_performance(..., reference = 1, verbose = TRUE) test_vuong(..., verbose = TRUE) test_wald(..., verbose = TRUE)"},{"path":"https://easystats.github.io/performance/reference/test_performance.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Test if models are different — test_bf","text":"... Multiple model objects. reference applies models non-nested, determines model taken reference, models tested. text_length Numeric, length (number chars) output lines. test_bf() describes models formulas, can lead overly long lines output. text_length fixes length lines specified limit. estimator Applied comparing regression models using test_likelihoodratio(). Corresponds different estimators standard deviation errors. Defaults \"OLS\" linear models, \"ML\" models (including mixed models), \"REML\" linear mixed models fixed effects. See 'Details'. verbose Toggle warning messages.","code":""},{"path":"https://easystats.github.io/performance/reference/test_performance.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Test if models are different — test_bf","text":"data frame containing relevant indices.","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/reference/test_performance.html","id":"nested-vs-non-nested-models","dir":"Reference","previous_headings":"","what":"Nested vs. 
Non-nested Models","title":"Test if models are different — test_bf","text":"Model's \"nesting\" important concept models comparison. Indeed, many tests make sense models \"nested\", .e., predictors nested. means fixed effects predictors model contained within fixed effects predictors larger model (sometimes referred encompassing model). instance, model1 (y ~ x1 + x2) \"nested\" within model2 (y ~ x1 + x2 + x3). Usually, people list nested models, instance m1 (y ~ 1), m2 (y ~ x1), m3 (y ~ x1 + x2), m4 (y ~ x1 + x2 + x3), conventional \"ordered\" smallest largest, user reverse order largest smallest. test shows whether parsimonious model, whether adding predictor, results significant difference model's performance. case, models usually compared sequentially: m2 tested m1, m3 m2, m4 m3, etc. Two models considered \"non-nested\" predictors different. instance, model1 (y ~ x1 + x2) model2 (y ~ x3 + x4). case non-nested models, models usually compared reference model (default, first list). Nesting detected via insight::is_nested_models() function. Note , apart nesting, order tests valid, requirements often fulfilled. instance, outcome variables (response) must . meaningfully test whether apples significantly different oranges!","code":""},{"path":"https://easystats.github.io/performance/reference/test_performance.html","id":"estimator-of-the-standard-deviation","dir":"Reference","previous_headings":"","what":"Estimator of the standard deviation","title":"Test if models are different — test_bf","text":"estimator relevant comparing regression models using test_likelihoodratio(). estimator = \"OLS\", uses method anova(..., test = \"LRT\") implemented base R, .e., scaling n-k (unbiased OLS estimator) using estimator alternative hypothesis. estimator = \"ML\", instance used lrtest(...) package lmtest, scaling done n (biased ML estimator) estimator null hypothesis. 
moderately large samples, differences negligible, possible OLS perform slightly better small samples Gaussian errors. estimator = \"REML\", LRT based REML-fit log-likelihoods models. Note types estimators available model classes.","code":""},{"path":"https://easystats.github.io/performance/reference/test_performance.html","id":"reml-versus-ml-estimator","dir":"Reference","previous_headings":"","what":"REML versus ML estimator","title":"Test if models are different — test_bf","text":"estimator = \"ML\", default linear mixed models (unless share fixed effects), values information criteria (AIC, AICc) based ML-estimator, default behaviour AIC() may different (particular linear mixed models lme4, sets REML = TRUE). default test_likelihoodratio() intentional, comparing information criteria based REML fits requires fixed effects models, often case. Thus, anova.merMod() automatically refits models REML performing LRT, test_likelihoodratio() checks comparison based REML fits indeed valid, uses REML default (else, ML default). Set estimator argument explicitly override default behaviour.","code":""},{"path":"https://easystats.github.io/performance/reference/test_performance.html","id":"tests-description","dir":"Reference","previous_headings":"","what":"Tests Description","title":"Test if models are different — test_bf","text":"Bayes factor Model Comparison - test_bf(): models fit data, returned BF shows Bayes Factor (see bayestestR::bayesfactor_models()) model reference model (depends whether models nested ). Check vignette details. Wald's F-Test - test_wald(): Wald test rough approximation Likelihood Ratio Test. However, applicable LRT: can often run Wald test situations test can run. Importantly, test makes statistical sense models nested. Note: test also available base R anova() function. returns F-value column statistic associated p-value. Likelihood Ratio Test (LRT) - test_likelihoodratio(): LRT tests model better (likely) explanation data. 
Likelihood-Ratio-Test (LRT) gives usually somewhat close results (equivalent) Wald test , similarly, makes sense nested models. However, maximum likelihood tests make stronger assumptions method moments tests like F-test, turn efficient. Agresti (1990) suggests use LRT instead Wald test small sample sizes (30) parameters large. Note: regression models, similar anova(..., test=\"LRT\") (models) lmtest::lrtest(...), depending estimator argument. lavaan models (SEM, CFA), function calls lavaan::lavTestLRT(). models transformed response variables (like log(x) sqrt(x)), logLik() returns wrong log-likelihood. However, test_likelihoodratio() calls insight::get_loglikelihood() check_response=TRUE, returns corrected log-likelihood value models transformed response variables. Furthermore, since LRT accepts nested models (.e. models differ fixed effects), computed log-likelihood always based ML estimator, REML fits. Vuong's Test - test_vuong(): Vuong's (1989) test can used nested non-nested models, actually consists two tests. Test Distinguishability (Omega2 column associated p-value) indicates whether models can possibly distinguished basis observed data. p-value significant, means models distinguishable. Robust Likelihood Test (LR column associated p-value) indicates whether model fits better reference model. models nested, test works robust LRT. code function adapted nonnest2 package, credit go authors.","code":""},{"path":"https://easystats.github.io/performance/reference/test_performance.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Test if models are different — test_bf","text":"Vuong, Q. H. (1989). Likelihood ratio tests model selection non-nested hypotheses. Econometrica, 57, 307-333. Merkle, E. C., , D., & Preacher, K. (2016). Testing non-nested structural equation models. 
Psychological Methods, 21, 151-163.","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/reference/test_performance.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Test if models are different — test_bf","text":"","code":"# Nested Models # ------------- m1 <- lm(Sepal.Length ~ Petal.Width, data = iris) m2 <- lm(Sepal.Length ~ Petal.Width + Species, data = iris) m3 <- lm(Sepal.Length ~ Petal.Width * Species, data = iris) test_performance(m1, m2, m3) #> Name | Model | BF | Omega2 | p (Omega2) | LR | p (LR) #> ------------------------------------------------------------ #> m1 | lm | | | | | #> m2 | lm | 0.007 | 9.54e-04 | 0.935 | 0.15 | 0.919 #> m3 | lm | 0.037 | 0.02 | 0.081 | 3.41 | 0.099 #> Models were detected as nested (in terms of fixed parameters) and are compared in sequential order. test_bf(m1, m2, m3) #> Bayes Factors for Model Comparison #> #> Model BF #> [m2] Petal.Width + Species 0.007 #> [m3] Petal.Width * Species 2.64e-04 #> #> * Against Denominator: [m1] Petal.Width #> * Bayes Factor Type: BIC approximation test_wald(m1, m2, m3) # Equivalent to anova(m1, m2, m3) #> Name | Model | df | df_diff | F | p #> ------------------------------------------- #> m1 | lm | 148 | | | #> m2 | lm | 146 | 2.00 | 0.08 | 0.927 #> m3 | lm | 144 | 2.00 | 1.66 | 0.195 #> Models were detected as nested (in terms of fixed parameters) and are compared in sequential order. 
# Equivalent to lmtest::lrtest(m1, m2, m3) test_likelihoodratio(m1, m2, m3, estimator = \"ML\") #> # Likelihood-Ratio-Test (LRT) for Model Comparison (ML-estimator) #> #> Name | Model | df | df_diff | Chi2 | p #> ------------------------------------------ #> m1 | lm | 3 | | | #> m2 | lm | 5 | 2 | 0.15 | 0.926 #> m3 | lm | 7 | 2 | 3.41 | 0.182 # Equivalent to anova(m1, m2, m3, test='LRT') test_likelihoodratio(m1, m2, m3, estimator = \"OLS\") #> # Likelihood-Ratio-Test (LRT) for Model Comparison (OLS-estimator) #> #> Name | Model | df | df_diff | Chi2 | p #> ------------------------------------------ #> m1 | lm | 3 | | | #> m2 | lm | 5 | 2 | 0.15 | 0.927 #> m3 | lm | 7 | 2 | 3.31 | 0.191 if (require(\"CompQuadForm\")) { test_vuong(m1, m2, m3) # nonnest2::vuongtest(m1, m2, nested=TRUE) # Non-nested Models # ----------------- m1 <- lm(Sepal.Length ~ Petal.Width, data = iris) m2 <- lm(Sepal.Length ~ Petal.Length, data = iris) m3 <- lm(Sepal.Length ~ Species, data = iris) test_performance(m1, m2, m3) test_bf(m1, m2, m3) test_vuong(m1, m2, m3) # nonnest2::vuongtest(m1, m2) } #> Loading required package: CompQuadForm #> Name | Model | Omega2 | p (Omega2) | LR | p (LR) #> --------------------------------------------------- #> m1 | lm | | | | #> m2 | lm | 0.19 | < .001 | -4.57 | < .001 #> m3 | lm | 0.12 | < .001 | 2.51 | 0.006 #> Each model is compared to m1. # Tweak the output # ---------------- test_performance(m1, m2, m3, include_formula = TRUE) #> Name | Model | BF | Omega2 | p (Omega2) | LR | p (LR) #> --------------------------------------------------------------------------------------- #> m1 | lm(Sepal.Length ~ Petal.Width) | | | | | #> m2 | lm(Sepal.Length ~ Petal.Length) | > 1000 | 0.19 | < .001 | -4.57 | < .001 #> m3 | lm(Sepal.Length ~ Species) | < 0.001 | 0.12 | < .001 | 2.51 | 0.006 #> Each model is compared to m1. 
# SEM / CFA (lavaan objects) # -------------------------- # Lavaan Models if (require(\"lavaan\")) { structure <- \" visual =~ x1 + x2 + x3 textual =~ x4 + x5 + x6 speed =~ x7 + x8 + x9 visual ~~ textual + speed \" m1 <- lavaan::cfa(structure, data = HolzingerSwineford1939) structure <- \" visual =~ x1 + x2 + x3 textual =~ x4 + x5 + x6 speed =~ x7 + x8 + x9 visual ~~ 0 * textual + speed \" m2 <- lavaan::cfa(structure, data = HolzingerSwineford1939) structure <- \" visual =~ x1 + x2 + x3 textual =~ x4 + x5 + x6 speed =~ x7 + x8 + x9 visual ~~ 0 * textual + 0 * speed \" m3 <- lavaan::cfa(structure, data = HolzingerSwineford1939) test_likelihoodratio(m1, m2, m3) # Different Model Types # --------------------- if (require(\"lme4\") && require(\"mgcv\")) { m1 <- lm(Sepal.Length ~ Petal.Length + Species, data = iris) m2 <- lmer(Sepal.Length ~ Petal.Length + (1 | Species), data = iris) m3 <- gam(Sepal.Length ~ s(Petal.Length, by = Species) + Species, data = iris) test_performance(m1, m2, m3) } } #> Loading required package: mgcv #> This is mgcv 1.9-1. For overview type 'help(\"mgcv-package\")'. #> #> Attaching package: ‘mgcv’ #> The following objects are masked from ‘package:brms’: #> #> s, t2 #> The following object is masked from ‘package:mclust’: #> #> mvn #> Name | Model | BF #> ------------------------ #> m1 | lm | #> m2 | lmerMod | < 0.001 #> m3 | gam | 0.038 #> Each model is compared to m1."},{"path":[]},{"path":"https://easystats.github.io/performance/news/index.html","id":"breaking-changes-0-12-5","dir":"Changelog","previous_headings":"","what":"Breaking changes","title":"performance 0.12.5","text":"check_outliers() method = \"optics\" now returns refined cluster selection, passing optics_xi argument dbscan::extractXi(). Deprecated arguments alias-function-names removed. Argument names check_model() refer plot-aesthetics (like dot_size) now harmonized across easystats packages, meaning renamed. now follow pattern aesthetic_type, e.g. 
size_dot (instead dot_size).","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"changes-0-12-5","dir":"Changelog","previous_headings":"","what":"Changes","title":"performance 0.12.5","text":"Increased accuracy check_convergence() glmmTMB models. r2() r2_mcfadden() now support beta-binomial (non-mixed) models package glmmTMB. .numeric() resp. .double() method objects class performance_roc added. Improved documentation performance_roc().","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"bug-fixes-0-12-5","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"performance 0.12.5","text":"check_outliers() warn numeric variables found response variable numeric, relevant predictors . check_collinearity() work glmmTMB models zero-inflation component set ~0.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance-0124","dir":"Changelog","previous_headings":"","what":"performance 0.12.4","title":"performance 0.12.4","text":"CRAN release: 2024-10-18","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"changes-0-12-4","dir":"Changelog","previous_headings":"","what":"Changes","title":"performance 0.12.4","text":"check_dag() now also checks colliders, suggests removing printed output. 
Minor revisions printed output check_dag().","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"bug-fixes-0-12-4","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"performance 0.12.4","text":"Fixed failing tests broke due changes latest glmmTMB update.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance-0123","dir":"Changelog","previous_headings":"","what":"performance 0.12.3","title":"performance 0.12.3","text":"CRAN release: 2024-09-02","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"new-functions-0-12-3","dir":"Changelog","previous_headings":"","what":"New functions","title":"performance 0.12.3","text":"check_dag(), check DAGs correct adjustment sets.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"changes-0-12-3","dir":"Changelog","previous_headings":"","what":"Changes","title":"performance 0.12.3","text":"check_heterogeneity_bias() gets nested argument. Furthermore, can specify one variable, meaning nested cross-classified model designs can also tested heterogeneity bias.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance-0122","dir":"Changelog","previous_headings":"","what":"performance 0.12.2","title":"performance 0.12.2","text":"CRAN release: 2024-07-18 Patch release, ensure performance runs older version datawizard Mac OSX R (old-release).","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance-0121","dir":"Changelog","previous_headings":"","what":"performance 0.12.1","title":"performance 0.12.1","text":"CRAN release: 2024-07-15","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"general-0-12-1","dir":"Changelog","previous_headings":"","what":"General","title":"performance 0.12.1","text":"icc() r2_nakagawa() get null_model argument. 
can useful computing R2 ICC mixed models, internal computation null model fails, already fit null model want save time. icc() r2_nakagawa() get approximation argument indicating approximation method distribution-specific (residual) variance. See Nakagawa et al. 2017 details. icc() r2_nakagawa() get model_component argument indicating component zero-inflation hurdle models. performance_rmse() (resp. rmse()) can now compute analytical bootstrapped confidence intervals. function gains following new arguments: ci, ci_method iterations. New function r2_ferrari() compute Ferrari & Cribari-Neto’s R2 generalized linear models, particular beta-regression. Improved documentation functions.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"bug-fixes-0-12-1","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"performance 0.12.1","text":"Fixed issue check_model() model contained transformed response variable named like valid R function name (e.g., lm(log(lapply) ~ x), data contained variable named lapply). Fixed issue check_predictions() linear models response transformed ratio (e.g. lm(succes/trials ~ x)). Fixed issue r2_bayes() mixed models rstanarm.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance-0120","dir":"Changelog","previous_headings":"","what":"performance 0.12.0","title":"performance 0.12.0","text":"CRAN release: 2024-06-08","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"breaking-0-12-0","dir":"Changelog","previous_headings":"","what":"Breaking","title":"performance 0.12.0","text":"Aliases posterior_predictive_check() check_posterior_predictions() check_predictions() deprecated. Arguments named group group_by deprecated future release. Please use instead. 
affects check_heterogeneity_bias() performance.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"general-0-12-0","dir":"Changelog","previous_headings":"","what":"General","title":"performance 0.12.0","text":"Improved documentation new vignettes added. check_model() gets base_size argument, set base font size plots. check_predictions() stanreg brmsfit models now returns plots usual style models longer returns plots bayesplot::pp_check(). Updated trained model used prediction distributions check_distribution().","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"bug-fixes-0-12-0","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"performance 0.12.0","text":"check_model() now falls back normal Q-Q plots model supported DHARMa package simulated residuals calculated.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance-0110","dir":"Changelog","previous_headings":"","what":"performance 0.11.0","title":"performance 0.11.0","text":"CRAN release: 2024-03-22","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"new-supported-models-0-11-0","dir":"Changelog","previous_headings":"","what":"New supported models","title":"performance 0.11.0","text":"Rudimentary support models class serp package serp.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"new-functions-0-11-0","dir":"Changelog","previous_headings":"","what":"New functions","title":"performance 0.11.0","text":"simulate_residuals() check_residuals(), simulate check residuals generalized linear (mixed) models. Simulating residuals based DHARMa package, objects returned simulate_residuals() inherit DHARMa class, thus can used functions DHARMa package. However, also implementations performance package, check_overdispersion(), check_zeroinflation(), check_outliers() check_model(). Plots check_model() improved. 
Q-Q plots now based simulated residuals DHARMa package non-Gaussian models, thus providing accurate informative plots. half-normal QQ plot generalized linear models can still obtained setting new argument residual_type = \"normal\". Following functions now support simulated residuals (simulate_residuals()) resp. objects returned DHARMa::simulateResiduals(): check_overdispersion() check_zeroinflation() check_outliers() check_model()","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"general-0-11-0","dir":"Changelog","previous_headings":"","what":"General","title":"performance 0.11.0","text":"Improved error messages check_model() QQ-plots created. check_distribution() stable possibly sparse data.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"bug-fixes-0-11-0","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"performance 0.11.0","text":"Fixed issue check_normality() t-tests. Fixed issue check_itemscale() data frame inputs, factor_index named vector.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance-0109","dir":"Changelog","previous_headings":"","what":"performance 0.10.9","title":"performance 0.10.9","text":"CRAN release: 2024-02-17","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"changes-0-10-9","dir":"Changelog","previous_headings":"","what":"Changes","title":"performance 0.10.9","text":"r2() models class glmmTMB without random effects now returns correct r-squared value non-mixed models. check_itemscale() now also accepts data frames input. case, factor_index must specified, must numeric vector length number columns x, element index factor respective column x. check_itemscale() gets print_html() method. Clarification documentation estimator argument performance_aic(). Improved plots overdispersion-checks negative-binomial models package glmmTMB (affects check_overdispersion() check_model()). 
Improved detection rates singularity check_singularity() models package glmmTMB. model class glmmTMB, deviance residuals now used check_model() plot. Improved (better understand) error messages check_model(), check_collinearity() check_outliers() models non-numeric response variables. r2_kullback() now gives informative error non-supported models.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"bug-fixes-0-10-9","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"performance 0.10.9","text":"Fixed issue binned_residuals() models binary outcome, rare occasions empty bins occur. performance_score() longer fail models scoring rules can’t calculated. Instead, informative message returned. check_outliers() now properly accept percentage_central argument using \"mcd\" method. Fixed edge cases check_collinearity() check_outliers() models response variables classes Date, POSIXct, POSIXlt difftime. Fixed issue check_model() models package quantreg.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance-0108","dir":"Changelog","previous_headings":"","what":"performance 0.10.8","title":"performance 0.10.8","text":"CRAN release: 2023-10-30","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"changes-0-10-8","dir":"Changelog","previous_headings":"","what":"Changes","title":"performance 0.10.8","text":"Changed behaviour check_predictions() models binomial family, get comparable plots different ways outcome specification. Now, outcome proportion, defined matrix trials successes, produced plots (models , ).","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"bug-fixes-0-10-8","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"performance 0.10.8","text":"Fixed CRAN check errors. 
Fixed issue binned_residuals() models binomial family, outcome proportion.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance-0107","dir":"Changelog","previous_headings":"","what":"performance 0.10.7","title":"performance 0.10.7","text":"CRAN release: 2023-10-27","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"breaking-changes-0-10-7","dir":"Changelog","previous_headings":"","what":"Breaking changes","title":"performance 0.10.7","text":"binned_residuals() gains new arguments control residuals used test, well different options calculate confidence intervals (namely, ci_type, residuals, ci iterations). default values compute binned residuals changed. Default residuals now “deviance” residuals (longer “response” residuals). Default confidence intervals now “exact” intervals (longer based Gaussian approximation). Use ci_type = \"gaussian\" residuals = \"response\" get old defaults.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"changes-to-functions-0-10-7","dir":"Changelog","previous_headings":"","what":"Changes to functions","title":"performance 0.10.7","text":"binned_residuals() - like check_model() - gains show_dots argument show hide data points lie inside error bounds. particular useful models many observations, generating plot slow.","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/news/index.html","id":"general-0-10-6","dir":"Changelog","previous_headings":"","what":"General","title":"performance 0.10.6","text":"Support nestedLogit models.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"changes-to-functions-0-10-6","dir":"Changelog","previous_headings":"","what":"Changes to functions","title":"performance 0.10.6","text":"check_outliers() method \"ics\" now detects number available cores parallel computing via \"mc.cores\" option. robust previous method, used parallel::detectCores(). 
Now set number cores via options(mc.cores = 4).","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"bug-fixes-0-10-6","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"performance 0.10.6","text":"Fixed issues check_model() models used data sets variables class \"haven_labelled\".","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance-0105","dir":"Changelog","previous_headings":"","what":"performance 0.10.5","title":"performance 0.10.5","text":"CRAN release: 2023-09-12","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"changes-to-functions-0-10-5","dir":"Changelog","previous_headings":"","what":"Changes to functions","title":"performance 0.10.5","text":"informative message test_*() functions “nesting” refers fixed effects parameters currently ignores random effects detecting nested models. check_outliers() \"ICS\" method now stable less likely fail. check_convergence() now works parsnip _glm models.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"bug-fixes-0-10-5","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"performance 0.10.5","text":"check_collinearity() work hurdle- zero-inflated models package pscl model explicitly defined formula zero-inflation model.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance-0104","dir":"Changelog","previous_headings":"","what":"performance 0.10.4","title":"performance 0.10.4","text":"CRAN release: 2023-06-02","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"changes-to-functions-0-10-4","dir":"Changelog","previous_headings":"","what":"Changes to functions","title":"performance 0.10.4","text":"icc() r2_nakagawa() gain ci_method argument, either calculate confidence intervals using boot::boot() (instead lme4::bootMer()) ci_method = \"boot\" analytical confidence intervals 
(ci_method = \"analytical\"). Use ci_method = \"boot\" default method fails compute confidence intervals use ci_method = \"analytical\" bootstrapped intervals calculated . Note default computation method preferred. check_predictions() accepts bandwidth argument (smoothing bandwidth), passed plot() methods density-estimation. check_predictions() gains type argument, passed plot() method change plot-type (density discrete dots/intervals). default, type set \"default\" models without discrete outcomes, else type = \"discrete_interval\". performance_accuracy() now includes confidence intervals, reports default (standard error longer reported, still included).","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"bug-fixes-0-10-4","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"performance 0.10.4","text":"Fixed issue check_collinearity() fixest models used () create interactions formulas.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance-0103","dir":"Changelog","previous_headings":"","what":"performance 0.10.3","title":"performance 0.10.3","text":"CRAN release: 2023-04-07","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"new-functions-0-10-3","dir":"Changelog","previous_headings":"","what":"New functions","title":"performance 0.10.3","text":"item_discrimination(), calculate discrimination scale’s items.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"support-for-new-models-0-10-3","dir":"Changelog","previous_headings":"","what":"Support for new models","title":"performance 0.10.3","text":"model_performance(), check_overdispersion(), check_outliers() r2() now work objects class fixest_multi (@etiennebacher, #554). model_performance() can now return “Weak instruments” statistic p-value models class ivreg metrics = \"weak_instruments\" (@etiennebacher, #560). 
Support mclogit models.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"changes-to-functions-0-10-3","dir":"Changelog","previous_headings":"","what":"Changes to functions","title":"performance 0.10.3","text":"test_*() functions now automatically fit null-model one model objects provided testing multiple models. Warnings model_performance() unsupported objects class BFBayesFactor can now suppressed verbose = FALSE. check_predictions() longer fails issues re_formula = NULL mixed models, instead gives warning tries compute posterior predictive checks re_formula = NA. check_outliers() now also works meta-analysis models packages metafor meta. plot() performance::check_model() longer produces normal QQ plot GLMs. Instead, now shows half-normal QQ plot absolute value standardized deviance residuals.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"bug-fixes-0-10-3","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"performance 0.10.3","text":"Fixed issue print() method check_collinearity(), mix correct order parameters.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance-0102","dir":"Changelog","previous_headings":"","what":"performance 0.10.2","title":"performance 0.10.2","text":"CRAN release: 2023-01-12","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"general-0-10-2","dir":"Changelog","previous_headings":"","what":"General","title":"performance 0.10.2","text":"Revised usage insight::get_data() meet forthcoming changes insight package.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"changes-to-functions-0-10-2","dir":"Changelog","previous_headings":"","what":"Changes to functions","title":"performance 0.10.2","text":"check_collinearity() now accepts NULL ci 
argument.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"bug-fixes-0-10-2","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"performance 0.10.2","text":"Fixed issue item_difficulty() detecting maximum values item set. Furthermore, item_difficulty() gets maximum_value argument case item contains maximum value due missings.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance-0101","dir":"Changelog","previous_headings":"","what":"performance 0.10.1","title":"performance 0.10.1","text":"CRAN release: 2022-11-25","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"general-0-10-1","dir":"Changelog","previous_headings":"","what":"General","title":"performance 0.10.1","text":"Minor improvements documentation.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"changes-to-functions-0-10-1","dir":"Changelog","previous_headings":"","what":"Changes to functions","title":"performance 0.10.1","text":"icc() r2_nakagawa() get ci iterations arguments, compute confidence intervals ICC resp. R2, based bootstrapped sampling. r2() gets ci, compute (analytical) confidence intervals R2. model underlying check_distribution() now also trained detect cauchy, half-cauchy inverse-gamma distributions. model_performance() now allows include ICC Bayesian models.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"bug-fixes-0-10-1","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"performance 0.10.1","text":"verbose didn’t work r2_bayes() BFBayesFactor objects. Fixed issues check_model() models convergence issues lead NA values residuals. Fixed bug check_outliers whereby passing multiple elements threshold list generated error (#496). test_wald() now warns user inappropriate F test calls test_likelihoodratio() binomial models. 
Fixed edge case usage parallel::detectCores() check_outliers().","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance-0100","dir":"Changelog","previous_headings":"","what":"performance 0.10.0","title":"performance 0.10.0","text":"CRAN release: 2022-10-03","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"breaking-change-0-10-0","dir":"Changelog","previous_headings":"","what":"Breaking Change","title":"performance 0.10.0","text":"minimum needed R version bumped 3.6. alias performance_lrt() removed. Use test_lrt() resp. test_likelihoodratio().","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"new-functions-0-10-0","dir":"Changelog","previous_headings":"","what":"New functions","title":"performance 0.10.0","text":"Following functions moved package parameters performance: check_sphericity_bartlett(), check_kmo(), check_factorstructure() check_clusterstructure().","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"changes-to-functions-0-10-0","dir":"Changelog","previous_headings":"","what":"Changes to functions","title":"performance 0.10.0","text":"check_normality(), check_homogeneity() check_symmetry() now works htest objects. Print method check_outliers() changed significantly: now states methods, thresholds, variables used, reports outliers per variable (univariate methods) well observation flagged several variables/methods. Includes new optional ID argument add along row number output (@rempsyc #443). check_outliers() now uses conventional outlier thresholds. 
IQR confidence interval methods now gain improved distance scores continuous instead discrete.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"bug-fixes-0-10-0","dir":"Changelog","previous_headings":"","what":"Bug Fixes","title":"performance 0.10.0","text":"Fixed wrong z-score values using vector instead data frame check_outliers() (#476). Fixed cronbachs_alpha() objects parameters::principal_component().","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance-092","dir":"Changelog","previous_headings":"","what":"performance 0.9.2","title":"performance 0.9.2","text":"CRAN release: 2022-08-10","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"general-0-9-2","dir":"Changelog","previous_headings":"","what":"General","title":"performance 0.9.2","text":"print() methods model_performance() compare_performance() get layout argument, can \"horizontal\" (default) \"vertical\", switch layout printed table. Improved speed performance check_model() performance_*() functions. Improved support models class geeglm.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"changes-to-functions-0-9-2","dir":"Changelog","previous_headings":"","what":"Changes to functions","title":"performance 0.9.2","text":"check_model() gains show_dots argument, show hide data points. 
particular useful models many observations, generating plot slow.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"bug-fixes-0-9-2","dir":"Changelog","previous_headings":"","what":"Bug Fixes","title":"performance 0.9.2","text":"Fixes wrong column names model_performance() output kmeans objects (#453)","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance-091","dir":"Changelog","previous_headings":"","what":"performance 0.9.1","title":"performance 0.9.1","text":"CRAN release: 2022-06-20","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"breaking-0-9-1","dir":"Changelog","previous_headings":"","what":"Breaking","title":"performance 0.9.1","text":"formerly “conditional” ICC icc() now named “unadjusted” ICC.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"new-functions-0-9-1","dir":"Changelog","previous_headings":"","what":"New functions","title":"performance 0.9.1","text":"performance_cv() cross-validated model performance.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"support-for-new-models-0-9-1","dir":"Changelog","previous_headings":"","what":"Support for new models","title":"performance 0.9.1","text":"Added support models package estimator.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"changes-to-functions-0-9-1","dir":"Changelog","previous_headings":"","what":"Changes to functions","title":"performance 0.9.1","text":"check_overdispersion() gets plot() method. check_outliers() now also works models classes gls lme. consequence, check_model() longer fail models. check_collinearity() now includes confidence intervals VIFs tolerance values. model_performance() now also includes within-subject R2 measures, applicable. Improved handling random effects check_normality() (.e. 
argument effects = \"random\").","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"bug-fixes-0-9-1","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"performance 0.9.1","text":"check_predictions() work GLMs matrix-response. check_predictions() work logistic regression models (.e. models binary response) package glmmTMB item_split_half() work input data frame matrix contained two columns. Fixed wrong computation BIC model_performance() models transformed response values. Fixed issues check_model() GLMs matrix-response.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance-090","dir":"Changelog","previous_headings":"","what":"performance 0.9.0","title":"performance 0.9.0","text":"CRAN release: 2022-03-30","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"new-functions-0-9-0","dir":"Changelog","previous_headings":"","what":"New functions","title":"performance 0.9.0","text":"check_concurvity(), returns GAM concurvity measures (comparable collinearity checks).","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/news/index.html","id":"check-functions-0-9-0","dir":"Changelog","previous_headings":"Changes to functions","what":"Check functions","title":"performance 0.9.0","text":"check_predictions(), check_collinearity() check_outliers() now support (mixed) regression models BayesFactor. check_zeroinflation() now also works lme4::glmer.nb() models. check_collinearity() better supports GAM models.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"test-functions-0-9-0","dir":"Changelog","previous_headings":"Changes to functions","what":"Test functions","title":"performance 0.9.0","text":"test_performance() now calls test_lrt() test_wald() instead test_vuong() package CompQuadForm missing. 
test_performance() test_lrt() now compute corrected log-likelihood models transformed response variables (log- sqrt-transformations) passed functions.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"model-performance-functions-0-9-0","dir":"Changelog","previous_headings":"Changes to functions","what":"Model performance functions","title":"performance 0.9.0","text":"performance_aic() now corrects AIC value models transformed response variables. also means comparing models using compare_performance() allows comparisons AIC values models without transformed response variables. Also, model_performance() now corrects AIC BIC values models transformed response variables.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"plotting-and-printing-0-9-0","dir":"Changelog","previous_headings":"Changes to functions","what":"Plotting and printing","title":"performance 0.9.0","text":"print() method binned_residuals() now prints short summary results (longer generates plot). plot() method added generate plots. plot() output check_model() revised: binomial models, constant variance plot omitted, binned residuals plot included. density-plot showed normality residuals replaced posterior predictive check plot.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"bug-fixes-0-9-0","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"performance 0.9.0","text":"model_performance() models lme4 report AICc requested. 
r2_nakagawa() messed order group levels by_group TRUE.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance-080","dir":"Changelog","previous_headings":"","what":"performance 0.8.0","title":"performance 0.8.0","text":"CRAN release: 2021-10-01","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"breaking-changes-0-8-0","dir":"Changelog","previous_headings":"","what":"Breaking Changes","title":"performance 0.8.0","text":"ci-level r2() Bayesian models now defaults 0.95, line latest changes bayestestR package. S3-method dispatch pp_check() revised, avoid problems bayesplot package, generic located.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"general-0-8-0","dir":"Changelog","previous_headings":"","what":"General","title":"performance 0.8.0","text":"Minor revisions wording messages check-functions. posterior_predictive_check() check_predictions() added aliases pp_check().","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"new-functions-0-8-0","dir":"Changelog","previous_headings":"","what":"New functions","title":"performance 0.8.0","text":"check_multimodal() check_heterogeneity_bias(). functions removed parameters packages future.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"changes-to-functions-0-8-0","dir":"Changelog","previous_headings":"","what":"Changes to functions","title":"performance 0.8.0","text":"r2() linear models can now compute confidence intervals, via ci argument.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"bug-fixes-0-8-0","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"performance 0.8.0","text":"Fixed issues check_model() Bayesian models. 
Fixed issue pp_check() models transformed response variables, now predictions observed response values (transformed) scale.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance-073","dir":"Changelog","previous_headings":"","what":"performance 0.7.3","title":"performance 0.7.3","text":"CRAN release: 2021-07-21","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"changes-to-functions-0-7-3","dir":"Changelog","previous_headings":"","what":"Changes to functions","title":"performance 0.7.3","text":"check_outliers() new ci (hdi, eti) method filter based Confidence/Credible intervals. compare_performance() now also accepts list model objects. performance_roc() now also works binomial models classes glm. Several functions, like icc() r2_nakagawa(), now .data.frame() method. check_collinearity() now correctly handles objects forthcoming afex update.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance-072","dir":"Changelog","previous_headings":"","what":"performance 0.7.2","title":"performance 0.7.2","text":"CRAN release: 2021-05-17","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"new-functions-0-7-2","dir":"Changelog","previous_headings":"","what":"New functions","title":"performance 0.7.2","text":"performance_mae() calculate mean absolute error.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"bug-fixes-0-7-2","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"performance 0.7.2","text":"Fixed issue \"data length differs size matrix\" warnings examples forthcoming R 4.2. Fixed issue check_normality() models sample size larger 5,000 observations. Fixed issue check_model() glmmTMB models.
Fixed issue check_collinearity() glmmTMB models zero-inflation, zero-inflated model intercept-model.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance-071","dir":"Changelog","previous_headings":"","what":"performance 0.7.1","title":"performance 0.7.1","text":"CRAN release: 2021-04-09","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"new-supported-models-0-7-1","dir":"Changelog","previous_headings":"","what":"New supported models","title":"performance 0.7.1","text":"Add support model_fit (tidymodels). model_performance supports kmeans models.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"general-0-7-1","dir":"Changelog","previous_headings":"","what":"General","title":"performance 0.7.1","text":"Give informative warning r2_bayes() BFBayesFactor objects can’t calculated. Several check_*() functions now return informative messages invalid model types input. r2() supports mhurdle (mhurdle) models. Added print() methods classes r2(). performance_roc() performance_accuracy() functions unfortunately spelling mistakes output columns: Sensitivity called Sensivity Specificity called Specifity. think understandable mistakes :-)","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/news/index.html","id":"check_model-0-7-1","dir":"Changelog","previous_headings":"Changes to functions","what":"check_model()","title":"performance 0.7.1","text":"check_model() gains arguments, customize plot appearance. Added option detrend QQ/PP plots check_model().","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"model_performance-0-7-1","dir":"Changelog","previous_headings":"Changes to functions","what":"model_performance()","title":"performance 0.7.1","text":"metrics argument model_performance() compare_performance() gains \"AICc\" option, also compute 2nd order AIC. 
\"R2_adj\" now explicit option metrics argument model_performance() compare_performance().","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"other-functions-0-7-1","dir":"Changelog","previous_headings":"Changes to functions","what":"Other functions","title":"performance 0.7.1","text":"default-method r2() now tries compute r-squared models specific r2()-method yet, using following formula: 1-sum((y-y_hat)^2)/sum((y-y_bar)^2)) column name Parameter check_collinearity() now appropriately named Term.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"bug-fixes-0-7-1","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"performance 0.7.1","text":"test_likelihoodratio() now correctly sorts models identical fixed effects part, different model parts (like zero-inflation). Fixed incorrect computation models inverse-Gaussian families, Gaussian families fitted glm(). Fixed issue performance_roc() models outcome 0/1 coded. Fixed issue performance_accuracy() logistic regression models method = \"boot\". cronbachs_alpha() work matrix-objects, stated docs. 
now .","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance-070","dir":"Changelog","previous_headings":"","what":"performance 0.7.0","title":"performance 0.7.0","text":"CRAN release: 2021-02-03","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"general-0-7-0","dir":"Changelog","previous_headings":"","what":"General","title":"performance 0.7.0","text":"Roll-back R dependency R >= 3.4.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"breaking-changes-0-7-0","dir":"Changelog","previous_headings":"","what":"Breaking Changes","title":"performance 0.7.0","text":"compare_performance() doesn’t return models’ Bayes Factors, now returned test_performance() test_bf().","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"new-functions-to-test-or-compare-models-0-7-0","dir":"Changelog","previous_headings":"","what":"New functions to test or compare models","title":"performance 0.7.0","text":"test_vuong(), compare models using Vuong’s (1989) Test. test_bf(), compare models using Bayes factors. test_likelihoodratio() alias performance_lrt(). test_wald(), rough approximation LRT. test_performance(), run relevant appropriate tests based input.","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance_lrt-0-7-0","dir":"Changelog","previous_headings":"Changes to functions","what":"performance_lrt()","title":"performance 0.7.0","text":"performance_lrt() get alias test_likelihoodratio(). return AIC/BIC now (related LRT per se can easily obtained functions). Now contains column difference degrees freedom models. 
Fixed column names consistency.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"model_performance-0-7-0","dir":"Changelog","previous_headings":"Changes to functions","what":"model_performance()","title":"performance 0.7.0","text":"Added diagnostics models class ivreg.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"other-functions-0-7-0","dir":"Changelog","previous_headings":"Changes to functions","what":"Other functions","title":"performance 0.7.0","text":"Revised computation performance_mse(), ensure ’s always based response residuals. performance_aic() now robust.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"bug-fixes-0-7-0","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"performance 0.7.0","text":"Fixed issue icc() variance_decomposition() multivariate response models, model parts contained random effects. Fixed issue compare_performance() duplicated rows. check_collinearity() longer breaks models rank deficient model matrix, gives warning instead. Fixed issue check_homogeneity() method = \"auto\", wrongly tested response variable, residuals. 
Fixed issue check_homogeneity() edge cases predictor non-syntactic names.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance-061","dir":"Changelog","previous_headings":"","what":"performance 0.6.1","title":"performance 0.6.1","text":"CRAN release: 2020-12-09","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"general-0-6-1","dir":"Changelog","previous_headings":"","what":"General","title":"performance 0.6.1","text":"check_collinearity() gains verbose argument, toggle warnings messages.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"bug-fixes-0-6-1","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"performance 0.6.1","text":"Fixed examples, now using suggested packages conditionally.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance-060","dir":"Changelog","previous_headings":"","what":"performance 0.6.0","title":"performance 0.6.0","text":"CRAN release: 2020-12-01","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"general-0-6-0","dir":"Changelog","previous_headings":"","what":"General","title":"performance 0.6.0","text":"model_performance() now supports margins, gamlss, stanmvreg semLme.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"new-functions-0-6-0","dir":"Changelog","previous_headings":"","what":"New functions","title":"performance 0.6.0","text":"r2_somers(), compute Somers’ Dxy rank-correlation R2-measure logistic regression models. display(), print output package-functions different formats. 
print_md() alias display(format = \"markdown\").","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/news/index.html","id":"model_performance-0-6-0","dir":"Changelog","previous_headings":"Changes to functions","what":"model_performance()","title":"performance 0.6.0","text":"model_performance() now robust doesn’t fail index computed. Instead, returns indices possible calculate. model_performance() gains default-method catches model objects previously supported. model object also supported default-method, warning given. model_performance() metafor-models now includes degrees freedom Cochran’s Q.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"other-functions-0-6-0","dir":"Changelog","previous_headings":"Changes to functions","what":"Other functions","title":"performance 0.6.0","text":"performance_mse() performance_rmse() now always try return (R)MSE response scale. performance_accuracy() now accepts types linear logistic regression models, even class lm glm. performance_roc() now accepts types logistic regression models, even class glm. r2() mixed models r2_nakagawa() gain tolerance-argument, set tolerance level singularity checks computing random effect variances conditional r-squared.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"bug-fixes-0-6-0","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"performance 0.6.0","text":"Fixed issue icc() introduced last update make lme-models fail. 
Fixed issue performance_roc() models factors response.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance-051","dir":"Changelog","previous_headings":"","what":"performance 0.5.1","title":"performance 0.5.1","text":"CRAN release: 2020-10-29","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"breaking-changes-0-5-1","dir":"Changelog","previous_headings":"","what":"Breaking changes","title":"performance 0.5.1","text":"Column names model_performance() compare_performance() changed line easystats naming convention: LOGLOSS now Log_loss, SCORE_LOG Score_log SCORE_SPHERICAL now Score_spherical.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"new-functions-0-5-1","dir":"Changelog","previous_headings":"","what":"New functions","title":"performance 0.5.1","text":"r2_posterior() Bayesian models obtain posterior distributions R-squared.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"changes-to-functions-0-5-1","dir":"Changelog","previous_headings":"","what":"Changes to functions","title":"performance 0.5.1","text":"r2_bayes() works Bayesian models BayesFactor ( #143 ). model_performance() works Bayesian models BayesFactor ( #150 ). model_performance() now also includes residual standard deviation. Improved formatting Bayes factors compare_performance(). compare_performance() rank = TRUE doesn’t use BF values BIC present, prevent “double-dipping” BIC values (#144). method argument check_homogeneity() gains \"levene\" option, use Levene’s Test homogeneity.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"bug-fixes-0-5-1","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"performance 0.5.1","text":"Fix bug compare_performance() ... 
arguments function calls regression objects, instead direct function calls.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance-050","dir":"Changelog","previous_headings":"","what":"performance 0.5.0","title":"performance 0.5.0","text":"CRAN release: 2020-09-12","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"general-0-5-0","dir":"Changelog","previous_headings":"","what":"General","title":"performance 0.5.0","text":"r2() icc() support semLME models (package smicd). check_heteroscedasticity() now also work zero-inflated mixed models glmmTMB GLMMadaptive. check_outliers() now returns logical vector. Original numerical vector still accessible via .numeric().","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"new-functions-0-5-0","dir":"Changelog","previous_headings":"","what":"New functions","title":"performance 0.5.0","text":"pp_check() compute posterior predictive checks frequentist models.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"bug-fixes-0-5-0","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"performance 0.5.0","text":"Fixed issue incorrect labeling groups icc() by_group = TRUE. Fixed issue check_heteroscedasticity() mixed models sigma calculated straightforward way. Fixed issues check_zeroinflation() MASS::glm.nb().
Fixed CRAN check issues.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance-048","dir":"Changelog","previous_headings":"","what":"performance 0.4.8","title":"performance 0.4.8","text":"CRAN release: 2020-07-27","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"general-0-4-8","dir":"Changelog","previous_headings":"","what":"General","title":"performance 0.4.8","text":"Removed suggested packages removed CRAN.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"changes-to-functions-0-4-8","dir":"Changelog","previous_headings":"","what":"Changes to functions","title":"performance 0.4.8","text":"icc() now also computes “classical” ICC brmsfit models. former way calculating “ICC” brmsfit models now available new function called variance_decomposition().","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"bug-fixes-0-4-8","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"performance 0.4.8","text":"Fix issue new version bigutilsr check_outliers(). Fix issue model order performance_lrt().","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance-047","dir":"Changelog","previous_headings":"","what":"performance 0.4.7","title":"performance 0.4.7","text":"CRAN release: 2020-06-14","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"general-0-4-7","dir":"Changelog","previous_headings":"","what":"General","title":"performance 0.4.7","text":"Support models package mfx.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"changes-to-functions-0-4-7","dir":"Changelog","previous_headings":"","what":"Changes to functions","title":"performance 0.4.7","text":"model_performance.rma() now includes results heterogeneity test meta-analysis objects. check_normality() now also works mixed models (limitation studentized residuals used). 
check_normality() gets effects-argument mixed models, check random effects normality.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"bug-fixes-0-4-7","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"performance 0.4.7","text":"Fixed issue performance_accuracy() binomial models response variable non-numeric factor levels. Fixed issues performance_roc(), printed 1 - AUC instead AUC.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance-046","dir":"Changelog","previous_headings":"","what":"performance 0.4.6","title":"performance 0.4.6","text":"CRAN release: 2020-05-03","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"general-0-4-6","dir":"Changelog","previous_headings":"","what":"General","title":"performance 0.4.6","text":"Minor revisions model_performance() meet changes mlogit package. Support bayesx models.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"changes-to-functions-0-4-6","dir":"Changelog","previous_headings":"","what":"Changes to functions","title":"performance 0.4.6","text":"icc() gains by_group argument, compute ICCs per different group factors mixed models multiple levels cross-classified design. r2_nakagawa() gains by_group argument, compute explained variance different levels (following variance-reduction approach Hox 2010). performance_lrt() now works lavaan objects.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"bug-fixes-0-4-6","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"performance 0.4.6","text":"Fix issues functions models logical dependent variable. Fix bug check_itemscale(), caused multiple computations skewness statistics. 
Fix issues r2() gam models.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance-045","dir":"Changelog","previous_headings":"","what":"performance 0.4.5","title":"performance 0.4.5","text":"CRAN release: 2020-03-28","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"general-0-4-5","dir":"Changelog","previous_headings":"","what":"General","title":"performance 0.4.5","text":"model_performance() r2() now support rma-objects package metafor, mlm bife models.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"changes-to-functions-0-4-5","dir":"Changelog","previous_headings":"","what":"Changes to functions","title":"performance 0.4.5","text":"compare_performance() gets bayesfactor argument, include exclude Bayes factor model comparisons output. Added r2.aov().","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"bug-fixes-0-4-5","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"performance 0.4.5","text":"Fixed issue performance_aic() models package survey, returned three different AIC values. Now AIC value returned. Fixed issue check_collinearity() glmmTMB models zero-inflated formula one predictor. Fixed issue check_model() lme models. Fixed issue check_distribution() brmsfit models. Fixed issue check_heteroscedasticity() aov objects. 
Fixed issues lmrob glmrob objects.","code":""}] +[{"path":[]},{"path":"https://easystats.github.io/performance/CODE_OF_CONDUCT.html","id":"our-pledge","dir":"","previous_headings":"","what":"Our Pledge","title":"Contributor Covenant Code of Conduct","text":"members, contributors, leaders pledge make participation community harassment-free experience everyone, regardless age, body size, visible invisible disability, ethnicity, sex characteristics, gender identity expression, level experience, education, socio-economic status, nationality, personal appearance, race, caste, color, religion, sexual identity orientation. pledge act interact ways contribute open, welcoming, diverse, inclusive, healthy community.","code":""},{"path":"https://easystats.github.io/performance/CODE_OF_CONDUCT.html","id":"our-standards","dir":"","previous_headings":"","what":"Our Standards","title":"Contributor Covenant Code of Conduct","text":"Examples behavior contributes positive environment community include: Demonstrating empathy kindness toward people respectful differing opinions, viewpoints, experiences Giving gracefully accepting constructive feedback Accepting responsibility apologizing affected mistakes, learning experience Focusing best just us individuals, overall community Examples unacceptable behavior include: use sexualized language imagery, sexual attention advances kind Trolling, insulting derogatory comments, personal political attacks Public private harassment Publishing others’ private information, physical email address, without explicit permission conduct reasonably considered inappropriate professional setting","code":""},{"path":"https://easystats.github.io/performance/CODE_OF_CONDUCT.html","id":"enforcement-responsibilities","dir":"","previous_headings":"","what":"Enforcement Responsibilities","title":"Contributor Covenant Code of Conduct","text":"Community leaders responsible clarifying enforcing standards acceptable behavior take appropriate fair corrective action 
response behavior deem inappropriate, threatening, offensive, harmful. Community leaders right responsibility remove, edit, reject comments, commits, code, wiki edits, issues, contributions aligned Code Conduct, communicate reasons moderation decisions appropriate.","code":""},{"path":"https://easystats.github.io/performance/CODE_OF_CONDUCT.html","id":"scope","dir":"","previous_headings":"","what":"Scope","title":"Contributor Covenant Code of Conduct","text":"Code Conduct applies within community spaces, also applies individual officially representing community public spaces. Examples representing community include using official e-mail address, posting via official social media account, acting appointed representative online offline event.","code":""},{"path":"https://easystats.github.io/performance/CODE_OF_CONDUCT.html","id":"enforcement","dir":"","previous_headings":"","what":"Enforcement","title":"Contributor Covenant Code of Conduct","text":"Instances abusive, harassing, otherwise unacceptable behavior may reported community leaders responsible enforcement d.luedecke@uke.de. complaints reviewed investigated promptly fairly. community leaders obligated respect privacy security reporter incident.","code":""},{"path":"https://easystats.github.io/performance/CODE_OF_CONDUCT.html","id":"enforcement-guidelines","dir":"","previous_headings":"","what":"Enforcement Guidelines","title":"Contributor Covenant Code of Conduct","text":"Community leaders follow Community Impact Guidelines determining consequences action deem violation Code Conduct:","code":""},{"path":"https://easystats.github.io/performance/CODE_OF_CONDUCT.html","id":"id_1-correction","dir":"","previous_headings":"Enforcement Guidelines","what":"1. Correction","title":"Contributor Covenant Code of Conduct","text":"Community Impact: Use inappropriate language behavior deemed unprofessional unwelcome community. 
Consequence: private, written warning community leaders, providing clarity around nature violation explanation behavior inappropriate. public apology may requested.","code":""},{"path":"https://easystats.github.io/performance/CODE_OF_CONDUCT.html","id":"id_2-warning","dir":"","previous_headings":"Enforcement Guidelines","what":"2. Warning","title":"Contributor Covenant Code of Conduct","text":"Community Impact: violation single incident series actions. Consequence: warning consequences continued behavior. interaction people involved, including unsolicited interaction enforcing Code Conduct, specified period time. includes avoiding interactions community spaces well external channels like social media. Violating terms may lead temporary permanent ban.","code":""},{"path":"https://easystats.github.io/performance/CODE_OF_CONDUCT.html","id":"id_3-temporary-ban","dir":"","previous_headings":"Enforcement Guidelines","what":"3. Temporary Ban","title":"Contributor Covenant Code of Conduct","text":"Community Impact: serious violation community standards, including sustained inappropriate behavior. Consequence: temporary ban sort interaction public communication community specified period time. public private interaction people involved, including unsolicited interaction enforcing Code Conduct, allowed period. Violating terms may lead permanent ban.","code":""},{"path":"https://easystats.github.io/performance/CODE_OF_CONDUCT.html","id":"id_4-permanent-ban","dir":"","previous_headings":"Enforcement Guidelines","what":"4. Permanent Ban","title":"Contributor Covenant Code of Conduct","text":"Community Impact: Demonstrating pattern violation community standards, including sustained inappropriate behavior, harassment individual, aggression toward disparagement classes individuals. 
Consequence: permanent ban sort public interaction within community.","code":""},{"path":"https://easystats.github.io/performance/CODE_OF_CONDUCT.html","id":"attribution","dir":"","previous_headings":"","what":"Attribution","title":"Contributor Covenant Code of Conduct","text":"Code Conduct adapted Contributor Covenant, version 2.1, available https://www.contributor-covenant.org/version/2/1/code_of_conduct.html. Community Impact Guidelines inspired [Mozilla’s code conduct enforcement ladder][https://github.com/mozilla/inclusion]. answers common questions code conduct, see FAQ https://www.contributor-covenant.org/faq. Translations available https://www.contributor-covenant.org/translations.","code":""},{"path":"https://easystats.github.io/performance/CONTRIBUTING.html","id":null,"dir":"","previous_headings":"","what":"Contributing to performance","title":"Contributing to performance","text":"outlines propose change performance.","code":""},{"path":"https://easystats.github.io/performance/CONTRIBUTING.html","id":"fixing-typos","dir":"","previous_headings":"","what":"Fixing typos","title":"Contributing to performance","text":"Small typos grammatical errors documentation may edited directly using GitHub web interface, long changes made source file. want fix typos documentation, please edit related .R file R/ folder. edit .Rd file man/.","code":""},{"path":"https://easystats.github.io/performance/CONTRIBUTING.html","id":"filing-an-issue","dir":"","previous_headings":"","what":"Filing an issue","title":"Contributing to performance","text":"easiest way propose change new feature file issue. ’ve found bug, may also create associated issue. possible, try illustrate proposal bug minimal reproducible example.","code":""},{"path":"https://easystats.github.io/performance/CONTRIBUTING.html","id":"pull-requests","dir":"","previous_headings":"","what":"Pull requests","title":"Contributing to performance","text":"Please create Git branch pull request (PR). 
contributed code roughly follow R style guide, particular easystats convention code-style. performance uses roxygen2, Markdown syntax, documentation. performance uses testthat. Adding tests PR makes easier merge PR code base. PR user-visible change, may add bullet top NEWS.md describing changes made. may optionally add GitHub username, links relevant issue(s)/PR(s).","code":""},{"path":"https://easystats.github.io/performance/CONTRIBUTING.html","id":"code-of-conduct","dir":"","previous_headings":"","what":"Code of Conduct","title":"Contributing to performance","text":"Please note project released Contributor Code Conduct. participating project agree abide terms.","code":""},{"path":"https://easystats.github.io/performance/SUPPORT.html","id":null,"dir":"","previous_headings":"","what":"Getting help with {performance}","title":"Getting help with {performance}","text":"Thanks using performance. filing issue, places explore pieces put together make process smooth possible. Start making minimal reproducible example using reprex package. haven’t heard used reprex , ’re treat! Seriously, reprex make R-question-asking endeavors easier (pretty insane ROI five ten minutes ’ll take learn ’s ). additional reprex pointers, check Get help! resource used tidyverse team. Armed reprex, next step figure ask: ’s question: start StackOverflow. people answer questions. ’s bug: ’re right place, file issue. ’re sure: let’s discuss try figure ! problem bug feature request, can easily return report . opening new issue, sure search issues pull requests make sure bug hasn’t reported and/or already fixed development version. default, search pre-populated is:issue is:open. can edit qualifiers (e.g. is:pr, is:closed) needed. example, ’d simply remove is:open search issues repo, open closed. 
Thanks help!","code":""},{"path":"https://easystats.github.io/performance/articles/check_model.html","id":"make-sure-your-model-inference-is-accurate","dir":"Articles","previous_headings":"","what":"Make sure your model inference is accurate!","title":"Checking model assumption - linear models","text":"Model diagnostics crucial, parameter estimation, p-values confidence interval depend correct model assumptions well data. model assumptions violated, estimates can statistically significant “even effect study null” (Gelman/Greenland 2019). several problems associated model diagnostics. Different types models require different checks. instance, normally distributed residuals assumed apply linear regression, but not appropriate assumption logistic regression. Furthermore, recommended carry visual inspections, .e. generate inspect so-called diagnostic plots model assumptions - formal statistical tests often strict warn violation model assumptions, although everything fine within certain tolerance range. diagnostic plots interpreted? violations detected, fix ? vignette introduces check_model() function performance package, shows use function different types models resulting diagnostic plots interpreted. Furthermore, recommendations given address possible violations model assumptions. plots seen can also generated dedicated functions, e.g.: Posterior predictive checks: check_predictions() Homogeneity variance: check_heteroskedasticity() Normality residuals: check_normality() Multicollinearity: check_collinearity() Influential observations: check_outliers() Binned residuals: binned_residuals() Check overdispersion: check_overdispersion()","code":""},{"path":"https://easystats.github.io/performance/articles/check_model.html","id":"linear-models-are-all-assumptions-for-linear-models-met","dir":"Articles","previous_headings":"","what":"Linear models: Are all assumptions for linear models met?","title":"Checking model assumption - linear models","text":"start simple example linear model. 
go details diagnostic plots, let’s first look summary table. nothing suspicious far. Now let’s start model diagnostics. use check_model() function, provides overview important appropriate diagnostic plots model investigation. Now let’s take closer look plot. , ask check_model() return single plot check, instead arranging grid. can using panel argument. returns list ggplot plots.","code":"data(iris) m1 <- lm(Sepal.Width ~ Species + Petal.Length + Petal.Width, data = iris) library(parameters) model_parameters(m1) #> Parameter | Coefficient | SE | 95% CI | t(145) | p #> ---------------------------------------------------------------------------- #> (Intercept) | 3.05 | 0.09 | [ 2.86, 3.23] | 32.52 | < .001 #> Species [versicolor] | -1.76 | 0.18 | [-2.12, -1.41] | -9.83 | < .001 #> Species [virginica] | -2.20 | 0.27 | [-2.72, -1.67] | -8.28 | < .001 #> Petal Length | 0.15 | 0.06 | [ 0.03, 0.28] | 2.38 | 0.018 #> Petal Width | 0.62 | 0.14 | [ 0.35, 0.89] | 4.57 | < .001 library(performance) check_model(m1) # return a list of single plots diagnostic_plots <- plot(check_model(m1, panel = FALSE))"},{"path":"https://easystats.github.io/performance/articles/check_model.html","id":"posterior-predictive-checks","dir":"Articles","previous_headings":"Linear models: Are all assumptions for linear models met?","what":"Posterior predictive checks","title":"Checking model assumption - linear models","text":"first plot based check_predictions(). Posterior predictive checks can used “look systematic discrepancies real simulated data” (Gelman et al. 2014, p. 169). helps see whether type model (distributional family) fits well data (Gelman Hill, 2007, p. 158). blue lines simulated data based model, model true distributional assumptions met. green line represents actual observed data response variable. plot looks good, thus do not assume violations model assumptions. Next, different example. use Poisson-distributed outcome linear model, expect deviation distributional assumption linear model. 
can see, green line plot deviates visibly blue lines. may indicate linear model not appropriate, since does not capture distributional nature response variable properly.","code":"# posterior predictive checks diagnostic_plots[[1]] set.seed(99) d <- iris d$skewed <- rpois(150, 1) m2 <- lm(skewed ~ Species + Petal.Length + Petal.Width, data = d) out <- check_predictions(m2) plot(out)"},{"path":"https://easystats.github.io/performance/articles/check_model.html","id":"how-to-fix-this","dir":"Articles","previous_headings":"Linear models: Are all assumptions for linear models met? > Posterior predictive checks","what":"How to fix this?","title":"Checking model assumption - linear models","text":"best way, serious concerns model fit well data, use different type (family) regression models. example, obvious better use Poisson regression.","code":""},{"path":"https://easystats.github.io/performance/articles/check_model.html","id":"plots-for-discrete-outcomes","dir":"Articles","previous_headings":"Linear models: Are all assumptions for linear models met? > Posterior predictive checks","what":"Plots for discrete outcomes","title":"Checking model assumption - linear models","text":"discrete integer outcomes (like logistic Poisson regression), density plots not always best choice, look somewhat “wiggly” around actual values dependent variables. case, use type argument plot() method change plot-style. 
Available options type = \"discrete_dots\" (dots observed replicated outcomes), type = \"discrete_interval\" (dots observed, error bars replicated outcomes) type = \"discrete_both\" (dots error bars).","code":"set.seed(99) d <- iris d$skewed <- rpois(150, 1) m3 <- glm( skewed ~ Species + Petal.Length + Petal.Width, family = poisson(), data = d ) out <- check_predictions(m3) plot(out, type = \"discrete_both\")"},{"path":"https://easystats.github.io/performance/articles/check_model.html","id":"linearity","dir":"Articles","previous_headings":"Linear models: Are all assumptions for linear models met?","what":"Linearity","title":"Checking model assumption - linear models","text":"plot helps check assumption linear relationship. shows whether predictors may non-linear relationship outcome, case reference line may roughly indicate relationship. straight horizontal line indicates model specification seems ok. Now different example, simulate data quadratic relationship one predictors outcome.","code":"# linearity diagnostic_plots[[2]] set.seed(1234) x <- rnorm(200) z <- rnorm(200) # quadratic relationship y <- 2 * x + x^2 + 4 * z + rnorm(200) d <- data.frame(x, y, z) m <- lm(y ~ x + z, data = d) out <- plot(check_model(m, panel = FALSE)) # linearity plot out[[2]]"},{"path":"https://easystats.github.io/performance/articles/check_model.html","id":"how-to-fix-this-1","dir":"Articles","previous_headings":"Linear models: Are all assumptions for linear models met? > Linearity","what":"How to fix this?","title":"Checking model assumption - linear models","text":"green reference line not roughly flat horizontal, but rather - like example - U-shaped, may indicate predictors probably better modeled quadratic term. Transforming response variable might another solution linearity assumptions not met. caution needed interpreting plots. Although plots helpful check model assumptions, do not necessarily indicate so-called “lack fit”, e.g. missed non-linear relationships interactions. 
Thus, always recommended also look effect plots, including partial residuals.","code":"# model quadratic term m <- lm(y ~ x + I(x^2) + z, data = d) out <- plot(check_model(m, panel = FALSE)) # linearity plot out[[2]]"},{"path":"https://easystats.github.io/performance/articles/check_model.html","id":"homogeneity-of-variance---detecting-heteroscedasticity","dir":"Articles","previous_headings":"Linear models: Are all assumptions for linear models met?","what":"Homogeneity of variance - detecting heteroscedasticity","title":"Checking model assumption - linear models","text":"plot helps check assumption equal (constant) variance, .e. homoscedasticity. meet assumption, variance residuals across different values predictors similar, not notably increase decrease. Hence, desired pattern dots spread equally roughly straight, horizontal line show no apparent deviation. Usually, can easily inspected plotting residuals fitted values, possibly adding trend lines plot. horizontal parallel, everything ok. spread dots increases (decreases) across x-axis, model may suffer heteroscedasticity. example model, see model indeed violates assumption homoscedasticity. diagnostic plot used check_model() look different? check_model() plots square-root absolute values residuals. makes visual inspection slightly easier, one line needs judged. roughly flat horizontal green reference line indicates homoscedasticity. steeper slope line indicates model suffers heteroscedasticity.","code":"library(ggplot2) d <- data.frame( x = fitted(m1), y = residuals(m1), grp = as.factor(residuals(m1) >= 0) ) ggplot(d, aes(x, y, colour = grp)) + geom_point() + geom_smooth(method = \"lm\", se = FALSE) # homoscedasticity - homogeneity of variance diagnostic_plots[[3]]"},{"path":"https://easystats.github.io/performance/articles/check_model.html","id":"how-to-fix-this-2","dir":"Articles","previous_headings":"Linear models: Are all assumptions for linear models met? 
> Homogeneity of variance - detecting heteroscedasticity","what":"How to fix this?","title":"Checking model assumption - linear models","text":"several ways address heteroscedasticity. Calculating heteroscedasticity-consistent standard errors accounts larger variation, better reflecting increased uncertainty. can easily done using parameters package, e.g. parameters::model_parameters(m1, vcov = \"HC3\"). detailed vignette robust standard errors can found . heteroscedasticity can modeled directly, e.g. using package glmmTMB dispersion formula, estimate dispersion parameter account heteroscedasticity (see Brooks et al. 2017). Transforming response variable, instance, taking log(), may also help avoid issues heteroscedasticity. Weighting observations another remedy heteroscedasticity, particular method weighted least squares.","code":""},{"path":"https://easystats.github.io/performance/articles/check_model.html","id":"influential-observations---outliers","dir":"Articles","previous_headings":"Linear models: Are all assumptions for linear models met?","what":"Influential observations - outliers","title":"Checking model assumption - linear models","text":"Outliers can defined particularly influential observations, plot helps detecting outliers. Cook’s distance (Cook 1977, Cook & Weisberg 1982) used define outliers, .e. point plot falls outside Cook’s distance (dashed lines) considered influential observation. example, everything looks well.","code":"# influential observations - outliers diagnostic_plots[[4]]"},{"path":"https://easystats.github.io/performance/articles/check_model.html","id":"how-to-fix-this-3","dir":"Articles","previous_headings":"Linear models: Are all assumptions for linear models met? > Influential observations - outliers","what":"How to fix this?","title":"Checking model assumption - linear models","text":"Dealing outliers not straightforward, not recommended automatically discard observation marked “outlier”. 
Rather, domain knowledge must involved decision whether keep omit influential observation. helpful heuristic distinguish error outliers, interesting outliers, random outliers (Leys et al. 2019). Error outliers likely due human error corrected data analysis. Interesting outliers not due technical error, may theoretical interest; might thus relevant investigate even though removed current analysis interest. Random outliers assumed due chance alone belong correct distribution , therefore, retained.","code":""},{"path":"https://easystats.github.io/performance/articles/check_model.html","id":"multicollinearity","dir":"Articles","previous_headings":"Linear models: Are all assumptions for linear models met?","what":"Multicollinearity","title":"Checking model assumption - linear models","text":"plot checks potential collinearity among predictors. nutshell multicollinearity means know effect one predictor, value knowing predictor rather low. Multicollinearity might arise third, unobserved variable causal effect two predictors associated outcome. cases, actual relationship matters association unobserved variable outcome. Multicollinearity not confused raw strong correlation predictors. matters association one predictor variables, conditional variables model. multicollinearity problem, model seems suggest predictors question don’t seem reliably associated outcome (low estimates, high standard errors), although predictors actually strongly associated outcome, .e. indeed might strong effect (McElreath 2020, chapter 6.1). variance inflation factor (VIF) indicates magnitude multicollinearity model terms. thresholds low, moderate high collinearity VIF values less 5, 5 10 larger 10, respectively (James et al. 2013). Note thresholds, although commonly used, also criticized high. Zuur et al. (2010) suggest using lower values, e.g. VIF 3 larger may already no longer considered “low”. 
model clearly suffers multicollinearity, predictors high VIF values.","code":"# multicollinearity diagnostic_plots[[5]]"},{"path":"https://easystats.github.io/performance/articles/check_model.html","id":"how-to-fix-this-4","dir":"Articles","previous_headings":"Linear models: Are all assumptions for linear models met? > Multicollinearity","what":"How to fix this?","title":"Checking model assumption - linear models","text":"Usually, predictors () high VIF values removed model fix multicollinearity. caution needed interaction terms. interaction terms included model, high VIF values expected. portion multicollinearity among component terms interaction also called “inessential ill-conditioning”, leads inflated VIF values typically seen models interaction terms (Francoeur 2013). cases, try centering involved interaction terms, can reduce multicollinearity (Kim Jung 2024), re-fit model without interaction terms check model collinearity among predictors.","code":""},{"path":"https://easystats.github.io/performance/articles/check_model.html","id":"normality-of-residuals","dir":"Articles","previous_headings":"Linear models: Are all assumptions for linear models met?","what":"Normality of residuals","title":"Checking model assumption - linear models","text":"linear regression, residuals normally distributed. can checked using so-called Q-Q plots (quantile-quantile plot) compare shapes distributions. plot shows quantiles studentized residuals versus fitted values. Usually, dots fall along green reference line. deviation (mostly tails), indicates model doesn’t predict outcome well range shows larger deviations reference line. cases, inferential statistics like p-value coverage confidence intervals can inaccurate. example, see data points ok, except observations tails. Whether action needed fix can also depend results remaining diagnostic plots. 
plots indicate no violation assumptions, deviation normality, particularly tails, can less critical.","code":"# normally distributed residuals diagnostic_plots[[6]]"},{"path":"https://easystats.github.io/performance/articles/check_model.html","id":"how-to-fix-this-5","dir":"Articles","previous_headings":"Linear models: Are all assumptions for linear models met? > Normality of residuals","what":"How to fix this?","title":"Checking model assumption - linear models","text":"remedies fix non-normality residuals, according Pek et al. 2018. large sample sizes, assumption normality can relaxed due central limit theorem - no action needed. Calculating heteroscedasticity-consistent standard errors can help. See section Homogeneity variance details. Bootstrapping another alternative resolve issues non-normally distributed residuals. , can easily done using parameters package, e.g. parameters::model_parameters(m1, bootstrap = TRUE) parameters::bootstrap_parameters().","code":""},{"path":"https://easystats.github.io/performance/articles/check_model.html","id":"references","dir":"Articles","previous_headings":"","what":"References","title":"Checking model assumption - linear models","text":"Brooks ME, Kristensen K, Benthem KJ van, Magnusson A, Berg CW, Nielsen A, et al. glmmTMB Balances Speed Flexibility Among Packages Zero-inflated Generalized Linear Mixed Modeling. R Journal. 2017;9: 378-400. Cook RD. Detection influential observation linear regression. Technometrics. 1977;19(1): 15-18. Cook RD Weisberg S. Residuals Influence Regression. London: Chapman Hall, 1982. Francoeur RB. Sequential Residual Centering Resolve Low Sensitivity Moderated Regression? Simulations Cancer Symptom Clusters. Open Journal Statistics. 2013:03(06), 24-44. Gelman A, Carlin JB, Stern HS, Dunson DB, Vehtari A, Rubin DB. Bayesian data analysis. (Third edition). CRC Press, 2014 Gelman A, Greenland S. confidence intervals better termed “uncertainty intervals”? BMJ. 2019;l5381. doi:10.1136/bmj.l5381 Gelman A, Hill J. 
Data analysis using regression multilevel/hierarchical models. Cambridge; New York. Cambridge University Press, 2007 James, G., Witten, D., Hastie, T., Tibshirani, R. (eds.). An introduction statistical learning: applications R. New York: Springer, 2013 Kim, Y., & Jung, G. (2024). Understanding linear interaction analysis causal graphs. British Journal Mathematical Statistical Psychology, 00, 1–14. Leys C, Delacre M, Mora YL, Lakens D, Ley C. Classify, Detect, Manage Univariate Multivariate Outliers, Emphasis Pre-Registration. International Review Social Psychology, 2019 McElreath, R. Statistical rethinking: Bayesian course examples R Stan. 2nd edition. Chapman Hall/CRC, 2020 Pek J, Wong O, Wong ACM. Address Non-normality: Taxonomy Approaches, Reviewed, Illustrated. Front Psychol (2018) 9:2104. doi: 10.3389/fpsyg.2018.02104 Zuur AF, Ieno EN, Elphick CS. protocol data exploration avoid common statistical problems: Data exploration. Methods Ecology Evolution (2010) 1:3-14.","code":""},{"path":"https://easystats.github.io/performance/articles/check_model_practical.html","id":"fit-the-initial-model","dir":"Articles","previous_headings":"","what":"Fit the initial model","title":"How to arrive at the best model fit","text":"start generalized mixed effects model, using Poisson distribution. First, let us look summary model. see lot statistically significant estimates. No matter philosophy follow, conclusions draw statistical models inaccurate modeling assumptions poor fit situation. Hence, checking model fit essential. performance, can conduct comprehensive visual inspection model fit using check_model(). won’t go details plots , can find information created diagnostic plots dedicated vignette. now, want focus posterior predictive checks, dispersion zero-inflation well Q-Q plot (uniformity residuals). Note unlike plot(), base R function create diagnostic plots, check_model() relies simulated residuals Q-Q plot, accurate non-Gaussian models. 
See vignette documentation simulate_residuals() details. plot suggests may issues overdispersion /zero-inflation. can check problems using check_overdispersion() check_zeroinflation(), perform statistical tests (based simulated residuals). tests can additionally used beyond visual inspection. can see, model seems suffer overdispersion zero-inflation.","code":"library(performance) model1 <- glmmTMB::glmmTMB( count ~ mined + spp + (1 | site), family = poisson, data = glmmTMB::Salamanders ) library(parameters) model_parameters(model1) #> # Fixed Effects #> #> Parameter | Log-Mean | SE | 95% CI | z | p #> --------------------------------------------------------------- #> (Intercept) | -1.62 | 0.24 | [-2.10, -1.15] | -6.76 | < .001 #> mined [no] | 2.26 | 0.28 | [ 1.72, 2.81] | 8.08 | < .001 #> spp [PR] | -1.39 | 0.22 | [-1.81, -0.96] | -6.44 | < .001 #> spp [DM] | 0.23 | 0.13 | [-0.02, 0.48] | 1.79 | 0.074 #> spp [EC-A] | -0.77 | 0.17 | [-1.11, -0.43] | -4.50 | < .001 #> spp [EC-L] | 0.62 | 0.12 | [ 0.39, 0.86] | 5.21 | < .001 #> spp [DES-L] | 0.68 | 0.12 | [ 0.45, 0.91] | 5.75 | < .001 #> spp [DF] | 0.08 | 0.13 | [-0.18, 0.34] | 0.60 | 0.549 #> #> # Random Effects #> #> Parameter | Coefficient | 95% CI #> ------------------------------------------------- #> SD (Intercept: site) | 0.58 | [0.38, 0.87] #> #> Uncertainty intervals (equal-tailed) and p-values (two-tailed) computed #> using a Wald z-distribution approximation. #> #> The model has a log- or logit-link. Consider using `exponentiate = #> TRUE` to interpret coefficients as ratios. check_model(model1, size_dot = 1.2) #> `check_outliers()` does not yet support models of class `glmmTMB`. check_overdispersion(model1) #> # Overdispersion test #> #> dispersion ratio = 2.324 #> Pearson's Chi-Squared = 1475.875 #> p-value = < 0.001 #> Overdispersion detected. 
check_zeroinflation(model1) #> # Check for zero-inflation #> #> Observed zeros: 387 #> Predicted zeros: 311 #> Ratio: 0.80 #> Model is underfitting zeros (probable zero-inflation)."},{"path":"https://easystats.github.io/performance/articles/check_model_practical.html","id":"first-attempt-at-improving-the-model-fit","dir":"Articles","previous_headings":"","what":"First attempt at improving the model fit","title":"How to arrive at the best model fit","text":"can try improve model fit fitting model zero-inflation component: Looking plots, zero-inflation seems addressed properly (see especially posterior predictive checks uniformity residuals, Q-Q plot). However, overdispersion still present. can check problems using check_overdispersion() check_zeroinflation() . Indeed, overdispersion still present.","code":"model2 <- glmmTMB::glmmTMB( count ~ mined + spp + (1 | site), ziformula = ~ mined + spp, family = poisson, data = glmmTMB::Salamanders ) check_model(model2, size_dot = 1.2) #> `check_outliers()` does not yet support models of class `glmmTMB`. check_overdispersion(model2) #> # Overdispersion test #> #> dispersion ratio = 1.679 #> p-value = 0.008 #> Overdispersion detected. check_zeroinflation(model2) #> # Check for zero-inflation #> #> Observed zeros: 387 #> Predicted zeros: 387 #> Ratio: 1.00 #> Model seems ok, ratio of observed and predicted zeros is within the #> tolerance range (p > .999)."},{"path":"https://easystats.github.io/performance/articles/check_model_practical.html","id":"second-attempt-at-improving-the-model-fit","dir":"Articles","previous_headings":"","what":"Second attempt at improving the model fit","title":"How to arrive at the best model fit","text":"can try address issue fitting negative binomial model instead using Poisson distribution. Now see plot showing misspecified dispersion zero-inflation suggests overdispersion better addressed . 
Let us check :","code":"model3 <- glmmTMB::glmmTMB( count ~ mined + spp + (1 | site), ziformula = ~ mined + spp, family = glmmTMB::nbinom1, data = glmmTMB::Salamanders ) check_model(model3, size_dot = 1.2) #> `check_outliers()` does not yet support models of class `glmmTMB`. check_overdispersion(model3) #> # Overdispersion test #> #> dispersion ratio = 1.081 #> p-value = 0.54 #> No overdispersion detected. check_zeroinflation(model3) #> # Check for zero-inflation #> #> Observed zeros: 387 #> Predicted zeros: 389 #> Ratio: 1.00 #> Model seems ok, ratio of observed and predicted zeros is within the #> tolerance range (p > .999)."},{"path":"https://easystats.github.io/performance/articles/check_model_practical.html","id":"comparing-model-fit-indices","dir":"Articles","previous_headings":"","what":"Comparing model fit indices","title":"How to arrive at the best model fit","text":"different model fit indices can used compare models. purpose, rely Akaike Information Criterion (AIC), corrected Akaike Information Criterion (AICc), Bayesian Information Criterion (BIC), Proper Scoring Rules. can compare models using compare_performance() plot(). weighted AIC BIC range 0 1, indicating better model fit closer value 1. AICc corrected version AIC small sample sizes. Proper Scoring Rules range -Inf 0, higher values (.e. closer 0) indicating better model fit. 
results suggest indeed third model best fit.","code":"result <- compare_performance( model1, model2, model3, metrics = c(\"AIC\", \"AICc\", \"BIC\", \"SCORE\") ) result #> # Comparison of Model Performance Indices #> #> Name | Model | AIC (weights) | AICc (weights) | BIC (weights) | Score_log | Score_spherical #> ------------------------------------------------------------------------------------------------- #> model1 | glmmTMB | 1962.8 (<.001) | 1963.1 (<.001) | 2003.0 (<.001) | -1.457 | 0.032 #> model2 | glmmTMB | 1785.5 (<.001) | 1786.5 (<.001) | 1861.4 (<.001) | -1.328 | 0.032 #> model3 | glmmTMB | 1653.7 (>.999) | 1654.8 (>.999) | 1734.1 (>.999) | -1.275 | 0.032 plot(result)"},{"path":"https://easystats.github.io/performance/articles/check_model_practical.html","id":"statistical-tests-for-model-comparison","dir":"Articles","previous_headings":"","what":"Statistical tests for model comparison","title":"How to arrive at the best model fit","text":"can also perform statistical tests determine model best fit using test_performance() anova(). test_performance() automatically selects appropriate test based model family. can also call different tests, like test_likelihoodratio(), test_bf(), test_wald() test_vuong() directly. see, first, test_performance() used Bayes factor (based BIC comparison) compare models. second, second third model seem significantly better first model. Now compare second third model see Bayes factor likelihood ratio test suggest third model significantly better second model. mean inference? Obviously, although might found best fitting model, coefficients zero-inflation component model look rather spurious. high coefficients . still might find better distributional family model, try nbinom2 now. 
Based results, might even go model4.","code":"test_performance(model1, model2, model3) #> Name | Model | BF #> ------------------------- #> model1 | glmmTMB | #> model2 | glmmTMB | > 1000 #> model3 | glmmTMB | > 1000 #> Models were detected as nested (in terms of fixed parameters) and are compared in sequential order. test_performance(model2, model3) #> Name | Model | BF #> ------------------------- #> model2 | glmmTMB | #> model3 | glmmTMB | > 1000 #> Models were detected as nested (in terms of fixed parameters) and are compared in sequential order. test_likelihoodratio(model2, model3) #> # Likelihood-Ratio-Test (LRT) for Model Comparison (ML-estimator) #> #> Name | Model | df | df_diff | Chi2 | p #> ------------------------------------------------- #> model2 | glmmTMB | 17 | | | #> model3 | glmmTMB | 18 | 1 | 133.83 | < .001 model_parameters(model3) #> # Fixed Effects (Count Model) #> #> Parameter | Log-Mean | SE | 95% CI | z | p #> --------------------------------------------------------------- #> (Intercept) | -0.75 | 0.34 | [-1.40, -0.09] | -2.23 | 0.026 #> mined [no] | 1.56 | 0.33 | [ 0.92, 2.20] | 4.78 | < .001 #> spp [PR] | -1.57 | 0.30 | [-2.16, -0.97] | -5.15 | < .001 #> spp [DM] | 0.07 | 0.20 | [-0.32, 0.46] | 0.34 | 0.735 #> spp [EC-A] | -0.93 | 0.27 | [-1.45, -0.41] | -3.51 | < .001 #> spp [EC-L] | 0.31 | 0.20 | [-0.07, 0.69] | 1.59 | 0.111 #> spp [DES-L] | 0.41 | 0.19 | [ 0.04, 0.79] | 2.19 | 0.028 #> spp [DF] | -0.12 | 0.20 | [-0.51, 0.28] | -0.57 | 0.568 #> #> # Fixed Effects (Zero-Inflation Component) #> #> Parameter | Log-Odds | SE | 95% CI | z | p #> ------------------------------------------------------------------------------- #> (Intercept) | 2.28 | 1.12 | [ 0.08, 4.47] | 2.04 | 0.042 #> mined [no] | -21.36 | 4655.41 | [ -9145.81, 9103.08] | -4.59e-03 | 0.996 #> spp [PR] | -24.37 | 92198.78 | [ -1.81e+05, 1.81e+05] | -2.64e-04 | > .999 #> spp [DM] | -3.63 | 2.01 | [ -7.57, 0.31] | -1.80 | 0.071 #> spp [EC-A] | -2.79 | 1.95 | [ -6.61, 1.03] | 
-1.43 | 0.152 #> spp [EC-L] | -2.84 | 1.41 | [ -5.59, -0.08] | -2.02 | 0.044 #> spp [DES-L] | -3.56 | 1.78 | [ -7.04, -0.07] | -2.00 | 0.045 #> spp [DF] | -20.55 | 4284.59 | [ -8418.20, 8377.09] | -4.80e-03 | 0.996 #> #> # Dispersion #> #> Parameter | Coefficient | 95% CI #> ---------------------------------------- #> (Intercept) | 2.02 | [1.54, 2.67] #> #> # Random Effects Variances #> #> Parameter | Coefficient | 95% CI #> ------------------------------------------------- #> SD (Intercept: site) | 0.46 | [0.27, 0.76] #> #> Uncertainty intervals (equal-tailed) and p-values (two-tailed) computed #> using a Wald z-distribution approximation. model4 <- glmmTMB::glmmTMB( count ~ mined + spp + (1 | site), ziformula = ~ mined + spp, family = glmmTMB::nbinom2, data = glmmTMB::Salamanders ) check_model(model4, size_dot = 1.2) #> `check_outliers()` does not yet support models of class `glmmTMB`. check_overdispersion(model4) #> # Overdispersion test #> #> dispersion ratio = 0.958 #> p-value = 0.93 #> No overdispersion detected. check_zeroinflation(model4) #> # Check for zero-inflation #> #> Observed zeros: 387 #> Predicted zeros: 386 #> Ratio: 1.00 #> Model seems ok, ratio of observed and predicted zeros is within the #> tolerance range (p = 0.952). test_likelihoodratio(model3, model4) #> Some of the nested models seem to be identical and probably only vary in #> their random effects. 
#> # Likelihood-Ratio-Test (LRT) for Model Comparison (ML-estimator) #> #> Name | Model | df | df_diff | Chi2 | p #> ------------------------------------------------ #> model3 | glmmTMB | 18 | | | #> model4 | glmmTMB | 18 | 0 | 16.64 | < .001 model_parameters(model4) #> # Fixed Effects (Count Model) #> #> Parameter | Log-Mean | SE | 95% CI | z | p #> -------------------------------------------------------------- #> (Intercept) | -0.61 | 0.41 | [-1.40, 0.18] | -1.51 | 0.132 #> mined [no] | 1.43 | 0.37 | [ 0.71, 2.15] | 3.90 | < .001 #> spp [PR] | -0.96 | 0.64 | [-2.23, 0.30] | -1.50 | 0.134 #> spp [DM] | 0.17 | 0.24 | [-0.29, 0.63] | 0.73 | 0.468 #> spp [EC-A] | -0.39 | 0.34 | [-1.06, 0.28] | -1.13 | 0.258 #> spp [EC-L] | 0.49 | 0.24 | [ 0.02, 0.96] | 2.05 | 0.041 #> spp [DES-L] | 0.59 | 0.23 | [ 0.14, 1.04] | 2.59 | 0.010 #> spp [DF] | -0.11 | 0.24 | [-0.59, 0.36] | -0.46 | 0.642 #> #> # Fixed Effects (Zero-Inflation Component) #> #> Parameter | Log-Odds | SE | 95% CI | z | p #> --------------------------------------------------------------- #> (Intercept) | 0.91 | 0.63 | [-0.32, 2.14] | 1.45 | 0.147 #> mined [no] | -2.56 | 0.60 | [-3.75, -1.38] | -4.24 | < .001 #> spp [PR] | 1.16 | 1.33 | [-1.45, 3.78] | 0.87 | 0.384 #> spp [DM] | -0.94 | 0.80 | [-2.51, 0.63] | -1.17 | 0.241 #> spp [EC-A] | 1.04 | 0.71 | [-0.36, 2.44] | 1.46 | 0.144 #> spp [EC-L] | -0.56 | 0.73 | [-1.99, 0.86] | -0.77 | 0.439 #> spp [DES-L] | -0.89 | 0.75 | [-2.37, 0.58] | -1.19 | 0.236 #> spp [DF] | -2.54 | 2.18 | [-6.82, 1.74] | -1.16 | 0.244 #> #> # Dispersion #> #> Parameter | Coefficient | 95% CI #> ---------------------------------------- #> (Intercept) | 1.51 | [0.93, 2.46] #> #> # Random Effects Variances #> #> Parameter | Coefficient | 95% CI #> ------------------------------------------------- #> SD (Intercept: site) | 0.38 | [0.17, 0.87] #> #> Uncertainty intervals (equal-tailed) and p-values (two-tailed) computed #> using a Wald z-distribution 
approximation."},{"path":"https://easystats.github.io/performance/articles/check_model_practical.html","id":"conclusion","dir":"Articles","previous_headings":"","what":"Conclusion","title":"How to arrive at the best model fit","text":"Statistics hard. just fitting model, also checking model fit improving model. also requires domain knowledge consider whether relevant predictors included model (whether included predictors relevant!). performance package provides comprehensive set tools help task. demonstrated use tools check fit model, detect misspecification, improve model. also shown compare model fit indices perform statistical tests determine model best fit. hope vignette helpful guiding process.","code":""},{"path":"https://easystats.github.io/performance/articles/check_outliers.html","id":"reuse-of-this-material","dir":"Articles","previous_headings":"","what":"Reuse of this Material","title":"Checking outliers with *performance*","text":"Note: vignette extended write-Behavior Research Methods paper. educational module can freely reused teaching purposes long original BRM paper cited. raw code file, can adapted rmarkdown formats teaching purposes, can accessed . contribute improve content directly, please submit Pull Request {performance} package GitHub repository following usual contributing guidelines: https://easystats.github.io/performance/CONTRIBUTING.html. report issues problems, module, seek support, please open issue: https://github.com/easystats/performance/issues. Reference: Thériault, R., Ben-Shachar, M. S., Patil, ., Lüdecke, D., Wiernik, B. M., & Makowski, D. (2024). Check outliers! introduction identifying statistical outliers R easystats. Behavior Research Methods, 1-11. 
https://doi.org/10.3758/s13428-024-02356-w","code":""},{"path":"https://easystats.github.io/performance/articles/check_outliers.html","id":"summary","dir":"Articles","previous_headings":"","what":"Summary","title":"Checking outliers with *performance*","text":"Beyond challenge keeping --date current best practices regarding diagnosis treatment outliers, additional difficulty arises concerning mathematical implementation recommended methods. vignette, provide overview current recommendations best practices demonstrate can easily conveniently implemented R statistical computing software, using {performance} package easystats ecosystem. cover univariate, multivariate, model-based statistical outlier detection methods, recommended threshold, standard output, plotting methods. conclude recommendations handling outliers: different theoretical types outliers, whether exclude winsorize , importance transparency.","code":""},{"path":"https://easystats.github.io/performance/articles/check_outliers.html","id":"statement-of-need","dir":"Articles","previous_headings":"","what":"Statement of Need","title":"Checking outliers with *performance*","text":"Real-life data often contain observations can considered abnormal compared main population. cause —belong different distribution (originating different generative process) simply extreme cases, statistically rare impossible—can hard assess, boundaries “abnormal” difficult define. Nonetheless, improper handling outliers can substantially affect statistical model estimations, biasing effect estimations weakening models’ predictive performance. thus essential address problem thoughtful manner. Yet, despite existence established recommendations guidelines, many researchers still treat outliers consistent manner, using inappropriate strategies (Simmons, Nelson, Simonsohn 2011; Leys et al. 2013). One possible reason researchers aware existing recommendations, know implement using analysis software. 
In this paper, we show how to follow current best practices for automatic and reproducible statistical outlier detection (SOD) using R and the {performance} package (Lüdecke et al. 2021), which is part of the easystats ecosystem of packages that build an R framework for easy statistical modeling, visualization, and reporting (Lüdecke et al. [2019] 2023). Installation instructions can be found on GitHub or its website, and its list of dependencies on CRAN. The instructional materials that follow are aimed at an audience of researchers who want to follow good practices, and are appropriate for advanced undergraduate students, graduate students, professors, and professionals having to deal with the nuances of outlier treatment.","code":""},{"path":"https://easystats.github.io/performance/articles/check_outliers.html","id":"identifying-outliers","dir":"Articles","previous_headings":"","what":"Identifying Outliers","title":"Checking outliers with *performance*","text":"Although many researchers attempt to identify outliers with measures based on the mean (e.g., z scores), those methods are problematic: the mean and standard deviation are not robust to the influence of outliers, and those methods also assume normally distributed data (i.e., a Gaussian distribution). Therefore, current guidelines recommend using robust methods to identify outliers, such as those relying on the median as opposed to the mean (Leys et al. 2019, 2013, 2018). Nonetheless, which exact outlier method to use depends on many factors. In some cases, eye-gauging odd observations can be an appropriate solution, though many researchers will favour algorithmic solutions to detect potential outliers, for example, based on a continuous value expressing how much an observation stands out from the others. One of the factors to consider when selecting an algorithmic outlier detection method is the statistical test of interest. If using a regression model, relevant information can be found by identifying observations that do not fit well with the model. This approach, known as model-based outliers detection (as outliers are extracted after the statistical model has been fit), can be contrasted with distribution-based outliers detection, which is based on the distance between an observation and the “center” of its population. Various quantification strategies of this distance exist for the latter, both univariate (involving only one variable at a time) and multivariate (involving multiple variables). When no method is readily available to detect model-based outliers, such as for structural equation modelling (SEM), looking for multivariate outliers may be of relevance. For simple tests (t tests or correlations) that compare values of the same variable, it can be appropriate to check for univariate outliers. However, univariate methods can give false positives since t tests and correlations are, ultimately, also models/multivariable statistics. They are in this sense more limited, but we show them nonetheless for educational purposes. Importantly, whatever approach researchers choose remains a subjective decision, whose usage (and rationale) must be transparently documented and reproducible (Leys et al. 2019). Researchers should commit (ideally in a preregistration) to an outlier treatment method before collecting the data. They should then report in the paper their decisions and details of their methods, as well as any deviation from their original plan. These transparency practices can help reduce false positives due to excessive researchers’ degrees of freedom (i.e., choice flexibility throughout the analysis). In the following section, we will go through each of the mentioned methods and provide examples of how to implement them with R.","code":""},{"path":"https://easystats.github.io/performance/articles/check_outliers.html","id":"univariate-outliers","dir":"Articles","previous_headings":"Identifying Outliers","what":"Univariate Outliers","title":"Checking outliers with *performance*","text":"Researchers frequently attempt to identify outliers using measures of deviation from the center of a variable’s distribution. One popular procedure is the z score transformation, which computes the distance in standard deviation (SD) from the mean. However, as mentioned earlier, this popular method is not robust. Therefore, for univariate outliers, it is recommended to use the median along with the Median Absolute Deviation (MAD), which are more robust than the interquartile range or the mean and standard deviation (Leys et al. 2019, 2013). Researchers can identify outliers based on robust (i.e., MAD-based) z scores using the check_outliers() function of the {performance} package, by specifying method = \"zscore_robust\".1 Although Leys et al. (2013) suggest a default threshold of 2.5 and Leys et al. (2019) a threshold of 3, {performance} uses by default a less conservative threshold of ~3.29.2 That is, data points will be flagged as outliers if they go beyond +/- ~3.29 MADs. Users can adjust this threshold using the threshold argument. Below, we provide example code using the mtcars dataset, which was extracted from the 1974 Motor Trend US magazine. The dataset contains fuel consumption and 10 characteristics of automobile design and performance for 32 different car models (see ?mtcars for details). We chose this dataset because it is accessible from base R and familiar to many R users. We might want to conduct specific statistical analyses on this data set, say, t tests or structural equation modelling, but, first, we want to check for outliers that may influence those test results. Because the automobile names are stored as column names in mtcars, we first convert them to an ID column to benefit from the check_outliers() ID argument. Furthermore, because we only really need a couple of columns for this demonstration, we choose the first four (mpg = Miles/(US) gallon; cyl = Number of cylinders; disp = Displacement; hp = Gross horsepower). Finally, because there are no outliers in this dataset, we add two artificial outliers before running the function. We can see that check_outliers() with the robust z score method detected two outliers: cases 33 and 34, the observations we added ourselves. They were flagged for two variables specifically: mpg (Miles/(US) gallon) and cyl (Number of cylinders), and the output provides their exact z scores for those variables. Although we describe how to deal with such cases in more detail later in the paper, should we want to exclude the detected outliers from the main dataset, we can extract their row numbers using which() on the output object, which can then be used for indexing: check_outliers() output objects possess a plot() method, meaning it is also possible to visualize the outliers using the generic plot() function on the resulting outlier object after loading the {see} package. Visual depiction of outliers using the robust z-score method. The distance represents an aggregate score for the variables mpg, cyl, disp, and hp. Other univariate methods are available, such as those using the interquartile range (IQR), or based on different intervals, such as the Highest Density Interval (HDI) or the Bias Corrected and Accelerated Interval (BCI).
These methods are documented and described in the function’s help page.","code":"library(performance) # Create some artificial outliers and an ID column data <- rbind(mtcars[1:4], 42, 55) data <- cbind(car = row.names(data), data) outliers <- check_outliers(data, method = \"zscore_robust\", ID = \"car\") outliers > 2 outliers detected: cases 33, 34. > - Based on the following method and threshold: zscore_robust (3.291). > - For variables: mpg, cyl, disp, hp. > > ----------------------------------------------------------------------------- > > The following observations were considered outliers for two or more > variables by at least one of the selected methods: > > Row car n_Zscore_robust > 1 33 33 2 > 2 34 34 2 > > ----------------------------------------------------------------------------- > Outliers per variable (zscore_robust): > > $mpg > Row car Distance_Zscore_robust > 33 33 33 3.7 > 34 34 34 5.8 > > $cyl > Row car Distance_Zscore_robust > 33 33 33 12 > 34 34 34 17 which(outliers) > [1] 33 34 data_clean <- data[-which(outliers), ] library(see) plot(outliers)"},{"path":"https://easystats.github.io/performance/articles/check_outliers.html","id":"multivariate-outliers","dir":"Articles","previous_headings":"Identifying Outliers","what":"Multivariate Outliers","title":"Checking outliers with *performance*","text":"Univariate outliers can be useful when the focus is on a particular variable, for instance reaction time, as extreme values might be indicative of inattention or non-task-related behavior3. However, in many scenarios, the variables of a data set are not independent, and an abnormal observation will impact multiple dimensions. For instance, a participant giving random answers to a questionnaire. In this case, computing the z score for each of the questions might not lead to satisfactory results. Instead, one might want to look at these variables together. One common approach for this is to compute multivariate distance metrics such as the Mahalanobis distance. Although the Mahalanobis distance is very popular, just like the regular z scores method, it is not robust and is heavily influenced by the outliers themselves.
Therefore, for multivariate outliers, it is recommended to use the Minimum Covariance Determinant, a robust version of the Mahalanobis distance (MCD, Leys et al. 2018, 2019). In {performance}’s check_outliers(), one can use this approach with method = \"mcd\".4 Here, we detected two multivariate outliers (i.e., when looking at all variables of our dataset together). Visual depiction of outliers using the Minimum Covariance Determinant (MCD) method, a robust version of the Mahalanobis distance. The distance represents the MCD scores for the variables mpg, cyl, disp, and hp. Other multivariate methods are available, such as another type of robust Mahalanobis distance that in this case relies on an orthogonalized Gnanadesikan-Kettenring pairwise estimator (Gnanadesikan and Kettenring 1972). These methods are documented and described in the function’s help page.","code":"outliers <- check_outliers(data, method = \"mcd\", verbose = FALSE) outliers > 2 outliers detected: cases 33, 34. > - Based on the following method and threshold: mcd (20). > - For variables: mpg, cyl, disp, hp. plot(outliers)"},{"path":"https://easystats.github.io/performance/articles/check_outliers.html","id":"model-based-outliers","dir":"Articles","previous_headings":"Identifying Outliers","what":"Model-Based Outliers","title":"Checking outliers with *performance*","text":"Working with regression models creates the possibility of using model-based SOD methods. These methods rely on the concept of leverage, that is, how much influence a given observation can have on the model estimates. If few observations have a relatively strong leverage/influence on the model, one can suspect that the model’s estimates are biased by those observations, in which case flagging them as outliers can prove helpful (see next section, “Handling Outliers”). In {performance}, two such model-based SOD methods are currently available: Cook’s distance, for regular regression models, and Pareto, for Bayesian models. As such, check_outliers() can be applied directly on regression model objects, by simply specifying method = \"cook\" (or method = \"pareto\" for Bayesian models).5 Currently, most lm models are supported (with the exception of glmmTMB, lmrob, and glmrob models), as long as they are supported by the underlying functions stats::cooks.distance() (or loo::pareto_k_values()) and insight::get_data() (for a full list of the 225 models currently supported by the insight package, see https://easystats.github.io/insight/#list-of-supported-models-by-class). Also note that although check_outliers() supports the pipe operators (|> or %>%), it does not support tidymodels at this time. We show a demo below. Visual depiction of outliers based on Cook’s distance (leverage and standardized residuals), based on the fitted model. Using this model-based outlier detection method, we identified two outliers. Table 1 below summarizes which methods to use in which cases, and with what threshold. The recommended thresholds are the default thresholds.","code":"model <- lm(disp ~ mpg * hp, data = data) outliers <- check_outliers(model, method = \"cook\") outliers > 2 outliers detected: cases 31, 34. > - Based on the following method and threshold: cook (0.806). > - For variable: (Whole model). plot(outliers)"},{"path":"https://easystats.github.io/performance/articles/check_outliers.html","id":"table-1","dir":"Articles","previous_headings":"Identifying Outliers > Model-Based Outliers","what":"Table 1","title":"Checking outliers with *performance*","text":"Summary of Statistical Outlier Detection Methods Recommendations. Statistical Test | Diagnosis Method | Recommended Threshold | Function Usage: Supported regression model | Model-based: Cook (or Pareto for Bayesian models) | qf(0.5, ncol(x), nrow(x) - ncol(x)) (or 0.7 for Pareto) | check_outliers(model, method = “cook”); Structural Equation Modeling (or unsupported model) | Multivariate: Minimum Covariance Determinant (MCD) | qchisq(p = 1 - 0.001, df = ncol(x)) | check_outliers(data, method = “mcd”); Simple test with few variables (t test, correlation, etc.) | Univariate: robust z scores (MAD) | qnorm(p = 1 - 0.001 / 2), ~ 3.29 | check_outliers(data, method = “zscore_robust”)","code":""},{"path":"https://easystats.github.io/performance/articles/check_outliers.html","id":"cooks-distance-vs--mcd","dir":"Articles","previous_headings":"Identifying Outliers","what":"Cook’s Distance vs. MCD","title":"Checking outliers with *performance*","text":"Leys et al. (2018) report a preference for the MCD method over Cook’s distance. This is because Cook’s distance removes one observation at a time and checks its corresponding influence on the model each time (Cook 1977), and flags any observation that has a large influence. In the view of these authors, when there are several outliers, this process of removing a single outlier at a time is problematic, as the model remains “contaminated” or influenced by other possible outliers in the model, rendering this method suboptimal in the presence of multiple outliers. However, distribution-based approaches are not a silver bullet either, and in some cases the usage of methods agnostic to the theoretical and statistical models of interest might be problematic. For example, a very tall person would be expected to also be much heavier than average, yet still fit with the expected association between height and weight (i.e., on the line of the model weight ~ height). In contrast, using multivariate outlier detection methods may flag this person as an outlier—unusual on two variables, height and weight—even though the pattern fits perfectly with our predictions. In the example below, we plot the raw data and see two possible outliers. The first one falls along the regression line, and is therefore in line with our hypothesis. The second one clearly diverges from the regression line, and therefore we can conclude that this outlier may have a disproportionate influence on our model. Scatter plot of height and weight, with two extreme observations: one model-consistent (top-right) and one model-inconsistent (i.e., an outlier; bottom-right). Using either the z-score or the MCD methods, the model-consistent observation is incorrectly flagged as an outlier or influential observation. In contrast, the model-based detection method displays the desired behaviour: it correctly flags the person who is very tall but very light, without flagging the person who is both very tall and very heavy. The leverage method (Cook’s distance) correctly distinguishes the true outlier from the model-consistent extreme observation, based on the fitted model.
Finally, unusual observations happen naturally: extreme observations are expected even when drawn from a normal distribution. While statistical models can integrate this “expectation”, multivariate outlier methods might be too conservative, flagging too many observations despite them belonging to the right generative process. For these reasons, we believe that model-based methods are still preferable to the MCD when using supported regression models. Additionally, if the presence of multiple outliers is a significant concern, regression methods that are more robust to outliers should be considered—like t regression or quantile regression—as they render the precise identification of outliers less critical (McElreath 2020).","code":"data <- women[rep(seq_len(nrow(women)), each = 100), ] data <- rbind(data, c(100, 258), c(100, 200)) model <- lm(weight ~ height, data) rempsyc::nice_scatter(data, \"height\", \"weight\") outliers <- check_outliers(model, method = c(\"zscore_robust\", \"mcd\"), verbose = FALSE) which(outliers) > [1] 1501 1502 outliers <- check_outliers(model, method = \"cook\") which(outliers) > [1] 1502 plot(outliers)"},{"path":"https://easystats.github.io/performance/articles/check_outliers.html","id":"composite-outlier-score","dir":"Articles","previous_headings":"Identifying Outliers","what":"Composite Outlier Score","title":"Checking outliers with *performance*","text":"The {performance} package also offers an alternative, consensus-based approach that combines several methods, based on the assumption that different methods provide different angles of looking at a given problem. By applying a variety of methods, one can hope to “triangulate” the true outliers (those consistently flagged by multiple methods) and thus attempt to minimize false positives. In practice, this approach computes a composite outlier score, formed of the average of the binary (0 and 1) classification results of each method. It represents the probability that each observation is classified as an outlier by at least one method. The default decision rule classifies rows with composite outlier scores superior or equal to 0.5 as outlier observations (i.e., classified as outliers by at least half of the methods). In {performance}’s check_outliers(), one can use this approach by including all desired methods in the corresponding argument. Outliers (counts or per variables) for individual methods can then be obtained through attributes. For example: An example sentence for reporting the usage of the composite method could be: Based on a composite outlier score (see the ‘check_outliers()’ function in the ‘performance’ R package; Lüdecke et al. 2021) obtained via the joint application of multiple outliers detection algorithms ((a) median absolute deviation (MAD)-based robust z scores, Leys et al. 2013; (b) Mahalanobis minimum covariance determinant (MCD), Leys et al. 2019; and (c) Cook’s distance, Cook 1977), we excluded two participants that were classified as outliers by at least half of the methods used.","code":"outliers <- check_outliers(model, method = c(\"zscore_robust\", \"mcd\", \"cook\"), verbose = FALSE) which(outliers) > [1] 1501 1502 attributes(outliers)$outlier_var$zscore_robust > $weight > Row Distance_Zscore_robust > 1501 1501 6.9 > 1502 1502 3.7 > > $height > Row Distance_Zscore_robust > 1501 1501 5.9 > 1502 1502 5.9"},{"path":"https://easystats.github.io/performance/articles/check_outliers.html","id":"handling-outliers","dir":"Articles","previous_headings":"","what":"Handling Outliers","title":"Checking outliers with *performance*","text":"The above sections demonstrated how to identify outliers using the check_outliers() function of the {performance} package. But what should be done with these outliers once identified? Although it is common to automatically discard any observation that has been marked as “an outlier”, as if it might infect the rest of the data with its statistical ailment, we believe that the use of SOD methods is but one step in the get-to-know-your-data pipeline; the researcher’s or analyst’s domain knowledge must be involved in the decision of how to deal with observations marked as outliers by means of SOD. Indeed, although automatic tools can help detect outliers, they are nowhere near perfect. Although they can be useful to flag suspect data, they can have misses and false alarms, and they cannot replace the human eyes and proper vigilance of the researcher.
At the end of the day, when manually inspecting the data for outliers, it can be helpful to think of outliers as belonging to different types, or categories, which can help decide what to do with a given outlier.","code":""},{"path":"https://easystats.github.io/performance/articles/check_outliers.html","id":"error-interesting-and-random-outliers","dir":"Articles","previous_headings":"Handling Outliers","what":"Error, Interesting, and Random Outliers","title":"Checking outliers with *performance*","text":"Leys et al. (2019) distinguish between error outliers, interesting outliers, and random outliers. Error outliers are likely due to human error and should be corrected before data analysis, or outright removed, since they are invalid observations. Interesting outliers are not due to technical error and may be of theoretical interest; it might thus be relevant to investigate them further, even though they should be removed from the current analysis of interest. Random outliers are assumed to be due to chance alone and to belong to the correct distribution and should, therefore, be retained. It is recommended to keep observations which are expected to be part of the distribution of interest, even if they are outliers (Leys et al. 2019). However, if it is suspected that outliers belong to an alternative distribution, and those observations have a large impact on the results and call into question their robustness, especially if significance is conditional on their inclusion, they should be removed. One should also keep in mind that there might be error outliers that are not detected by statistical tools, but that should nonetheless be found and removed. For example, if we are studying the effects of X on Y among teenagers and we have one observation from a 20-year-old, this observation might not be a statistical outlier, but it is an outlier in the context of our research, and should be discarded. We call these observations undetected error outliers, in the sense that although they do not statistically stand out, they do not belong to the theoretical or empirical distribution of interest (e.g., teenagers). In this way, we should not blindly rely on statistical outlier detection methods; our due diligence to investigate undetected error outliers relative to our specific research question is also essential for valid inferences.","code":""},{"path":"https://easystats.github.io/performance/articles/check_outliers.html","id":"winsorization","dir":"Articles","previous_headings":"Handling Outliers","what":"Winsorization","title":"Checking outliers with *performance*","text":"Removing outliers can in this case be a valid strategy, and ideally one would report the results both with and without outliers to see the extent of their impact on the results. This approach, however, can reduce statistical power. Therefore, some propose a recoding approach, namely winsorization: bringing outliers back within acceptable limits (e.g., 3 MADs, Tukey and McLaughlin 1963). However, if possible, it is recommended to collect enough data so that, even after removing outliers, there is still sufficient statistical power without having to resort to winsorization (Leys et al. 2019). The easystats ecosystem makes it easy to incorporate this step into your workflow through the winsorize() function of {datawizard}, a lightweight R package to facilitate data wrangling and statistical transformations (Patil et al. 2022). This procedure will bring back univariate outliers within the limits of ‘acceptable’ values, based either on the percentile, the z score, or its robust alternative based on the MAD.","code":"data[1501:1502, ] # See outliers rows > height weight > 1501 100 258 > 1502 100 200 # Winsorizing using the MAD library(datawizard) winsorized_data <- winsorize(data, method = \"zscore\", robust = TRUE, threshold = 3) # Values > +/- MAD have been winsorized winsorized_data[1501:1502, ] > height weight > 1501 83 188 > 1502 83 188"},{"path":"https://easystats.github.io/performance/articles/check_outliers.html","id":"the-importance-of-transparency","dir":"Articles","previous_headings":"Handling Outliers","what":"The Importance of Transparency","title":"Checking outliers with *performance*","text":"A critical part of sound outlier treatment is that, regardless of which SOD method is used, it should be reported in a reproducible manner.
Ideally, the handling of outliers should be specified a priori with as much detail as possible, and preregistered, to limit researchers’ degrees of freedom and therefore the risks of false positives (Leys et al. 2019). This is especially true given that interesting outliers and random outliers are oftentimes hard to distinguish in practice. Thus, researchers should always prioritize transparency and report the following information: (a) how many outliers were identified (including their percentage); (b) according to which method and criteria; (c) using which function of which R package (if applicable); and (d) how they were handled (excluded or winsorized, and if the latter, using which threshold). If possible, (e) the corresponding code script, along with the data, should be shared on a public repository like the Open Science Framework (OSF), so that the exclusion criteria can be reproduced precisely.","code":""},{"path":"https://easystats.github.io/performance/articles/check_outliers.html","id":"conclusion","dir":"Articles","previous_headings":"","what":"Conclusion","title":"Checking outliers with *performance*","text":"In this vignette, we showed how to investigate outliers using the check_outliers() function of the {performance} package, following current good practices. However, best practice for outlier treatment does not stop at using appropriate statistical algorithms; it also entails respecting existing recommendations, such as preregistration, reproducibility, consistency, transparency, and justification. Ideally, one would additionally also report the package, function, and threshold used (linking to the full code where possible). We hope that this paper and the accompanying check_outlier() function of easystats will help researchers engage in good research practices by providing a smooth outlier detection experience.","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/articles/compare.html","id":"comparing-vs--testing","dir":"Articles","previous_headings":"","what":"Comparing vs. Testing","title":"Compare, Test, and Select Models","text":"Let’s imagine that we are interested in explaining the variability in Sepal.Length using 3 different predictors. For that, we can build 3 linear models.","code":"model1 <- lm(Sepal.Length ~ Petal.Length, data = iris) model2 <- lm(Sepal.Length ~ Petal.Width, data = iris) model3 <- lm(Sepal.Length ~ Sepal.Width, data = iris)"},{"path":"https://easystats.github.io/performance/articles/compare.html","id":"comparing-indices-of-model-performance","dir":"Articles","previous_headings":"Comparing vs. Testing","what":"Comparing Indices of Model Performance","title":"Compare, Test, and Select Models","text":"The eponymous function of the package, performance(), can be used to compute different indices of performance (an umbrella term for indices of fit). To compare indices of model performance for multiple models, one can obtain a useful table to compare these indices at a glance using the compare_performance() function. Comparison of Model Performance Indices. If you remember your stats lessons, when comparing different model fits, you would like to choose a model that has a high R^2 value (a measure of how much variance is explained by the predictors), low AIC and BIC values, and a low root mean squared error (RMSE). Based on these criteria, we can immediately see that model1 has the best fit. If you don’t like looking at tables, you can also plot them using the plotting method supported by the see package: For more, see: https://easystats.github.io/see/articles/performance.html","code":"library(performance) library(insight) # we will use `print_md` function to display a well-formatted table result <- performance(model1) print_md(result) result <- compare_performance(model1, model2, model3) print_md(result) library(see) plot(compare_performance(model1, model2, model3))"},{"path":"https://easystats.github.io/performance/articles/compare.html","id":"testing-models","dir":"Articles","previous_headings":"Comparing vs. Testing","what":"Testing Models","title":"Compare, Test, and Select Models","text":"While comparing these indices is often useful, making a decision (for instance, which model to keep or drop) can often be hard, as the indices can give conflicting suggestions. Additionally, it is sometimes unclear which index to favour in a given context. This is one of the reasons why tests are useful, as they facilitate decisions via (infamous) “significance” indices, like p-values (in a frequentist framework) or Bayes Factors (in a Bayesian framework). Each model is compared to model1. However, these tests also have strong limitations and shortcomings, and should be used as only one criterion among many! You can find more information about these tests here.","code":"result <- test_performance(model1, model2, model3) print_md(result)"},{"path":"https://easystats.github.io/performance/articles/compare.html","id":"experimenting","dir":"Articles","previous_headings":"Comparing vs. Testing","what":"Experimenting","title":"Compare, Test, and Select Models","text":"Although we have shown examples with simple linear models, we highly encourage you to try these functions out with models of your choosing. For example, these functions also work with mixed-effects regression models, Bayesian regression models, etc. To demonstrate this, we will run Bayesian versions of the linear regression models we just compared: Comparison of Model Performance Indices. Note that, since these are Bayesian regression models, the function automatically picked the appropriate indices to compare! If you are unfamiliar with these, you can explore them here. Now it’s your turn to play! :)","code":"library(rstanarm) model1 <- stan_glm(Sepal.Length ~ Petal.Length, data = iris, refresh = 0) model2 <- stan_glm(Sepal.Length ~ Petal.Width, data = iris, refresh = 0) model3 <- stan_glm(Sepal.Length ~ Sepal.Width, data = iris, refresh = 0) result <- compare_performance(model1, model2, model3) print_md(result)"},{"path":"https://easystats.github.io/performance/articles/r2.html","id":"what-is-the-r2","dir":"Articles","previous_headings":"","what":"What is the R2?","title":"R-squared (R2)","text":"The coefficient of determination, denoted R^2 and pronounced “R squared”, typically corresponds to the proportion of the variance in the dependent variable (the response) that is explained (i.e., predicted) by the independent variables (the predictors).
“absolute” index goodness--fit, ranging 0 1 (often expressed percentage), can used model performance assessment models comparison.","code":""},{"path":"https://easystats.github.io/performance/articles/r2.html","id":"different-types-of-r2","dir":"Articles","previous_headings":"","what":"Different types of R2","title":"R-squared (R2)","text":"models become complex, computation R^2 becomes increasingly less straightforward. Currently, depending context regression model object, one can choose following measures supported performance: Bayesian R^2 Cox & Snell’s R^2 Efron’s R^2 Kullback-Leibler R^2 LOO-adjusted R^2 McFadden’s R^2 McKelvey & Zavoinas R^2 Nagelkerke’s R^2 Nakagawa’s R^2 mixed models Somers’ D_{xy} rank correlation binary outcomes Tjur’s R^2 - coefficient determination (D) Xu’ R^2 (Omega-squared) R^2 models zero-inflation COMPLETED. begin, let’s first load package.","code":"library(performance)"},{"path":"https://easystats.github.io/performance/articles/r2.html","id":"r2-for-lm","dir":"Articles","previous_headings":"","what":"R2 for lm","title":"R-squared (R2)","text":"","code":"m_lm <- lm(wt ~ am * cyl, data = mtcars) r2(m_lm) > # R2 for Linear Regression > R2: 0.724 > adj. R2: 0.694"},{"path":"https://easystats.github.io/performance/articles/r2.html","id":"r2-for-glm","dir":"Articles","previous_headings":"","what":"R2 for glm","title":"R-squared (R2)","text":"context generalized linear model (e.g., logistic model outcome binary), R^2 doesn’t measure percentage “explained variance”, concept doesn’t apply. However, R^2s adapted GLMs retained name “R2”, mostly similar properties (range, sensitivity, interpretation amount explanatory power).","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/articles/r2.html","id":"marginal-vs--conditional-r2","dir":"Articles","previous_headings":"R2 for Mixed Models","what":"Marginal vs. 
Conditional R2","title":"R-squared (R2)","text":"mixed models, performance return two different R^2s: conditional R^2 marginal R^2 marginal R^2 considers variance fixed effects (without random effects), conditional R^2 takes fixed random effects account (.e., total model). Note r2 functions return R^2 values. encourage users instead always use model_performance function get comprehensive set indices model fit. , current vignette, like exclusively focus family functions talk measure.","code":"library(lme4) # defining a linear mixed-effects model model <- lmer(Petal.Length ~ Petal.Width + (1 | Species), data = iris) r2(model) > # R2 for Mixed Models > > Conditional R2: 0.933 > Marginal R2: 0.303 model_performance(model) > # Indices of model performance > > AIC | AICc | BIC | R2 (cond.) | R2 (marg.) | ICC | RMSE | Sigma > ----------------------------------------------------------------------------- > 159.036 | 159.312 | 171.079 | 0.933 | 0.303 | 0.904 | 0.373 | 0.378"},{"path":"https://easystats.github.io/performance/articles/r2.html","id":"r2-for-bayesian-models","dir":"Articles","previous_headings":"","what":"R2 for Bayesian Models","title":"R-squared (R2)","text":"discussed , mixed-effects models, two components associated R^2.","code":"library(rstanarm) model <- stan_glm(mpg ~ wt + cyl, data = mtcars, refresh = 0) r2(model) > # Bayesian R2 with Compatibility Interval > > Conditional R2: 0.816 (95% CI [0.704, 0.897]) # defining a Bayesian mixed-effects model model <- stan_lmer(Petal.Length ~ Petal.Width + (1 | Species), data = iris, refresh = 0) r2(model) > # Bayesian R2 with Compatibility Interval > > Conditional R2: 0.954 (95% CI [0.950, 0.957]) > Marginal R2: 0.405 (95% CI [0.186, 0.625])"},{"path":"https://easystats.github.io/performance/articles/r2.html","id":"comparing-change-in-r2-using-cohens-f","dir":"Articles","previous_headings":"","what":"Comparing change in R2 using Cohen’s f","title":"R-squared (R2)","text":"Cohen’s f (ANOVA fame) can used measure 
effect size context sequential multiple regression (.e., nested models). , comparing two models, can examine ratio increase R^2 unexplained variance: f^{2}=\\frac{R_{AB}^{2}-R_{A}^{2}}{1-R_{AB}^{2}} want know indices, can check details references functions compute .","code":"library(effectsize) data(hardlyworking) m1 <- lm(salary ~ xtra_hours, data = hardlyworking) m2 <- lm(salary ~ xtra_hours + n_comps + seniority, data = hardlyworking) cohens_f_squared(m1, model2 = m2) > Cohen's f2 (partial) | 95% CI | R2_delta > --------------------------------------------- > 1.19 | [0.99, Inf] | 0.17 > > - One-sided CIs: upper bound fixed at [Inf]."},{"path":"https://easystats.github.io/performance/articles/r2.html","id":"interpretation","dir":"Articles","previous_headings":"","what":"Interpretation","title":"R-squared (R2)","text":"want know interpret R^2 values, see interpretation guidelines.","code":""},{"path":"https://easystats.github.io/performance/authors.html","id":null,"dir":"","previous_headings":"","what":"Authors","title":"Authors and Citation","text":"Daniel Lüdecke. Author, maintainer. Dominique Makowski. Author, contributor. Mattan S. Ben-Shachar. Author, contributor. Indrajeet Patil. Author, contributor. Philip Waggoner. Author, contributor. Brenton M. Wiernik. Author, contributor. Rémi Thériault. Author, contributor. Vincent Arel-Bundock. Contributor. Martin Jullum. Reviewer. gjo11. Reviewer. Etienne Bacher. Contributor. Joseph Luchman. Contributor.","code":""},{"path":"https://easystats.github.io/performance/authors.html","id":"citation","dir":"","previous_headings":"","what":"Citation","title":"Authors and Citation","text":"Lüdecke et al., (2021). performance: R Package Assessment, Comparison Testing Statistical Models. Journal Open Source Software, 6(60), 3139. https://doi.org/10.21105/joss.03139","code":"@Article{, title = {{performance}: An {R} Package for Assessment, Comparison and Testing of Statistical Models}, author = {Daniel Lüdecke and Mattan S. 
Ben-Shachar and Indrajeet Patil and Philip Waggoner and Dominique Makowski}, year = {2021}, journal = {Journal of Open Source Software}, volume = {6}, number = {60}, pages = {3139}, doi = {10.21105/joss.03139}, }"},{"path":"https://easystats.github.io/performance/index.html","id":"performance-","dir":"","previous_headings":"","what":"Assessment of Regression Models Performance","title":"Assessment of Regression Models Performance","text":"Test model good model! crucial aspect building regression models evaluate quality modelfit. important investigate well models fit data fit indices report. Functions create diagnostic plots compute fit measures exist, however, mostly spread different packages. unique consistent approach assess model quality different kind models. primary goal performance package fill gap provide utilities computing indices model quality goodness fit. include measures like r-squared (R2), root mean squared error (RMSE) intraclass correlation coefficient (ICC) , also functions check (mixed) models overdispersion, zero-inflation, convergence singularity.","code":""},{"path":"https://easystats.github.io/performance/index.html","id":"installation","dir":"","previous_headings":"","what":"Installation","title":"Assessment of Regression Models Performance","text":"performance package available CRAN, latest development version available R-universe (rOpenSci). downloaded package, can load using: Tip Instead library(performance), use library(easystats). make features easystats-ecosystem available. stay updated, use easystats::install_latest().","code":"library(\"performance\")"},{"path":"https://easystats.github.io/performance/index.html","id":"citation","dir":"","previous_headings":"","what":"Citation","title":"Assessment of Regression Models Performance","text":"cite performance publications use:","code":"citation(\"performance\") #> To cite package 'performance' in publications use: #> #> Lüdecke et al., (2021). 
performance: An R Package for Assessment, Comparison and #> Testing of Statistical Models. Journal of Open Source Software, 6(60), 3139. #> https://doi.org/10.21105/joss.03139 #> #> A BibTeX entry for LaTeX users is #> #> @Article{, #> title = {{performance}: An {R} Package for Assessment, Comparison and Testing of Statistical Models}, #> author = {Daniel Lüdecke and Mattan S. Ben-Shachar and Indrajeet Patil and Philip Waggoner and Dominique Makowski}, #> year = {2021}, #> journal = {Journal of Open Source Software}, #> volume = {6}, #> number = {60}, #> pages = {3139}, #> doi = {10.21105/joss.03139}, #> }"},{"path":"https://easystats.github.io/performance/index.html","id":"documentation","dir":"","previous_headings":"","what":"Documentation","title":"Assessment of Regression Models Performance","text":"nice introduction package youtube.","code":""},{"path":[]},{"path":[]},{"path":"https://easystats.github.io/performance/index.html","id":"r-squared","dir":"","previous_headings":"The performance workflow > Assessing model quality","what":"R-squared","title":"Assessment of Regression Models Performance","text":"performance generic r2() function, computes r-squared many different models, including mixed effects Bayesian regression models. r2() returns list containing values related “appropriate” r-squared given model. different R-squared measures can also accessed directly via functions like r2_bayes(), r2_coxsnell() r2_nagelkerke() (see full list functions ). mixed models, conditional marginal R-squared returned. marginal R-squared considers variance fixed effects indicates much model’s variance explained fixed effects part . conditional R-squared takes fixed random effects account indicates much model’s variance explained “complete” model. frequentist mixed models, r2() (resp. 
r2_nakagawa()) computes mean random effect variances, thus r2() also appropriate mixed models complex random effects structures, like random slopes nested random effects (Johnson 2014; Nakagawa, Johnson, Schielzeth 2017).","code":"model <- lm(mpg ~ wt + cyl, data = mtcars) r2(model) #> # R2 for Linear Regression #> R2: 0.830 #> adj. R2: 0.819 model <- glm(am ~ wt + cyl, data = mtcars, family = binomial) r2(model) #> # R2 for Logistic Regression #> Tjur's R2: 0.705 library(MASS) data(housing) model <- polr(Sat ~ Infl + Type + Cont, weights = Freq, data = housing) r2(model) #> Nagelkerke's R2: 0.108 set.seed(123) library(rstanarm) model <- stan_glmer( Petal.Length ~ Petal.Width + (1 | Species), data = iris, cores = 4 ) r2(model) #> # Bayesian R2 with Compatibility Interval #> #> Conditional R2: 0.954 (95% CI [0.951, 0.957]) #> Marginal R2: 0.414 (95% CI [0.204, 0.644]) library(lme4) model <- lmer(Reaction ~ Days + (1 + Days | Subject), data = sleepstudy) r2(model) #> # R2 for Mixed Models #> #> Conditional R2: 0.799 #> Marginal R2: 0.279"},{"path":"https://easystats.github.io/performance/index.html","id":"intraclass-correlation-coefficient-icc","dir":"","previous_headings":"The performance workflow > Assessing model quality","what":"Intraclass Correlation Coefficient (ICC)","title":"Assessment of Regression Models Performance","text":"Similar R-squared, ICC provides information explained variance can interpreted “proportion variance explained grouping structure population” (Hox 2010). icc() calculates ICC various mixed model objects, including stanreg models. 
…models class brmsfit.","code":"library(lme4) model <- lmer(Reaction ~ Days + (1 + Days | Subject), data = sleepstudy) icc(model) #> # Intraclass Correlation Coefficient #> #> Adjusted ICC: 0.722 #> Unadjusted ICC: 0.521 library(brms) set.seed(123) model <- brm(mpg ~ wt + (1 | cyl) + (1 + wt | gear), data = mtcars) icc(model) #> # Intraclass Correlation Coefficient #> #> Adjusted ICC: 0.941 #> Unadjusted ICC: 0.779"},{"path":[]},{"path":"https://easystats.github.io/performance/index.html","id":"check-for-overdispersion","dir":"","previous_headings":"The performance workflow > Model diagnostics","what":"Check for overdispersion","title":"Assessment of Regression Models Performance","text":"Overdispersion occurs observed variance data higher expected variance model assumption (Poisson, variance roughly equals mean outcome). check_overdispersion() checks count model (including mixed models) overdispersed . Overdispersion can fixed either modelling dispersion parameter (possible packages), choosing different distributional family (like Quasi-Poisson, negative binomial, see (Gelman Hill 2007)).","code":"library(glmmTMB) data(Salamanders) model <- glm(count ~ spp + mined, family = poisson, data = Salamanders) check_overdispersion(model) #> # Overdispersion test #> #> dispersion ratio = 2.946 #> Pearson's Chi-Squared = 1873.710 #> p-value = < 0.001"},{"path":"https://easystats.github.io/performance/index.html","id":"check-for-zero-inflation","dir":"","previous_headings":"The performance workflow > Model diagnostics","what":"Check for zero-inflation","title":"Assessment of Regression Models Performance","text":"Zero-inflation ((Quasi-)Poisson models) indicated amount observed zeros larger amount predicted zeros, model underfitting zeros. cases, recommended use negative binomial zero-inflated models. 
Use check_zeroinflation() check zero-inflation present fitted model.","code":"model <- glm(count ~ spp + mined, family = poisson, data = Salamanders) check_zeroinflation(model) #> # Check for zero-inflation #> #> Observed zeros: 387 #> Predicted zeros: 298 #> Ratio: 0.77"},{"path":"https://easystats.github.io/performance/index.html","id":"check-for-singular-model-fits","dir":"","previous_headings":"The performance workflow > Model diagnostics","what":"Check for singular model fits","title":"Assessment of Regression Models Performance","text":"“singular” model fit means dimensions variance-covariance matrix estimated exactly zero. often occurs mixed models overly complex random effects structures. check_singularity() checks mixed models (class lme, merMod, glmmTMB MixMod) singularity, returns TRUE model fit singular. Remedies cure issues singular fits can found .","code":"library(lme4) data(sleepstudy) # prepare data set.seed(123) sleepstudy$mygrp <- sample(1:5, size = 180, replace = TRUE) sleepstudy$mysubgrp <- NA for (i in 1:5) { filter_group <- sleepstudy$mygrp == i sleepstudy$mysubgrp[filter_group] <- sample(1:30, size = sum(filter_group), replace = TRUE) } # fit strange model model <- lmer( Reaction ~ Days + (1 | mygrp / mysubgrp) + (1 | Subject), data = sleepstudy ) check_singularity(model) #> [1] TRUE"},{"path":"https://easystats.github.io/performance/index.html","id":"check-for-heteroskedasticity","dir":"","previous_headings":"The performance workflow > Model diagnostics","what":"Check for heteroskedasticity","title":"Assessment of Regression Models Performance","text":"Linear models assume constant error variance (homoskedasticity). 
check_heteroscedasticity() functions assess assumption violated:","code":"data(cars) model <- lm(dist ~ speed, data = cars) check_heteroscedasticity(model) #> Warning: Heteroscedasticity (non-constant error variance) detected (p = 0.031)."},{"path":"https://easystats.github.io/performance/index.html","id":"comprehensive-visualization-of-model-checks","dir":"","previous_headings":"The performance workflow > Model diagnostics","what":"Comprehensive visualization of model checks","title":"Assessment of Regression Models Performance","text":"performance provides many functions check model assumptions, like check_collinearity(), check_normality() check_heteroscedasticity(). get comprehensive check, use check_model().","code":"# defining a model model <- lm(mpg ~ wt + am + gear + vs * cyl, data = mtcars) # checking model assumptions check_model(model)"},{"path":"https://easystats.github.io/performance/index.html","id":"model-performance-summaries","dir":"","previous_headings":"The performance workflow","what":"Model performance summaries","title":"Assessment of Regression Models Performance","text":"model_performance() computes indices model performance regression models. Depending model object, typical indices might r-squared, AIC, BIC, RMSE, ICC LOOIC.","code":""},{"path":"https://easystats.github.io/performance/index.html","id":"linear-model","dir":"","previous_headings":"The performance workflow > Model performance summaries","what":"Linear model","title":"Assessment of Regression Models Performance","text":"","code":"m1 <- lm(mpg ~ wt + cyl, data = mtcars) model_performance(m1) #> # Indices of model performance #> #> AIC | AICc | BIC | R2 | R2 (adj.) 
| RMSE | Sigma #> --------------------------------------------------------------- #> 156.010 | 157.492 | 161.873 | 0.830 | 0.819 | 2.444 | 2.568"},{"path":"https://easystats.github.io/performance/index.html","id":"logistic-regression","dir":"","previous_headings":"The performance workflow > Model performance summaries","what":"Logistic regression","title":"Assessment of Regression Models Performance","text":"","code":"m2 <- glm(vs ~ wt + mpg, data = mtcars, family = \"binomial\") model_performance(m2) #> # Indices of model performance #> #> AIC | AICc | BIC | Tjur's R2 | RMSE | Sigma | Log_loss | Score_log | Score_spherical | PCP #> ----------------------------------------------------------------------------------------------------- #> 31.298 | 32.155 | 35.695 | 0.478 | 0.359 | 1.000 | 0.395 | -14.903 | 0.095 | 0.743"},{"path":"https://easystats.github.io/performance/index.html","id":"linear-mixed-model","dir":"","previous_headings":"The performance workflow > Model performance summaries","what":"Linear mixed model","title":"Assessment of Regression Models Performance","text":"","code":"library(lme4) m3 <- lmer(Reaction ~ Days + (1 + Days | Subject), data = sleepstudy) model_performance(m3) #> # Indices of model performance #> #> AIC | AICc | BIC | R2 (cond.) | R2 (marg.) 
| ICC | RMSE | Sigma #> ---------------------------------------------------------------------------------- #> 1755.628 | 1756.114 | 1774.786 | 0.799 | 0.279 | 0.722 | 23.438 | 25.592"},{"path":"https://easystats.github.io/performance/index.html","id":"models-comparison","dir":"","previous_headings":"The performance workflow","what":"Models comparison","title":"Assessment of Regression Models Performance","text":"compare_performance() function can used compare performance quality several models (including models different types).","code":"counts <- c(18, 17, 15, 20, 10, 20, 25, 13, 12) outcome <- gl(3, 1, 9) treatment <- gl(3, 3) m4 <- glm(counts ~ outcome + treatment, family = poisson()) compare_performance(m1, m2, m3, m4, verbose = FALSE) #> # Comparison of Model Performance Indices #> #> Name | Model | AIC (weights) | AICc (weights) | BIC (weights) | RMSE | Sigma | Score_log | Score_spherical | R2 | R2 (adj.) | Tjur's R2 | Log_loss | PCP | R2 (cond.) | R2 (marg.) | ICC | Nagelkerke's R2 #> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ #> m1 | lm | 156.0 (<.001) | 157.5 (<.001) | 161.9 (<.001) | 2.444 | 2.568 | | | 0.830 | 0.819 | | | | | | | #> m2 | glm | 31.3 (>.999) | 32.2 (>.999) | 35.7 (>.999) | 0.359 | 1.000 | -14.903 | 0.095 | | | 0.478 | 0.395 | 0.743 | | | | #> m3 | lmerMod | 1764.0 (<.001) | 1764.5 (<.001) | 1783.1 (<.001) | 23.438 | 25.592 | | | | | | | | 0.799 | 0.279 | 0.722 | #> m4 | glm | 56.8 (<.001) | 76.8 (<.001) | 57.7 (<.001) | 3.043 | 1.000 | -2.598 | 0.324 | | | | | | | | | 0.657"},{"path":"https://easystats.github.io/performance/index.html","id":"general-index-of-model-performance","dir":"","previous_headings":"The performance workflow > Models comparison","what":"General index of model performance","title":"Assessment of Regression Models Performance","text":"One can also 
easily compute composite index model performance sort models best one worse.","code":"compare_performance(m1, m2, m3, m4, rank = TRUE, verbose = FALSE) #> # Comparison of Model Performance Indices #> #> Name | Model | RMSE | Sigma | AIC weights | AICc weights | BIC weights | Performance-Score #> ----------------------------------------------------------------------------------------------- #> m2 | glm | 0.359 | 1.000 | 1.000 | 1.000 | 1.000 | 100.00% #> m4 | glm | 3.043 | 1.000 | 2.96e-06 | 2.06e-10 | 1.63e-05 | 37.67% #> m1 | lm | 2.444 | 2.568 | 8.30e-28 | 6.07e-28 | 3.99e-28 | 36.92% #> m3 | lmerMod | 23.438 | 25.592 | 0.00e+00 | 0.00e+00 | 0.00e+00 | 0.00%"},{"path":"https://easystats.github.io/performance/index.html","id":"visualisation-of-indices-of-models-performance","dir":"","previous_headings":"The performance workflow > Models comparison","what":"Visualisation of indices of models’ performance","title":"Assessment of Regression Models Performance","text":"Finally, provide convenient visualisation (see package must installed).","code":"plot(compare_performance(m1, m2, m4, rank = TRUE, verbose = FALSE))"},{"path":"https://easystats.github.io/performance/index.html","id":"testing-models","dir":"","previous_headings":"The performance workflow","what":"Testing models","title":"Assessment of Regression Models Performance","text":"test_performance() (test_bf, Bayesian sister) carries relevant appropriate tests based input (instance, whether models nested ).","code":"set.seed(123) data(iris) lm1 <- lm(Sepal.Length ~ Species, data = iris) lm2 <- lm(Sepal.Length ~ Species + Petal.Length, data = iris) lm3 <- lm(Sepal.Length ~ Species * Sepal.Width, data = iris) lm4 <- lm(Sepal.Length ~ Species * Sepal.Width + Petal.Length + Petal.Width, data = iris) test_performance(lm1, lm2, lm3, lm4) #> Name | Model | BF | Omega2 | p (Omega2) | LR | p (LR) #> ------------------------------------------------------------ #> lm1 | lm | | | | | #> lm2 | lm | > 1000 | 0.69 | < .001 | 
-6.25 | < .001 #> lm3 | lm | > 1000 | 0.36 | < .001 | -3.44 | < .001 #> lm4 | lm | > 1000 | 0.73 | < .001 | -7.77 | < .001 #> Each model is compared to lm1. test_bf(lm1, lm2, lm3, lm4) #> Bayes Factors for Model Comparison #> #> Model BF #> [lm2] Species + Petal.Length 3.45e+26 #> [lm3] Species * Sepal.Width 4.69e+07 #> [lm4] Species * Sepal.Width + Petal.Length + Petal.Width 7.58e+29 #> #> * Against Denominator: [lm1] Species #> * Bayes Factor Type: BIC approximation"},{"path":"https://easystats.github.io/performance/index.html","id":"plotting-functions","dir":"","previous_headings":"The performance workflow","what":"Plotting Functions","title":"Assessment of Regression Models Performance","text":"Plotting functions available see package.","code":""},{"path":"https://easystats.github.io/performance/index.html","id":"code-of-conduct","dir":"","previous_headings":"","what":"Code of Conduct","title":"Assessment of Regression Models Performance","text":"Please note performance project released Contributor Code Conduct. contributing project, agree abide terms.","code":""},{"path":"https://easystats.github.io/performance/index.html","id":"contributing","dir":"","previous_headings":"","what":"Contributing","title":"Assessment of Regression Models Performance","text":"happy receive bug reports, suggestions, questions, () contributions fix problems add features. 
Please follow contributing guidelines mentioned : https://easystats.github.io/performance/CONTRIBUTING.html","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/reference/binned_residuals.html","id":null,"dir":"Reference","previous_headings":"","what":"Binned residuals for binomial logistic regression — binned_residuals","title":"Binned residuals for binomial logistic regression — binned_residuals","text":"Check model quality binomial logistic regression models.","code":""},{"path":"https://easystats.github.io/performance/reference/binned_residuals.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Binned residuals for binomial logistic regression — binned_residuals","text":"","code":"binned_residuals( model, term = NULL, n_bins = NULL, show_dots = NULL, ci = 0.95, ci_type = c(\"exact\", \"gaussian\", \"boot\"), residuals = c(\"deviance\", \"pearson\", \"response\"), iterations = 1000, verbose = TRUE, ... )"},{"path":"https://easystats.github.io/performance/reference/binned_residuals.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Binned residuals for binomial logistic regression — binned_residuals","text":"model glm-object binomial-family. term Name independent variable x. NULL, average residuals categories term plotted; else, average residuals estimated probabilities response plotted. n_bins Numeric, number bins divide data. n_bins = NULL, square root number observations taken. show_dots Logical, TRUE, show data points plot. Set FALSE models many observations, generating plot time-consuming. default, show_dots = NULL. case binned_residuals() tries guess whether performance poor due large model thus automatically shows hides dots. ci Numeric, confidence level error bounds. ci_type Character, type error bounds calculate. Can \"exact\" (default), \"gaussian\" \"boot\". \"exact\" calculates error bounds based exact binomial distribution, using binom.test(). 
\"gaussian\" uses Gaussian approximation, \"boot\" uses simple bootstrap method, confidence intervals calculated based quantiles bootstrap distribution. residuals Character, type residuals calculate. Can \"deviance\" (default), \"pearson\" \"response\". recommended use \"response\" models residuals available. iterations Integer, number iterations use bootstrap method. used ci_type = \"boot\". verbose Toggle warnings messages. ... Currently used.","code":""},{"path":"https://easystats.github.io/performance/reference/binned_residuals.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Binned residuals for binomial logistic regression — binned_residuals","text":"data frame representing data mapped accompanying plot. case residuals inside error bounds, points black. residuals outside error bounds (indicated grey-shaded area), blue points indicate residuals OK, red points indicate model - -fitting relevant range estimated probabilities.","code":""},{"path":"https://easystats.github.io/performance/reference/binned_residuals.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Binned residuals for binomial logistic regression — binned_residuals","text":"Binned residual plots achieved \"dividing data categories (bins) based fitted values, plotting average residual versus average fitted value bin.\" (Gelman, Hill 2007: 97). model true, one expect 95% residuals fall inside error bounds. term NULL, one can compare residuals relation specific model predictor. may helpful check term fit better transformed, e.g. rising falling pattern residuals along x-axis signal consider taking logarithm predictor (cf. Gelman Hill 2007, pp. 
97-98).","code":""},{"path":"https://easystats.github.io/performance/reference/binned_residuals.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Binned residuals for binomial logistic regression — binned_residuals","text":"binned_residuals() returns data frame, however, print() method returns short summary result. data frame used plotting. plot() method, turn, creates ggplot-object.","code":""},{"path":"https://easystats.github.io/performance/reference/binned_residuals.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Binned residuals for binomial logistic regression — binned_residuals","text":"Gelman, ., Hill, J. (2007). Data analysis using regression multilevel/hierarchical models. Cambridge; New York: Cambridge University Press.","code":""},{"path":"https://easystats.github.io/performance/reference/binned_residuals.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Binned residuals for binomial logistic regression — binned_residuals","text":"","code":"model <- glm(vs ~ wt + mpg, data = mtcars, family = \"binomial\") result <- binned_residuals(model) result #> Warning: Probably bad model fit. Only about 50% of the residuals are inside the error bounds. 
#> # look at the data frame as.data.frame(result) #> xbar ybar n x.lo x.hi se CI_low #> conf_int 0.03786483 -0.26905395 5 0.01744776 0.06917366 0.07079661 -0.5299658 #> conf_int1 0.09514191 -0.44334345 5 0.07087498 0.15160143 0.06530245 -0.7042553 #> conf_int2 0.25910531 0.03762945 6 0.17159955 0.35374001 1.02017708 -0.3293456 #> conf_int3 0.47954643 -0.19916717 5 0.38363314 0.54063600 1.16107852 -0.5994783 #> conf_int4 0.71108931 0.81563262 5 0.57299903 0.89141359 0.19814385 0.5547207 #> conf_int5 0.97119262 -0.23399465 6 0.91147360 0.99815623 0.77513642 -0.5525066 #> CI_high group #> conf_int -0.008142076 no #> conf_int1 -0.182431572 no #> conf_int2 0.404604465 yes #> conf_int3 0.201143953 yes #> conf_int4 1.076544495 no #> conf_int5 0.084517267 yes # \\donttest{ # plot if (require(\"see\")) { plot(result, show_dots = TRUE) } #> Loading required package: see # }"},{"path":"https://easystats.github.io/performance/reference/check_autocorrelation.html","id":null,"dir":"Reference","previous_headings":"","what":"Check model for independence of residuals. — check_autocorrelation","title":"Check model for independence of residuals. — check_autocorrelation","text":"Check model independence residuals, .e. autocorrelation error terms.","code":""},{"path":"https://easystats.github.io/performance/reference/check_autocorrelation.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Check model for independence of residuals. — check_autocorrelation","text":"","code":"check_autocorrelation(x, ...) # Default S3 method check_autocorrelation(x, nsim = 1000, ...)"},{"path":"https://easystats.github.io/performance/reference/check_autocorrelation.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Check model for independence of residuals. — check_autocorrelation","text":"x model object. ... Currently used. 
nsim Number simulations Durbin-Watson-Test.","code":""},{"path":"https://easystats.github.io/performance/reference/check_autocorrelation.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Check model for independence of residuals. — check_autocorrelation","text":"Invisibly returns p-value test statistics. p-value < 0.05 indicates autocorrelated residuals.","code":""},{"path":"https://easystats.github.io/performance/reference/check_autocorrelation.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Check model for independence of residuals. — check_autocorrelation","text":"Performs Durbin-Watson-Test check autocorrelated residuals. case autocorrelation, robust standard errors return accurate results estimates, maybe mixed model error term cluster groups used.","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/reference/check_autocorrelation.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Check model for independence of residuals. — check_autocorrelation","text":"","code":"m <- lm(mpg ~ wt + cyl + gear + disp, data = mtcars) check_autocorrelation(m) #> OK: Residuals appear to be independent and not autocorrelated (p = 0.306)."},{"path":"https://easystats.github.io/performance/reference/check_clusterstructure.html","id":null,"dir":"Reference","previous_headings":"","what":"Check suitability of data for clustering — check_clusterstructure","title":"Check suitability of data for clustering — check_clusterstructure","text":"checks whether data appropriate clustering using Hopkins' H statistic given data. value Hopkins statistic close 0 (0.5), can reject null hypothesis conclude dataset significantly clusterable. value H lower 0.25 indicates clustering tendency 90% confidence level. visual assessment cluster tendency (VAT) approach (Bezdek Hathaway, 2002) consists investigating heatmap ordered dissimilarity matrix. 
Following , one can potentially detect clustering tendency counting number square shaped blocks along diagonal.","code":""},{"path":"https://easystats.github.io/performance/reference/check_clusterstructure.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Check suitability of data for clustering — check_clusterstructure","text":"","code":"check_clusterstructure(x, standardize = TRUE, distance = \"euclidean\", ...)"},{"path":"https://easystats.github.io/performance/reference/check_clusterstructure.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Check suitability of data for clustering — check_clusterstructure","text":"x data frame. standardize Standardize data frame clustering (default). distance Distance method used. methods \"euclidean\" (default) exploratory context clustering tendency. See stats::dist() list available methods. ... Arguments passed methods.","code":""},{"path":"https://easystats.github.io/performance/reference/check_clusterstructure.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Check suitability of data for clustering — check_clusterstructure","text":"H statistic (numeric)","code":""},{"path":"https://easystats.github.io/performance/reference/check_clusterstructure.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Check suitability of data for clustering — check_clusterstructure","text":"Lawson, R. G., & Jurs, P. C. (1990). New index clustering tendency application chemical problems. Journal chemical information computer sciences, 30(1), 36-41. Bezdek, J. C., & Hathaway, R. J. (2002, May). VAT: tool visual assessment (cluster) tendency. Proceedings 2002 International Joint Conference Neural Networks. IJCNN02 (3), 2225-2230. 
IEEE.","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/reference/check_clusterstructure.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Check suitability of data for clustering — check_clusterstructure","text":"","code":"# \\donttest{ library(performance) check_clusterstructure(iris[, 1:4]) #> # Clustering tendency #> #> The dataset is suitable for clustering (Hopkins' H = 0.20). #> plot(check_clusterstructure(iris[, 1:4])) # }"},{"path":"https://easystats.github.io/performance/reference/check_collinearity.html","id":null,"dir":"Reference","previous_headings":"","what":"Check for multicollinearity of model terms — check_collinearity","title":"Check for multicollinearity of model terms — check_collinearity","text":"check_collinearity() checks regression models multicollinearity calculating variance inflation factor (VIF). multicollinearity() alias check_collinearity(). check_concurvity() wrapper around mgcv::concurvity(), can considered collinearity check smooth terms GAMs. Confidence intervals VIF tolerance based Marcoulides et al. (2019, Appendix B).","code":""},{"path":"https://easystats.github.io/performance/reference/check_collinearity.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Check for multicollinearity of model terms — check_collinearity","text":"","code":"check_collinearity(x, ...) multicollinearity(x, ...) # Default S3 method check_collinearity(x, ci = 0.95, verbose = TRUE, ...) # S3 method for class 'glmmTMB' check_collinearity( x, component = c(\"all\", \"conditional\", \"count\", \"zi\", \"zero_inflated\"), ci = 0.95, verbose = TRUE, ... 
) check_concurvity(x, ...)"},{"path":"https://easystats.github.io/performance/reference/check_collinearity.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Check for multicollinearity of model terms — check_collinearity","text":"x model object (least respond vcov(), possible, also model.matrix() - however, also work without model.matrix()). ... Currently used. ci Confidence Interval (CI) level VIF tolerance values. verbose Toggle warnings messages. component models zero-inflation component, multicollinearity can checked conditional model (count component, component = \"conditional\" component = \"count\"), zero-inflation component (component = \"zero_inflated\" component = \"zi\") components (component = \"\"). Following model-classes currently supported: hurdle, zeroinfl, zerocount, MixMod glmmTMB.","code":""},{"path":"https://easystats.github.io/performance/reference/check_collinearity.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Check for multicollinearity of model terms — check_collinearity","text":"data frame information name model term, variance inflation factor associated confidence intervals, factor standard error increased due possible correlation terms, tolerance values (including confidence intervals), tolerance = 1/vif.","code":""},{"path":"https://easystats.github.io/performance/reference/check_collinearity.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Check for multicollinearity of model terms — check_collinearity","text":"code compute confidence intervals VIF tolerance values adapted Appendix B Marcoulides et al. paper. Thus, credits go authors original algorithm. 
also plot()-method implemented see-package.","code":""},{"path":"https://easystats.github.io/performance/reference/check_collinearity.html","id":"multicollinearity","dir":"Reference","previous_headings":"","what":"Multicollinearity","title":"Check for multicollinearity of model terms — check_collinearity","text":"Multicollinearity confused raw strong correlation predictors. matters association one predictor variables, conditional variables model. nutshell, multicollinearity means know effect one predictor, value knowing predictor rather low. Thus, one predictors help much terms better understanding model predicting outcome. consequence, multicollinearity problem, model seems suggest predictors question seems reliably associated outcome (low estimates, high standard errors), although predictors actually strongly associated outcome, .e. indeed might strong effect (McElreath 2020, chapter 6.1). Multicollinearity might arise third, unobserved variable causal effect two predictors associated outcome. cases, actual relationship matters association unobserved variable outcome. Remember: \"Pairwise correlations problem. conditional associations - correlations - matter.\" (McElreath 2020, p. 169)","code":""},{"path":"https://easystats.github.io/performance/reference/check_collinearity.html","id":"interpretation-of-the-variance-inflation-factor","dir":"Reference","previous_headings":"","what":"Interpretation of the Variance Inflation Factor","title":"Check for multicollinearity of model terms — check_collinearity","text":"variance inflation factor measure analyze magnitude multicollinearity model terms. VIF less 5 indicates low correlation predictor predictors. value 5 10 indicates moderate correlation, VIF values larger 10 sign high, tolerable correlation model predictors (James et al. 2013). Increased SE column output indicates much larger standard error due association predictors conditional remaining variables model. 
Note thresholds, although commonly used, also criticized high. Zuur et al. (2010) suggest using lower values, e.g. VIF 3 larger may already longer considered \"low\".","code":""},{"path":"https://easystats.github.io/performance/reference/check_collinearity.html","id":"multicollinearity-and-interaction-terms","dir":"Reference","previous_headings":"","what":"Multicollinearity and Interaction Terms","title":"Check for multicollinearity of model terms — check_collinearity","text":"interaction terms included model, high VIF values expected. portion multicollinearity among component terms interaction also called \"inessential ill-conditioning\", leads inflated VIF values typically seen models interaction terms (Francoeur 2013). Centering interaction terms can resolve issue (Kim Jung 2024).","code":""},{"path":"https://easystats.github.io/performance/reference/check_collinearity.html","id":"multicollinearity-and-polynomial-terms","dir":"Reference","previous_headings":"","what":"Multicollinearity and Polynomial Terms","title":"Check for multicollinearity of model terms — check_collinearity","text":"Polynomial transformations considered single term thus VIFs calculated .","code":""},{"path":"https://easystats.github.io/performance/reference/check_collinearity.html","id":"concurvity-for-smooth-terms-in-generalized-additive-models","dir":"Reference","previous_headings":"","what":"Concurvity for Smooth Terms in Generalized Additive Models","title":"Check for multicollinearity of model terms — check_collinearity","text":"check_concurvity() wrapper around mgcv::concurvity(), can considered collinearity check smooth terms GAMs.\"Concurvity occurs smooth term model approximated one smooth terms model.\" (see ?mgcv::concurvity). check_concurvity() returns column named VIF, \"worst\" measure. mgcv::concurvity() range 0 1, VIF value 1 / (1 - worst), make interpretation comparable classical VIF values, .e. 1 indicates problems, higher values indicate increasing lack identifiability. 
VIF proportion column equals \"estimate\" column mgcv::concurvity(), ranging 0 (problem) 1 (total lack identifiability).","code":""},{"path":"https://easystats.github.io/performance/reference/check_collinearity.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Check for multicollinearity of model terms — check_collinearity","text":"Francoeur, R. B. (2013). Sequential Residual Centering Resolve Low Sensitivity Moderated Regression? Simulations Cancer Symptom Clusters. Open Journal Statistics, 03(06), 24-44. James, G., Witten, D., Hastie, T., Tibshirani, R. (eds.). (2013). introduction statistical learning: applications R. New York: Springer. Kim, Y., & Jung, G. (2024). Understanding linear interaction analysis causal graphs. British Journal Mathematical Statistical Psychology, 00, 1–14. Marcoulides, K. M., Raykov, T. (2019). Evaluation Variance Inflation Factors Regression Models Using Latent Variable Modeling Methods. Educational Psychological Measurement, 79(5), 874–882. McElreath, R. (2020). Statistical rethinking: Bayesian course examples R Stan. 2nd edition. Chapman Hall/CRC. Vanhove, J. (2019). Collinearity disease needs curing. webpage Zuur AF, Ieno EN, Elphick CS. protocol data exploration avoid common statistical problems: Data exploration. 
Methods Ecology Evolution (2010) 1:3–14.","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/reference/check_collinearity.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Check for multicollinearity of model terms — check_collinearity","text":"","code":"m <- lm(mpg ~ wt + cyl + gear + disp, data = mtcars) check_collinearity(m) #> # Check for Multicollinearity #> #> Low Correlation #> #> Term VIF VIF 95% CI Increased SE Tolerance Tolerance 95% CI #> gear 1.53 [1.19, 2.51] 1.24 0.65 [0.40, 0.84] #> #> Moderate Correlation #> #> Term VIF VIF 95% CI Increased SE Tolerance Tolerance 95% CI #> wt 5.05 [3.21, 8.41] 2.25 0.20 [0.12, 0.31] #> cyl 5.41 [3.42, 9.04] 2.33 0.18 [0.11, 0.29] #> disp 9.97 [6.08, 16.85] 3.16 0.10 [0.06, 0.16] # plot results x <- check_collinearity(m) plot(x)"},{"path":"https://easystats.github.io/performance/reference/check_convergence.html","id":null,"dir":"Reference","previous_headings":"","what":"Convergence test for mixed effects models — check_convergence","title":"Convergence test for mixed effects models — check_convergence","text":"check_convergence() provides alternative convergence test merMod-objects.","code":""},{"path":"https://easystats.github.io/performance/reference/check_convergence.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Convergence test for mixed effects models — check_convergence","text":"","code":"check_convergence(x, tolerance = 0.001, ...)"},{"path":"https://easystats.github.io/performance/reference/check_convergence.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Convergence test for mixed effects models — check_convergence","text":"x merMod glmmTMB-object. tolerance Indicates value convergence result accepted. smaller tolerance , stricter test . ... 
Currently used.","code":""},{"path":"https://easystats.github.io/performance/reference/check_convergence.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Convergence test for mixed effects models — check_convergence","text":"TRUE convergence fine FALSE convergence suspicious. Additionally, convergence value returned attribute.","code":""},{"path":"https://easystats.github.io/performance/reference/check_convergence.html","id":"convergence-and-log-likelihood","dir":"Reference","previous_headings":"","what":"Convergence and log-likelihood","title":"Convergence test for mixed effects models — check_convergence","text":"Convergence problems typically arise model converged solution log-likelihood true maximum. may result unreliable overly complex (non-estimable) estimates standard errors.","code":""},{"path":"https://easystats.github.io/performance/reference/check_convergence.html","id":"inspect-model-convergence","dir":"Reference","previous_headings":"","what":"Inspect model convergence","title":"Convergence test for mixed effects models — check_convergence","text":"lme4 performs convergence-check (see ?lme4::convergence), however, discussed suggested one lme4-authors comment, check can strict. check_convergence() thus provides alternative convergence test merMod-objects.","code":""},{"path":"https://easystats.github.io/performance/reference/check_convergence.html","id":"resolving-convergence-issues","dir":"Reference","previous_headings":"","what":"Resolving convergence issues","title":"Convergence test for mixed effects models — check_convergence","text":"Convergence issues easy diagnose. help page ?lme4::convergence provides current advice resolve convergence issues. Another clue might large parameter values, e.g. estimates (scale linear predictor) larger 10 (non-identity link) generalized linear model might indicate complete separation. Complete separation can addressed regularization, e.g. 
penalized regression Bayesian regression appropriate priors fixed effects.","code":""},{"path":"https://easystats.github.io/performance/reference/check_convergence.html","id":"convergence-versus-singularity","dir":"Reference","previous_headings":"","what":"Convergence versus Singularity","title":"Convergence test for mixed effects models — check_convergence","text":"Note different meaning singularity convergence: singularity indicates issue \"true\" best estimate, .e. whether maximum likelihood estimation variance-covariance matrix random effects positive definite semi-definite. Convergence question whether can assume numerical optimization worked correctly .","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/reference/check_convergence.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Convergence test for mixed effects models — check_convergence","text":"","code":"data(cbpp, package = \"lme4\") set.seed(1) cbpp$x <- rnorm(nrow(cbpp)) cbpp$x2 <- runif(nrow(cbpp)) model <- lme4::glmer( cbind(incidence, size - incidence) ~ period + x + x2 + (1 + x | herd), data = cbpp, family = binomial() ) check_convergence(model) #> [1] TRUE #> attr(,\"gradient\") #> [1] 0.0002803063 # \\donttest{ model <- suppressWarnings(glmmTMB::glmmTMB( Sepal.Length ~ poly(Petal.Width, 4) * poly(Petal.Length, 4) + (1 + poly(Petal.Width, 4) | Species), data = iris )) check_convergence(model) #> [1] FALSE # }"},{"path":"https://easystats.github.io/performance/reference/check_dag.html","id":null,"dir":"Reference","previous_headings":"","what":"Check correct model adjustment for identifying causal effects — check_dag","title":"Check correct model adjustment for identifying causal effects — check_dag","text":"purpose check_dag() build, check visualize model based directed acyclic graphs (DAG). 
function checks model correctly adjusted identifying specific relationships variables, especially directed (maybe also \"causal\") effects given exposures outcome. case incorrect adjustments, function suggests minimal required variables adjusted (sometimes also called \"controlled \"), .e. variables least need included model. Depending goal analysis, still possible add variables model just minimally required adjustment sets. check_dag() convenient wrapper around ggdag::dagify(), dagitty::adjustmentSets() dagitty::adjustedNodes() check correct adjustment sets. returns dagitty object can visualized plot(). .dag() small convenient function return dagitty-string, can used online-tool dagitty-website.","code":""},{"path":"https://easystats.github.io/performance/reference/check_dag.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Check correct model adjustment for identifying causal effects — check_dag","text":"","code":"check_dag( ..., outcome = NULL, exposure = NULL, adjusted = NULL, latent = NULL, effect = c(\"all\", \"total\", \"direct\"), coords = NULL ) as.dag(x, ...)"},{"path":"https://easystats.github.io/performance/reference/check_dag.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Check correct model adjustment for identifying causal effects — check_dag","text":"... One formulas, converted dagitty syntax. First element may also model object. model objects provided, formula used first formula, independent variables used adjusted argument. See 'Details' 'Examples'. outcome Name dependent variable (outcome), character string formula. Must valid name formulas provided .... set, first dependent variable formulas used. exposure Name exposure variable (character string formula), direct total causal effect outcome checked. Must valid name formulas provided .... set, first independent variable formulas used. adjusted character vector formula names variables adjusted model, e.g. 
adjusted = c(\"x1\", \"x2\") adjusted = ~ x1 + x2. model object provided ..., values adjusted overwritten model's independent variables. latent character vector names latent variables model. effect Character string, indicating effect check. Can \"\" (default), \"total\", \"direct\". coords Coordinates variables plotting DAG. coordinates can provided three different ways: list two elements, x y, named vectors numerics. names correspond variable names DAG, values x y indicate x/y coordinates plot. list elements correspond variables DAG. element numeric vector length two x- y-coordinate. data frame three columns: x, y name (contains variable names). See 'Examples'. x object class check_dag, returned check_dag().","code":""},{"path":"https://easystats.github.io/performance/reference/check_dag.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Check correct model adjustment for identifying causal effects — check_dag","text":"object class check_dag, can visualized plot(). returned object also inherits class dagitty thus can used functions ggdag dagitty packages.","code":""},{"path":"https://easystats.github.io/performance/reference/check_dag.html","id":"specifying-the-dag-formulas","dir":"Reference","previous_headings":"","what":"Specifying the DAG formulas","title":"Check correct model adjustment for identifying causal effects — check_dag","text":"formulas following syntax: One-directed paths: left-hand-side name variables causal effects point (direction arrows, dagitty-language). right-hand-side variables causal effects assumed come . example, formula Y ~ X1 + X2, paths directed X1 X2 Y assumed. Bi-directed paths: Use ~~ indicate bi-directed paths. example, Y ~~ X indicates path Y X bi-directed, arrow points directions. 
Bi-directed paths often indicate unmeasured cause, unmeasured confounding, two involved variables.","code":""},{"path":"https://easystats.github.io/performance/reference/check_dag.html","id":"minimally-required-adjustments","dir":"Reference","previous_headings":"","what":"Minimally required adjustments","title":"Check correct model adjustment for identifying causal effects — check_dag","text":"function checks model correctly adjusted identifying direct total effects exposure outcome. model correctly specified, adjustment needed estimate direct effect. model correctly specified, function suggests minimally required variables adjusted . function distinguishes direct total effects, checks model correctly adjusted . model cyclic, function stops suggests remove cycles model. Note sometimes necessary try different combinations suggested adjustments, check_dag() can always detect whether least one several variables required, whether adjustments done listed variables. can useful copy dagitty-code (using .dag(), prints dagitty-string console) dagitty-website play around different adjustments.","code":""},{"path":"https://easystats.github.io/performance/reference/check_dag.html","id":"direct-and-total-effects","dir":"Reference","previous_headings":"","what":"Direct and total effects","title":"Check correct model adjustment for identifying causal effects — check_dag","text":"direct effect exposure outcome effect mediated variable model. total effect sum direct indirect effects. 
function checks model correctly adjusted identifying direct total effects exposure outcome.","code":""},{"path":"https://easystats.github.io/performance/reference/check_dag.html","id":"why-are-dags-important-the-table-fallacy","dir":"Reference","previous_headings":"","what":"Why are DAGs important - the Table 2 fallacy","title":"Check correct model adjustment for identifying causal effects — check_dag","text":"Correctly thinking identifying relationships variables important comes reporting coefficients regression models mutually adjust \"confounders\" include covariates. Different coefficients might different interpretations, depending relationship variables model. Sometimes, regression coefficient represents direct effect exposure outcome, sometimes must interpreted total effect, due involvement mediating effects. problem also called \"Table 2 fallacy\" (Westreich Greenland 2013). DAG helps visualizing thereby focusing relationships variables regression model detect missing adjustments -adjustment.","code":""},{"path":"https://easystats.github.io/performance/reference/check_dag.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Check correct model adjustment for identifying causal effects — check_dag","text":"Rohrer, J. M. (2018). Thinking clearly correlations causation: Graphical causal models observational data. Advances Methods Practices Psychological Science, 1(1), 27–42. doi:10.1177/2515245917745629 Westreich, D., & Greenland, S. (2013). Table 2 Fallacy: Presenting Interpreting Confounder Modifier Coefficients. American Journal Epidemiology, 177(4), 292–298. 
doi:10.1093/aje/kws412","code":""},{"path":"https://easystats.github.io/performance/reference/check_dag.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Check correct model adjustment for identifying causal effects — check_dag","text":"","code":"# no adjustment needed check_dag( y ~ x + b, outcome = \"y\", exposure = \"x\" ) #> # Check for correct adjustment sets #> - Outcome: y #> - Exposure: x #> #> Identification of direct and total effects #> #> Model is correctly specified. #> No adjustment needed to estimate the direct and total effect of `x` on `y`. #> # incorrect adjustment dag <- check_dag( y ~ x + b + c, x ~ b, outcome = \"y\", exposure = \"x\" ) dag #> # Check for correct adjustment sets #> - Outcome: y #> - Exposure: x #> #> Identification of direct and total effects #> #> Incorrectly adjusted! #> To estimate the direct and total effect, at least adjust for `b`. Currently, the model does not adjust for any variables. #> plot(dag) # After adjusting for `b`, the model is correctly specified dag <- check_dag( y ~ x + b + c, x ~ b, outcome = \"y\", exposure = \"x\", adjusted = \"b\" ) dag #> # Check for correct adjustment sets #> - Outcome: y #> - Exposure: x #> - Adjustment: b #> #> Identification of direct and total effects #> #> Model is correctly specified. #> All minimal sufficient adjustments to estimate the direct and total effect were done. #> # using formula interface for arguments \"outcome\", \"exposure\" and \"adjusted\" check_dag( y ~ x + b + c, x ~ b, outcome = ~y, exposure = ~x, adjusted = ~ b + c ) #> # Check for correct adjustment sets #> - Outcome: y #> - Exposure: x #> - Adjustments: b and c #> #> Identification of direct and total effects #> #> Model is correctly specified. #> All minimal sufficient adjustments to estimate the direct and total effect were done. 
#> # if not provided, \"outcome\" is taken from first formula, same for \"exposure\" # thus, we can simplify the above expression to check_dag( y ~ x + b + c, x ~ b, adjusted = ~ b + c ) #> # Check for correct adjustment sets #> - Outcome: y #> - Exposure: x #> - Adjustments: b and c #> #> Identification of direct and total effects #> #> Model is correctly specified. #> All minimal sufficient adjustments to estimate the direct and total effect were done. #> # use specific layout for the DAG dag <- check_dag( score ~ exp + b + c, exp ~ b, outcome = \"score\", exposure = \"exp\", coords = list( # x-coordinates for all nodes x = c(score = 5, exp = 4, b = 3, c = 3), # y-coordinates for all nodes y = c(score = 3, exp = 3, b = 2, c = 4) ) ) plot(dag) # alternative way of providing the coordinates dag <- check_dag( score ~ exp + b + c, exp ~ b, outcome = \"score\", exposure = \"exp\", coords = list( # x/y coordinates for each node score = c(5, 3), exp = c(4, 3), b = c(3, 2), c = c(3, 4) ) ) plot(dag) # Objects returned by `check_dag()` can be used with \"ggdag\" or \"dagitty\" ggdag::ggdag_status(dag) # Using a model object to extract information about outcome, # exposure and adjusted variables data(mtcars) m <- lm(mpg ~ wt + gear + disp + cyl, data = mtcars) dag <- check_dag( m, wt ~ disp + cyl, wt ~ am ) dag #> # Check for correct adjustment sets #> - Outcome: mpg #> - Exposure: wt #> - Adjustments: cyl, disp and gear #> #> Identification of direct and total effects #> #> Model is correctly specified. #> All minimal sufficient adjustments to estimate the direct and total effect were done. 
#> plot(dag)"},{"path":"https://easystats.github.io/performance/reference/check_distribution.html","id":null,"dir":"Reference","previous_headings":"","what":"Classify the distribution of a model-family using machine learning — check_distribution","title":"Classify the distribution of a model-family using machine learning — check_distribution","text":"Choosing right distributional family regression models essential get accurate estimates standard errors. function may help check models' distributional family see model-family probably reconsidered. Since difficult exactly predict correct model family, consider function somewhat experimental.","code":""},{"path":"https://easystats.github.io/performance/reference/check_distribution.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Classify the distribution of a model-family using machine learning — check_distribution","text":"","code":"check_distribution(model)"},{"path":"https://easystats.github.io/performance/reference/check_distribution.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Classify the distribution of a model-family using machine learning — check_distribution","text":"model Typically, model (response residuals()). May also numeric vector.","code":""},{"path":"https://easystats.github.io/performance/reference/check_distribution.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Classify the distribution of a model-family using machine learning — check_distribution","text":"function uses internal random forest model classify distribution model-family. Currently, following distributions trained (.e. 
results check_distribution() may one following): \"bernoulli\", \"beta\", \"beta-binomial\", \"binomial\", \"cauchy\", \"chi\", \"exponential\", \"F\", \"gamma\", \"half-cauchy\", \"inverse-gamma\", \"lognormal\", \"normal\", \"negative binomial\", \"negative binomial (zero-inflated)\", \"pareto\", \"poisson\", \"poisson (zero-inflated)\", \"tweedie\", \"uniform\" \"weibull\". Note similarity certain distributions according shape, skewness, etc. Thus, predicted distribution may perfectly representing distributional family underlying fitted model, response value. plot() method, shows probabilities predicted distributions, however, probability greater zero.","code":""},{"path":"https://easystats.github.io/performance/reference/check_distribution.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Classify the distribution of a model-family using machine learning — check_distribution","text":"function somewhat experimental might improved future releases. final decision model-family also based theoretical aspects information data model. also plot()-method implemented see-package.","code":""},{"path":"https://easystats.github.io/performance/reference/check_distribution.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Classify the distribution of a model-family using machine learning — check_distribution","text":"","code":"data(sleepstudy, package = \"lme4\") model <<- lme4::lmer(Reaction ~ Days + (Days | Subject), sleepstudy) check_distribution(model) #> # Distribution of Model Family #> #> Predicted Distribution of Residuals #> #> Distribution Probability #> cauchy 91% #> gamma 6% #> neg. binomial (zero-infl.) 
3% #> #> Predicted Distribution of Response #> #> Distribution Probability #> lognormal 66% #> gamma 34% plot(check_distribution(model))"},{"path":"https://easystats.github.io/performance/reference/check_factorstructure.html","id":null,"dir":"Reference","previous_headings":"","what":"Check suitability of data for Factor Analysis (FA) with Bartlett's Test of Sphericity and KMO — check_factorstructure","title":"Check suitability of data for Factor Analysis (FA) with Bartlett's Test of Sphericity and KMO — check_factorstructure","text":"checks whether data appropriate Factor Analysis (FA) running Bartlett's Test Sphericity Kaiser, Meyer, Olkin (KMO) Measure Sampling Adequacy (MSA). See details information interpretation meaning test.","code":""},{"path":"https://easystats.github.io/performance/reference/check_factorstructure.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Check suitability of data for Factor Analysis (FA) with Bartlett's Test of Sphericity and KMO — check_factorstructure","text":"","code":"check_factorstructure(x, n = NULL, ...) check_kmo(x, n = NULL, ...) check_sphericity_bartlett(x, n = NULL, ...)"},{"path":"https://easystats.github.io/performance/reference/check_factorstructure.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Check suitability of data for Factor Analysis (FA) with Bartlett's Test of Sphericity and KMO — check_factorstructure","text":"x data frame correlation matrix. latter passed, n must provided. n correlation matrix passed, number observations must specified. ... 
Arguments passed methods.","code":""},{"path":"https://easystats.github.io/performance/reference/check_factorstructure.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Check suitability of data for Factor Analysis (FA) with Bartlett's Test of Sphericity and KMO — check_factorstructure","text":"list lists indices related sphericity KMO.","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/reference/check_factorstructure.html","id":"bartlett-s-test-of-sphericity","dir":"Reference","previous_headings":"","what":"Bartlett's Test of Sphericity","title":"Check suitability of data for Factor Analysis (FA) with Bartlett's Test of Sphericity and KMO — check_factorstructure","text":"Bartlett's (1951) test sphericity tests whether matrix (correlations) significantly different identity matrix (filled 0). tests whether correlation coefficients 0. test computes probability correlation matrix significant correlations among least variables dataset, prerequisite factor analysis work. often suggested check whether Bartlett’s test sphericity significant starting factor analysis, one needs remember test testing pretty extreme scenario (correlations non-significant). sample size increases, test tends always significant, makes particularly useful informative well-powered studies.","code":""},{"path":"https://easystats.github.io/performance/reference/check_factorstructure.html","id":"kaiser-meyer-olkin-kmo-","dir":"Reference","previous_headings":"","what":"Kaiser, Meyer, Olkin (KMO)","title":"Check suitability of data for Factor Analysis (FA) with Bartlett's Test of Sphericity and KMO — check_factorstructure","text":"(Measure Sampling Adequacy (MSA) Factor Analysis.) Kaiser (1970) introduced Measure Sampling Adequacy (MSA), later modified Kaiser Rice (1974). Kaiser-Meyer-Olkin (KMO) statistic, can vary 0 1, indicates degree variable set predicted without error variables. 
value 0 indicates sum partial correlations large relative sum correlations, indicating factor analysis likely inappropriate. KMO value close 1 indicates sum partial correlations large relative sum correlations factor analysis yield distinct reliable factors. means patterns correlations relatively compact, factor analysis yield distinct reliable factors. Values smaller 0.5 suggest either collect data rethink variables include. Kaiser (1974) suggested KMO > .9 marvelous, .80s, meritorious, .70s, middling, .60s, mediocre, .50s, miserable, less .5, unacceptable. Hair et al. (2006) suggest accepting value > 0.5. Values 0.5 0.7 mediocre, values 0.7 0.8 good. Variables individual KMO values 0.5 considered exclusion analysis (note need re-compute KMO indices dependent whole dataset).","code":""},{"path":"https://easystats.github.io/performance/reference/check_factorstructure.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Check suitability of data for Factor Analysis (FA) with Bartlett's Test of Sphericity and KMO — check_factorstructure","text":"function wrapper around KMO cortest.bartlett() functions psych package (Revelle, 2016). Revelle, W. (2016). : Use psych package Factor Analysis data reduction. Bartlett, M. S. (1951). effect standardization Chi-square approximation factor analysis. Biometrika, 38(3/4), 337-344. Kaiser, H. F. (1970). second generation little jiffy. Psychometrika, 35(4), 401-415. Kaiser, H. F., & Rice, J. (1974). Little jiffy, mark IV. Educational psychological measurement, 34(1), 111-117. Kaiser, H. F. (1974). index factorial simplicity. 
Psychometrika, 39(1), 31-36.","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/reference/check_factorstructure.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Check suitability of data for Factor Analysis (FA) with Bartlett's Test of Sphericity and KMO — check_factorstructure","text":"","code":"library(performance) check_factorstructure(mtcars) #> # Is the data suitable for Factor Analysis? #> #> #> - Sphericity: Bartlett's test of sphericity suggests that there is sufficient significant correlation in the data for factor analysis (Chisq(55) = 408.01, p < .001). #> - KMO: The Kaiser, Meyer, Olkin (KMO) overall measure of sampling adequacy suggests that data seems appropriate for factor analysis (KMO = 0.83). The individual KMO scores are: mpg (0.93), cyl (0.90), disp (0.76), hp (0.84), drat (0.95), wt (0.74), qsec (0.74), vs (0.91), am (0.88), gear (0.85), carb (0.62). # One can also pass a correlation matrix r <- cor(mtcars) check_factorstructure(r, n = nrow(mtcars)) #> # Is the data suitable for Factor Analysis? #> #> #> - Sphericity: Bartlett's test of sphericity suggests that there is sufficient significant correlation in the data for factor analysis (Chisq(55) = 408.01, p < .001). #> - KMO: The Kaiser, Meyer, Olkin (KMO) overall measure of sampling adequacy suggests that data seems appropriate for factor analysis (KMO = 0.83). The individual KMO scores are: mpg (0.93), cyl (0.90), disp (0.76), hp (0.84), drat (0.95), wt (0.74), qsec (0.74), vs (0.91), am (0.88), gear (0.85), carb (0.62)."},{"path":"https://easystats.github.io/performance/reference/check_heterogeneity_bias.html","id":null,"dir":"Reference","previous_headings":"","what":"Check model predictor for heterogeneity bias — check_heterogeneity_bias","title":"Check model predictor for heterogeneity bias — check_heterogeneity_bias","text":"check_heterogeneity_bias() checks model predictors variables may cause heterogeneity bias, .e. 
variables within- /-effect (Bell Jones, 2015).","code":""},{"path":"https://easystats.github.io/performance/reference/check_heterogeneity_bias.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Check model predictor for heterogeneity bias — check_heterogeneity_bias","text":"","code":"check_heterogeneity_bias(x, select = NULL, by = NULL, nested = FALSE)"},{"path":"https://easystats.github.io/performance/reference/check_heterogeneity_bias.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Check model predictor for heterogeneity bias — check_heterogeneity_bias","text":"x data frame mixed model object. select Character vector (formula) names variables select checked. x mixed model object, argument ignored. Character vector (formula) name variable indicates group- cluster-ID. cross-classified nested designs, can also identify two variables group- cluster-IDs. data nested treated , set nested = TRUE. Else, defines two variables nested = FALSE, cross-classified design assumed. x model object, argument ignored. nested designs, can : character vector name variable indicates levels, ordered highest level lowest (e.g. = c(\"L4\", \"L3\", \"L2\"). character vector variable names format = \"L4/L3/L2\", levels separated /. See also section De-meaning cross-classified designs De-meaning nested designs . nested Logical, TRUE, data treated nested. FALSE, data treated cross-classified. applies contains one variable.","code":""},{"path":"https://easystats.github.io/performance/reference/check_heterogeneity_bias.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Check model predictor for heterogeneity bias — check_heterogeneity_bias","text":"Bell , Jones K. 2015. Explaining Fixed Effects: Random Effects Modeling Time-Series Cross-Sectional Panel Data. 
Political Science Research and Methods, 3(1), 133–153.","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/reference/check_heterogeneity_bias.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Check model predictor for heterogeneity bias — check_heterogeneity_bias","text":"","code":"data(iris) iris$ID <- sample(1:4, nrow(iris), replace = TRUE) # fake-ID check_heterogeneity_bias(iris, select = c(\"Sepal.Length\", \"Petal.Length\"), by = \"ID\") #> Possible heterogeneity bias due to following predictors: Sepal.Length, Petal.Length"},{"path":"https://easystats.github.io/performance/reference/check_heteroscedasticity.html","id":null,"dir":"Reference","previous_headings":"","what":"Check model for (non-)constant error variance — check_heteroscedasticity","title":"Check model for (non-)constant error variance — check_heteroscedasticity","text":"Significance testing for linear regression models assumes that the model errors (or residuals) have constant variance. If this assumption is violated the p-values from the model are no longer reliable.","code":""},{"path":"https://easystats.github.io/performance/reference/check_heteroscedasticity.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Check model for (non-)constant error variance — check_heteroscedasticity","text":"","code":"check_heteroscedasticity(x, ...) check_heteroskedasticity(x, ...)"},{"path":"https://easystats.github.io/performance/reference/check_heteroscedasticity.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Check model for (non-)constant error variance — check_heteroscedasticity","text":"x A model object. ... Currently not used.","code":""},{"path":"https://easystats.github.io/performance/reference/check_heteroscedasticity.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Check model for (non-)constant error variance — check_heteroscedasticity","text":"The p-value of the test statistics.
A p-value < 0.05 indicates a non-constant variance (heteroskedasticity).","code":""},{"path":"https://easystats.github.io/performance/reference/check_heteroscedasticity.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Check model for (non-)constant error variance — check_heteroscedasticity","text":"This test of the hypothesis of (non-)constant error is also called Breusch-Pagan test (1979).","code":""},{"path":"https://easystats.github.io/performance/reference/check_heteroscedasticity.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Check model for (non-)constant error variance — check_heteroscedasticity","text":"There is also a plot()-method implemented in the see-package.","code":""},{"path":"https://easystats.github.io/performance/reference/check_heteroscedasticity.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Check model for (non-)constant error variance — check_heteroscedasticity","text":"Breusch, T. S., and Pagan, A. R. (1979) A simple test for heteroscedasticity and random coefficient variation. Econometrica 47, 1287-1294.","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/reference/check_heteroscedasticity.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Check model for (non-)constant error variance — check_heteroscedasticity","text":"","code":"m <<- lm(mpg ~ wt + cyl + gear + disp, data = mtcars) check_heteroscedasticity(m) #> Warning: Heteroscedasticity (non-constant error variance) detected (p = 0.042).
#> # plot results if (require(\"see\")) { x <- check_heteroscedasticity(m) plot(x) }"},{"path":"https://easystats.github.io/performance/reference/check_homogeneity.html","id":null,"dir":"Reference","previous_headings":"","what":"Check model for homogeneity of variances — check_homogeneity","title":"Check model for homogeneity of variances — check_homogeneity","text":"Check model for homogeneity of variances between groups described by independent variables in a model.","code":""},{"path":"https://easystats.github.io/performance/reference/check_homogeneity.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Check model for homogeneity of variances — check_homogeneity","text":"","code":"check_homogeneity(x, method = c(\"bartlett\", \"fligner\", \"levene\", \"auto\"), ...) # S3 method for class 'afex_aov' check_homogeneity(x, method = \"levene\", ...)"},{"path":"https://easystats.github.io/performance/reference/check_homogeneity.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Check model for homogeneity of variances — check_homogeneity","text":"x A linear model or an ANOVA object. method Name of the method (underlying test) that should be performed to check the homogeneity of variances. May either be \"levene\" for Levene's Test for Homogeneity of Variance, \"bartlett\" for the Bartlett test (assuming normal distributed samples or groups), \"fligner\" for the Fligner-Killeen test (rank-based, non-parametric test), or \"auto\". In the latter case, the Bartlett test is used if the model response is normal distributed, else the Fligner-Killeen test is used. ... Arguments passed down to car::leveneTest().","code":""},{"path":"https://easystats.github.io/performance/reference/check_homogeneity.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Check model for homogeneity of variances — check_homogeneity","text":"Invisibly returns the p-value of the test statistics.
A p-value < 0.05 indicates a significant difference in the variance between the groups.","code":""},{"path":"https://easystats.github.io/performance/reference/check_homogeneity.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Check model for homogeneity of variances — check_homogeneity","text":"There is also a plot()-method implemented in the see-package.","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/reference/check_homogeneity.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Check model for homogeneity of variances — check_homogeneity","text":"","code":"model <<- lm(len ~ supp + dose, data = ToothGrowth) check_homogeneity(model) #> OK: There is not clear evidence for different variances across groups (Bartlett Test, p = 0.226). #> # plot results if (require(\"see\")) { result <- check_homogeneity(model) plot(result) }"},{"path":"https://easystats.github.io/performance/reference/check_itemscale.html","id":null,"dir":"Reference","previous_headings":"","what":"Describe Properties of Item Scales — check_itemscale","title":"Describe Properties of Item Scales — check_itemscale","text":"Compute various measures of internal consistencies applied to (sub)scales, which items were extracted using parameters::principal_components().","code":""},{"path":"https://easystats.github.io/performance/reference/check_itemscale.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Describe Properties of Item Scales — check_itemscale","text":"","code":"check_itemscale(x, factor_index = NULL)"},{"path":"https://easystats.github.io/performance/reference/check_itemscale.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Describe Properties of Item Scales — check_itemscale","text":"x An object of class parameters_pca, as returned by parameters::principal_components(), or a data frame. factor_index If x is a data frame, factor_index must be specified.
It must be a numeric vector of same length as the number of columns in x, where each element is the index of the factor to which the respective column in x belongs.","code":""},{"path":"https://easystats.github.io/performance/reference/check_itemscale.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Describe Properties of Item Scales — check_itemscale","text":"A list of data frames, with related measures of internal consistencies of each subscale.","code":""},{"path":"https://easystats.github.io/performance/reference/check_itemscale.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Describe Properties of Item Scales — check_itemscale","text":"check_itemscale() calculates various measures of internal consistencies, such as Cronbach's alpha, item difficulty or discrimination etc. on subscales which were built from several items. Subscales are retrieved from the results of parameters::principal_components(), i.e. based on how many components were extracted from the PCA, check_itemscale() retrieves those variables that belong to a component and calculates the above mentioned measures.","code":""},{"path":"https://easystats.github.io/performance/reference/check_itemscale.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Describe Properties of Item Scales — check_itemscale","text":"Item difficulty should range between 0.2 and 0.8. Ideal value is p+(1-p)/2 (which mostly is between 0.5 and 0.8). See item_difficulty() for details. For item discrimination, acceptable values are 0.20 or higher; the closer to 1.00 the better. See item_reliability() for more details. In case the total Cronbach's alpha value is below the acceptable cut-off of 0.7 (mostly if an index has few items), the mean inter-item-correlation is an alternative measure to indicate acceptability. Satisfactory range lies between 0.2 and 0.4. See also item_intercor().","code":""},{"path":"https://easystats.github.io/performance/reference/check_itemscale.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Describe Properties of Item Scales — check_itemscale","text":"Briggs SR, Cheek JM (1986) The role of factor analysis in the development and evaluation of personality scales. Journal of Personality, 54(1), 106-148.
doi: 10.1111/j.1467-6494.1986.tb00391.x","code":""},{"path":"https://easystats.github.io/performance/reference/check_itemscale.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Describe Properties of Item Scales — check_itemscale","text":"","code":"# data generation from '?prcomp', slightly modified C <- chol(S <- toeplitz(0.9^(0:15))) set.seed(17) X <- matrix(rnorm(1600), 100, 16) Z <- X %*% C pca <- parameters::principal_components( as.data.frame(Z), rotation = \"varimax\", n = 3 ) pca #> # Rotated loadings from Principal Component Analysis (varimax-rotation) #> #> Variable | RC3 | RC1 | RC2 | Complexity | Uniqueness | MSA #> -------------------------------------------------------------- #> V1 | 0.85 | 0.17 | 0.20 | 1.20 | 0.21 | 0.90 #> V2 | 0.89 | 0.25 | 0.22 | 1.28 | 0.11 | 0.90 #> V3 | 0.91 | 0.26 | 0.17 | 1.23 | 0.07 | 0.89 #> V4 | 0.88 | 0.33 | 0.13 | 1.33 | 0.10 | 0.91 #> V5 | 0.82 | 0.41 | 0.14 | 1.55 | 0.14 | 0.94 #> V6 | 0.68 | 0.59 | 0.18 | 2.12 | 0.15 | 0.92 #> V7 | 0.57 | 0.74 | 0.20 | 2.04 | 0.09 | 0.93 #> V8 | 0.44 | 0.81 | 0.20 | 1.67 | 0.11 | 0.95 #> V9 | 0.33 | 0.84 | 0.32 | 1.61 | 0.09 | 0.93 #> V10 | 0.29 | 0.85 | 0.33 | 1.55 | 0.09 | 0.92 #> V11 | 0.30 | 0.79 | 0.42 | 1.86 | 0.11 | 0.92 #> V12 | 0.27 | 0.68 | 0.57 | 2.28 | 0.15 | 0.90 #> V13 | 0.20 | 0.55 | 0.71 | 2.06 | 0.15 | 0.90 #> V14 | 0.21 | 0.36 | 0.86 | 1.48 | 0.09 | 0.91 #> V15 | 0.20 | 0.23 | 0.91 | 1.23 | 0.08 | 0.88 #> V16 | 0.11 | 0.15 | 0.90 | 1.09 | 0.15 | 0.87 #> #> The 3 principal components (varimax rotation) accounted for 88.19% of the total variance of the original data (RC3 = 32.81%, RC1 = 31.24%, RC2 = 24.14%). 
#> check_itemscale(pca) #> # Description of (Sub-)Scales #> Component 1 #> #> Item | Missings | Mean | SD | Skewness | Difficulty | Discrimination | alpha if deleted #> ------------------------------------------------------------------------------------------ #> V1 | 0 | -0.02 | 1.06 | -0.49 | -0.01 | 0.80 | 0.96 #> V2 | 0 | -0.05 | 1.05 | -0.29 | -0.02 | 0.90 | 0.95 #> V3 | 0 | 0.00 | 1.10 | -0.77 | 0.00 | 0.94 | 0.95 #> V4 | 0 | 0.00 | 1.10 | -0.82 | 0.00 | 0.92 | 0.95 #> V5 | 0 | -0.07 | 1.09 | -0.29 | -0.02 | 0.90 | 0.95 #> V6 | 0 | -0.04 | 1.13 | -0.27 | -0.01 | 0.83 | 0.96 #> #> Mean inter-item-correlation = 0.813 Cronbach's alpha = 0.963 #> #> Component 2 #> #> Item | Missings | Mean | SD | Skewness | Difficulty | Discrimination | alpha if deleted #> ------------------------------------------------------------------------------------------ #> V7 | 0 | -0.01 | 1.07 | 0.01 | 0.00 | 0.87 | 0.97 #> V8 | 0 | 0.02 | 0.96 | 0.23 | 0.01 | 0.89 | 0.96 #> V9 | 0 | 0.04 | 0.98 | 0.37 | 0.01 | 0.93 | 0.96 #> V10 | 0 | 0.08 | 1.00 | 0.18 | 0.02 | 0.93 | 0.96 #> V11 | 0 | 0.02 | 1.03 | 0.18 | 0.01 | 0.92 | 0.96 #> V12 | 0 | 0.00 | 1.04 | 0.27 | 0.00 | 0.84 | 0.97 #> #> Mean inter-item-correlation = 0.840 Cronbach's alpha = 0.969 #> #> Component 3 #> #> Item | Missings | Mean | SD | Skewness | Difficulty | Discrimination | alpha if deleted #> ------------------------------------------------------------------------------------------ #> V13 | 0 | 0.04 | 0.95 | 0.10 | 0.01 | 0.81 | 0.95 #> V14 | 0 | -0.02 | 0.96 | 0.24 | -0.01 | 0.93 | 0.91 #> V15 | 0 | -0.03 | 0.94 | 0.41 | -0.01 | 0.92 | 0.91 #> V16 | 0 | 0.03 | 0.96 | 0.28 | 0.01 | 0.82 | 0.94 #> #> Mean inter-item-correlation = 0.811 Cronbach's alpha = 0.945 # as data frame check_itemscale( as.data.frame(Z), factor_index = parameters::closest_component(pca) ) #> # Description of (Sub-)Scales #> Component 1 #> #> Item | Missings | Mean | SD | Skewness | Difficulty | Discrimination | alpha if deleted #> 
------------------------------------------------------------------------------------------ #> V1 | 0 | -0.02 | 1.06 | -0.49 | -0.01 | 0.80 | 0.96 #> V2 | 0 | -0.05 | 1.05 | -0.29 | -0.02 | 0.90 | 0.95 #> V3 | 0 | 0.00 | 1.10 | -0.77 | 0.00 | 0.94 | 0.95 #> V4 | 0 | 0.00 | 1.10 | -0.82 | 0.00 | 0.92 | 0.95 #> V5 | 0 | -0.07 | 1.09 | -0.29 | -0.02 | 0.90 | 0.95 #> V6 | 0 | -0.04 | 1.13 | -0.27 | -0.01 | 0.83 | 0.96 #> #> Mean inter-item-correlation = 0.813 Cronbach's alpha = 0.963 #> #> Component 2 #> #> Item | Missings | Mean | SD | Skewness | Difficulty | Discrimination | alpha if deleted #> ------------------------------------------------------------------------------------------ #> V7 | 0 | -0.01 | 1.07 | 0.01 | 0.00 | 0.87 | 0.97 #> V8 | 0 | 0.02 | 0.96 | 0.23 | 0.01 | 0.89 | 0.96 #> V9 | 0 | 0.04 | 0.98 | 0.37 | 0.01 | 0.93 | 0.96 #> V10 | 0 | 0.08 | 1.00 | 0.18 | 0.02 | 0.93 | 0.96 #> V11 | 0 | 0.02 | 1.03 | 0.18 | 0.01 | 0.92 | 0.96 #> V12 | 0 | 0.00 | 1.04 | 0.27 | 0.00 | 0.84 | 0.97 #> #> Mean inter-item-correlation = 0.840 Cronbach's alpha = 0.969 #> #> Component 3 #> #> Item | Missings | Mean | SD | Skewness | Difficulty | Discrimination | alpha if deleted #> ------------------------------------------------------------------------------------------ #> V13 | 0 | 0.04 | 0.95 | 0.10 | 0.01 | 0.81 | 0.95 #> V14 | 0 | -0.02 | 0.96 | 0.24 | -0.01 | 0.93 | 0.91 #> V15 | 0 | -0.03 | 0.94 | 0.41 | -0.01 | 0.92 | 0.91 #> V16 | 0 | 0.03 | 0.96 | 0.28 | 0.01 | 0.82 | 0.94 #> #> Mean inter-item-correlation = 0.811 Cronbach's alpha = 0.945"},{"path":"https://easystats.github.io/performance/reference/check_model.html","id":null,"dir":"Reference","previous_headings":"","what":"Visual check of model assumptions — check_model","title":"Visual check of model assumptions — check_model","text":"Visual check of various model assumptions (normality of residuals, normality of random effects, linear relationship, homogeneity of variance, multicollinearity).
If check_model() doesn't work as expected, try setting verbose = TRUE to get hints about possible problems.","code":""},{"path":"https://easystats.github.io/performance/reference/check_model.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Visual check of model assumptions — check_model","text":"","code":"check_model(x, ...) # Default S3 method check_model( x, panel = TRUE, check = \"all\", detrend = TRUE, bandwidth = \"nrd\", type = \"density\", residual_type = NULL, show_dots = NULL, size_dot = 2, size_line = 0.8, size_title = 12, size_axis_title = base_size, base_size = 10, alpha = 0.2, alpha_dot = 0.8, colors = c(\"#3aaf85\", \"#1b6ca8\", \"#cd201f\"), theme = \"see::theme_lucid\", verbose = FALSE, ... )"},{"path":"https://easystats.github.io/performance/reference/check_model.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Visual check of model assumptions — check_model","text":"x A model object. ... Arguments passed down to the individual check functions, especially to check_predictions() and binned_residuals(). panel Logical, if TRUE, plots are arranged as panels; else, single plots for each diagnostic are returned. check Character vector, indicating which checks should be performed and plotted. May be one or more of \"all\", \"vif\", \"qq\", \"normality\", \"linearity\", \"ncv\", \"homogeneity\", \"outliers\", \"reqq\", \"pp_check\", \"binned_residuals\" or \"overdispersion\". Note that not all checks apply to all types of models (see 'Details'). \"reqq\" is a QQ-plot for random effects and only available for mixed models. \"ncv\" is an alias for \"linearity\", and checks for non-constant variance, i.e. for heteroscedasticity, as well as the linear relationship. By default, all possible checks are performed and plotted. detrend Logical. Should Q-Q/P-P plots be detrended? Defaults to TRUE for linear models or when residual_type = \"normal\". Defaults to FALSE for QQ plots based on simulated residuals (i.e. when residual_type = \"simulated\"). bandwidth A character string indicating the smoothing bandwidth to be used.
Unlike stats::density(), which used \"nrd0\" as default, the default used here is \"nrd\" (which seems to give more plausible results for non-Gaussian models). If problems with plotting occur, try to change to a different value. type Plot type for the posterior predictive checks plot. Can be \"density\", \"discrete_dots\", \"discrete_interval\" or \"discrete_both\" (the discrete_* options are appropriate for models with discrete - binary, integer or ordinal etc. - outcomes). residual_type Character, indicating the type of residuals to be used. For non-Gaussian models, the default is \"simulated\", which uses simulated residuals. These are based on simulate_residuals() and thus uses the DHARMa package to return randomized quantile residuals. For Gaussian models, the default is \"normal\", which uses the default residuals from the model. Setting residual_type = \"normal\" for non-Gaussian models will use a half-normal Q-Q plot of the absolute value of the standardized deviance residuals. show_dots Logical, if TRUE, will show data points in the plot. Set to FALSE for models with many observations, if generating the plot is too time-consuming. By default, show_dots = NULL. In this case check_model() tries to guess whether performance will be poor due to a very large model and thus automatically shows or hides dots. size_dot, size_line Size of line and dot-geoms. base_size, size_title, size_axis_title Base font size for axis and plot titles. alpha, alpha_dot The alpha level of the confidence bands and dot-geoms. Scalar from 0 to 1. colors Character vector with color codes (hex-format). Must be of length 3. First color is usually used for the reference lines, second color for dots, and third color for outliers or extreme values. theme String, indicating the name of the plot-theme. Must be in the format \"package::theme_name\" (e.g. \"ggplot2::theme_minimal\").
verbose If FALSE (default), suppress most warning messages.","code":""},{"path":"https://easystats.github.io/performance/reference/check_model.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Visual check of model assumptions — check_model","text":"The data frame that is used for plotting.","code":""},{"path":"https://easystats.github.io/performance/reference/check_model.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Visual check of model assumptions — check_model","text":"For Bayesian models from packages rstanarm or brms, models will be \"converted\" to their frequentist counterpart, using bayestestR::bayesian_as_frequentist. A more advanced model-check for Bayesian models will be implemented at a later stage. See also the related vignette.","code":""},{"path":"https://easystats.github.io/performance/reference/check_model.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Visual check of model assumptions — check_model","text":"This function just prepares the data for plotting. To create the plots, see needs to be installed. Furthermore, this function suppresses all possible warnings. In case you observe suspicious plots, please refer to the dedicated functions (like check_collinearity(), check_normality() etc.) to get informative messages and warnings.","code":""},{"path":"https://easystats.github.io/performance/reference/check_model.html","id":"posterior-predictive-checks","dir":"Reference","previous_headings":"","what":"Posterior Predictive Checks","title":"Visual check of model assumptions — check_model","text":"Posterior predictive checks can be used to look for systematic discrepancies between real and simulated data. It helps to see whether the type of model (distributional family) fits well to the data.
See check_predictions() for further details.","code":""},{"path":"https://easystats.github.io/performance/reference/check_model.html","id":"linearity-assumption","dir":"Reference","previous_headings":"","what":"Linearity Assumption","title":"Visual check of model assumptions — check_model","text":"The plot Linearity checks the assumption of linear relationship. However, the spread of dots also indicates possible heteroscedasticity (i.e. non-constant variance, hence, the alias \"ncv\" for this plot), thus it shows whether residuals have non-linear patterns. This plot helps to see whether predictors may have a non-linear relationship with the outcome, in which case the reference line may roughly indicate that relationship. A straight and horizontal line indicates that the model specification seems to be ok. But, for instance, if the line would be U-shaped, some of the predictors probably should better be modeled as quadratic term. See check_heteroscedasticity() for further details. Some caution is needed when interpreting these plots. Although these plots are helpful to check model assumptions, they do not necessarily indicate so-called \"lack of fit\", e.g. missed non-linear relationships or interactions. Thus, it is always recommended to also look at effect plots, including partial residuals.","code":""},{"path":"https://easystats.github.io/performance/reference/check_model.html","id":"homogeneity-of-variance","dir":"Reference","previous_headings":"","what":"Homogeneity of Variance","title":"Visual check of model assumptions — check_model","text":"This plot checks the assumption of equal variance (homoscedasticity). The desired pattern is that dots spread equally around a straight, horizontal line and show no apparent deviation.","code":""},{"path":"https://easystats.github.io/performance/reference/check_model.html","id":"influential-observations","dir":"Reference","previous_headings":"","what":"Influential Observations","title":"Visual check of model assumptions — check_model","text":"This plot is used to identify influential observations. If points in this plot fall outside Cook’s distance (the dashed lines) they are considered an influential observation.
See check_outliers() for further details.","code":""},{"path":"https://easystats.github.io/performance/reference/check_model.html","id":"multicollinearity","dir":"Reference","previous_headings":"","what":"Multicollinearity","title":"Visual check of model assumptions — check_model","text":"This plot checks for potential collinearity among predictors. In a nutshell, multicollinearity means that once you know the effect of one predictor, the value of knowing the other predictor is rather low. Multicollinearity might arise when a third, unobserved variable has a causal effect on each of the two predictors that are associated with the outcome. In such cases, the actual relationship that matters would be the association between the unobserved variable and the outcome. See check_collinearity() for further details.","code":""},{"path":"https://easystats.github.io/performance/reference/check_model.html","id":"normality-of-residuals","dir":"Reference","previous_headings":"","what":"Normality of Residuals","title":"Visual check of model assumptions — check_model","text":"This plot is used to determine if the residuals of the regression model are normally distributed. Usually, dots should fall along the line. If there is some deviation (mostly at the tails), this indicates that the model doesn't predict the outcome well for that range that shows larger deviations from the line. For generalized linear models and when residual_type = \"normal\", a half-normal Q-Q plot of the absolute value of the standardized deviance residuals is shown, however, the interpretation of the plot remains the same. See check_normality() for further details. Usually, for generalized linear (mixed) models, a test for uniformity of residuals based on simulated residuals is conducted (see next section).","code":""},{"path":"https://easystats.github.io/performance/reference/check_model.html","id":"uniformity-of-residuals","dir":"Reference","previous_headings":"","what":"Uniformity of Residuals","title":"Visual check of model assumptions — check_model","text":"For non-Gaussian models, when residual_type = \"simulated\" (the default for generalized linear (mixed) models), residuals are not expected to be normally distributed. In this case, the created Q-Q plot checks the uniformity of residuals. The interpretation of the plot is the same as for the normal Q-Q plot.
See simulate_residuals() and check_residuals() for further details.","code":""},{"path":"https://easystats.github.io/performance/reference/check_model.html","id":"overdispersion","dir":"Reference","previous_headings":"","what":"Overdispersion","title":"Visual check of model assumptions — check_model","text":"For count models, an overdispersion plot is shown. Overdispersion occurs when the observed variance is higher than the variance of a theoretical model. For Poisson models, variance increases with the mean and, therefore, variance usually (roughly) equals the mean value. If the variance is much higher, the data are \"overdispersed\". See check_overdispersion() for further details.","code":""},{"path":"https://easystats.github.io/performance/reference/check_model.html","id":"binned-residuals","dir":"Reference","previous_headings":"","what":"Binned Residuals","title":"Visual check of model assumptions — check_model","text":"For models from binomial families, a binned residuals plot is shown. Binned residual plots are achieved by cutting the data into bins and then plotting the average residual versus the average fitted value for each bin. If the model were true, one would expect about 95% of the residuals to fall inside the error bounds. See binned_residuals() for further details.","code":""},{"path":"https://easystats.github.io/performance/reference/check_model.html","id":"residuals-for-generalized-linear-models","dir":"Reference","previous_headings":"","what":"Residuals for (Generalized) Linear Models","title":"Visual check of model assumptions — check_model","text":"Plots that check the homogeneity of variance use standardized Pearson's residuals for generalized linear models, and standardized residuals for linear models. The plots for the normality of residuals (with overlayed normal curve) and for the linearity assumption use the default residuals for lm and glm (which are deviance residuals for glm).
The Q-Q plots use simulated residuals (see simulate_residuals()) for non-Gaussian models and standardized residuals for linear models.","code":""},{"path":"https://easystats.github.io/performance/reference/check_model.html","id":"troubleshooting","dir":"Reference","previous_headings":"","what":"Troubleshooting","title":"Visual check of model assumptions — check_model","text":"For models with many observations, or for more complex models in general, generating the plot might become very slow. One reason might be that the underlying graphic engine becomes slow for plotting many data points. In such cases, setting the argument show_dots = FALSE might help. Furthermore, look at the check argument and see if some of the model checks could be skipped, which also increases performance. If check_model() doesn't work as expected, try setting verbose = TRUE to get hints about possible problems.","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/reference/check_model.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Visual check of model assumptions — check_model","text":"","code":"# \\donttest{ m <- lm(mpg ~ wt + cyl + gear + disp, data = mtcars) check_model(m) data(sleepstudy, package = \"lme4\") m <- lme4::lmer(Reaction ~ Days + (Days | Subject), sleepstudy) check_model(m, panel = FALSE) # }"},{"path":"https://easystats.github.io/performance/reference/check_multimodal.html","id":null,"dir":"Reference","previous_headings":"","what":"Check if a distribution is unimodal or multimodal — check_multimodal","title":"Check if a distribution is unimodal or multimodal — check_multimodal","text":"For univariate distributions (one-dimensional vectors), this function performs an Ameijeiras-Alonso et al. (2018) excess mass test. For multivariate distributions (data frames), it uses mixture modelling. However, it seems that it always returns a significant result (suggesting that the distribution is multimodal).
A better method might be needed here.","code":""},{"path":"https://easystats.github.io/performance/reference/check_multimodal.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Check if a distribution is unimodal or multimodal — check_multimodal","text":"","code":"check_multimodal(x, ...)"},{"path":"https://easystats.github.io/performance/reference/check_multimodal.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Check if a distribution is unimodal or multimodal — check_multimodal","text":"x A numeric vector or a data frame. ... Arguments passed to or from other methods.","code":""},{"path":"https://easystats.github.io/performance/reference/check_multimodal.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Check if a distribution is unimodal or multimodal — check_multimodal","text":"Ameijeiras-Alonso, J., Crujeiras, R. M., and Rodríguez-Casal, A. (2019). Mode testing, critical bandwidth and excess mass. Test, 28(3), 900-919.","code":""},{"path":"https://easystats.github.io/performance/reference/check_multimodal.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Check if a distribution is unimodal or multimodal — check_multimodal","text":"","code":"# \\donttest{ # Univariate x <- rnorm(1000) check_multimodal(x) #> # Is the variable multimodal? #> #> The Ameijeiras-Alonso et al. (2018) excess mass test suggests that the #> hypothesis of a multimodal distribution cannot be rejected (excess mass #> = 0.02, p = 0.262). #> x <- c(rnorm(1000), rnorm(1000, 2)) check_multimodal(x) #> # Is the variable multimodal? #> #> The Ameijeiras-Alonso et al. (2018) excess mass test suggests that the #> distribution is significantly multimodal (excess mass = 0.02, p = #> 0.040). #> # Multivariate m <- data.frame( x = rnorm(200), y = rbeta(200, 2, 1) ) plot(m$x, m$y) check_multimodal(m) #> # Is the data multimodal?
#> #> The parametric mixture modelling test suggests that the multivariate #> distribution is significantly multimodal (Chi2(8) = 25.13, p = 0.001). #> m <- data.frame( x = c(rnorm(100), rnorm(100, 4)), y = c(rbeta(100, 2, 1), rbeta(100, 1, 4)) ) plot(m$x, m$y) check_multimodal(m) #> # Is the data multimodal? #> #> The parametric mixture modelling test suggests that the multivariate #> distribution is significantly multimodal (Chi2(11) = 78.42, p < .001). #> # }"},{"path":"https://easystats.github.io/performance/reference/check_normality.html","id":null,"dir":"Reference","previous_headings":"","what":"Check model for (non-)normality of residuals. — check_normality","title":"Check model for (non-)normality of residuals. — check_normality","text":"Check model for (non-)normality of residuals.","code":""},{"path":"https://easystats.github.io/performance/reference/check_normality.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Check model for (non-)normality of residuals. — check_normality","text":"","code":"check_normality(x, ...) # S3 method for class 'merMod' check_normality(x, effects = c(\"fixed\", \"random\"), ...)"},{"path":"https://easystats.github.io/performance/reference/check_normality.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Check model for (non-)normality of residuals. — check_normality","text":"x A model object. ... Currently not used. effects Should normality for residuals (\"fixed\") or random effects (\"random\") be tested? Only applies to mixed-effects models. May be abbreviated.","code":""},{"path":"https://easystats.github.io/performance/reference/check_normality.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Check model for (non-)normality of residuals. — check_normality","text":"The p-value of the test statistics.
A p-value < 0.05 indicates a significant deviation from normal distribution.","code":""},{"path":"https://easystats.github.io/performance/reference/check_normality.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Check model for (non-)normality of residuals. — check_normality","text":"check_normality() calls stats::shapiro.test and checks the standardized residuals (or studentized residuals for mixed models) for normal distribution. Note that this formal test almost always yields significant results for the distribution of residuals and visual inspection (e.g. Q-Q plots) are preferable. For generalized linear models, no formal statistical test is carried out. Rather, there is only a plot() method for GLMs. This plot shows a half-normal Q-Q plot of the absolute value of the standardized deviance residuals (in line with changes in plot.lm() for R 4.3+).","code":""},{"path":"https://easystats.github.io/performance/reference/check_normality.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Check model for (non-)normality of residuals. — check_normality","text":"For mixed-effects models, studentized residuals, and not standardized residuals, are used for the test. There is also a plot()-method implemented in the see-package.","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/reference/check_normality.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Check model for (non-)normality of residuals. — check_normality","text":"","code":"m <<- lm(mpg ~ wt + cyl + gear + disp, data = mtcars) check_normality(m) #> OK: residuals appear as normally distributed (p = 0.230).
#> # plot results x <- check_normality(m) plot(x) # \\donttest{ # QQ-plot plot(check_normality(m), type = \"qq\") # PP-plot plot(check_normality(m), type = \"pp\") # }"},{"path":"https://easystats.github.io/performance/reference/check_outliers.html","id":null,"dir":"Reference","previous_headings":"","what":"Outliers detection (check for influential observations) — check_outliers","title":"Outliers detection (check for influential observations) — check_outliers","text":"Checks locates influential observations (.e., \"outliers\") via several distance /clustering methods. several methods selected, returned \"Outlier\" vector composite outlier score, made average binary (0 1) results method. represents probability observation classified outlier least one method. decision rule used default classify outliers observations composite outlier score superior equal 0.5 (.e., classified outliers least half methods). See Details section description methods.","code":""},{"path":"https://easystats.github.io/performance/reference/check_outliers.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Outliers detection (check for influential observations) — check_outliers","text":"","code":"check_outliers(x, ...) # Default S3 method check_outliers( x, method = c(\"cook\", \"pareto\"), threshold = NULL, ID = NULL, verbose = TRUE, ... ) # S3 method for class 'numeric' check_outliers(x, method = \"zscore_robust\", threshold = NULL, ...) # S3 method for class 'data.frame' check_outliers(x, method = \"mahalanobis\", threshold = NULL, ID = NULL, ...) # S3 method for class 'performance_simres' check_outliers( x, type = \"default\", iterations = 100, alternative = \"two.sided\", ... 
)"},{"path":"https://easystats.github.io/performance/reference/check_outliers.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Outliers detection (check for influential observations) — check_outliers","text":"x model, data.frame, performance_simres simulate_residuals() DHARMa object. ... method = \"ics\", arguments ... passed ICSOutlier::ics.outlier(). method = \"mahalanobis\", passed stats::mahalanobis(). percentage_central can specified method = \"mcd\". objects class performance_simres DHARMa, arguments passed DHARMa::testOutliers(). method outlier detection method(s). Can \"\" \"cook\", \"pareto\", \"zscore\", \"zscore_robust\", \"iqr\", \"ci\", \"eti\", \"hdi\", \"bci\", \"mahalanobis\", \"mahalanobis_robust\", \"mcd\", \"ics\", \"optics\" \"lof\". threshold list containing threshold values method (e.g. list('mahalanobis' = 7, 'cook' = 1)), observation considered outlier. NULL, default values used (see 'Details'). numeric value given, used threshold method run. ID Optional, report ID column along row number. verbose Toggle warnings. type Type method test outliers. Can one \"default\", \"binomial\" \"bootstrap\". applies x object returned simulate_residuals() class DHARMa. See 'Details' ?DHARMa::testOutliers detailed description types. iterations Number simulations run. alternative character string specifying alternative hypothesis.","code":""},{"path":"https://easystats.github.io/performance/reference/check_outliers.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Outliers detection (check for influential observations) — check_outliers","text":"logical vector detected outliers nice printing method: check (message) whether outliers detected . information distance measure whether observation considered outlier can recovered as.data.frame() function.
Note function (silently) return vector FALSE non-supported data types character strings.","code":""},{"path":"https://easystats.github.io/performance/reference/check_outliers.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Outliers detection (check for influential observations) — check_outliers","text":"Outliers can defined particularly influential observations. methods rely computation distance metric, observations greater certain threshold considered outliers. Importantly, outliers detection methods meant provide information consider researcher, rather automatized procedure mindless application substitute thinking. example sentence reporting usage composite method : \"Based composite outlier score (see 'check_outliers' function 'performance' R package; Lüdecke et al., 2021) obtained via joint application multiple outliers detection algorithms (Z-scores, Iglewicz, 1993; Interquartile range (IQR); Mahalanobis distance, Cabana, 2019; Robust Mahalanobis distance, Gnanadesikan Kettenring, 1972; Minimum Covariance Determinant, Leys et al., 2018; Invariant Coordinate Selection, Archimbaud et al., 2018; OPTICS, Ankerst et al., 1999; Isolation Forest, Liu et al. 2008; Local Outlier Factor, Breunig et al., 2000), excluded n participants classified outliers least half methods used.\"","code":""},{"path":"https://easystats.github.io/performance/reference/check_outliers.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Outliers detection (check for influential observations) — check_outliers","text":"also plot()-method implemented see-package. 
Please note range distance-values along y-axis re-scaled range 0 1.","code":""},{"path":"https://easystats.github.io/performance/reference/check_outliers.html","id":"model-specific-methods","dir":"Reference","previous_headings":"","what":"Model-specific methods","title":"Outliers detection (check for influential observations) — check_outliers","text":"Cook's Distance: Among outlier detection methods, Cook's distance leverage less common basic Mahalanobis distance, still used. Cook's distance estimates variations regression coefficients removing observation, one one (Cook, 1977). Since Cook's distance metric F distribution p n-p degrees freedom, median point quantile distribution can used cut-off (Bollen, 1985). common approximation heuristic use 4 divided numbers observations, usually corresponds lower threshold (.e., outliers detected). works frequentist models. Bayesian models, see pareto. Pareto: reliability approximate convergence Bayesian models can assessed using estimates shape parameter k generalized Pareto distribution. estimated tail shape parameter k exceeds 0.5, user warned, although practice authors loo package observed good performance values k 0.7 (default threshold used performance).","code":""},{"path":"https://easystats.github.io/performance/reference/check_outliers.html","id":"univariate-methods","dir":"Reference","previous_headings":"","what":"Univariate methods","title":"Outliers detection (check for influential observations) — check_outliers","text":"Z-scores (\"zscore\", \"zscore_robust\"): Z-score, standard score, way describing data point deviance central value, terms standard deviations mean (\"zscore\") , case (\"zscore_robust\") default (Iglewicz, 1993), terms Median Absolute Deviation (MAD) median (robust measures dispersion centrality). default threshold classify outliers 1.959 (threshold = list(\"zscore\" = 1.959)), corresponding 2.5% (qnorm(0.975)) extreme observations (assuming data normally distributed).
Importantly, Z-score method univariate: computed column column. data frame passed, Z-score calculated variable separately, maximum (absolute) Z-score kept observations. Thus, observations extreme least one variable might detected outliers. Thus, method suited high dimensional data (many columns), returning liberal results (detecting many outliers). IQR (\"iqr\"): Using IQR (interquartile range) robust method developed John Tukey, often appears box-and-whisker plots (e.g., ggplot2::geom_boxplot). interquartile range range first third quartiles. Tukey considered outliers data point fell outside either 1.5 times (default threshold 1.7) IQR first third quartile. Similar Z-score method, univariate method outliers detection, returning outliers detected least one column, might thus suited high dimensional data. distance score IQR absolute deviation median upper lower IQR thresholds. , value divided IQR threshold, “standardize” facilitate interpretation. CI (\"ci\", \"eti\", \"hdi\", \"bci\"): Another univariate method compute, variable, sort \"confidence\" interval consider outliers values lying beyond edges interval. default, \"ci\" computes Equal-Tailed Interval (\"eti\"), types intervals available, Highest Density Interval (\"hdi\") Bias Corrected Accelerated Interval (\"bci\"). default threshold 0.95, considering outliers observations outside 95% CI variable. See bayestestR::ci() details intervals. distance score CI methods absolute deviation median upper lower CI thresholds.
, value divided difference upper lower CI bounds divided two, “standardize” facilitate interpretation.","code":""},{"path":"https://easystats.github.io/performance/reference/check_outliers.html","id":"multivariate-methods","dir":"Reference","previous_headings":"","what":"Multivariate methods","title":"Outliers detection (check for influential observations) — check_outliers","text":"Mahalanobis Distance: Mahalanobis distance (Mahalanobis, 1930) often used multivariate outliers detection distance takes account shape observations. default threshold often arbitrarily set deviation (terms SD MAD) mean (median) Mahalanobis distance. However, Mahalanobis distance can approximated Chi squared distribution (Rousseeuw Van Zomeren, 1990), can use alpha quantile chi-square distribution k degrees freedom (k number columns). default, alpha threshold set 0.025 (corresponding 2.5%; Cabana, 2019). criterion natural extension median plus minus coefficient times MAD method (Leys et al., 2013). Robust Mahalanobis Distance: robust version Mahalanobis distance using Orthogonalized Gnanadesikan-Kettenring pairwise estimator (Gnanadesikan Kettenring, 1972). Requires bigutilsr package. See bigutilsr::dist_ogk() function. Minimum Covariance Determinant (MCD): Another robust version Mahalanobis. Leys et al. (2018) argue Mahalanobis Distance robust way determine outliers, uses means covariances data - including outliers - determine individual difference scores. Minimum Covariance Determinant calculates mean covariance matrix based central subset data (default, 66%) deemed robust method identifying removing outliers regular Mahalanobis distance. method percentage_central argument allows specifying breakdown point (0.75, default, recommended Leys et al. 2018, commonly used alternative 0.50). Invariant Coordinate Selection (ICS): outlier detected using ICS, default uses alpha threshold 0.025 (corresponding 2.5% value) outliers classification.
Refer help-file ICSOutlier::ics.outlier() get details procedure. Note method = \"ics\" requires ICS ICSOutlier installed, takes time compute results. can speed computation time using parallel computing. Set number cores use options(mc.cores = 4) (example). OPTICS: Ordering Points Identify Clustering Structure (OPTICS) algorithm (Ankerst et al., 1999) using similar concepts DBSCAN (unsupervised clustering technique can used outliers detection). threshold argument passed minPts, corresponds minimum size cluster. default, size set 2 times number columns (Sander et al., 1998). Compared techniques, always detect several outliers (usually defined percentage extreme values), algorithm functions different manner always detect outliers. Note method = \"optics\" requires dbscan package installed, takes time compute results. Additionally, optics_xi (default 0.05) passed dbscan::extractXi() function refine cluster selection. Local Outlier Factor: Based K nearest neighbors algorithm, LOF compares local density point local densities neighbors instead computing distance center (Breunig et al., 2000). Points substantially lower density neighbors considered outliers. LOF score approximately 1 indicates density around point comparable neighbors. Scores significantly larger 1 indicate outliers. default threshold 0.025 classify outliers observations located qnorm(1-0.025) * SD) log-transformed LOF distance. Requires dbscan package.","code":""},{"path":"https://easystats.github.io/performance/reference/check_outliers.html","id":"methods-for-simulated-residuals","dir":"Reference","previous_headings":"","what":"Methods for simulated residuals","title":"Outliers detection (check for influential observations) — check_outliers","text":"approach detecting outliers based simulated residuals differs traditional methods may detecting outliers expected. Literally, approach compares observed simulated values. 
However, know deviation observed data model expectation, thus, term \"outlier\" taken grain salt. refers \"simulation outliers\". Basically, comparison tests whether observed data point outside simulated range. strongly recommended read related documentations DHARMa package, e.g. ?DHARMa::testOutliers.","code":""},{"path":"https://easystats.github.io/performance/reference/check_outliers.html","id":"threshold-specification","dir":"Reference","previous_headings":"","what":"Threshold specification","title":"Outliers detection (check for influential observations) — check_outliers","text":"Default thresholds currently specified follows:","code":"list( zscore = stats::qnorm(p = 1 - 0.001 / 2), zscore_robust = stats::qnorm(p = 1 - 0.001 / 2), iqr = 1.7, ci = 1 - 0.001, eti = 1 - 0.001, hdi = 1 - 0.001, bci = 1 - 0.001, cook = stats::qf(0.5, ncol(x), nrow(x) - ncol(x)), pareto = 0.7, mahalanobis = stats::qchisq(p = 1 - 0.001, df = ncol(x)), mahalanobis_robust = stats::qchisq(p = 1 - 0.001, df = ncol(x)), mcd = stats::qchisq(p = 1 - 0.001, df = ncol(x)), ics = 0.001, optics = 2 * ncol(x), optics_xi = 0.05, lof = 0.001 )"},{"path":"https://easystats.github.io/performance/reference/check_outliers.html","id":"meta-analysis-models","dir":"Reference","previous_headings":"","what":"Meta-analysis models","title":"Outliers detection (check for influential observations) — check_outliers","text":"meta-analysis models (e.g. objects class rma metafor package metagen package meta), studies defined outliers confidence interval lies outside confidence interval pooled effect.","code":""},{"path":"https://easystats.github.io/performance/reference/check_outliers.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Outliers detection (check for influential observations) — check_outliers","text":"Archimbaud, ., Nordhausen, K., Ruiz-Gazen, . (2018). ICS multivariate outlier detection application quality control. 
Computational Statistics Data Analysis, 128, 184-199. doi:10.1016/j.csda.2018.06.011 Gnanadesikan, R., Kettenring, J. R. (1972). Robust estimates, residuals, outlier detection multiresponse data. Biometrics, 81-124. Bollen, K. ., Jackman, R. W. (1985). Regression diagnostics: expository treatment outliers influential cases. Sociological Methods Research, 13(4), 510-542. Cabana, E., Lillo, R. E., Laniado, H. (2019). Multivariate outlier detection based robust Mahalanobis distance shrinkage estimators. arXiv preprint arXiv:1904.02596. Cook, R. D. (1977). Detection influential observation linear regression. Technometrics, 19(1), 15-18. Iglewicz, B., Hoaglin, D. C. (1993). detect handle outliers (Vol. 16). Asq Press. Leys, C., Klein, O., Dominicy, Y., Ley, C. (2018). Detecting multivariate outliers: Use robust variant Mahalanobis distance. Journal Experimental Social Psychology, 74, 150-156. Liu, F. T., Ting, K. M., Zhou, Z. H. (2008, December). Isolation forest. 2008 Eighth IEEE International Conference Data Mining (pp. 413-422). IEEE. Lüdecke, D., Ben-Shachar, M. S., Patil, ., Waggoner, P., Makowski, D. (2021). performance: R package assessment, comparison testing statistical models. Journal Open Source Software, 6(60), 3139. doi:10.21105/joss.03139 Thériault, R., Ben-Shachar, M. S., Patil, ., Lüdecke, D., Wiernik, B. M., Makowski, D. (2023). Check outliers! introduction identifying statistical outliers R easystats. Behavior Research Methods, 1-11. doi:10.3758/s13428-024-02356-w Rousseeuw, P. J., Van Zomeren, B. C. (1990). Unmasking multivariate outliers leverage points. 
Journal American Statistical Association, 85(411), 633-639.","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/reference/check_outliers.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Outliers detection (check for influential observations) — check_outliers","text":"","code":"data <- mtcars # Size nrow(data) = 32 # For single variables ------------------------------------------------------ outliers_list <- check_outliers(data$mpg) # Find outliers outliers_list # Show the row index of the outliers #> OK: No outliers detected. #> - Based on the following method and threshold: zscore_robust (3.291). #> - For variable: data$mpg #> #> as.numeric(outliers_list) # The object is a binary vector... #> [1] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 filtered_data <- data[!outliers_list, ] # And can be used to filter a data frame nrow(filtered_data) # New size (here still 32, since no outliers were detected) #> [1] 32 # Find all observations beyond +/- 2 SD check_outliers(data$mpg, method = \"zscore\", threshold = 2) #> 2 outliers detected: cases 18, 20. #> - Based on the following method and threshold: zscore (2). #> - For variable: data$mpg. #> #> ----------------------------------------------------------------------------- #> Outliers per variable (zscore): #> #> $`data$mpg` #> Row Distance_Zscore #> 18 18 2.042389 #> 20 20 2.291272 #> # For dataframes ------------------------------------------------------ check_outliers(data) # It works the same way on data frames #> OK: No outliers detected. #> - Based on the following method and threshold: mahalanobis (31.264). #> - For variables: mpg, cyl, disp, hp, drat, wt, qsec, vs, am, gear, carb #> #> # You can also use multiple methods at once outliers_list <- check_outliers(data, method = c( \"mahalanobis\", \"iqr\", \"zscore\" )) outliers_list #> OK: No outliers detected.
#> - Based on the following methods and thresholds: mahalanobis (31.264), iqr (2), zscore (3.291). #> - For variables: mpg, cyl, disp, hp, drat, wt, qsec, vs, am, gear, carb #> #> # Using `as.data.frame()`, we can access more details! outliers_info <- as.data.frame(outliers_list) head(outliers_info) #> Row Distance_Zscore Outlier_Zscore Distance_IQR Outlier_IQR #> 1 1 1.189901 0 0.4208483 0 #> 2 2 1.189901 0 0.2941176 0 #> 3 3 1.224858 0 0.5882353 0 #> 4 4 1.122152 0 0.5882353 0 #> 5 5 1.043081 0 0.3915954 0 #> 6 6 1.564608 0 0.6809025 0 #> Distance_Mahalanobis Outlier_Mahalanobis Outlier #> 1 8.946673 0 0 #> 2 8.287933 0 0 #> 3 8.937150 0 0 #> 4 6.096726 0 0 #> 5 5.429061 0 0 #> 6 8.877558 0 0 outliers_info$Outlier # Including the probability of being an outlier #> [1] 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000 #> [8] 0.0000000 0.3333333 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000 #> [15] 0.0000000 0.3333333 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000 #> [22] 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000 #> [29] 0.0000000 0.0000000 0.3333333 0.0000000 # And we can be more stringent in our outliers removal process filtered_data <- data[outliers_info$Outlier < 0.1, ] # We can run the function stratified by groups using `{datawizard}` package: group_iris <- datawizard::data_group(iris, \"Species\") check_outliers(group_iris) #> OK: No outliers detected. #> - Based on the following method and threshold: mahalanobis (20). #> - For variables: Sepal.Length, Sepal.Width, Petal.Length, Petal.Width #> #> # nolint start # nolint end # \\donttest{ # You can also run all the methods check_outliers(data, method = \"all\", verbose = FALSE) #> Package `parallel` is installed, but `check_outliers()` will run on a #> single core. #> To use multiple cores, set `options(mc.cores = 4)` (for example). #> 3 outliers detected: cases 9, 29, 31.
#> - Based on the following methods and thresholds: zscore_robust (3.291), #> iqr (2), ci (1), cook (1), pareto (0.7), mahalanobis (31.264), #> mahalanobis_robust (31.264), mcd (31.264), ics (0.001), optics (22), lof #> (0.001), optics_xi (0.05). #> - For variables: mpg, cyl, disp, hp, drat, wt, qsec, vs, am, gear, carb. #> Note: Outliers were classified as such by at least half of the selected methods. #> #> ----------------------------------------------------------------------------- #> #> The following observations were considered outliers for two or more #> variables by at least one of the selected methods: #> #> Row n_Zscore_robust n_IQR n_ci n_Mahalanobis_robust n_MCD #> 1 3 2 0 0 0 0 #> 2 9 2 1 1 (Multivariate) (Multivariate) #> 3 18 2 0 0 0 0 #> 4 19 2 0 2 0 (Multivariate) #> 5 20 2 0 2 0 0 #> 6 26 2 0 0 0 0 #> 7 28 2 0 1 (Multivariate) (Multivariate) #> 8 31 2 2 2 (Multivariate) (Multivariate) #> 9 32 2 0 0 0 0 #> 10 8 1 0 0 0 (Multivariate) #> 11 21 1 0 0 (Multivariate) 0 #> 12 27 1 0 0 (Multivariate) (Multivariate) #> 13 29 1 0 1 (Multivariate) (Multivariate) #> 14 30 1 0 0 0 (Multivariate) #> 15 7 0 0 0 (Multivariate) 0 #> 16 24 0 0 0 (Multivariate) 0 #> n_ICS #> 1 0 #> 2 (Multivariate) #> 3 0 #> 4 0 #> 5 0 #> 6 0 #> 7 0 #> 8 0 #> 9 0 #> 10 0 #> 11 0 #> 12 0 #> 13 (Multivariate) #> 14 0 #> 15 0 #> 16 0 # For statistical models --------------------------------------------- # select only mpg and disp (continuous) mt1 <- mtcars[, c(1, 3, 4)] # create some fake outliers and attach outliers to main df mt2 <- rbind(mt1, data.frame( mpg = c(37, 40), disp = c(300, 400), hp = c(110, 120) )) # fit model with outliers model <- lm(disp ~ mpg + hp, data = mt2) outliers_list <- check_outliers(model) plot(outliers_list) insight::get_data(model)[outliers_list, ] # Show outliers data #> disp mpg hp #> Maserati Bora 301 15 335 #> 2 400 40 120 # 
}"},{"path":"https://easystats.github.io/performance/reference/check_overdispersion.html","id":null,"dir":"Reference","previous_headings":"","what":"Check overdispersion (and underdispersion) of GL(M)M's — check_overdispersion","title":"Check overdispersion (and underdispersion) of GL(M)M's — check_overdispersion","text":"check_overdispersion() checks generalized linear (mixed) models overdispersion (underdispersion).","code":""},{"path":"https://easystats.github.io/performance/reference/check_overdispersion.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Check overdispersion (and underdispersion) of GL(M)M's — check_overdispersion","text":"","code":"check_overdispersion(x, ...) # S3 method for class 'performance_simres' check_overdispersion(x, alternative = c(\"two.sided\", \"less\", \"greater\"), ...)"},{"path":"https://easystats.github.io/performance/reference/check_overdispersion.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Check overdispersion (and underdispersion) of GL(M)M's — check_overdispersion","text":"x Fitted model class merMod, glmmTMB, glm, glm.nb (package MASS), object returned simulate_residuals(). ... Arguments passed simulate_residuals(). applies models zero-inflation component, models class glmmTMB nbinom1 nbinom2 family. 
alternative character string specifying alternative hypothesis.","code":""},{"path":"https://easystats.github.io/performance/reference/check_overdispersion.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Check overdispersion (and underdispersion) of GL(M)M's — check_overdispersion","text":"list results overdispersion test, like chi-squared statistics, p-value dispersion ratio.","code":""},{"path":"https://easystats.github.io/performance/reference/check_overdispersion.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Check overdispersion (and underdispersion) of GL(M)M's — check_overdispersion","text":"Overdispersion occurs observed variance higher variance theoretical model. Poisson models, variance increases mean , therefore, variance usually (roughly) equals mean value. variance much higher, data \"overdispersed\". less common case underdispersion, variance much lower mean.","code":""},{"path":"https://easystats.github.io/performance/reference/check_overdispersion.html","id":"interpretation-of-the-dispersion-ratio","dir":"Reference","previous_headings":"","what":"Interpretation of the Dispersion Ratio","title":"Check overdispersion (and underdispersion) of GL(M)M's — check_overdispersion","text":"dispersion ratio close one, Poisson model fits well data. Dispersion ratios larger one indicate overdispersion, thus negative binomial model similar might fit better data. Dispersion ratios much smaller one indicate underdispersion. 
p-value < .05 indicates either overdispersion underdispersion (first common).","code":""},{"path":"https://easystats.github.io/performance/reference/check_overdispersion.html","id":"overdispersion-in-poisson-models","dir":"Reference","previous_headings":"","what":"Overdispersion in Poisson Models","title":"Check overdispersion (and underdispersion) of GL(M)M's — check_overdispersion","text":"Poisson models, overdispersion test based code Gelman Hill (2007), page 115.","code":""},{"path":"https://easystats.github.io/performance/reference/check_overdispersion.html","id":"overdispersion-in-negative-binomial-or-zero-inflated-models","dir":"Reference","previous_headings":"","what":"Overdispersion in Negative Binomial or Zero-Inflated Models","title":"Check overdispersion (and underdispersion) of GL(M)M's — check_overdispersion","text":"negative binomial (mixed) models models zero-inflation component, overdispersion test based simulated residuals (see simulate_residuals()).","code":""},{"path":"https://easystats.github.io/performance/reference/check_overdispersion.html","id":"overdispersion-in-mixed-models","dir":"Reference","previous_headings":"","what":"Overdispersion in Mixed Models","title":"Check overdispersion (and underdispersion) of GL(M)M's — check_overdispersion","text":"merMod- glmmTMB-objects, check_overdispersion() based code GLMM FAQ, section can deal overdispersion GLMMs?. Note function returns approximate estimate overdispersion parameter. 
Using approach inaccurate zero-inflated negative binomial mixed models (fitted glmmTMB), thus, cases, overdispersion test based simulate_residuals() (identical check_overdispersion(simulate_residuals(model))).","code":""},{"path":"https://easystats.github.io/performance/reference/check_overdispersion.html","id":"how-to-fix-overdispersion","dir":"Reference","previous_headings":"","what":"How to fix Overdispersion","title":"Check overdispersion (and underdispersion) of GL(M)M's — check_overdispersion","text":"Overdispersion can fixed either modeling dispersion parameter, choosing different distributional family (like Quasi-Poisson, negative binomial, see Gelman Hill (2007), pages 115-116).","code":""},{"path":"https://easystats.github.io/performance/reference/check_overdispersion.html","id":"tests-based-on-simulated-residuals","dir":"Reference","previous_headings":"","what":"Tests based on simulated residuals","title":"Check overdispersion (and underdispersion) of GL(M)M's — check_overdispersion","text":"certain models, resp. model certain families, tests based simulated residuals (see simulate_residuals()). usually accurate testing models traditionally used Pearson residuals. However, simulating complex models, mixed models models zero-inflation, several important considerations. Arguments specified ... passed simulate_residuals(), relies DHARMa::simulateResiduals() (therefore, arguments ... passed DHARMa). defaults DHARMa set conservative option works models. However, many cases, help advises use different settings particular situations particular models. 
recommended read 'Details' ?DHARMa::simulateResiduals closely understand implications simulation process arguments modified get accurate results.","code":""},{"path":"https://easystats.github.io/performance/reference/check_overdispersion.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Check overdispersion (and underdispersion) of GL(M)M's — check_overdispersion","text":"Bolker B et al. (2017): GLMM FAQ. Gelman, ., Hill, J. (2007). Data analysis using regression multilevel/hierarchical models. Cambridge; New York: Cambridge University Press.","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/reference/check_overdispersion.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Check overdispersion (and underdispersion) of GL(M)M's — check_overdispersion","text":"","code":"data(Salamanders, package = \"glmmTMB\") m <- glm(count ~ spp + mined, family = poisson, data = Salamanders) check_overdispersion(m) #> # Overdispersion test #> #> dispersion ratio = 2.946 #> Pearson's Chi-Squared = 1873.710 #> p-value = < 0.001 #> #> Overdispersion detected."},{"path":"https://easystats.github.io/performance/reference/check_predictions.html","id":null,"dir":"Reference","previous_headings":"","what":"Posterior predictive checks — check_predictions","title":"Posterior predictive checks — check_predictions","text":"Posterior predictive checks mean \"simulating replicated data fitted model comparing observed data\" (Gelman Hill, 2007, p. 158). Posterior predictive checks can used \"look systematic discrepancies real simulated data\" (Gelman et al. 2014, p. 169). performance provides posterior predictive check methods variety frequentist models (e.g., lm, merMod, glmmTMB, ...). Bayesian models, model passed bayesplot::pp_check(). 
check_predictions() work expected, try setting verbose = TRUE get hints possible problems.","code":""},{"path":"https://easystats.github.io/performance/reference/check_predictions.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Posterior predictive checks — check_predictions","text":"","code":"check_predictions(object, ...) # Default S3 method check_predictions( object, iterations = 50, check_range = FALSE, re_formula = NULL, bandwidth = \"nrd\", type = \"density\", verbose = TRUE, ... )"},{"path":"https://easystats.github.io/performance/reference/check_predictions.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Posterior predictive checks — check_predictions","text":"object statistical model. ... Passed simulate(). iterations number draws simulate/bootstrap. check_range Logical, TRUE, includes plot minimum value original response minimum values replicated responses, maximum value. plot helps judging whether variation original data captured model (Gelman et al. 2020, pp.163). minimum maximum values y inside range related minimum maximum values yrep. re_formula Formula containing group-level effects (random effects) considered simulated data. NULL (default), condition random effects. NA ~0, condition random effects. See simulate() lme4. bandwidth character string indicating smoothing bandwidth used. Unlike stats::density(), used \"nrd0\" default, default used \"nrd\" (seems give plausible results non-Gaussian models). problems plotting occur, try change different value. type Plot type posterior predictive checks plot. Can \"density\", \"discrete_dots\", \"discrete_interval\" \"discrete_both\" (discrete_* options appropriate models discrete - binary, integer ordinal etc. - outcomes). 
verbose Toggle warnings.","code":""},{"path":"https://easystats.github.io/performance/reference/check_predictions.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Posterior predictive checks — check_predictions","text":"data frame simulated responses original response vector.","code":""},{"path":"https://easystats.github.io/performance/reference/check_predictions.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Posterior predictive checks — check_predictions","text":"example posterior predictive checks can also used model comparison Figure 6 Gabry et al. 2019, Figure 6. model shown right panel (b) can simulate new data similar observed outcome model left panel (). Thus, model (b) likely preferred model ().","code":""},{"path":"https://easystats.github.io/performance/reference/check_predictions.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Posterior predictive checks — check_predictions","text":"Every model object simulate()-method work check_predictions(). R 3.6.0 higher, bayesplot (package imports bayesplot rstanarm brms) loaded, pp_check() also available alias check_predictions(). check_predictions() work expected, try setting verbose = TRUE get hints possible problems.","code":""},{"path":"https://easystats.github.io/performance/reference/check_predictions.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Posterior predictive checks — check_predictions","text":"Gabry, J., Simpson, D., Vehtari, ., Betancourt, M., Gelman, . (2019). Visualization Bayesian workflow. Journal Royal Statistical Society: Series (Statistics Society), 182(2), 389–402. https://doi.org/10.1111/rssa.12378 Gelman, ., Hill, J. (2007). Data analysis using regression multilevel/hierarchical models. Cambridge; New York: Cambridge University Press. Gelman, ., Carlin, J. B., Stern, H. S., Dunson, D. B., Vehtari, ., Rubin, D. B. (2014). 
Bayesian data analysis. (Third edition). CRC Press. Gelman, A., Hill, J., Vehtari, A. (2020). Regression and Other Stories. Cambridge University Press.","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/reference/check_predictions.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Posterior predictive checks — check_predictions","text":"","code":"# linear model model <- lm(mpg ~ disp, data = mtcars) check_predictions(model) # discrete/integer outcome set.seed(99) d <- iris d$skewed <- rpois(150, 1) model <- glm( skewed ~ Species + Petal.Length + Petal.Width, family = poisson(), data = d ) check_predictions(model, type = \"discrete_both\")"},{"path":"https://easystats.github.io/performance/reference/check_residuals.html","id":null,"dir":"Reference","previous_headings":"","what":"Check uniformity of simulated residuals — check_residuals","title":"Check uniformity of simulated residuals — check_residuals","text":"check_residuals() checks generalized linear (mixed) models uniformity randomized quantile residuals, can used identify typical model misspecification problems, over-/underdispersion, zero-inflation, residual spatial temporal autocorrelation.","code":""},{"path":"https://easystats.github.io/performance/reference/check_residuals.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Check uniformity of simulated residuals — check_residuals","text":"","code":"check_residuals(x, ...) # Default S3 method check_residuals(x, alternative = c(\"two.sided\", \"less\", \"greater\"), ...)"},{"path":"https://easystats.github.io/performance/reference/check_residuals.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Check uniformity of simulated residuals — check_residuals","text":"x object returned simulate_residuals() DHARMa::simulateResiduals(). ... Passed stats::ks.test(). alternative character string specifying alternative hypothesis.
See stats::ks.test() details.","code":""},{"path":"https://easystats.github.io/performance/reference/check_residuals.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Check uniformity of simulated residuals — check_residuals","text":"p-value test statistics.","code":""},{"path":"https://easystats.github.io/performance/reference/check_residuals.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Check uniformity of simulated residuals — check_residuals","text":"Uniformity residuals checked using Kolmogorov-Smirnov test. plot() method visualize distribution residuals. test uniformity basically tests extent observed values deviate model expectations (.e. simulated values). sense, check_residuals() function similar goals like check_predictions().","code":""},{"path":"https://easystats.github.io/performance/reference/check_residuals.html","id":"tests-based-on-simulated-residuals","dir":"Reference","previous_headings":"","what":"Tests based on simulated residuals","title":"Check uniformity of simulated residuals — check_residuals","text":"certain models, resp. model certain families, tests like check_zeroinflation() check_overdispersion() based simulated residuals. usually accurate tests traditionally used Pearson residuals. However, simulating complex models, mixed models models zero-inflation, several important considerations. simulate_residuals() relies DHARMa::simulateResiduals(), additional arguments specified ... passed function. defaults DHARMa set conservative option works models. However, many cases, help advises use different settings particular situations particular models. 
recommended read 'Details' ?DHARMa::simulateResiduals closely understand implications simulation process arguments modified get accurate results.","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/reference/check_residuals.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Check uniformity of simulated residuals — check_residuals","text":"","code":"dat <- DHARMa::createData(sampleSize = 100, overdispersion = 0.5, family = poisson()) m <- glm(observedResponse ~ Environment1, family = poisson(), data = dat) res <- simulate_residuals(m) check_residuals(res) #> Warning: Non-uniformity of simulated residuals detected (p = 0.021). #>"},{"path":"https://easystats.github.io/performance/reference/check_singularity.html","id":null,"dir":"Reference","previous_headings":"","what":"Check mixed models for boundary fits — check_singularity","title":"Check mixed models for boundary fits — check_singularity","text":"Check mixed models boundary fits.","code":""},{"path":"https://easystats.github.io/performance/reference/check_singularity.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Check mixed models for boundary fits — check_singularity","text":"","code":"check_singularity(x, tolerance = 1e-05, ...)"},{"path":"https://easystats.github.io/performance/reference/check_singularity.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Check mixed models for boundary fits — check_singularity","text":"x mixed model. tolerance Indicates value convergence result accepted. larger tolerance , stricter test . ... 
Currently used.","code":""},{"path":"https://easystats.github.io/performance/reference/check_singularity.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Check mixed models for boundary fits — check_singularity","text":"TRUE model fit singular.","code":""},{"path":"https://easystats.github.io/performance/reference/check_singularity.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Check mixed models for boundary fits — check_singularity","text":"model \"singular\", means dimensions variance-covariance matrix estimated exactly zero. often occurs mixed models complex random effects structures. \"singular models statistically well defined (theoretically sensible true maximum likelihood estimate correspond singular fit), real concerns (1) singular fits correspond overfitted models may poor power; (2) chances numerical problems mis-convergence higher singular models (e.g. may computationally difficult compute profile confidence intervals models); (3) standard inferential procedures Wald statistics likelihood ratio tests may inappropriate.\" (lme4 Reference Manual) gold-standard deal singularity random-effects specification choose. Beside using fully Bayesian methods (informative priors), proposals frequentist framework : avoid fitting overly complex models, variance-covariance matrices can estimated precisely enough (Matuschek et al. 2017) use form model selection choose model balances predictive accuracy overfitting/type error (Bates et al. 2015, Matuschek et al. 2017) \"keep maximal\", .e. fit complex model consistent experimental design, removing terms required allow non-singular fit (Barr et al. 2013) since version 1.1.9, glmmTMB package allows use priors frequentist framework, . One recommendation use Gamma prior (Chung et al. 2013). mean may vary 1 large values (like 1e8), shape parameter set value 2.5. can update() model specified prior. 
glmmTMB, code look like : Large values mean parameter Gamma prior large impact random effects variances terms \"bias\". Thus, 1 fix singular fit, can safely try larger values. Note different meaning singularity convergence: singularity indicates issue \"true\" best estimate, i.e. whether maximum likelihood estimation variance-covariance matrix random effects positive definite semi-definite. Convergence question whether can assume numerical optimization worked correctly .","code":"# \"model\" is an object of class glmmTMB prior <- data.frame( prior = \"gamma(1, 2.5)\", # mean can be 1, but even 1e8 class = \"ranef\" # for random effects ) model_with_priors <- update(model, priors = prior)"},{"path":"https://easystats.github.io/performance/reference/check_singularity.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Check mixed models for boundary fits — check_singularity","text":"Bates D, Kliegl R, Vasishth S, Baayen H. Parsimonious Mixed Models. arXiv:1506.04967, June 2015. Barr DJ, Levy R, Scheepers C, Tily HJ. Random effects structure confirmatory hypothesis testing: Keep maximal. Journal Memory Language, 68(3):255-278, April 2013. Chung Y, Rabe-Hesketh S, Dorie V, Gelman A, Liu J. 2013. \"Nondegenerate Penalized Likelihood Estimator Variance Parameters Multilevel Models.\" Psychometrika 78 (4): 685–709. doi:10.1007/s11336-013-9328-2 Matuschek H, Kliegl R, Vasishth S, Baayen H, Bates D. Balancing type error power linear mixed models. Journal Memory Language, 94:305-315, 2017.
lme4 Reference Manual, https://cran.r-project.org/package=lme4","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/reference/check_singularity.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Check mixed models for boundary fits — check_singularity","text":"","code":"data(sleepstudy, package = \"lme4\") set.seed(123) sleepstudy$mygrp <- sample(1:5, size = 180, replace = TRUE) sleepstudy$mysubgrp <- NA for (i in 1:5) { filter_group <- sleepstudy$mygrp == i sleepstudy$mysubgrp[filter_group] <- sample(1:30, size = sum(filter_group), replace = TRUE) } model <- lme4::lmer( Reaction ~ Days + (1 | mygrp / mysubgrp) + (1 | Subject), data = sleepstudy ) #> boundary (singular) fit: see help('isSingular') check_singularity(model) #> [1] TRUE # \\dontrun{ # Fixing singularity issues using priors in glmmTMB # Example taken from `vignette(\"priors\", package = \"glmmTMB\")` dat <- readRDS(system.file( \"vignette_data\", \"gophertortoise.rds\", package = \"glmmTMB\" )) model <- glmmTMB::glmmTMB( shells ~ prev + offset(log(Area)) + factor(year) + (1 | Site), family = poisson, data = dat ) # singular fit check_singularity(model) #> [1] TRUE # impose Gamma prior on random effects parameters prior <- data.frame( prior = \"gamma(1, 2.5)\", # mean can be 1, but even 1e8 class = \"ranef\" # for random effects ) model_with_priors <- update(model, priors = prior) # no singular fit check_singularity(model_with_priors) #> [1] FALSE # }"},{"path":"https://easystats.github.io/performance/reference/check_sphericity.html","id":null,"dir":"Reference","previous_headings":"","what":"Check model for violation of sphericity — check_sphericity","title":"Check model for violation of sphericity — check_sphericity","text":"Check model violation sphericity. 
Bartlett's Test Sphericity (used correlation matrices factor analyses), see check_sphericity_bartlett.","code":""},{"path":"https://easystats.github.io/performance/reference/check_sphericity.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Check model for violation of sphericity — check_sphericity","text":"","code":"check_sphericity(x, ...)"},{"path":"https://easystats.github.io/performance/reference/check_sphericity.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Check model for violation of sphericity — check_sphericity","text":"x model object. ... Arguments passed car::Anova.","code":""},{"path":"https://easystats.github.io/performance/reference/check_sphericity.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Check model for violation of sphericity — check_sphericity","text":"Invisibly returns p-values test statistics. p-value < 0.05 indicates violation sphericity.","code":""},{"path":"https://easystats.github.io/performance/reference/check_sphericity.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Check model for violation of sphericity — check_sphericity","text":"","code":"data(Soils, package = \"carData\") soils.mod <- lm( cbind(pH, N, Dens, P, Ca, Mg, K, Na, Conduc) ~ Block + Contour * Depth, data = Soils ) check_sphericity(Manova(soils.mod)) #> OK: Data seems to be spherical (p > .999). #>"},{"path":"https://easystats.github.io/performance/reference/check_symmetry.html","id":null,"dir":"Reference","previous_headings":"","what":"Check distribution symmetry — check_symmetry","title":"Check distribution symmetry — check_symmetry","text":"Uses Hotelling Solomons test symmetry testing standardized nonparametric skew (\\(\\frac{(Mean - Median)}{SD}\\)) different 0. 
underlying assumption Wilcoxon signed-rank test.","code":""},{"path":"https://easystats.github.io/performance/reference/check_symmetry.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Check distribution symmetry — check_symmetry","text":"","code":"check_symmetry(x, ...)"},{"path":"https://easystats.github.io/performance/reference/check_symmetry.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Check distribution symmetry — check_symmetry","text":"x Model numeric vector ... used.","code":""},{"path":"https://easystats.github.io/performance/reference/check_symmetry.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Check distribution symmetry — check_symmetry","text":"","code":"V <- suppressWarnings(wilcox.test(mtcars$mpg)) check_symmetry(V) #> OK: Data appears symmetrical (p = 0.119). #>"},{"path":"https://easystats.github.io/performance/reference/check_zeroinflation.html","id":null,"dir":"Reference","previous_headings":"","what":"Check for zero-inflation in count models — check_zeroinflation","title":"Check for zero-inflation in count models — check_zeroinflation","text":"check_zeroinflation() checks whether count models over- or underfitting zeros outcome.","code":""},{"path":"https://easystats.github.io/performance/reference/check_zeroinflation.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Check for zero-inflation in count models — check_zeroinflation","text":"","code":"check_zeroinflation(x, ...) # Default S3 method check_zeroinflation(x, tolerance = 0.05, ...) # S3 method for class 'performance_simres' check_zeroinflation( x, tolerance = 0.1, alternative = c(\"two.sided\", \"less\", \"greater\"), ...
)"},{"path":"https://easystats.github.io/performance/reference/check_zeroinflation.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Check for zero-inflation in count models — check_zeroinflation","text":"x Fitted model class merMod, glmmTMB, glm, glm.nb (package MASS). ... Arguments passed simulate_residuals(). applies models zero-inflation component, models class glmmTMB nbinom1 nbinom2 family. tolerance tolerance ratio observed predicted zeros considered over- or underfitting zeros. ratio 1 +/- tolerance considered OK, ratio beyond threshold indicate over- or underfitting. alternative character string specifying alternative hypothesis.","code":""},{"path":"https://easystats.github.io/performance/reference/check_zeroinflation.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Check for zero-inflation in count models — check_zeroinflation","text":"list information amount predicted observed zeros outcome, well ratio two values.","code":""},{"path":"https://easystats.github.io/performance/reference/check_zeroinflation.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Check for zero-inflation in count models — check_zeroinflation","text":"amount observed zeros larger amount predicted zeros, model underfitting zeros, indicates zero-inflation data. cases, recommended use negative binomial zero-inflated models. case negative binomial models, models zero-inflation component, hurdle models, results check_zeroinflation() based simulate_residuals(), i.e. check_zeroinflation(simulate_residuals(model)) internally called necessary.","code":""},{"path":"https://easystats.github.io/performance/reference/check_zeroinflation.html","id":"tests-based-on-simulated-residuals","dir":"Reference","previous_headings":"","what":"Tests based on simulated residuals","title":"Check for zero-inflation in count models — check_zeroinflation","text":"certain models, resp.
model certain families, tests based simulated residuals (see simulate_residuals()). usually accurate testing models traditionally used Pearson residuals. However, simulating complex models, mixed models models zero-inflation, several important considerations. Arguments specified ... passed simulate_residuals(), relies DHARMa::simulateResiduals() (therefore, arguments ... passed DHARMa). defaults DHARMa set conservative option works models. However, many cases, help advises use different settings particular situations particular models. recommended read 'Details' ?DHARMa::simulateResiduals closely understand implications simulation process arguments modified get accurate results.","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/reference/check_zeroinflation.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Check for zero-inflation in count models — check_zeroinflation","text":"","code":"data(Salamanders, package = \"glmmTMB\") m <- glm(count ~ spp + mined, family = poisson, data = Salamanders) check_zeroinflation(m) #> # Check for zero-inflation #> #> Observed zeros: 387 #> Predicted zeros: 298 #> Ratio: 0.77 #> #> Model is underfitting zeros (probable zero-inflation). 
# for models with zero-inflation component, it's better to carry out # the check for zero-inflation using simulated residuals m <- glmmTMB::glmmTMB( count ~ spp + mined, ziformula = ~ mined + spp, family = poisson, data = Salamanders ) res <- simulate_residuals(m) check_zeroinflation(res) #> # Check for zero-inflation #> #> Observed zeros: 387 #> Predicted zeros: 387 #> Ratio: 1.00 #> #> Model seems ok, ratio of observed and predicted zeros is within the #> tolerance range (p > .999)."},{"path":"https://easystats.github.io/performance/reference/classify_distribution.html","id":null,"dir":"Reference","previous_headings":"","what":"Classify the distribution of a model-family using machine learning — classify_distribution","title":"Classify the distribution of a model-family using machine learning — classify_distribution","text":"Classify distribution model-family using machine learning","code":""},{"path":"https://easystats.github.io/performance/reference/classify_distribution.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Classify the distribution of a model-family using machine learning — classify_distribution","text":"trained model classify distributions, used check_distribution() function.","code":""},{"path":"https://easystats.github.io/performance/reference/compare_performance.html","id":null,"dir":"Reference","previous_headings":"","what":"Compare performance of different models — compare_performance","title":"Compare performance of different models — compare_performance","text":"compare_performance() computes indices model performance different models hence allows comparison indices across models.","code":""},{"path":"https://easystats.github.io/performance/reference/compare_performance.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Compare performance of different models — compare_performance","text":"","code":"compare_performance( ..., metrics = \"all\", rank = FALSE, 
estimator = \"ML\", verbose = TRUE )"},{"path":"https://easystats.github.io/performance/reference/compare_performance.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Compare performance of different models — compare_performance","text":"... Multiple model objects (also different classes). metrics Can \"all\", \"common\" character vector metrics computed. See related documentation() object's class details. rank Logical, TRUE, models ranked according 'best' overall model performance. See 'Details'. estimator linear models. Corresponds different estimators standard deviation errors. estimator = \"ML\" (default, except performance_aic() model object class lmerMod), scaling done n (biased ML estimator), equivalent using AIC(logLik()). Setting \"REML\" give results AIC(logLik(..., REML = TRUE)). verbose Toggle warnings.","code":""},{"path":"https://easystats.github.io/performance/reference/compare_performance.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Compare performance of different models — compare_performance","text":"data frame one row per model one column per \"index\" (see metrics).","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/reference/compare_performance.html","id":"model-weights","dir":"Reference","previous_headings":"","what":"Model Weights","title":"Compare performance of different models — compare_performance","text":"information criteria (IC) requested metrics (i.e., \"all\", \"common\", \"AIC\", \"AICc\", \"BIC\", \"WAIC\", \"LOOIC\"), model weights based criteria also computed. IC except LOOIC, weights computed w = exp(-0.5 * delta_ic) / sum(exp(-0.5 * delta_ic)), delta_ic difference model's IC value smallest IC value model set (Burnham Anderson, 2002).
LOOIC, weights computed \"stacking weights\" using loo::stacking_weights().","code":""},{"path":"https://easystats.github.io/performance/reference/compare_performance.html","id":"ranking-models","dir":"Reference","previous_headings":"","what":"Ranking Models","title":"Compare performance of different models — compare_performance","text":"rank = TRUE, new column Performance_Score returned. score ranges from 0% to 100%, higher values indicating better model performance. Note score value necessarily sum up to 100%. Rather, calculation based normalizing indices (i.e. rescaling range 0 1), taking mean value indices model. rather quick heuristic, might helpful exploratory index. particular models different types (e.g. mixed models, classical linear models, logistic regression, ...), indices computed model. case index calculated specific model type, model gets NA value. indices NAs excluded calculating performance score. plot()-method compare_performance(), creates \"spiderweb\" plot, different indices normalized larger values indicate better model performance. Hence, points closer center indicate worse fit indices (see online-documentation details).","code":""},{"path":"https://easystats.github.io/performance/reference/compare_performance.html","id":"reml-versus-ml-estimator","dir":"Reference","previous_headings":"","what":"REML versus ML estimator","title":"Compare performance of different models — compare_performance","text":"default, estimator = \"ML\", means values information criteria (AIC, AICc, BIC) specific model classes (like models lme4) based ML-estimator, default behaviour AIC() classes setting REML = TRUE. default intentional, comparing information criteria based REML fits usually valid (might useful, though, models share fixed effects - however, usually case nested models, prerequisite LRT). Set estimator = \"REML\" explicitly return (AIC/...)
values defaults AIC.merMod().","code":""},{"path":"https://easystats.github.io/performance/reference/compare_performance.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Compare performance of different models — compare_performance","text":"also plot()-method implemented see-package.","code":""},{"path":"https://easystats.github.io/performance/reference/compare_performance.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Compare performance of different models — compare_performance","text":"Burnham, K. P., Anderson, D. R. (2002). Model selection multimodel inference: practical information-theoretic approach (2nd ed.). Springer-Verlag. doi:10.1007/b97636","code":""},{"path":"https://easystats.github.io/performance/reference/compare_performance.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Compare performance of different models — compare_performance","text":"","code":"data(iris) lm1 <- lm(Sepal.Length ~ Species, data = iris) lm2 <- lm(Sepal.Length ~ Species + Petal.Length, data = iris) lm3 <- lm(Sepal.Length ~ Species * Petal.Length, data = iris) compare_performance(lm1, lm2, lm3) #> # Comparison of Model Performance Indices #> #> Name | Model | AIC (weights) | AICc (weights) | BIC (weights) | R2 #> --------------------------------------------------------------------- #> lm1 | lm | 231.5 (<.001) | 231.7 (<.001) | 243.5 (<.001) | 0.619 #> lm2 | lm | 106.2 (0.566) | 106.6 (0.611) | 121.3 (0.964) | 0.837 #> lm3 | lm | 106.8 (0.434) | 107.6 (0.389) | 127.8 (0.036) | 0.840 #> #> Name | R2 (adj.) | RMSE | Sigma #> -------------------------------- #> lm1 | 0.614 | 0.510 | 0.515 #> lm2 | 0.833 | 0.333 | 0.338 #> lm3 | 0.835 | 0.330 | 0.336 compare_performance(lm1, lm2, lm3, rank = TRUE) #> # Comparison of Model Performance Indices #> #> Name | Model | R2 | R2 (adj.) 
| RMSE | Sigma | AIC weights | AICc weights #> ----------------------------------------------------------------------------- #> lm2 | lm | 0.837 | 0.833 | 0.333 | 0.338 | 0.566 | 0.611 #> lm3 | lm | 0.840 | 0.835 | 0.330 | 0.336 | 0.434 | 0.389 #> lm1 | lm | 0.619 | 0.614 | 0.510 | 0.515 | 3.65e-28 | 4.23e-28 #> #> Name | BIC weights | Performance-Score #> -------------------------------------- #> lm2 | 0.964 | 99.23% #> lm3 | 0.036 | 77.70% #> lm1 | 2.80e-27 | 0.00% m1 <- lm(mpg ~ wt + cyl, data = mtcars) m2 <- glm(vs ~ wt + mpg, data = mtcars, family = \"binomial\") m3 <- lme4::lmer(Petal.Length ~ Sepal.Length + (1 | Species), data = iris) compare_performance(m1, m2, m3) #> When comparing models, please note that probably not all models were fit #> from same data. #> # Comparison of Model Performance Indices #> #> Name | Model | AIC (weights) | AICc (weights) | BIC (weights) | RMSE | Sigma #> ------------------------------------------------------------------------------- #> m1 | lm | 156.0 (<.001) | 157.5 (<.001) | 161.9 (<.001) | 2.444 | 2.568 #> m2 | glm | 31.3 (>.999) | 32.2 (>.999) | 35.7 (>.999) | 0.359 | 1.000 #> m3 | lmerMod | 74.6 (<.001) | 74.9 (<.001) | 86.7 (<.001) | 0.279 | 0.283 #> #> Name | R2 | R2 (adj.) | Tjur's R2 | Log_loss | Score_log | Score_spherical #> ----------------------------------------------------------------------------- #> m1 | 0.830 | 0.819 | | | | #> m2 | | | 0.478 | 0.395 | -14.903 | 0.095 #> m3 | | | | | | #> #> Name | PCP | R2 (cond.) | R2 (marg.) 
| ICC #> ---------------------------------------------- #> m1 | | | | #> m2 | 0.743 | | | #> m3 | | 0.972 | 0.096 | 0.969"},{"path":"https://easystats.github.io/performance/reference/cronbachs_alpha.html","id":null,"dir":"Reference","previous_headings":"","what":"Cronbach's Alpha for Items or Scales — cronbachs_alpha","title":"Cronbach's Alpha for Items or Scales — cronbachs_alpha","text":"Compute various measures internal consistencies tests item-scales questionnaires.","code":""},{"path":"https://easystats.github.io/performance/reference/cronbachs_alpha.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Cronbach's Alpha for Items or Scales — cronbachs_alpha","text":"","code":"cronbachs_alpha(x, ...)"},{"path":"https://easystats.github.io/performance/reference/cronbachs_alpha.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Cronbach's Alpha for Items or Scales — cronbachs_alpha","text":"x matrix data frame. ... Currently used.","code":""},{"path":"https://easystats.github.io/performance/reference/cronbachs_alpha.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Cronbach's Alpha for Items or Scales — cronbachs_alpha","text":"Cronbach's Alpha value x.","code":""},{"path":"https://easystats.github.io/performance/reference/cronbachs_alpha.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Cronbach's Alpha for Items or Scales — cronbachs_alpha","text":"Cronbach's Alpha value x. 
value closer 1 indicates greater internal consistency, usually following rule thumb applied interpret results: α < 0.5 unacceptable, 0.5 < α < 0.6 poor, 0.6 < α < 0.7 questionable, 0.7 < α < 0.8 acceptable, everything > 0.8 good excellent.","code":""},{"path":"https://easystats.github.io/performance/reference/cronbachs_alpha.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Cronbach's Alpha for Items or Scales — cronbachs_alpha","text":"Bland, J. M., Altman, D. G. Statistics notes: Cronbach's alpha. BMJ 1997;314:572. 10.1136/bmj.314.7080.572","code":""},{"path":"https://easystats.github.io/performance/reference/cronbachs_alpha.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Cronbach's Alpha for Items or Scales — cronbachs_alpha","text":"","code":"data(mtcars) x <- mtcars[, c(\"cyl\", \"gear\", \"carb\", \"hp\")] cronbachs_alpha(x) #> [1] 0.09463206"},{"path":"https://easystats.github.io/performance/reference/display.performance_model.html","id":null,"dir":"Reference","previous_headings":"","what":"Print tables in different output formats — display.performance_model","title":"Print tables in different output formats — display.performance_model","text":"Prints tables (.e. data frame) different output formats. print_md() alias display(format = \"markdown\").","code":""},{"path":"https://easystats.github.io/performance/reference/display.performance_model.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Print tables in different output formats — display.performance_model","text":"","code":"# S3 method for class 'performance_model' display(object, format = \"markdown\", digits = 2, caption = NULL, ...) # S3 method for class 'performance_model' print_md( x, digits = 2, caption = \"Indices of model performance\", layout = \"horizontal\", ... 
) # S3 method for class 'compare_performance' print_md( x, digits = 2, caption = \"Comparison of Model Performance Indices\", layout = \"horizontal\", ... )"},{"path":"https://easystats.github.io/performance/reference/display.performance_model.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Print tables in different output formats — display.performance_model","text":"object, x object returned model_performance() compare_performance(). summary. format String, indicating output format. Currently, \"markdown\" supported. digits Number decimal places. caption Table caption string. NULL, table caption printed. ... Currently used. layout Table layout (can either \"horizontal\" \"vertical\").","code":""},{"path":"https://easystats.github.io/performance/reference/display.performance_model.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Print tables in different output formats — display.performance_model","text":"character vector. format = \"markdown\", return value character vector markdown-table format.","code":""},{"path":"https://easystats.github.io/performance/reference/display.performance_model.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Print tables in different output formats — display.performance_model","text":"display() useful table-output functions, usually printed formatted text-table console, formatted pretty table-rendering markdown documents, knitted rmarkdown PDF Word files. See vignette examples.","code":""},{"path":"https://easystats.github.io/performance/reference/display.performance_model.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Print tables in different output formats — display.performance_model","text":"","code":"model <- lm(mpg ~ wt + cyl, data = mtcars) mp <- model_performance(model) display(mp) #> #> #> |AIC | AICc | BIC | R2 | R2 (adj.) 
| RMSE | Sigma | #> |:------|:------:|:------:|:----:|:---------:|:----:|:-----:| #> |156.01 | 157.49 | 161.87 | 0.83 | 0.82 | 2.44 | 2.57 |"},{"path":"https://easystats.github.io/performance/reference/icc.html","id":null,"dir":"Reference","previous_headings":"","what":"Intraclass Correlation Coefficient (ICC) — icc","title":"Intraclass Correlation Coefficient (ICC) — icc","text":"function calculates intraclass-correlation coefficient (ICC) - sometimes also called variance partition coefficient (VPC) repeatability - mixed effects models. ICC can calculated models supported insight::get_variance(). models fitted brms-package, icc() might fail due large variety models families supported brms-package. cases, alternative ICC variance_decomposition(), based posterior predictive distribution (see 'Details').","code":""},{"path":"https://easystats.github.io/performance/reference/icc.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Intraclass Correlation Coefficient (ICC) — icc","text":"","code":"icc( model, by_group = FALSE, tolerance = 1e-05, ci = NULL, iterations = 100, ci_method = NULL, null_model = NULL, approximation = \"lognormal\", model_component = NULL, verbose = TRUE, ... ) variance_decomposition(model, re_formula = NULL, robust = TRUE, ci = 0.95, ...)"},{"path":"https://easystats.github.io/performance/reference/icc.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Intraclass Correlation Coefficient (ICC) — icc","text":"model (Bayesian) mixed effects model. by_group Logical, TRUE, icc() returns variance components random-effects level (multiple levels). See 'Details'. tolerance Tolerance singularity check random effects, decide whether compute random effect variances . Indicates value convergence result accepted. larger tolerance , stricter test . See performance::check_singularity(). ci Confidence resp. credible interval level. 
icc(), r2(), rmse(), confidence intervals based bootstrapped samples ICC, R2 RMSE value. See iterations. iterations Number bootstrap-replicates computing confidence intervals ICC, R2, RMSE etc. ci_method Character string, indicating bootstrap-method. NULL (default), case lme4::bootMer() used bootstrapped confidence intervals. However, bootstrapped intervals calculated way, try ci_method = \"boot\", falls back boot::boot(). may successfully return bootstrapped confidence intervals, bootstrapped samples may appropriate multilevel structure model. also option ci_method = \"analytical\", tries calculate analytical confidence assuming chi-squared distribution. However, intervals rather inaccurate often narrow. recommended calculate bootstrapped confidence intervals mixed models. null_model Optional, null model compute random effect variances, passed insight::get_variance(). Usually required calculation r-squared ICC fails null_model specified. calculating null model takes longer already fit null model, can pass , , speed process. approximation Character string, indicating approximation method distribution-specific (observation level, residual) variance. applies non-Gaussian models. Can \"lognormal\" (default), \"delta\" \"trigamma\". binomial models, default theoretical distribution specific variance, however, can also \"observation_level\". See Nakagawa et al. 2017, particular supplement 2, details. model_component models can zero-inflation component, specify component variances returned. NULL \"full\" (default), conditional zero-inflation component taken account. \"conditional\", conditional component considered. verbose Toggle warnings messages. ... Arguments passed lme4::bootMer() boot::boot() bootstrapped ICC, R2, RMSE etc.; variance_decomposition(), arguments passed brms::posterior_predict(). re_formula Formula containing group-level effects considered prediction. NULL (default), include group-level effects. 
Else, instance nested models, name specific group-level effect calculate variance decomposition group-level. See 'Details' ?brms::posterior_predict. robust Logical, TRUE, median instead mean used calculate central tendency variances.","code":""},{"path":"https://easystats.github.io/performance/reference/icc.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Intraclass Correlation Coefficient (ICC) — icc","text":"list two values, adjusted ICC unadjusted ICC. variance_decomposition(), list two values, decomposed ICC well credible intervals ICC.","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/reference/icc.html","id":"interpretation","dir":"Reference","previous_headings":"","what":"Interpretation","title":"Intraclass Correlation Coefficient (ICC) — icc","text":"ICC can interpreted \"proportion variance explained grouping structure population\". grouping structure entails measurements organized groups (e.g., test scores school can grouped classroom multiple classrooms classroom administered test) ICC indexes strongly measurements group resemble . index goes 0, grouping conveys information, 1, observations group identical (Gelman Hill, 2007, p. 258). word, ICC - sometimes conceptualized measurement repeatability - \"can also interpreted expected correlation two randomly drawn units group\" (Hox 2010: 15), although definition might apply mixed models complex random effects structures. 
ICC can help determine whether mixed model even necessary: ICC zero (close zero) means observations within clusters similar observations different clusters, setting random factor might necessary.","code":""},{"path":"https://easystats.github.io/performance/reference/icc.html","id":"difference-with-r-","dir":"Reference","previous_headings":"","what":"Difference with R2","title":"Intraclass Correlation Coefficient (ICC) — icc","text":"coefficient determination R2 (can computed r2()) quantifies proportion variance explained statistical model, definition mixed model complex (hence, different methods compute proxy exist). ICC related R2 ratios variance components. precisely, R2 proportion explained variance (full model), ICC proportion explained variance can attributed random effects. simple cases, ICC corresponds difference conditional R2 marginal R2 (see r2_nakagawa()).","code":""},{"path":"https://easystats.github.io/performance/reference/icc.html","id":"calculation","dir":"Reference","previous_headings":"","what":"Calculation","title":"Intraclass Correlation Coefficient (ICC) — icc","text":"ICC calculated dividing random effect variance, σ2i, total variance, .e. sum random effect variance residual variance, σ2ε.","code":""},{"path":"https://easystats.github.io/performance/reference/icc.html","id":"adjusted-and-unadjusted-icc","dir":"Reference","previous_headings":"","what":"Adjusted and unadjusted ICC","title":"Intraclass Correlation Coefficient (ICC) — icc","text":"icc() calculates adjusted unadjusted ICC, take sources uncertainty (.e. random effects) account. adjusted ICC relates random effects, unadjusted ICC also takes fixed effects variances account, precisely, fixed effects variance added denominator formula calculate ICC (see Nakagawa et al. 2017). Typically, adjusted ICC interest analysis random effects interest. 
icc() returns meaningful ICC also complex random effects structures, like models random slopes nested design (two levels) applicable models distributions Gaussian. details computation variances, see ?insight::get_variance.","code":""},{"path":"https://easystats.github.io/performance/reference/icc.html","id":"icc-for-unconditional-and-conditional-models","dir":"Reference","previous_headings":"","what":"ICC for unconditional and conditional models","title":"Intraclass Correlation Coefficient (ICC) — icc","text":"Usually, ICC calculated null model (\"unconditional model\"). However, according Raudenbush Bryk (2002) Rabe-Hesketh Skrondal (2012) also feasible compute ICC full models covariates (\"conditional models\") compare much, e.g., level-2 variable explains portion variation grouping structure (random intercept).","code":""},{"path":"https://easystats.github.io/performance/reference/icc.html","id":"icc-for-specific-group-levels","dir":"Reference","previous_headings":"","what":"ICC for specific group-levels","title":"Intraclass Correlation Coefficient (ICC) — icc","text":"proportion variance specific levels related overall model can computed setting by_group = TRUE. reported ICC variance (random effect) group compared total variance model. mixed models simple random intercept, identical classical (adjusted) ICC.","code":""},{"path":"https://easystats.github.io/performance/reference/icc.html","id":"variance-decomposition-for-brms-models","dir":"Reference","previous_headings":"","what":"Variance decomposition for brms-models","title":"Intraclass Correlation Coefficient (ICC) — icc","text":"model class brmsfit, icc() might fail due large variety models families supported brms package. cases, variance_decomposition() alternative ICC measure. function calculates variance decomposition based posterior predictive distribution. 
case, first, draws posterior predictive distribution conditioned group-level terms (posterior_predict(..., re_formula = NA)) calculated well draws distribution conditioned random effects (default, unless specified else re_formula) taken. , second, variances draws calculated. \"ICC\" ratio two variances. recommended way analyse random-effect-variances non-Gaussian models. possible compare variances across models, also specifying different group-level terms via re_formula-argument. Sometimes, variance posterior predictive distribution large, variance ratio output makes sense, e.g. negative. cases, might help use robust = TRUE.","code":""},{"path":"https://easystats.github.io/performance/reference/icc.html","id":"supported-models-and-model-families","dir":"Reference","previous_headings":"","what":"Supported models and model families","title":"Intraclass Correlation Coefficient (ICC) — icc","text":"single variance components required calculate marginal conditional r-squared values calculated using insight::get_variance() function. results validated solutions provided Nakagawa et al. (2017), particular examples shown Supplement 2 paper. model families validated results MuMIn package. means r-squared values returned r2_nakagawa() accurate reliable following mixed models model families: Bernoulli (logistic) regression Binomial regression (binary outcomes) Poisson Quasi-Poisson regression Negative binomial regression (including nbinom1, nbinom2 nbinom12 families) Gaussian regression (linear models) Gamma regression Tweedie regression Beta regression Ordered beta regression Following model families yet validated, work: Zero-inflated hurdle models Beta-binomial regression Compound Poisson regression Generalized Poisson regression Log-normal regression Skew-normal regression Extracting variance components models zero-inflation part straightforward, definitely clear distribution-specific variance calculated. 
Therefore, recommended carefully inspect results, probably validate models, e.g. Bayesian models (although results may roughly comparable). Log-normal regressions (e.g. lognormal() family glmmTMB gaussian(\"log\")) often low fixed effects variance (calculated suggested Nakagawa et al. 2017). results low ICC r-squared values, may meaningful.","code":""},{"path":"https://easystats.github.io/performance/reference/icc.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Intraclass Correlation Coefficient (ICC) — icc","text":"Hox, J. J. (2010). Multilevel analysis: techniques applications (2nd ed). New York: Routledge. Nakagawa, S., Johnson, P. C. D., Schielzeth, H. (2017). coefficient determination R2 intra-class correlation coefficient generalized linear mixed-effects models revisited expanded. Journal Royal Society Interface, 14(134), 20170213. Rabe-Hesketh, S., Skrondal, . (2012). Multilevel longitudinal modeling using Stata (3rd ed). College Station, Tex: Stata Press Publication. Raudenbush, S. W., Bryk, . S. (2002). Hierarchical linear models: applications data analysis methods (2nd ed). 
Thousand Oaks: Sage Publications.","code":""},{"path":"https://easystats.github.io/performance/reference/icc.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Intraclass Correlation Coefficient (ICC) — icc","text":"","code":"model <- lme4::lmer(Sepal.Length ~ Petal.Length + (1 | Species), data = iris) icc(model) #> # Intraclass Correlation Coefficient #> #> Adjusted ICC: 0.910 #> Unadjusted ICC: 0.311 # ICC for specific group-levels data(sleepstudy, package = \"lme4\") set.seed(12345) sleepstudy$grp <- sample(1:5, size = 180, replace = TRUE) sleepstudy$subgrp <- NA for (i in 1:5) { filter_group <- sleepstudy$grp == i sleepstudy$subgrp[filter_group] <- sample(1:30, size = sum(filter_group), replace = TRUE) } model <- lme4::lmer( Reaction ~ Days + (1 | grp / subgrp) + (1 | Subject), data = sleepstudy ) icc(model, by_group = TRUE) #> # ICC by Group #> #> Group | ICC #> ------------------ #> subgrp:grp | 0.017 #> Subject | 0.589 #> grp | 0.001"},{"path":"https://easystats.github.io/performance/reference/item_difficulty.html","id":null,"dir":"Reference","previous_headings":"","what":"Difficulty of Questionnaire Items — item_difficulty","title":"Difficulty of Questionnaire Items — item_difficulty","text":"Compute various measures internal consistencies tests item-scales questionnaires.","code":""},{"path":"https://easystats.github.io/performance/reference/item_difficulty.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Difficulty of Questionnaire Items — item_difficulty","text":"","code":"item_difficulty(x, maximum_value = NULL)"},{"path":"https://easystats.github.io/performance/reference/item_difficulty.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Difficulty of Questionnaire Items — item_difficulty","text":"x Depending function, x may matrix returned cor()-function, data frame items (e.g. test questionnaire). 
maximum_value Numeric value, indicating maximum value item. NULL (default), maximum taken maximum value columns x (assuming maximum value least appears data). NA, item's maximum value taken maximum. required maximum value present data, specify theoretical maximum using maximum_value.","code":""},{"path":"https://easystats.github.io/performance/reference/item_difficulty.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Difficulty of Questionnaire Items — item_difficulty","text":"data frame three columns: name(s) item(s), item difficulties item, ideal item difficulty.","code":""},{"path":"https://easystats.github.io/performance/reference/item_difficulty.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Difficulty of Questionnaire Items — item_difficulty","text":"Item difficulty item defined quotient sum actually achieved item maximum achievable score. function calculates item difficulty, range 0.2 0.8. Lower values signal difficult items, higher values close one sign easier items. ideal value item difficulty p + (1 - p) / 2, p = 1 / max(x). cases, ideal item difficulty lies 0.5 0.8.","code":""},{"path":"https://easystats.github.io/performance/reference/item_difficulty.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Difficulty of Questionnaire Items — item_difficulty","text":"Bortz, J., Döring, N. (2006). Quantitative Methoden der Datenerhebung. J. Bortz N. Döring, Forschungsmethoden und Evaluation. Springer: Berlin, Heidelberg: 137–293 Kelava , Moosbrugger H (2020). Deskriptivstatistische Itemanalyse und Testwertbestimmung. : Moosbrugger H, Kelava , editors. Testtheorie und Fragebogenkonstruktion. 
Berlin, Heidelberg: Springer, 143–158","code":""},{"path":"https://easystats.github.io/performance/reference/item_difficulty.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Difficulty of Questionnaire Items — item_difficulty","text":"","code":"data(mtcars) x <- mtcars[, c(\"cyl\", \"gear\", \"carb\", \"hp\")] item_difficulty(x) #> Item Difficulty #> #> Item | Difficulty | Ideal #> ------------------------- #> cyl | 0.02 | 0.50 #> gear | 0.01 | 0.50 #> carb | 0.01 | 0.50 #> hp | 0.44 | 0.50"},{"path":"https://easystats.github.io/performance/reference/item_discrimination.html","id":null,"dir":"Reference","previous_headings":"","what":"Discrimination of Questionnaire Items — item_discrimination","title":"Discrimination of Questionnaire Items — item_discrimination","text":"Compute various measures internal consistencies tests item-scales questionnaires.","code":""},{"path":"https://easystats.github.io/performance/reference/item_discrimination.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Discrimination of Questionnaire Items — item_discrimination","text":"","code":"item_discrimination(x, standardize = FALSE)"},{"path":"https://easystats.github.io/performance/reference/item_discrimination.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Discrimination of Questionnaire Items — item_discrimination","text":"x matrix data frame. standardize Logical, TRUE, data frame's vectors standardized. 
Recommended variables different measures / scales.","code":""},{"path":"https://easystats.github.io/performance/reference/item_discrimination.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Discrimination of Questionnaire Items — item_discrimination","text":"data frame item discrimination (corrected item-total correlations) item scale.","code":""},{"path":"https://easystats.github.io/performance/reference/item_discrimination.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Discrimination of Questionnaire Items — item_discrimination","text":"function calculates item discriminations (corrected item-total correlations item x remaining items) item scale. absolute value item discrimination indices 0.2. index 0.2 0.4 considered \"fair\", satisfactory index ranges 0.4 0.7. Items low discrimination indices often ambiguously worded examined. Items negative indices examined determine negative value obtained (e.g. reversed answer categories regarding positive negative poles).","code":""},{"path":"https://easystats.github.io/performance/reference/item_discrimination.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Discrimination of Questionnaire Items — item_discrimination","text":"Kelava , Moosbrugger H (2020). Deskriptivstatistische Itemanalyse und Testwertbestimmung. : Moosbrugger H, Kelava , editors. Testtheorie und Fragebogenkonstruktion. 
Berlin, Heidelberg: Springer, 143–158","code":""},{"path":"https://easystats.github.io/performance/reference/item_discrimination.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Discrimination of Questionnaire Items — item_discrimination","text":"","code":"data(mtcars) x <- mtcars[, c(\"cyl\", \"gear\", \"carb\", \"hp\")] item_discrimination(x) #> Item Discrimination #> #> Item | Discrimination #> --------------------- #> cyl | 0.83 #> gear | -0.13 #> carb | 0.75 #> hp | 0.88"},{"path":"https://easystats.github.io/performance/reference/item_intercor.html","id":null,"dir":"Reference","previous_headings":"","what":"Mean Inter-Item-Correlation — item_intercor","title":"Mean Inter-Item-Correlation — item_intercor","text":"Compute various measures internal consistencies tests item-scales questionnaires.","code":""},{"path":"https://easystats.github.io/performance/reference/item_intercor.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Mean Inter-Item-Correlation — item_intercor","text":"","code":"item_intercor(x, method = c(\"pearson\", \"spearman\", \"kendall\"))"},{"path":"https://easystats.github.io/performance/reference/item_intercor.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Mean Inter-Item-Correlation — item_intercor","text":"x matrix returned cor()-function, data frame items (e.g. test questionnaire). method Correlation computation method. May one \"pearson\" (default), \"spearman\" \"kendall\". 
may use initial letter .","code":""},{"path":"https://easystats.github.io/performance/reference/item_intercor.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Mean Inter-Item-Correlation — item_intercor","text":"mean inter-item-correlation value x.","code":""},{"path":"https://easystats.github.io/performance/reference/item_intercor.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Mean Inter-Item-Correlation — item_intercor","text":"function calculates mean inter-item-correlation, .e. correlation matrix x computed (unless x already matrix returned cor() function) mean sum items' correlation values returned. Requires either data frame computed cor() object. \"Ideally, average inter-item correlation set items 0.20 0.40, suggesting items reasonably homogeneous, contain sufficiently unique variance isomorphic . values lower 0.20, items may representative content domain. values higher 0.40, items may capturing small bandwidth construct.\" (Piedmont 2014)","code":""},{"path":"https://easystats.github.io/performance/reference/item_intercor.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Mean Inter-Item-Correlation — item_intercor","text":"Piedmont RL. 2014. Inter-item Correlations. : Michalos AC (eds) Encyclopedia Quality Life Well-Being Research. Dordrecht: Springer, 3303-3304. 
doi:10.1007/978-94-007-0753-5_1493","code":""},{"path":"https://easystats.github.io/performance/reference/item_intercor.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Mean Inter-Item-Correlation — item_intercor","text":"","code":"data(mtcars) x <- mtcars[, c(\"cyl\", \"gear\", \"carb\", \"hp\")] item_intercor(x) #> [1] 0.294155"},{"path":"https://easystats.github.io/performance/reference/item_reliability.html","id":null,"dir":"Reference","previous_headings":"","what":"Reliability Test for Items or Scales — item_reliability","title":"Reliability Test for Items or Scales — item_reliability","text":"Compute various measures internal consistencies tests item-scales questionnaires.","code":""},{"path":"https://easystats.github.io/performance/reference/item_reliability.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Reliability Test for Items or Scales — item_reliability","text":"","code":"item_reliability(x, standardize = FALSE, digits = 3)"},{"path":"https://easystats.github.io/performance/reference/item_reliability.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Reliability Test for Items or Scales — item_reliability","text":"x matrix data frame. standardize Logical, TRUE, data frame's vectors standardized. Recommended variables different measures / scales. 
digits Amount digits returned values.","code":""},{"path":"https://easystats.github.io/performance/reference/item_reliability.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Reliability Test for Items or Scales — item_reliability","text":"data frame corrected item-total correlations (item discrimination, column item_discrimination) Cronbach's Alpha (item deleted, column alpha_if_deleted) item scale, NULL data frame less columns.","code":""},{"path":"https://easystats.github.io/performance/reference/item_reliability.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Reliability Test for Items or Scales — item_reliability","text":"function calculates item discriminations (corrected item-total correlations item x remaining items) Cronbach's alpha item, deleted scale. absolute value item discrimination indices 0.2. index 0.2 0.4 considered \"fair\", index 0.4 (-0.4) \"good\". range satisfactory values 0.4 0.7. Items low discrimination indices often ambiguously worded examined. Items negative indices examined determine negative value obtained (e.g. 
reversed answer categories regarding positive negative poles).","code":""},{"path":"https://easystats.github.io/performance/reference/item_reliability.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Reliability Test for Items or Scales — item_reliability","text":"","code":"data(mtcars) x <- mtcars[, c(\"cyl\", \"gear\", \"carb\", \"hp\")] item_reliability(x) #> term alpha_if_deleted item_discrimination #> 1 cyl 0.048 0.826 #> 2 gear 0.110 -0.127 #> 3 carb 0.058 0.751 #> 4 hp 0.411 0.881"},{"path":"https://easystats.github.io/performance/reference/item_split_half.html","id":null,"dir":"Reference","previous_headings":"","what":"Split-Half Reliability — item_split_half","title":"Split-Half Reliability — item_split_half","text":"Compute various measures internal consistencies tests item-scales questionnaires.","code":""},{"path":"https://easystats.github.io/performance/reference/item_split_half.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Split-Half Reliability — item_split_half","text":"","code":"item_split_half(x, digits = 3)"},{"path":"https://easystats.github.io/performance/reference/item_split_half.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Split-Half Reliability — item_split_half","text":"x matrix data frame. 
digits Amount digits returned values.","code":""},{"path":"https://easystats.github.io/performance/reference/item_split_half.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Split-Half Reliability — item_split_half","text":"list two elements: split-half reliability splithalf Spearman-Brown corrected split-half reliability spearmanbrown.","code":""},{"path":"https://easystats.github.io/performance/reference/item_split_half.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Split-Half Reliability — item_split_half","text":"function calculates split-half reliability items x, including Spearman-Brown adjustment. Splitting done selecting odd versus even columns x. value closer 1 indicates greater internal consistency.","code":""},{"path":"https://easystats.github.io/performance/reference/item_split_half.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Split-Half Reliability — item_split_half","text":"Spearman C. 1910. Correlation calculated faulty data. British Journal Psychology (3): 271-295. doi:10.1111/j.2044-8295.1910.tb00206.x Brown W. 1910. experimental results correlation mental abilities. British Journal Psychology (3): 296-322. doi:10.1111/j.2044-8295.1910.tb00207.x","code":""},{"path":"https://easystats.github.io/performance/reference/item_split_half.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Split-Half Reliability — item_split_half","text":"","code":"data(mtcars) x <- mtcars[, c(\"cyl\", \"gear\", \"carb\", \"hp\")] item_split_half(x) #> $splithalf #> [1] 0.9070215 #> #> $spearmanbrown #> [1] 0.9512441 #>"},{"path":"https://easystats.github.io/performance/reference/looic.html","id":null,"dir":"Reference","previous_headings":"","what":"LOO-related Indices for Bayesian regressions. — looic","title":"LOO-related Indices for Bayesian regressions. 
— looic","text":"Compute LOOIC (leave-one-out cross-validation (LOO) information criterion) ELPD (expected log predictive density) Bayesian regressions. LOOIC ELPD, smaller larger values respectively indicative better fit.","code":""},{"path":"https://easystats.github.io/performance/reference/looic.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"LOO-related Indices for Bayesian regressions. — looic","text":"","code":"looic(model, verbose = TRUE)"},{"path":"https://easystats.github.io/performance/reference/looic.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"LOO-related Indices for Bayesian regressions. — looic","text":"model Bayesian regression model. verbose Toggle warnings.","code":""},{"path":"https://easystats.github.io/performance/reference/looic.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"LOO-related Indices for Bayesian regressions. — looic","text":"list four elements, ELPD, LOOIC standard errors.","code":""},{"path":"https://easystats.github.io/performance/reference/looic.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"LOO-related Indices for Bayesian regressions. 
— looic","text":"","code":"# \\donttest{ model <- suppressWarnings(rstanarm::stan_glm( mpg ~ wt + cyl, data = mtcars, chains = 1, iter = 500, refresh = 0 )) looic(model) #> # LOOIC and ELPD with Standard Error #> #> LOOIC: 155.90 [8.79] #> ELPD: -77.95 [4.39] # }"},{"path":"https://easystats.github.io/performance/reference/model_performance.html","id":null,"dir":"Reference","previous_headings":"","what":"Model Performance — model_performance","title":"Model Performance — model_performance","text":"See documentation object's class: Frequentist Regressions Instrumental Variables Regressions Mixed models Bayesian models CFA / SEM lavaan models Meta-analysis models","code":""},{"path":"https://easystats.github.io/performance/reference/model_performance.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Model Performance — model_performance","text":"","code":"model_performance(model, ...) performance(model, ...)"},{"path":"https://easystats.github.io/performance/reference/model_performance.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Model Performance — model_performance","text":"model Statistical model. ... Arguments passed methods, resp. compare_performance(), one multiple model objects (also different classes).","code":""},{"path":"https://easystats.github.io/performance/reference/model_performance.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Model Performance — model_performance","text":"data frame (one row) one column per \"index\" (see metrics).","code":""},{"path":"https://easystats.github.io/performance/reference/model_performance.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Model Performance — model_performance","text":"model_performance() correctly detects transformed response returns \"corrected\" AIC BIC value original scale. 
get back original scale, likelihood model multiplied Jacobian/derivative transformation.","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/reference/model_performance.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Model Performance — model_performance","text":"","code":"model <- lm(mpg ~ wt + cyl, data = mtcars) model_performance(model) #> # Indices of model performance #> #> AIC | AICc | BIC | R2 | R2 (adj.) | RMSE | Sigma #> --------------------------------------------------------------- #> 156.010 | 157.492 | 161.873 | 0.830 | 0.819 | 2.444 | 2.568 model <- glm(vs ~ wt + mpg, data = mtcars, family = \"binomial\") model_performance(model) #> # Indices of model performance #> #> AIC | AICc | BIC | Tjur's R2 | RMSE | Sigma | Log_loss | Score_log #> --------------------------------------------------------------------------- #> 31.298 | 32.155 | 35.695 | 0.478 | 0.359 | 1.000 | 0.395 | -14.903 #> #> AIC | Score_spherical | PCP #> -------------------------------- #> 31.298 | 0.095 | 0.743"},{"path":"https://easystats.github.io/performance/reference/model_performance.ivreg.html","id":null,"dir":"Reference","previous_headings":"","what":"Performance of instrumental variable regression models — model_performance.ivreg","title":"Performance of instrumental variable regression models — model_performance.ivreg","text":"Performance instrumental variable regression models","code":""},{"path":"https://easystats.github.io/performance/reference/model_performance.ivreg.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Performance of instrumental variable regression models — model_performance.ivreg","text":"","code":"# S3 method for class 'ivreg' model_performance(model, metrics = \"all\", verbose = TRUE, 
...)"},{"path":"https://easystats.github.io/performance/reference/model_performance.ivreg.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Performance of instrumental variable regression models — model_performance.ivreg","text":"model model. metrics Can \"\", \"common\" character vector metrics computed (c(\"AIC\", \"AICc\", \"BIC\", \"R2\", \"RMSE\", \"SIGMA\", \"Sargan\", \"Wu_Hausman\", \"weak_instruments\")). \"common\" compute AIC, BIC, R2 RMSE. verbose Toggle warnings. ... Arguments passed methods.","code":""},{"path":"https://easystats.github.io/performance/reference/model_performance.ivreg.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Performance of instrumental variable regression models — model_performance.ivreg","text":"model_performance() correctly detects transformed response returns \"corrected\" AIC BIC value original scale. get back original scale, likelihood model multiplied Jacobian/derivative transformation.","code":""},{"path":"https://easystats.github.io/performance/reference/model_performance.kmeans.html","id":null,"dir":"Reference","previous_headings":"","what":"Model summary for k-means clustering — model_performance.kmeans","title":"Model summary for k-means clustering — model_performance.kmeans","text":"Model summary k-means clustering","code":""},{"path":"https://easystats.github.io/performance/reference/model_performance.kmeans.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Model summary for k-means clustering — model_performance.kmeans","text":"","code":"# S3 method for class 'kmeans' model_performance(model, verbose = TRUE, ...)"},{"path":"https://easystats.github.io/performance/reference/model_performance.kmeans.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Model summary for k-means clustering — model_performance.kmeans","text":"model Object type kmeans. 
verbose Toggle warnings. ... Arguments passed methods.","code":""},{"path":"https://easystats.github.io/performance/reference/model_performance.kmeans.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Model summary for k-means clustering — model_performance.kmeans","text":"","code":"# a 2-dimensional example x <- rbind( matrix(rnorm(100, sd = 0.3), ncol = 2), matrix(rnorm(100, mean = 1, sd = 0.3), ncol = 2) ) colnames(x) <- c(\"x\", \"y\") model <- kmeans(x, 2) model_performance(model) #> # Indices of model performance #> #> Sum_Squares_Total | Sum_Squares_Within | Sum_Squares_Between | Iterations #> ------------------------------------------------------------------------- #> 71.530 | 16.523 | 55.007 | 1.000"},{"path":"https://easystats.github.io/performance/reference/model_performance.lavaan.html","id":null,"dir":"Reference","previous_headings":"","what":"Performance of lavaan SEM / CFA Models — model_performance.lavaan","title":"Performance of lavaan SEM / CFA Models — model_performance.lavaan","text":"Compute indices model performance SEM CFA models lavaan package.","code":""},{"path":"https://easystats.github.io/performance/reference/model_performance.lavaan.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Performance of lavaan SEM / CFA Models — model_performance.lavaan","text":"","code":"# S3 method for class 'lavaan' model_performance(model, metrics = \"all\", verbose = TRUE, ...)"},{"path":"https://easystats.github.io/performance/reference/model_performance.lavaan.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Performance of lavaan SEM / CFA Models — model_performance.lavaan","text":"model lavaan model. 
metrics Can \"\" character vector metrics computed (\"Chi2\", \"Chi2_df\", \"p_Chi2\", \"Baseline\", \"Baseline_df\", \"p_Baseline\", \"GFI\", \"AGFI\", \"NFI\", \"NNFI\", \"CFI\", \"RMSEA\", \"RMSEA_CI_low\", \"RMSEA_CI_high\", \"p_RMSEA\", \"RMR\", \"SRMR\", \"RFI\", \"PNFI\", \"IFI\", \"RNI\", \"Loglikelihood\", \"AIC\", \"BIC\", \"BIC_adjusted\"). verbose Toggle warnings. ... Arguments passed methods.","code":""},{"path":"https://easystats.github.io/performance/reference/model_performance.lavaan.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Performance of lavaan SEM / CFA Models — model_performance.lavaan","text":"data frame (one row) one column per \"index\" (see metrics).","code":""},{"path":"https://easystats.github.io/performance/reference/model_performance.lavaan.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Performance of lavaan SEM / CFA Models — model_performance.lavaan","text":"See documentation ?lavaan::fitmeasures.","code":""},{"path":"https://easystats.github.io/performance/reference/model_performance.lavaan.html","id":"indices-of-fit","dir":"Reference","previous_headings":"","what":"Indices of fit","title":"Performance of lavaan SEM / CFA Models — model_performance.lavaan","text":"Chisq: model Chi-squared assesses overall fit discrepancy sample fitted covariance matrices. p-value > .05 (.e., hypothesis perfect fit rejected). However, quite sensitive sample size. GFI/AGFI: (Adjusted) Goodness Fit proportion variance accounted estimated population covariance. Analogous R2. GFI AGFI > .95 > .90, respectively. NFI/NNFI/TLI: (Non) Normed Fit Index. NFI 0.95, indicates model interest improves fit 95\\ null model. NNFI (also called Tucker Lewis index; TLI) preferable smaller samples. > .90 (Byrne, 1994) > .95 (Schumacker Lomax, 2004). CFI: Comparative Fit Index revised form NFI. sensitive sample size (Fan, Thompson, Wang, 1999). 
Compares fit target model fit independent, null, model. > .90. RMSEA: Root Mean Square Error Approximation parsimony-adjusted index. Values closer 0 represent good fit. < .08 < .05. p-value printed tests hypothesis RMSEA less equal .05 (cutoff sometimes used good fit), thus significant. RMR/SRMR: (Standardized) Root Mean Square Residual represents square-root difference residuals sample covariance matrix hypothesized model. RMR can sometimes hard interpret, better use SRMR. < .08. RFI: Relative Fit Index, also known RHO1, guaranteed vary 0 1. However, RFI close 1 indicates good fit. IFI: Incremental Fit Index (IFI) adjusts Normed Fit Index (NFI) sample size degrees freedom (Bollen's, 1989). 0.90 good fit, index can exceed 1. PNFI: Parsimony-Adjusted Measures Index. commonly agreed-upon cutoff value acceptable model index. > 0.50.","code":""},{"path":"https://easystats.github.io/performance/reference/model_performance.lavaan.html","id":"what-to-report","dir":"Reference","previous_headings":"","what":"What to report","title":"Performance of lavaan SEM / CFA Models — model_performance.lavaan","text":"Kline (2015) suggests minimum following indices reported: model chi-square, RMSEA, CFI SRMR.","code":""},{"path":"https://easystats.github.io/performance/reference/model_performance.lavaan.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Performance of lavaan SEM / CFA Models — model_performance.lavaan","text":"Byrne, B. M. (1994). Structural equation modeling EQS EQS/Windows. Thousand Oaks, CA: Sage Publications. Tucker, L. R., Lewis, C. (1973). reliability coefficient maximum likelihood factor analysis. Psychometrika, 38, 1-10. Schumacker, R. E., Lomax, R. G. (2004). beginner's guide structural equation modeling, Second edition. Mahwah, NJ: Lawrence Erlbaum Associates. Fan, X., B. Thompson, L. Wang (1999). Effects sample size, estimation method, model specification structural equation modeling fit indexes. 
Structural Equation Modeling, 6, 56-83. Kline, R. B. (2015). Principles practice structural equation modeling. Guilford publications.","code":""},{"path":"https://easystats.github.io/performance/reference/model_performance.lavaan.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Performance of lavaan SEM / CFA Models — model_performance.lavaan","text":"","code":"# Confirmatory Factor Analysis (CFA) --------- data(HolzingerSwineford1939, package = \"lavaan\") structure <- \" visual =~ x1 + x2 + x3 textual =~ x4 + x5 + x6 speed =~ x7 + x8 + x9 \" model <- lavaan::cfa(structure, data = HolzingerSwineford1939) model_performance(model) #> # Indices of model performance #> #> Chi2(24) | p (Chi2) | Baseline(36) | p (Baseline) | GFI | AGFI | NFI #> ------------------------------------------------------------------------- #> 85.306 | < .001 | 918.852 | < .001 | 0.943 | 0.894 | 0.907 #> #> Chi2(24) | NNFI | CFI | RMSEA | RMSEA CI | p (RMSEA) | RMR | SRMR #> --------------------------------------------------------------------------- #> 85.306 | 0.896 | 0.931 | 0.092 | [0.07, 0.11] | < .001 | 0.082 | 0.065 #> #> Chi2(24) | RFI | PNFI | IFI | RNI | Loglikelihood | AIC | BIC | BIC_adjusted #> --------------------------------------------------------------------------------------------- #> 85.306 | 0.861 | 0.605 | 0.931 | 0.931 | -3737.745 | 7517.490 | 7595.339 | 7528.739"},{"path":"https://easystats.github.io/performance/reference/model_performance.lm.html","id":null,"dir":"Reference","previous_headings":"","what":"Performance of Regression Models — model_performance.lm","title":"Performance of Regression Models — model_performance.lm","text":"Compute indices model performance regression models.","code":""},{"path":"https://easystats.github.io/performance/reference/model_performance.lm.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Performance of Regression Models — 
model_performance.lm","text":"","code":"# S3 method for class 'lm' model_performance(model, metrics = \"all\", verbose = TRUE, ...)"},{"path":"https://easystats.github.io/performance/reference/model_performance.lm.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Performance of Regression Models — model_performance.lm","text":"model model. metrics Can \"\", \"common\" character vector metrics computed (one \"AIC\", \"AICc\", \"BIC\", \"R2\", \"R2_adj\", \"RMSE\", \"SIGMA\", \"LOGLOSS\", \"PCP\", \"SCORE\"). \"common\" compute AIC, BIC, R2 RMSE. verbose Toggle warnings. ... Arguments passed methods.","code":""},{"path":"https://easystats.github.io/performance/reference/model_performance.lm.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Performance of Regression Models — model_performance.lm","text":"data frame (one row) one column per \"index\" (see metrics).","code":""},{"path":"https://easystats.github.io/performance/reference/model_performance.lm.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Performance of Regression Models — model_performance.lm","text":"Depending model, following indices computed: AIC: Akaike's Information Criterion, see ?stats::AIC AICc: Second-order (small sample) AIC correction small sample sizes BIC: Bayesian Information Criterion, see ?stats::BIC R2: r-squared value, see r2() R2_adj: adjusted r-squared, see r2() RMSE: root mean squared error, see performance_rmse() SIGMA: residual standard deviation, see insight::get_sigma() LOGLOSS: Log-loss, see performance_logloss() SCORE_LOG: score logarithmic proper scoring rule, see performance_score() SCORE_SPHERICAL: score spherical proper scoring rule, see performance_score() PCP: percentage correct predictions, see performance_pcp() model_performance() correctly detects transformed response returns \"corrected\" AIC BIC value original scale. 
get back original scale, likelihood model multiplied Jacobian/derivative transformation.","code":""},{"path":"https://easystats.github.io/performance/reference/model_performance.lm.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Performance of Regression Models — model_performance.lm","text":"","code":"model <- lm(mpg ~ wt + cyl, data = mtcars) model_performance(model) #> # Indices of model performance #> #> AIC | AICc | BIC | R2 | R2 (adj.) | RMSE | Sigma #> --------------------------------------------------------------- #> 156.010 | 157.492 | 161.873 | 0.830 | 0.819 | 2.444 | 2.568 model <- glm(vs ~ wt + mpg, data = mtcars, family = \"binomial\") model_performance(model) #> # Indices of model performance #> #> AIC | AICc | BIC | Tjur's R2 | RMSE | Sigma | Log_loss | Score_log #> --------------------------------------------------------------------------- #> 31.298 | 32.155 | 35.695 | 0.478 | 0.359 | 1.000 | 0.395 | -14.903 #> #> AIC | Score_spherical | PCP #> -------------------------------- #> 31.298 | 0.095 | 0.743"},{"path":"https://easystats.github.io/performance/reference/model_performance.merMod.html","id":null,"dir":"Reference","previous_headings":"","what":"Performance of Mixed Models — model_performance.merMod","title":"Performance of Mixed Models — model_performance.merMod","text":"Compute indices model performance mixed models.","code":""},{"path":"https://easystats.github.io/performance/reference/model_performance.merMod.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Performance of Mixed Models — model_performance.merMod","text":"","code":"# S3 method for class 'merMod' model_performance( model, metrics = \"all\", estimator = \"REML\", verbose = TRUE, ... 
)"},{"path":"https://easystats.github.io/performance/reference/model_performance.merMod.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Performance of Mixed Models — model_performance.merMod","text":"model mixed effects model. metrics Can \"\", \"common\" character vector metrics computed (c(\"AIC\", \"AICc\", \"BIC\", \"R2\", \"ICC\", \"RMSE\", \"SIGMA\", \"LOGLOSS\", \"SCORE\")). \"common\" compute AIC, BIC, R2, ICC RMSE. estimator linear models. Corresponds different estimators standard deviation errors. estimator = \"ML\" (default, except performance_aic() model object class lmerMod), scaling done n (biased ML estimator), equivalent using AIC(logLik()). Setting \"REML\" give results AIC(logLik(..., REML = TRUE)). verbose Toggle warnings messages. ... Arguments passed methods.","code":""},{"path":"https://easystats.github.io/performance/reference/model_performance.merMod.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Performance of Mixed Models — model_performance.merMod","text":"data frame (one row) one column per \"index\" (see metrics).","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/reference/model_performance.merMod.html","id":"intraclass-correlation-coefficient-icc-","dir":"Reference","previous_headings":"","what":"Intraclass Correlation Coefficient (ICC)","title":"Performance of Mixed Models — model_performance.merMod","text":"method returns adjusted ICC , typically interest judging variance attributed random effects part model (see also icc()).","code":""},{"path":"https://easystats.github.io/performance/reference/model_performance.merMod.html","id":"reml-versus-ml-estimator","dir":"Reference","previous_headings":"","what":"REML versus ML estimator","title":"Performance of Mixed Models — model_performance.merMod","text":"default behaviour model_performance() computing AIC BIC linear mixed model package lme4 AIC() BIC() (.e. estimator = \"REML\"). 
However, model comparison using compare_performance() sets estimator = \"ML\" default, comparing information criteria based REML fits usually valid (unless models fixed effects). Thus, make sure set correct estimator-value looking fit-indices comparing model fits.","code":""},{"path":"https://easystats.github.io/performance/reference/model_performance.merMod.html","id":"other-performance-indices","dir":"Reference","previous_headings":"","what":"Other performance indices","title":"Performance of Mixed Models — model_performance.merMod","text":"Furthermore, see 'Details' model_performance.lm() details returned indices.","code":""},{"path":"https://easystats.github.io/performance/reference/model_performance.merMod.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Performance of Mixed Models — model_performance.merMod","text":"","code":"model <- lme4::lmer(Petal.Length ~ Sepal.Length + (1 | Species), data = iris) model_performance(model) #> # Indices of model performance #> #> AIC | AICc | BIC | R2 (cond.) | R2 (marg.) | ICC | RMSE | Sigma #> -------------------------------------------------------------------------- #> 77.320 | 77.595 | 89.362 | 0.972 | 0.096 | 0.969 | 0.279 | 0.283"},{"path":"https://easystats.github.io/performance/reference/model_performance.rma.html","id":null,"dir":"Reference","previous_headings":"","what":"Performance of Meta-Analysis Models — model_performance.rma","title":"Performance of Meta-Analysis Models — model_performance.rma","text":"Compute indices model performance meta-analysis model metafor package.","code":""},{"path":"https://easystats.github.io/performance/reference/model_performance.rma.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Performance of Meta-Analysis Models — model_performance.rma","text":"","code":"# S3 method for class 'rma' model_performance( model, metrics = \"all\", estimator = \"ML\", verbose = TRUE, ... 
)"},{"path":"https://easystats.github.io/performance/reference/model_performance.rma.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Performance of Meta-Analysis Models — model_performance.rma","text":"model rma object returned metafor::rma(). metrics Can \"\" character vector metrics computed (c(\"AIC\", \"BIC\", \"I2\", \"H2\", \"TAU2\", \"R2\", \"CochransQ\", \"QE\", \"Omnibus\", \"QM\")). estimator linear models. Corresponds different estimators standard deviation errors. estimator = \"ML\" (default, except performance_aic() model object class lmerMod), scaling done n (biased ML estimator), equivalent using AIC(logLik()). Setting \"REML\" give results AIC(logLik(..., REML = TRUE)). verbose Toggle warnings. ... Arguments passed methods.","code":""},{"path":"https://easystats.github.io/performance/reference/model_performance.rma.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Performance of Meta-Analysis Models — model_performance.rma","text":"data frame (one row) one column per \"index\" (see metrics).","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/reference/model_performance.rma.html","id":"indices-of-fit","dir":"Reference","previous_headings":"","what":"Indices of fit","title":"Performance of Meta-Analysis Models — model_performance.rma","text":"AIC Akaike's Information Criterion, see ?stats::AIC BIC Bayesian Information Criterion, see ?stats::BIC I2: random effects model, I2 estimates (percent) much total variability effect size estimates can attributed heterogeneity among true effects. mixed-effects model, I2 estimates much unaccounted variability can attributed residual heterogeneity. H2: random-effects model, H2 estimates ratio total amount variability effect size estimates amount sampling variability. mixed-effects model, H2 estimates ratio unaccounted variability effect size estimates amount sampling variability. 
TAU2: amount (residual) heterogeneity random mixed effects model. CochransQ (QE): Test (residual) Heterogeneity. Without moderators model, simply Cochran's Q-test. Omnibus (QM): Omnibus test parameters. R2: Pseudo-R2-statistic, indicates amount heterogeneity accounted moderators included fixed-effects model. See documentation ?metafor::fitstats.","code":""},{"path":"https://easystats.github.io/performance/reference/model_performance.rma.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Performance of Meta-Analysis Models — model_performance.rma","text":"","code":"data(dat.bcg, package = \"metadat\") dat <- metafor::escalc( measure = \"RR\", ai = tpos, bi = tneg, ci = cpos, di = cneg, data = dat.bcg ) model <- metafor::rma(yi, vi, data = dat, method = \"REML\") model_performance(model) #> # Indices of model performance #> #> AIC | BIC | I2 | H2 | TAU2 | CochransQ | p (CochransQ) | df #> ------------------------------------------------------------------------- #> 29.376 | 30.345 | 0.922 | 12.856 | 0.313 | 152.233 | < .001 | 12 #> #> AIC | Omnibus | p (Omnibus) #> ------------------------------ #> 29.376 | 15.796 | < .001"},{"path":"https://easystats.github.io/performance/reference/model_performance.stanreg.html","id":null,"dir":"Reference","previous_headings":"","what":"Performance of Bayesian Models — model_performance.stanreg","title":"Performance of Bayesian Models — model_performance.stanreg","text":"Compute indices model performance (general) linear models.","code":""},{"path":"https://easystats.github.io/performance/reference/model_performance.stanreg.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Performance of Bayesian Models — model_performance.stanreg","text":"","code":"# S3 method for class 'stanreg' model_performance(model, metrics = \"all\", verbose = TRUE, ...) 
# S3 method for class 'BFBayesFactor' model_performance( model, metrics = \"all\", verbose = TRUE, average = FALSE, prior_odds = NULL, ... )"},{"path":"https://easystats.github.io/performance/reference/model_performance.stanreg.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Performance of Bayesian Models — model_performance.stanreg","text":"model Object class stanreg brmsfit. metrics Can \"\", \"common\" character vector metrics computed (c(\"LOOIC\", \"WAIC\", \"R2\", \"R2_adj\", \"RMSE\", \"SIGMA\", \"LOGLOSS\", \"SCORE\")). \"common\" compute LOOIC, WAIC, R2 RMSE. verbose Toggle warnings. ... Arguments passed methods. average Compute model-averaged index? See bayestestR::weighted_posteriors(). prior_odds Optional vector prior odds models compared first model (denominator, BFBayesFactor objects). data.frames, used basis weighting.","code":""},{"path":"https://easystats.github.io/performance/reference/model_performance.stanreg.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Performance of Bayesian Models — model_performance.stanreg","text":"data frame (one row) one column per \"index\" (see metrics).","code":""},{"path":"https://easystats.github.io/performance/reference/model_performance.stanreg.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Performance of Bayesian Models — model_performance.stanreg","text":"Depending model, following indices computed: ELPD: expected log predictive density. Larger ELPD values mean better fit. See looic(). LOOIC: leave-one-cross-validation (LOO) information criterion. Lower LOOIC values mean better fit. See looic(). WAIC: widely applicable information criterion. Lower WAIC values mean better fit. See ?loo::waic. R2: r-squared value, see r2_bayes(). R2_adjusted: LOO-adjusted r-squared, see r2_loo(). RMSE: root mean squared error, see performance_rmse(). 
SIGMA: residual standard deviation, see insight::get_sigma(). LOGLOSS: Log-loss, see performance_logloss(). SCORE_LOG: score logarithmic proper scoring rule, see performance_score(). SCORE_SPHERICAL: score spherical proper scoring rule, see performance_score(). PCP: percentage correct predictions, see performance_pcp().","code":""},{"path":"https://easystats.github.io/performance/reference/model_performance.stanreg.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Performance of Bayesian Models — model_performance.stanreg","text":"Gelman, ., Goodrich, B., Gabry, J., Vehtari, . (2018). R-squared Bayesian regression models. American Statistician, American Statistician, 1-6.","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/reference/model_performance.stanreg.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Performance of Bayesian Models — model_performance.stanreg","text":"","code":"# \\donttest{ model <- suppressWarnings(rstanarm::stan_glm( mpg ~ wt + cyl, data = mtcars, chains = 1, iter = 500, refresh = 0 )) model_performance(model) #> # Indices of model performance #> #> ELPD | ELPD_SE | LOOIC | LOOIC_SE | WAIC | R2 | R2 (adj.) | RMSE | Sigma #> ------------------------------------------------------------------------------------ #> -78.243 | 4.270 | 156.486 | 8.539 | 156.468 | 0.814 | 0.798 | 2.445 | 2.660 model <- suppressWarnings(rstanarm::stan_glmer( mpg ~ wt + cyl + (1 | gear), data = mtcars, chains = 1, iter = 500, refresh = 0 )) model_performance(model) #> # Indices of model performance #> #> ELPD | ELPD_SE | LOOIC | LOOIC_SE | WAIC | R2 | R2 (marg.) #> --------------------------------------------------------------------- #> -79.362 | 4.741 | 158.723 | 9.482 | 158.664 | 0.820 | 0.823 #> #> ELPD | R2 (adj.) 
| R2_adjusted_marginal | ICC | RMSE | Sigma #> ------------------------------------------------------------------ #> -79.362 | 0.788 | 0.788 | 0.184 | 2.442 | 2.594 # }"},{"path":"https://easystats.github.io/performance/reference/performance-package.html","id":null,"dir":"Reference","previous_headings":"","what":"performance: An R Package for Assessment, Comparison and Testing of Statistical Models — performance-package","title":"performance: An R Package for Assessment, Comparison and Testing of Statistical Models — performance-package","text":"crucial aspect building regression models evaluate quality model fit. important investigate well models fit data fit indices report. Functions create diagnostic plots compute fit measures exist, however, mostly spread different packages. unique consistent approach assess model quality different kind models. primary goal performance package fill gap provide utilities computing indices model quality goodness fit. include measures like r-squared (R2), root mean squared error (RMSE) intraclass correlation coefficient (ICC), also functions check (mixed) models overdispersion, zero-inflation, convergence singularity. References: Lüdecke et al. (2021) doi:10.21105/joss.03139","code":""},{"path":"https://easystats.github.io/performance/reference/performance-package.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"performance: An R Package for Assessment, Comparison and Testing of Statistical Models — performance-package","text":"performance-package","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/reference/performance-package.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"performance: An R Package for Assessment, Comparison and Testing of Statistical Models — performance-package","text":"Maintainer: Daniel Lüdecke d.luedecke@uke.de (ORCID) Authors: Dominique Makowski dom.makowski@gmail.com (ORCID) [contributor] Mattan S. 
Ben-Shachar matanshm@post.bgu.ac.il (ORCID) [contributor] Indrajeet Patil patilindrajeet.science@gmail.com (ORCID) [contributor] Philip Waggoner philip.waggoner@gmail.com (ORCID) [contributor] Brenton M. Wiernik brenton@wiernik.org (ORCID) [contributor] Rémi Thériault remi.theriault@mail.mcgill.ca (ORCID) [contributor] contributors: Vincent Arel-Bundock vincent.arel-bundock@umontreal.ca (ORCID) [contributor] Martin Jullum [reviewer] gjo11 [reviewer] Etienne Bacher etienne.bacher@protonmail.com (ORCID) [contributor] Joseph Luchman (ORCID) [contributor]","code":""},{"path":"https://easystats.github.io/performance/reference/performance_accuracy.html","id":null,"dir":"Reference","previous_headings":"","what":"Accuracy of predictions from model fit — performance_accuracy","title":"Accuracy of predictions from model fit — performance_accuracy","text":"function calculates predictive accuracy linear logistic regression models.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_accuracy.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Accuracy of predictions from model fit — performance_accuracy","text":"","code":"performance_accuracy( model, method = c(\"cv\", \"boot\"), k = 5, n = 1000, ci = 0.95, verbose = TRUE )"},{"path":"https://easystats.github.io/performance/reference/performance_accuracy.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Accuracy of predictions from model fit — performance_accuracy","text":"model linear logistic regression model. mixed-effects model also accepted. method Character string, indicating whether cross-validation (method = \"cv\") bootstrapping (method = \"boot\") used compute accuracy values. k number folds k-fold cross-validation. n Number bootstrap-samples. ci level confidence interval. 
verbose Toggle warnings.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_accuracy.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Accuracy of predictions from model fit — performance_accuracy","text":"list three values: Accuracy model predictions, .e. proportion accurately predicted values model, standard error, SE, Method used compute accuracy.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_accuracy.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Accuracy of predictions from model fit — performance_accuracy","text":"linear models, accuracy correlation coefficient actual predicted value outcome. logistic regression models, accuracy corresponds AUC-value, calculated bayestestR::auc()-function. accuracy mean value multiple correlation resp. AUC-values, either computed cross-validation non-parametric bootstrapping (see argument method). standard error standard deviation computed correlation resp. 
AUC-values.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_accuracy.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Accuracy of predictions from model fit — performance_accuracy","text":"","code":"model <- lm(mpg ~ wt + cyl, data = mtcars) performance_accuracy(model) #> # Accuracy of Model Predictions #> #> Accuracy (95% CI): 95.79% [92.14%, 99.11%] #> Method: Correlation between observed and predicted model <- glm(vs ~ wt + mpg, data = mtcars, family = \"binomial\") performance_accuracy(model) #> # Accuracy of Model Predictions #> #> Accuracy (95% CI): 87.56% [78.00%, 100.00%] #> Method: Area under Curve"},{"path":"https://easystats.github.io/performance/reference/performance_aicc.html","id":null,"dir":"Reference","previous_headings":"","what":"Compute the AIC or second-order AIC — performance_aicc","title":"Compute the AIC or second-order AIC — performance_aicc","text":"Compute AIC second-order Akaike's information criterion (AICc). performance_aic() small wrapper returns AIC, however, models transformed response variable, performance_aic() returns corrected AIC value (see 'Examples'). generic function also works models AIC method (like Tweedie models). performance_aicc() returns second-order (\"small sample\") AIC incorporates correction small sample sizes.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_aicc.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Compute the AIC or second-order AIC — performance_aicc","text":"","code":"performance_aicc(x, ...) performance_aic(x, ...) # Default S3 method performance_aic(x, estimator = \"ML\", verbose = TRUE, ...) 
# S3 method for class 'lmerMod' performance_aic(x, estimator = \"REML\", verbose = TRUE, ...)"},{"path":"https://easystats.github.io/performance/reference/performance_aicc.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Compute the AIC or second-order AIC — performance_aicc","text":"x model object. ... Currently used. estimator linear models. Corresponds different estimators standard deviation errors. estimator = \"ML\" (default, except performance_aic() model object class lmerMod), scaling done n (biased ML estimator), equivalent using AIC(logLik()). Setting \"REML\" give results AIC(logLik(..., REML = TRUE)). verbose Toggle warnings.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_aicc.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Compute the AIC or second-order AIC — performance_aicc","text":"Numeric, AIC AICc value.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_aicc.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Compute the AIC or second-order AIC — performance_aicc","text":"performance_aic() correctly detects transformed response , unlike stats::AIC(), returns \"corrected\" AIC value original scale. get back original scale, likelihood model multiplied Jacobian/derivative transformation. case possible return corrected AIC value, warning given corrected log-likelihood value computed.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_aicc.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Compute the AIC or second-order AIC — performance_aicc","text":"Akaike, H. (1973) Information theory extension maximum likelihood principle. : Second International Symposium Information Theory, pp. 267-281. Petrov, B.N., Csaki, F., Eds, Akademiai Kiado, Budapest. Hurvich, C. M., Tsai, C.-L. 
(1991) Bias corrected AIC criterion underfitted regression time series models. Biometrika 78, 499–509.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_aicc.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Compute the AIC or second-order AIC — performance_aicc","text":"","code":"m <- lm(mpg ~ wt + cyl + gear + disp, data = mtcars) AIC(m) #> [1] 159.1051 performance_aicc(m) #> [1] 162.4651 # correct AIC for models with transformed response variable data(\"mtcars\") mtcars$mpg <- floor(mtcars$mpg) model <- lm(log(mpg) ~ factor(cyl), mtcars) # wrong AIC, not corrected for log-transformation AIC(model) #> [1] -19.67061 # performance_aic() correctly detects transformed response and # returns corrected AIC performance_aic(model) #> [1] 168.2152 # \\dontrun{ # there are a few exceptions where the corrected log-likelihood values # cannot be returned. The following example gives a warning. model <- lm(1 / mpg ~ factor(cyl), mtcars) performance_aic(model) #> Warning: Could not compute corrected log-likelihood for models with transformed #> response. Log-likelihood value is probably inaccurate. #> [1] -196.3387 # }"},{"path":"https://easystats.github.io/performance/reference/performance_cv.html","id":null,"dir":"Reference","previous_headings":"","what":"Cross-validated model performance — performance_cv","title":"Cross-validated model performance — performance_cv","text":"function cross-validates regression models user-supplied new sample using holdout (train-test), k-fold, leave-one-cross-validation.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_cv.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Cross-validated model performance — performance_cv","text":"","code":"performance_cv( model, data = NULL, method = c(\"holdout\", \"k_fold\", \"loo\"), metrics = \"all\", prop = 0.3, k = 5, stack = TRUE, verbose = TRUE, ... 
)"},{"path":"https://easystats.github.io/performance/reference/performance_cv.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Cross-validated model performance — performance_cv","text":"model regression model. data Optional. data frame containing variables model used cross-validation sample. method Character string, indicating cross-validation method use: whether holdout (\"holdout\", aka train-test), k-fold (\"k_fold\"), leave-one-(\"loo\"). data supplied, argument ignored. metrics Can \"\", \"common\" character vector metrics computed (c(\"ELPD\", \"Deviance\", \"MSE\", \"RMSE\", \"R2\")). \"common\" compute R2 RMSE. prop method = \"holdout\", proportion sample hold test sample? k method = \"k_fold\", number folds use. stack Logical. method = \"k_fold\", performance computed stacking residuals holdout fold calculating metric stacked data (TRUE, default) performance computed calculating metrics within holdout fold averaging performance across fold (FALSE)? verbose Toggle warnings. ... used.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_cv.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Cross-validated model performance — performance_cv","text":"data frame columns metric requested, well k method = \"holdout\" Method used cross-validation. 
method = \"holdout\" stack = TRUE, standard error (standard deviation across holdout folds) metric also included.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_cv.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Cross-validated model performance — performance_cv","text":"","code":"model <- lm(mpg ~ wt + cyl, data = mtcars) performance_cv(model) #> # Cross-validation performance (30% holdout method) #> #> MSE | RMSE | R2 #> ----------------- #> 6.3 | 2.5 | 0.75"},{"path":"https://easystats.github.io/performance/reference/performance_hosmer.html","id":null,"dir":"Reference","previous_headings":"","what":"Hosmer-Lemeshow goodness-of-fit test — performance_hosmer","title":"Hosmer-Lemeshow goodness-of-fit test — performance_hosmer","text":"Check model quality logistic regression models.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_hosmer.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Hosmer-Lemeshow goodness-of-fit test — performance_hosmer","text":"","code":"performance_hosmer(model, n_bins = 10)"},{"path":"https://easystats.github.io/performance/reference/performance_hosmer.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Hosmer-Lemeshow goodness-of-fit test — performance_hosmer","text":"model glm-object binomial-family. 
n_bins Numeric, number bins divide data.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_hosmer.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Hosmer-Lemeshow goodness-of-fit test — performance_hosmer","text":"object class hoslem_test following values: chisq, Hosmer-Lemeshow chi-squared statistic; df, degrees freedom p.value p-value goodness--fit test.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_hosmer.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Hosmer-Lemeshow goodness-of-fit test — performance_hosmer","text":"well-fitting model shows significant difference model observed data, .e. reported p-value greater 0.05.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_hosmer.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Hosmer-Lemeshow goodness-of-fit test — performance_hosmer","text":"Hosmer, D. W., Lemeshow, S. (2000). Applied Logistic Regression. Hoboken, NJ, USA: John Wiley Sons, Inc. 
doi:10.1002/0471722146","code":""},{"path":"https://easystats.github.io/performance/reference/performance_hosmer.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Hosmer-Lemeshow goodness-of-fit test — performance_hosmer","text":"","code":"model <- glm(vs ~ wt + mpg, data = mtcars, family = \"binomial\") performance_hosmer(model) #> # Hosmer-Lemeshow Goodness-of-Fit Test #> #> Chi-squared: 5.137 #> df: 8 #> p-value: 0.743 #> #> Summary: model seems to fit well."},{"path":"https://easystats.github.io/performance/reference/performance_logloss.html","id":null,"dir":"Reference","previous_headings":"","what":"Log Loss — performance_logloss","title":"Log Loss — performance_logloss","text":"Compute log loss models binary outcome.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_logloss.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Log Loss — performance_logloss","text":"","code":"performance_logloss(model, verbose = TRUE, ...)"},{"path":"https://easystats.github.io/performance/reference/performance_logloss.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Log Loss — performance_logloss","text":"model Model binary outcome. verbose Toggle warnings. ... Currently used.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_logloss.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Log Loss — performance_logloss","text":"Numeric, log loss model.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_logloss.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Log Loss — performance_logloss","text":"Logistic regression models predict probability outcome \"success\" \"failure\" (1 0 etc.). performance_logloss() evaluates good bad predicted probabilities . 
High values indicate bad predictions, low values indicate good predictions. lower log-loss, better model predicts outcome.","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/reference/performance_logloss.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Log Loss — performance_logloss","text":"","code":"data(mtcars) m <- glm(formula = vs ~ hp + wt, family = binomial, data = mtcars) performance_logloss(m) #> [1] 0.2517054"},{"path":"https://easystats.github.io/performance/reference/performance_mae.html","id":null,"dir":"Reference","previous_headings":"","what":"Mean Absolute Error of Models — performance_mae","title":"Mean Absolute Error of Models — performance_mae","text":"Compute mean absolute error models.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_mae.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Mean Absolute Error of Models — performance_mae","text":"","code":"performance_mae(model, ...) mae(model, ...)"},{"path":"https://easystats.github.io/performance/reference/performance_mae.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Mean Absolute Error of Models — performance_mae","text":"model model. ... 
Arguments passed lme4::bootMer() boot::boot() bootstrapped ICC, R2, RMSE etc.; variance_decomposition(), arguments passed brms::posterior_predict().","code":""},{"path":"https://easystats.github.io/performance/reference/performance_mae.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Mean Absolute Error of Models — performance_mae","text":"Numeric, mean absolute error model.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_mae.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Mean Absolute Error of Models — performance_mae","text":"","code":"data(mtcars) m <- lm(mpg ~ hp + gear, data = mtcars) performance_mae(m) #> [1] 2.545822"},{"path":"https://easystats.github.io/performance/reference/performance_mse.html","id":null,"dir":"Reference","previous_headings":"","what":"Mean Square Error of Linear Models — performance_mse","title":"Mean Square Error of Linear Models — performance_mse","text":"Compute mean square error linear models.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_mse.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Mean Square Error of Linear Models — performance_mse","text":"","code":"performance_mse(model, ...) mse(model, ...)"},{"path":"https://easystats.github.io/performance/reference/performance_mse.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Mean Square Error of Linear Models — performance_mse","text":"model model. ... 
Arguments passed lme4::bootMer() boot::boot() bootstrapped ICC, R2, RMSE etc.; variance_decomposition(), arguments passed brms::posterior_predict().","code":""},{"path":"https://easystats.github.io/performance/reference/performance_mse.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Mean Square Error of Linear Models — performance_mse","text":"Numeric, mean square error model.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_mse.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Mean Square Error of Linear Models — performance_mse","text":"mean square error mean sum squared residuals, .e. measures average squares errors. Less technically speaking, mean square error can considered variance residuals, .e. variation outcome model explain. Lower values (closer zero) indicate better fit.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_mse.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Mean Square Error of Linear Models — performance_mse","text":"","code":"data(mtcars) m <- lm(mpg ~ hp + gear, data = mtcars) performance_mse(m) #> [1] 8.752858"},{"path":"https://easystats.github.io/performance/reference/performance_pcp.html","id":null,"dir":"Reference","previous_headings":"","what":"Percentage of Correct Predictions — performance_pcp","title":"Percentage of Correct Predictions — performance_pcp","text":"Percentage correct predictions (PCP) models binary outcome.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_pcp.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Percentage of Correct Predictions — performance_pcp","text":"","code":"performance_pcp(model, ci = 0.95, method = \"Herron\", verbose = 
TRUE)"},{"path":"https://easystats.github.io/performance/reference/performance_pcp.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Percentage of Correct Predictions — performance_pcp","text":"model Model binary outcome. ci level confidence interval. method Name method calculate PCP (see 'Details'). Default \"Herron\". May abbreviated. verbose Toggle warnings.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_pcp.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Percentage of Correct Predictions — performance_pcp","text":"list several elements: percentage correct predictions full null model, confidence intervals, well chi-squared p-value Likelihood-Ratio-Test full null model.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_pcp.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Percentage of Correct Predictions — performance_pcp","text":"method = \"Gelman-Hill\" (\"gelman_hill\") computes PCP based proposal Gelman Hill 2017, 99, defined proportion cases deterministic prediction wrong, .e. proportion predicted probability 0.5, although y=0 (vice versa) (see also Herron 1999, 90). method = \"Herron\" (\"herron\") computes modified version PCP (Herron 1999, 90-92), sum predicted probabilities, y=1, plus sum 1 - predicted probabilities, y=0, divided number observations. approach said accurate. PCP ranges 0 1, values closer 1 mean model predicts outcome better models PCP closer 0. general, PCP 0.5 (.e. 50\\ Furthermore, PCP full model considerably null model's PCP. 
likelihood-ratio test indicates whether model significantly better fit null-model (cases, p < 0.05).","code":""},{"path":"https://easystats.github.io/performance/reference/performance_pcp.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Percentage of Correct Predictions — performance_pcp","text":"Herron, M. (1999). Postestimation Uncertainty Limited Dependent Variable Models. Political Analysis, 8, 83–98. Gelman, ., Hill, J. (2007). Data analysis using regression multilevel/hierarchical models. Cambridge; New York: Cambridge University Press, 99.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_pcp.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Percentage of Correct Predictions — performance_pcp","text":"","code":"data(mtcars) m <- glm(formula = vs ~ hp + wt, family = binomial, data = mtcars) performance_pcp(m) #> # Percentage of Correct Predictions from Logistic Regression Model #> #> Full model: 83.75% [70.96% - 96.53%] #> Null model: 50.78% [33.46% - 68.10%] #> #> # Likelihood-Ratio-Test #> #> Chi-squared: 27.751 #> df: 2.000 #> p-value: 0.000 #> performance_pcp(m, method = \"Gelman-Hill\") #> # Percentage of Correct Predictions from Logistic Regression Model #> #> Full model: 87.50% [76.04% - 98.96%] #> Null model: 56.25% [39.06% - 73.44%] #> #> # Likelihood-Ratio-Test #> #> Chi-squared: 27.751 #> df: 2.000 #> p-value: 0.000 #>"},{"path":"https://easystats.github.io/performance/reference/performance_rmse.html","id":null,"dir":"Reference","previous_headings":"","what":"Root Mean Squared Error — performance_rmse","title":"Root Mean Squared Error — performance_rmse","text":"Compute root mean squared error (mixed effects) models, including Bayesian regression 
models.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_rmse.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Root Mean Squared Error — performance_rmse","text":"","code":"performance_rmse( model, normalized = FALSE, ci = NULL, iterations = 100, ci_method = NULL, verbose = TRUE, ... ) rmse( model, normalized = FALSE, ci = NULL, iterations = 100, ci_method = NULL, verbose = TRUE, ... )"},{"path":"https://easystats.github.io/performance/reference/performance_rmse.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Root Mean Squared Error — performance_rmse","text":"model model. normalized Logical, use TRUE normalized rmse returned. ci Confidence resp. credible interval level. icc(), r2(), rmse(), confidence intervals based bootstrapped samples ICC, R2 RMSE value. See iterations. iterations Number bootstrap-replicates computing confidence intervals ICC, R2, RMSE etc. ci_method Character string, indicating bootstrap-method. NULL (default), case lme4::bootMer() used bootstrapped confidence intervals. However, bootstrapped intervals calculated way, try ci_method = \"boot\", falls back boot::boot(). may successfully return bootstrapped confidence intervals, bootstrapped samples may appropriate multilevel structure model. also option ci_method = \"analytical\", tries calculate analytical confidence assuming chi-squared distribution. However, intervals rather inaccurate often narrow. recommended calculate bootstrapped confidence intervals mixed models. verbose Toggle warnings messages. ... 
Arguments passed lme4::bootMer() boot::boot() bootstrapped ICC, R2, RMSE etc.; variance_decomposition(), arguments passed brms::posterior_predict().","code":""},{"path":"https://easystats.github.io/performance/reference/performance_rmse.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Root Mean Squared Error — performance_rmse","text":"Numeric, root mean squared error.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_rmse.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Root Mean Squared Error — performance_rmse","text":"RMSE square root variance residuals indicates absolute fit model data (difference observed data model's predicted values). can interpreted standard deviation unexplained variance, units response variable. Lower values indicate better model fit. normalized RMSE proportion RMSE related range response variable. Hence, lower values indicate less residual variance.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_rmse.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Root Mean Squared Error — performance_rmse","text":"","code":"data(Orthodont, package = \"nlme\") m <- nlme::lme(distance ~ age, data = Orthodont) # RMSE performance_rmse(m, normalized = FALSE) #> [1] 1.086327 # normalized RMSE performance_rmse(m, normalized = TRUE) #> [1] 0.07242178"},{"path":"https://easystats.github.io/performance/reference/performance_roc.html","id":null,"dir":"Reference","previous_headings":"","what":"Simple ROC curve — performance_roc","title":"Simple ROC curve — performance_roc","text":"function calculates simple ROC curves x/y coordinates based response predictions binomial model. 
returns area curve (AUC) percentage, corresponds probability randomly chosen observation \"condition 1\" correctly classified model higher probability \"condition 1\" randomly chosen \"condition 2\" observation. Applying .data.frame() output returns data frame containing following: Sensitivity (actually corresponds 1 - Specificity): False Positive Rate. Sensitivity: True Positive Rate, proportion correctly classified \"condition 1\" observations.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_roc.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Simple ROC curve — performance_roc","text":"","code":"performance_roc(x, ..., predictions, new_data)"},{"path":"https://easystats.github.io/performance/reference/performance_roc.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Simple ROC curve — performance_roc","text":"x numeric vector, representing outcome (0/1), model binomial outcome. ... One models binomial outcome. case, new_data ignored. predictions x numeric, numeric vector length x, representing actual predicted values. new_data x model, data frame passed predict() newdata-argument. 
NULL, ROC full model calculated.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_roc.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Simple ROC curve — performance_roc","text":"data frame three columns, x/y-coordinate pairs ROC curve (Sensitivity Specificity), column model name.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_roc.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Simple ROC curve — performance_roc","text":"also plot()-method implemented see-package.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_roc.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Simple ROC curve — performance_roc","text":"","code":"library(bayestestR) data(iris) set.seed(123) iris$y <- rbinom(nrow(iris), size = 1, .3) folds <- sample(nrow(iris), size = nrow(iris) / 8, replace = FALSE) test_data <- iris[folds, ] train_data <- iris[-folds, ] model <- glm(y ~ Sepal.Length + Sepal.Width, data = train_data, family = \"binomial\") as.data.frame(performance_roc(model, new_data = test_data)) #> Sensitivity Specificity Model #> 1 0.0000000 0.00000000 Model 1 #> 2 0.1428571 0.00000000 Model 1 #> 3 0.1428571 0.09090909 Model 1 #> 4 0.1428571 0.18181818 Model 1 #> 5 0.1428571 0.27272727 Model 1 #> 6 0.1428571 0.36363636 Model 1 #> 7 0.2857143 0.36363636 Model 1 #> 8 0.2857143 0.45454545 Model 1 #> 9 0.2857143 0.54545455 Model 1 #> 10 0.2857143 0.63636364 Model 1 #> 11 0.2857143 0.72727273 Model 1 #> 12 0.4285714 0.72727273 Model 1 #> 13 0.5714286 0.72727273 Model 1 #> 14 0.5714286 0.81818182 Model 1 #> 15 0.7142857 0.81818182 Model 1 #> 16 0.8571429 0.81818182 Model 1 #> 17 0.8571429 0.90909091 Model 1 #> 18 1.0000000 0.90909091 Model 1 #> 19 1.0000000 1.00000000 Model 1 #> 20 1.0000000 1.00000000 Model 1 as.numeric(performance_roc(model)) #> [1] 0.540825 roc <- 
performance_roc(model, new_data = test_data) area_under_curve(roc$Specificity, roc$Sensitivity) #> [1] 0.3766234 if (interactive()) { m1 <- glm(y ~ Sepal.Length + Sepal.Width, data = iris, family = \"binomial\") m2 <- glm(y ~ Sepal.Length + Petal.Width, data = iris, family = \"binomial\") m3 <- glm(y ~ Sepal.Length + Species, data = iris, family = \"binomial\") performance_roc(m1, m2, m3) # if you have `see` package installed, you can also plot comparison of # ROC curves for different models if (require(\"see\")) plot(performance_roc(m1, m2, m3)) }"},{"path":"https://easystats.github.io/performance/reference/performance_rse.html","id":null,"dir":"Reference","previous_headings":"","what":"Residual Standard Error for Linear Models — performance_rse","title":"Residual Standard Error for Linear Models — performance_rse","text":"Compute residual standard error linear models.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_rse.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Residual Standard Error for Linear Models — performance_rse","text":"","code":"performance_rse(model)"},{"path":"https://easystats.github.io/performance/reference/performance_rse.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Residual Standard Error for Linear Models — performance_rse","text":"model model.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_rse.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Residual Standard Error for Linear Models — performance_rse","text":"Numeric, residual standard error model.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_rse.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Residual Standard Error for Linear Models — performance_rse","text":"residual standard error square root residual sum squares divided 
residual degrees freedom.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_rse.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Residual Standard Error for Linear Models — performance_rse","text":"","code":"data(mtcars) m <- lm(mpg ~ hp + gear, data = mtcars) performance_rse(m) #> [1] 3.107785"},{"path":"https://easystats.github.io/performance/reference/performance_score.html","id":null,"dir":"Reference","previous_headings":"","what":"Proper Scoring Rules — performance_score","title":"Proper Scoring Rules — performance_score","text":"Calculates logarithmic, quadratic/Brier spherical score model binary count outcome.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_score.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Proper Scoring Rules — performance_score","text":"","code":"performance_score(model, verbose = TRUE, ...)"},{"path":"https://easystats.github.io/performance/reference/performance_score.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Proper Scoring Rules — performance_score","text":"model Model binary count outcome. verbose Toggle warnings. ... Arguments functions, usually used internally.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_score.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Proper Scoring Rules — performance_score","text":"list three elements, logarithmic, quadratic/Brier spherical score.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_score.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Proper Scoring Rules — performance_score","text":"Proper scoring rules can used evaluate quality model predictions model fit. performance_score() calculates logarithmic, quadratic/Brier spherical scoring rules. 
spherical rule takes values interval [0, 1], values closer 1 indicating accurate model, logarithmic rule interval [-Inf, 0], values closer 0 indicating accurate model. stan_lmer() stan_glmer() models, predicted values based posterior_predict(), instead predict(). Thus, results may differ expected non-Bayesian counterparts lme4.","code":""},{"path":"https://easystats.github.io/performance/reference/performance_score.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Proper Scoring Rules — performance_score","text":"Code partially based GLMMadaptive::scoring_rules().","code":""},{"path":"https://easystats.github.io/performance/reference/performance_score.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Proper Scoring Rules — performance_score","text":"Carvalho, . (2016). overview applications proper scoring rules. Decision Analysis 13, 223–242. doi:10.1287/deca.2016.0337","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/reference/performance_score.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Proper Scoring Rules — performance_score","text":"","code":"## Dobson (1990) Page 93: Randomized Controlled Trial : counts <- c(18, 17, 15, 20, 10, 20, 25, 13, 12) outcome <- gl(3, 1, 9) treatment <- gl(3, 3) model <- glm(counts ~ outcome + treatment, family = poisson()) performance_score(model) #> # Proper Scoring Rules #> #> logarithmic: -2.5979 #> quadratic: 0.2095 #> spherical: 0.3238 # \\donttest{ data(Salamanders, package = \"glmmTMB\") model <- glmmTMB::glmmTMB( count ~ spp + mined + (1 | site), zi = ~ spp + mined, family = nbinom2(), data = Salamanders ) performance_score(model) #> # Proper Scoring Rules #> #> logarithmic: -1.3275 #> quadratic: 262.1651 #> spherical: 0.0316 # }"},{"path":"https://easystats.github.io/performance/reference/r2.html","id":null,"dir":"Reference","previous_headings":"","what":"Compute the model's R2 
— r2","title":"Compute the model's R2 — r2","text":"Calculate R2, also known coefficient determination, value different model objects. Depending model, R2, pseudo-R2, marginal / adjusted R2 values returned.","code":""},{"path":"https://easystats.github.io/performance/reference/r2.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Compute the model's R2 — r2","text":"","code":"r2(model, ...) # Default S3 method r2(model, ci = NULL, verbose = TRUE, ...) # S3 method for class 'mlm' r2(model, multivariate = TRUE, ...) # S3 method for class 'merMod' r2(model, ci = NULL, tolerance = 1e-05, ...)"},{"path":"https://easystats.github.io/performance/reference/r2.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Compute the model's R2 — r2","text":"model statistical model. ... Arguments passed related r2-methods. ci Confidence interval level, scalar. NULL (default), confidence intervals R2 calculated. verbose Logical. details R2 CI methods given (TRUE) (FALSE)? multivariate Logical. multiple R2 values reported separated response (FALSE) single R2 reported combined across responses computed r2_mlm (TRUE). tolerance Tolerance singularity check random effects, decide whether compute random effect variances conditional r-squared . Indicates value convergence result accepted. r2_nakagawa() returns warning, stating random effect variances computed (thus, conditional r-squared NA), decrease tolerance-level. See also check_singularity().","code":""},{"path":"https://easystats.github.io/performance/reference/r2.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Compute the model's R2 — r2","text":"Returns list containing values related appropriate R2 given model (NULL R2 extracted). 
See list : Logistic models: Tjur's R2 General linear models: Nagelkerke's R2 Multinomial Logit: McFadden's R2 Models zero-inflation: R2 zero-inflated models Mixed models: Nakagawa's R2 Bayesian models: R2 bayes","code":""},{"path":"https://easystats.github.io/performance/reference/r2.html","id":"note","dir":"Reference","previous_headings":"","what":"Note","title":"Compute the model's R2 — r2","text":"r2()-method defined given model class, r2() tries return \"generic\" r-squared value, calculated following: 1-sum((y-y_hat)^2)/sum((y-y_bar)^2)","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/reference/r2.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Compute the model's R2 — r2","text":"","code":"# Pseudo r-squared for GLM model <- glm(vs ~ wt + mpg, data = mtcars, family = \"binomial\") r2(model) #> # R2 for Logistic Regression #> Tjur's R2: 0.478 # r-squared including confidence intervals model <- lm(mpg ~ wt + hp, data = mtcars) r2(model, ci = 0.95) #> R2: 0.827 [0.654, 0.906] #> adj. R2: 0.815 [0.632, 0.899] model <- lme4::lmer(Sepal.Length ~ Petal.Length + (1 | Species), data = iris) r2(model) #> # R2 for Mixed Models #> #> Conditional R2: 0.969 #> Marginal R2: 0.658"},{"path":"https://easystats.github.io/performance/reference/r2_bayes.html","id":null,"dir":"Reference","previous_headings":"","what":"Bayesian R2 — r2_bayes","title":"Bayesian R2 — r2_bayes","text":"Compute R2 Bayesian models. mixed models (including random part), additionally computes R2 related fixed effects (marginal R2). r2_bayes() returns single R2 value, r2_posterior() returns posterior sample Bayesian R2 values.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_bayes.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Bayesian R2 — r2_bayes","text":"","code":"r2_bayes(model, robust = TRUE, ci = 0.95, verbose = TRUE, ...) r2_posterior(model, ...) 
# S3 method for class 'brmsfit' r2_posterior(model, verbose = TRUE, ...) # S3 method for class 'stanreg' r2_posterior(model, verbose = TRUE, ...) # S3 method for class 'BFBayesFactor' r2_posterior(model, average = FALSE, prior_odds = NULL, verbose = TRUE, ...)"},{"path":"https://easystats.github.io/performance/reference/r2_bayes.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Bayesian R2 — r2_bayes","text":"model Bayesian regression model (brms, rstanarm, BayesFactor, etc). robust Logical, TRUE, median instead mean used calculate central tendency variances. ci Value vector probability CI (0 1) estimated. verbose Toggle warnings. ... Arguments passed r2_posterior(). average Compute model-averaged index? See bayestestR::weighted_posteriors(). prior_odds Optional vector prior odds models compared first model (denominator, BFBayesFactor objects). data.frames, used basis weighting.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_bayes.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Bayesian R2 — r2_bayes","text":"list Bayesian R2 value. mixed models, list Bayesian R2 value marginal Bayesian R2 value. standard errors credible intervals R2 values saved attributes.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_bayes.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Bayesian R2 — r2_bayes","text":"r2_bayes() returns \"unadjusted\" R2 value. See r2_loo() calculate LOO-adjusted R2, comes conceptually closer adjusted R2 measure. mixed models, conditional marginal R2 returned. marginal R2 considers variance fixed effects, conditional R2 takes fixed random effects account. Technically, since r2_bayes() relies rstantools::bayes_R2(), \"marginal\" R2 calls bayes_R2(re.form = NA), \"conditional\" R2 calls bayes_R2(re.form = NULL). 
re.form argument passed rstantools::posterior_epred(), internally called bayes_R2(). Note \"marginal\" \"conditional\", refer wording suggested Nakagawa et al. 2017. Thus, use term \"marginal\" sense random effects integrated , \"ignored\". r2_posterior() actual workhorse r2_bayes() returns posterior sample Bayesian R2 values.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_bayes.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Bayesian R2 — r2_bayes","text":"Gelman, ., Goodrich, B., Gabry, J., Vehtari, . (2018). R-squared Bayesian regression models. American Statistician, 1–6. doi:10.1080/00031305.2018.1549100 Nakagawa, S., Johnson, P. C. D., Schielzeth, H. (2017). coefficient determination R2 intra-class correlation coefficient generalized linear mixed-effects models revisited expanded. Journal Royal Society Interface, 14(134), 20170213.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_bayes.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Bayesian R2 — r2_bayes","text":"","code":"library(performance) # \\donttest{ model <- suppressWarnings(rstanarm::stan_glm( mpg ~ wt + cyl, data = mtcars, chains = 1, iter = 500, refresh = 0, show_messages = FALSE )) r2_bayes(model) #> # Bayesian R2 with Compatibility Interval #> #> Conditional R2: 0.811 (95% CI [0.681, 0.884]) model <- suppressWarnings(rstanarm::stan_lmer( Petal.Length ~ Petal.Width + (1 | Species), data = iris, chains = 1, iter = 500, refresh = 0 )) r2_bayes(model) #> # Bayesian R2 with Compatibility Interval #> #> Conditional R2: 0.954 (95% CI [0.951, 0.957]) #> Marginal R2: 0.387 (95% CI [0.174, 0.611]) # } # \\donttest{ model <- suppressWarnings(brms::brm( mpg ~ wt + cyl, data = mtcars, silent = 2, refresh = 0 )) r2_bayes(model) #> # Bayesian R2 with Compatibility Interval #> #> Conditional R2: 0.826 (95% CI [0.759, 0.855]) model <- suppressWarnings(brms::brm( 
Petal.Length ~ Petal.Width + (1 | Species), data = iris, silent = 2, refresh = 0 )) r2_bayes(model) #> # Bayesian R2 with Compatibility Interval #> #> Conditional R2: 0.954 (95% CI [0.951, 0.957]) #> Marginal R2: 0.383 (95% CI [0.169, 0.615]) # }"},{"path":"https://easystats.github.io/performance/reference/r2_coxsnell.html","id":null,"dir":"Reference","previous_headings":"","what":"Cox & Snell's R2 — r2_coxsnell","title":"Cox & Snell's R2 — r2_coxsnell","text":"Calculates pseudo-R2 value based proposal Cox & Snell (1989).","code":""},{"path":"https://easystats.github.io/performance/reference/r2_coxsnell.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Cox & Snell's R2 — r2_coxsnell","text":"","code":"r2_coxsnell(model, ...)"},{"path":"https://easystats.github.io/performance/reference/r2_coxsnell.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Cox & Snell's R2 — r2_coxsnell","text":"model Model binary outcome. ... Currently used.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_coxsnell.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Cox & Snell's R2 — r2_coxsnell","text":"named vector R2 value.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_coxsnell.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Cox & Snell's R2 — r2_coxsnell","text":"index proposed Cox Snell (1989, pp. 208-9) , apparently independently, Magee (1990); suggested earlier binary response models Maddala (1983). However, index achieves maximum less 1 discrete models (.e. 
models whose likelihood product probabilities) maximum 1, instead densities, can become infinite (Nagelkerke, 1991).","code":""},{"path":"https://easystats.github.io/performance/reference/r2_coxsnell.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Cox & Snell's R2 — r2_coxsnell","text":"Cox, D. R., Snell, E. J. (1989). Analysis binary data (Vol. 32). Monographs Statistics Applied Probability. Magee, L. (1990). R 2 measures based Wald likelihood ratio joint significance tests. American Statistician, 44(3), 250-253. Maddala, G. S. (1986). Limited-dependent qualitative variables econometrics (. 3). Cambridge university press. Nagelkerke, N. J. (1991). note general definition coefficient determination. Biometrika, 78(3), 691-692.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_coxsnell.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Cox & Snell's R2 — r2_coxsnell","text":"","code":"model <- glm(vs ~ wt + mpg, data = mtcars, family = \"binomial\") r2_coxsnell(model) #> Cox & Snell's R2 #> 0.4401407"},{"path":"https://easystats.github.io/performance/reference/r2_efron.html","id":null,"dir":"Reference","previous_headings":"","what":"Efron's R2 — r2_efron","title":"Efron's R2 — r2_efron","text":"Calculates Efron's pseudo R2.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_efron.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Efron's R2 — r2_efron","text":"","code":"r2_efron(model)"},{"path":"https://easystats.github.io/performance/reference/r2_efron.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Efron's R2 — r2_efron","text":"model Generalized linear model.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_efron.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Efron's R2 — 
r2_efron","text":"R2 value.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_efron.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Efron's R2 — r2_efron","text":"Efron's R2 calculated taking sum squared model residuals, divided total variability dependent variable. R2 equals squared correlation predicted values actual values, however, note model residuals generalized linear models generally comparable OLS.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_efron.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Efron's R2 — r2_efron","text":"Efron, B. (1978). Regression ANOVA zero-one data: Measures residual variation. Journal American Statistical Association, 73, 113-121.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_efron.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Efron's R2 — r2_efron","text":"","code":"## Dobson (1990) Page 93: Randomized Controlled Trial: counts <- c(18, 17, 15, 20, 10, 20, 25, 13, 12) # outcome <- gl(3, 1, 9) treatment <- gl(3, 3) model <- glm(counts ~ outcome + treatment, family = poisson()) r2_efron(model) #> [1] 0.5265152"},{"path":"https://easystats.github.io/performance/reference/r2_ferrari.html","id":null,"dir":"Reference","previous_headings":"","what":"Ferrari's and Cribari-Neto's R2 — r2_ferrari","title":"Ferrari's and Cribari-Neto's R2 — r2_ferrari","text":"Calculates Ferrari's Cribari-Neto's pseudo R2 (beta-regression models).","code":""},{"path":"https://easystats.github.io/performance/reference/r2_ferrari.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Ferrari's and Cribari-Neto's R2 — r2_ferrari","text":"","code":"r2_ferrari(model, ...) 
# Default S3 method r2_ferrari(model, correct_bounds = FALSE, ...)"},{"path":"https://easystats.github.io/performance/reference/r2_ferrari.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Ferrari's and Cribari-Neto's R2 — r2_ferrari","text":"model Generalized linear, particular beta-regression model. ... Currently used. correct_bounds Logical, whether correct bounds response variable avoid 0 1. TRUE, response variable normalized \"compressed\", .e. zeros ones excluded.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_ferrari.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Ferrari's and Cribari-Neto's R2 — r2_ferrari","text":"list pseudo R2 value.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_ferrari.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Ferrari's and Cribari-Neto's R2 — r2_ferrari","text":"Ferrari, S., Cribari-Neto, F. (2004). Beta Regression Modelling Rates Proportions. Journal Applied Statistics, 31(7), 799–815. 
doi:10.1080/0266476042000214501","code":""},{"path":"https://easystats.github.io/performance/reference/r2_ferrari.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Ferrari's and Cribari-Neto's R2 — r2_ferrari","text":"","code":"data(\"GasolineYield\", package = \"betareg\") model <- betareg::betareg(yield ~ batch + temp, data = GasolineYield) r2_ferrari(model) #> # R2 for Generalized Linear Regression #> Ferrari's R2: 0.962"},{"path":"https://easystats.github.io/performance/reference/r2_kullback.html","id":null,"dir":"Reference","previous_headings":"","what":"Kullback-Leibler R2 — r2_kullback","title":"Kullback-Leibler R2 — r2_kullback","text":"Calculates Kullback-Leibler-divergence-based R2 generalized linear models.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_kullback.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Kullback-Leibler R2 — r2_kullback","text":"","code":"r2_kullback(model, ...) # S3 method for class 'glm' r2_kullback(model, adjust = TRUE, ...)"},{"path":"https://easystats.github.io/performance/reference/r2_kullback.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Kullback-Leibler R2 — r2_kullback","text":"model generalized linear model. ... Additional arguments. Currently used. adjust Logical, TRUE (default), adjusted R2 value returned.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_kullback.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Kullback-Leibler R2 — r2_kullback","text":"named vector R2 value.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_kullback.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Kullback-Leibler R2 — r2_kullback","text":"Cameron, . C. Windmeijer, . G. (1997) R-squared measure goodness fit common nonlinear regression models. 
Journal Econometrics, 77: 329-342.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_kullback.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Kullback-Leibler R2 — r2_kullback","text":"","code":"model <- glm(vs ~ wt + mpg, data = mtcars, family = \"binomial\") r2_kullback(model) #> Kullback-Leibler R2 #> 0.3834362"},{"path":"https://easystats.github.io/performance/reference/r2_loo.html","id":null,"dir":"Reference","previous_headings":"","what":"LOO-adjusted R2 — r2_loo","title":"LOO-adjusted R2 — r2_loo","text":"Compute LOO-adjusted R2.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_loo.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"LOO-adjusted R2 — r2_loo","text":"","code":"r2_loo(model, robust = TRUE, ci = 0.95, verbose = TRUE, ...) r2_loo_posterior(model, ...) # S3 method for class 'brmsfit' r2_loo_posterior(model, verbose = TRUE, ...) # S3 method for class 'stanreg' r2_loo_posterior(model, verbose = TRUE, ...)"},{"path":"https://easystats.github.io/performance/reference/r2_loo.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"LOO-adjusted R2 — r2_loo","text":"model Bayesian regression model (brms, rstanarm, BayesFactor, etc). robust Logical, TRUE, median instead mean used calculate central tendency variances. ci Value vector probability CI (0 1) estimated. verbose Toggle warnings. ... Arguments passed r2_posterior().","code":""},{"path":"https://easystats.github.io/performance/reference/r2_loo.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"LOO-adjusted R2 — r2_loo","text":"list Bayesian R2 value. mixed models, list Bayesian R2 value marginal Bayesian R2 value. standard errors credible intervals R2 values saved attributes. list LOO-adjusted R2 value. 
standard errors credible intervals R2 values saved attributes.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_loo.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"LOO-adjusted R2 — r2_loo","text":"r2_loo() returns \"adjusted\" R2 value computed using leave-one-out-adjusted posterior distribution. conceptually similar adjusted/unbiased R2 estimate classical regression modeling. See r2_bayes() \"unadjusted\" R2. Mixed models currently fully supported. r2_loo_posterior() actual workhorse r2_loo() returns posterior sample LOO-adjusted Bayesian R2 values.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_loo.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"LOO-adjusted R2 — r2_loo","text":"","code":"model <- suppressWarnings(rstanarm::stan_glm( mpg ~ wt + cyl, data = mtcars, chains = 1, iter = 500, refresh = 0, show_messages = FALSE )) r2_loo(model) #> # LOO-adjusted R2 with Compatibility Interval #> #> Conditional R2: 0.794 (95% CI [0.687, 0.879])"},{"path":"https://easystats.github.io/performance/reference/r2_mcfadden.html","id":null,"dir":"Reference","previous_headings":"","what":"McFadden's R2 — r2_mcfadden","title":"McFadden's R2 — r2_mcfadden","text":"Calculates McFadden's pseudo R2.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_mcfadden.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"McFadden's R2 — r2_mcfadden","text":"","code":"r2_mcfadden(model, ...)"},{"path":"https://easystats.github.io/performance/reference/r2_mcfadden.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"McFadden's R2 — r2_mcfadden","text":"model Generalized linear multinomial logit (mlogit) model. ... 
Currently used.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_mcfadden.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"McFadden's R2 — r2_mcfadden","text":"models, list McFadden's R2 adjusted McFadden's R2 value. models, McFadden's R2 available.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_mcfadden.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"McFadden's R2 — r2_mcfadden","text":"McFadden, D. (1987). Regression-based specification tests multinomial logit model. Journal econometrics, 34(1-2), 63-82. McFadden, D. (1973). Conditional logit analysis qualitative choice behavior.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_mcfadden.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"McFadden's R2 — r2_mcfadden","text":"","code":"if (require(\"mlogit\")) { data(\"Fishing\", package = \"mlogit\") Fish <- mlogit.data(Fishing, varying = c(2:9), shape = \"wide\", choice = \"mode\") model <- mlogit(mode ~ price + catch, data = Fish) r2_mcfadden(model) } #> Loading required package: mlogit #> Loading required package: dfidx #> #> Attaching package: ‘dfidx’ #> The following object is masked from ‘package:MASS’: #> #> select #> The following object is masked from ‘package:stats’: #> #> filter #> McFadden's R2 #> 0.17823"},{"path":"https://easystats.github.io/performance/reference/r2_mckelvey.html","id":null,"dir":"Reference","previous_headings":"","what":"McKelvey & Zavoinas R2 — r2_mckelvey","title":"McKelvey & Zavoinas R2 — r2_mckelvey","text":"Calculates McKelvey Zavoinas pseudo R2.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_mckelvey.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"McKelvey & Zavoinas R2 — 
r2_mckelvey","text":"","code":"r2_mckelvey(model)"},{"path":"https://easystats.github.io/performance/reference/r2_mckelvey.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"McKelvey & Zavoinas R2 — r2_mckelvey","text":"model Generalized linear model.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_mckelvey.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"McKelvey & Zavoinas R2 — r2_mckelvey","text":"R2 value.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_mckelvey.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"McKelvey & Zavoinas R2 — r2_mckelvey","text":"McKelvey Zavoinas R2 based explained variance, variance predicted response divided sum variance predicted response residual variance. binomial models, residual variance either pi^2/3 logit-link 1 probit-link. poisson-models, residual variance based log-normal approximation, similar distribution-specific variance described ?insight::get_variance.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_mckelvey.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"McKelvey & Zavoinas R2 — r2_mckelvey","text":"McKelvey, R., Zavoina, W. (1975), \"Statistical Model Analysis Ordinal Level Dependent Variables\", Journal Mathematical Sociology 4, S. 
103–120.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_mckelvey.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"McKelvey & Zavoinas R2 — r2_mckelvey","text":"","code":"## Dobson (1990) Page 93: Randomized Controlled Trial: counts <- c(18, 17, 15, 20, 10, 20, 25, 13, 12) # outcome <- gl(3, 1, 9) treatment <- gl(3, 3) model <- glm(counts ~ outcome + treatment, family = poisson()) r2_mckelvey(model) #> McKelvey's R2 #> 0.3776292"},{"path":"https://easystats.github.io/performance/reference/r2_mlm.html","id":null,"dir":"Reference","previous_headings":"","what":"Multivariate R2 — r2_mlm","title":"Multivariate R2 — r2_mlm","text":"Calculates two multivariate R2 values multivariate linear regression.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_mlm.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Multivariate R2 — r2_mlm","text":"","code":"r2_mlm(model, ...)"},{"path":"https://easystats.github.io/performance/reference/r2_mlm.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Multivariate R2 — r2_mlm","text":"model Multivariate linear regression model. ... Currently used.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_mlm.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Multivariate R2 — r2_mlm","text":"named vector R2 values.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_mlm.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Multivariate R2 — r2_mlm","text":"two indexes returned summarize model fit set predictors given system responses. compared default r2 index multivariate linear models, indexes returned function provide single fit value collapsed across responses. 
two returned indexes proposed Van den Burg Lewis (1988) extension metrics proposed Cramer Nicewander (1979). numerous indexes proposed across two papers, two metrics, \\(R_{xy}\\) \\(P_{xy}\\), recommended use Azen Budescu (2006). multivariate linear regression \\(p\\) predictors \\(q\\) responses \\(p > q\\), \\(R_{xy}\\) index computed : $$R_{xy} = 1 - \\prod_{=1}^p (1 - \\rho_i^2)$$ \\(\\rho\\) canonical variate canonical correlation predictors responses. metric symmetric value change roles variables predictors responses swapped. \\(P_{xy}\\) computed : $$P_{xy} = \\frac{q - trace(\\bf{S}_{\\bf{YY}}^{-1}\\bf{S}_{\\bf{YY|X}})}{q}$$ \\(\\bf{S}_{\\bf{YY}}\\) matrix response covariances \\(\\bf{S}_{\\bf{YY|X}}\\) matrix residual covariances given predictors. metric asymmetric can change depending variables considered predictors versus responses.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_mlm.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Multivariate R2 — r2_mlm","text":"Azen, R., & Budescu, D. V. (2006). Comparing predictors multivariate regression models: extension dominance analysis. Journal Educational Behavioral Statistics, 31(2), 157-180. Cramer, E. M., & Nicewander, W. . (1979). symmetric, invariant measures multivariate association. Psychometrika, 44, 43-54. Van den Burg, W., & Lewis, C. (1988). properties two measures multivariate association. 
Psychometrika, 53, 109-122.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_mlm.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Multivariate R2 — r2_mlm","text":"Joseph Luchman","code":""},{"path":"https://easystats.github.io/performance/reference/r2_mlm.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Multivariate R2 — r2_mlm","text":"","code":"model <- lm(cbind(qsec, drat) ~ wt + mpg + cyl, data = mtcars) r2_mlm(model) #> Symmetric Rxy Asymmetric Pxy #> 0.8573111 0.5517522 model_swap <- lm(cbind(wt, mpg, cyl) ~ qsec + drat, data = mtcars) r2_mlm(model_swap) #> Symmetric Rxy Asymmetric Pxy #> 0.8573111 0.3678348"},{"path":"https://easystats.github.io/performance/reference/r2_nagelkerke.html","id":null,"dir":"Reference","previous_headings":"","what":"Nagelkerke's R2 — r2_nagelkerke","title":"Nagelkerke's R2 — r2_nagelkerke","text":"Calculate Nagelkerke's pseudo-R2.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_nagelkerke.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Nagelkerke's R2 — r2_nagelkerke","text":"","code":"r2_nagelkerke(model, ...)"},{"path":"https://easystats.github.io/performance/reference/r2_nagelkerke.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Nagelkerke's R2 — r2_nagelkerke","text":"model generalized linear model, including cumulative links resp. multinomial models. ... 
Currently used.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_nagelkerke.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Nagelkerke's R2 — r2_nagelkerke","text":"named vector R2 value.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_nagelkerke.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Nagelkerke's R2 — r2_nagelkerke","text":"Nagelkerke, N. J. (1991). note general definition coefficient determination. Biometrika, 78(3), 691-692.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_nagelkerke.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Nagelkerke's R2 — r2_nagelkerke","text":"","code":"model <- glm(vs ~ wt + mpg, data = mtcars, family = \"binomial\") r2_nagelkerke(model) #> Nagelkerke's R2 #> 0.5899593"},{"path":"https://easystats.github.io/performance/reference/r2_nakagawa.html","id":null,"dir":"Reference","previous_headings":"","what":"Nakagawa's R2 for mixed models — r2_nakagawa","title":"Nakagawa's R2 for mixed models — r2_nakagawa","text":"Compute marginal conditional r-squared value mixed effects models complex random effects structures.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_nakagawa.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Nakagawa's R2 for mixed models — r2_nakagawa","text":"","code":"r2_nakagawa( model, by_group = FALSE, tolerance = 1e-08, ci = NULL, iterations = 100, ci_method = NULL, null_model = NULL, approximation = \"lognormal\", model_component = NULL, verbose = TRUE, ... )"},{"path":"https://easystats.github.io/performance/reference/r2_nakagawa.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Nakagawa's R2 for mixed models — r2_nakagawa","text":"model mixed effects model. 
by_group Logical, TRUE, returns explained variance different levels (multiple levels). essentially similar variance reduction approach Hox (2010), pp. 69-78. tolerance Tolerance singularity check random effects, decide whether compute random effect variances conditional r-squared . Indicates value convergence result accepted. r2_nakagawa() returns warning, stating random effect variances computed (thus, conditional r-squared NA), decrease tolerance-level. See also check_singularity(). ci Confidence resp. credible interval level. icc(), r2(), rmse(), confidence intervals based bootstrapped samples ICC, R2 RMSE value. See iterations. iterations Number bootstrap-replicates computing confidence intervals ICC, R2, RMSE etc. ci_method Character string, indicating bootstrap-method. NULL (default), case lme4::bootMer() used bootstrapped confidence intervals. However, bootstrapped intervals calculated way, try ci_method = \"boot\", falls back boot::boot(). may successfully return bootstrapped confidence intervals, bootstrapped samples may appropriate multilevel structure model. also option ci_method = \"analytical\", tries calculate analytical confidence assuming chi-squared distribution. However, intervals rather inaccurate often narrow. recommended calculate bootstrapped confidence intervals mixed models. null_model Optional, null model compute random effect variances, passed insight::get_variance(). Usually required calculation r-squared ICC fails null_model specified. calculating null model takes longer already fit null model, can pass , , speed process. approximation Character string, indicating approximation method distribution-specific (observation level, residual) variance. applies non-Gaussian models. Can \"lognormal\" (default), \"delta\" \"trigamma\". binomial models, default theoretical distribution specific variance, however, can also \"observation_level\". See Nakagawa et al. 2017, particular supplement 2, details. 
model_component models can zero-inflation component, specify component variances returned. NULL \"full\" (default), conditional zero-inflation component taken account. \"conditional\", conditional component considered. verbose Toggle warnings messages. ... Arguments passed lme4::bootMer() boot::boot() bootstrapped ICC, R2, RMSE etc.; variance_decomposition(), arguments passed brms::posterior_predict().","code":""},{"path":"https://easystats.github.io/performance/reference/r2_nakagawa.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Nakagawa's R2 for mixed models — r2_nakagawa","text":"list conditional marginal R2 values.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_nakagawa.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Nakagawa's R2 for mixed models — r2_nakagawa","text":"Marginal conditional r-squared values mixed models calculated based Nakagawa et al. (2017). details computation variances, see insight::get_variance(). random effect variances actually mean random effect variances, thus r-squared value also appropriate mixed models random slopes nested random effects (see Johnson, 2014). Conditional R2: takes fixed random effects account. Marginal R2: considers variance fixed effects. contribution random effects can deduced subtracting marginal R2 conditional R2 computing icc().","code":""},{"path":"https://easystats.github.io/performance/reference/r2_nakagawa.html","id":"supported-models-and-model-families","dir":"Reference","previous_headings":"","what":"Supported models and model families","title":"Nakagawa's R2 for mixed models — r2_nakagawa","text":"single variance components required calculate marginal conditional r-squared values calculated using insight::get_variance() function. results validated solutions provided Nakagawa et al. (2017), particular examples shown Supplement 2 paper. model families validated results MuMIn package. 
means r-squared values returned r2_nakagawa() accurate reliable following mixed models model families: Bernoulli (logistic) regression Binomial regression (binary outcomes) Poisson Quasi-Poisson regression Negative binomial regression (including nbinom1, nbinom2 nbinom12 families) Gaussian regression (linear models) Gamma regression Tweedie regression Beta regression Ordered beta regression Following model families yet validated, work: Zero-inflated hurdle models Beta-binomial regression Compound Poisson regression Generalized Poisson regression Log-normal regression Skew-normal regression Extracting variance components models zero-inflation part straightforward, definitely clear distribution-specific variance calculated. Therefore, recommended carefully inspect results, probably validate models, e.g. Bayesian models (although results may roughly comparable). Log-normal regressions (e.g. lognormal() family glmmTMB gaussian(\"log\")) often low fixed effects variance (calculated suggested Nakagawa et al. 2017). results low ICC r-squared values, may meaningful.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_nakagawa.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Nakagawa's R2 for mixed models — r2_nakagawa","text":"Hox, J. J. (2010). Multilevel analysis: techniques applications (2nd ed). New York: Routledge. Johnson, P. C. D. (2014). Extension Nakagawa Schielzeth’s R2 GLMM random slopes models. Methods Ecology Evolution, 5(9), 944–946. doi:10.1111/2041-210X.12225 Nakagawa, S., Schielzeth, H. (2013). general simple method obtaining R2 generalized linear mixed-effects models. Methods Ecology Evolution, 4(2), 133–142. doi:10.1111/j.2041-210x.2012.00261.x Nakagawa, S., Johnson, P. C. D., Schielzeth, H. (2017). coefficient determination R2 intra-class correlation coefficient generalized linear mixed-effects models revisited expanded. 
Journal Royal Society Interface, 14(134), 20170213.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_nakagawa.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Nakagawa's R2 for mixed models — r2_nakagawa","text":"","code":"model <- lme4::lmer(Sepal.Length ~ Petal.Length + (1 | Species), data = iris) r2_nakagawa(model) #> # R2 for Mixed Models #> #> Conditional R2: 0.969 #> Marginal R2: 0.658 r2_nakagawa(model, by_group = TRUE) #> # Explained Variance by Level #> #> Level | R2 #> ---------------- #> Level 1 | 0.569 #> Species | -0.853 #>"},{"path":"https://easystats.github.io/performance/reference/r2_somers.html","id":null,"dir":"Reference","previous_headings":"","what":"Somers' Dxy rank correlation for binary outcomes — r2_somers","title":"Somers' Dxy rank correlation for binary outcomes — r2_somers","text":"Calculates Somers' Dxy rank correlation logistic regression models.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_somers.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Somers' Dxy rank correlation for binary outcomes — r2_somers","text":"","code":"r2_somers(model)"},{"path":"https://easystats.github.io/performance/reference/r2_somers.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Somers' Dxy rank correlation for binary outcomes — r2_somers","text":"model logistic regression model.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_somers.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Somers' Dxy rank correlation for binary outcomes — r2_somers","text":"named vector R2 value.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_somers.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Somers' Dxy rank correlation for binary outcomes — 
r2_somers","text":"Somers, R. H. (1962). new asymmetric measure association ordinal variables. American Sociological Review. 27 (6).","code":""},{"path":"https://easystats.github.io/performance/reference/r2_somers.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Somers' Dxy rank correlation for binary outcomes — r2_somers","text":"","code":"# \\donttest{ if (require(\"correlation\") && require(\"Hmisc\")) { model <- glm(vs ~ wt + mpg, data = mtcars, family = \"binomial\") r2_somers(model) } #> Loading required package: correlation #> Loading required package: Hmisc #> #> Attaching package: ‘Hmisc’ #> The following object is masked from ‘package:psych’: #> #> describe #> The following objects are masked from ‘package:ggdag’: #> #> label, label<- #> The following objects are masked from ‘package:base’: #> #> format.pval, units #> Somers' Dxy #> 0.8253968 # }"},{"path":"https://easystats.github.io/performance/reference/r2_tjur.html","id":null,"dir":"Reference","previous_headings":"","what":"Tjur's R2 - coefficient of determination (D) — r2_tjur","title":"Tjur's R2 - coefficient of determination (D) — r2_tjur","text":"method calculates Coefficient Discrimination D (also known Tjur's R2; Tjur, 2009) generalized linear (mixed) models binary outcomes. alternative pseudo-R2 values like Nagelkerke's R2 Cox-Snell R2. Coefficient Discrimination D can read like (pseudo-)R2 value.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_tjur.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Tjur's R2 - coefficient of determination (D) — r2_tjur","text":"","code":"r2_tjur(model, ...)"},{"path":"https://easystats.github.io/performance/reference/r2_tjur.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Tjur's R2 - coefficient of determination (D) — r2_tjur","text":"model Binomial Model. ... 
Arguments from other functions, usually only used internally.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_tjur.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Tjur's R2 - coefficient of determination (D) — r2_tjur","text":"A named vector with the R2 value.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_tjur.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Tjur's R2 - coefficient of determination (D) — r2_tjur","text":"Tjur, T. (2009). Coefficients of determination in logistic regression models - A new proposal: The coefficient of discrimination. The American Statistician, 63(4), 366-372.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_tjur.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Tjur's R2 - coefficient of determination (D) — r2_tjur","text":"","code":"model <- glm(vs ~ wt + mpg, data = mtcars, family = \"binomial\") r2_tjur(model) #> Tjur's R2 #> 0.4776926"},{"path":"https://easystats.github.io/performance/reference/r2_xu.html","id":null,"dir":"Reference","previous_headings":"","what":"Xu' R2 (Omega-squared) — r2_xu","title":"Xu' R2 (Omega-squared) — r2_xu","text":"Calculates Xu' Omega-squared value, a simple R2 equivalent for linear mixed models.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_xu.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Xu' R2 (Omega-squared) — r2_xu","text":"","code":"r2_xu(model)"},{"path":"https://easystats.github.io/performance/reference/r2_xu.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Xu' R2 (Omega-squared) — r2_xu","text":"model A linear (mixed) model.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_xu.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Xu' R2 (Omega-squared) — r2_xu","text":"The R2 
value.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_xu.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Xu' R2 (Omega-squared) — r2_xu","text":"r2_xu() crude measure explained variance linear (mixed) effects models, originally denoted Ω2.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_xu.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Xu' R2 (Omega-squared) — r2_xu","text":"Xu, R. (2003). Measuring explained variation linear mixed effects models. Statistics Medicine, 22(22), 3527–3541. doi:10.1002/sim.1572","code":""},{"path":"https://easystats.github.io/performance/reference/r2_xu.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Xu' R2 (Omega-squared) — r2_xu","text":"","code":"model <- lm(Sepal.Length ~ Petal.Length + Species, data = iris) r2_xu(model) #> Xu's R2 #> 0.8367238"},{"path":"https://easystats.github.io/performance/reference/r2_zeroinflated.html","id":null,"dir":"Reference","previous_headings":"","what":"R2 for models with zero-inflation — r2_zeroinflated","title":"R2 for models with zero-inflation — r2_zeroinflated","text":"Calculates R2 models zero-inflation component, including mixed effects models.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_zeroinflated.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"R2 for models with zero-inflation — r2_zeroinflated","text":"","code":"r2_zeroinflated(model, method = c(\"default\", \"correlation\"))"},{"path":"https://easystats.github.io/performance/reference/r2_zeroinflated.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"R2 for models with zero-inflation — r2_zeroinflated","text":"model model. method Indicates method calculate R2. See 'Details'. 
May be abbreviated.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_zeroinflated.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"R2 for models with zero-inflation — r2_zeroinflated","text":"For the default-method, a list with the R2 and adjusted R2 values. For method = \"correlation\", a named numeric vector with the correlation-based R2 value.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_zeroinflated.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"R2 for models with zero-inflation — r2_zeroinflated","text":"The default-method calculates the R2 value based on the residual variance divided by the total variance. For method = \"correlation\", R2 is a correlation-based measure, which is rather crude. It simply computes the squared correlation between the model's actual and predicted response.","code":""},{"path":"https://easystats.github.io/performance/reference/r2_zeroinflated.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"R2 for models with zero-inflation — r2_zeroinflated","text":"","code":"# \\donttest{ if (require(\"pscl\")) { data(bioChemists) model <- zeroinfl( art ~ fem + mar + kid5 + ment | kid5 + phd, data = bioChemists ) r2_zeroinflated(model) } #> Loading required package: pscl #> Classes and Methods for R originally developed in the #> Political Science Computational Laboratory #> Department of Political Science #> Stanford University (2002-2015), #> by and under the direction of Simon Jackman. #> hurdle and zeroinfl functions by Achim Zeileis. #> # R2 for Zero-Inflated and Hurdle Regression #> R2: 0.180 #> adj. R2: 0.175 # }"},{"path":"https://easystats.github.io/performance/reference/reexports.html","id":null,"dir":"Reference","previous_headings":"","what":"Objects exported from other packages — reexports","title":"Objects exported from other packages — reexports","text":"These objects are imported from other packages. Follow the links below to see their documentation. 
insight display, print_html, print_md","code":""},{"path":"https://easystats.github.io/performance/reference/simulate_residuals.html","id":null,"dir":"Reference","previous_headings":"","what":"Simulate randomized quantile residuals from a model — simulate_residuals","title":"Simulate randomized quantile residuals from a model — simulate_residuals","text":"Returns simulated residuals from the model. This is useful for checking the uniformity of residuals, in particular for non-Gaussian models, where the residuals are not expected to be normally distributed.","code":""},{"path":"https://easystats.github.io/performance/reference/simulate_residuals.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Simulate randomized quantile residuals from a model — simulate_residuals","text":"","code":"simulate_residuals(x, iterations = 250, ...) # S3 method for class 'performance_simres' residuals(object, quantile_function = NULL, outlier_values = NULL, ...)"},{"path":"https://easystats.github.io/performance/reference/simulate_residuals.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Simulate randomized quantile residuals from a model — simulate_residuals","text":"x A model object. iterations Number of simulations to run. ... Arguments passed on to DHARMa::simulateResiduals(). object A performance_simres object, as returned by simulate_residuals(). quantile_function A function to apply to the residuals. If NULL, the residuals are returned as is. If not NULL, the residuals are passed to this function. This is useful for returning normally distributed residuals, for example: residuals(x, quantile_function = qnorm). outlier_values A vector of length 2, specifying the values to replace -Inf and Inf with, respectively.","code":""},{"path":"https://easystats.github.io/performance/reference/simulate_residuals.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Simulate randomized quantile residuals from a model — simulate_residuals","text":"Simulated residuals, which can be further processed with check_residuals(). 
The returned object is of class DHARMa and performance_simres.","code":""},{"path":"https://easystats.github.io/performance/reference/simulate_residuals.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Simulate randomized quantile residuals from a model — simulate_residuals","text":"This function is a small wrapper around DHARMa::simulateResiduals(). It basically only sets plot = FALSE and adds an additional class attribute (\"performance_sim_res\"), which allows using the DHARMa object in own plotting functions from the see package. See also vignette(\"DHARMa\"). There is a plot() method to visualize the distribution of the residuals.","code":""},{"path":"https://easystats.github.io/performance/reference/simulate_residuals.html","id":"tests-based-on-simulated-residuals","dir":"Reference","previous_headings":"","what":"Tests based on simulated residuals","title":"Simulate randomized quantile residuals from a model — simulate_residuals","text":"For certain models, resp. for models from certain families, tests like check_zeroinflation() or check_overdispersion() are based on simulated residuals. These are usually more accurate for such tests than the traditionally used Pearson residuals. However, when simulating from more complex models, such as mixed models or models with zero-inflation, there are several important considerations. simulate_residuals() relies on DHARMa::simulateResiduals(), and additional arguments specified in ... are passed to that function. The defaults in DHARMa are set on the most conservative option that works for all models. However, in many cases, the help advises to use different settings in particular situations or for particular models. It is recommended to read the 'Details' in ?DHARMa::simulateResiduals closely to understand the implications of the simulation process and which arguments should be modified to get the most accurate results.","code":""},{"path":"https://easystats.github.io/performance/reference/simulate_residuals.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Simulate randomized quantile residuals from a model — simulate_residuals","text":"Hartig, F., & Lohse, L. (2022). 
DHARMa: Residual Diagnostics for Hierarchical (Multi-Level / Mixed) Regression Models (Version 0.4.5). Retrieved from https://CRAN.R-project.org/package=DHARMa Dunn, P. K., & Smyth, G. K. (1996). Randomized Quantile Residuals. Journal of Computational and Graphical Statistics, 5(3), 236. doi:10.2307/1390802","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/reference/simulate_residuals.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Simulate randomized quantile residuals from a model — simulate_residuals","text":"","code":"m <- lm(mpg ~ wt + cyl + gear + disp, data = mtcars) simulate_residuals(m) #> Simulated residuals from a model of class `lm` based on 250 simulations. #> Use `check_residuals()` to check uniformity of residuals or #> `residuals()` to extract simulated residuals. It is recommended to refer #> to `?DHARMa::simulateResiudals` and `vignette(\"DHARMa\")` for more #> information about different settings in particular situations or for #> particular models. # extract residuals head(residuals(simulate_residuals(m))) #> [1] 0.356 0.448 0.096 0.568 0.668 0.204"},{"path":"https://easystats.github.io/performance/reference/test_performance.html","id":null,"dir":"Reference","previous_headings":"","what":"Test if models are different — test_bf","title":"Test if models are different — test_bf","text":"Testing whether models are \"different\" in terms of accuracy or explanatory power is a delicate and often complex procedure, with many limitations and prerequisites. Moreover, many tests exist, each coming with its own interpretation, and set of strengths and weaknesses. The test_performance() function runs the most relevant and appropriate tests based on the type of input (for instance, whether the models are nested or not). However, it still requires the user to understand what the tests are and what they do in order to prevent their misinterpretation. 
See the Details section for more information regarding the different tests and their interpretation.","code":""},{"path":"https://easystats.github.io/performance/reference/test_performance.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Test if models are different — test_bf","text":"","code":"test_bf(...) # Default S3 method test_bf(..., reference = 1, text_length = NULL) test_likelihoodratio(..., estimator = \"ML\", verbose = TRUE) test_lrt(..., estimator = \"ML\", verbose = TRUE) test_performance(..., reference = 1, verbose = TRUE) test_vuong(..., verbose = TRUE) test_wald(..., verbose = TRUE)"},{"path":"https://easystats.github.io/performance/reference/test_performance.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Test if models are different — test_bf","text":"... Multiple model objects. reference This only applies when models are non-nested, and determines which model should be taken as a reference, against which all the other models are tested. text_length Numeric, length (number of chars) of output lines. test_bf() describes models by their formulas, which can lead to overly long lines in the output. text_length fixes the length of lines to a specified limit. estimator Applied when comparing regression models using test_likelihoodratio(). Corresponds to the different estimators for the standard deviation of the errors. Defaults to \"OLS\" for linear models, \"ML\" for all other models (including mixed models), or \"REML\" for linear mixed models when these have the same fixed effects. See 'Details'. verbose Toggle warning messages.","code":""},{"path":"https://easystats.github.io/performance/reference/test_performance.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Test if models are different — test_bf","text":"A data frame containing the relevant indices.","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/reference/test_performance.html","id":"nested-vs-non-nested-models","dir":"Reference","previous_headings":"","what":"Nested vs. 
Non-nested Models","title":"Test if models are different — test_bf","text":"Model's \"nesting\" important concept models comparison. Indeed, many tests make sense models \"nested\", .e., predictors nested. means fixed effects predictors model contained within fixed effects predictors larger model (sometimes referred encompassing model). instance, model1 (y ~ x1 + x2) \"nested\" within model2 (y ~ x1 + x2 + x3). Usually, people list nested models, instance m1 (y ~ 1), m2 (y ~ x1), m3 (y ~ x1 + x2), m4 (y ~ x1 + x2 + x3), conventional \"ordered\" smallest largest, user reverse order largest smallest. test shows whether parsimonious model, whether adding predictor, results significant difference model's performance. case, models usually compared sequentially: m2 tested m1, m3 m2, m4 m3, etc. Two models considered \"non-nested\" predictors different. instance, model1 (y ~ x1 + x2) model2 (y ~ x3 + x4). case non-nested models, models usually compared reference model (default, first list). Nesting detected via insight::is_nested_models() function. Note , apart nesting, order tests valid, requirements often fulfilled. instance, outcome variables (response) must . meaningfully test whether apples significantly different oranges!","code":""},{"path":"https://easystats.github.io/performance/reference/test_performance.html","id":"estimator-of-the-standard-deviation","dir":"Reference","previous_headings":"","what":"Estimator of the standard deviation","title":"Test if models are different — test_bf","text":"estimator relevant comparing regression models using test_likelihoodratio(). estimator = \"OLS\", uses method anova(..., test = \"LRT\") implemented base R, .e., scaling n-k (unbiased OLS estimator) using estimator alternative hypothesis. estimator = \"ML\", instance used lrtest(...) package lmtest, scaling done n (biased ML estimator) estimator null hypothesis. 
In moderately large samples, the differences should be negligible, but it is possible that OLS would perform slightly better in small samples with Gaussian errors. For estimator = \"REML\", the LRT is based on the REML-fit log-likelihoods of the models. Note that not all types of estimators are available for all model classes.","code":""},{"path":"https://easystats.github.io/performance/reference/test_performance.html","id":"reml-versus-ml-estimator","dir":"Reference","previous_headings":"","what":"REML versus ML estimator","title":"Test if models are different — test_bf","text":"When estimator = \"ML\", which is the default for linear mixed models (unless they share the same fixed effects), the values from information criteria (AIC, AICc) are based on the ML-estimator, while the default behaviour of AIC() may be different (in particular for linear mixed models from lme4, which sets REML = TRUE). This default in test_likelihoodratio() is intentional, because comparing information criteria based on REML fits requires the same fixed effects for all models, which is often not the case. Thus, while anova.merMod() automatically refits models to REML when performing a LRT, test_likelihoodratio() checks if a comparison based on REML fits is indeed valid, and if so, uses REML as default (else, ML is the default). Set the estimator argument explicitly to override the default behaviour.","code":""},{"path":"https://easystats.github.io/performance/reference/test_performance.html","id":"tests-description","dir":"Reference","previous_headings":"","what":"Tests Description","title":"Test if models are different — test_bf","text":"Bayes factor for Model Comparison - test_bf(): If all models were fit from the same data, the returned BF shows the Bayes Factor (see bayestestR::bayesfactor_models()) for each model against the reference model (which depends on whether the models are nested or not). Check out this vignette for more details. Wald's F-Test - test_wald(): The Wald test is a rough approximation of the Likelihood Ratio Test. However, it is more applicable than the LRT: you can often run a Wald test in situations where no other test can be run. Importantly, this test only makes statistical sense if the models are nested. Note: this test is also available in base R through the anova() function. It returns an F-value column as a statistic and its associated p-value. Likelihood Ratio Test (LRT) - test_likelihoodratio(): The LRT tests which model is a better (more likely) explanation of the data. 
The Likelihood-Ratio-Test (LRT) gives usually somewhat close results (if not equivalent) to the Wald test and, similarly, only makes sense for nested models. However, maximum likelihood tests make stronger assumptions than method of moments tests like the F-test, and in turn are more efficient. Agresti (1990) suggests that you should use the LRT instead of the Wald test for small sample sizes (under or about 30) or if the parameters are large. Note: for regression models, this is similar to anova(..., test=\"LRT\") (on models) or lmtest::lrtest(...), depending on the estimator argument. For lavaan models (SEM, CFA), the function calls lavaan::lavTestLRT(). For models with transformed response variables (like log(x) or sqrt(x)), logLik() returns a wrong log-likelihood. However, test_likelihoodratio() calls insight::get_loglikelihood() with check_response=TRUE, which returns the corrected log-likelihood value for models with transformed response variables. Furthermore, since the LRT only accepts nested models (i.e. models that differ in their fixed effects), the computed log-likelihood is always based on the ML estimator, not on REML fits. Vuong's Test - test_vuong(): Vuong's (1989) test can be used both for nested and non-nested models, and actually consists of two tests. The Test of Distinguishability (the Omega2 column and its associated p-value) indicates whether or not the models can possibly be distinguished on the basis of the observed data. If its p-value is significant, it means the models are distinguishable. The Robust Likelihood Test (the LR column and its associated p-value) indicates whether each model fits better than the reference model. If the models are nested, then the test works as a robust LRT. The code for this function is adapted from the nonnest2 package, and all credit goes to their authors.","code":""},{"path":"https://easystats.github.io/performance/reference/test_performance.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Test if models are different — test_bf","text":"Vuong, Q. H. (1989). Likelihood ratio tests for model selection and non-nested hypotheses. Econometrica, 57, 307-333. Merkle, E. C., You, D., & Preacher, K. (2016). Testing non-nested structural equation models. 
Psychological Methods, 21, 151-163.","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/reference/test_performance.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Test if models are different — test_bf","text":"","code":"# Nested Models # ------------- m1 <- lm(Sepal.Length ~ Petal.Width, data = iris) m2 <- lm(Sepal.Length ~ Petal.Width + Species, data = iris) m3 <- lm(Sepal.Length ~ Petal.Width * Species, data = iris) test_performance(m1, m2, m3) #> Name | Model | BF | Omega2 | p (Omega2) | LR | p (LR) #> ------------------------------------------------------------ #> m1 | lm | | | | | #> m2 | lm | 0.007 | 9.54e-04 | 0.935 | 0.15 | 0.919 #> m3 | lm | 0.037 | 0.02 | 0.081 | 3.41 | 0.099 #> Models were detected as nested (in terms of fixed parameters) and are compared in sequential order. test_bf(m1, m2, m3) #> Bayes Factors for Model Comparison #> #> Model BF #> [m2] Petal.Width + Species 0.007 #> [m3] Petal.Width * Species 2.64e-04 #> #> * Against Denominator: [m1] Petal.Width #> * Bayes Factor Type: BIC approximation test_wald(m1, m2, m3) # Equivalent to anova(m1, m2, m3) #> Name | Model | df | df_diff | F | p #> ------------------------------------------- #> m1 | lm | 148 | | | #> m2 | lm | 146 | 2.00 | 0.08 | 0.927 #> m3 | lm | 144 | 2.00 | 1.66 | 0.195 #> Models were detected as nested (in terms of fixed parameters) and are compared in sequential order. 
# Equivalent to lmtest::lrtest(m1, m2, m3) test_likelihoodratio(m1, m2, m3, estimator = \"ML\") #> # Likelihood-Ratio-Test (LRT) for Model Comparison (ML-estimator) #> #> Name | Model | df | df_diff | Chi2 | p #> ------------------------------------------ #> m1 | lm | 3 | | | #> m2 | lm | 5 | 2 | 0.15 | 0.926 #> m3 | lm | 7 | 2 | 3.41 | 0.182 # Equivalent to anova(m1, m2, m3, test='LRT') test_likelihoodratio(m1, m2, m3, estimator = \"OLS\") #> # Likelihood-Ratio-Test (LRT) for Model Comparison (OLS-estimator) #> #> Name | Model | df | df_diff | Chi2 | p #> ------------------------------------------ #> m1 | lm | 3 | | | #> m2 | lm | 5 | 2 | 0.15 | 0.927 #> m3 | lm | 7 | 2 | 3.31 | 0.191 if (require(\"CompQuadForm\")) { test_vuong(m1, m2, m3) # nonnest2::vuongtest(m1, m2, nested=TRUE) # Non-nested Models # ----------------- m1 <- lm(Sepal.Length ~ Petal.Width, data = iris) m2 <- lm(Sepal.Length ~ Petal.Length, data = iris) m3 <- lm(Sepal.Length ~ Species, data = iris) test_performance(m1, m2, m3) test_bf(m1, m2, m3) test_vuong(m1, m2, m3) # nonnest2::vuongtest(m1, m2) } #> Loading required package: CompQuadForm #> Name | Model | Omega2 | p (Omega2) | LR | p (LR) #> --------------------------------------------------- #> m1 | lm | | | | #> m2 | lm | 0.19 | < .001 | -4.57 | < .001 #> m3 | lm | 0.12 | < .001 | 2.51 | 0.006 #> Each model is compared to m1. # Tweak the output # ---------------- test_performance(m1, m2, m3, include_formula = TRUE) #> Name | Model | BF | Omega2 | p (Omega2) | LR | p (LR) #> --------------------------------------------------------------------------------------- #> m1 | lm(Sepal.Length ~ Petal.Width) | | | | | #> m2 | lm(Sepal.Length ~ Petal.Length) | > 1000 | 0.19 | < .001 | -4.57 | < .001 #> m3 | lm(Sepal.Length ~ Species) | < 0.001 | 0.12 | < .001 | 2.51 | 0.006 #> Each model is compared to m1. 
# SEM / CFA (lavaan objects) # -------------------------- # Lavaan Models if (require(\"lavaan\")) { structure <- \" visual =~ x1 + x2 + x3 textual =~ x4 + x5 + x6 speed =~ x7 + x8 + x9 visual ~~ textual + speed \" m1 <- lavaan::cfa(structure, data = HolzingerSwineford1939) structure <- \" visual =~ x1 + x2 + x3 textual =~ x4 + x5 + x6 speed =~ x7 + x8 + x9 visual ~~ 0 * textual + speed \" m2 <- lavaan::cfa(structure, data = HolzingerSwineford1939) structure <- \" visual =~ x1 + x2 + x3 textual =~ x4 + x5 + x6 speed =~ x7 + x8 + x9 visual ~~ 0 * textual + 0 * speed \" m3 <- lavaan::cfa(structure, data = HolzingerSwineford1939) test_likelihoodratio(m1, m2, m3) # Different Model Types # --------------------- if (require(\"lme4\") && require(\"mgcv\")) { m1 <- lm(Sepal.Length ~ Petal.Length + Species, data = iris) m2 <- lmer(Sepal.Length ~ Petal.Length + (1 | Species), data = iris) m3 <- gam(Sepal.Length ~ s(Petal.Length, by = Species) + Species, data = iris) test_performance(m1, m2, m3) } } #> Loading required package: mgcv #> This is mgcv 1.9-1. For overview type 'help(\"mgcv-package\")'. #> #> Attaching package: ‘mgcv’ #> The following objects are masked from ‘package:brms’: #> #> s, t2 #> The following object is masked from ‘package:mclust’: #> #> mvn #> Name | Model | BF #> ------------------------ #> m1 | lm | #> m2 | lmerMod | < 0.001 #> m3 | gam | 0.038 #> Each model is compared to m1."},{"path":[]},{"path":"https://easystats.github.io/performance/news/index.html","id":"breaking-changes-0-12-5","dir":"Changelog","previous_headings":"","what":"Breaking changes","title":"performance 0.12.5","text":"check_outliers() for method = \"optics\" now returns a refined cluster selection, by passing the optics_xi argument to dbscan::extractXi(). Deprecated arguments and alias-function-names have been removed. Argument names in check_model() that refer to plot-aesthetics (like dot_size) are now harmonized across easystats packages, meaning they have been renamed. They now follow the pattern aesthetic_type, e.g. 
size_dot (instead of dot_size).","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"changes-0-12-5","dir":"Changelog","previous_headings":"","what":"Changes","title":"performance 0.12.5","text":"Increased accuracy of check_convergence() for glmmTMB models. r2() and r2_mcfadden() now support beta-binomial (non-mixed) models from package glmmTMB. An as.numeric() resp. as.double() method for objects of class performance_roc was added. Improved documentation of performance_roc().","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"bug-fixes-0-12-5","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"performance 0.12.5","text":"check_outliers() did not warn that no numeric variables were found when only the response variable was numeric, but none of the relevant predictors. check_collinearity() did not work for glmmTMB models when the zero-inflation component was set to ~0.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance-0124","dir":"Changelog","previous_headings":"","what":"performance 0.12.4","title":"performance 0.12.4","text":"CRAN release: 2024-10-18","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"changes-0-12-4","dir":"Changelog","previous_headings":"","what":"Changes","title":"performance 0.12.4","text":"check_dag() now also checks for colliders, and suggests removing them in the printed output. 
Minor revisions to the printed output of check_dag().","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"bug-fixes-0-12-4","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"performance 0.12.4","text":"Fixed failing tests that broke due to changes in the latest glmmTMB update.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance-0123","dir":"Changelog","previous_headings":"","what":"performance 0.12.3","title":"performance 0.12.3","text":"CRAN release: 2024-09-02","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"new-functions-0-12-3","dir":"Changelog","previous_headings":"","what":"New functions","title":"performance 0.12.3","text":"check_dag(), to check DAGs for correct adjustment sets.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"changes-0-12-3","dir":"Changelog","previous_headings":"","what":"Changes","title":"performance 0.12.3","text":"check_heterogeneity_bias() gets a nested argument. Furthermore, you can specify more than one variable, meaning that nested or cross-classified model designs can also be tested for heterogeneity bias.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance-0122","dir":"Changelog","previous_headings":"","what":"performance 0.12.2","title":"performance 0.12.2","text":"CRAN release: 2024-07-18 Patch release, to ensure that performance runs with older version of datawizard on Mac OSX with R (old-release).","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance-0121","dir":"Changelog","previous_headings":"","what":"performance 0.12.1","title":"performance 0.12.1","text":"CRAN release: 2024-07-15","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"general-0-12-1","dir":"Changelog","previous_headings":"","what":"General","title":"performance 0.12.1","text":"icc() and r2_nakagawa() get a null_model argument. 
This can be useful when computing R2 or ICC for mixed models, where the internal computation of the null model fails, or when you already have fit the null model and want to save time. icc() and r2_nakagawa() get an approximation argument, indicating the approximation method for the distribution-specific (residual) variance. See Nakagawa et al. 2017 for details. icc() and r2_nakagawa() get a model_component argument indicating the component for zero-inflation or hurdle models. performance_rmse() (resp. rmse()) can now compute analytical and bootstrapped confidence intervals. The function gains following new arguments: ci, ci_method and iterations. New function r2_ferrari() to compute Ferrari & Cribari-Neto’s R2 for generalized linear models, in particular beta-regression. Improved documentation of some functions.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"bug-fixes-0-12-1","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"performance 0.12.1","text":"Fixed issue in check_model() when the model contained a transformed response variable that was named like a valid R function name (e.g., lm(log(lapply) ~ x), when data contained a variable named lapply). Fixed issue in check_predictions() for linear models when the response was transformed as ratio (e.g. lm(succes/trials ~ x)). Fixed issue in r2_bayes() for mixed models from rstanarm.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance-0120","dir":"Changelog","previous_headings":"","what":"performance 0.12.0","title":"performance 0.12.0","text":"CRAN release: 2024-06-08","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"breaking-0-12-0","dir":"Changelog","previous_headings":"","what":"Breaking","title":"performance 0.12.0","text":"Aliases posterior_predictive_check() and check_posterior_predictions() for check_predictions() are deprecated. Arguments named group or group_by will be deprecated in a future release. Please use by instead. 
This affects check_heterogeneity_bias() in performance.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"general-0-12-0","dir":"Changelog","previous_headings":"","what":"General","title":"performance 0.12.0","text":"Improved documentation and new vignettes added. check_model() gets a base_size argument, to set the base font size for plots. check_predictions() for stanreg and brmsfit models now returns plots in the usual style as for other models and no longer returns plots from bayesplot::pp_check(). Updated the trained model that is used to predict distributions in check_distribution().","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"bug-fixes-0-12-0","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"performance 0.12.0","text":"check_model() now falls back on normal Q-Q plots when a model is not supported by the DHARMa package and simulated residuals cannot be calculated.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance-0110","dir":"Changelog","previous_headings":"","what":"performance 0.11.0","title":"performance 0.11.0","text":"CRAN release: 2024-03-22","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"new-supported-models-0-11-0","dir":"Changelog","previous_headings":"","what":"New supported models","title":"performance 0.11.0","text":"Rudimentary support for models of class serp from package serp.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"new-functions-0-11-0","dir":"Changelog","previous_headings":"","what":"New functions","title":"performance 0.11.0","text":"simulate_residuals() and check_residuals(), to simulate and check residuals from generalized linear (mixed) models. Simulating residuals is based on the DHARMa package, and objects returned by simulate_residuals() inherit from the DHARMa class, and thus can be used with any functions from the DHARMa package. However, there are also implementations in the performance package, such as check_overdispersion(), check_zeroinflation(), check_outliers() or check_model(). Plots for check_model() were improved. 
Q-Q plots now based simulated residuals DHARMa package non-Gaussian models, thus providing accurate informative plots. half-normal QQ plot generalized linear models can still obtained setting new argument residual_type = \"normal\". Following functions now support simulated residuals (simulate_residuals()) resp. objects returned DHARMa::simulateResiduals(): check_overdispersion() check_zeroinflation() check_outliers() check_model()","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"general-0-11-0","dir":"Changelog","previous_headings":"","what":"General","title":"performance 0.11.0","text":"Improved error messages check_model() QQ-plots created. check_distribution() stable possibly sparse data.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"bug-fixes-0-11-0","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"performance 0.11.0","text":"Fixed issue check_normality() t-tests. Fixed issue check_itemscale() data frame inputs, factor_index named vector.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance-0109","dir":"Changelog","previous_headings":"","what":"performance 0.10.9","title":"performance 0.10.9","text":"CRAN release: 2024-02-17","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"changes-0-10-9","dir":"Changelog","previous_headings":"","what":"Changes","title":"performance 0.10.9","text":"r2() models class glmmTMB without random effects now returns correct r-squared value non-mixed models. check_itemscale() now also accepts data frames input. case, factor_index must specified, must numeric vector length number columns x, element index factor respective column x. check_itemscale() gets print_html() method. Clarification documentation estimator argument performance_aic(). Improved plots overdispersion-checks negative-binomial models package glmmTMB (affects check_overdispersion() check_model()). 
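The simulated residuals from DHARMa that simulate_residuals() wraps work by simulating new responses from the fitted model and scoring each observation by its rank within its own simulated distribution; under a correctly specified model these scaled residuals are approximately uniform on (0, 1). A simplified Python sketch of that idea (not the DHARMa algorithm in detail; the fitted means below are made up):

```python
import math
import random

rng = random.Random(1)

def sample_poisson(lam, rng):
    """Poisson draw via Knuth's product-of-uniforms method (fine for small lam)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

mu = [2.0, 3.5, 5.0, 1.0]   # stand-in fitted means from some Poisson model
observed = [2, 4, 5, 0]
n_sim = 500

scaled_residuals = []
for m, obs in zip(mu, observed):
    sims = [sample_poisson(m, rng) for _ in range(n_sim)]
    below = sum(s < obs for s in sims)
    ties = sum(s == obs for s in sims)
    # randomized PIT value: rank of the observation within its simulations,
    # with ties broken at random so discrete outcomes still yield ~uniform scores
    scaled_residuals.append((below + rng.random() * ties) / n_sim)
```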
Improved detection rates singularity check_singularity() models package glmmTMB. model class glmmTMB, deviance residuals now used check_model() plot. Improved (better understand) error messages check_model(), check_collinearity() check_outliers() models non-numeric response variables. r2_kullback() now gives informative error non-supported models.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"bug-fixes-0-10-9","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"performance 0.10.9","text":"Fixed issue binned_residuals() models binary outcome, rare occasions empty bins occur. performance_score() longer fail models scoring rules can’t calculated. Instead, informative message returned. check_outliers() now properly accept percentage_central argument using \"mcd\" method. Fixed edge cases check_collinearity() check_outliers() models response variables classes Date, POSIXct, POSIXlt difftime. Fixed issue check_model() models package quantreg.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance-0108","dir":"Changelog","previous_headings":"","what":"performance 0.10.8","title":"performance 0.10.8","text":"CRAN release: 2023-10-30","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"changes-0-10-8","dir":"Changelog","previous_headings":"","what":"Changes","title":"performance 0.10.8","text":"Changed behaviour check_predictions() models binomial family, get comparable plots different ways outcome specification. Now, outcome proportion, defined matrix trials successes, produced plots (models , ).","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"bug-fixes-0-10-8","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"performance 0.10.8","text":"Fixed CRAN check errors. 
Fixed issue binned_residuals() models binomial family, outcome proportion.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance-0107","dir":"Changelog","previous_headings":"","what":"performance 0.10.7","title":"performance 0.10.7","text":"CRAN release: 2023-10-27","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"breaking-changes-0-10-7","dir":"Changelog","previous_headings":"","what":"Breaking changes","title":"performance 0.10.7","text":"binned_residuals() gains new arguments control residuals used test, well different options calculate confidence intervals (namely, ci_type, residuals, ci iterations). default values compute binned residuals changed. Default residuals now “deviance” residuals (longer “response” residuals). Default confidence intervals now “exact” intervals (longer based Gaussian approximation). Use ci_type = \"gaussian\" residuals = \"response\" get old defaults.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"changes-to-functions-0-10-7","dir":"Changelog","previous_headings":"","what":"Changes to functions","title":"performance 0.10.7","text":"binned_residuals() - like check_model() - gains show_dots argument show hide data points lie inside error bounds. particular useful models many observations, generating plot slow.","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/news/index.html","id":"general-0-10-6","dir":"Changelog","previous_headings":"","what":"General","title":"performance 0.10.6","text":"Support nestedLogit models.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"changes-to-functions-0-10-6","dir":"Changelog","previous_headings":"","what":"Changes to functions","title":"performance 0.10.6","text":"check_outliers() method \"ics\" now detects number available cores parallel computing via \"mc.cores\" option. robust previous method, used parallel::detectCores(). 
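What a binned-residuals check computes for a binary-outcome model: group observations by fitted probability, average the residuals within each bin, and compare each bin mean against an error bound. The real binned_residuals() now defaults to deviance residuals and exact intervals; this simplified Python sketch uses response residuals and the old-style Gaussian bound, purely to show the mechanics:

```python
import math

# Toy fitted probabilities and 0/1 outcomes from some logistic model
fitted = [0.1, 0.15, 0.2, 0.4, 0.45, 0.5, 0.7, 0.75, 0.8, 0.9]
y      = [0,   0,    1,   0,   1,    0,   1,   1,    1,   1]
n_bins = 2

pairs = sorted(zip(fitted, y))          # order observations by fitted value
size = len(pairs) // n_bins
bins = []
for b in range(n_bins):
    chunk = pairs[b * size:(b + 1) * size]
    p_mean = sum(p for p, _ in chunk) / len(chunk)
    resid_mean = sum(obs - p for p, obs in chunk) / len(chunk)
    # Gaussian-approximation bound for the mean response residual in the bin
    se = math.sqrt(p_mean * (1 - p_mean) / len(chunk))
    bins.append((p_mean, resid_mean, 2 * se))

# A well-calibrated model keeps every bin mean inside its error bound
inside = [abs(r) <= bound for _, r, bound in bins]
```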
Now set number cores via options(mc.cores = 4).","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"bug-fixes-0-10-6","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"performance 0.10.6","text":"Fixed issues check_model() models used data sets variables class \"haven_labelled\".","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance-0105","dir":"Changelog","previous_headings":"","what":"performance 0.10.5","title":"performance 0.10.5","text":"CRAN release: 2023-09-12","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"changes-to-functions-0-10-5","dir":"Changelog","previous_headings":"","what":"Changes to functions","title":"performance 0.10.5","text":"informative message test_*() functions “nesting” refers fixed effects parameters currently ignores random effects detecting nested models. check_outliers() \"ICS\" method now stable less likely fail. check_convergence() now works parsnip _glm models.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"bug-fixes-0-10-5","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"performance 0.10.5","text":"check_collinearity() work hurdle- zero-inflated models package pscl model explicitly defined formula zero-inflation model.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance-0104","dir":"Changelog","previous_headings":"","what":"performance 0.10.4","title":"performance 0.10.4","text":"CRAN release: 2023-06-02","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"changes-to-functions-0-10-4","dir":"Changelog","previous_headings":"","what":"Changes to functions","title":"performance 0.10.4","text":"icc() r2_nakagawa() gain ci_method argument, either calculate confidence intervals using boot::boot() (instead lmer::bootMer()) ci_method = \"boot\" analytical confidence intervals 
(ci_method = \"analytical\"). Use ci_method = \"boot\" default method fails compute confidence intervals use ci_method = \"analytical\" bootstrapped intervals calculated . Note default computation method preferred. check_predictions() accepts bandwidth argument (smoothing bandwidth), passed plot() methods density-estimation. check_predictions() gains type argument, passed plot() method change plot-type (density discrete dots/intervals). default, type set \"default\" models without discrete outcomes, else type = \"discrete_interval\". performance_accuracy() now includes confidence intervals, reports default (standard error longer reported, still included).","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"bug-fixes-0-10-4","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"performance 0.10.4","text":"Fixed issue check_collinearity() fixest models used () create interactions formulas.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance-0103","dir":"Changelog","previous_headings":"","what":"performance 0.10.3","title":"performance 0.10.3","text":"CRAN release: 2023-04-07","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"new-functions-0-10-3","dir":"Changelog","previous_headings":"","what":"New functions","title":"performance 0.10.3","text":"item_discrimination(), calculate discrimination scale’s items.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"support-for-new-models-0-10-3","dir":"Changelog","previous_headings":"","what":"Support for new models","title":"performance 0.10.3","text":"model_performance(), check_overdispersion(), check_outliers() r2() now work objects class fixest_multi (@etiennebacher, #554). model_performance() can now return “Weak instruments” statistic p-value models class ivreg metrics = \"weak_instruments\" (@etiennebacher, #560). 
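The quantity whose confidence interval the ci_method argument of icc() controls is, for a simple random-intercept model, the ratio of between-group variance to total variance. A minimal sketch with made-up variance components standing in for what a mixed-model fit would estimate (the bootstrap methods above resample the data and recompute this ratio):

```python
# Made-up variance components from a hypothetical random-intercept model
tau_00 = 0.5   # between-group (random-intercept) variance
sigma2 = 1.5   # residual (within-group) variance

# Intraclass correlation: share of total variance attributable to grouping
icc = tau_00 / (tau_00 + sigma2)
```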
Support mclogit models.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"changes-to-functions-0-10-3","dir":"Changelog","previous_headings":"","what":"Changes to functions","title":"performance 0.10.3","text":"test_*() functions now automatically fit null-model one model objects provided testing multiple models. Warnings model_performance() unsupported objects class BFBayesFactor can now suppressed verbose = FALSE. check_predictions() longer fails issues re_formula = NULL mixed models, instead gives warning tries compute posterior predictive checks re_formula = NA. check_outliers() now also works meta-analysis models packages metafor meta. plot() performance::check_model() longer produces normal QQ plot GLMs. Instead, now shows half-normal QQ plot absolute value standardized deviance residuals.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"bug-fixes-0-10-3","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"performance 0.10.3","text":"Fixed issue print() method check_collinearity(), mix correct order parameters.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance-0102","dir":"Changelog","previous_headings":"","what":"performance 0.10.2","title":"performance 0.10.2","text":"CRAN release: 2023-01-12","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"general-0-10-2","dir":"Changelog","previous_headings":"","what":"General","title":"performance 0.10.2","text":"Revised usage insight::get_data() meet forthcoming changes insight package.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"changes-to-functions-0-10-2","dir":"Changelog","previous_headings":"","what":"Changes to functions","title":"performance 0.10.2","text":"check_collinearity() now accepts NULL ci
argument.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"bug-fixes-0-10-2","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"performance 0.10.2","text":"Fixed issue item_difficulty() detecting maximum values item set. Furthermore, item_difficulty() gets maximum_value argument case item contains maximum value due missings.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance-0101","dir":"Changelog","previous_headings":"","what":"performance 0.10.1","title":"performance 0.10.1","text":"CRAN release: 2022-11-25","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"general-0-10-1","dir":"Changelog","previous_headings":"","what":"General","title":"performance 0.10.1","text":"Minor improvements documentation.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"changes-to-functions-0-10-1","dir":"Changelog","previous_headings":"","what":"Changes to functions","title":"performance 0.10.1","text":"icc() r2_nakagawa() get ci iterations arguments, compute confidence intervals ICC resp. R2, based bootstrapped sampling. r2() gets ci, compute (analytical) confidence intervals R2. model underlying check_distribution() now also trained detect cauchy, half-cauchy inverse-gamma distributions. model_performance() now allows include ICC Bayesian models.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"bug-fixes-0-10-1","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"performance 0.10.1","text":"verbose didn’t work r2_bayes() BFBayesFactor objects. Fixed issues check_model() models convergence issues lead NA values residuals. Fixed bug check_outliers whereby passing multiple elements threshold list generated error (#496). test_wald() now warns user inappropriate F test calls test_likelihoodratio() binomial models. 
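The idea behind item_difficulty(): an item's difficulty index is its mean observed score relative to the maximum possible score, so it falls in [0, 1]. The maximum_value argument covers the case where the true maximum never occurs in the data (e.g. due to missings). A Python sketch of this classical-test-theory index (an illustration of the concept, not the package's exact implementation):

```python
def item_difficulty(scores, maximum_value=None):
    """Mean observed score divided by the maximum possible score."""
    max_val = maximum_value if maximum_value is not None else max(scores)
    return sum(scores) / (len(scores) * max_val)

# Item scored 1-4; the observed maximum happens to equal the true maximum here,
# but maximum_value would matter if no respondent reached the top score.
d = item_difficulty([1, 2, 2, 3, 4], maximum_value=4)
```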
Fixed edge case usage parallel::detectCores() check_outliers().","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance-0100","dir":"Changelog","previous_headings":"","what":"performance 0.10.0","title":"performance 0.10.0","text":"CRAN release: 2022-10-03","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"breaking-change-0-10-0","dir":"Changelog","previous_headings":"","what":"Breaking Change","title":"performance 0.10.0","text":"minimum needed R version bumped 3.6. alias performance_lrt() removed. Use test_lrt() resp. test_likelihoodratio().","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"new-functions-0-10-0","dir":"Changelog","previous_headings":"","what":"New functions","title":"performance 0.10.0","text":"Following functions moved package parameters performance: check_sphericity_bartlett(), check_kmo(), check_factorstructure() check_clusterstructure().","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"changes-to-functions-0-10-0","dir":"Changelog","previous_headings":"","what":"Changes to functions","title":"performance 0.10.0","text":"check_normality(), check_homogeneity() check_symmetry() now works htest objects. Print method check_outliers() changed significantly: now states methods, thresholds, variables used, reports outliers per variable (univariate methods) well observation flagged several variables/methods. Includes new optional ID argument add along row number output (@rempsyc #443). check_outliers() now uses conventional outlier thresholds.
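Two of the univariate methods check_outliers() offers can be sketched in a few lines: a z-score method with a conventional threshold, and an IQR method whose distance score grows continuously with how far a point sits outside the Tukey fences. The exact scaling performance uses may differ; this Python sketch only shows the idea:

```python
import statistics

x = [4.0, 5.0, 5.5, 6.0, 6.5, 7.0, 20.0]

# z-score method: flag points beyond a conventional threshold
mean, sd = statistics.mean(x), statistics.stdev(x)
z = [(v - mean) / sd for v in x]
z_outliers = [v for v, s in zip(x, z) if abs(s) > 1.96]

# IQR method: Tukey fences at 1.5 * IQR beyond the quartiles
q1, _, q3 = statistics.quantiles(x, n=4)
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr

def iqr_distance(v):
    """0 inside the fences; grows continuously (in IQR units) outside them."""
    if v < lower:
        return (lower - v) / iqr
    if v > upper:
        return (v - upper) / iqr
    return 0.0

iqr_outliers = [v for v in x if iqr_distance(v) > 0]
```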
IQR confidence interval methods now gain improved distance scores continuous instead discrete.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"bug-fixes-0-10-0","dir":"Changelog","previous_headings":"","what":"Bug Fixes","title":"performance 0.10.0","text":"Fixed wrong z-score values using vector instead data frame check_outliers() (#476). Fixed cronbachs_alpha() objects parameters::principal_component().","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance-092","dir":"Changelog","previous_headings":"","what":"performance 0.9.2","title":"performance 0.9.2","text":"CRAN release: 2022-08-10","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"general-0-9-2","dir":"Changelog","previous_headings":"","what":"General","title":"performance 0.9.2","text":"print() methods model_performance() compare_performance() get layout argument, can \"horizontal\" (default) \"vertical\", switch layout printed table. Improved speed performance check_model() performance_*() functions. Improved support models class geeglm.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"changes-to-functions-0-9-2","dir":"Changelog","previous_headings":"","what":"Changes to functions","title":"performance 0.9.2","text":"check_model() gains show_dots argument, show hide data points. 
particular useful models many observations, generating plot slow.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"bug-fixes-0-9-2","dir":"Changelog","previous_headings":"","what":"Bug Fixes","title":"performance 0.9.2","text":"Fixes wrong column names model_performance() output kmeans objects (#453)","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance-091","dir":"Changelog","previous_headings":"","what":"performance 0.9.1","title":"performance 0.9.1","text":"CRAN release: 2022-06-20","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"breaking-0-9-1","dir":"Changelog","previous_headings":"","what":"Breaking","title":"performance 0.9.1","text":"formerly “conditional” ICC icc() now named “unadjusted” ICC.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"new-functions-0-9-1","dir":"Changelog","previous_headings":"","what":"New functions","title":"performance 0.9.1","text":"performance_cv() cross-validated model performance.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"support-for-new-models-0-9-1","dir":"Changelog","previous_headings":"","what":"Support for new models","title":"performance 0.9.1","text":"Added support models package estimator.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"changes-to-functions-0-9-1","dir":"Changelog","previous_headings":"","what":"Changes to functions","title":"performance 0.9.1","text":"check_overdispersion() gets plot() method. check_outliers() now also works models classes gls lme. consequence, check_model() longer fail models. check_collinearity() now includes confidence intervals VIFs tolerance values. model_performance() now also includes within-subject R2 measures, applicable. Improved handling random effects check_normality() (.e. 
argument effects = \"random\").","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"bug-fixes-0-9-1","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"performance 0.9.1","text":"check_predictions() work GLMs matrix-response. check_predictions() work logistic regression models (.e. models binary response) package glmmTMB item_split_half() work input data frame matrix contained two columns. Fixed wrong computation BIC model_performance() models transformed response values. Fixed issues check_model() GLMs matrix-response.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance-090","dir":"Changelog","previous_headings":"","what":"performance 0.9.0","title":"performance 0.9.0","text":"CRAN release: 2022-03-30","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"new-functions-0-9-0","dir":"Changelog","previous_headings":"","what":"New functions","title":"performance 0.9.0","text":"check_concurvity(), returns GAM concurvity measures (comparable collinearity checks).","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/news/index.html","id":"check-functions-0-9-0","dir":"Changelog","previous_headings":"Changes to functions","what":"Check functions","title":"performance 0.9.0","text":"check_predictions(), check_collinearity() check_outliers() now support (mixed) regression models BayesFactor. check_zeroinflation() now also works lme4::glmer.nb() models. check_collinearity() better supports GAM models.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"test-functions-0-9-0","dir":"Changelog","previous_headings":"Changes to functions","what":"Test functions","title":"performance 0.9.0","text":"test_performance() now calls test_lrt() test_wald() instead test_vuong() package CompQuadForm missing. 
test_performance() test_lrt() now compute corrected log-likelihood models transformed response variables (log- sqrt-transformations) passed functions.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"model-performance-functions-0-9-0","dir":"Changelog","previous_headings":"Changes to functions","what":"Model performance functions","title":"performance 0.9.0","text":"performance_aic() now corrects AIC value models transformed response variables. also means comparing models using compare_performance() allows comparisons AIC values models without transformed response variables. Also, model_performance() now corrects AIC BIC values models transformed response variables.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"plotting-and-printing-0-9-0","dir":"Changelog","previous_headings":"Changes to functions","what":"Plotting and printing","title":"performance 0.9.0","text":"print() method binned_residuals() now prints short summary results (longer generates plot). plot() method added generate plots. plot() output check_model() revised: binomial models, constant variance plot omitted, binned residuals plot included. density-plot showed normality residuals replaced posterior predictive check plot.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"bug-fixes-0-9-0","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"performance 0.9.0","text":"model_performance() models lme4 report AICc requested. 
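Why comparing AIC across a transformed and an untransformed response needs the correction described above: the likelihood of the transformed response must be mapped back to the original scale via the Jacobian of the transformation; for g(y) = log(y) the adjustment is logLik_adjusted = logLik(log-scale) - sum(log(y)). A Python sketch with illustrative numbers (not values from any real model):

```python
import math

y = [1.0, 2.0, 4.0, 8.0]       # response on the original scale
loglik_logscale = -3.2         # pretend logLik of a model fitted to log(y)

# Change-of-variables term: sum of log|d log(y)/dy| = -sum(log(y))
jacobian = -sum(math.log(v) for v in y)
loglik_adjusted = loglik_logscale + jacobian

k = 3                          # parameters: intercept, slope, sigma
aic_adjusted = 2 * k - 2 * loglik_adjusted
```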
r2_nakagawa() messed order group levels by_group TRUE.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance-080","dir":"Changelog","previous_headings":"","what":"performance 0.8.0","title":"performance 0.8.0","text":"CRAN release: 2021-10-01","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"breaking-changes-0-8-0","dir":"Changelog","previous_headings":"","what":"Breaking Changes","title":"performance 0.8.0","text":"ci-level r2() Bayesian models now defaults 0.95, line latest changes bayestestR package. S3-method dispatch pp_check() revised, avoid problems bayesplot package, generic located.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"general-0-8-0","dir":"Changelog","previous_headings":"","what":"General","title":"performance 0.8.0","text":"Minor revisions wording messages check-functions. posterior_predictive_check() check_predictions() added aliases pp_check().","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"new-functions-0-8-0","dir":"Changelog","previous_headings":"","what":"New functions","title":"performance 0.8.0","text":"check_multimodal() check_heterogeneity_bias(). functions removed parameters packages future.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"changes-to-functions-0-8-0","dir":"Changelog","previous_headings":"","what":"Changes to functions","title":"performance 0.8.0","text":"r2() linear models can now compute confidence intervals, via ci argument.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"bug-fixes-0-8-0","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"performance 0.8.0","text":"Fixed issues check_model() Bayesian models. 
Fixed issue pp_check() models transformed response variables, now predictions observed response values (transformed) scale.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance-073","dir":"Changelog","previous_headings":"","what":"performance 0.7.3","title":"performance 0.7.3","text":"CRAN release: 2021-07-21","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"changes-to-functions-0-7-3","dir":"Changelog","previous_headings":"","what":"Changes to functions","title":"performance 0.7.3","text":"check_outliers() new ci (hdi, eti) method filter based Confidence/Credible intervals. compare_performance() now also accepts list model objects. performance_roc() now also works binomial models classes glm. Several functions, like icc() r2_nakagawa(), now .data.frame() method. check_collinearity() now correctly handles objects forthcoming afex update.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance-072","dir":"Changelog","previous_headings":"","what":"performance 0.7.2","title":"performance 0.7.2","text":"CRAN release: 2021-05-17","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"new-functions-0-7-2","dir":"Changelog","previous_headings":"","what":"New functions","title":"performance 0.7.2","text":"performance_mae() calculate mean absolute error.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"bug-fixes-0-7-2","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"performance 0.7.2","text":"Fixed issue \"data length differs size matrix\" warnings examples forthcoming R 4.2. Fixed issue check_normality() models sample size larger 5.000 observations. Fixed issue check_model() glmmTMB models. 
Fixed issue check_collinearity() glmmTMB models zero-inflation, zero-inflated model intercept-model.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance-071","dir":"Changelog","previous_headings":"","what":"performance 0.7.1","title":"performance 0.7.1","text":"CRAN release: 2021-04-09","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"new-supported-models-0-7-1","dir":"Changelog","previous_headings":"","what":"New supported models","title":"performance 0.7.1","text":"Add support model_fit (tidymodels). model_performance supports kmeans models.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"general-0-7-1","dir":"Changelog","previous_headings":"","what":"General","title":"performance 0.7.1","text":"Give informative warning r2_bayes() BFBayesFactor objects can’t calculated. Several check_*() functions now return informative messages invalid model types input. r2() supports mhurdle (mhurdle) models. Added print() methods classes r2(). performance_roc() performance_accuracy() functions unfortunately spelling mistakes output columns: Sensitivity called Sensivity Specificity called Specifity. think understandable mistakes :-)","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/news/index.html","id":"check_model-0-7-1","dir":"Changelog","previous_headings":"Changes to functions","what":"check_model()","title":"performance 0.7.1","text":"check_model() gains arguments, customize plot appearance. Added option detrend QQ/PP plots check_model().","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"model_performance-0-7-1","dir":"Changelog","previous_headings":"Changes to functions","what":"model_performance()","title":"performance 0.7.1","text":"metrics argument model_performance() compare_performance() gains \"AICc\" option, also compute 2nd order AIC. 
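The second-order AIC behind the new "AICc" metrics option is the standard small-sample correction AICc = AIC + 2k(k+1) / (n - k - 1), where k is the number of parameters and n the sample size. A one-line Python sketch:

```python
def aicc(aic, k, n):
    """Second-order AIC: small-sample correction that grows as n approaches k."""
    return aic + (2 * k * (k + 1)) / (n - k - 1)

# Illustrative values only: AIC of 100 for a 3-parameter model on 30 observations
val = aicc(aic=100.0, k=3, n=30)
```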
\"R2_adj\" now explicit option metrics argument model_performance() compare_performance().","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"other-functions-0-7-1","dir":"Changelog","previous_headings":"Changes to functions","what":"Other functions","title":"performance 0.7.1","text":"default-method r2() now tries compute r-squared models specific r2()-method yet, using following formula: 1-sum((y-y_hat)^2)/sum((y-y_bar)^2)) column name Parameter check_collinearity() now appropriately named Term.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"bug-fixes-0-7-1","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"performance 0.7.1","text":"test_likelihoodratio() now correctly sorts models identical fixed effects part, different model parts (like zero-inflation). Fixed incorrect computation models inverse-Gaussian families, Gaussian families fitted glm(). Fixed issue performance_roc() models outcome 0/1 coded. Fixed issue performance_accuracy() logistic regression models method = \"boot\". cronbachs_alpha() work matrix-objects, stated docs. 
now .","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance-070","dir":"Changelog","previous_headings":"","what":"performance 0.7.0","title":"performance 0.7.0","text":"CRAN release: 2021-02-03","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"general-0-7-0","dir":"Changelog","previous_headings":"","what":"General","title":"performance 0.7.0","text":"Roll-back R dependency R >= 3.4.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"breaking-changes-0-7-0","dir":"Changelog","previous_headings":"","what":"Breaking Changes","title":"performance 0.7.0","text":"compare_performance() doesn’t return models’ Bayes Factors, now returned test_performance() test_bf().","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"new-functions-to-test-or-compare-models-0-7-0","dir":"Changelog","previous_headings":"","what":"New functions to test or compare models","title":"performance 0.7.0","text":"test_vuong(), compare models using Vuong’s (1989) Test. test_bf(), compare models using Bayes factors. test_likelihoodratio() alias performance_lrt(). test_wald(), rough approximation LRT. test_performance(), run relevant appropriate tests based input.","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance_lrt-0-7-0","dir":"Changelog","previous_headings":"Changes to functions","what":"performance_lrt()","title":"performance 0.7.0","text":"performance_lrt() get alias test_likelihoodratio(). return AIC/BIC now (related LRT per se can easily obtained functions). Now contains column difference degrees freedom models. 
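The fallback formula quoted above for the default r2() method, R2 = 1 - sum((y - y_hat)^2) / sum((y - y_bar)^2), written out in Python with toy data:

```python
y = [1.0, 2.0, 3.0, 4.0]       # observed responses
y_hat = [1.1, 1.9, 3.2, 3.8]   # model predictions (illustrative values)

y_bar = sum(y) / len(y)
ss_res = sum((a - b) ** 2 for a, b in zip(y, y_hat))   # residual sum of squares
ss_tot = sum((a - y_bar) ** 2 for a in y)              # total sum of squares
r2 = 1 - ss_res / ss_tot
```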
Fixed column names consistency.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"model_performance-0-7-0","dir":"Changelog","previous_headings":"Changes to functions","what":"model_performance()","title":"performance 0.7.0","text":"Added diagnostics models class ivreg.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"other-functions-0-7-0","dir":"Changelog","previous_headings":"Changes to functions","what":"Other functions","title":"performance 0.7.0","text":"Revised computation performance_mse(), ensure ’s always based response residuals. performance_aic() now robust.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"bug-fixes-0-7-0","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"performance 0.7.0","text":"Fixed issue icc() variance_decomposition() multivariate response models, model parts contained random effects. Fixed issue compare_performance() duplicated rows. check_collinearity() longer breaks models rank deficient model matrix, gives warning instead. Fixed issue check_homogeneity() method = \"auto\", wrongly tested response variable, residuals. 
Fixed issue check_homogeneity() edge cases predictor non-syntactic names.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance-061","dir":"Changelog","previous_headings":"","what":"performance 0.6.1","title":"performance 0.6.1","text":"CRAN release: 2020-12-09","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"general-0-6-1","dir":"Changelog","previous_headings":"","what":"General","title":"performance 0.6.1","text":"check_collinearity() gains verbose argument, toggle warnings messages.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"bug-fixes-0-6-1","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"performance 0.6.1","text":"Fixed examples, now using suggested packages conditionally.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance-060","dir":"Changelog","previous_headings":"","what":"performance 0.6.0","title":"performance 0.6.0","text":"CRAN release: 2020-12-01","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"general-0-6-0","dir":"Changelog","previous_headings":"","what":"General","title":"performance 0.6.0","text":"model_performance() now supports margins, gamlss, stanmvreg semLme.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"new-functions-0-6-0","dir":"Changelog","previous_headings":"","what":"New functions","title":"performance 0.6.0","text":"r2_somers(), compute Somers’ Dxy rank-correlation R2-measure logistic regression models. display(), print output package-functions different formats.
print_md() alias display(format = \"markdown\").","code":""},{"path":[]},{"path":"https://easystats.github.io/performance/news/index.html","id":"model_performance-0-6-0","dir":"Changelog","previous_headings":"Changes to functions","what":"model_performance()","title":"performance 0.6.0","text":"model_performance() now robust doesn’t fail index computed. Instead, returns indices possible calculate. model_performance() gains default-method catches model objects previously supported. model object also supported default-method, warning given. model_performance() metafor-models now includes degrees freedom Cochran’s Q.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"other-functions-0-6-0","dir":"Changelog","previous_headings":"Changes to functions","what":"Other functions","title":"performance 0.6.0","text":"performance_mse() performance_rmse() now always try return (R)MSE response scale. performance_accuracy() now accepts types linear logistic regression models, even class lm glm. performance_roc() now accepts types logistic regression models, even class glm. r2() mixed models r2_nakagawa() gain tolerance-argument, set tolerance level singularity checks computing random effect variances conditional r-squared.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"bug-fixes-0-6-0","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"performance 0.6.0","text":"Fixed issue icc() introduced last update make lme-models fail. 
Fixed issue performance_roc() models factors response.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance-051","dir":"Changelog","previous_headings":"","what":"performance 0.5.1","title":"performance 0.5.1","text":"CRAN release: 2020-10-29","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"breaking-changes-0-5-1","dir":"Changelog","previous_headings":"","what":"Breaking changes","title":"performance 0.5.1","text":"Column names model_performance() compare_performance() changed line easystats naming convention: LOGLOSS now Log_loss, SCORE_LOG Score_log SCORE_SPHERICAL now Score_spherical.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"new-functions-0-5-1","dir":"Changelog","previous_headings":"","what":"New functions","title":"performance 0.5.1","text":"r2_posterior() Bayesian models obtain posterior distributions R-squared.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"changes-to-functions-0-5-1","dir":"Changelog","previous_headings":"","what":"Changes to functions","title":"performance 0.5.1","text":"r2_bayes() works Bayesian models BayesFactor ( #143 ). model_performance() works Bayesian models BayesFactor ( #150 ). model_performance() now also includes residual standard deviation. Improved formatting Bayes factors compare_performance(). compare_performance() rank = TRUE doesn’t use BF values BIC present, prevent “double-dipping” BIC values (#144). method argument check_homogeneity() gains \"levene\" option, use Levene’s Test homogeneity.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"bug-fixes-0-5-1","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"performance 0.5.1","text":"Fix bug compare_performance() ... 
arguments function calls regression objects, instead direct function calls.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance-050","dir":"Changelog","previous_headings":"","what":"performance 0.5.0","title":"performance 0.5.0","text":"CRAN release: 2020-09-12","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"general-0-5-0","dir":"Changelog","previous_headings":"","what":"General","title":"performance 0.5.0","text":"r2() icc() support semLME models (package smicd). check_heteroscedasticity() now also work zero-inflated mixed models glmmTMB GLMMadpative. check_outliers() now returns logical vector. Original numerical vector still accessible via .numeric().","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"new-functions-0-5-0","dir":"Changelog","previous_headings":"","what":"New functions","title":"performance 0.5.0","text":"pp_check() compute posterior predictive checks frequentist models.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"bug-fixes-0-5-0","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"performance 0.5.0","text":"Fixed issue incorrect labeling groups icc() by_group = TRUE. Fixed issue check_heteroscedasticity() mixed models sigma calculated straightforward way. Fixed issues check_zeroinflation() MASS::glm.nb(). 
Fixed CRAN check issues.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance-048","dir":"Changelog","previous_headings":"","what":"performance 0.4.8","title":"performance 0.4.8","text":"CRAN release: 2020-07-27","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"general-0-4-8","dir":"Changelog","previous_headings":"","what":"General","title":"performance 0.4.8","text":"Removed suggested packages removed CRAN.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"changes-to-functions-0-4-8","dir":"Changelog","previous_headings":"","what":"Changes to functions","title":"performance 0.4.8","text":"icc() now also computes “classical” ICC brmsfit models. former way calculating “ICC” brmsfit models now available new function called variance_decomposition().","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"bug-fixes-0-4-8","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"performance 0.4.8","text":"Fix issue new version bigutilsr check_outliers(). Fix issue model order performance_lrt().","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance-047","dir":"Changelog","previous_headings":"","what":"performance 0.4.7","title":"performance 0.4.7","text":"CRAN release: 2020-06-14","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"general-0-4-7","dir":"Changelog","previous_headings":"","what":"General","title":"performance 0.4.7","text":"Support models package mfx.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"changes-to-functions-0-4-7","dir":"Changelog","previous_headings":"","what":"Changes to functions","title":"performance 0.4.7","text":"model_performance.rma() now includes results heterogeneity test meta-analysis objects. check_normality() now also works mixed models (limitation studentized residuals used). 
check_normality() gets effects-argument mixed models, check random effects normality.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"bug-fixes-0-4-7","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"performance 0.4.7","text":"Fixed issue performance_accuracy() binomial models response variable non-numeric factor levels. Fixed issues performance_roc(), printed 1 - AUC instead AUC.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance-046","dir":"Changelog","previous_headings":"","what":"performance 0.4.6","title":"performance 0.4.6","text":"CRAN release: 2020-05-03","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"general-0-4-6","dir":"Changelog","previous_headings":"","what":"General","title":"performance 0.4.6","text":"Minor revisions model_performance() meet changes mlogit package. Support bayesx models.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"changes-to-functions-0-4-6","dir":"Changelog","previous_headings":"","what":"Changes to functions","title":"performance 0.4.6","text":"icc() gains by_group argument, compute ICCs per different group factors mixed models multiple levels cross-classified design. r2_nakagawa() gains by_group argument, compute explained variance different levels (following variance-reduction approach Hox 2010). performance_lrt() now works lavaan objects.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"bug-fixes-0-4-6","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"performance 0.4.6","text":"Fix issues functions models logical dependent variable. Fix bug check_itemscale(), caused multiple computations skewness statistics. 
Fix issues r2() gam models.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"performance-045","dir":"Changelog","previous_headings":"","what":"performance 0.4.5","title":"performance 0.4.5","text":"CRAN release: 2020-03-28","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"general-0-4-5","dir":"Changelog","previous_headings":"","what":"General","title":"performance 0.4.5","text":"model_performance() r2() now support rma-objects package metafor, mlm bife models.","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"changes-to-functions-0-4-5","dir":"Changelog","previous_headings":"","what":"Changes to functions","title":"performance 0.4.5","text":"compare_performance() gets bayesfactor argument, include exclude Bayes factor model comparisons output. Added r2.aov().","code":""},{"path":"https://easystats.github.io/performance/news/index.html","id":"bug-fixes-0-4-5","dir":"Changelog","previous_headings":"","what":"Bug fixes","title":"performance 0.4.5","text":"Fixed issue performance_aic() models package survey, returned three different AIC values. Now AIC value returned. Fixed issue check_collinearity() glmmTMB models zero-inflated formula one predictor. Fixed issue check_model() lme models. Fixed issue check_distribution() brmsfit models. Fixed issue check_heteroscedasticity() aov objects. Fixed issues lmrob glmrob objects.","code":""}]

z&n6;eA4CEQd}H4#(tBD|X^pcOpv0g_huu3pt&AUwV1c(s{A+>^xuC~p10-iS9Z(s?=HdQ+C{~YLNk+S1QgDdTvTbMGd5y> z6^n8I==d0(G%j;SgcY{Qre~hD&bPoMr7yX*RR`ydfIG>Z*@oxl#xr%=wbT(}uOmwB zIAW9T%FOVp`R$6^hAI=Sj(9l?i&ZAM-IJ5Yip_pmTE9rcR(@(gA|?JTSz$fSr137_ zZJ619L%fz22JR2ff+c>dcEf6SZ_i?iBF89=B8w$XBxBPw2#B<39s&<6jRS0-CGAlZ z8l}BzXP3IhbN;8=6=LS?9km$1Q1t2k?-aN2Rsuq;0uz*}yM~!X;exR09n}<>kMcnu zybJAGiN#^^v&m^qfiuwWAVA)%H)?G&cVR)n`UVMR?|Q79bEY4?{8nz_^`|m@D9j(2 zx2+w3S)_Sr=3JA-Mp9;VKHhm+HygN!S`gmk(x-Q8EqzrO4|ajF=(Cbo3y5xgSfR}b z*ZkZTjO7i-QkJpls`bXESO!HTM+2S-Ugqp_mEu|6$)3)#O5tYpd`x1OYG$9KTlele z5ZjK1eI%as@(5a(<|2&_Fxiu9Qk70aAZ>#{x(a@@3jFY7ASoAD9wT7{0YTB0drB$- z_A>x1O@ct$1bEsiXnAjcySI(sB9NKq>~bw$=7mB_oiygzHg~~&G2ztI0JWQ=T7s0- zHN^*A)w>uU)YhVv92B6F4gaNKXWPkGh^0+Hqiw+Vu8J2zUGXN`&jF(Bna{i{fU5(Y z1(r5J53Hg)U_9`Yegdb5r^`{uOi+5_t#Sp&C3&rBqeHqspprvQeFtra287}Du%$AM0m zqe%~~0!Y200>E#LR!kKgUP4rbE&z7SXAXCsancVoG%X<{kIb1e0fDw5ObbK`JGc9c zy@cRS3SjpD&JXkn*?U;9qg&uizpw*nIJlT&Ha@;k0iYPPQbyh`-2#!R(GbH^Z`s?* zxr8iuSwN-?$m8)77-yxQhBjK5cyvU`bla}%6xFGwC_bV*fjLnXO z)e3Mf+tmZ!3o@nnOjj6Z53Hh>U^6fe?_PZILUa(RYYzgZM(7j=`0akxz^*nG&=t0& z#?%~p>Fa@48N4;aZ}zpy0Tzr?LLI2^j{t6|f`V3lu(RU|_EZUJMR>4R*IHpwp~(|?o!@%w5w-h4%@sS~ z1unD!Jn209rPC^)7Neo%=283Z@lID@8cS`}wIg)30mO1%m;}=jwx4ruwYxp||Hc@!nwXh>2nenDf^Pmz6yBtGi9H99awPt{R6DYv`0RRC+Yw1?n?hmOO z{=cb%4FKTNPbfbC;D>ju`qKYy+bHWU03QYb1lAczO91;K`XQ&b)!~4oakP>cR8)BX z-JFNo3QnDd#?Hd29%NE+8KxG^ebK4qi>qA*T1V(abf5jNQzg@(A%D_7`m2)juAm-( z!GlLcKC~;pl`+hKx3vh%Y$!7Tx!FaUk;u*6D992F)wbdKA@4MD{irhsj{RL;(xW78 zNp4RUNZTBa*lt@ zbR7=12yV>>ZhgSf-fLg|>&W!UYx-=C28tXf>7xdF$KC}u&o&*h;kMJyuwj|}Q;trp z*J+*C)W`@L?rLfOlD2GUfG>;~UABS_2*@&U^Msr+9fWbri7z!y%^QS~ zjKSGJNH>tP4lbRK+%5uPc&MboF>#nOvKAex|2gwd{YBM49R*VMf-KIrxxDoXme!Tw zBkZn=;BkKNkopWiCd7yVJgbWk1^8L9e9cn*QwNcLKrhiP^61fki|zWm`t+W1_4(9r zoM1H~1^+8rgrl`>T<9qy{l06D8S;=zkn)2>{=2;_I;58=tIQhW6>Dw=Ph*ClTIhA z%%S8J58|B~#vs@)ARvs1GAJA#M5U7f;f@UNmX3Tkmkbx-M!lM-uTK-(8f0L~5YrSwYpKa3 zC6Q{NaeFSh8m0YQa|18aa>cS?ctum07ub{@)^S7alv$synncM|Gt}AW!T^?DQ1z2} 
zDOGmQw*T&_q2-Cu=I=w$^svzQ8XDdRf_j;#zYTSFF={UnRF|P zj8+_%qWthBlpS`2(rGB^hT^VA730fK-V8<6AggB-)?yT7Lw;^2Erz@@E;kt@(U21i z+5UqpZ^-mkGMpjZ8pOt(kY;=dsVWaC%8;ziNYVz8ZkH$x34%-ziZcYl5HHdZS3e4$ z8+f#cOI6@7NX9aW#^6!dWLz{!vNn!JIetY&Fvoj1lS*LFkitS#pofSM8sbU_p^Xa; zVj(a<1%JPIkbPM2#KU=zya=RV2fReuqX}+^;Uyeza3Mwzh;%z!3G93c&NzdUt-;aO zghMdi^w@a2_L10l`^CWh!NF$rjNJM$@$g7g+#87=YdzMA8Wbz46bG|pR-TkfBUf&g zKR3*sa<8#X;?2U)fx@M*bM<~h$=sW&9}G3g34k$ zi?i}r%0@14=ARc#^zMy--qh%4g4}p*X4vmnH=i_K7kfTBby;qcvvs4t%pg82C-Y)J zhyL~4`4yaY5$Nt2&%88%yeR8pGwL_A8%jC)~EDnfrDlsk;owS1t W$zFi)LqQISL(Z9if!eGx0RRC0y_JFh literal 0 HcmV?d00001 diff --git a/deps/Roboto-0.4.9/KFOMCnqEu92Fr1ME7kSn66aGLdTylUAMQXC89YmC2DPNWubEbVmbiArmlw.woff2 b/deps/Roboto-0.4.9/KFOMCnqEu92Fr1ME7kSn66aGLdTylUAMQXC89YmC2DPNWubEbVmbiArmlw.woff2 new file mode 100644 index 0000000000000000000000000000000000000000..bc95855cdb4d03ee9a9999e8a4152f94dfcaf123 GIT binary patch literal 7856 zcmV;h9#7$SPew8T0RR9103NUa5&!@I073u&03JyI0RR9100000000000000000000 z0000QN*jh=9D-y9U_Vn-K~!DpgVx+O1(M;WdHy9K*kvR?Y#k^VR^7rVp3QNr2wS>Gk}Q>8$szup&p5$IrS^YtmqK>FP+~tGc`Pzvor=eaa{E5slTGT&mQi^2aP0+ zr_fcnRj(tKni(<#|L>k+nh}!B{ye|UKli=I0}H`u!V1uyVqt-mb0~s^iK19*v={1} zRm)S1#pQ*Xb^eb zEeE7S3l=oQX-Yu(p#fDbH40=mH5R0_2b;zOY#COr$j^O ze!2s$JY!Iuk3OihW8qHiPfne0JARggVS~5(Hz15G0PAQ@|60BqurSj=LQcp}QbrrP zQS2(*AbK~(1IHY9Shu#vn(UewxIb}sEM25FsgEM1f#6|IZjUFE8?D;H(yZA`lm zf{*Q{x{(Bo{8;yL_PvRAesyb0%M#fU!H7}Ly4H>d1Q;+OY4(Bv0haW|BCuu$0hbdT z1a?4x00#g&04$*}UCj}wr4xMS0EQqG0gIQU=7QaONpn3gfdF{CR4!?NYE{~LV1%VK zApncg81^+oz~Ecd5=J63s2Ktn(El}DtgkxFvgD7LdIUJUc^VxIwE@5wTvG&O7V`j9AjX41 z17I{G5Gsa&LJly1e6xAioB-NZ1aM6O$SioE=`aqob3nQ9_@bFW?8k2h0uvAl5lSRN z6tP(GBoZY{g}}CNFRYz@#E5ruw@#}DyCKa}t1{Mzd1fK#!uMj1= z`1$lJ6^aQoaKUB6j4|}UfUm2w9d+1TeYO)_HxI?SStn~}y<=}?ZLEbgv2|lNvR2m2 zv}`SF(AcW90DxRLBq2mf%@~9V;-ftg0{|A}Dnah6w_;ILD}gc^qz_FH@Z1SXPv?uff;W%!9J1^biKHxs%AEPKxr(uXi9#n@%3a#m>xyH^<)(cBXHCT~#^K{T5=PMJ zq}}?Bx#$RMBhmwJu-zU%&COY~Aon+9uZSQ1x=T6@n*QKtd;vR6IpM5p@m{>Ldq`^1 
zob6=FuJb;<4n0neE^?vxpTb(EGHY$pWxFAh_B!cH_zAgTgRMr)IUwL=K((eOC&v4F zd%C+iJKA-dH*MUoe%;!()|TcbZDWI`zD`|RQ(aYAp(-ycEh$ziisXd_c_*B`e{pZn z?{@CqxqYkMYBuV%*I&w3mMdw@qcc96tW+yWOyk*Kwa28egwOdxaL^E1*Xnl9=I2Ul zr&yyw$kXe?Sgz*!PAQX$7*_sr?LN1=Lrb|_0+DnMBu_WFt#(h-;A|4?FwMmg+gb7H zt`;zM12$v=I?~4xvq<%(A|2nq=Fg{b&^X2w!@|TxS%cB-VFnH9lvJ85&lI@=5%|A+(S(9_+xWW56 zv94Ak>ol&WSBGOuuZ|9FIh&wf9kai1%$)gC7uU#+Kq_lS1s_+&GN0^C+dX5j=v3ME zu}?-vIk{exq%0edYfdsbT-0K$6C!0(CU!_FW-yDjfd*stm}|^Lg1DT-5u@^fj#uW7 zmyVZ@*JjpdRHR$^i+!hmk&+n6uO-$~j(lRh#gfWD-7ZyH(Y+U1kTnb5dRZ4&;=6mf z(;fH7pLu*}e=S-MwG+kQGe*&1(|~Pubi~7Z+sxD@rl%r>rJB z=0Fg{SB0k{*RMqCypkR!hmopdFe)eHdOYReFKO z9ff<&6mJ&rX+%;p-sNqnB(0OW)G_s`2w-6IPHbrFbh0C*9mF^xv(2$0t~Bz3cwnk? z`V{DDHJGk6;8TaZdrB{mQ>4{Eb%v&4V&NehbX*a=J+zbrIa(LPEtnp zEtjm(_M276Y9O5V2FjMqOWl!E*Z4F`ATBPvVNDs;Xqc-(MR%8)HFvwcYUM5-DVdfw z=NR`*Jiqu##)GMc=FZ;WlL_cq94b~`5N&#F!0mPh9Y2Q7Lkk(C7_KO4`Wn(=tqkh~ zz6vd`L9?7z86u_yE-|!6m1Va^WgAvVB*A~c&q%u+K%gt0JzUV@GM>2baqHos9!*L^ zwZEzf6L)$Tv>u~A(*marsvJu##-71J)U84-K$ogj5~8zszU|NtP*yfZR?YUeZ$SYi!fIfio1@pwVHOO9M8x5BZ3URb>oqQcjx=zeRl>Vego^`c@Z zA0G%uja+pre9UOrS{cN29;G!-YEVCgrxJi0>2@)xAd0}4O;h{}D;@Cmo(zwU9E+gl zLDZ0~ugO(@PWkcY<>rnBw$(n5o*5sLUcz$DX zCdFG)^9>|+-~$Mjc-jBX0G)bO$do?B1KSCmhpfsfD>I$?ds>S%^~97yAWq6b{bUbW z>H!G63-1Hs*M9g1u939lYsq!tXJgO z2kr<@t{;%&ek$xwiJafM$e;=0;Gggol<)puZ!Q(wF5f;AxOn~J88b-wj!!ID3c*W`vl;bxJx0#8YcCvbPWMk6-=@$rY-6KlNYLsN;*_$$pdF)Dpx_iO-^xc$77PLE*qN2jR^yF&pX@(#WtS@UUxB`-3$Blo9=~ktiFK@)(0It3ba4W>tZMIp?$r;w1k>$BJj|ybM zL`k&982Kjz1Q;-YL2&>8fFS^Y_y9-;_$Pqe4rtu~_W>|^k-u_Ek2ca;s%#FvodA3yU#01U7!#1XT4?|-p972SZnKFy?RFo) z8FV_-XtjD0oc2A3JIx^>mR61$o!v~nZCn-nb6K$&aJfpR2fg-q)yIXgz+T5}w?L;; zStMM_yS{N7b2@)2tFs~Zh5nqe zpSRDHtSHcY>`W-%68Nrp z)B*j{cl&pKcUL_ydRx8Cy_4^*f2YkE8JCu;&>7yZ)|~m)zje#&m%rzg4e@XA-T%ne zQz;B(Ms0G#)~KsBJyViOQ4K(XUCDG!sf#n zwS$36+8mDw4HK6J^ws4nqpO2bDm}+az0$mTvsJBOsil1x0kN7jF15Pr`|B=Hr!5=*z%tpyR&mRB@pU%daXb@}!03wcUTO;EB`oY5CB*5E{s&gVy@TE-d> zv+R*X-<#cWHh*N;($6j5*!$VY)sWGzIxP17m=0zB2A>q!P(tNYeK*z4 
zN8&q+^Vj=Gqdy8KHuv;cefa$LytU0KiQ!hY5yLXpGrC8i?k~Az{$p%@Ap81a$wXg8 zL#%h#D04~tO<|kpvU9L5bYvi@LtBvM!&X{8NEx+XUk?8$S3vBveA z@pS3ho;dmT-J`AJ;c>F;%s3g#$Mv6$hDl*{#o>{GF%CifUS2zVH`0r0{{yu}Wq zKX`bBiLIZz`$1URJaRSkbQ#=#IKO+j2PT|e|I}E{etzDGtWIvQP4#z+_exd9Pq5y3 zG-W>)hTj{`lba?PE-m`0$i$+);37r%*prGPCxvnGxta0t=VJD~J=O5G8Ts(HZQ7)r ze(T7b{TI~7`hN88yTrmN_-kkK?IZJt=ihQ9RmTU=z`MxWIq!cqdF;)LZ(KZTO0H~d zg03@#lUBC!m+mST7hA=8*}2AKi`;mQW8VrM9D8%_jk%*f51zhAwe;|kN__+YXX8A1 zQ4XnQ9$txwLLaZsZyxX>T-%oWB&LDvdmqj{`jJRUQAbc=L06EZM2Eko8R@a{iBava z(Fb2=tlp~!9LcQ2I*UZtT)z9MIO|*w=dVtVM@fsjg5`zbVWq`UW5O6|X6DaCXhx2; zpI26rpMQwhb*em-G4abkgzD_Z3!_kLnYbH&1SwbT&!~Os0{M$r@|x6I3k`P;H`%i~((> z_ifFD0sPVwsB!}uP(=9+53I6G>?r_7&olLe#Imv}%rv+gTu0&5jF8nr!YM;#e!>@` z%|+*Js3k~BiVkm!9CS5V40wmbVATCdGfQsE zx=)1qmDt03R#AKvGT3W~77P9(C>9J+kGoYpT8js6iz*%%1v>4lg!Y6fDU4BjfZ~!V zc-5q2FM*+D+Bu*ug1CsXiy*pyRbbeGBu7YK84(r@29Rjb@}CwRcA~^VFC>cr!^nLw z_7RNY@@_t*#C+)*@ntu*#F_0Y@@?6v93X-7 ze8+Ri_fT7Q_7-3n+E*ace5E*}Q~>5KgaiOAxePlw21A5~a>oQH<{0LkW6vhZ8^NwF z>#&-K67E-QAJnfgFSq4#r_=ty0W_{>LR}mG)i=Eeu=RA|m;soQ_MiS<$)me7w)V>S zu~wJ@5LfmPSgjOXKLI#vZ*f(xZtWfaWx6f%@^!OYHr~8fF3o}3QopCOwz8G&1>}NW zHVyaL?)R$(CRi74M~BxRVh463ZRP2oZ5d3sx`89>cuHt`MVNb#=^SwS+q@0TZd2G- zD_j2DKRG5(&4Iq{)jsG8ShlsMqd$SUByz#j{BvsVYpS0fJ2U~#_n@cmdlsM{XQ%z^ zSm&?EYw_@4txg2sq z=pRhOeYX4QVRtitimh|I|LwaZtOcvv4cW=`^=DowJ*!3F`V`Go>yqvU)9Es` zLb*Q3xp+bV*FVG`_;$`vN;e1uc*UoRQOl{a(B{PGH{!zr;F7bdaf&?P*TgN zC>Df3WTHC|0usuTU1{e)bedV=$f6GK1)_yVZ9SP)K1#v_Q{GUECMn<0@)c!J*{W-{ zP#BxF`}FdY)8Sxv`!+gQdA6!u>+nd{Eh@yx*DrCbVGgzsi+t0GCoKVoFc|CEApV&W zktPY${PXRC*R|-|7n6FB0TY(G=<1V+kL|Q)4}?UZ?3pgn!d8rVrNS|1^Pntprf4N> zq@vhu{mM02pw(eSW$k1l{S-~!)dyDJH5cOO2%uy3MeI~0M~L;*@YCLJJ-Y&k<$LkD z>jLD|wpe$7-Yxe$fL7S;FwCiX1?dOLITE%gYP-yv>^J0&5NGi@j8pSy#*5XQ0uXYO zIch#9dhX6Og@ycTh)J%Pv*5sJ^Iej<5Gd^X^}jz? 
z>hyz>l9L5Rw7h;Zjf}k;N6tzyu~1Z$d-Bx@bfV_Q$&d8Z?}{IX_k$`aVO+0E;93U2K^)$@k4Zo5wJC5hG!+u%eV!>59SQa{3K?^s|yn2zImYb)=& zZZFTlv5>5dxp{#AgnKC={WhJ|hfy?8+fQqe#}qBW`;u-EH}?qTtOoDPMNDiuv&}hV z=0>!dhq+5;wK>ddjvdV*Ba9X2Fncm~51qr=TEEaO`gjW#wUKQdTEtE1VI!JH%owvv z=Ly`5GfJFQf@3ikqaOifW#lw6)5pzSCXSMVMl7|{A!$?H`HitC~cp^ z<*=w3a{|^|n$3uBqH9O{_8BJxEK!Dbt`B0+|N za?MsMQsaEvG{VAE-wQ*28A@uFsieY8AGORBQhz`NKPth6Ei?xvx*o#ZYTJ&5Vi5V} z`XZv?cwr;buNv zWzNz>GmFND;|pdLujVlmy%fcXFc0FI@rvTr<6nnwI**arXrc=Ik>2mtWwmzd_| zwndPf5DNeXfB^si0-&p38w1dW3cQ<;X-yt%X!I*Lgg5!gBi%L1s&F3OBTQQS9}L*!WvTgQ8>woV)JUCMZ2wscUDBjh!tKOfjS-gI=- zLg#qf2JrF_cvU~pZL$cOL-7ea#(~ntcWK7ME+R~!dOQNTJN0*gkX1LengU$uZp2|N72Ny)Zs7Wj6ufUgTjPQbyn-Kr`}fCvy&QwX8xlnF?Db7u ziQk(lTy8vY$K$4f$7Op^d&c*R?S%kfNH`M8kbnV;uW5k&b!Y)xGsA?Fr*jAc05}si zQ2~K5GL~4`dR@RFz=s6_9Ow%)aI;O4l2yNx2_ZsHKmf+zBMJx%!AW9ZS!uEv0Cl$v zrN^`}KsJp!CF;Ft>66k%2h@@@ANEx%lka4CEUTuxO0`NzvKgBjG!gNANX^o$Qm=x> z=w>ryWRp=syQ;DwSsG}w6hu;$$`+k0Dah3*wMtcOobUx!NWaJGFyZ(D_%+=@-WdE= zbRw03FS=BCbFESfw|VLRNceISqH0G(W@4gIBdRiNEk+AVO5@V{d>)L| zDphNbZBa_7Y#gAF8aPr(O~s{blDf4{1Z9MUQB5q@J>in4HY&=!)7G0WH~kx%X6&b&nWpU#jhmhXO@1oTf z-bbEs;pG|uFGnemtzcM|Iwn&O%h2u8WkZ_q=B0|1r-*&YVl|TDVTqiQ(gq3DB7vDG zo(((>!(z#Z7&1(}QPB=R1Cik)Jt91aCkkc?3(+DpBm=8LoOmI@BLV}&@b}ZgSC|ZM zFEQM032v?)Jl8;bTe-9|<2lnt-!N`VIr1ZU{7@S!Icx+xt~JH6q8zN~>;r8b6fd+1 zvT`W23k>W3I+nIkRtF2r{mt^&{;WJR8B1moRPty+l4mR;#Z6H0}Uy5ESz8AkJgRFc}Y1VwfqW z!|rT2E*8_!g&6vm>z#6ePL-lJ$Sl;OE&p6|-u8;K3iKp>+>m!$xaKrMgEu_`1HidT z(GTUUIfKL8AUlm=aakEo;0;a9;LZCk1`%djz&nKJ=DKnx^6(s>$c8s0~}$;#xcMx(h?c(4mC zGLhd*&$lh=l*hT@@HlLE&7oRw$!sx{;-V?V1UFaHZ8*gv>t_sx-mm4~(Jjxfc61wt zLyxQE>BUE8;KC~PTXl?vMiP^UaKHXNlbdAseF9cRLRpq1frJ4-)hbeTtP#flH}~d! 
zPqHmTkd<+y0U!nO|IADPodGP83*?^ed-h%TI0f1-<`3}O>>uKyEJVcuXC#Us1}Y^{ zN=SuBDJf`U6jn^#)Ct>eE}G4`8*aPP?OfgU!rSim?nmxdJG9PPSE@@`BSL{P)E=XB zH>g2Dz2_R35)+T+%Gtod%n{iCudGpZFAEEYoG03hN$j01kRlMOY(&|XLN`COOc_0; zxc}3o{rN*6X#{K{t*wQlb+BskQeHpxL0JO7KBb*5Fo3}7r?$`CBiu`Mk^^H+>Hn{W z0Pa7e@rLY=O3`s0Y0@>%L0=h)j|!mc4e;Lk{Bzf!J%bO?DuJgij5bHxgH0>aqj-15as*sfKoo zgSehzoL~Q~Q({5gpOoi>=;?0Ht)n?%*0;_wI-A1!l9_peH|OnMu9X@Kl*43G-kZEs zeERZUf5I)eQen4q>*v6yn$Hft`%Uw<99+Vm58`0hbM;e;_0ADz;Wy;YCIg^|%()eQ zj)7c>*wm31YxOOblMpAfi`hv~Dv9!?D>43G?-9foc@{k<6*Du@T_;dj%H0ap2*`{> zFm7w@Qr8@WHpLri+L8dmzy?=sR%l>j?roC)6EiB;ocrouJ$Vd5{mKsFDK%$b)<+^+6R|a@*O@FGJ<2 zvK)lxgput)!kllDRba9}Gv&3byy!fXj?xmAnT>)Sbdyc#36#mxK<)@>ZETxLV)v$` ziT(edQbfwmSxB#P6XZcYlt5|cvfM?;Q^Y(};p&5&%S$AMW~n=Qojwm_CuFG6n9bmqu-lPTen=P&9U1M*y0j$HA z3!vPPNBCQOP&@*H&2|XVlLmPqlA%K24fVk&Q5+UW{mx}03*l95wT+6zhm^qOX=paYxWV}MzzLMhwvzbco0$?k{-gnZ`% z21BLaVMT?Y3PUB9m{N`QdHK;EF{x)}VQqGI4h;=j-!vE_VdWs>;jl+=aDAWxSDhq$ zOPqOy0NA;umdp0S;)xgW8KgkyT?f~_W5yn6k2VY-mb-@;K4biZQg>e53oB;GFWceA zbLjvPu=bO^z-z!|qvZly@;krfg;`6kS@Tgmi~qHL4&5Se`*_Kz+r+PrC#d=SugSU% z(Rg&)bQv&i#-a~g_tK9djoYX>UgzAdlXQ;AT3S4De0az+IMCnM+tc0E>F#K6Yi((6 zYIHdp9Cn-4Vy>?<84Y?}t+qy^Rw)&7nN%Vc39I=$ZdE0R&0;d>v<$f1ujay2dxoww=WY7+Siom@Q{dEj8jzcDZ(LWD19BmI*ssk!zRG!?{LA zmSs1eqgX~c~xpNGZ)i~)IOehm?A0>3mMvvw^H{)jA0@ra*qBQjt*8HD)d z#0!q?nx+A;AqkBIZ}^!AMV`vhnN%GO#`-Dr&q~p`K4cOe81|{PRo|?!=s4|7+2GX5 za67=&rfYyOJ3a1%3NfUa)DAV!yCm@U)oedbRYxb>K*u0Ndr!HKGLbapo#M_Rv7eMt zI(ULOeNu4p-Quq0lVBI9-OOL-nrm8pqZxOJ;ErWvhCV`Pa2z4iCniaZSnF5I(=DnK z7TmLTYwskvBX|5<7(!{%VvNQoZ0yB&+B0j|YTkp>v1cB`uYFiLoeB!K_MDtzvgcgEt8UiZ3kD3POlR0lhlLPManI!i}_ox>Q^}Nxwo{2}EwOi#xiX$e+BLSMdJT|@I zJT+_V2kxfaJ@jtUc7)VW)f7a$%Z}e{N_2+nz5-8v#OD*Er(&t^tt7(INU_RNm!&E} z7~(_Pc=N>}1}xrl22fQj*r-#3e2IdeRA9++Xw3#* znTFB2aG=%4&0%Y*5sIT$N;UdSVh~g%GaPV{wpqrT6*~r5ciROj0`wF`T9mZLs804A zf@k5|ofVUYHyy_T^e#VNTf}8~IW-Pa+iz;NUSI)DiJBYBjQ#WbpJ}>B*a{AwS<&V~ zU)KlM*s6-Guy;vMW=j=ysI9|jE~o{)gF=7TdL`Y^aD0ATWbx9at!a}XQ0Wa<+9;|s 
z30M&U0RCnO{)eB~^CP2}r9oh(g!M4>$#B64pjfFZI9sKD}q@Kly&u<<59Q~*1^ z>>{6POs*w(KSY5M^~l~RMu@>l8;wz6TwSx$1s2Go6`LIun$k@am_TMM@>FW^G%t-; z$+d*xcq|eu7!>BSR|>x)56P2UA1jy|!DFIcq_a8sz6hU7@hIEr9ujzJBvhPcP0M{Y zYHoLQV24FiPH-^+4;s*p66sWV9w~H57=r!gvVp%cNTurw5>8A^K}qd{QBAd&6d6yY z(4U&?j|I5ab5BBT=rF{t)$b2Yld3Y=GqGkfl*xV2rp8{dgarM8Pv!$dYE@Lw2sO!T z8=neZFu3L!tlObb%!o4aQHO|+Mz_JvtaIZnRM~~R=Ef+i-Gl-9bfm@5R!&<>9vUZh zg|EMX$ZQ};C?z+I|LI+XCDuwYViBE-B0=14MKdp`N^5@GHakN=eA812lS;1r<>r+= zEnt8kr%}w{xZU!P5T7??-bTlewaUDw@ zIZu`kQ|Iyt4Hf?{aRVxIYfs92bRIW!@n^l^K5g}<&Q)K%9bPV}=}m+jEz@W>JjQyD z)GYErF4kg?)bZZ?aq(9%+B^Lo?r_!!S5)3AZVBk!;?JT+2+|1IxZy#&xps^nulVGr zw@!8Es&%y@(amoTl|6i`Ur}T@i~2$Mwh}wAu2dA&Q2K@iE@sal?LU}5hiVSxFV(I7 zXmc^!a%F@Mu*p+}wmKb0=!g1LmqIs7QOQNCZD z^tEydNqzz?X)o81KDB%|e+eJpLta-jT=`?Pxgu>u@5tT@R;^wukqW{n!Mk2Jb)7AX zI{L(&kOyB$2-Ze{EZzD*n4)smbYR?iqtECYfENK1uK=ch4e%Af`+)H&_Hw4y8rnQE zq@5&iN)nhnFoI;+vozg`Tf_n@7RwrkWivbnc@ReBc`;NIwa$ffk#jb*cVk&^t~kdP6X;86ziFC|rIDULvlw$OrlQkg-E_*E`p%DRBC8$~ zSj${=wbHz8dCP9_I1;~O@htF)700ggFv!4^Di|vni)_w|DHr;Z= zjnk;WZp+|A0d>rEWQ*y6IAkaY&a}X*vI{WAKH0Q-HrEZ!@c#S>gc;`RE6>Z0Lp-yE9lUYxH1pC6 zq}&n~BBA+{i8gb)GD|yuDH52b2JKrul+18}=#yAq(#ca-vPCDEW*w`Dpbb^cf$N!nX0N0iV2w{dHmN zJ7jE0P%?E)bF+=Mv>I4dxPCQC-}5TMRISpNNMPyBnSblAiCYKP)7ZdG$>A2tj|$}b z?Fbn65Jo_8>b0#)pfAPvTVnxZj6W*(mF9jxa!mpmtUFwwLUb4XjtL2(SgYK8-3XOf-x1p^*^`wcYSBBxu4A`1-SVlJCEYZS8B;#Y7TyOtQ7)6E%*z^3gZAP zOTj0%7}VUs)@Phkrb3%=ZV2$39_H4=*Zni$+y4C)roZMeohNKOjMW@I7M|Su=qUs# z%UwX)_e&NJs50%yMYQ~r5V!I!{B}$vagFn%HNxLo7cP>`!pY1AC3hk7)im|_A^xfL z*hNCUXez@YtvZ(ZA}|(L8#SL@PbP^I3&{o)r9SzJ+x!o50cj;s#Rx3LfT7rvu0xOu zcL8?;UpEb32zzq40=2jp@cbY`qOBQ8vz9P$O-?Fb+sh(&BmuYO(-&AJ0nHtPP63_rPY_= z@QtsN>TFs4BqPP!X}eIQ)11fl80p8g0S?*Oi5H1FaO75ng55M$tj?l_|7J1XmhtFM zC(a~aKa>8K$G{$c0N_uky*-!8c=_8aRoUY$9qBy7DM~|6>6)Xy_=dCpD!o~{g6hz7 zjvp}i5Ao@ars68H0ZA35v~#Gb%`AxpMN_Hs6EJ_f73K|_+g3OF(A&n>TI4*}pvM#u zbtsB7xt&RZw~^Gfg_O!cmauIPO$%fl9G^=+mCLhv`GyhHWx1(j%{6?1?P$4C(|D$2 z+bCHsPhWf=IvGbc`0G;F8IMxW#Y8Sbh<_UMblm+)GcfSArv0u*LZ0ccT*MiNzBab4 
z1sr^8XBV|v)IqK2jVHI9__x_{X-n(wdZ@KsGWXbTZmBxEwp8|;cl(&- z4jYllw3d}QD%oWYTlvv1um5vSLM=m1Kap8Zq<>`iu`Xw+%A~^<6ZL3{BH7I(!P`mN z+Tsel6(tm+EjVd4@pR+B$ttDQTdf~QU6s|9tUJdpRyw+F=NSLn)KLPZmgU()O*)IvbZZNqi@XZ4(u>XDX> zotjKi-6?E6swR_s^x-CEh<5Z|-P@&%JYZzYUq*JYrxU8|ON;$C$Ol z`45`rEIIcSlaoBHd$y7_3&o&7E4eFQ_SgOHl&q`tVdKzws+HkmO#fDW`Ih3pK8bJs zu&>_yvpNdu2l%&uJ^naKjn@nI1~o&rPsF%fi@pDJ`u^ohh&os9dl!wK_D%TyFy(vi zi>v(R74$-L?H7!*o*yYcxFrRiJI``<7hG?YlNWyC>V6j9yLGsV(>GbwV3=cSMspvz z8@qAawy{OJ%G7g|j!LERSK?@3yf@OD;H0lK2{wn~qCG5sYll%=m-cP$n`z$u!Z)sxvi**(?+EBLWKJaQOfhxHSo`On*%)P=l;2XnWKirA zgN7LV_Yz*7P59*O{nS1FV;SrIbD*`h;!Pgae~$fFaU5?U9)E~@snhM=m#y4jqcNIy zcCT`k4_;b-t?~4w{VVGt@G8+O3_5wo?elyam+uIIKH63H7d$ZxAET<|GgRYn(UX4l zK+)5>kr}F7JxkS(V4gy@a2~J6IZh|`2-w7dah_A^D8F=J=iWJ+pY}~OZ@)7}mk=*F zQ~Fvhh+nqZ$= zH(N;RnxHt6#Q4vwT08d?FpeBfE)8$MBvx?CrRSOP*XRu)9_;dcvH>fd0)NJO85K2MbB({L$$jMnL{K!5f1#rH&sK`9pNyQ$8Jwg8F&y z-i1KkZAIO}*6t5%N1in=t9OVxniDZIzO9*#vrSeD2k|$&M_gXr1j+n zqjI85YF`z}XXKKK!5--v(vi9UF|Yy14gLCZi69NyQuNd^cam=~Ez{~f_y>ntl!sdJ z59)`OXhx&A$~;^2JTL?0Crpv77Ce_`%%k&ijOYIpN@etR@D*%x8QB7#gq07F9ay@W z;}6!ftbVL-?6@szS-?HCW!ZrC^z?sV9X$_>R)4U=A511%>_nExMJ6}IkI=K|aie61 ziwwgwd;{#s-2ehE|1u2PS@yhPcBM*dT%?(Y${!qRRUT}?KQMciXeP}HzhSQIX zM@&*JmNK@orIOwQVg2b&j*-SQ=FvD-Jk^Srf>lDCcR2my)Q??RgOy)+rY|uSO&7VA zDa@v;I^%>R1WZQ{+awx7eF58)xlo%`nKdLZw3Kjahq-97Lz-gItdzpf1g5D)+Y_P^NR1Oax!{l@Xr zoLXK_mRyxLZ`s~>BFZ;va%t3t5}qNaKN_mX;Wltb5|Oq->?AUK0$XSeJCK6gxTnL9 zcG5I3@&O}oH~Fs-nO*UuA`td1vI|?-h0N|O#CAd=TL1{y00IjB^myl=`@%y$x&KqH z4m^+R18Y|Qb1?yGEb%AZ!;#fG-3HDd_ZLwRU@y9F8$Zjja(c2<3e2?e>%db{vB8u} z6WB^fhWP&b@$OXM#b>QOr&uP*5K^hmn=?Myc%~@U8#CzTs!Rd{p-PI>RaXaaHyx|V(frgYj;#@2$#LP?wzMgJ zG%wAh>@I>&*r>QcSftJj_y;!tZ!iA}(j#p1CA}54KH~ZzXTuq!2iBpJCs}3-FPbz( zv)b^SOfZB>?`UO>Sn)L5-KPq}XiozUhJJY&@U{eU6WUqw+%fB`G8&g@)**r&qzMnK zo|Wx-AKy*_NOZtv#oH%!aGCuXxhAHd4)tH3jX<*}O~WE}Gcy6)ZSfx9ioQRVs7d&!K&UTb zR;qKe=oyJsVr>#TY%t+Ub#=Rhj%&6H7jre!+XllK`39KHl{Ti7aE$3#YIJG}Zi+dz zy*Gd@isr+kGadrR#k9j{iW5ujwnEUTe-k%wH(@{IfUG%pbe=P;CT>4cr9m%QHv;#W 
z2=x#Rn$|KZFI$GbIv+7G_aH`y>pXHB2Z1a`p6QGk#6|l)4%c;bMEk?nMq3X?;zC{w zhOKu&R&m64T_MR`cMnwmcm=p{uqE+}mI%yv^!1c5YR%t`z8`&~g<6we|7&6V?`X%8 zTmy~0?}&Ku{Bc~$Pk)J0va8Jntf-1ZFMx%Hqx%RFL>UK4J`lVTKF7#=I9L($Jr`Dr zT3m!I=IP)$k3MvclIt!A|e;m%^^?!}25pXr_*dB}zLAq>j`ohrTDTTt;Ma zO=DOIdjusi@x4fH8JWf?&(Er*Mdjxt{>=|vk?j=sX&VoYeU~Y=I@B7QL!2qH+BIsM zeShKYpXM~VOr#iw>>Iw(*{`R+b%za>R+9ur20D@kN(H3q!-E}j-%h`FCkzG_-I_}` zB^`{>qYPN$$*9rMAfSY1X#SMYveWP#+_YxOabgS7}%x4Nexz&!KU#nCCpJ16nI8|RqAsVipt+*9a$+N& zPUz@k=~)8>)dn#AB$=IuW8(Sc_{vIVkvk#@1CJUwRO=}wH_MxfQ`w}{tgKh?3_OiR z$)+Oa;(S>+skxjXYM>UivMOmE{WQ6Hgu|MsDXJ^}puu^r6q2Qb%2gZZRdd^YhkZM2 zj^2Y73ey@@dNzfl^&4S) z+K|VCAAh>m%Y%Q$OfC8wD)Dn!YSAyFziJK9yD=pe!Lgzg`iqRLX9!AW#`{-DBq^!l z!0!@D#fG8^VIvvWQdLRo?&ru=103qK0aK4TZLyy%c7{UAvHrT0RK~rOJu%-cLQ;ct z!W^C<>6m^2h>?vU5@0>Cv@)-|an54jK0|7Mc`z8T+m{FB;NT#L+ArL<3Mdad_nimM znzQa~Set2k)%`_7Zv%+b(|HsEJF;Vv5L@f!*bQ47`2ooI`1{!R-yJ#NlziW{c9Mb5 z^;Ngq^l4^WCEs%Y*PUM`i?7{PSQg9uc8yZ1ivZ%}YWq;B4(EZZPT3MxnJo+c(%lI* zs@>wOgtm|8RQF+`2Grfs*T9zfx57gD#OAkL++2nwm0t%QNKW+mn}32DM23-k0);c+ zJ_ImY17d=hU|EXedPrUO#bLYqXgraO1t+jH`eol9wMM;&)5w$XwdHxa3I7O|hcN9pM)Qn;b1xL<|$vl*w8QwA{1@vyGpKS+2@{O*j|LHtKlVyUhK&ch`?~ z&Axr)X=a|7A43p%t>xi|=KTBzx9~JCL+*pBn3TqQ0oFNG6ca~<&&w2#$C%qEYxegy zcza83Zi36g*ffp-G11#F=f`|B^`-kdr|Al~F1{Qll;D_1$Q-il1k02=s)X`Y6 zf`k~AFT4fXVEL@3ueH?Zgs|2;Vt(T|4w^b=ElhofBHG@RswN!M=Z+XmWeiNW8J{D@{veqltf7RO z%~#`?BvjO5lTEM5m~+~PNK*3(m%_Ox)CX5Do0Hp<=jZHpXH(akqdB@tze&`Wo^0^1{^ z&!4nXTh$_YL5duMYJ}TsUHClcpBO|Ih#XIV-U1ubMI{i8SNl!(F{#vf6ePMq+ z&U0T;zOctpEhwOE><@*1=wZn@_Yt2iE?V2~u#yi2-peq$dmk~EOEGmSY`PUv#U?$X za8{5x!c-9pyCuP7FSO>BtkGkHT`wFV?C))A19{K&CTmmJ#E=Gd1={0cQi#-1YOSS8 z*8_1mQ(H+31+6-C!AK+k_I@WkWu`K2hn9FH0wa7lEav|);s4+- zD*I9Y@Y1Cnrw*VU^3DDIRen3=AF?W^_C?wJ%XM;uLk=X;UjP`eoX|g54`m#Ut)K7(R zPU#B2!VEt94Ok$szJ78pt~UR#Hk@%-4`156!wAp}gUUBoROw|ug`j;jQYi@$Tp8M! 
zjj-U~Q>_{8i>jD0t(+zb4qN}cs{X}^kccy&(tZ=s6hi*SOyPNf%Eb7tyn_+uS*T9tr59cCqP+c zx=}}4?@*9FtY+A@M<-m>tq7A^pVoGmpJ=at_ztR@`1LV#Lz!qX5=JoPE!d6vh0yvb zNtLtJZW9$nRR?G)o~k@mUb|^)oyqvKsrXICHYCqn83p^PTB>!1vM!mRDR-nw!uQwb zK)urF@DoWL?BwuIS;I3?E|Hb@$O#BvDtB2kKAct7(+K+xL;sS#o5|~br>X;@jw|mk zoj`C0{CZXBK?!0g1un3z6~9!iJ-XL=i6o^%J4v($)?Ytosb1h=(0`$ZfxwMO_Ilqw ztWln2K>1;MGcW?v{6-7v1b8uG=q!@N6vYIPF32g#hcFNNvbE!$}-$wCt*!anS)QrY276=}3aoOI4RC11S^ero_Uz z8#KbnQw~fF?2I^Q*Hnv&YTQh2Yh&uBY7;amq)-M;HCvUosKlsTW>eoFM!;0nqo9;1 zMTHJ`MJvE7Kr~k*bX?r++L!ih7R^ReG0{*v*TiB3m4A*AgbEmz87Ag&u`(Bo&R!vD zUX*dnb@vDvORJdF8zwial}qY7CBhGyP0F1nftEIy;8(_8a^ zleqFg8RL4JN~ah`%6UIx7+%a!&~-!nxEaK$Are43)j8tJa!wd&mGsU{Zzwt z=C1X89LtG>o6i_{{kB`TKxkTY*SLm;8C?DJG3ur(Y9|x4X}2cz(&Sxro}-Gp%HUju z59O}LmAMS5J3?YZv4O`C=@ht7BjMFj4S|FVzPy5`GPtT&m8hxY*>DcmTsBWwTnwhn zV6a^}si6@UsMLoP%5DWR$m1N+m_ZzN3GqKx<)w4|rmVC8j>Cp1DRiN@kc*e3T1$xh!O75s?lHmX?|UPsPN+lp;0)p1fx;C4=oIA5ZnJyMS5!h9@ofjmY{=A+;uB>tg#8m{!7Q0rE)Wn9q=G_ eggi13Z*)*fHjLgKLi1zKOE34lD*<2y)C&N5(t+au literal 0 HcmV?d00001 diff --git a/deps/Roboto-0.4.9/KFOMCnqEu92Fr1ME7kSn66aGLdTylUAMQXC89YmC2DPNWubEbVnoiArmlw.woff2 b/deps/Roboto-0.4.9/KFOMCnqEu92Fr1ME7kSn66aGLdTylUAMQXC89YmC2DPNWubEbVnoiArmlw.woff2 new file mode 100644 index 0000000000000000000000000000000000000000..15e1583a0f7320719a1e47697fb1986aea0a829f GIT binary patch literal 19660 zcmV({K+?Z=Pew8T0RR9108GpP5&!@I0Fr0`08C*30RR9100000000000000000000 z0000Qfe;&|ZXAdP24Fu^R6$f;0D~R~fqn^}1`!MjfweG!pDYW65C8!-0we>790VW* zgm?!a424b`7la3xXK{wx0cg9wcHxnT+zw9ahJ10t9q)D^rG5Hx_WyrMaxz2#dlJ*C z*X}#4A`l{j0=@2zr(7v*k?CNQN9D8_Jyd1jHU(zn`o+fDye2Y4&kPP^j$?FK+7q7U zkcL|glRYisrI9*ai$>!lf5>bx5bB^%DW1mBcu>ke?e^2uO^?y7(&A+cldsvhvFQ$- zwqY_Eiz@YBZ+&%Ku;mA*It5vOFF0zgz)KJQH7Ps0q*pWGQ>Q2sMenM zPxtif2ACX>Vq`_=QcD?jX%s&Pv57P{fMQDELDeOi?Q$Ud()^oftEi zh~P4K#>5YG_TOkU8d+#tF@~HhP?kS1=Hp#Opo8{pQ(aSVdrfs!`zf-?=G!!FDiB8E zEDnK27_7snTVlqY2NUP87raw9-RZ$ z2~O8k(f47aKu@~oCRHv_lNM>{6BywBZ_WxACOG9OVK&|S>7S)pN{)>JQYWjIq%8~6 
zSCv$_UIF~yoBK91sn}vxB>O^wh-F2kEn-r{f*!dX^AWpWdjGXqRp3?Nvm$mAwJ%6A zoeaUk%Q6$~-*tg&a$Qq7jl_{P@k?9zGbWsKv{f>+szdEMCY(jnLY-T425GSz6NK2# zk|*2$e_i;gKY%pU$46E@efj@yYTEw6EEW(~wh1YBOUF=jik)Lq*hM}6{x|dI|G^*( z$R!2S)v~-Ofqk> zyQQf4fI$>+GnHEc`TXx^Y2W=XuYH+PvOsFGn3TZJj;yNiRTZY^|DWtM_rGLLviv7Y z^cFMaWe}>Qdpy}>mM|sGPJvifqwtkI^&VKFs&Ed4VRhelmzSv*|3QirC#7^HlpVGc zRi6@XKez64zvc-cgb40=%b3=M5OD0Yr?y3wDZj@srsbm$liQ7-+l~pSpaf75Ie)fA z+*mMW!!gSEW&uW3KapF(7?c=+?Bxv-yJZk@E08qlcP3kQCRc7IUqPl=ai&~(rcza= zMos&5{a|mreZxWP$ZXh{{^9o((PI$ARbaW!gQ)&i5MUWjDxiDsYvlxt&T8jH0?qTB zK@mXtVr@b+kbnXgtnJIIB3FeOwBe$2c+o(8Ft?cU=Su(*0tyB&LL|sL_URX*Mwq`PhFZ*N6FH5W9y^}6E!%yX zFBGqAqt&%%mE5lk&*5R?qqRuxd4Vef+)B@N}bQF-n$c^{$)U|ifd>o zqXUS&lR`SMymRp8iQYEWiua}t3sE`r&Kc0#?)gg*&htUlj|z1B{?M7(mQd7+8(_CH z|GbNXPaic!A;`HbN*Q-vkxJgIQw#qEe)xxj!~4&k<~LpVs%h(~QVCg~A4+$3fim~d z!iqR-o#jaHrfQ}F+yj|lqhqMU^Pmz=R^t@i~A_YlQ((1XC;FsDBmH9BWe(07-R%$6y6w|2?8dIDKLqC0_+!(-#bF|J>kGz z5z0?BO)h@Ilu48dvds_FHmYh&0cs#ZeWcK!38KA;X>yDjW@v}vVN8Sx2r=F0qr1Sq zj;4Sa%xP(G#?Mp=96C~a3Sql2Br)#+=CFjb^&{l#iEsQFeFox_ zM@(!E;&BEZu}OpywdW}FTU^Bbfvfx3L>tmdk_#>+Nrf;3&E(+_fy!>E{Q{5oVf5nT zw8F3#e(_>qJ8=cx+Gz$YmlzQvUYrGSMde}+4D$~VLyw7JwJF8Lp6~PsGd{}n5;KE2 zEa3>}>fE#3E^tvLk^PP0!gcLo7C-SQLIh>Dq;uB|epewneNibh zAF~{<%wUM=7ZxSh6PB>1*9U)fYEPKjwl(}W*j7pZh5TgP2`{G4Q#`!VY)^6=kVYPH6wzw1t< z^e^u3IQzIZco%Sj;$TiV?((TTnX@;`j9w>@R$hA}-w4m>&n^JC|M%C7tA*d0Pa1*# z>@iUCK~OHXbMG^!(LFP(6QOyAoUvZf4|4OwkPze^r~Iw{^1*&7Hj-QlQ15(M z<0_g{*^H__Vc1pQPTG_&WB{|WiR|~}>^!dp`Mp!ryyE(lxgA&ZSY?B%x{qXAJ(MG! 
z8sH(bM1-IiLFhs_;zMtUV81A6vo@9nDZ=^1XmBz&NAPX*Cz$&idsI~ zvy}J0jf!78ej{n381xXPUz0J6WzfP!C7Gqm9pO%}s za6utiB%_3-ygg-2DQ`qY&s6qCRYTT!XM+hrr4^`#@ggL%D1RoT09B|?#29E9EWeTz-j=pFe`)d4R2?%kgq^W~6?pNqmq;;z^RP zDQMB@10;I%Q>gw(rOBUpMae3j8D*KMm=%@tN7Z};Sw$n5V0aKbm?9K#I!!Rl(;?M> z*(m2{I(rqcJ3eY7su13Ve$sX6iO@F7%(7(JSa$+@oM_Afd-rm1g6Vx7c3sIngm0BO zl*G*($+r%V7FlM_1Ll7U;OAcnK?CT3C$0WcGy04EUugkHPz zNJ)e2W&G5sd4DEFU4+KopnBq%CjNGki5DP1dgHi=9t#sJMDWv}dgi$oURD~~&*ZjS z2HbJiJ@-BE&?6Qzc!-kX1qL7`1+P_M(kz6qx;X^i7v~|07@?P)j0^R0E)^OPNrIM# zzI^m6uPb3|k=aj@2qsZt#2aL!347#^N$-5{$6x;-l)g4v6zunjstmk~fCWa0WkLTW z5gPbAF%=orjkRj0-^-5MRbN%gxPFnfn_OiekZklP)rW@fkPdZ9WJwUt7nhaY^sp&{ zpnikJ-oG%07nw|rrAAXfQzQ9b6jD7kks3}7rN&b~l7k*??>Bd@T}#aAof5+c*CsOKx_GU90v4fx)u-{ zMpN2Xg4Mm{>9n^49|9X40JzWgJIvM;>;yLdcmRKabH}y>EYOPpwOi){HiCdOxdg0s zxjx&0uRA*-6oP>3&I?U|0Eqt~Du9?uRmOyajD=$>8%N**iVg92G~bE`P})*pAo3i_ z26~C3@!)w6v=fF{`COsM(&I2gPdMYfZv{bmJKB{V_Of@cu(Vj)S+Qh%k~@)O%fs?4 zU&<$W33(a$t@389mKo&J|De+x>$tV;?@WDCbMWS-wbYCFIKkJZ>38kr=hvs9ge6e` zi;scDJBz;WI(BFL+A_CMU#p~fE->5ghy{Lo*}38VItZ}8r?c5kxPa4{^l^QkE92*y z_{bzgOteh0COKk?y*{bZ{I9@g8w#x}QtY{+Y*!V&s8lrouzKecj&*VTT~-U2IGsL! 
ziRDE0Q0d>*uyc<0X*MbeTjZMK-sal8c1~-IYXY?$4U#;k^r+D2ilao;$BvnqrI)w-O}uG zI_x&9#cXP7Y^XOH^tw82twyaXd?9;W34xAq2b6no)(u{}bbWCP2TM(+QZ2HMB|i`oM0} zj|ucS&&VBHmX)k61(4Lw!tM5GGOpx3P!r`vkn`!{5oZK#jeyUueP7rBk~nXgW;h_; zez+m^$ORER$@7Pz^%m#ZBr_A?LS`b?x}Lnx)j&!^y!%j>PCF4aCV}q;k}ls3HIf&B zqcF2c*=WQ z9OXqp5vNr*gcAuOrP_?ZS#+LVAOw5LJPl1Cm0jI9;_5^Cus%k0lqO8`r>^)P0YZcy z3D#jqlyWlRFP1ggFq5g|CwD0QtSr+H4Fn$Sb|UV?L|n&4m_K)IUdJkJaTz274C}!g{>+#nv{J!Tnk@xm{ZjCE%Ai?Kn}k<}eOk6muh-~voU#Kd zI8`t_l)1X3I>6*$!QD_9!jX#_9LtMs(rzx~w^gdT+NuLO1`({Nk^dYMIv ziH*eS7GiO$@RC!-sf5S;8gPY(|J^m$)OxN3&xxQP%EqjH1TUaD!V$Mja-h*#-zs;s zsZP-A?sVFtancXm!yO^smq|@Cd8_+WT*SjI?pVWC^FLcO-|>yz8VgoSRLG`lFGwj4 zEVpo^Wwd<^M-pI&Z$Z;EZ=N-x$!VUS!c>yJp&2crp@la(u@G-`r_(N5R>bz(A{%O2 z;GtZsFPk%mJn%e)rPwn&bQ7qxpeqP?*p+xOaYZ}1US<)yk+!g=iC9{C1qqz(5hvJ6T*bxa?i|q=+&n<%{?de-|x{ z9_uN0tCU4FMcUIlelUiIlFW_ua8x&_wpp6HvX_s+RYwj6bUhA>1xa3-Q1(;siwY4o zP_Sk-+zPFuV!_DjhhyQdnMNp#I#i}%zN{Gd!*Z&;V(TF5d67aY==OnzfFS{-#t36^ z!c*OoZ3I`rQ-fEWI<7m8EAaWgTn@nFXR<07P+B%MTOZ^GV#K7?GeQ3Si5Ds|!c1^* z<(4in{D_Y4!c0}1C@i)^8`)AtUAoSNXdFy{9)Ng9B$&PC zYEA{})&v+7^p|^nz+Cr+7t+cL32A+su|lIdfwSgSjd9HRR`8HcA55hap>U#%^WZ_XDoYT@Ceb3PlXBB51%TI zT;0>^hrvVY6&*QlGP&v(l=dJnun=AApc6DEvYsBmEpCSg)8G`!b0XG*;dkeGQQ-OP zqL}y@a3@^~u9~ZUq0xa(C|qFj>w>hG96Q6#XO|fM0qY$SEo$rHjeQGu=gf*_doT?c zv^f%noEGECeidkTXc?f9w=P?`XodUQ0V}|QXfMA<1-D2=(4&0o;GYB-hzmy+Mv2f5 z>lKEmsXCsRf|uE_T$fuP3gB>h1u+pi3?Wy2*DP%qmi_6z33ai&xqD87aY3ns`rdNP z`9jn|ab6K7QMB|t3b-J+`$HHX1`~9|0xv%65C}{3A*e%l*}Ye%I)w9_-9uFk83C4a zpvJ6K(#(m)8768g)_Mw&ya=Dcpb(4uI37TMw>rqsF_;M#etgY}@@P<1GVgD-=8uNY zJTGm4Q#DurSQZUw0ubO4XDV|rp_vGjC*6%z7|1R_Nhn7kxPS2fCa3wkv90$(LuMvH1<+Yh+53!sawcam7o4UX;%Q<CZ8ar@PC`Dft9Xotjw2U31%7{YkRw#hb$AjX6D! 
zaGFXH%!iMVXC&mcFmR!oETH2%eQ9x96pSXm@V!hM;fl)R#qqB`KJO};gy2XCbYAqp z`CKie*eZm9?8PlJ<-FPoTdTXoR5QGuTD#cetc45kR0F>AVb+)?V7%iPnXV}uwf}OD zkUB#0>(mGRZCjZp+)kj#aGu{zRQPC2ENz~)#MxF5R{j(wzoFGzUH6L<0= zdk^~eCRq?bUL?~Fwq)0YWHi_lM;Y#}fCMBdAhsEneM8I?4TFW+|B3qg{{(Pk8LH-) zUyDyB7)h^jGn7P*Ji#cvSxp69Jf-Uj#yz7Hky2a#>S=p=-k&W9=*uu@h{o--Qnu>` z;`UKJOVlaFy68>9EoNZ1EGN~qXB=H7YV11d$tRC@T=l)`c@*a;JxS@&Zd+^=os$}B zW}Dml?#iqAd<;ml9(XDV=1e#khRYw73L?`qG^cB#zH3)&QtGPPY|cpy^H?TzFr5*l z+v~BTL#Eq_rtLGd-EtFkooc6!gOq4(E;>ppq!bOXvYF3`0Qav{6gAN-OjKLEMJt_B z(vn68*Sea@@kV32W-1_q>`1b#+`TCNnhDrFt-$Z!ymmtGpw&yDL}5%HD#Cs^BZVkV z%m_?j_e#zFH!`CL`!IgWeWwtp|M*I@`V63_^~vqstq#?S(H-`dC?|DuP~uvoJBHMD zudcdHd?R8v!F783><1hzvjV*8-BW`*08WaJjr$*A_Yf4dz~!hadf2DGkF7ouT>b^% z0ZZ@i9yR)Y`)vh!=$C;qoA}529kEYFqkHY}{q-BNZz-4Nz5j<( z{nZ~-6z1J0Vw#XY7F+Ee^%e=JOvElQe22Y|Vvu$ZQKPrXkuo^fLlrPpot#J3pY~&sC*WqFZFg#Vnge)XEA!n(Q zC_1Qkb)czs+_H7jQ$hIF#y$n?B1A=s27(|Fd{t7Qhk8Fk3%@GpbDtisbWfg<1=ZoV zOX{l8p#h;29CG{hXvxpIv_T$V6{Y@(TNwcKnU9ilFAH)pk*N5M?n~I^zu+|27lcb8 zD>G-q$rWjLxYefF7IkK$H9tbpKyFsHj=~gn06B(~%-Q=9n06^-kqMH$>f~O-?Yyrk zQ$wPbL?GCtTV4e8)aU8>2FUd>-QmEm7Iun^+@irT+&+ybP6epD_;*!9wMaa-T*fH| z3r5Oh(c()OL*__*|3iR^EMCX5)jv43G$pM}<1BTkgIcSc)XQ08;in1X`#TrQW4j%^ zDMHbH{^ukpS+OKeFVWfdpsU#&4=Mza&Y2lwn{L*&6F&rrQIbn^KX6y&?YRC>&q{W4 zB(1Ex{m^*n+8z!4SUIum0)c_W8*1t`k4ESb3`P$-QwS+RX%&8kGSHT695}Mh1-#FSdx?FIsNfnl{$`QI+W;6BmF$*^b52KT-aM^>{RbWu&;P3Gr= zngx?&+CG%P9X5Y>nOTTY+(FDy^LzLh)rH+2Ncnyx<^#+>7s{aCt&T@i` zy+$WZg>_Lcd#?`?qo~4~HKP5%x?#kdck$S^3v&(eA^An3;ihZYrF+(;8(+51itfI! 
zaj2~wVAxaZ|Iwj9kPEqBW~&AYoaWtHVOa_7InZ@4k;HCi%tL@#2v*44&p21ahb+M& zVP_Uu{Fh_@GuQL~?2|*DWJoKcie%!X$!sW^>Di#C^D1dzx95WQ)I45kVHye?tCsbj zu|P*h%?ljsaqqqwI;|X|k2EagMXFMf5iWy|##h)81nDGTCab51EBwX2o0las3iQvwXQZU4Pu3 z)JC_?i?VIxiR|o)RDU`_$?OgS*Uqqk9Aw$7mQJ?R9p!)YK-rA?U^CFegh7|~(~Y%v z{l}ABr~0MXry86`H9_UM>?+j)qtY-Zk(VYT0X{ z&zy5#u4Wv)1MAze&1=y9iNW3I-Qik~ft;?>4Ft{ihqB?FPl8G-!l_geSo>=;rYW_*4{ka{2@(KF~8ylSdxNU7JM=Ea6zgrfVyPh9}9jMdK_ zH&^^{?6uE#Q@1Vl_uZmS3cEi`fw2gOAK=e8m>piNGPI=wda5qnR$ZKNTG}wb27ba~6pHQjp7p^nJTp@jsLX~RZVa7;+ zTeMqMMwFJ)5E;TEE^N1m4p|ht>PneMLp)KNtOzq8<`XjYL7pQ5UBh@n=%D8p?s&y5 zXm-|DzuvB zCE^h=z1Ui=v$NaJwgvP*;bAu;ZyLh&zqB2>kZq%9 zhc`dFMXTLXYP;=At#)=xqm$UqwjMJ429G%9E=0qx-G?f(?Jw4?+S)DMw;u}?))dVY zTDQpjTUB}1LO()&KhmRI0bfpxrBHUBpFMATGAo=FRr2MGhvKP-(!l&rzC6CR)W2ot z)s|oR@e_V991(-T5n)9aFqjM5ibX<@zZV|N-c(2+E_1$F-toS5<|MgEGL~mo@@Mis zojBHK=bu~mpR8zhdpBeyCPmCYj$^rnF&r;sf$yQ?=M9KqNX0GT|vYH;~Q<^ zG~b{{qKP^Zc(mK0nVrd7B(kYaX-JOK0Di49|p8a=|?bIEur|YiPa;nK|y?95}*>y){ zy|~iD!rLrF7Ry{-ZsS&#+bsBd7T>;fGvM>ULyu}qvpF{n-|7nHc=b9IhNwqS6`39u z3EWQ7)?p|FGh8A;m~ho%;vrYx0#9iUi1nlJvo#IaD~^$qTwB)#o%ICrg^l7!`IX?+ z+@ZhMUI8XQJa{(YJ~G4OLZ-jIDCynVjLhhW#~*!mc=zFwjDy#ob0bS8+aAW=LTNi% zP_>^wme?EHjM94I(&knHH@&0MKp=j@5y5JyG`LQUqT}lGDyjg+;Rw6=a9HZ{UQ(z| zx5$|X9DV<(gLv-@sCJ=By9)EWN>N?O=AM77SHM;@bAF|nW`q+Ji9MV7h`cF8-bQAO z!k&>>t|Abvnfup^H%)K)SM8mJ!nd^1J+C<@C)%D+|GRM#H>3J*YqV0TU0cMcZk~sU&6>r2hBo zOLWmR5+h#7^wfy;&=Qt~Kyf7<+ETZr0ew+4+MsPg5X>GzhotU96^w}1pgA_=QceL4 zmIi@6*nK7B1;vHZP_T8~*}FdtjV1*V!MqD69TD@kVh@_yz!H?}JBw;l02>(t0E|Ck zzfjPju9!5+75k1T%saSmFGnsil-(7SW`Pjf;;Esh3@B?nULV0bCYv|fBG-J)eIhc=h^c`yJT6?#oX_ zPPEj0!aVBxMtS5?HDG_-QO?@T@^j_biSL|yo!VjPdM-DVAoq=$^;52lv1- z;%BJpy}KE32wk|COkYpvj18nZXw!$$KTa~i*WHIkh7X()xV?E_;)73u2L_%gTCPpp zQBqpk&^%5Xk=6|y*j-dt9v9bwKVEBp1zz`4NaBIhpZiXeLb@I3{44+)xVj}Gk%rK< z)YnI)?;pJe*%+$Jy%IVWv7CarEud|F*OM^Gf|?46_KaZg3dgLlCRSmFssR!^%b z{a0V~c*eJyxn7erbxo6cSLJKKK>=->;s3BSsdL{Q4-_mNa5f}M9z-m18-|%QPo;#^ zwU=tk7{Yf(v3BY)Ab4m;ra`nFna1N(3}B6^Ch$3)$3W_nN@)|lO;^h7OFwqDK6c&~ 
[binary woff2 data omitted]
diff --git a/deps/Roboto-0.4.9/KFOmCnqEu92Fr1Mu4mxK.woff2 b/deps/Roboto-0.4.9/KFOmCnqEu92Fr1Mu4mxK.woff2
deleted file mode 100644
index 2d7b215136a471c9a1644966be6e5af672806eea..0000000000000000000000000000000000000000
GIT binary patch
literal 0
HcmV?d00001

literal 18536
[binary woff2 data omitted]

diff --git a/deps/Roboto-0.4.9/KFOmCnqEu92Fr1Mu5mxKOzY.woff2 b/deps/Roboto-0.4.9/KFOmCnqEu92Fr1Mu5mxKOzY.woff2
deleted file mode 100644
index a4962e9b425730ba136b544273ff2bf03c0503ea..0000000000000000000000000000000000000000
GIT binary patch
literal 0
HcmV?d00001

literal 9852
[binary woff2 data omitted]
zbD*OEPp#(pD-``xh=-haJce68)g}TuS8?R&E;Zl8FWy;cMz5{x^Svjhb;?aK$!BvL zy@ms4*PttEyT-K=u<49g5x!EHcj&fNEp@u^5|Y}Uar}fvHr_%(qb9uFW5V!HqVVr8 zcn|pA>K^%z{343noNm2gr7A(;uM{&FX`8bAkw9@mnJ;cjHlBfASB6C#dBC|w zF?>s8YlT+bR_a8GW&Ihxg$j6{SGLZFCnk^eDnv3O7;0=ppy|+N z8_Ae`1AlHhHT$uj;uA^Ay@-pdOad?U7IsTgMqZ3hL=_H&4X5}9u+vUhXDIYoX-h^$ z3ZyHCr-po*yQ+v| zCPLj-q0$c$buCl_DgO>3EsA}hX|KHvISGQzQkqh3q_ZDaXU7kOoP-5iq*K##H1se7 zp|jFRyZ`$Y$o;2Of0si-t89qedAFweIecpW=@17AjxnoAVMNkoCROH@D&l|4j2RY5C_;yww|bn`_WAZynu>`FZP2w!k?Zo z{UcIxL;drJac7$(rIdADGV4jVAWv;aZ-YZ&=rBKfy;njXg7D#(Fu$-Q$8#~mkr-|U zK)k4236c3DgRfLLd*MA*lh~%~WeA(nyEff^(P92h{u*8h8RShJrytT*7v$QP+=NPs ziyl4$PoGVWcHPb@p9*))jhZiqLiZ%#v)*2hsoAljYQ<^t3B<(W_C{jl1{2llZlOXe zYQO*YlUsrHe%XFtVI0578G)b-SX@c>F|0>jgW}I^i#|X7uqyAL~I8tpHzFPJq|6?RXPf=AX{kS{g1GsR0UuS8!Wri2Wu zW~HXJ{+_13pQcY$#Ma~^7e==Fq8IOzkZ6^oC1U;L6RQv%@9gRy3h-GKk=mDQL@Bl6 zWVxjBl#;$X)%6>Un$I;%lwXLm{GnO8ae-b7UAmp3H`LM|t1w=B1JuWt{+++Gvwd{n zDhBPDCEC1E8Sv*mF24J<=HG2fIJcjBz`?{!O=7}MtcjOVz*fP}cJvD$uf%bBwF!%t zo_{`0uL!qRmMYYUdNIUTMKN)0DLKz>+RIgCvTB6fSuO92aP2=vy*JWGmZDxCn`S66 ze$g;ST&6J@ZHNPjjZ$5b34{cQ4P_8Al1BleVgp1*6#C!daokNrq)wPHMEJ@3xsF+v zE;QSqr)nV0g1 zc$v?jPJM1 zl^ZjDi4U@vQDjO=L0ws*;#p)h!Ho;08FT3&K#0O7*QaNX|Gi&@hfbgOz*U#0XHqD+ zz{cn`*O!q;VCNMq_!1i+8V3*#MHR!6;59mTm}2^>cvu+x;pUq`LDG58LNfjzfpP=x z>chpgesQQLDAR&50J*y-SYjdk82mn*p*u?l((2JikRQ~9{xO%8`w4Er2 z-V0~y&N9Uv)R?d|_@A#wA74T8o9m))7R3VpttrUCBLa%6gkj-Cy@t+{*%E17@aJ25 zy*FXl2+m9+^4trkoRtW|>`m=3u5k!Nx#S>h@eYyPXVRq!Z}+)}CQ$1@B_7OJ6tmB1faf`$0o z+1#ke0M$9YAyu3!tF9jN1kKx4x$U`^WI~P2!#0hWxyu3yaTTjOrt4#cJcpo%y zd7ROMuQu5BTa|}G;~m|MPCreN8bk305z(rRdPuMCx-GZ)9o`9X{)Qa8LA z-md$!L(EEz21|x2LSFZy(f8;*faYgePP6rM4L8k(N}t0S@NT^YCXkvuJ_7kejY+>| zGxM6^ojpkAkq2o0YG#)_D4}U-y!iEkfnv=3nqjQ^y2M(9K*^o(X82v*mvnI(H5v>J zfA^Zd?+%>ajbt|Tiq<15%Os!{!M0KvZc@Y2O$StNwtj8P3^egC%WJAFHVOxo6Jgn~ z3awG53SHP4I72x}PnJVXtTQ|yCo9M?NhFFcT@LKv#KET4B>|1^vV^!F)A{)f54{sb z-WT$+Di=HC?lvr3m3u9`NJ*z`_IkQRnP)?#W5_VQ1%_V;#t`?arNX_@*qaXwDxBVn?6q)9zC5`Ny__!8KEi^RrATcFR4&y|6UOgE2nqgcapg<|>6i8EUQg 
zazfectQY#ZM|6J0dbLZ~<@V+7r?2j(e`T`Y5Z*K~n-&R+nyIL}i-|j}=8N$cuTbh+ z$-htze~(;EC@xD{cR%HN>J{&U_tj*tcj&JDCP!enW&-6HduAtZFK>=7_}`7bjW;;# zC}1MlrLnFoBt$&jeia%WHQix-)*W7oy}IKCK#g+FQzpr+SYl_0&oF-C zXCcn(HBK;BagO`4JGOS)ACH=Z^?Bo_iOU6N{23fz`UF~xUtKADWS~;>TWzo2M*yx> z-;1ihzaoFvU2oOz#uf4E!ZF4Wnwf3;R%MEeQ(2y3CgDdu7yJ@o-rZB3Bx0f`ky`+G zuP0qyj)-2}BDL4lNkzYmBvN0?B!y0F;Et1k3#!C8p3SIAn3G*Umj1P+PafTs1ZS`h zorB<)3Xsf5(og8@x=@N4m10yHD#@h-#!Q6x+$8p9a*2+48Iuud4#nZH{;n&g#;y2v z2S1sMpMDDEkM<5Oj<7~GB&r90+239CEG#sr5xikZuk$7*qeP$7gmI=zG86gA zk##9)dV>YKa4Iamu)$rBeEO?$JnkB`LL&QNi{rE!B`yn}+{-OBi7hv10OAQ{E^eGF zf4$@CQTz>F0R{%4+fuku%Pq3yPUfLzza{6JMM!>{Br1;LDEmf z$s8InUM81uRfQx;*s~b!4RHo`P#QM2LFz@d8*r+8I5(v*}u3%}$?g{|W&4sFur&bsC0O3@5 zX`q>ZOA!Nd98<;U(8Z?VH@~p%?DsRrAA5dJ8184QSx=|8?9Rc&5;eQxmf}X}3$QDU z@R{d5XTOV8^*k~gK*nESjdv@-_aTnm-7Pb$9f&3WhXllsp5%{h8UiRjZ@BX#F5|iu zXaO`Qzh-DmEXUline$lbg8f^DueI0U>MXjQXJ?Q2V7F zM4v0TDB4ivr;I?ftw2P#-H7cRBuD~z$DiICCu=_nkX)%Ome&*p2qpsIt^~T632dV9 z%{zYv0t8xjRSDT7H-f&T^#LpVO-=B^KER}1DsesE)fj|Y(@d<5Y>C#so7aDc5K%QT zLxeR$B~@3;HYL0t)sq;`uj3Cb}@r zjKLTxVgZBGjye(mo3=c^!V;_Dx|v~o?DQ{SG}3l{0Ao0u;jVx?uKfnkkNk{X&uj1z z{GLGU)}!w7E?@F5HCC*1T81hp5FG7Fg9EpDZGp*U-FT%%=}5jEkqFHFMNbXB2 zDuFp^9keHl^lLj{$a@-;{(R95@e8KWIff9=z2?>Ae9S-sOU-XoZ7N3v6!(-@A-w9m z3vpn9Ar*Dt2|)I>{@;gNN`@`Kl!1FtxuP5+Uy(37rmng@bg+D2fv=PUEWC`~Y7RDGxQ{e5q+ZJTTL?PVqsAs(AEB_hETaUQvtA6OOPPY>YSs__~8SFxK zWJ5PgSvRz08%k*q5E>IEc!JlYkH-8H&|XnPsH9T27++WLkKsY4+1U0=F!dz@6mx1; zNhLJ^zS)iv&zlF$tcSdkF>DOoQY1NA*NeJPNc?~HL|h;}ia~R%$u7d(pmTBaPw#R= zzxG)F;67QMki}(o6bVLL8YpM8dJSb8`|GxUVn^y z*k$idyXSrT5ZP&M#$xNZeVLonuo!tTBOZ3P9PffvR7rjaPYFb;U}nRBja{|QY%m=g zM`8zcwv&>@Qp3%6>GUBcEJZi9d#4C00{{p(GsJd+#LXz&2)&|#Td{#yxR?rLR{1_S z?FgB?1rpK=mZs{|y-3%C-aaZ*$1w$o&%WoZSxw2y2JnLgPiZ>~Fn{b6TFw%}2|T^wPETvR(EX ztsah-u?nSf6wHQ}`Ti5|c^|UjVuyml88nnq8(hhv#tn1u5p3EOBvT0KJ6gH5No&Oh zjTu!xET!mDv0aE~d^oSjO5Xq{(Or)c$ybyhGBOE70Fj9E=#54nXrbk(#X%ixMzP$* z)@}9CAy|CCj-D=*n3@RBU7r}&mn(Goy7egZ@Om7AG3-`$SkqdBJ&wOyb@0F0#k&mH 
z+tu^_FxjK8>#OaD*v`KFZmK+0XEs-tm%69wTPM%=mp^$A>FuNVh1FWYXQIs*D}r*fKcG8H`Pn*wl&Y zhourZ-VfG*@&qa<*|R9>3-pixDyy{Bbk8&{1~0l%ySF&QYPDnuEUd%_W5HtKehGrg z`VWQr{>+aFS-e_$et^wFXT?IX0E_e@-I0~a@z4)*ssSSB!{B16t{L)qj>7d}JEa4S z0b_-3*CT)pHUS-xT~;qYxWIzIf-<7Pr5_A>FY%AL24A&}!y zi%xY~Wr~!!bhyamhL)qc;N(B~oxZtuq#K|==zs@^mkWlQ1VxGZ7?krz)I+pcPvb;w z{!)MOfCr*U?tTmN$IilsZZ3;g+;iS>_@EIzVCvJ#)H&QN6}9GF`cfa=G}!86d+hE^ zgSt{Q#`CK=BqmM7r}=(2ozUZNO&@e!*r?qw@#K?2;B15&`t=pq*Ta=_6snu3Mb1@? zfpyB7b|>FpOWtfg)hDUzVVe_p%0Ef&6ytV$roP?cl5>q`-GnedZ|LzxMQ@; zE6QdeV=na6OVW-C-BPBP>E-eszjzPc{h@n$o}M>fr35z^ zu{{j2I3Zz1>W5Zl0d$~)xuOGY)ppcUg#>Izo6|O@@WBC-qGAL_=V7@goNK@A;NUm; z)}I6*4j-6@}gw>e2dI!Wv+?*zPz014vR6wbhaO%{Y7B zUT<|coKwNH8$!^U*n&WrRiFGO_17WUFe4H*@Yq~0+^jvSgX-996y;}<6ghCKepN;5m7}AhY^A?XUZ}X4oTG6xQi$GeV|RM94F%jaX~9tpp&=)+ zKDE&%JlN|%w8Wc3C{x0=ciyJW?uUVgXWyG3d=sAdnPP(Y=2gMS=6taYHCq~Oa^-lX zgFl$T=c?>IqbhW}#*%hcCVt>H=pz2x!umTBT|ERygH?u#YQ97b$NQqP&7^ggY+JoH zr;4UNnWj>+7+^#mVPnfUCtD0!7kJu=+Z>A`{U>%AUE6@MCPE|02slvTUTTuI(mtw& zKH(C_a~+1UE5QeeErL_ja~IDeD~7m*wczAhQ{-}gmZ5WwG%BmCvn!x?{MZ;+yjTMK zC+$+6u#p1CIAKinmlr$H3_-sdv3x?)_miE>iEmi+1-Y&_+ZpQ#iUZuLuj#3V{;Ots z+p#w_EulnI_z6bU9B<1gM!{*IAB*OwrBS{z8ns&?WscMw^z86gAD2%UCL@&_!&55=@Rd7fal{X(seex;I(K<695? zx12zz_dpCbVr_bU1^2YfJb7c;IQXST%vpk0?=4(PtX7-d=xN z{SkV?O$EDkW{hal?S_q}ULUuesm8k}>K`JU*lfGRM{Rr{wnE1#uvTjgcQJw{Ai2uz zu8qVR1W)>T@_DDcG)&2fcq{1lU$1$&^+Y$j&U}MVPm{Sk-S;b^^lC9U@CEvWzSHl` zf+xtO??t%w`IVE+`yy9vY@Tc$C9ykpFIbK2=tvr=@lYoR0s}OHGp&sLrCqrXRtSq> zOphyJ2_uY{$!Rp>eYH(~f?_*?0G;Y&z1^Z@ZWFP1IaMkm0j`eSWq_Uxn)GWOCe)y6 zU2B#@i>usj2o0)MgBqxQ#eChdby;tJm|3}kUNr3s9Qm+YBC=fLEwLGgPEkD2+(lHR zc@Y(9-X6tOq_~Q@(Md#(HFc5Qd#NUj8@PTET&^~?O30`}z^Eq|$|ZH4b^}#bXr~(! 
zoY1UkJnH!g=%gM&P;FmTpzE22yxy5cxn#|D?|EzB>)l}yDD`Y#Lwoz4IBU(rQxBUM zZ4b*mL^-=d%ZD12IsUuuTD3oh8({%Hm8M^L3kd_Q`=|RaMcan<1Vse*^ zH_waI`bQb#6P4Vy$ngNya)PEQ!<_G~xAJVe?_DFy;Kbe2tif9HX(8U=X6+f>@@l1~ z#nY9S*DZhpym7Vq41faICGLC0Z(*4@+*Z7*?fNb&nDe2W`hbVkm+h>3t)$$RP3W#L zZ$$K>hkRRUsbHX&Ac|mn^LK9r#soQ)8>!Tic)04TW;{w*9P;N9eE$S-ufG0^MwE2A4=9@Hu?t?NCh2>WTjLcN_yW7X*J2hp zq&6gMq34*GN|EJDQ#sxRZ(piiTNx;CSc(D&fd9F9B2ZOT&M8;zEtKwX^tNr4Voj9E zZOUX~Y!V?96RB-xC`oX;bW+q}W-jHXyHKOq>=a}rt_hvo0ckUz&E_>cLSr*Q0Sq(+ zN_tRsVs?{2s?#NrOw~43n`X-gSN6&eVpPO>vU5%Br}UgZsFN-}k7x{s3mGL4HwlY6*JSa?C2k3X%PlVZyJLHV_;%#C zn-$(k{e0Lg*DqB$$9o6Mxc+bl9v+D?+9#KDU{qtxXDWHz`lK_!dOEcamD(U#OcMMc z)Qzvw?V|F?6N@0nP>^DSlxinu4>@F{J?Chg1jG?J7JP3mq;o<>?7MWI&@(?&WWv#0 zCD3#=at^qE#+A-G{3{OK_qeJkvEGZ!CrUu_xeO2Sz^U>PcEAlG9c8H|u5eC^ixJ=a z6$?BJKoliC@<;X3ed(SjGq?8aBDwQ28fi_*h0CJ$AQa!Vq>mS|sYFjN(r~Lws>*|7 zFj+3oe_t|>bdPD8PQ@=8XlGyXd=QmMVI%26jyrarq&2SO=01Gz0w#h_43^b9_o&fH ze^)4Se!%7*wd^@LbnU?^KEkNkLAtF_&t>Gv;g%+W^x6zc57fR(dsJPBu4DPe;goSY zn|zPGvvYz*OxOe|@n$nPadhtAsHeuQd=noTe$z^6FbensECPN2r+^*62JisKfKioIaR|5uyn=s# zM|f1#RTF?cz!qoeXCJU#g$GUnkAN}2C}6%Q3%CU=_Bh+<=PYulKnMXwRT&5Yz_dC( z5Jv7G%GZz7g5V?c&z$e~9O*#lL3lx^*q|Wpr-(RPi10=%h%7KLjDtY_H3gexL}Ca4 zj6vQ~BHWjy29ZAPX%z2iB1;1}c$WrB)2BzyS%7()#X_%gIpR*%ZIXAeOCT;0jeGVlXLXwgV G0002&1Enzl diff --git a/deps/Roboto-0.4.9/KFOmCnqEu92Fr1Mu7GxKOzY.woff2 b/deps/Roboto-0.4.9/KFOmCnqEu92Fr1Mu7GxKOzY.woff2 deleted file mode 100644 index 20c87e676ea8cca21679bc8886fd9566172771de..0000000000000000000000000000000000000000 GIT binary patch literal 0 HcmV?d00001 literal 12456 zcmV;ZFjvoaPew8T0RR9105GTk5&!@I0D0^H05C!T0RR9100000000000000000000 z0000QWE+nf9D_y%U;u+y2viA!JP`~EfxjSutauBAE&vjOAOSW4Bm;<81Rw>1bO#^| zf=L@*Z6$1*HXY~=Lhu!TNk>sRn(0VHuyGKC&8xxxe@c*Ygh|;&@n$H?R;%0Bu2-2% zClyA8fF^cwmM;35*J|}eSt!1f-}&b66xXO8B}{{noNmk5a>h3ib5GtqQMYwJy<FB+Hg;%dw-xkg#uFpGrVec?aD11Hncx0dIiO_@Ojx?XbP8wX&yS=on_7f=|NT zssLN+piq04D3)Jw3do-EA)E5+cS%<%85ShN?S-*ufJ)%%$1}(D!0T0GC%k^D1B5-Y 
z;ou3Xd^!#Q``4C!-LChfV>BUGPPhZ^(>C+x$yA5XHUp%)DXaQc&1r0A@NQzsSO2F- ze?=0$jzH;+i#NqmE_+})AnO_cYXkR{soLKC-OCS#v0n*-93KiG;as^^s2r7w&Q0@J z{AV0VJs1Fx96W;ps0ctQ;Yn(s7%QhrcIDEUl2eLROw+5AE42=(DDAd#Yq`_7a#2>X zFZ{o%=K4Qya%nP=x@ODElwqXAbd8JrE!~$%%cmoirYpSy@Z||$nyA#PWzCkcwsFj5 z%N`wL*w8HB`-5qUmhXJiJ*n4`&JmgFc*YnZ)cfOS>=@JP*;-OA5~)HTss{IG6~p;# zKKQo*v%3XtkXti?+}RTHAXm6&gh8HF3;AFwNDN#Aa2_!{%oTFtK~B!jE-pkOgmbw8 zeTci5+qwYBCnv{}faKF-gCl|DbNu7TKoSC;NQ*o>IF<}J5%-O0V64;87>Aj<1H8d< zAR1Hz`knWkqprWE%^$3;t2Z=g&*|Ny2l}2(nVLY(v#RSQtG9$+-@DI!^|!dqlzlX_ z%b9DsRebe!CZSCCi)~(iQOx?$8yh~@w&{MNO-y(v5xVK8nv7y=h!Awod25t;!Fo1m zy7$WErW^GuHWuIl1iR3TRy=5#kAV3-HxNvMohk-^03=h^t zoURM1-VlO=K@?!;=fev>H!${7$jX9%bK~?2;FACby@0L;TGeJl5yE`0OFs|JUnz)@ z`ydR2&$HuzqR|8hsg16b1D)~eOv0!5Tr0Z}`I#XE2)w|`P6x&3p^-Vo7gi|Ej}ID( zE)3l(Yuez&Bei)^BNk$6c5mhk;T!B1F}t`PjYsu3 zvtnxbp0_7IIVuKVpYtb>1SF4`FfJerSEyVQGM5T3Q!02tYjbuc`G{O7m@IyL8mN+!t>|k7i1VaoQj8nM4xCR2_JcBHPATYu? zTu@@a#NQIXOB|fKpX49;Lw=F}$qyTrJGVCnl$>^~?*?_&h9fLpjLvJ46C!Z;AXmZX zYf}OV)O$)D$mNgQwx0;mwtj~`gdbmZuJaH6?)SI10EWO7FN^~J15ev)2Ec+o2rzF| z3=Z!N2cpM=E&>h`5&3%ZaSVeki}_>D2dg0hxnPuXII7XgnHa_&Ec|7x0gl7@KVh~>9 zMAAlRP2yj_fdY6K!YaZ&lNs}q+*DUw+$_+17Qq}6>7#;q8UYVsE^!1pq!e_Jc1$4j z%3jwuuPmj*asxg%j&$^}ecew8&^1SzyJr}=*r#&0B83P{j<7r=OC6mEky9jA_(zb# z>g}U*Ez>smplswyY-ZBf-qm26=3Xh!#+3*S9V|=|@5b)w0&hCGBg?;eGmK)iR^>tdSYm5E>!YI<=7NHRe{RUuuO+l({Tq7LjIi+9NXS)~art z2+Z|*XlJImPBl1A%!lnxZ$9%noAmGNcN_UW8(|_dsTp_El7wU0;g3jun0B4CFx z*@=kClDf+Pt#DQu7C$?(5<#+jSalklts$nXS^_789%(^fg%UV)1_;)4kGhih;HhC* zYF52S``mT5hdzhgIa%z4F8jP&;YN~Ew?s|~o~fP6!phmn+hUsJlVXWw&2mTE)ER!k z5NkE0pW0+emU${D>&Ob|2Ul03a@NdF&!Ja2oitbTKs)@bny=K}&a%XG&@Bp}x{OQ? 
zNpYlTB~&$mfZDNb3jASg7Ni1JRyR25=b5@1H-acxwwkXDg-FyxUbyZaq- z***wWBVnJKJdpmLJxm^SA4%f@uX_)^{9!hmF@CgXot9rSvph9Vc6FnQGp!OODV&gF ztEy2GtB_l~IVyeI34oTbEG||Vt5!aljX9BBmD;k7S`b%W4=BeaSLBGrKn}I|onZVt zmym5Sfa|i8AkWmfL1IIAGUAFkn|HfR*K52ztBmPz_|1tDU<Qj`bit<83_7fVbT5&g5j{^kN9bxR%A;165Eq>=Ct*M^-R=3SU4G*$0EG&Dh zaZUzqhe8_zHDGErwdb-G+I^`IgMdiRl0}Ba3-H`L$s~*=fdyTK#$_?!YUP|2J0)?C z`FO8Ij973!EUBo50Un~**lwy&qdo~K_}3>Qr-e{nP~D}ksd#puuyT)~cgOTByp|fI zlwgF>j80`^3ep02<7DIq#)`gzv@+ zLQeMY%ONS&-5UjtZLkyHm7kc{FToT4wy-<|YsUchQFKWK%X}vJF`IzW&t@_mn&D1w zs-r6gwn1s7mV|g{UXAJ=`Zim>cu}H4)r4-x|j}z%pV}oWfw9~?#*5v zK&S%nvn^pso=_37IxH$C^J%Us6PPehF)e`{>Hq-~b!D3^0}A$$Q?1!3#R&@t%T?`= zn=dOMTsviux9Z3W3)Kk@D-Zk`VSP6*C(Mjn3o-yLtG$^=sz5zPMY=KutV&wiQIWE? z@oO#i`Fl#DoPES)xlEU-=gCpBR?b8!I~B4rG{`xg!C}|Tm1TQ)26z)ymZO)=wc4W2 z!_{v%| zy}*7j)428v;J){TJv{_&z4d_Ejz4+lvFG79{1^_~n{WL!ZQ?Qc>hf7ehr%jj_5+E{ zpB?J{^r!Pb%KkZhuit$AuG{<6xn6qr^Et4ykF(D0ZM#Yxdl3F#^v9Ry3^a6=NDW%N z4)flVw-np!G{HCe(=(dNFHfU?UGMhq4*rD^^#SzjGkO)$(T6oABG&12fc#Wmx4e?* zs5^Bb@WLcOe!5gQw;gC~H#uPy&KbrSz_d}-p`}bKwQ$k*BA;bfCpEDOtT}!FcJ95x zPLLW;Sz0*O3X&YM0ITt3Bca^M*q5=a(8RlkIw_+%%wTam{PXqU)2;P|$@IK}?8LH4 zaOsp}OaB;q>H~Rj>HcTkY~|*YALGJn2j~=l{KPscDZ-GMX}Maz(eAZ`0%hJ`EtgrT zLi=f5ZF#WmIJpuKz?${qp4<%$ov?K%>s%|==V)j2aCMZOf$oMsDmCKn>gnU|C^lTZ z1;OtNUmtlO4BLzR`$(2Kew!cmPv@4cs~x4^=(j&OU#;bpEoS40;k@@sHx3tfl&9r3 z6w`omATjTU`RjXCJ?Pgjs9z{qx06{H+o{!8>e8Y;s3{S_cTxhHqx&%{pQ!NI!ud?r z#i^?1s>al{%D5n_c=QTDb|(O`yIdD?%DR>D65IvIUjp)*QV{g!^{r=hz1JF>dwT1h z>%Co)nmE#fhW~03D*tu3Pj;GL@jXjagwcndQ4%GKk}+{geWFOlq({(7>a%BZfu(Yj za`dOg3*4DnBl@8d-1}^yx}Qbn+F7&;T7|Ss%d>h&>6PMQS?V(HM)6P6%ML70+MHGT{Z(b+7Wupe6QH1rlS0m& zyTMlfHHI3a-M_?epl>-<_2khWhS*|WV6fW?jVb?LTsgnIAp*Bshu06xkE##JzDu7x6iTeKbM-WT-w7+!p@^9)LA!J_HE`J5Em95?V0PVWCv#g6fmucOgwX5 z_GA<(-7}RGX&^j=A3YJsHd>t)&f~FfDjQVnV^vJI7v4yB^qg*)`#_$>R=h;fZ56Q6_` z!XtB2uP#V-u!nc?RIX3ke|fohT@d#_UtQ9l*9B!S1abeEoI`C-3yWG`%nx+DD7@VM z)a)pwf{Gh4tN~2?5G>8Jr2X_w3sreJn;0i?ofc;T#UbbQXy4fTn-$}>Sy2JO=iDqz z!cBzrx`|FnZl#~DF|+5rVp{x|2=9DSYds-bmBX2%Ac^#F(X}SLWBx 
z7SE#m)v_WZGamL1Er6Lx=Jg*7D`s>I6VGrlZ8-3;4ihtkrwURu1mPM~ObdRD&-s|Z z^s!36Z~Ow&$LFLp-URWzP%!1CV$O!y$Oj#X5**{H;1g+2hoP&1_{g@c+I0!sVFm8C0XWQA5)Bquy<8Ib>g=_r(BM4d<>xQ z`)Spe93 z+urq8#K@?Wu<(K?$5C|g*3flIpA*hWpU~?X{^%3fpgckK_1>ziNV56QAeWwG5JR=3 zTZw#o9|sjgIopMMYr-`s6r?){&MzkW9JMS@3%)Z|ZCIDO|0=H} zjHqB$krudgr(*2%lpoo3_?2aT`_tf|5m&NvUzl-0+cRdc^~}Q?4c9uP_Jr}}shsY)5LnHD7yC<%&1m?XyZ? zlD*69RYQ^CHhSkeg8vy#;n|~iXO-dIw_)w=BQ<(Ld}S|8jLbv55#7bzNl@Fh>y0|j z!XjlJy4~=pl-qIk*`cyOaJ5(HfqOxZN)5TYd%oAy_@cbCFSgsEbp5))-c2~{p2*GG zF@0i50wUV)X7A@-Hs>%1%%UpZQ?tC>DeA(Gm;7pf$70wjvR6iMxZ`Po?EOURR>o>9 zw_jd){UgQ$7Ed;q+krn4v4q!yPsh$-w`8cStR8O0fJemQUAbTanJ7qGko-km@a0{d z{Z%@+B_oRuNcmB@UHcb&$A2HHO#S(|Fd01fyuT557uDV{pEsToX_;1#Blzxi0JT{- zk54i^uQE(E=s)i2%Fc?;YTiA87!7F=Vr6Fo_rB1%KSJDe?8%(R$t`8Pt>sBHa~pz{ zIl<22Fz2dU%pN!-6ymeoQscb5Q{sD4lE+e#t%;|MO>n9vMghjAI5i`)V;QwEp01Vz zRc+oRv*Vr~A)&4wIn^mzYU&?ekWzBIj}H3j^++GB!=<7klL~wkDZbkBa>vh@oHSHg zJvb_rPLj&(Qx&RC`j&a1pS6X*uZ6jbGcWtj_Wl)*p zQOX{fc?znm5oR(3Ph}facx816QP0lamK&I>%|rJ~dY22`brgS=TWPbPC!mah?{)xzp)adZ@)#=C)?{qhK zu5%5M((|h7RfX>H^OXLly3doJpH@AsSROuN(g9wKn9$x))$uEThAF>Oabl2`m@upY zm#7aL^nKTze_cf#P`MKG=erm)^6|zTw5Z7h$WO^j>uN&d(H}ao?F#%g!Qm$EBMVA@y!P}di#aEVOQNb!fLL{4thT6`FxA`d{FAb z=@gwUqgLrT4OssRiq1%dP8#oBYEuN%T3H9#{K4bsddCZRvE1F);n0bk)J zv>?FcUUF*Z+(#@TR(X`U0KKFm%H-}am#$#zxS_{$%vqQ;iu6)`k?HdnZI7A2+^@TT z`9A&rQ;9D^-)^s>K5hZMY)??~yxLe=nAu)?1^C6>7M^;z@zr07%s6*e;SlJroy|$2 z_LM8J?#6UQ1k^+n#yksfcL~`|J?wPKMmZGrAatf7%)){MWq9}7dJzV&ssjaSv^2Do zXzb1Bqp|1L%me)_O#^(aEd0F8EdzaAOcmu1DSkhs?8$p;9C}12!U-P$Bj+)*?VNjf z=W8ZulbJpIF-NNZ?Ju?ofnPq4m;Uvef8k2+AN`f;b`p;kXzHFaX@5!hX018qP*`OsLwQow6XGgf2A;$-lX}7rwdBPjyZZ@Fx6~aG!mKfQp>y?#T5tx`?6V|+ix#TJO4mZuC zVl^S`%Yx+Z@f6OD-IQ-V*Vk+4I;{-$@FHIii)%!_aKdpxhXiCnOiZU3T5F2o<6E~f zUV-wyd&!y{zbpNUUI>H`57jh!b$I|Gyvr zt_A~C`y87@j9+*me4AibgH&Wf;u!@{>2?PP-;O9jocZWSfL6nEEC+4uMoWcQ4I(H> zEpu_>Wzy;Z`>i}M7K(1me%tf{iX^j7#M+0ri`EZ<7zYigiwRp6QqZooQ(N7}7CDcxT}o*9GWhNQ&74C|@9H(WDlER_`Z_zhklD_{|Q9 
zqR{ct7J#HiIXI>SQ9M*8kRg(eu9y;p{;TJoASkPIN+>9@bkJoRtt{pgqxVL>M}n;zkVV5$P$) zJz9YT)l(0sNc;3J4h1FtXaEB-jS9Exi8cV-3z&%B4#}p+<7P!BQYT#kXoBiC0kv>j zW3(U|0kN|3W1{y(z+~|F#EixMF$hHnXDa%B+~aJ}#%8osF#tFqH)CN$D z=k5^GC>0-z&lh%D6jMM{GmwpB@mj@NU5zR=PHZog0*zm7Tn8$agkLp#fMORj{8D}c z#6o~rNY-uXX-x1~I;yF1>F7mBq|q^%pwgx0nMp6=dT`7X#lf0Ur9v20%36q6yt!kD zI4cMavG*>T#zO705eg*eT3mpFz`WW&nOO_=iv7%-(t*v^j4BngQKhVehy~AsLd03t z|AO|O#awB~_ss9T%%b-wGY88xUlGZ5sxqmhJJ6XRagN4ZMcSI6CTQYz4b)}#3Q^5|UzV?2w%(OhZ~4xjcp{Or`eb+Gjd5Ww zKqWR_Q@WFW8kpbQjrq3C@dZQXZflbdns%xe<+~*Rgh%jG)c7_R0IzCxU+ixD4%cn_ zi=Ha4C%goj63%{<)r_;s%lhi%h6{o1oZYBL1IDX%m8P@wg!{N(ch|5TB2N(2E?$d$lSKIvg1(OM*`9llzLvQ9RP{xNm zL4~qxbP{y&gB$71BP29?xYloZHM<(sOXd3o_$U0$XFUL)0_Jzh3FD_SgoDd12Invd z_RdMr;XwQY^8q0hyPxrWlkd7L9Ckv@2#1+431w#oC1*dp^UNFiCew<6i(Qy&8HTdM zPKGn8AppXDdY)!q_DWWW3hmkz~`Kq?$4s2hIQrY^o(VEhI=?}Jn{o%VYLbJcL1xe`U zk=e6&bxJiBsNXrsP_WUOZ7&)>&OnB5JqQ)})+^?{%n?~K#Sq1N+`deL;ARhf$N6rI zOIKumn7>G6gklRvld!aequQAa8wY+eyYU4Xf%-k@T={wdRRLdWjv5FHCLB%fd^-WY z#$GQ5(E|zp(!}@6kF$$J73M28-{bD=fpj5YUcXuTuNgf{h{?ja*Qg&zO=EGzpK`ijD#6So7m=-g1 zfXkKB^_T}nG=dNWw9sT&wLn7Y$~lk;#@$g5Dj`-TTnI>iG!D$iXSN$eP$J^$BE;Sc zztv8ayuZ6QA_HIxCu?QaOr_rs|_25tpE zE06&{tNiZe*BY_22PIEJ7y+b6>CT~+l2AaKuC;0Yvh*ck{Q3rwA$X;T!NH z_$_edC16aCQN;UyFJ2g3iCm+QC?UGUzQVr8e#wMdSF@>IC03qEl?vob{3I$=TZn+; zvvG)$08_somo<|Xn3~mX96VbXQ6T+s3gE-kpUQWe)Myo=74yxcP2J1xjT%gn0j45d zVXBxQ0VYzb!M)$5|LY1AFay#!C_i47 zY*t3$6~Q}1GoFa^vF$K-^C&D8s1xlQ6F4riLz&vMAIAPTpKSodS1u=NbPFj02ys8X z7zpZgwQOw)=JUf-Xvh& zI6en#ozWIjT!%LuQWYPjWkzRiig#V$IAMZbXQse$_!r^4|C2j5P-|bvbXm^va`*+y z&2Wlkl;y~>kPR@GU%_7F#)gVw6zDo>s#Y1RNn}A!t9a!ykVWw(f$^T|x=3rD#H^jA z_-4Q2N{RtXo0K#h)D~59a}UfyhFiQcN=BKw2Brk4gklqc)w%#`!8%pA-i22~TLU?; zGnZevCOBTUe5pLY`*%s~&1n|Y`Evxl^j04-lJGJ*+wNz&?MnsPC3_NQ$t+(@@LEC- z$Xfm4~RQ}cF1?3AU+7>|0y0jZn@By07F!dA_&CsK%q zqqQQDYTXt!jB7;6;XIxzqww6tqFFD1(3J-l;nv&>3z)lWk)O#XRYW5QODy#}%ESOE z_>MS?A@%T7kU=JPa{; zKAbLeH^!FQcqRiXvL!i2FP5*GdpPzQ(UP`ga6wmKzsr|wFG3|^ASv;w$M|Ar 
zF@e>RvGjzqiGoqQbx$Vhbq{Sg`uH{!Wpe{F;Y`NWAt1x7AbZ+A57s~`wwoDUxpOT< zaC(k%FKIcjeSPb6xKsZz(TY-Z^lvHe|D=woDeBUir~XIYpgLtqS8AL+}6 z;bGbur9bGExKC@a^ttQ^6i8U39hgXMt2|)>%ri?uIYYCIx>6J8%yOCcSWjcmP>W#f z%&21n4is!CrhYS#%^akeHi>SzO@UstUm2Gn8pC`W^ZYJD>i@)!k@${h#^9L&MI(^c zB8y5Y);5(qsS>ncK$Q_`)k9J^@GT+$wnwYvQc)Jl0bEy)NMusaNlX$1@;i~)@+!z4 zy*}Hck^hB7kY??5EV8t4t?8`Q7NDG#78}`2Gy%vwT}nA4v~dAqS&dy0Bfd6(p>HUf z9EBhtphkM2-XlFC()*j!K;0Ra!1le37pw-VL9f;v#pSDmRy@GNBP&^&g)tFSuH(Iq zY)6vGuw@tmu#YI6a-!Q=Pa3;k(y^~A*F!MRb-4-4xFoI^=>Y=;WffPs=vRS(gJ?C)#wxN&D#Y2IL?c8+R!OA?F zbDm1U7p8vo_EZ7ZQ>D`?O-ek7zd80^$0?3VN?h#KLAT_sz=&e9HP47KnsWgm3Q#fu z5JQGdx$kWoKspK7Q)8o8ML0K-Uka5jR3sudKx_hqs|sLD+gQ*&yec;OPwJ2w4QPoD z8V~;sN-^E<+I0f$Zf+*GlM7cnUF_vO!fT!^5{4vAyun_hG2pVr6zJVL^j*tpxhSrH zzD<3SCngJV7sL$_erGocs2ya%X^;m<9gS|>b*fH(n}hRn2qg%k(42c9v1|ph4@jCq zu|-^i2MP>8`W%cb~*paaOQE1a?F;1&&(G*T>X6fZ`sXkr{dAO|ca8^_j)V z1Wx_aN;Wx84~$+Rx-%9qeu1nt5}M{pdgGI>x_G!8s1{_BY5V9-dx#8GCm0t!B`M3@ zo;`u+qZt%JtTSrVUw}Q`darG_6jSo)+BZ?{##^Yz2$#?H0jmVR!hGZ0V;~rS0C7!) zBL9rX61#@mdJwdBC#J=4adl%1R;3&#?nI4}!_Mt&XWp2*9U@2O(OOYI5)013h5wr+ z?hIYevmO(7m&w#6muf3aOUh6!$b1Xg6-}F1Zk}`PR%PDv_@zf3P zT8|Q_QOcfN@`QVDo8HnOI4>cy@e6i?>aAV?UGd3V*bylnY(tDck@9@M1DnAL(D z^2NH!LhY2pm+?tSm%*Miup8-cOh;Sbo7%i*Mr5=r2IFhi7>MWeUu2>4j74a>_Po4= zX56^3L74HQSOluEB2nua3l>@;5=hQY;|IyC+4sPs1L$8|_+$Fl#e#D^@uO(rt|ssu zqrTF5H1w2@&B6ERz@vjv_7i;oRNTLX9igi)x>~Bw?}Tt;^*XHFYX8G;2cwNGmFv=E z0jirhEnR+%#Jv(0po( zYWwi6)%>o}P2hGB%H_4VeG5BcS-sz=TV7673-$Z4rKr+jSKxT4(lubrFEv^=q(Fu! 
zj~s;%!$yIkSOkhXL>*+>qjtKc1H5Y`HWs27G9lqtwrJ(&^=qU16&)$D3a@0~%cBji zLnODxST4+AYb-8=Z%d{)7!w)n%`bUdAsn@f+C-gS<6Kl@B%vKT;mR$?Qk3Pn;9DBz z<_*luWuwL#do-jYoY&q6dBUqab7K`KGR6LF5&G^S`gcSCGIo&x23l)HG+qF@hzQpdUO-J1%T89| zAmv!DH!&bmplvOpLq#wZ*2(owScKttKMqI<<6&9o3*{Q9^V2GVYe0Z;~1OH*PQ=$wTMBHe+aNU0&6RdK={#tvED7_Qjyb_$&cT(c z!d-&iyhmVum+MvwFw<}A+Vj;{tUUQyr)sV>_S0_cy zw5KqRQm6t=Gf!#$X{cO2gW2yVL+~u)RV51Nj=TZyp=t@(W^`^vina*n+x<_`=#@N< zQQPLEj(^S$PZ_}fKN)cLeBl3Y3OIWlaCQqsIP=-YYZ1w<6744mI_fLAbzEURBR3Ts zn{lvZ$*1heZTEx?IPZwS(gZ+0DjIlPWI^e({#ai@Bn6`N0o?YtN$+8xoAeWXNuwkRzsw~ z|M@0CW44qt?&RDKAGvDL-DoWo;R#tgeZzZ*j3+}hAx0|k&Q@StAzIxc#r6$x0nI|p zA(G35;GTf^!hHb|*6t68vS$i_7(*{WS_wV%DhXD?{RsDt9s@Pf-AVPxcME#^o|vYKJO^rri&+rl&gsu zvRsCXdA^hZo)M>{7`#&?F{8piW-|j_Cl6K;9UGbnOJbVRJW60KulS=;1afN*juJhTfvJA;E| z;WDG!A}P4C6*+Jm6?3nFN!9mkQ$*+n+A}7dIH*cKoE0yw|qlZsd_lq!m@4@1x5=I`m+C0?5=!KL0^=ue>_QUb~ zEG+)~%kS4$Vf>XZw`(p;+<&yZErrRKK3^=i?a7-@UF*ZtfBA_gH^Wk!O9dO2T|Ph3 z%7^Kz7sqy~$=toy+#}<`k9rv;k9GjukvoH~l3_g!*AHe>N%f))0000Vo+R)9 diff --git a/deps/Roboto-0.4.9/KFOmCnqEu92Fr1Mu7WxKOzY.woff2 b/deps/Roboto-0.4.9/KFOmCnqEu92Fr1Mu7WxKOzY.woff2 deleted file mode 100644 index cfd043dbaaefbe81712f30912f9403405061703d..0000000000000000000000000000000000000000 GIT binary patch literal 0 HcmV?d00001 literal 5796 zcmV;V7F+3ePew8T0RR9102ZVG5&!@I05~`R02V_40RR9100000000000000000000 z0000QQX7V19D+y&U;u$i2viA!JP`~E&S=Ma3xXm55`i26HUcCAgg^u!1%h-3APj;@ z8*VZM)6GTP?Ep%myC)|3|8T&Kq0onBFI*Cb0D_<>+L7rR8qk}-BDtx(CI^jBoHUw3 z+F`Nu#=T~!f2N+vG=o6m^G)8~YGkqC;QV3XSue`B}LZ3N&Gz+&0!#=H^?p&7eeE-TeiOKRnw zo!Ri(y%_ZlFlA(;ga*JX?g-d(M*vODY-D<3j({7$_v7dEk|O<-fYPCjZ3jkmb8~7! 
zy}O|n=-0c(y}LYrOfU>zn^<|^Lu92AQBQGjnJOpT4xXY#Q)ba z*^>FhP~E05qUV3BHnn=g|6ix?`tEyDEPqm;{{MAS_UUmzwFupw<-=S~|8`%+Em()op(w7*~F3pnoS9Cscv5+MQ` z@YuGW{X{D;+5nsdLoi|lF<}nj5WtDlLl6u&3Rn)o2nZl_92{I+JUo1S2v}+cN5u6v z`i6q}_7(Y=V7@y)H3Q5KBreVda|B@PInDl^`Po1R$Pa^n0kz)3gt>(Pbs&9AOvklH z@Z=NK`T7=LZ^FoIr)*Gdiuy%buOSjF0mwvisHkmEw=c~v1lll0& zwBBIH%mB_+dnL>fk0N2OGbG6|(=zI0C3UvyJA#W!wOk=4mcbw(G~0Ak zsS5-K8a8?Q=zu~T)e3hZ9g=hegaQu(tsfCd9BNZyA;9#c>7z`b8Vbs|ApKa3*$Wkd zQ!H29KL}t2*KZRQJY6vxwj}I1a^}jNj3;lt5VL%{$o3KEyzSewa+yd38t=K3eI^7& z4*mI+lGxHl@edAD`m~h6@RY#t6~S;+zzF;Y!?h-*q?jm3dGOU_8CgOK$co^3WGPup z^2u_NYhxQ#Gbph%P9?6{jdVCz@cMvJ4*mlzkVJTtmj)EjpYF3kIsUezePThf^y3yM zfYnb&-LyKu1DsP&?nNs61=uxht-ym`0K}!8#KVIRg4TZ&9J-~1L*^MJHX+I%b0VJB zf+M{$rKFX%LNnAPCsraxrfQ;4ZMkbw_={E{ zLAW{9-ZA^s=?9LQ6ByQQc#UsDNEk*f-A*zbri@BAz0pT|jGzKa0BC(kMquP#gzBP> zpz#x>?7a?`$BRf1@`^WU?j%YOky1#)RDKUi$h#$RD_L-)WfZUau=>N4YMzX@C^3a3 ztZvolwWL-@P3J*Ok!kyh=SV*JMilElH8mWlKPW)W$4s4y(DnldH;PDr^Q7rN#bs~~i`KvDAEYg-F zB;VLh%7f_0y;H!Ql|&)xuKt3+5DXob=>&KXP)-1@0`*%EZ-f2-XyikloONz9@;H&?@EuxQ}j!)$4DHV0< zy|LXNnWRsTa}=}A_U7qu*HYin{{j?z=UR;@-WuJq!Q4w7MBo&c>4@-_MXk+~pTLx< zGY$9#*V%F;h!_QRNr+LLAj0yNjp~%1&!M74I9ryINwWNm^d5^!1y;PXL?0ztCj;-T zGi5?en^+;lI`ne!XUoa0V1`SkIplRTo$#fyo!GtoJnwlH;YA@5Hy<~qhuq*zG_cMM z-D)|`#* zEb;pg)N7F^a$csfJE6+vY9X24!=7{My6C+o1o^w}xcA*rLXlr`iG2mxN={ch|BMs4 z^#kR#empf6D3e~Xbm_YZ%O1DkD6SI4;d%C#S0z;z4hjayUcq&8V4vB$Z@r<8Slcq< zZc`KJ*VZ*(y|yf9Sszed>thG5Ro$p}r1g+H8NF+5(fPD1Yh0MyckW^K);nK6f$)ol zM-N>x#J*(wy24Rw&T3=7?Yz6ldvLxeU+VDRu@HLr;KXS2uH9`NJ9jnTc1XEaI68Vz zC$Hi5708G?Gv_Ndt$+TXvS4?Pwjir-E0f0=x875>zxCXK8Wm`MJP?*xke`=amh7Pk z%HP-@vGnLw4c_sK92=}6bU!O2wnYg%?yU(OiYQ226>s(9wfN+T?4mhpFdlkAcE#Ya zvDx2`P>*c7SWX5ueF)yXvvSkr1-=G%40s;}Z7YA@sVlnr(g^pvi?{6SuTH86nB|fi zQE^GX;7snOcD=P6ZJM%TYi5{Qq8{RivNHdTF3ikuineZ9_2*ty{axLeevwW$?0uy@epLG{JK)Q4gCrTaT_R~9= zE6ZZ1>`!pmX)&dFW+63Wc}<*UfqXunJI&TjUwtQeqVv>3 zsWsQ$q9UAEQa-Y!V_-pou#1du%_}Kw&5v6<*UQ06>SJ#+tP6!+xQhwfu9X_pu+GnQ zR-nXYPY;~NqNOlSiaH5%gNK@9@;1w-tt8<$x?5j;G=c0p^rRWIz!<>mbbim 
zXlvi5zW=tJuF@u^ZkYXb@brj?=%Ag!y|8UmIr1| zGdUrdkGDvN2>#Or&`$#jqQP7FjCuS-x&?sHyh!^ZFC@DJz+{F3e&BiCyfJwMFhA_L`i5#(NbdJj60e)XqypvWy=7D zt&31UVA}L5B&NI`Y2xU4T^D)sdZd=8)2Eoh>EUfWoxX$0xtkf#^62 zE)cPH{I?HAA?q7}z3*E+3T-Lc=0Gbz@Jw=)Gsx=}_|ESN&EV2KBZ{oK2u^ z>8~hrEVfSql{wb#^f5XhKAdbA&pwSD0LRSjn%zlRbGv#K{*;`idX!@L)b`Og>XIk( zj16B!UTuyV2klXcc$bfLlg;`8{NXti>vap~^%K)J()N8yPWxK>Evk7Z$~q~IZS^Vq z1t3g=LS8>gIq}*)CVe+u924ZI%lAq;o0_$wYUAD0*G8{#Q1Q>=T|U=MuDJu?U3dBG zmbZcO%_!C}?|{{vR90FD7WIBh%Qb_dm=>H9{vja?v|~bEJ4!b?OauLzQOCn(-sNK*%{FfU?A+Ki&77M?@dx=*jxF>tVQv@3JYn9F8zf~NZtoOT zcB(resVPoVF-p}ogFU$(ldeM<;|AALb2+gdIpXGNKp63FTh|!3CAsf8Vi8GIhOv) z{3M7!-*7}PFjodG8!QD>V7MrdSw2I?k94OQr*8Cslr?!&5-Tt-$C|ODz68;uibPjL zs*`{#aZf-Cda(rOzyNXpY<4^C^9!)229%{71`*ElwZv z>#KJ9I@gRtv0%M$U41%Mh9uW^D_pbGt3Bp(k&P}wf@Np=L9$Q?JfWLy_A_9^P@Q1f zbo3A)=0qS&l2*IruUTc!=g|{Vfg6eT3y5H4ILlfmMk~>0Ms4rE)yK5{$dMQcKkkr` zs9ne_f09WP86H3em7fce;X8a|qg_aGmqU>s0kdL8(ql##_=Nf)de%9kYe-9`IuC&{ z3nE39@=Q(2!Se5c17!i#@YT!|G+-8M?#rnlQ)CG0F zG#I@E>5TwU)>Piq*32{tDRXH7^v z1q~GeA`ulk%n77euGwWWh5dL$wl8(0c83NuJmJz&+96a-ZuD(d4Ip4vp}#dX5= z7eeQ3w&jkT~IB0TjN!TD}>}uRh$6GP5YuJDDG0l!RZXEIM^Cr48$wYQ^x-FUm2?RzxZ4F z!8mU15JrrQ(&-_FD8ls|h7g7^%Y@Am#l#4A&a9CQl|cxrOgfBg80#<^XWI=utFRE+ zbqfX?F%l3a*mtx&kb-_kV2L$ka5$(S4TD%85QYk)9wG(@Vbnsz;V49N2{tkM9fHJi z5hNVfEU@@sU%V&47#3qD+f|&zAGG?15ts7;jzTnHFN|s!jW8-%9U*@8-d#l%lHq`|-Z&7qbW@RTVkh6_)lHlyQMhYedu)=0~)w zC?O9Aapi;BXb=-=D%)wswOC;ck(Zyg_0>{it(e|G8J|?PeyWzPs0}Iv|Mdv?KZin% z8IRGn_(t`uv*g;}$!?!AKC~gNeTKJ1QEN_iD&d&eh{ls~Q6S61Z zh_%Uldu0&tpnAqaQIIFgN>E{^phCdH0yq>50Gp7l57E-;!)V($9XCxUz(-*%*GFX` z)YMkJV2{O*Hw&;|&@+(`@tDAuuD$ye^&c`rYKnwL-o$3nS16 znlqhV4>@vDG*fC(FWHibO_v~KmImus0m33-oXn+UObRjF@z^q5id#(qt!boaU?0-O zrlEa$2>0T!o@7|dgo)n3VdQBiE-8D=VHxRZJGqgmV7EC|aT&(@*-uN!w2iQw3du}f zoNN|LX4fo>l~~GX^}@jU<}w#5wXs$*zcj(ehTRcLQv7XYab#b0vb}7iG==>h$?_}X zMm8cVZ%nUps)0YgV0IdkMR0u}{#P|sUa#6&<}3USYSNQ@gdgxT{0n}L4y0#!9nbii i`m(RJGr<3&0|nL$_!)l0&(Z4&tOsamhuYam9Ksp(m-4It diff --git a/deps/Roboto-0.4.9/KFOmCnqEu92Fr1Mu7mxKOzY.woff2 
b/deps/Roboto-0.4.9/KFOmCnqEu92Fr1Mu7mxKOzY.woff2 deleted file mode 100644 index 47ce460fa9a827bd143a2cc1d70f6dd4b97352cf..0000000000000000000000000000000000000000 GIT binary patch literal 0 HcmV?d00001 literal 1496 zcmV;}1t1bO#^|f=L^k2vJm! zG68-Z0c>{d3teP$ehh!!R+zFi z-JPU2hiMtv&=% zoF#>G^mfu~?)EXT^#44%xYAf+PnzyRmrfXD`z+1P>BAPo{4DhHA0USWz_LS89mQb? zABjS9#>o$>WHQzW8;R`BAe^NUHzF)3jbtMQNT!1(r-dG4kAs(wv4FO-CDQ4kIzP2~ zoSY#@;vE6a3VVtBUB42^YNV-nntzi@nTPr4(7b z59jed?NA~bd7Y}>KJF3~Ly^xHX1dH0Z&7@&IhOvo9qFePuBDm#m=uoj{)JKSiP}pj z`Dx3gYwO=W+Q&7)OucTu? zKY$Z%c(~@@)6o3`{dGU3wq;7;q*k#MzDcI?Ydq)Gt=;QT+yA6}9BR$ZXuHSB4+E3Y0P{Ga%dcgsuHfaSoh{j~ScdZM zypF1!y;U|kWMy(gm+@FCZgEay8QRxcdXoKpt1l4nBG$_Ad7G+>>zh~*0QK9dIvv2P zXV)#BJhk%JZwCnIR7FaRc^V4>;^gT2r~_Ye``*8aR+RuBe^8TvZ&2cwU+&*Km9eY? zP>CQ=T=q{!b?`5%f030DBK)f4PUk6yfBJ#!T300m)?Uu1{~Q6Ka*yR`d;#=`;uL{^ z(gnCeTHy@)Y)8=Xq9f{gcro!Urp=L=VRGcwqVqkp0xnW7Ehe5Q-t?VF`AqT^D#akN zYL%c=DpQD2tq!9OC0c|aAz~#7v7954Q6g3WRz22oi5MjsRLWN?1*2A-GOTJu)J6wR z+;}0(c#a#N&G~o`mPw0RWlEH)XK{rS!mwB+*E0wasudy;0#wQ#A+e|G5cwvurXSUa zQG~<=noFRFxVG*rtu4Yq6_*A)MA|cs3_Z6_uNe?6IugRY$nmA-hPPu{xcj0>lEfhA z`O8U~6XU&f70FyTPOR#aye2!;(TWr^S7&+7C1qzxyoV*J=n&ThOG&FSFUr+s-0nLs z_AKcfZ0ApggUE9pgI7>)Xd2M diff --git a/deps/Roboto-0.4.9/font.css b/deps/Roboto-0.4.9/font.css index fd77771fb..33cb5f482 100644 --- a/deps/Roboto-0.4.9/font.css +++ b/deps/Roboto-0.4.9/font.css @@ -3,8 +3,9 @@ font-family: 'Roboto'; font-style: normal; font-weight: 400; + font-stretch: 100%; font-display: swap; - src: url(KFOmCnqEu92Fr1Mu72xKOzY.woff2) format('woff2'); + src: url(KFOMCnqEu92Fr1ME7kSn66aGLdTylUAMQXC89YmC2DPNWubEbVmZiArmlw.woff2) format('woff2'); unicode-range: U+0460-052F, U+1C80-1C8A, U+20B4, U+2DE0-2DFF, U+A640-A69F, U+FE2E-FE2F; } /* cyrillic */ @@ -12,8 +13,9 @@ font-family: 'Roboto'; font-style: normal; font-weight: 400; + font-stretch: 100%; font-display: swap; - src: url(KFOmCnqEu92Fr1Mu5mxKOzY.woff2) format('woff2'); + src: 
url(KFOMCnqEu92Fr1ME7kSn66aGLdTylUAMQXC89YmC2DPNWubEbVmQiArmlw.woff2) format('woff2'); unicode-range: U+0301, U+0400-045F, U+0490-0491, U+04B0-04B1, U+2116; } /* greek-ext */ @@ -21,8 +23,9 @@ font-family: 'Roboto'; font-style: normal; font-weight: 400; + font-stretch: 100%; font-display: swap; - src: url(KFOmCnqEu92Fr1Mu7mxKOzY.woff2) format('woff2'); + src: url(KFOMCnqEu92Fr1ME7kSn66aGLdTylUAMQXC89YmC2DPNWubEbVmYiArmlw.woff2) format('woff2'); unicode-range: U+1F00-1FFF; } /* greek */ @@ -30,17 +33,39 @@ font-family: 'Roboto'; font-style: normal; font-weight: 400; + font-stretch: 100%; font-display: swap; - src: url(KFOmCnqEu92Fr1Mu4WxKOzY.woff2) format('woff2'); + src: url(KFOMCnqEu92Fr1ME7kSn66aGLdTylUAMQXC89YmC2DPNWubEbVmXiArmlw.woff2) format('woff2'); unicode-range: U+0370-0377, U+037A-037F, U+0384-038A, U+038C, U+038E-03A1, U+03A3-03FF; } +/* math */ +@font-face { + font-family: 'Roboto'; + font-style: normal; + font-weight: 400; + font-stretch: 100%; + font-display: swap; + src: url(KFOMCnqEu92Fr1ME7kSn66aGLdTylUAMQXC89YmC2DPNWubEbVnoiArmlw.woff2) format('woff2'); + unicode-range: U+0302-0303, U+0305, U+0307-0308, U+0310, U+0312, U+0315, U+031A, U+0326-0327, U+032C, U+032F-0330, U+0332-0333, U+0338, U+033A, U+0346, U+034D, U+0391-03A1, U+03A3-03A9, U+03B1-03C9, U+03D1, U+03D5-03D6, U+03F0-03F1, U+03F4-03F5, U+2016-2017, U+2034-2038, U+203C, U+2040, U+2043, U+2047, U+2050, U+2057, U+205F, U+2070-2071, U+2074-208E, U+2090-209C, U+20D0-20DC, U+20E1, U+20E5-20EF, U+2100-2112, U+2114-2115, U+2117-2121, U+2123-214F, U+2190, U+2192, U+2194-21AE, U+21B0-21E5, U+21F1-21F2, U+21F4-2211, U+2213-2214, U+2216-22FF, U+2308-230B, U+2310, U+2319, U+231C-2321, U+2336-237A, U+237C, U+2395, U+239B-23B7, U+23D0, U+23DC-23E1, U+2474-2475, U+25AF, U+25B3, U+25B7, U+25BD, U+25C1, U+25CA, U+25CC, U+25FB, U+266D-266F, U+27C0-27FF, U+2900-2AFF, U+2B0E-2B11, U+2B30-2B4C, U+2BFE, U+3030, U+FF5B, U+FF5D, U+1D400-1D7FF, U+1EE00-1EEFF; +} +/* symbols */ +@font-face { + font-family: 
'Roboto'; + font-style: normal; + font-weight: 400; + font-stretch: 100%; + font-display: swap; + src: url(KFOMCnqEu92Fr1ME7kSn66aGLdTylUAMQXC89YmC2DPNWubEbVn6iArmlw.woff2) format('woff2'); + unicode-range: U+0001-000C, U+000E-001F, U+007F-009F, U+20DD-20E0, U+20E2-20E4, U+2150-218F, U+2190, U+2192, U+2194-2199, U+21AF, U+21E6-21F0, U+21F3, U+2218-2219, U+2299, U+22C4-22C6, U+2300-243F, U+2440-244A, U+2460-24FF, U+25A0-27BF, U+2800-28FF, U+2921-2922, U+2981, U+29BF, U+29EB, U+2B00-2BFF, U+4DC0-4DFF, U+FFF9-FFFB, U+10140-1018E, U+10190-1019C, U+101A0, U+101D0-101FD, U+102E0-102FB, U+10E60-10E7E, U+1D2C0-1D2D3, U+1D2E0-1D37F, U+1F000-1F0FF, U+1F100-1F1AD, U+1F1E6-1F1FF, U+1F30D-1F30F, U+1F315, U+1F31C, U+1F31E, U+1F320-1F32C, U+1F336, U+1F378, U+1F37D, U+1F382, U+1F393-1F39F, U+1F3A7-1F3A8, U+1F3AC-1F3AF, U+1F3C2, U+1F3C4-1F3C6, U+1F3CA-1F3CE, U+1F3D4-1F3E0, U+1F3ED, U+1F3F1-1F3F3, U+1F3F5-1F3F7, U+1F408, U+1F415, U+1F41F, U+1F426, U+1F43F, U+1F441-1F442, U+1F444, U+1F446-1F449, U+1F44C-1F44E, U+1F453, U+1F46A, U+1F47D, U+1F4A3, U+1F4B0, U+1F4B3, U+1F4B9, U+1F4BB, U+1F4BF, U+1F4C8-1F4CB, U+1F4D6, U+1F4DA, U+1F4DF, U+1F4E3-1F4E6, U+1F4EA-1F4ED, U+1F4F7, U+1F4F9-1F4FB, U+1F4FD-1F4FE, U+1F503, U+1F507-1F50B, U+1F50D, U+1F512-1F513, U+1F53E-1F54A, U+1F54F-1F5FA, U+1F610, U+1F650-1F67F, U+1F687, U+1F68D, U+1F691, U+1F694, U+1F698, U+1F6AD, U+1F6B2, U+1F6B9-1F6BA, U+1F6BC, U+1F6C6-1F6CF, U+1F6D3-1F6D7, U+1F6E0-1F6EA, U+1F6F0-1F6F3, U+1F6F7-1F6FC, U+1F700-1F7FF, U+1F800-1F80B, U+1F810-1F847, U+1F850-1F859, U+1F860-1F887, U+1F890-1F8AD, U+1F8B0-1F8BB, U+1F8C0-1F8C1, U+1F900-1F90B, U+1F93B, U+1F946, U+1F984, U+1F996, U+1F9E9, U+1FA00-1FA6F, U+1FA70-1FA7C, U+1FA80-1FA89, U+1FA8F-1FAC6, U+1FACE-1FADC, U+1FADF-1FAE9, U+1FAF0-1FAF8, U+1FB00-1FBFF; +} /* vietnamese */ @font-face { font-family: 'Roboto'; font-style: normal; font-weight: 400; + font-stretch: 100%; font-display: swap; - src: url(KFOmCnqEu92Fr1Mu7WxKOzY.woff2) format('woff2'); + src: 
url(KFOMCnqEu92Fr1ME7kSn66aGLdTylUAMQXC89YmC2DPNWubEbVmbiArmlw.woff2) format('woff2'); unicode-range: U+0102-0103, U+0110-0111, U+0128-0129, U+0168-0169, U+01A0-01A1, U+01AF-01B0, U+0300-0301, U+0303-0304, U+0308-0309, U+0323, U+0329, U+1EA0-1EF9, U+20AB; } /* latin-ext */ @@ -48,8 +73,9 @@ font-family: 'Roboto'; font-style: normal; font-weight: 400; + font-stretch: 100%; font-display: swap; - src: url(KFOmCnqEu92Fr1Mu7GxKOzY.woff2) format('woff2'); + src: url(KFOMCnqEu92Fr1ME7kSn66aGLdTylUAMQXC89YmC2DPNWubEbVmaiArmlw.woff2) format('woff2'); unicode-range: U+0100-02BA, U+02BD-02C5, U+02C7-02CC, U+02CE-02D7, U+02DD-02FF, U+0304, U+0308, U+0329, U+1D00-1DBF, U+1E00-1E9F, U+1EF2-1EFF, U+2020, U+20A0-20AB, U+20AD-20C0, U+2113, U+2C60-2C7F, U+A720-A7FF; } /* latin */ @@ -57,7 +83,8 @@ font-family: 'Roboto'; font-style: normal; font-weight: 400; + font-stretch: 100%; font-display: swap; - src: url(KFOmCnqEu92Fr1Mu4mxK.woff2) format('woff2'); + src: url(KFOMCnqEu92Fr1ME7kSn66aGLdTylUAMQXC89YmC2DPNWubEbVmUiAo.woff2) format('woff2'); unicode-range: U+0000-00FF, U+0131, U+0152-0153, U+02BB-02BC, U+02C6, U+02DA, U+02DC, U+0304, U+0308, U+0329, U+2000-206F, U+20AC, U+2122, U+2191, U+2193, U+2212, U+2215, U+FEFF, U+FFFD; } diff --git a/pkgdown.yml b/pkgdown.yml index 3ec17fa2c..197c70abc 100644 --- a/pkgdown.yml +++ b/pkgdown.yml @@ -1,6 +1,6 @@ pandoc: 3.6.1 pkgdown: 2.1.1.9000 -pkgdown_sha: 5c03b7444923f7c797b8b283e175f8eed63797a7 +pkgdown_sha: 6615322cb2ce15b1effcecf894123c88fa10b9c9 articles: check_model_practical: check_model_practical.html check_model: check_model.html @@ -8,7 +8,7 @@ articles: compare: compare.html r2: r2.html simulate_residuals: simulate_residuals.html -last_built: 2025-01-02T14:15Z +last_built: 2025-01-09T12:18Z urls: reference: https://easystats.github.io/performance/reference article: https://easystats.github.io/performance/articles diff --git a/reference/r2_bayes.html b/reference/r2_bayes.html index c88e9aa68..a9e0df116 100644 --- 
a/reference/r2_bayes.html +++ b/reference/r2_bayes.html @@ -196,7 +196,7 @@