diff --git a/.Rbuildignore b/.Rbuildignore deleted file mode 100644 index 2999478..0000000 --- a/.Rbuildignore +++ /dev/null @@ -1,9 +0,0 @@ -^.*\.Rproj$ -^\.Rproj\.user$ -^LICENSE\.md$ -^\.git* -^tests/testthat/mplusResults* -sketches -^_pkgdown\.yml$ -^docs$ -^pkgdown$ diff --git a/404.html b/404.html new file mode 100644 index 0000000..b7cd858 --- /dev/null +++ b/404.html @@ -0,0 +1,88 @@
YEAR: 2024
COPYRIGHT HOLDER: modsem authors
LICENSE.md
+ Copyright (c) 2024 modsem authors
+Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
+The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
+THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+By default, modsem() creates product indicators for you, based on the interaction specified in your model. Behind the scenes, modsem() creates a total of nine variables (product indicators) that are used as the indicators for your latent product.
+
+m1 <- '
+# Outer Model
+X =~ x1 + x2 + x3
+Y =~ y1 + y2 + y3
+Z =~ z1 + z2 + z3
+
+# Inner model
+Y ~ X + Z + X:Z
+'
+
+est1 <- modsem(m1, oneInt)
+cat(est1$syntax)
+#> X =~ x1
+#> X =~ x2
+#> X =~ x3
+#> Y =~ y1
+#> Y =~ y2
+#> Y =~ y3
+#> Z =~ z1
+#> Z =~ z2
+#> Z =~ z3
+#> Y ~ X
+#> Y ~ Z
+#> Y ~ XZ
+#> XZ =~ 1*x1z1
+#> XZ =~ x2z1
+#> XZ =~ x3z1
+#> XZ =~ x1z2
+#> XZ =~ x2z2
+#> XZ =~ x3z2
+#> XZ =~ x1z3
+#> XZ =~ x2z3
+#> XZ =~ x3z3
+#> x1z1 ~~ 0*x2z2
+#> x1z1 ~~ 0*x2z3
+#> x1z1 ~~ 0*x3z2
+#> x1z1 ~~ 0*x3z3
+#> x1z2 ~~ 0*x2z1
+#> x1z2 ~~ 0*x2z3
+#> x1z2 ~~ 0*x3z1
+#> x1z2 ~~ 0*x3z3
+#> x1z3 ~~ 0*x2z1
+#> x1z3 ~~ 0*x2z2
+#> x1z3 ~~ 0*x3z1
+#> x1z3 ~~ 0*x3z2
+#> x2z1 ~~ 0*x3z2
+#> x2z1 ~~ 0*x3z3
+#> x2z2 ~~ 0*x3z1
+#> x2z2 ~~ 0*x3z3
+#> x2z3 ~~ 0*x3z1
+#> x2z3 ~~ 0*x3z2
+#> x1z1 ~~ x1z2
+#> x1z1 ~~ x1z3
+#> x1z1 ~~ x2z1
+#> x1z1 ~~ x3z1
+#> x1z2 ~~ x1z3
+#> x1z2 ~~ x2z2
+#> x1z2 ~~ x3z2
+#> x1z3 ~~ x2z3
+#> x1z3 ~~ x3z3
+#> x2z1 ~~ x2z2
+#> x2z1 ~~ x2z3
+#> x2z1 ~~ x3z1
+#> x2z2 ~~ x2z3
+#> x2z2 ~~ x3z2
+#> x2z3 ~~ x3z3
+#> x3z1 ~~ x3z2
+#> x3z1 ~~ x3z3
+#> x3z2 ~~ x3z3
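The default method shown above is double centering ("dblcent"): each constituent indicator is mean-centered before the product is formed, and the resulting product is then centered again. The gist of it can be sketched in plain base R with made-up data (this illustrates the idea only, and is not modsem's internal code):

```r
set.seed(123)
x1 <- rnorm(200, mean = 1)
z1 <- rnorm(200, mean = 1)

# First centering: mean-center the constituent indicators
x1c <- x1 - mean(x1)
z1c <- z1 - mean(z1)

# Form the product, then center it again ("double centering")
x1z1 <- x1c * z1c
x1z1 <- x1z1 - mean(x1z1)

mean(x1z1)  # effectively zero (up to floating-point error)
```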
Whilst this often is sufficient, you might want some control over how these indicators are created. In general, modsem() has two mechanisms for giving you control over the creation of the indicator products: 1. specifying the measurement model of your latent product yourself, and 2. using the mean() and sum() functions, collectively known as parceling operations.
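The parceling idea can be illustrated outside of modsem: indicators are first collapsed into parcels (here by their row-wise mean), and a single product indicator is then formed from the parcels. A base-R sketch with hypothetical data (not modsem's internal code):

```r
# Hypothetical indicator scores for X and Z
x <- data.frame(x1 = c(1, 2), x2 = c(2, 4), x3 = c(3, 6))
z <- data.frame(z1 = c(1, 2), z2 = c(1, 2), z3 = c(1, 2))

# Collapse each indicator set into a parcel (mean of the indicators)
x_parcel <- rowMeans(x)  # 2, 4
z_parcel <- rowMeans(z)  # 1, 2

# A single product indicator formed from the parcels
xz <- x_parcel * z_parcel  # 2, 8
```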
+By default, modsem() creates all possible combinations of product indicators. Another common approach, however, is to match the indicators by order. For example, say you have an interaction between the latent variables X and Z: 'X =~ x1 + x2' and 'Z =~ z1 + z2'. By default you would get 'XZ =~ x1z1 + x1z2 + x2z1 + x2z2'. With the matching approach you would instead get 'XZ =~ x1z1 + x2z2'. To achieve this, you can use the 'match = TRUE' argument.
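The difference between the two schemes can be sketched in base R using only the indicator names (hypothetical; modsem builds the actual product variables for you):

```r
x_inds <- c("x1", "x2")
z_inds <- c("z1", "z2")

# Default: all combinations of x- and z-indicators
all_combos <- as.vector(outer(x_inds, z_inds, paste0))
all_combos  # "x1z1" "x2z1" "x1z2" "x2z2"

# match = TRUE: indicators paired by order
matched <- paste0(x_inds, z_inds)
matched     # "x1z1" "x2z2"
```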
+
+m2 <- '
+# Outer Model
+X =~ x1 + x2
+Y =~ y1 + y2
+Z =~ z1 + z2
+
+# Inner model
+Y ~ X + Z + X:Z
+'
+
+est2 <- modsem(m2, oneInt, match = TRUE)
+summary(est2)
+#> modsem:
+#> Method = dblcent
+#> lavaan 0.6-18 ended normally after 41 iterations
+#>
+#> Estimator ML
+#> Optimization method NLMINB
+#> Number of model parameters 22
+#>
+#> Number of observations 2000
+#>
+#> Model Test User Model:
+#>
+#> Test statistic 11.355
+#> Degrees of freedom 14
+#> P-value (Chi-square) 0.658
+#>
+#> Parameter Estimates:
+#>
+#> Standard errors Standard
+#> Information Expected
+#> Information saturated (h1) model Structured
+#>
+#> Latent Variables:
+#> Estimate Std.Err z-value P(>|z|)
+#> X =~
+#> x1 1.000
+#> x2 0.819 0.021 38.127 0.000
+#> Y =~
+#> y1 1.000
+#> y2 0.807 0.010 82.495 0.000
+#> Z =~
+#> z1 1.000
+#> z2 0.836 0.024 35.392 0.000
+#> XZ =~
+#> x1z1 1.000
+#> x2z2 0.645 0.024 26.904 0.000
+#>
+#> Regressions:
+#> Estimate Std.Err z-value P(>|z|)
+#> Y ~
+#> X 0.688 0.029 23.366 0.000
+#> Z 0.576 0.029 20.173 0.000
+#> XZ 0.706 0.032 22.405 0.000
+#>
+#> Covariances:
+#> Estimate Std.Err z-value P(>|z|)
+#> .x1z1 ~~
+#> .x2z2 0.000
+#> X ~~
+#> Z 0.202 0.025 8.182 0.000
+#> XZ 0.003 0.026 0.119 0.905
+#> Z ~~
+#> XZ 0.042 0.026 1.621 0.105
+#>
+#> Variances:
+#> Estimate Std.Err z-value P(>|z|)
+#> .x1 0.179 0.022 8.029 0.000
+#> .x2 0.151 0.015 9.956 0.000
+#> .y1 0.184 0.021 8.577 0.000
+#> .y2 0.136 0.014 9.663 0.000
+#> .z1 0.197 0.025 7.802 0.000
+#> .z2 0.138 0.018 7.831 0.000
+#> .x1z1 0.319 0.035 9.141 0.000
+#> .x2z2 0.244 0.016 15.369 0.000
+#> X 0.962 0.042 23.120 0.000
+#> .Y 0.964 0.042 23.110 0.000
+#> Z 0.987 0.044 22.260 0.000
+#> XZ 1.041 0.054 19.441 0.000
If you want even more control, you can use the get_pi_syntax() and get_pi_data() functions, such that you can extract the modified syntax and data from modsem and alter them accordingly. This can be particularly useful in cases where you want to estimate a model using a feature in lavaan which isn't available in modsem. For example, (as of yet) the syntax for both ordered and multigroup models isn't as flexible as in lavaan. Thus you can modify the auto-generated syntax (along with the altered dataset) from modsem to suit your needs.
+m3 <- '
+# Outer Model
+X =~ x1 + x2
+Y =~ y1 + y2
+Z =~ z1 + z2
+
+# Inner model
+Y ~ X + Z + X:Z
+'
+syntax <- get_pi_syntax(m3)
+cat(syntax)
+#> X =~ x1
+#> X =~ x2
+#> Y =~ y1
+#> Y =~ y2
+#> Z =~ z1
+#> Z =~ z2
+#> Y ~ X
+#> Y ~ Z
+#> Y ~ XZ
+#> XZ =~ 1*x1z1
+#> XZ =~ x2z1
+#> XZ =~ x1z2
+#> XZ =~ x2z2
+#> x1z1 ~~ 0*x2z2
+#> x1z2 ~~ 0*x2z1
+#> x1z1 ~~ x1z2
+#> x1z1 ~~ x2z1
+#> x1z2 ~~ x2z2
+#> x2z1 ~~ x2z2
+data <- get_pi_data(m3, oneInt)
+head(data)
+#> x1 x2 y1 y2 z1 z2 x1z1
+#> 1 2.4345722 1.3578655 1.4526897 0.9560888 0.8184825 1.60708140 -0.4823019
+#> 2 0.2472734 0.2723201 0.5496756 0.7115311 3.6649148 2.60983102 -2.2680403
+#> 3 -1.3647759 -0.5628205 -0.9835467 -0.6697747 1.7249386 2.10981827 -1.9137416
+#> 4 3.0432836 2.2153763 6.4641465 4.7805981 2.5697116 3.26335379 2.9385205
+#> 5 2.8148841 2.7029616 2.2860280 2.1457643 0.3467850 0.07164577 -1.4009548
+#> 6 -0.5453450 -0.7530642 1.1294876 1.1998472 -0.2362958 0.60252657 1.7465860
+#> x2z1 x1z2 x2z2
+#> 1 -0.1884837 0.3929380 -0.0730934
+#> 2 -2.6637694 -1.2630544 -1.4547433
+#> 3 -1.4299711 -2.3329864 -1.7383407
+#> 4 1.3971422 3.9837389 1.9273102
+#> 5 -1.1495704 -2.2058995 -1.8169042
+#> 6 2.2950753 0.7717365 1.0568143
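Since the extracted syntax is just a character string, modifying it is ordinary string manipulation. As a sketch, you could append extra lavaan syntax before fitting the model yourself (the stand-in syntax and the added covariance line below are hypothetical examples):

```r
# Stand-in for the output of get_pi_syntax(); in practice use that instead
syntax <- "Y ~ X\nY ~ Z"

# Hypothetical addition: an extra covariance you want to estimate
extra <- "X ~~ Z"
full_syntax <- paste(syntax, extra, sep = "\n")
cat(full_syntax)
#> Y ~ X
#> Y ~ Z
#> X ~~ Z
```

The modified syntax, along with the data from get_pi_data(), can then be passed to lavaan directly (e.g., via lavaan::sem()).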
vignettes/interaction_two_etas.Rmd
Interaction effects between two endogenous (i.e., dependent) variables work as you would expect for the product indicator methods ("dblcent", "rca", "ca", "uca"). For the LMS and QML approaches, however, it is not as straightforward.
The LMS and QML approaches can (by default) handle interaction effects between endogenous and exogenous (i.e., independent) variables, but not interaction effects between two endogenous variables. When there is an interaction effect between two endogenous variables, the equations cannot easily be written in 'reduced' form, meaning that normal estimation procedures won't work.
+This being said, there is a workaround for these limitations in both the LMS and QML approaches. In essence, the model can be split into two parts: one linear and one non-linear. You can then replace the covariance matrix used in the estimation of the non-linear model with the model-implied covariance matrix from the linear model. Thus you can treat an endogenous variable as if it were exogenous, given that it can be expressed in a linear model.
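The replacement idea can be illustrated with a toy computation. Given a linear structural equation such as INT = gamma1*PBC + gamma2*ATT + zeta, the model-implied variance of INT follows from the covariance matrix of its predictors. A base-R sketch with made-up numbers (purely illustrative; modsem handles this internally via the cov.syntax argument shown below):

```r
gamma <- c(0.2, 0.2)                 # hypothetical structural coefficients (PBC, ATT)
phi   <- matrix(c(1.0, 0.6,
                  0.6, 1.0), 2, 2)   # hypothetical covariance matrix of PBC and ATT
psi   <- 0.5                         # hypothetical residual variance of INT

# Model-implied variance of INT: gamma' Phi gamma + psi
var_int <- as.numeric(t(gamma) %*% phi %*% gamma + psi)
var_int  # 0.628
```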
+Let’s consider the theory of planned behaviour (TPB), where we wish to estimate the quadratic effect of INT on BEH (INT:INT), with the following model:
+
+tpb <- '
+# Outer Model (Based on Hagger et al., 2007)
+ ATT =~ att1 + att2 + att3 + att4 + att5
+ SN =~ sn1 + sn2
+ PBC =~ pbc1 + pbc2 + pbc3
+ INT =~ int1 + int2 + int3
+ BEH =~ b1 + b2
+
+# Inner Model (Based on Steinmetz et al., 2011)
+ INT ~ ATT + SN + PBC
+ BEH ~ INT + PBC
+ BEH ~ INT:INT
+'
Since INT is an endogenous variable, its quadratic term (i.e., an interaction effect with itself) would include two endogenous variables. Thus we would ordinarily not be able to estimate this model using the LMS or QML approach. However, we can split the model into two parts: one linear and one non-linear. While INT is an endogenous variable, it can be expressed in a linear model, since it is not affected by any interaction terms:
+
+tpb_linear <- 'INT ~ PBC + ATT + SN'
We could then remove this part from the original model, giving +us:
+
+tpb_nonlinear <- '
+# Outer Model (Based on Hagger et al., 2007)
+ ATT =~ att1 + att2 + att3 + att4 + att5
+ SN =~ sn1 + sn2
+ PBC =~ pbc1 + pbc2 + pbc3
+ INT =~ int1 + int2 + int3
+ BEH =~ b1 + b2
+
+# Inner Model (Based on Steinmetz et al., 2011)
+ BEH ~ INT + PBC
+ BEH ~ INT:INT
+'
We could now estimate just the non-linear model, since INT is now an exogenous variable. This would, however, not incorporate the structural model for INT. To address this, we can make modsem replace the covariance matrix (phi) of (INT, PBC, ATT, SN) with the model-implied covariance matrix from the linear model, whilst estimating both models simultaneously. To achieve this, we can use the cov.syntax argument in modsem:
+est_lms <- modsem(tpb_nonlinear, data = TPB, cov.syntax = tpb_linear, method = "lms")
+#> Warning: It is recommended that you have at least 48 nodes for interaction
+#> effects between endogenous variables in the lms approach 'nodes = 24'
+summary(est_lms)
+#> Estimating null model
+#> EM: Iteration = 1, LogLik = -28467.33, Change = -28467.332
+#> EM: Iteration = 2, LogLik = -28124.48, Change = 342.852
+#> EM: Iteration = 3, LogLik = -27825.10, Change = 299.377
+#> EM: Iteration = 4, LogLik = -27581.12, Change = 243.980
+#> EM: Iteration = 5, LogLik = -27370.69, Change = 210.431
+#> EM: Iteration = 6, LogLik = -27175.43, Change = 195.264
+#> EM: Iteration = 7, LogLik = -27000.48, Change = 174.946
+#> EM: Iteration = 8, LogLik = -26848.56, Change = 151.919
+#> EM: Iteration = 9, LogLik = -26711.51, Change = 137.051
+#> EM: Iteration = 10, LogLik = -26592.54, Change = 118.968
+#> EM: Iteration = 11, LogLik = -26504.04, Change = 88.504
+#> EM: Iteration = 12, LogLik = -26466.85, Change = 37.190
+#> EM: Iteration = 13, LogLik = -26452.38, Change = 14.465
+#> EM: Iteration = 14, LogLik = -26439.05, Change = 13.331
+#> EM: Iteration = 15, LogLik = -26430.51, Change = 8.541
+#> EM: Iteration = 16, LogLik = -26422.81, Change = 7.698
+#> EM: Iteration = 17, LogLik = -26403.89, Change = 18.924
+#> EM: Iteration = 18, LogLik = -26402.25, Change = 1.642
+#> EM: Iteration = 19, LogLik = -26401.21, Change = 1.037
+#> EM: Iteration = 20, LogLik = -26400.26, Change = 0.955
+#> EM: Iteration = 21, LogLik = -26399.30, Change = 0.960
+#> EM: Iteration = 22, LogLik = -26398.64, Change = 0.658
+#> EM: Iteration = 23, LogLik = -26398.02, Change = 0.615
+#> EM: Iteration = 24, LogLik = -26397.74, Change = 0.278
+#> EM: Iteration = 25, LogLik = -26397.33, Change = 0.420
+#> EM: Iteration = 26, LogLik = -26397.20, Change = 0.128
+#> EM: Iteration = 27, LogLik = -26396.77, Change = 0.425
+#> EM: Iteration = 28, LogLik = -26396.48, Change = 0.292
+#> EM: Iteration = 29, LogLik = -26396.34, Change = 0.145
+#> EM: Iteration = 30, LogLik = -26396.31, Change = 0.022
+#> EM: Iteration = 31, LogLik = -26395.95, Change = 0.365
+#> EM: Iteration = 32, LogLik = -26395.73, Change = 0.215
+#> EM: Iteration = 33, LogLik = -26395.68, Change = 0.055
+#> EM: Iteration = 34, LogLik = -26395.37, Change = 0.304
+#> EM: Iteration = 35, LogLik = -26395.30, Change = 0.079
+#> EM: Iteration = 36, LogLik = -26395.10, Change = 0.198
+#> EM: Iteration = 37, LogLik = -26395.04, Change = 0.057
+#> EM: Iteration = 38, LogLik = -26394.99, Change = 0.054
+#> EM: Iteration = 39, LogLik = -26394.96, Change = 0.028
+#> EM: Iteration = 40, LogLik = -26394.93, Change = 0.031
+#> EM: Iteration = 41, LogLik = -26394.91, Change = 0.019
+#> EM: Iteration = 42, LogLik = -26394.89, Change = 0.023
+#> EM: Iteration = 43, LogLik = -26394.87, Change = 0.015
+#> EM: Iteration = 44, LogLik = -26394.85, Change = 0.019
+#> EM: Iteration = 45, LogLik = -26394.84, Change = 0.013
+#> EM: Iteration = 46, LogLik = -26394.82, Change = 0.018
+#> EM: Iteration = 47, LogLik = -26394.81, Change = 0.012
+#> EM: Iteration = 48, LogLik = -26394.79, Change = 0.018
+#> EM: Iteration = 49, LogLik = -26394.78, Change = 0.013
+#> EM: Iteration = 50, LogLik = -26394.76, Change = 0.020
+#> EM: Iteration = 51, LogLik = -26394.74, Change = 0.015
+#> EM: Iteration = 52, LogLik = -26394.72, Change = 0.028
+#> EM: Iteration = 53, LogLik = -26394.69, Change = 0.022
+#> EM: Iteration = 54, LogLik = -26394.63, Change = 0.062
+#> EM: Iteration = 55, LogLik = -26394.58, Change = 0.057
+#> EM: Iteration = 56, LogLik = -26394.29, Change = 0.284
+#> EM: Iteration = 57, LogLik = -26394.04, Change = 0.248
+#> EM: Iteration = 58, LogLik = -26393.97, Change = 0.075
+#> EM: Iteration = 59, LogLik = -26393.73, Change = 0.240
+#> EM: Iteration = 60, LogLik = -26393.72, Change = 0.011
+#> EM: Iteration = 61, LogLik = -26393.71, Change = 0.013
+#> EM: Iteration = 62, LogLik = -26393.70, Change = 0.005
+#> EM: Iteration = 63, LogLik = -26393.69, Change = 0.008
+#> EM: Iteration = 64, LogLik = -26393.69, Change = 0.003
+#> EM: Iteration = 65, LogLik = -26393.68, Change = 0.007
+#> EM: Iteration = 66, LogLik = -26393.68, Change = 0.003
+#> EM: Iteration = 67, LogLik = -26393.67, Change = 0.007
+#> EM: Iteration = 68, LogLik = -26393.67, Change = 0.003
+#> EM: Iteration = 69, LogLik = -26393.66, Change = 0.006
+#> EM: Iteration = 70, LogLik = -26393.66, Change = 0.002
+#> EM: Iteration = 71, LogLik = -26393.66, Change = 0.006
+#> EM: Iteration = 72, LogLik = -26393.65, Change = 0.002
+#> EM: Iteration = 73, LogLik = -26393.65, Change = 0.005
+#> EM: Iteration = 74, LogLik = -26393.65, Change = 0.002
+#> EM: Iteration = 75, LogLik = -26393.64, Change = 0.005
+#> EM: Iteration = 76, LogLik = -26393.64, Change = 0.002
+#> EM: Iteration = 77, LogLik = -26393.63, Change = 0.004
+#> EM: Iteration = 78, LogLik = -26393.63, Change = 0.002
+#> EM: Iteration = 79, LogLik = -26393.63, Change = 0.004
+#> EM: Iteration = 80, LogLik = -26393.63, Change = 0.002
+#> EM: Iteration = 81, LogLik = -26393.62, Change = 0.004
+#> EM: Iteration = 82, LogLik = -26393.62, Change = 0.002
+#> EM: Iteration = 83, LogLik = -26393.62, Change = 0.004
+#> EM: Iteration = 84, LogLik = -26393.61, Change = 0.002
+#> EM: Iteration = 85, LogLik = -26393.61, Change = 0.003
+#> EM: Iteration = 86, LogLik = -26393.61, Change = 0.002
+#> EM: Iteration = 87, LogLik = -26393.60, Change = 0.003
+#> EM: Iteration = 88, LogLik = -26393.60, Change = 0.003
+#> EM: Iteration = 89, LogLik = -26393.60, Change = 0.003
+#> EM: Iteration = 90, LogLik = -26393.60, Change = 0.003
+#> EM: Iteration = 91, LogLik = -26393.59, Change = 0.003
+#> EM: Iteration = 92, LogLik = -26393.59, Change = 0.003
+#> EM: Iteration = 93, LogLik = -26393.59, Change = 0.003
+#> EM: Iteration = 94, LogLik = -26393.59, Change = 0.003
+#> EM: Iteration = 95, LogLik = -26393.58, Change = 0.003
+#> EM: Iteration = 96, LogLik = -26393.58, Change = 0.003
+#> EM: Iteration = 97, LogLik = -26393.58, Change = 0.002
+#> EM: Iteration = 98, LogLik = -26393.58, Change = 0.003
+#> EM: Iteration = 99, LogLik = -26393.57, Change = 0.002
+#> EM: Iteration = 100, LogLik = -26393.57, Change = 0.003
+#> EM: Iteration = 101, LogLik = -26393.57, Change = 0.002
+#> EM: Iteration = 102, LogLik = -26393.57, Change = 0.003
+#> EM: Iteration = 103, LogLik = -26393.56, Change = 0.002
+#> EM: Iteration = 104, LogLik = -26393.56, Change = 0.003
+#> EM: Iteration = 105, LogLik = -26393.56, Change = 0.002
+#> EM: Iteration = 106, LogLik = -26393.56, Change = 0.003
+#> EM: Iteration = 107, LogLik = -26393.55, Change = 0.002
+#> EM: Iteration = 108, LogLik = -26393.55, Change = 0.003
+#> EM: Iteration = 109, LogLik = -26393.55, Change = 0.002
+#> EM: Iteration = 110, LogLik = -26393.55, Change = 0.003
+#> EM: Iteration = 111, LogLik = -26393.54, Change = 0.002
+#> EM: Iteration = 112, LogLik = -26393.54, Change = 0.003
+#> EM: Iteration = 113, LogLik = -26393.54, Change = 0.002
+#> EM: Iteration = 114, LogLik = -26393.54, Change = 0.003
+#> EM: Iteration = 115, LogLik = -26393.53, Change = 0.002
+#> EM: Iteration = 116, LogLik = -26393.53, Change = 0.003
+#> EM: Iteration = 117, LogLik = -26393.53, Change = 0.002
+#> EM: Iteration = 118, LogLik = -26393.53, Change = 0.003
+#> EM: Iteration = 119, LogLik = -26393.53, Change = 0.001
+#> EM: Iteration = 120, LogLik = -26393.52, Change = 0.003
+#> EM: Iteration = 121, LogLik = -26393.52, Change = 0.001
+#> EM: Iteration = 122, LogLik = -26393.52, Change = 0.003
+#> EM: Iteration = 123, LogLik = -26393.52, Change = 0.001
+#> EM: Iteration = 124, LogLik = -26393.51, Change = 0.003
+#> EM: Iteration = 125, LogLik = -26393.51, Change = 0.001
+#> EM: Iteration = 126, LogLik = -26393.51, Change = 0.003
+#> EM: Iteration = 127, LogLik = -26393.51, Change = 0.001
+#> EM: Iteration = 128, LogLik = -26393.51, Change = 0.003
+#> EM: Iteration = 129, LogLik = -26393.50, Change = 0.001
+#> EM: Iteration = 130, LogLik = -26393.50, Change = 0.003
+#> EM: Iteration = 131, LogLik = -26393.50, Change = 0.001
+#> EM: Iteration = 132, LogLik = -26393.50, Change = 0.003
+#> EM: Iteration = 133, LogLik = -26393.50, Change = 0.001
+#> EM: Iteration = 134, LogLik = -26393.49, Change = 0.003
+#> EM: Iteration = 135, LogLik = -26393.49, Change = 0.001
+#> EM: Iteration = 136, LogLik = -26393.49, Change = 0.003
+#> EM: Iteration = 137, LogLik = -26393.49, Change = 0.001
+#> EM: Iteration = 138, LogLik = -26393.48, Change = 0.003
+#> EM: Iteration = 139, LogLik = -26393.48, Change = 0.001
+#> EM: Iteration = 140, LogLik = -26393.48, Change = 0.003
+#> EM: Iteration = 141, LogLik = -26393.48, Change = 0.001
+#> EM: Iteration = 142, LogLik = -26393.48, Change = 0.003
+#> EM: Iteration = 143, LogLik = -26393.48, Change = 0.001
+#> EM: Iteration = 144, LogLik = -26393.47, Change = 0.003
+#> EM: Iteration = 145, LogLik = -26393.47, Change = 0.001
+#> EM: Iteration = 146, LogLik = -26393.47, Change = 0.003
+#> EM: Iteration = 147, LogLik = -26393.47, Change = 0.000
+#> EM: Iteration = 148, LogLik = -26393.46, Change = 0.003
+#> EM: Iteration = 149, LogLik = -26393.46, Change = 0.000
+#> EM: Iteration = 150, LogLik = -26393.46, Change = 0.003
+#> EM: Iteration = 151, LogLik = -26393.46, Change = 0.000
+#> EM: Iteration = 152, LogLik = -26393.46, Change = 0.004
+#> EM: Iteration = 153, LogLik = -26393.46, Change = 0.000
+#> EM: Iteration = 154, LogLik = -26393.45, Change = 0.004
+#> EM: Iteration = 155, LogLik = -26393.45, Change = 0.000
+#> EM: Iteration = 156, LogLik = -26393.45, Change = 0.004
+#> EM: Iteration = 157, LogLik = -26393.45, Change = 0.000
+#>
+#> modsem (version 1.0.3):
+#> Estimator LMS
+#> Optimization method EM-NLMINB
+#> Number of observations 2000
+#> Number of iterations 136
+#> Loglikelihood -23781.36
+#> Akaike (AIC) 47670.72
+#> Bayesian (BIC) 47973.16
+#>
+#> Numerical Integration:
+#> Points of integration (per dim) 24
+#> Dimensions 1
+#> Total points of integration 24
+#>
+#> Fit Measures for H0:
+#> Loglikelihood -26393
+#> Akaike (AIC) 52892.89
+#> Bayesian (BIC) 53189.74
+#> Chi-square 66.72
+#> Degrees of Freedom (Chi-square) 82
+#> P-value (Chi-square) 0.889
+#> RMSEA 0.000
+#>
+#> Comparative fit to H0 (no interaction effect)
+#> Loglikelihood change 2612.09
+#> Difference test (D) 5224.18
+#> Degrees of freedom (D) 1
+#> P-value (D) 0.000
+#>
+#> R-Squared:
+#> BEH 0.235
+#> INT 0.365
+#> R-Squared Null-Model (H0):
+#> BEH 0.210
+#> INT 0.367
+#> R-Squared Change:
+#> BEH 0.025
+#> INT -0.002
+#>
+#> Parameter Estimates:
+#> Coefficients unstandardized
+#> Information expected
+#> Standard errors standard
+#>
+#> Latent Variables:
+#> Estimate Std.Error z.value P(>|z|)
+#> INT =~
+#> int1 1.000
+#> int2 0.915 0.016 58.24 0.000
+#> int3 0.807 0.015 54.49 0.000
+#> ATT =~
+#> att1 1.000
+#> att2 0.876 0.012 71.51 0.000
+#> att3 0.787 0.012 66.55 0.000
+#> att4 0.693 0.011 60.50 0.000
+#> att5 0.885 0.012 71.68 0.000
+#> SN =~
+#> sn1 1.000
+#> sn2 0.893 0.017 52.59 0.000
+#> PBC =~
+#> pbc1 1.000
+#> pbc2 0.912 0.013 69.14 0.000
+#> pbc3 0.801 0.012 66.52 0.000
+#> BEH =~
+#> b1 1.000
+#> b2 0.959 0.033 29.32 0.000
+#>
+#> Regressions:
+#> Estimate Std.Error z.value P(>|z|)
+#> BEH ~
+#> INT 0.196 0.026 7.60 0.000
+#> PBC 0.238 0.022 10.62 0.000
+#> INT:INT 0.129 0.018 7.29 0.000
+#> INT ~
+#> PBC 0.218 0.029 7.51 0.000
+#> ATT 0.210 0.025 8.28 0.000
+#> SN 0.172 0.028 6.22 0.000
+#>
+#> Intercepts:
+#> Estimate Std.Error z.value P(>|z|)
+#> int1 1.005 0.020 49.42 0.000
+#> int2 1.004 0.019 53.24 0.000
+#> int3 0.998 0.017 57.46 0.000
+#> att1 1.007 0.024 42.32 0.000
+#> att2 1.001 0.021 47.17 0.000
+#> att3 1.011 0.019 51.87 0.000
+#> att4 0.994 0.018 55.74 0.000
+#> att5 0.986 0.021 46.02 0.000
+#> sn1 1.000 0.024 42.10 0.000
+#> sn2 1.005 0.021 47.14 0.000
+#> pbc1 0.991 0.023 43.04 0.000
+#> pbc2 0.979 0.021 45.56 0.000
+#> pbc3 0.986 0.019 51.24 0.000
+#> b1 0.995 0.024 42.27 0.000
+#> b2 1.014 0.022 45.68 0.000
+#> BEH 0.000
+#> INT 0.000
+#> ATT 0.000
+#> SN 0.000
+#> PBC 0.000
+#>
+#> Covariances:
+#> Estimate Std.Error z.value P(>|z|)
+#> PBC ~~
+#> ATT 0.668 0.079 8.48 0.000
+#> SN 0.675 0.054 12.48 0.000
+#> ATT ~~
+#> SN 0.625 0.029 21.63 0.000
+#>
+#> Variances:
+#> Estimate Std.Error z.value P(>|z|)
+#> int1 0.161 0.009 18.28 0.000
+#> int2 0.161 0.008 20.89 0.000
+#> int3 0.170 0.007 23.51 0.000
+#> att1 0.167 0.007 23.23 0.000
+#> att2 0.150 0.006 24.81 0.000
+#> att3 0.160 0.006 26.51 0.000
+#> att4 0.163 0.006 27.46 0.000
+#> att5 0.159 0.006 24.85 0.000
+#> sn1 0.181 0.015 12.48 0.000
+#> sn2 0.155 0.012 13.27 0.000
+#> pbc1 0.145 0.008 18.27 0.000
+#> pbc2 0.160 0.007 21.74 0.000
+#> pbc3 0.154 0.007 23.69 0.000
+#> b1 0.185 0.020 9.23 0.000
+#> b2 0.136 0.018 7.52 0.000
+#> BEH 0.475 0.024 19.71 0.000
+#> PBC 0.960 0.037 26.13 0.000
+#> ATT 1.000 0.058 17.32 0.000
+#> SN 0.968 0.086 11.29 0.000
+#> INT 0.481 0.019 24.97 0.000
+
+est_qml <- modsem(tpb_nonlinear, data = TPB, cov.syntax = tpb_linear, method = "qml")
+summary(est_qml)
+#> Estimating null model
+#> Starting M-step
+#>
+#> modsem (version 1.0.3):
+#> Estimator QML
+#> Optimization method NLMINB
+#> Number of observations 2000
+#> Number of iterations 76
+#> Loglikelihood -26360.52
+#> Akaike (AIC) 52829.04
+#> Bayesian (BIC) 53131.49
+#>
+#> Fit Measures for H0:
+#> Loglikelihood -26393
+#> Akaike (AIC) 52892.45
+#> Bayesian (BIC) 53189.29
+#> Chi-square 66.27
+#> Degrees of Freedom (Chi-square) 82
+#> P-value (Chi-square) 0.897
+#> RMSEA 0.000
+#>
+#> Comparative fit to H0 (no interaction effect)
+#> Loglikelihood change 32.70
+#> Difference test (D) 65.41
+#> Degrees of freedom (D) 1
+#> P-value (D) 0.000
+#>
+#> R-Squared:
+#> BEH 0.239
+#> INT 0.370
+#> R-Squared Null-Model (H0):
+#> BEH 0.210
+#> INT 0.367
+#> R-Squared Change:
+#> BEH 0.029
+#> INT 0.003
+#>
+#> Parameter Estimates:
+#> Coefficients unstandardized
+#> Information observed
+#> Standard errors standard
+#>
+#> Latent Variables:
+#> Estimate Std.Error z.value P(>|z|)
+#> INT =~
+#> int1 1.000
+#> int2 0.914 0.015 59.04 0.000
+#> int3 0.807 0.015 55.65 0.000
+#> ATT =~
+#> att1 1.000
+#> att2 0.878 0.012 71.56 0.000
+#> att3 0.789 0.012 66.37 0.000
+#> att4 0.695 0.011 61.00 0.000
+#> att5 0.887 0.013 70.85 0.000
+#> SN =~
+#> sn1 1.000
+#> sn2 0.888 0.017 52.62 0.000
+#> PBC =~
+#> pbc1 1.000
+#> pbc2 0.913 0.013 69.38 0.000
+#> pbc3 0.801 0.012 66.08 0.000
+#> BEH =~
+#> b1 1.000
+#> b2 0.960 0.032 29.91 0.000
+#>
+#> Regressions:
+#> Estimate Std.Error z.value P(>|z|)
+#> BEH ~
+#> INT 0.197 0.025 7.76 0.000
+#> PBC 0.239 0.023 10.59 0.000
+#> INT:INT 0.128 0.016 7.88 0.000
+#> INT ~
+#> PBC 0.222 0.030 7.51 0.000
+#> ATT 0.213 0.026 8.17 0.000
+#> SN 0.175 0.028 6.33 0.000
+#>
+#> Intercepts:
+#> Estimate Std.Error z.value P(>|z|)
+#> int1 1.014 0.022 46.96 0.000
+#> int2 1.012 0.020 50.41 0.000
+#> int3 1.005 0.018 54.80 0.000
+#> att1 1.014 0.024 42.01 0.000
+#> att2 1.007 0.021 46.97 0.000
+#> att3 1.016 0.020 51.45 0.000
+#> att4 0.999 0.018 55.65 0.000
+#> att5 0.992 0.022 45.67 0.000
+#> sn1 1.006 0.024 41.66 0.000
+#> sn2 1.010 0.022 46.71 0.000
+#> pbc1 0.998 0.024 42.41 0.000
+#> pbc2 0.985 0.022 44.93 0.000
+#> pbc3 0.991 0.020 50.45 0.000
+#> b1 0.999 0.023 42.64 0.000
+#> b2 1.017 0.022 46.25 0.000
+#> BEH 0.000
+#> INT 0.000
+#> ATT 0.000
+#> SN 0.000
+#> PBC 0.000
+#>
+#> Covariances:
+#> Estimate Std.Error z.value P(>|z|)
+#> PBC ~~
+#> ATT 0.678 0.029 23.45 0.000
+#> SN 0.678 0.029 23.08 0.000
+#> ATT ~~
+#> SN 0.629 0.029 21.70 0.000
+#>
+#> Variances:
+#> Estimate Std.Error z.value P(>|z|)
+#> int1 0.158 0.009 18.22 0.000
+#> int2 0.160 0.008 20.38 0.000
+#> int3 0.168 0.007 23.63 0.000
+#> att1 0.167 0.007 23.53 0.000
+#> att2 0.150 0.006 24.71 0.000
+#> att3 0.160 0.006 26.38 0.000
+#> att4 0.162 0.006 27.64 0.000
+#> att5 0.159 0.006 24.93 0.000
+#> sn1 0.178 0.015 12.09 0.000
+#> sn2 0.157 0.012 13.26 0.000
+#> pbc1 0.145 0.008 18.44 0.000
+#> pbc2 0.160 0.007 21.42 0.000
+#> pbc3 0.154 0.006 23.80 0.000
+#> b1 0.185 0.020 9.42 0.000
+#> b2 0.135 0.018 7.60 0.000
+#> BEH 0.475 0.024 19.74 0.000
+#> PBC 0.962 0.036 27.04 0.000
+#> ATT 0.998 0.037 26.93 0.000
+#> SN 0.988 0.039 25.23 0.000
+#> INT 0.488 0.020 24.59 0.000
If you’re using one of the product indicator approaches, you might want to use some lavaan functions on top of the estimated lavaan object. To do so, you can extract the lavaan object using the extract_lavaan() function.
+library(lavaan)
+#> This is lavaan 0.6-18
+#> lavaan is FREE software! Please report any bugs.
+
+m1 <- '
+# Outer Model
+X =~ x1 + x2 + x3
+Y =~ y1 + y2 + y3
+Z =~ z1 + z2 + z3
+
+# Inner model
+Y ~ X + Z + X:Z
+'
+
+est <- modsem(m1, oneInt)
+lav_object <- extract_lavaan(est)
+bootstrap <- bootstrapLavaan(lav_object, R = 500)
Both the LMS and QML approaches work on most models, but interaction effects involving endogenous variables can be a bit tricky to estimate (see the vignette). Both approaches (particularly the LMS approach) are quite computationally intensive, and are thus partly implemented in C++ (using Rcpp and RcppArmadillo). Additionally, starting parameters are estimated using the double centering approach (along with the means of the observed variables) to speed up convergence. If you want to see the progress of the estimation process, you can use `verbose = TRUE`.
+Here you can see an example of the LMS approach for a simple model. +By default the summary function calculates fit measures compared to a +null model (i.e., the same model without an interaction term).
+
+library(modsem)
+m1 <- '
+# Outer Model
+ X =~ x1
+ X =~ x2 + x3
+ Z =~ z1 + z2 + z3
+ Y =~ y1 + y2 + y3
+
+# Inner model
+ Y ~ X + Z
+ Y ~ X:Z
+'
+
+lms1 <- modsem(m1, oneInt, method = "lms")
+summary(lms1, standardized = TRUE) # standardized estimates
+#> Estimating null model
+#> EM: Iteration = 1, LogLik = -17831.87, Change = -17831.875
+#> EM: Iteration = 2, LogLik = -17831.87, Change = 0.000
+#>
+#> modsem (version 1.0.3):
+#> Estimator LMS
+#> Optimization method EM-NLMINB
+#> Number of observations 2000
+#> Number of iterations 92
+#> Loglikelihood -14687.85
+#> Akaike (AIC) 29437.71
+#> Bayesian (BIC) 29611.34
+#>
+#> Numerical Integration:
+#> Points of integration (per dim) 24
+#> Dimensions 1
+#> Total points of integration 24
+#>
+#> Fit Measures for H0:
+#> Loglikelihood -17832
+#> Akaike (AIC) 35723.75
+#> Bayesian (BIC) 35891.78
+#> Chi-square 17.52
+#> Degrees of Freedom (Chi-square) 24
+#> P-value (Chi-square) 0.826
+#> RMSEA 0.000
+#>
+#> Comparative fit to H0 (no interaction effect)
+#> Loglikelihood change 3144.02
+#> Difference test (D) 6288.04
+#> Degrees of freedom (D) 1
+#> P-value (D) 0.000
+#>
+#> R-Squared:
+#> Y 0.596
+#> R-Squared Null-Model (H0):
+#> Y 0.395
+#> R-Squared Change:
+#> Y 0.201
+#>
+#> Parameter Estimates:
+#> Coefficients standardized
+#> Information expected
+#> Standard errors standard
+#>
+#> Latent Variables:
+#> Estimate Std.Error z.value P(>|z|)
+#> X =~
+#> x1 0.926
+#> x2 0.891 0.014 64.39 0.000
+#> x3 0.912 0.013 67.69 0.000
+#> Z =~
+#> z1 0.927
+#> z2 0.898 0.014 64.59 0.000
+#> z3 0.913 0.013 67.87 0.000
+#> Y =~
+#> y1 0.969
+#> y2 0.954 0.009 105.92 0.000
+#> y3 0.961 0.009 111.95 0.000
+#>
+#> Regressions:
+#> Estimate Std.Error z.value P(>|z|)
+#> Y ~
+#> X 0.427 0.020 21.79 0.000
+#> Z 0.370 0.018 20.16 0.000
+#> X:Z 0.454 0.017 26.28 0.000
+#>
+#> Covariances:
+#> Estimate Std.Error z.value P(>|z|)
+#> X ~~
+#> Z 0.199 0.024 8.43 0.000
+#>
+#> Variances:
+#> Estimate Std.Error z.value P(>|z|)
+#> x1 0.142 0.007 19.27 0.000
+#> x2 0.206 0.009 23.86 0.000
+#> x3 0.169 0.008 21.31 0.000
+#> z1 0.141 0.008 18.34 0.000
+#> z2 0.193 0.009 22.39 0.000
+#> z3 0.167 0.008 20.52 0.000
+#> y1 0.061 0.003 17.93 0.000
+#> y2 0.090 0.004 22.72 0.000
+#> y3 0.077 0.004 20.69 0.000
+#> X 1.000 0.016 61.06 0.000
+#> Z 1.000 0.018 55.21 0.000
+#> Y 0.404 0.015 26.54 0.000
Here you can see the same example using the QML approach.
+
+qml1 <- modsem(m1, oneInt, method = "qml")
+summary(qml1)
+#> Estimating null model
+#> Starting M-step
+#>
+#> modsem (version 1.0.3):
+#> Estimator QML
+#> Optimization method NLMINB
+#> Number of observations 2000
+#> Number of iterations 111
+#> Loglikelihood -17496.22
+#> Akaike (AIC) 35054.43
+#> Bayesian (BIC) 35228.06
+#>
+#> Fit Measures for H0:
+#> Loglikelihood -17832
+#> Akaike (AIC) 35723.75
+#> Bayesian (BIC) 35891.78
+#> Chi-square 17.52
+#> Degrees of Freedom (Chi-square) 24
+#> P-value (Chi-square) 0.826
+#> RMSEA 0.000
+#>
+#> Comparative fit to H0 (no interaction effect)
+#> Loglikelihood change 335.66
+#> Difference test (D) 671.32
+#> Degrees of freedom (D) 1
+#> P-value (D) 0.000
+#>
+#> R-Squared:
+#> Y 0.607
+#> R-Squared Null-Model (H0):
+#> Y 0.395
+#> R-Squared Change:
+#> Y 0.211
+#>
+#> Parameter Estimates:
+#> Coefficients unstandardized
+#> Information observed
+#> Standard errors standard
+#>
+#> Latent Variables:
+#> Estimate Std.Error z.value P(>|z|)
+#> X =~
+#> x1 1.000
+#> x2 0.803 0.013 63.96 0.000
+#> x3 0.914 0.013 67.80 0.000
+#> Z =~
+#> z1 1.000
+#> z2 0.810 0.012 65.12 0.000
+#> z3 0.881 0.013 67.62 0.000
+#> Y =~
+#> y1 1.000
+#> y2 0.798 0.007 107.57 0.000
+#> y3 0.899 0.008 112.55 0.000
+#>
+#> Regressions:
+#> Estimate Std.Error z.value P(>|z|)
+#> Y ~
+#> X 0.674 0.032 20.94 0.000
+#> Z 0.566 0.030 18.96 0.000
+#> X:Z 0.712 0.028 25.45 0.000
+#>
+#> Intercepts:
+#> Estimate Std.Error z.value P(>|z|)
+#> x1 1.023 0.024 42.89 0.000
+#> x2 1.215 0.020 60.99 0.000
+#> x3 0.919 0.022 41.48 0.000
+#> z1 1.012 0.024 41.57 0.000
+#> z2 1.206 0.020 59.27 0.000
+#> z3 0.916 0.022 42.06 0.000
+#> y1 1.038 0.033 31.45 0.000
+#> y2 1.221 0.027 45.49 0.000
+#> y3 0.955 0.030 31.86 0.000
+#> Y 0.000
+#> X 0.000
+#> Z 0.000
+#>
+#> Covariances:
+#> Estimate Std.Error z.value P(>|z|)
+#> X ~~
+#> Z 0.200 0.024 8.24 0.000
+#>
+#> Variances:
+#> Estimate Std.Error z.value P(>|z|)
+#> x1 0.158 0.009 18.14 0.000
+#> x2 0.162 0.007 23.19 0.000
+#> x3 0.165 0.008 20.82 0.000
+#> z1 0.166 0.009 18.34 0.000
+#> z2 0.159 0.007 22.62 0.000
+#> z3 0.158 0.008 20.71 0.000
+#> y1 0.159 0.009 17.98 0.000
+#> y2 0.154 0.007 22.67 0.000
+#> y3 0.164 0.008 20.71 0.000
+#> X 0.983 0.036 26.99 0.000
+#> Z 1.019 0.038 26.95 0.000
+#> Y 0.943 0.038 24.87 0.000
Here is a more complicated example, using a model based on the
+theory of planned behaviour (TPB), with two endogenous variables and an
+interaction between an endogenous and an exogenous variable. When
+estimating more complicated models with the LMS approach, it is
+recommended that you increase the number of nodes used for numerical
+integration. By default the number of nodes is set to 16, and it can be
+increased using the nodes argument; this argument has no effect on the
+QML approach. When there is an interaction effect between an endogenous
+and an exogenous variable, it is recommended that you use at least 32
+nodes for the LMS approach. You can also get robust standard errors by
+setting robust.se = TRUE in the modsem() function.
Note: If you want the LMS approach to give results as similar as
+possible to those of Mplus, you should increase the number of nodes
+(e.g., nodes = 100).
+# ATT = Attitude,
+# PBC = Perceived Behavioural Control
+# INT = Intention
+# SN = Subjective Norms
+# BEH = Behaviour
+tpb <- '
+# Outer Model (Based on Hagger et al., 2007)
+ ATT =~ att1 + att2 + att3 + att4 + att5
+ SN =~ sn1 + sn2
+ PBC =~ pbc1 + pbc2 + pbc3
+ INT =~ int1 + int2 + int3
+ BEH =~ b1 + b2
+
+# Inner Model (Based on Steinmetz et al., 2011)
+ INT ~ ATT + SN + PBC
+ BEH ~ INT + PBC
+ BEH ~ INT:PBC
+'
+
+lms2 <- modsem(tpb, TPB, method = "lms", nodes = 32)
+summary(lms2)
+#> Estimating null model
+#> EM: Iteration = 1, LogLik = -26393.22, Change = -26393.223
+#> EM: Iteration = 2, LogLik = -26393.22, Change = 0.000
+#>
+#> modsem (version 1.0.3):
+#> Estimator LMS
+#> Optimization method EM-NLMINB
+#> Number of observations 2000
+#> Number of iterations 70
+#> Loglikelihood -23439.02
+#> Akaike (AIC) 46986.04
+#> Bayesian (BIC) 47288.49
+#>
+#> Numerical Integration:
+#> Points of integration (per dim) 32
+#> Dimensions 1
+#> Total points of integration 32
+#>
+#> Fit Measures for H0:
+#> Loglikelihood -26393
+#> Akaike (AIC) 52892.45
+#> Bayesian (BIC) 53189.29
+#> Chi-square 66.27
+#> Degrees of Freedom (Chi-square) 82
+#> P-value (Chi-square) 0.897
+#> RMSEA 0.000
+#>
+#> Comparative fit to H0 (no interaction effect)
+#> Loglikelihood change 2954.20
+#> Difference test (D) 5908.41
+#> Degrees of freedom (D) 1
+#> P-value (D) 0.000
+#>
+#> R-Squared:
+#> INT 0.364
+#> BEH 0.259
+#> R-Squared Null-Model (H0):
+#> INT 0.367
+#> BEH 0.210
+#> R-Squared Change:
+#> INT -0.003
+#> BEH 0.049
+#>
+#> Parameter Estimates:
+#> Coefficients unstandardized
+#> Information expected
+#> Standard errors standard
+#>
+#> Latent Variables:
+#> Estimate Std.Error z.value P(>|z|)
+#> PBC =~
+#> pbc1 1.000
+#> pbc2 0.914 0.013 68.52 0.000
+#> pbc3 0.802 0.012 65.02 0.000
+#> ATT =~
+#> att1 1.000
+#> att2 0.878 0.012 70.81 0.000
+#> att3 0.789 0.012 65.77 0.000
+#> att4 0.695 0.011 61.09 0.000
+#> att5 0.887 0.013 70.26 0.000
+#> SN =~
+#> sn1 1.000
+#> sn2 0.889 0.017 52.00 0.000
+#> INT =~
+#> int1 1.000
+#> int2 0.913 0.016 58.38 0.000
+#> int3 0.807 0.015 55.37 0.000
+#> BEH =~
+#> b1 1.000
+#> b2 0.959 0.033 29.28 0.000
+#>
+#> Regressions:
+#> Estimate Std.Error z.value P(>|z|)
+#> INT ~
+#> PBC 0.218 0.030 7.36 0.000
+#> ATT 0.214 0.026 8.19 0.000
+#> SN 0.176 0.027 6.43 0.000
+#> BEH ~
+#> PBC 0.233 0.022 10.35 0.000
+#> INT 0.188 0.025 7.62 0.000
+#> PBC:INT 0.205 0.019 10.90 0.000
+#>
+#> Intercepts:
+#> Estimate Std.Error z.value P(>|z|)
+#> pbc1 0.990 0.022 45.57 0.000
+#> pbc2 0.978 0.020 48.28 0.000
+#> pbc3 0.985 0.018 53.86 0.000
+#> att1 1.009 0.023 43.19 0.000
+#> att2 1.002 0.021 48.19 0.000
+#> att3 1.012 0.019 53.21 0.000
+#> att4 0.995 0.017 56.95 0.000
+#> att5 0.988 0.021 46.75 0.000
+#> sn1 1.001 0.023 42.73 0.000
+#> sn2 1.006 0.021 48.06 0.000
+#> int1 1.010 0.021 47.81 0.000
+#> int2 1.009 0.020 51.14 0.000
+#> int3 1.002 0.018 56.02 0.000
+#> b1 0.999 0.021 47.31 0.000
+#> b2 1.017 0.020 51.50 0.000
+#> INT 0.000
+#> BEH 0.000
+#> PBC 0.000
+#> ATT 0.000
+#> SN 0.000
+#>
+#> Covariances:
+#> Estimate Std.Error z.value P(>|z|)
+#> PBC ~~
+#> ATT 0.668 0.021 31.78 0.000
+#> SN 0.668 0.022 30.52 0.000
+#> ATT ~~
+#> SN 0.623 0.019 32.90 0.000
+#>
+#> Variances:
+#> Estimate Std.Error z.value P(>|z|)
+#> pbc1 0.148 0.008 18.81 0.000
+#> pbc2 0.159 0.007 21.62 0.000
+#> pbc3 0.155 0.007 23.64 0.000
+#> att1 0.167 0.007 23.64 0.000
+#> att2 0.150 0.006 24.73 0.000
+#> att3 0.159 0.006 26.68 0.000
+#> att4 0.162 0.006 27.71 0.000
+#> att5 0.159 0.006 25.11 0.000
+#> sn1 0.178 0.015 11.97 0.000
+#> sn2 0.156 0.012 13.20 0.000
+#> int1 0.157 0.009 18.25 0.000
+#> int2 0.160 0.008 20.48 0.000
+#> int3 0.168 0.007 24.27 0.000
+#> b1 0.185 0.020 9.46 0.000
+#> b2 0.136 0.018 7.71 0.000
+#> PBC 0.947 0.017 55.23 0.000
+#> ATT 0.992 0.014 69.87 0.000
+#> SN 0.981 0.015 64.37 0.000
+#> INT 0.491 0.020 24.97 0.000
+#> BEH 0.456 0.023 19.46 0.000
+
+qml2 <- modsem(tpb, TPB, method = "qml")
+summary(qml2, standardized = TRUE) # standardized estimates
+#> Estimating null model
+#> Starting M-step
+#>
+#> modsem (version 1.0.3):
+#> Estimator QML
+#> Optimization method NLMINB
+#> Number of observations 2000
+#> Number of iterations 73
+#> Loglikelihood -26326.25
+#> Akaike (AIC) 52760.5
+#> Bayesian (BIC) 53062.95
+#>
+#> Fit Measures for H0:
+#> Loglikelihood -26393
+#> Akaike (AIC) 52892.45
+#> Bayesian (BIC) 53189.29
+#> Chi-square 66.27
+#> Degrees of Freedom (Chi-square) 82
+#> P-value (Chi-square) 0.897
+#> RMSEA 0.000
+#>
+#> Comparative fit to H0 (no interaction effect)
+#> Loglikelihood change 66.97
+#> Difference test (D) 133.95
+#> Degrees of freedom (D) 1
+#> P-value (D) 0.000
+#>
+#> R-Squared:
+#> INT 0.366
+#> BEH 0.263
+#> R-Squared Null-Model (H0):
+#> INT 0.367
+#> BEH 0.210
+#> R-Squared Change:
+#> INT 0.000
+#> BEH 0.053
+#>
+#> Parameter Estimates:
+#> Coefficients standardized
+#> Information observed
+#> Standard errors standard
+#>
+#> Latent Variables:
+#> Estimate Std.Error z.value P(>|z|)
+#> PBC =~
+#> pbc1 0.933
+#> pbc2 0.913 0.013 69.47 0.000
+#> pbc3 0.894 0.014 66.10 0.000
+#> ATT =~
+#> att1 0.925
+#> att2 0.915 0.013 71.56 0.000
+#> att3 0.892 0.013 66.38 0.000
+#> att4 0.865 0.014 61.00 0.000
+#> att5 0.912 0.013 70.85 0.000
+#> SN =~
+#> sn1 0.921
+#> sn2 0.913 0.017 52.61 0.000
+#> INT =~
+#> int1 0.912
+#> int2 0.895 0.015 59.05 0.000
+#> int3 0.866 0.016 55.73 0.000
+#> BEH =~
+#> b1 0.877
+#> b2 0.900 0.028 31.71 0.000
+#>
+#> Regressions:
+#> Estimate Std.Error z.value P(>|z|)
+#> INT ~
+#> PBC 0.243 0.033 7.35 0.000
+#> ATT 0.242 0.030 8.16 0.000
+#> SN 0.199 0.031 6.37 0.000
+#> BEH ~
+#> PBC 0.289 0.028 10.37 0.000
+#> INT 0.212 0.028 7.69 0.000
+#> PBC:INT 0.227 0.020 11.32 0.000
+#>
+#> Covariances:
+#> Estimate Std.Error z.value P(>|z|)
+#> PBC ~~
+#> ATT 0.692 0.030 23.45 0.000
+#> SN 0.695 0.030 23.08 0.000
+#> ATT ~~
+#> SN 0.634 0.029 21.70 0.000
+#>
+#> Variances:
+#> Estimate Std.Error z.value P(>|z|)
+#> pbc1 0.130 0.007 18.39 0.000
+#> pbc2 0.166 0.008 21.43 0.000
+#> pbc3 0.201 0.008 23.89 0.000
+#> att1 0.144 0.006 23.53 0.000
+#> att2 0.164 0.007 24.71 0.000
+#> att3 0.204 0.008 26.38 0.000
+#> att4 0.252 0.009 27.65 0.000
+#> att5 0.168 0.007 24.93 0.000
+#> sn1 0.153 0.013 12.09 0.000
+#> sn2 0.167 0.013 13.26 0.000
+#> int1 0.168 0.009 18.11 0.000
+#> int2 0.199 0.010 20.41 0.000
+#> int3 0.249 0.011 23.55 0.000
+#> b1 0.231 0.023 10.12 0.000
+#> b2 0.191 0.024 8.10 0.000
+#> PBC 1.000 0.037 27.07 0.000
+#> ATT 1.000 0.037 26.93 0.000
+#> SN 1.000 0.040 25.22 0.000
+#> INT 0.634 0.026 24.64 0.000
+#> BEH 0.737 0.037 20.17 0.000
There are a number of approaches for estimating interaction effects in SEM. In modsem(), the method argument allows you to choose which one to use.
+"ca"
= constrained approach (Algina & Moulder,
+2001)"uca"
= unconstrained approach (Marsh, 2004)"rca"
= residual centering approach (Little et al.,
+2006)
+"dblcent"
+= double centering approach (Marsh,
+2013)"pind"
= basic product indicator approach (not
+recommended)"lms"
= The Latent Moderated Structural equations
+approach
+"qml"
= The Quasi Maximum Likelihood approach.
+"mplus"
+= estimation through Mplus (requires Mplus to be
+installed)
+
+
+m1 <- '
+# Outer Model
+X =~ x1 + x2 + x3
+Y =~ y1 + y2 + y3
+Z =~ z1 + z2 + z3
+
+# Inner model
+Y ~ X + Z + X:Z
+'
+
+modsem(m1, data = oneInt, method = "ca")
+modsem(m1, data = oneInt, method = "uca")
+modsem(m1, data = oneInt, method = "rca")
+modsem(m1, data = oneInt, method = "dblcent")
+modsem(m1, data = oneInt, method = "mplus")
+modsem(m1, data = oneInt, method = "lms")
+modsem(m1, data = oneInt, method = "qml")
modsem basically introduces a new feature to the lavaan syntax – the colon operator (“:”). The colon operator works the same way as in the lm() function. To specify an interaction effect between two variables, you join them by Var1:Var2. Models can be estimated using one of the product indicator approaches (“ca”, “rca”, “dblcent”, “pind”), the latent moderated structural equations approach (“lms”), or the quasi maximum likelihood approach (“qml”). The product indicator approaches are estimated via lavaan, whilst the lms and qml approaches are estimated via modsem itself.
+Here we can see a simple example of how to specify an interaction +effect between two latent variables in lavaan.
+
+m1 <- '
+ # Outer Model
+ X =~ x1 + x2 +x3
+ Y =~ y1 + y2 + y3
+ Z =~ z1 + z2 + z3
+
+ # Inner model
+ Y ~ X + Z + X:Z
+'
+
+est1 <- modsem(m1, oneInt)
+summary(est1)
+#> modsem:
+#> Method = dblcent
+#> lavaan 0.6-18 ended normally after 159 iterations
+#>
+#> Estimator ML
+#> Optimization method NLMINB
+#> Number of model parameters 60
+#>
+#> Number of observations 2000
+#>
+#> Model Test User Model:
+#>
+#> Test statistic 122.924
+#> Degrees of freedom 111
+#> P-value (Chi-square) 0.207
+#>
+#> Parameter Estimates:
+#>
+#> Standard errors Standard
+#> Information Expected
+#> Information saturated (h1) model Structured
+#>
+#> Latent Variables:
+#> Estimate Std.Err z-value P(>|z|)
+#> X =~
+#> x1 1.000
+#> x2 0.804 0.013 63.612 0.000
+#> x3 0.916 0.014 67.144 0.000
+#> Y =~
+#> y1 1.000
+#> y2 0.798 0.007 107.428 0.000
+#> y3 0.899 0.008 112.453 0.000
+#> Z =~
+#> z1 1.000
+#> z2 0.812 0.013 64.763 0.000
+#> z3 0.882 0.013 67.014 0.000
+#> XZ =~
+#> x1z1 1.000
+#> x2z1 0.805 0.013 60.636 0.000
+#> x3z1 0.877 0.014 62.680 0.000
+#> x1z2 0.793 0.013 59.343 0.000
+#> x2z2 0.646 0.015 43.672 0.000
+#> x3z2 0.706 0.016 44.292 0.000
+#> x1z3 0.887 0.014 63.700 0.000
+#> x2z3 0.716 0.016 45.645 0.000
+#> x3z3 0.781 0.017 45.339 0.000
+#>
+#> Regressions:
+#> Estimate Std.Err z-value P(>|z|)
+#> Y ~
+#> X 0.675 0.027 25.379 0.000
+#> Z 0.561 0.026 21.606 0.000
+#> XZ 0.702 0.027 26.360 0.000
+#>
+#> Covariances:
+#> Estimate Std.Err z-value P(>|z|)
+#> .x1z1 ~~
+#> .x2z2 0.000
+#> .x2z3 0.000
+#> .x3z2 0.000
+#> .x3z3 0.000
+#> .x2z1 ~~
+#> .x1z2 0.000
+#> .x1z2 ~~
+#> .x2z3 0.000
+#> .x3z1 ~~
+#> .x1z2 0.000
+#> .x1z2 ~~
+#> .x3z3 0.000
+#> .x2z1 ~~
+#> .x1z3 0.000
+#> .x2z2 ~~
+#> .x1z3 0.000
+#> .x3z1 ~~
+#> .x1z3 0.000
+#> .x3z2 ~~
+#> .x1z3 0.000
+#> .x2z1 ~~
+#> .x3z2 0.000
+#> .x3z3 0.000
+#> .x3z1 ~~
+#> .x2z2 0.000
+#> .x2z2 ~~
+#> .x3z3 0.000
+#> .x3z1 ~~
+#> .x2z3 0.000
+#> .x3z2 ~~
+#> .x2z3 0.000
+#> .x1z1 ~~
+#> .x1z2 0.115 0.008 14.802 0.000
+#> .x1z3 0.114 0.008 13.947 0.000
+#> .x2z1 0.125 0.008 16.095 0.000
+#> .x3z1 0.140 0.009 16.135 0.000
+#> .x1z2 ~~
+#> .x1z3 0.103 0.007 14.675 0.000
+#> .x2z2 0.128 0.006 20.850 0.000
+#> .x3z2 0.146 0.007 21.243 0.000
+#> .x1z3 ~~
+#> .x2z3 0.116 0.007 17.818 0.000
+#> .x3z3 0.135 0.007 18.335 0.000
+#> .x2z1 ~~
+#> .x2z2 0.135 0.006 20.905 0.000
+#> .x2z3 0.145 0.007 21.145 0.000
+#> .x3z1 0.114 0.007 16.058 0.000
+#> .x2z2 ~~
+#> .x2z3 0.117 0.006 20.419 0.000
+#> .x3z2 0.116 0.006 20.586 0.000
+#> .x2z3 ~~
+#> .x3z3 0.109 0.006 18.059 0.000
+#> .x3z1 ~~
+#> .x3z2 0.138 0.007 19.331 0.000
+#> .x3z3 0.158 0.008 20.269 0.000
+#> .x3z2 ~~
+#> .x3z3 0.131 0.007 19.958 0.000
+#> X ~~
+#> Z 0.201 0.024 8.271 0.000
+#> XZ 0.016 0.025 0.628 0.530
+#> Z ~~
+#> XZ 0.062 0.025 2.449 0.014
+#>
+#> Variances:
+#> Estimate Std.Err z-value P(>|z|)
+#> .x1 0.160 0.009 17.871 0.000
+#> .x2 0.162 0.007 22.969 0.000
+#> .x3 0.163 0.008 20.161 0.000
+#> .y1 0.159 0.009 17.896 0.000
+#> .y2 0.154 0.007 22.640 0.000
+#> .y3 0.164 0.008 20.698 0.000
+#> .z1 0.168 0.009 18.143 0.000
+#> .z2 0.158 0.007 22.264 0.000
+#> .z3 0.158 0.008 20.389 0.000
+#> .x1z1 0.311 0.014 22.227 0.000
+#> .x2z1 0.292 0.011 27.287 0.000
+#> .x3z1 0.327 0.012 26.275 0.000
+#> .x1z2 0.290 0.011 26.910 0.000
+#> .x2z2 0.239 0.008 29.770 0.000
+#> .x3z2 0.270 0.009 29.117 0.000
+#> .x1z3 0.272 0.012 23.586 0.000
+#> .x2z3 0.245 0.009 27.979 0.000
+#> .x3z3 0.297 0.011 28.154 0.000
+#> X 0.981 0.036 26.895 0.000
+#> .Y 0.990 0.038 25.926 0.000
+#> Z 1.016 0.038 26.856 0.000
+#> XZ 1.045 0.044 24.004 0.000
By default the model is estimated using the “dblcent” method. If you want to use another method, it can be changed using the method argument.
+
+est1 <- modsem(m1, oneInt, method = "lms")
+summary(est1)
+#> Estimating null model
+#> EM: Iteration = 1, LogLik = -17831.87, Change = -17831.875
+#> EM: Iteration = 2, LogLik = -17831.87, Change = 0.000
+#>
+#> modsem (version 1.0.3):
+#> Estimator LMS
+#> Optimization method EM-NLMINB
+#> Number of observations 2000
+#> Number of iterations 92
+#> Loglikelihood -14687.85
+#> Akaike (AIC) 29437.71
+#> Bayesian (BIC) 29611.34
+#>
+#> Numerical Integration:
+#> Points of integration (per dim) 24
+#> Dimensions 1
+#> Total points of integration 24
+#>
+#> Fit Measures for H0:
+#> Loglikelihood -17832
+#> Akaike (AIC) 35723.75
+#> Bayesian (BIC) 35891.78
+#> Chi-square 17.52
+#> Degrees of Freedom (Chi-square) 24
+#> P-value (Chi-square) 0.826
+#> RMSEA 0.000
+#>
+#> Comparative fit to H0 (no interaction effect)
+#> Loglikelihood change 3144.02
+#> Difference test (D) 6288.04
+#> Degrees of freedom (D) 1
+#> P-value (D) 0.000
+#>
+#> R-Squared:
+#> Y 0.596
+#> R-Squared Null-Model (H0):
+#> Y 0.395
+#> R-Squared Change:
+#> Y 0.201
+#>
+#> Parameter Estimates:
+#> Coefficients unstandardized
+#> Information expected
+#> Standard errors standard
+#>
+#> Latent Variables:
+#> Estimate Std.Error z.value P(>|z|)
+#> X =~
+#> x1 1.000
+#> x2 0.804 0.012 64.39 0.000
+#> x3 0.915 0.014 67.69 0.000
+#> Z =~
+#> z1 1.000
+#> z2 0.810 0.013 64.59 0.000
+#> z3 0.881 0.013 67.87 0.000
+#> Y =~
+#> y1 1.000
+#> y2 0.799 0.008 105.92 0.000
+#> y3 0.899 0.008 111.95 0.000
+#>
+#> Regressions:
+#> Estimate Std.Error z.value P(>|z|)
+#> Y ~
+#> X 0.676 0.031 21.79 0.000
+#> Z 0.572 0.028 20.16 0.000
+#> X:Z 0.712 0.027 26.28 0.000
+#>
+#> Intercepts:
+#> Estimate Std.Error z.value P(>|z|)
+#> x1 1.025 0.019 52.75 0.000
+#> x2 1.218 0.017 73.47 0.000
+#> x3 0.922 0.018 50.64 0.000
+#> z1 1.016 0.024 41.94 0.000
+#> z2 1.209 0.020 59.65 0.000
+#> z3 0.920 0.022 42.33 0.000
+#> y1 1.046 0.031 33.47 0.000
+#> y2 1.227 0.025 48.20 0.000
+#> y3 0.962 0.028 33.81 0.000
+#> Y 0.000
+#> X 0.000
+#> Z 0.000
+#>
+#> Covariances:
+#> Estimate Std.Error z.value P(>|z|)
+#> X ~~
+#> Z 0.198 0.023 8.43 0.000
+#>
+#> Variances:
+#> Estimate Std.Error z.value P(>|z|)
+#> x1 0.160 0.008 19.27 0.000
+#> x2 0.163 0.007 23.86 0.000
+#> x3 0.165 0.008 21.31 0.000
+#> z1 0.166 0.009 18.34 0.000
+#> z2 0.160 0.007 22.39 0.000
+#> z3 0.158 0.008 20.52 0.000
+#> y1 0.160 0.009 17.93 0.000
+#> y2 0.154 0.007 22.72 0.000
+#> y3 0.163 0.008 20.69 0.000
+#> X 0.972 0.016 61.06 0.000
+#> Z 1.017 0.018 55.21 0.000
+#> Y 0.984 0.037 26.54 0.000
modsem does not only allow you to estimate interactions between latent variables, but also interactions between observed variables. Here we first run a regression with only observed variables, where there is an interaction between x1 and z1, and then estimate an equivalent model using modsem().
+Regression
+
+reg1 <- lm(y1 ~ x1*z1, oneInt)
+summary(reg1)
+#>
+#> Call:
+#> lm(formula = y1 ~ x1 * z1, data = oneInt)
+#>
+#> Residuals:
+#> Min 1Q Median 3Q Max
+#> -3.7155 -0.8087 -0.0367 0.8078 4.6531
+#>
+#> Coefficients:
+#> Estimate Std. Error t value Pr(>|t|)
+#> (Intercept) 0.51422 0.04618 11.135 <2e-16 ***
+#> x1 0.05477 0.03387 1.617 0.1060
+#> z1 -0.06575 0.03461 -1.900 0.0576 .
+#> x1:z1 0.54361 0.02272 23.926 <2e-16 ***
+#> ---
+#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
+#>
+#> Residual standard error: 1.184 on 1996 degrees of freedom
+#> Multiple R-squared: 0.4714, Adjusted R-squared: 0.4706
+#> F-statistic: 593.3 on 3 and 1996 DF, p-value: < 2.2e-16
Using modsem() In general, when you have interactions between observed variables it is recommended that you use method = “pind”. Interaction effects between observed variables are not supported by the LMS and QML approaches. In certain circumstances, you can define a latent variable with a single indicator to estimate the interaction effect between two observed variables in the LMS and QML approaches, but this is generally not recommended.
+
+# Here we use "pind" as the method (see chapter 3)
+est2 <- modsem('y1 ~ x1 + z1 + x1:z1', data = oneInt, method = "pind")
+summary(est2)
+#> modsem:
+#> Method = pind
+#> lavaan 0.6-18 ended normally after 1 iteration
+#>
+#> Estimator ML
+#> Optimization method NLMINB
+#> Number of model parameters 4
+#>
+#> Number of observations 2000
+#>
+#> Model Test User Model:
+#>
+#> Test statistic 0.000
+#> Degrees of freedom 0
+#>
+#> Parameter Estimates:
+#>
+#> Standard errors Standard
+#> Information Expected
+#> Information saturated (h1) model Structured
+#>
+#> Regressions:
+#> Estimate Std.Err z-value P(>|z|)
+#> y1 ~
+#> x1 0.055 0.034 1.619 0.105
+#> z1 -0.066 0.035 -1.902 0.057
+#> x1z1 0.544 0.023 23.950 0.000
+#>
+#> Variances:
+#> Estimate Std.Err z-value P(>|z|)
+#> .y1 1.399 0.044 31.623 0.000
modsem also allows you to estimate interaction effects between latent and observed variables. To do so, you just join a latent and an observed variable by a colon, e.g., ‘latent:observed’. As with interactions between observed variables, it is generally recommended that you use method = “pind” for estimating interactions between observed and latent variables.
+
+m3 <- '
+ # Outer Model
+ X =~ x1 + x2 +x3
+ Y =~ y1 + y2 + y3
+
+ # Inner model
+ Y ~ X + z1 + X:z1
+'
+
+est3 <- modsem(m3, oneInt, method = "pind")
+summary(est3)
+#> modsem:
+#> Method = pind
+#> lavaan 0.6-18 ended normally after 45 iterations
+#>
+#> Estimator ML
+#> Optimization method NLMINB
+#> Number of model parameters 22
+#>
+#> Number of observations 2000
+#>
+#> Model Test User Model:
+#>
+#> Test statistic 4468.171
+#> Degrees of freedom 32
+#> P-value (Chi-square) 0.000
+#>
+#> Parameter Estimates:
+#>
+#> Standard errors Standard
+#> Information Expected
+#> Information saturated (h1) model Structured
+#>
+#> Latent Variables:
+#> Estimate Std.Err z-value P(>|z|)
+#> X =~
+#> x1 1.000
+#> x2 0.803 0.013 63.697 0.000
+#> x3 0.915 0.014 67.548 0.000
+#> Y =~
+#> y1 1.000
+#> y2 0.798 0.007 115.375 0.000
+#> y3 0.899 0.007 120.783 0.000
+#> Xz1 =~
+#> x1z1 1.000
+#> x2z1 0.947 0.010 96.034 0.000
+#> x3z1 0.902 0.009 99.512 0.000
+#>
+#> Regressions:
+#> Estimate Std.Err z-value P(>|z|)
+#> Y ~
+#> X 0.021 0.034 0.614 0.540
+#> z1 -0.185 0.023 -8.096 0.000
+#> Xz1 0.646 0.017 37.126 0.000
+#>
+#> Covariances:
+#> Estimate Std.Err z-value P(>|z|)
+#> X ~~
+#> Xz1 1.243 0.055 22.523 0.000
+#>
+#> Variances:
+#> Estimate Std.Err z-value P(>|z|)
+#> .x1 0.158 0.009 17.976 0.000
+#> .x2 0.164 0.007 23.216 0.000
+#> .x3 0.162 0.008 20.325 0.000
+#> .y1 0.158 0.009 17.819 0.000
+#> .y2 0.154 0.007 22.651 0.000
+#> .y3 0.164 0.008 20.744 0.000
+#> .x1z1 0.315 0.017 18.328 0.000
+#> .x2z1 0.428 0.019 22.853 0.000
+#> .x3z1 0.337 0.016 21.430 0.000
+#> X 0.982 0.036 26.947 0.000
+#> .Y 1.112 0.040 27.710 0.000
+#> Xz1 3.965 0.136 29.217 0.000
In essence, quadratic effects are just a special case of interaction +effects. Thus modsem can also be used to estimate quadratic effects.
+
+
+m4 <- '
+# Outer Model
+X =~ x1 + x2 + x3
+Y =~ y1 + y2 + y3
+Z =~ z1 + z2 + z3
+
+# Inner model
+Y ~ X + Z + Z:X + X:X
+'
+
+est4 <- modsem(m4, oneInt, "qml")
+summary(est4)
+#> Estimating null model
+#> Starting M-step
+#>
+#> modsem (version 1.0.3):
+#> Estimator QML
+#> Optimization method NLMINB
+#> Number of observations 2000
+#> Number of iterations 123
+#> Loglikelihood -17496.2
+#> Akaike (AIC) 35056.4
+#> Bayesian (BIC) 35235.63
+#>
+#> Fit Measures for H0:
+#> Loglikelihood -17832
+#> Akaike (AIC) 35723.75
+#> Bayesian (BIC) 35891.78
+#> Chi-square 17.52
+#> Degrees of Freedom (Chi-square) 24
+#> P-value (Chi-square) 0.826
+#> RMSEA 0.000
+#>
+#> Comparative fit to H0 (no interaction effect)
+#> Loglikelihood change 335.68
+#> Difference test (D) 671.35
+#> Degrees of freedom (D) 2
+#> P-value (D) 0.000
+#>
+#> R-Squared:
+#> Y 0.607
+#> R-Squared Null-Model (H0):
+#> Y 0.395
+#> R-Squared Change:
+#> Y 0.212
+#>
+#> Parameter Estimates:
+#> Coefficients unstandardized
+#> Information observed
+#> Standard errors standard
+#>
+#> Latent Variables:
+#> Estimate Std.Error z.value P(>|z|)
+#> X =~
+#> x1 1.000
+#> x2 0.803 0.013 63.961 0.000
+#> x3 0.914 0.013 67.797 0.000
+#> Z =~
+#> z1 1.000
+#> z2 0.810 0.012 65.124 0.000
+#> z3 0.881 0.013 67.621 0.000
+#> Y =~
+#> y1 1.000
+#> y2 0.798 0.007 107.567 0.000
+#> y3 0.899 0.008 112.542 0.000
+#>
+#> Regressions:
+#> Estimate Std.Error z.value P(>|z|)
+#> Y ~
+#> X 0.674 0.032 20.888 0.000
+#> Z 0.566 0.030 18.948 0.000
+#> X:X -0.005 0.023 -0.207 0.836
+#> X:Z 0.713 0.029 24.554 0.000
+#>
+#> Intercepts:
+#> Estimate Std.Error z.value P(>|z|)
+#> x1 1.023 0.024 42.894 0.000
+#> x2 1.216 0.020 60.996 0.000
+#> x3 0.919 0.022 41.484 0.000
+#> z1 1.012 0.024 41.576 0.000
+#> z2 1.206 0.020 59.271 0.000
+#> z3 0.916 0.022 42.063 0.000
+#> y1 1.042 0.038 27.684 0.000
+#> y2 1.224 0.030 40.159 0.000
+#> y3 0.958 0.034 28.101 0.000
+#> Y 0.000
+#> X 0.000
+#> Z 0.000
+#>
+#> Covariances:
+#> Estimate Std.Error z.value P(>|z|)
+#> X ~~
+#> Z 0.200 0.024 8.239 0.000
+#>
+#> Variances:
+#> Estimate Std.Error z.value P(>|z|)
+#> x1 0.158 0.009 18.145 0.000
+#> x2 0.162 0.007 23.188 0.000
+#> x3 0.165 0.008 20.821 0.000
+#> z1 0.166 0.009 18.341 0.000
+#> z2 0.159 0.007 22.622 0.000
+#> z3 0.158 0.008 20.714 0.000
+#> y1 0.159 0.009 17.975 0.000
+#> y2 0.154 0.007 22.670 0.000
+#> y3 0.164 0.008 20.711 0.000
+#> X 0.983 0.036 26.994 0.000
+#> Z 1.019 0.038 26.951 0.000
+#> Y 0.943 0.038 24.820 0.000
Here we can see a more complicated example using the model for the +theory of planned behaviour.
+
+
+tpb <- '
+# Outer Model (Based on Hagger et al., 2007)
+ ATT =~ att1 + att2 + att3 + att4 + att5
+ SN =~ sn1 + sn2
+ PBC =~ pbc1 + pbc2 + pbc3
+ INT =~ int1 + int2 + int3
+ BEH =~ b1 + b2
+
+# Inner Model (Based on Steinmetz et al., 2011)
+ INT ~ ATT + SN + PBC
+ BEH ~ INT + PBC + INT:PBC
+'
+# the double centering apporach
+est_tpb <- modsem(tpb, TPB)
+
+# using the lms approach
+est_tpb_lms <- modsem(tpb, TPB, method = "lms")
+#> Warning: It is recommended that you have at least 32 nodes for interaction
+#> effects between exogenous and endogenous variables in the lms approach 'nodes =
+#> 24'
+summary(est_tpb_lms)
+#> Estimating null model
+#> EM: Iteration = 1, LogLik = -26393.22, Change = -26393.223
+#> EM: Iteration = 2, LogLik = -26393.22, Change = 0.000
+#>
+#> modsem (version 1.0.3):
+#> Estimator LMS
+#> Optimization method EM-NLMINB
+#> Number of observations 2000
+#> Number of iterations 103
+#> Loglikelihood -23463.37
+#> Akaike (AIC) 47034.74
+#> Bayesian (BIC) 47337.19
+#>
+#> Numerical Integration:
+#> Points of integration (per dim) 24
+#> Dimensions 1
+#> Total points of integration 24
+#>
+#> Fit Measures for H0:
+#> Loglikelihood -26393
+#> Akaike (AIC) 52892.45
+#> Bayesian (BIC) 53189.29
+#> Chi-square 66.27
+#> Degrees of Freedom (Chi-square) 82
+#> P-value (Chi-square) 0.897
+#> RMSEA 0.000
+#>
+#> Comparative fit to H0 (no interaction effect)
+#> Loglikelihood change 2929.85
+#> Difference test (D) 5859.70
+#> Degrees of freedom (D) 1
+#> P-value (D) 0.000
+#>
+#> R-Squared:
+#> INT 0.361
+#> BEH 0.248
+#> R-Squared Null-Model (H0):
+#> INT 0.367
+#> BEH 0.210
+#> R-Squared Change:
+#> INT -0.006
+#> BEH 0.038
+#>
+#> Parameter Estimates:
+#> Coefficients unstandardized
+#> Information expected
+#> Standard errors standard
+#>
+#> Latent Variables:
+#> Estimate Std.Error z.value P(>|z|)
+#> PBC =~
+#> pbc1 1.000
+#> pbc2 0.911 0.014 67.47 0.000
+#> pbc3 0.802 0.012 65.29 0.000
+#> ATT =~
+#> att1 1.000
+#> att2 0.877 0.012 71.30 0.000
+#> att3 0.789 0.012 65.67 0.000
+#> att4 0.695 0.011 60.83 0.000
+#> att5 0.887 0.013 70.47 0.000
+#> SN =~
+#> sn1 1.000
+#> sn2 0.889 0.017 51.65 0.000
+#> INT =~
+#> int1 1.000
+#> int2 0.913 0.016 58.82 0.000
+#> int3 0.807 0.015 55.32 0.000
+#> BEH =~
+#> b1 1.000
+#> b2 0.961 0.033 29.34 0.000
+#>
+#> Regressions:
+#> Estimate Std.Error z.value P(>|z|)
+#> INT ~
+#> PBC 0.217 0.030 7.30 0.000
+#> ATT 0.213 0.026 8.29 0.000
+#> SN 0.177 0.028 6.35 0.000
+#> BEH ~
+#> PBC 0.228 0.022 10.16 0.000
+#> INT 0.182 0.025 7.38 0.000
+#> PBC:INT 0.204 0.019 10.79 0.000
+#>
+#> Intercepts:
+#> Estimate Std.Error z.value P(>|z|)
+#> pbc1 0.959 0.018 52.11 0.000
+#> pbc2 0.950 0.017 54.90 0.000
+#> pbc3 0.960 0.016 61.08 0.000
+#> att1 0.987 0.022 45.68 0.000
+#> att2 0.983 0.019 51.10 0.000
+#> att3 0.995 0.018 56.12 0.000
+#> att4 0.980 0.016 60.13 0.000
+#> att5 0.969 0.019 49.85 0.000
+#> sn1 0.979 0.022 44.67 0.000
+#> sn2 0.987 0.020 50.00 0.000
+#> int1 0.995 0.020 48.93 0.000
+#> int2 0.995 0.019 52.40 0.000
+#> int3 0.990 0.017 56.69 0.000
+#> b1 0.989 0.021 47.79 0.000
+#> b2 1.008 0.019 51.98 0.000
+#> INT 0.000
+#> BEH 0.000
+#> PBC 0.000
+#> ATT 0.000
+#> SN 0.000
+#>
+#> Covariances:
+#> Estimate Std.Error z.value P(>|z|)
+#> PBC ~~
+#> ATT 0.658 0.020 32.58 0.000
+#> SN 0.657 0.021 31.11 0.000
+#> ATT ~~
+#> SN 0.616 0.019 32.97 0.000
+#>
+#> Variances:
+#> Estimate Std.Error z.value P(>|z|)
+#> pbc1 0.147 0.008 19.28 0.000
+#> pbc2 0.164 0.007 22.15 0.000
+#> pbc3 0.154 0.006 24.09 0.000
+#> att1 0.167 0.007 23.37 0.000
+#> att2 0.150 0.006 24.30 0.000
+#> att3 0.159 0.006 26.67 0.000
+#> att4 0.163 0.006 27.65 0.000
+#> att5 0.159 0.006 24.77 0.000
+#> sn1 0.178 0.015 12.09 0.000
+#> sn2 0.156 0.012 12.97 0.000
+#> int1 0.157 0.009 18.06 0.000
+#> int2 0.160 0.008 20.12 0.000
+#> int3 0.168 0.007 23.32 0.000
+#> b1 0.186 0.020 9.51 0.000
+#> b2 0.135 0.018 7.62 0.000
+#> PBC 0.933 0.015 60.78 0.000
+#> ATT 0.985 0.014 70.25 0.000
+#> SN 0.974 0.015 63.87 0.000
+#> INT 0.491 0.020 24.34 0.000
+#> BEH 0.456 0.023 19.60 0.000
Here is an example including two quadratic effects and one interaction
+effect, using the included dataset jordan. The dataset is a
+subset of the PISA 2006 dataset.
+m2 <- '
+ENJ =~ enjoy1 + enjoy2 + enjoy3 + enjoy4 + enjoy5
+CAREER =~ career1 + career2 + career3 + career4
+SC =~ academic1 + academic2 + academic3 + academic4 + academic5 + academic6
+CAREER ~ ENJ + SC + ENJ:ENJ + SC:SC + ENJ:SC
+'
+
+est_jordan <- modsem(m2, data = jordan)
+est_jordan_qml <- modsem(m2, data = jordan, method = "qml")
+summary(est_jordan_qml)
+#> Estimating null model
+#> Starting M-step
+#>
+#> modsem (version 1.0.3):
+#> Estimator QML
+#> Optimization method NLMINB
+#> Number of observations 6038
+#> Number of iterations 101
+#> Loglikelihood -110520.22
+#> Akaike (AIC) 221142.45
+#> Bayesian (BIC) 221484.44
+#>
+#> Fit Measures for H0:
+#> Loglikelihood -110521
+#> Akaike (AIC) 221138.58
+#> Bayesian (BIC) 221460.46
+#> Chi-square 1016.34
+#> Degrees of Freedom (Chi-square) 87
+#> P-value (Chi-square) 0.000
+#> RMSEA 0.005
+#>
+#> Comparative fit to H0 (no interaction effect)
+#> Loglikelihood change 1.07
+#> Difference test (D) 2.13
+#> Degrees of freedom (D) 3
+#> P-value (D) 0.546
+#>
+#> R-Squared:
+#> CAREER 0.512
+#> R-Squared Null-Model (H0):
+#> CAREER 0.510
+#> R-Squared Change:
+#> CAREER 0.002
+#>
+#> Parameter Estimates:
+#> Coefficients unstandardized
+#> Information observed
+#> Standard errors standard
+#>
+#> Latent Variables:
+#> Estimate Std.Error z.value P(>|z|)
+#> ENJ =~
+#> enjoy1 1.000
+#> enjoy2 1.002 0.020 50.587 0.000
+#> enjoy3 0.894 0.020 43.669 0.000
+#> enjoy4 0.999 0.021 48.227 0.000
+#> enjoy5 1.047 0.021 50.400 0.000
+#> SC =~
+#> academic1 1.000
+#> academic2 1.104 0.028 38.946 0.000
+#> academic3 1.235 0.030 41.720 0.000
+#> academic4 1.254 0.030 41.828 0.000
+#> academic5 1.113 0.029 38.647 0.000
+#> academic6 1.198 0.030 40.356 0.000
+#> CAREER =~
+#> career1 1.000
+#> career2 1.040 0.016 65.180 0.000
+#> career3 0.952 0.016 57.838 0.000
+#> career4 0.818 0.017 48.358 0.000
+#>
+#> Regressions:
+#> Estimate Std.Error z.value P(>|z|)
+#> CAREER ~
+#> ENJ 0.523 0.020 26.286 0.000
+#> SC 0.467 0.023 19.884 0.000
+#> ENJ:ENJ 0.026 0.022 1.206 0.228
+#> ENJ:SC -0.039 0.046 -0.851 0.395
+#> SC:SC -0.002 0.035 -0.058 0.953
+#>
+#> Intercepts:
+#> Estimate Std.Error z.value P(>|z|)
+#> enjoy1 0.000 0.013 -0.008 0.994
+#> enjoy2 0.000 0.015 0.010 0.992
+#> enjoy3 0.000 0.013 -0.023 0.982
+#> enjoy4 0.000 0.014 0.008 0.993
+#> enjoy5 0.000 0.014 0.025 0.980
+#> academic1 0.000 0.016 -0.009 0.993
+#> academic2 0.000 0.014 -0.009 0.993
+#> academic3 0.000 0.015 -0.028 0.978
+#> academic4 0.000 0.016 -0.015 0.988
+#> academic5 -0.001 0.014 -0.044 0.965
+#> academic6 0.001 0.015 0.048 0.962
+#> career1 -0.004 0.017 -0.204 0.838
+#> career2 -0.004 0.018 -0.248 0.804
+#> career3 -0.004 0.017 -0.214 0.830
+#> career4 -0.004 0.016 -0.232 0.816
+#> CAREER 0.000
+#> ENJ 0.000
+#> SC 0.000
+#>
+#> Covariances:
+#> Estimate Std.Error z.value P(>|z|)
+#> ENJ ~~
+#> SC 0.218 0.009 25.477 0.000
+#>
+#> Variances:
+#> Estimate Std.Error z.value P(>|z|)
+#> enjoy1 0.487 0.011 44.335 0.000
+#> enjoy2 0.488 0.011 44.406 0.000
+#> enjoy3 0.596 0.012 48.233 0.000
+#> enjoy4 0.488 0.011 44.561 0.000
+#> enjoy5 0.442 0.010 42.470 0.000
+#> academic1 0.645 0.013 49.813 0.000
+#> academic2 0.566 0.012 47.864 0.000
+#> academic3 0.473 0.011 44.319 0.000
+#> academic4 0.455 0.010 43.579 0.000
+#> academic5 0.565 0.012 47.695 0.000
+#> academic6 0.502 0.011 45.434 0.000
+#> career1 0.373 0.009 40.392 0.000
+#> career2 0.328 0.009 37.019 0.000
+#> career3 0.436 0.010 43.272 0.000
+#> career4 0.576 0.012 48.372 0.000
+#> ENJ 0.500 0.017 29.547 0.000
+#> SC 0.338 0.015 23.195 0.000
+#> CAREER 0.302 0.010 29.711 0.000
Note: The other approaches work as well, but might be quite slow depending on the number of interaction effects (particularly the LMS and constrained approaches).
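If you nevertheless want to try one of the slower approaches on this model, it can be requested in the usual way (a sketch; expect long run times with several quadratic and interaction terms):

```r
# Sketch: the same jordan model using the constrained approach,
# which can be quite slow with multiple (quadratic) interaction terms.
est_jordan_ca <- modsem(m2, data = jordan, method = "ca")
summary(est_jordan_ca)
```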
+vignettes/observed_lms_qml.Rmd
In contrast to the other approaches, the LMS and QML approaches are
+designed to handle latent variables only. Thus observed variables cannot
+be used as easily as in the other approaches. One way of getting around
+this is to specify your observed variable as a latent variable with a
+single indicator. modsem() will by default constrain the
+factor loading to 1, and the residual variance of the
+indicator to 0. Then the only difference between the
+latent variable and its indicator is that (assuming that it is an
+exogenous variable) the latent variable has a zero mean. This will work
+for both the LMS and QML approaches in most cases, with two exceptions.
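In lavaan-style syntax, these default constraints correspond roughly to spelling out the single-indicator variable explicitly (a sketch, using a hypothetical indicator x1):

```r
# Sketch: an observed variable x1 wrapped as a single-indicator latent
# variable, with the constraints modsem() applies by default.
m_obs <- '
  X =~ 1 * x1   # factor loading fixed to 1
  x1 ~~ 0 * x1  # residual variance of the indicator fixed to 0
'
```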
For the LMS approach you can use the above-mentioned workaround in
+almost all cases, except when you wish to use an observed variable as a
+moderating variable. In the LMS approach, you will usually select one
+variable in an interaction term as the moderator. The interaction effect
+is then estimated via numerical integration at n quadrature
+nodes of the moderating variable. This process, however, requires that
+the moderating variable has an error term, as the distribution of the
+moderating variable is modelled as X = μₖ + ε, where μₖ is the expected
+value of X at quadrature point k, and ε is the error
+term. If the error term is zero, the probability of observing a given
+value of X will not be computable. In most instances the first variable
+in the interaction term is chosen as the moderator. For example, if the
+interaction term is "X:Z", "X" will usually be
+chosen as the moderator. Thus, if only one of the variables is latent,
+you should put the latent variable first in the interaction term. If
+both are observed, you have to specify a measurement error (e.g., “x1 ~~
+0.1 * x1”) for the indicator of the first variable in the interaction
+term.
+library(modsem)
+
+# interaction effect between a latent and an observed variable
+m1 <- '
+# Outer Model
+ X =~ x1 # X is observed
+ Z =~ z1 + z2 # Z is latent
+ Y =~ y1 # Y is observed
+
+# Inner model
+ Y ~ X + Z
+ Y ~ Z:X
+'
+
+lms1 <- modsem(m1, oneInt, method = "lms")
+
+# interaction effect between two observed variables
+m2 <- '
+# Outer Model
+ X =~ x1 # X is observed
+ Z =~ z1 # Z is observed
+ Y =~ y1 # Y is observed
+ x1 ~~ 0.1 * x1 # specify a variance for the measurement error
+# Inner model
+ Y ~ X + Z
+ Y ~ X:Z
+'
+
+lms2 <- modsem(m2, oneInt, method = "lms")
+summary(lms2)
+#> Estimating null model
+#> EM: Iteration = 1, LogLik = -10816.13, Change = -10816.126
+#> EM: Iteration = 2, LogLik = -10816.13, Change = 0.000
+#>
+#> modsem (version 1.0.3):
+#> Estimator LMS
+#> Optimization method EM-NLMINB
+#> Number of observations 2000
+#> Number of iterations 58
+#> Loglikelihood -8087.2
+#> Akaike (AIC) 16202.4
+#> Bayesian (BIC) 16280.81
+#>
+#> Numerical Integration:
+#> Points of integration (per dim) 24
+#> Dimensions 1
+#> Total points of integration 24
+#>
+#> Fit Measures for H0:
+#> Loglikelihood -10816
+#> Akaike (AIC) 21658.25
+#> Bayesian (BIC) 21731.06
+#> Chi-square 0.01
+#> Degrees of Freedom (Chi-square) 1
+#> P-value (Chi-square) 0.917
+#> RMSEA 0.000
+#>
+#> Comparative fit to H0 (no interaction effect)
+#> Loglikelihood change 2728.93
+#> Difference test (D) 5457.85
+#> Degrees of freedom (D) 1
+#> P-value (D) 0.000
+#>
+#> R-Squared:
+#> Y 0.510
+#> R-Squared Null-Model (H0):
+#> Y 0.343
+#> R-Squared Change:
+#> Y 0.167
+#>
+#> Parameter Estimates:
+#> Coefficients unstandardized
+#> Information expected
+#> Standard errors standard
+#>
+#> Latent Variables:
+#> Estimate Std.Error z.value P(>|z|)
+#> Z =~
+#> z1 1.000
+#> z2 0.811 0.018 45.25 0.000
+#> X =~
+#> x1 1.000
+#> Y =~
+#> y1 1.000
+#>
+#> Regressions:
+#> Estimate Std.Error z.value P(>|z|)
+#> Y ~
+#> Z 0.587 0.032 18.10 0.000
+#> X 0.574 0.029 19.96 0.000
+#> Z:X 0.627 0.026 23.76 0.000
+#>
+#> Intercepts:
+#> Estimate Std.Error z.value P(>|z|)
+#> z1 1.009 0.024 42.22 0.000
+#> z2 1.203 0.020 60.50 0.000
+#> x1 1.023 0.024 43.13 0.000
+#> y1 1.046 0.033 31.33 0.000
+#> Y 0.000
+#> Z 0.000
+#> X 0.000
+#>
+#> Covariances:
+#> Estimate Std.Error z.value P(>|z|)
+#> Z ~~
+#> X 0.211 0.025 8.56 0.000
+#>
+#> Variances:
+#> Estimate Std.Error z.value P(>|z|)
+#> z1 0.170 0.018 9.23 0.000
+#> z2 0.160 0.013 12.64 0.000
+#> x1 0.000
+#> y1 0.000
+#> Z 1.010 0.020 50.04 0.000
+#> X 1.141 0.016 69.82 0.000
+#> Y 1.284 0.043 29.70 0.000
The estimation procedure of the QML approach differs from that of the
+LMS approach, and it does not suffer from the same issue. Thus, you do
+not have to specify a measurement error for moderating variables.
+
+m3 <- '
+# Outer Model
+ X =~ x1 # X is observed
+ Z =~ z1 # Z is observed
+ Y =~ y1 # Y is observed
+
+# Inner model
+ Y ~ X + Z
+ Y ~ X:Z
+'
+
+qml3 <- modsem(m3, oneInt, method = "qml")
+summary(qml3)
+#> Estimating null model
+#> Starting M-step
+#>
+#> modsem (version 1.0.3):
+#> Estimator QML
+#> Optimization method NLMINB
+#> Number of observations 2000
+#> Number of iterations 11
+#> Loglikelihood -9117.07
+#> Akaike (AIC) 18254.13
+#> Bayesian (BIC) 18310.14
+#>
+#> Fit Measures for H0:
+#> Loglikelihood -9369
+#> Akaike (AIC) 18756.46
+#> Bayesian (BIC) 18806.87
+#> Chi-square 0.00
+#> Degrees of Freedom (Chi-square) 0
+#> P-value (Chi-square) 0.000
+#> RMSEA 0.000
+#>
+#> Comparative fit to H0 (no interaction effect)
+#> Loglikelihood change 252.17
+#> Difference test (D) 504.33
+#> Degrees of freedom (D) 1
+#> P-value (D) 0.000
+#>
+#> R-Squared:
+#> Y 0.470
+#> R-Squared Null-Model (H0):
+#> Y 0.320
+#> R-Squared Change:
+#> Y 0.150
+#>
+#> Parameter Estimates:
+#> Coefficients unstandardized
+#> Information observed
+#> Standard errors standard
+#>
+#> Latent Variables:
+#> Estimate Std.Error z.value P(>|z|)
+#> X =~
+#> x1 1.000
+#> Z =~
+#> z1 1.000
+#> Y =~
+#> y1 1.000
+#>
+#> Regressions:
+#> Estimate Std.Error z.value P(>|z|)
+#> Y ~
+#> X 0.605 0.028 21.26 0.000
+#> Z 0.490 0.028 17.55 0.000
+#> X:Z 0.544 0.023 23.95 0.000
+#>
+#> Intercepts:
+#> Estimate Std.Error z.value P(>|z|)
+#> x1 1.023 0.024 42.83 0.000
+#> z1 1.011 0.024 41.56 0.000
+#> y1 1.066 0.034 31.64 0.000
+#> Y 0.000
+#> X 0.000
+#> Z 0.000
+#>
+#> Covariances:
+#> Estimate Std.Error z.value P(>|z|)
+#> X ~~
+#> Z 0.210 0.026 7.95 0.000
+#>
+#> Variances:
+#> Estimate Std.Error z.value P(>|z|)
+#> x1 0.000
+#> z1 0.000
+#> y1 0.000
+#> X 1.141 0.036 31.62 0.000
+#> Z 1.184 0.037 31.62 0.000
+#> Y 1.399 0.044 31.62 0.000
vignettes/plot_interactions.Rmd
Interaction effects can be plotted using the included
+plot_interaction() function. It takes a fitted model object and the
+names of the two interacting variables, and plots the effect of x on y
+(with x on the x-axis and y on the y-axis) at the chosen values of the
+moderating variable z. The plot also includes the 95% confidence
+interval of the interaction effect.
Here is a simple example using the double-centering approach.
+
+m1 <- "
+# Outer Model
+ X =~ x1 + x2 + x3
+ Z =~ z1 + z2 + z3
+ Y =~ y1 + y2 + y3
+
+# Inner model
+ Y ~ X + Z + X:Z
+"
+est1 <- modsem(m1, data = oneInt)
+plot_interaction("X", "Z", "Y", "X:Z", -3:3, c(-0.2, 0), est1)
+Here is a different example using the LMS approach, with the theory of
+planned behavior model.
+
+tpb <- "
+# Outer Model (Based on Hagger et al., 2007)
+ ATT =~ att1 + att2 + att3 + att4 + att5
+ SN =~ sn1 + sn2
+ PBC =~ pbc1 + pbc2 + pbc3
+ INT =~ int1 + int2 + int3
+ BEH =~ b1 + b2
+
+# Inner Model (Based on Steinmetz et al., 2011)
+ INT ~ ATT + SN + PBC
+ BEH ~ INT + PBC
+ BEH ~ PBC:INT
+"
+
+est2 <- modsem(tpb, TPB, method = "lms")
+#> Warning: It is recommended that you have at least 32 nodes for interaction
+#> effects between exogenous and endogenous variables in the lms approach 'nodes =
+#> 24'
+plot_interaction(x = "INT", z = "PBC", y = "BEH", xz = "PBC:INT",
+ vals_z = c(-0.5, 0.5), model = est2)
In essence, quadratic effects are just a special case of interaction
+effects, where a variable has an interaction effect with itself. Thus,
+all of the modsem methods can be used to estimate quadratic effects as
+well.
+Here you can see a very simple example using the LMS-approach.
+
+library(modsem)
+m1 <- '
+# Outer Model
+X =~ x1 + x2 + x3
+Y =~ y1 + y2 + y3
+Z =~ z1 + z2 + z3
+
+# Inner model
+Y ~ X + Z + Z:X + X:X
+'
+
+est1Lms <- modsem(m1, data = oneInt, method = "lms")
+summary(est1Lms)
+#> Estimating null model
+#> EM: Iteration = 1, LogLik = -17831.87, Change = -17831.875
+#> EM: Iteration = 2, LogLik = -17831.87, Change = 0.000
+#>
+#> modsem (version 1.0.3):
+#> Estimator LMS
+#> Optimization method EM-NLMINB
+#> Number of observations 2000
+#> Number of iterations 119
+#> Loglikelihood -14687.61
+#> Akaike (AIC) 29439.22
+#> Bayesian (BIC) 29618.45
+#>
+#> Numerical Integration:
+#> Points of integration (per dim) 24
+#> Dimensions 1
+#> Total points of integration 24
+#>
+#> Fit Measures for H0:
+#> Loglikelihood -17832
+#> Akaike (AIC) 35723.75
+#> Bayesian (BIC) 35891.78
+#> Chi-square 17.52
+#> Degrees of Freedom (Chi-square) 24
+#> P-value (Chi-square) 0.826
+#> RMSEA 0.000
+#>
+#> Comparative fit to H0 (no interaction effect)
+#> Loglikelihood change 3144.26
+#> Difference test (D) 6288.52
+#> Degrees of freedom (D) 2
+#> P-value (D) 0.000
+#>
+#> R-Squared:
+#> Y 0.596
+#> R-Squared Null-Model (H0):
+#> Y 0.395
+#> R-Squared Change:
+#> Y 0.200
+#>
+#> Parameter Estimates:
+#> Coefficients unstandardized
+#> Information expected
+#> Standard errors standard
+#>
+#> Latent Variables:
+#> Estimate Std.Error z.value P(>|z|)
+#> X =~
+#> x1 1.000
+#> x2 0.804 0.013 63.648 0.000
+#> x3 0.915 0.014 66.681 0.000
+#> Z =~
+#> z1 1.000
+#> z2 0.810 0.012 65.547 0.000
+#> z3 0.881 0.013 66.644 0.000
+#> Y =~
+#> y1 1.000
+#> y2 0.798 0.008 105.935 0.000
+#> y3 0.899 0.008 109.335 0.000
+#>
+#> Regressions:
+#> Estimate Std.Error z.value P(>|z|)
+#> Y ~
+#> X 0.673 0.031 21.616 0.000
+#> Z 0.570 0.028 20.006 0.000
+#> X:X -0.007 0.020 -0.364 0.716
+#> X:Z 0.715 0.029 25.082 0.000
+#>
+#> Intercepts:
+#> Estimate Std.Error z.value P(>|z|)
+#> x1 1.023 0.019 52.606 0.000
+#> x2 1.215 0.017 73.187 0.000
+#> x3 0.919 0.018 50.599 0.000
+#> z1 1.013 0.024 41.627 0.000
+#> z2 1.207 0.020 59.429 0.000
+#> z3 0.917 0.022 42.344 0.000
+#> y1 1.046 0.036 29.466 0.000
+#> y2 1.228 0.029 42.539 0.000
+#> y3 0.962 0.032 29.921 0.000
+#> Y 0.000
+#> X 0.000
+#> Z 0.000
+#>
+#> Covariances:
+#> Estimate Std.Error z.value P(>|z|)
+#> X ~~
+#> Z 0.199 0.024 8.301 0.000
+#>
+#> Variances:
+#> Estimate Std.Error z.value P(>|z|)
+#> x1 0.160 0.008 18.929 0.000
+#> x2 0.163 0.007 23.701 0.000
+#> x3 0.165 0.008 21.078 0.000
+#> z1 0.167 0.009 18.594 0.000
+#> z2 0.160 0.007 22.969 0.000
+#> z3 0.158 0.008 20.921 0.000
+#> y1 0.160 0.009 18.034 0.000
+#> y2 0.154 0.007 22.804 0.000
+#> y3 0.163 0.008 20.824 0.000
+#> X 0.972 0.016 60.080 0.000
+#> Z 1.017 0.019 54.904 0.000
+#> Y 0.983 0.038 26.163 0.000
In this example, we have a simple model with two quadratic effects and
+one interaction effect, estimated using both the QML and
+double-centering approaches, on data from a subset of the PISA 2006
+dataset.
+
+m2 <- '
+ENJ =~ enjoy1 + enjoy2 + enjoy3 + enjoy4 + enjoy5
+CAREER =~ career1 + career2 + career3 + career4
+SC =~ academic1 + academic2 + academic3 + academic4 + academic5 + academic6
+CAREER ~ ENJ + SC + ENJ:ENJ + SC:SC + ENJ:SC
+'
+
+est2Dblcent <- modsem(m2, data = jordan)
+est2Qml <- modsem(m2, data = jordan, method = "qml")
+summary(est2Qml)
+#> Estimating null model
+#> Starting M-step
+#>
+#> modsem (version 1.0.3):
+#> Estimator QML
+#> Optimization method NLMINB
+#> Number of observations 6038
+#> Number of iterations 101
+#> Loglikelihood -110520.22
+#> Akaike (AIC) 221142.45
+#> Bayesian (BIC) 221484.44
+#>
+#> Fit Measures for H0:
+#> Loglikelihood -110521
+#> Akaike (AIC) 221138.58
+#> Bayesian (BIC) 221460.46
+#> Chi-square 1016.34
+#> Degrees of Freedom (Chi-square) 87
+#> P-value (Chi-square) 0.000
+#> RMSEA 0.005
+#>
+#> Comparative fit to H0 (no interaction effect)
+#> Loglikelihood change 1.07
+#> Difference test (D) 2.13
+#> Degrees of freedom (D) 3
+#> P-value (D) 0.546
+#>
+#> R-Squared:
+#> CAREER 0.512
+#> R-Squared Null-Model (H0):
+#> CAREER 0.510
+#> R-Squared Change:
+#> CAREER 0.002
+#>
+#> Parameter Estimates:
+#> Coefficients unstandardized
+#> Information observed
+#> Standard errors standard
+#>
+#> Latent Variables:
+#> Estimate Std.Error z.value P(>|z|)
+#> ENJ =~
+#> enjoy1 1.000
+#> enjoy2 1.002 0.020 50.587 0.000
+#> enjoy3 0.894 0.020 43.669 0.000
+#> enjoy4 0.999 0.021 48.227 0.000
+#> enjoy5 1.047 0.021 50.400 0.000
+#> SC =~
+#> academic1 1.000
+#> academic2 1.104 0.028 38.946 0.000
+#> academic3 1.235 0.030 41.720 0.000
+#> academic4 1.254 0.030 41.828 0.000
+#> academic5 1.113 0.029 38.647 0.000
+#> academic6 1.198 0.030 40.356 0.000
+#> CAREER =~
+#> career1 1.000
+#> career2 1.040 0.016 65.180 0.000
+#> career3 0.952 0.016 57.838 0.000
+#> career4 0.818 0.017 48.358 0.000
+#>
+#> Regressions:
+#> Estimate Std.Error z.value P(>|z|)
+#> CAREER ~
+#> ENJ 0.523 0.020 26.286 0.000
+#> SC 0.467 0.023 19.884 0.000
+#> ENJ:ENJ 0.026 0.022 1.206 0.228
+#> ENJ:SC -0.039 0.046 -0.851 0.395
+#> SC:SC -0.002 0.035 -0.058 0.953
+#>
+#> Intercepts:
+#> Estimate Std.Error z.value P(>|z|)
+#> enjoy1 0.000 0.013 -0.008 0.994
+#> enjoy2 0.000 0.015 0.010 0.992
+#> enjoy3 0.000 0.013 -0.023 0.982
+#> enjoy4 0.000 0.014 0.008 0.993
+#> enjoy5 0.000 0.014 0.025 0.980
+#> academic1 0.000 0.016 -0.009 0.993
+#> academic2 0.000 0.014 -0.009 0.993
+#> academic3 0.000 0.015 -0.028 0.978
+#> academic4 0.000 0.016 -0.015 0.988
+#> academic5 -0.001 0.014 -0.044 0.965
+#> academic6 0.001 0.015 0.048 0.962
+#> career1 -0.004 0.017 -0.204 0.838
+#> career2 -0.004 0.018 -0.248 0.804
+#> career3 -0.004 0.017 -0.214 0.830
+#> career4 -0.004 0.016 -0.232 0.816
+#> CAREER 0.000
+#> ENJ 0.000
+#> SC 0.000
+#>
+#> Covariances:
+#> Estimate Std.Error z.value P(>|z|)
+#> ENJ ~~
+#> SC 0.218 0.009 25.477 0.000
+#>
+#> Variances:
+#> Estimate Std.Error z.value P(>|z|)
+#> enjoy1 0.487 0.011 44.335 0.000
+#> enjoy2 0.488 0.011 44.406 0.000
+#> enjoy3 0.596 0.012 48.233 0.000
+#> enjoy4 0.488 0.011 44.561 0.000
+#> enjoy5 0.442 0.010 42.470 0.000
+#> academic1 0.645 0.013 49.813 0.000
+#> academic2 0.566 0.012 47.864 0.000
+#> academic3 0.473 0.011 44.319 0.000
+#> academic4 0.455 0.010 43.579 0.000
+#> academic5 0.565 0.012 47.695 0.000
+#> academic6 0.502 0.011 45.434 0.000
+#> career1 0.373 0.009 40.392 0.000
+#> career2 0.328 0.009 37.019 0.000
+#> career3 0.436 0.010 43.272 0.000
+#> career4 0.576 0.012 48.372 0.000
+#> ENJ 0.500 0.017 29.547 0.000
+#> SC 0.338 0.015 23.195 0.000
+#> CAREER 0.302 0.010 29.711 0.000
Note: The other approaches work as well, but may be quite slow
+depending on the number of interaction effects (particularly for the
+LMS and constrained approaches).
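The estimated effects can also be inspected visually with the plot_interaction() function shown in the plot_interactions vignette. A minimal sketch, assuming est2Qml from the example above:

```r
# Sketch: plot the (here non-significant) ENJ:SC interaction from the
# QML model above. vals_z gives the values of the moderator SC at
# which the effect of ENJ on CAREER is drawn.
plot_interaction(x = "ENJ", z = "SC", y = "CAREER", xz = "ENJ:SC",
                 vals_z = c(-0.5, 0.5), model = est2Qml)
```

Since the ENJ:SC estimate is close to zero here, the two slopes should be nearly parallel; the plot makes that easy to verify at a glance.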
+