docs/src/tutorials/regularization/regularization.md (18 additions, 29 deletions)
@@ -5,7 +5,10 @@
 For ridge regularization, you can simply use `SemRidge` as an additional loss function
 (for example, a model with the loss functions `SemML` and `SemRidge` corresponds to ridge-regularized maximum likelihood estimation).
 
-For lasso, elastic net and (far) beyond, you can load the `ProximalAlgorithms.jl` and `ProximalOperators.jl` packages alongside `StructuralEquationModels`:
+For lasso, elastic net and (far) beyond, you can use the [`ProximalOperators.jl`](https://github.com/JuliaFirstOrder/ProximalOperators.jl) package
+and optimize the model with [`ProximalAlgorithms.jl`](https://github.com/JuliaFirstOrder/ProximalAlgorithms.jl),
+which provides so-called *proximal optimization* algorithms.
+These can handle, amongst other things, various forms of regularization.
 
 ```@setup reg
 using StructuralEquationModels, ProximalAlgorithms, ProximalOperators
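As a standalone illustration of what these two packages do (a sketch with made-up data, independent of the SEM tutorial; it assumes the keyword-argument interface `algorithm(x0 = …, f = …, g = …)` described in the ProximalAlgorithms guide), a proximal algorithm minimizes a smooth loss plus a nonsmooth penalty:

```julia
using LinearAlgebra
using ProximalOperators, ProximalAlgorithms

# Toy lasso-type problem: minimize 0.5‖x − b‖² + λ‖x‖₁.
# Its exact solution is elementwise soft-thresholding of b at λ.
b = [2.0, 0.3, -1.0]
f = LeastSquares(Matrix(1.0I, 3, 3), b)  # smooth part: 0.5‖x − b‖²
g = NormL1(0.5)                          # nonsmooth part: 0.5‖x‖₁
panoc = ProximalAlgorithms.PANOC()
x, iters = panoc(x0 = zeros(3), f = f, g = g)
# x ≈ [1.5, 0.0, -0.5]  (entries shrunk toward zero by λ = 0.5)
```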
@@ -19,16 +22,14 @@ Pkg.add("ProximalOperators")
 using StructuralEquationModels, ProximalAlgorithms, ProximalOperators
 ```
 
-## `SemOptimizerProximal`
+## Proximal optimization
 
-To estimate regularized models, we provide a "building block" for the optimizer part, called `SemOptimizerProximal`.
-It connects our package to the [`ProximalAlgorithms.jl`](https://github.com/JuliaFirstOrder/ProximalAlgorithms.jl) optimization backend, providing so-called proximal optimization algorithms.
-Those can handle, amongst other things, various forms of regularization.
-
-It can be used as
+With the *ProximalAlgorithms* package loaded, it is now possible to use the `:Proximal` optimization engine
+in `SemOptimizer` to estimate regularized models.
 
 ```julia
-SemOptimizerProximal(
+SemOptimizer(;
+    engine = :Proximal,
     algorithm = ProximalAlgorithms.PANOC(),
     options = Dict{Symbol, Any}(),
     operator_g,
@@ -37,7 +38,7 @@ SemOptimizerProximal(
 ```
 
 The proximal operator (aka the regularization function) can be passed as `operator_g`; available options are listed [here](https://juliafirstorder.github.io/ProximalOperators.jl/stable/functions/).
-The available Algorithms are listed [here](https://juliafirstorder.github.io/ProximalAlgorithms.jl/stable/guide/implemented_algorithms/).
+The available algorithms are listed [here](https://juliafirstorder.github.io/ProximalAlgorithms.jl/stable/guide/implemented_algorithms/).
 
 ## First example - lasso
 
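Any such operator can also be probed directly before plugging it into the optimizer. For instance (a standalone sketch with made-up numbers, not part of the tutorial code), the proximal mapping of `NormL1` is elementwise soft-thresholding:

```julia
using ProximalOperators

g = NormL1(0.5)        # λ‖x‖₁ with λ = 0.5
x = [1.0, -0.2, 0.7]
y, gy = prox(g, x)     # proximal point y and g evaluated at y
# y ≈ [0.5, 0.0, 0.2]  (each entry shrunk toward zero by λ; small entries zeroed)
```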
@@ -101,26 +102,18 @@ From the previously linked [documentation](https://juliafirstorder.github.io/Pro
 
 ```@example reg
 λ = zeros(31); λ[ind] .= 0.02
-```
-
-and use `SemOptimizerProximal`.
 
-```@example reg
-optimizer_lasso = SemOptimizerProximal(
+optimizer_lasso = SemOptimizer(
+    engine = :Proximal,
     operator_g = NormL1(λ)
 )
-
-model_lasso = Sem(
-    specification = partable,
-    data = data
-)
 ```
 
 Let's fit the regularized model
 
 ```@example reg
-fit_lasso = fit(optimizer_lasso, model_lasso)
+fit_lasso = fit(optimizer_lasso, model)
 ```
 
 and compare the solution to unregularized estimates:
@@ -145,11 +139,11 @@ ## Second example - mixed l1 and l0 regularization
 
 You can choose to penalize different parameters with different types of regularization functions.
-Let's use the lasso again on the covariances, but additionally penalyze the error variances of the observed items via l0 regularization.
+Let's use the lasso again on the covariances, but additionally penalize the error variances of the observed items via l0 regularization.
 
 The l0 penalty is defined as
 ```math
 \lambda \mathrm{nnz}(\theta)
 ```
 
-To define a sup of separable proximal operators (i.e. no parameter is penalized twice),
+To define a sum of separable proximal operators (i.e. no parameter is penalized twice),
 we can use [`SlicedSeparableSum`](https://juliafirstorder.github.io/ProximalOperators.jl/stable/calculus/#ProximalOperators.SlicedSeparableSum) from the `ProximalOperators` package:
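For illustration (a standalone sketch with made-up indices and values, not the tutorial's actual parameter layout), such a sliced sum applies `NormL1` to one slice of the parameter vector and `NormL0` (the λ·nnz penalty) to another:

```julia
using ProximalOperators

# l1 on entries 1:3, l0 on entries 4:6 — the index slices are illustrative
g = SlicedSeparableSum((NormL1(0.02), NormL0(0.02)), ((1:3,), (4:6,)))
x = [1.0, -0.01, 0.5, 0.3, 0.05, -0.5]
y, gy = prox(g, x)
# the first slice is soft-thresholded at λ = 0.02;
# the second slice is hard-thresholded: entries with |xᵢ| ≤ √(2λ) = 0.2 become zero
```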