L2 Regularization

pylit.methods.l2_reg.l2_reg(lambd)

This is the L2 (ridge) regularization method; its interface is described in Methods.

The objective function

\[f(u, w, \lambda) = \frac{1}{2} \| \widehat u - \widehat w\|^2_{L^2(\mathbb{R})} + \frac{1}{2} \lambda \| u \|_{L^2(\mathbb{R})}^2,\]

is discretized and implemented as

\[f(\boldsymbol{\alpha}) = \frac{1}{2} \frac{1}{n} \| \boldsymbol{R} \boldsymbol{\alpha} - \boldsymbol{F} \|^2_2 + \frac{1}{2} \lambda \frac{1}{n} \| \boldsymbol{\alpha} \|^2_2,\]

with the gradient

\[\nabla_{\boldsymbol{\alpha}} f(\boldsymbol{\alpha}) = \frac{1}{n} \boldsymbol{R}^\top(\boldsymbol{R} \boldsymbol{\alpha} - \boldsymbol{F}) + \lambda \frac{1}{n} \boldsymbol{\alpha},\]

the learning rate

\[\eta = \frac{n}{\| \boldsymbol{R}^\top \boldsymbol{R} + \lambda \boldsymbol{I} \|},\]

and the closed-form solution

\[\boldsymbol{\alpha}^* = (\boldsymbol{R}^\top \boldsymbol{R} + \lambda \boldsymbol{I})^{-1} \boldsymbol{R}^\top \boldsymbol{F},\]

where

  • \(\boldsymbol{R}\): Regression matrix,

  • \(\boldsymbol{F}\): Target vector,

  • \(\boldsymbol{I}\): Identity matrix,

  • \(\boldsymbol{\alpha}\): Coefficient vector,

  • \(\lambda\): Regularization parameter,

  • \(n\): Number of samples.
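
The formulas above map directly onto array code. The following is a minimal NumPy sketch of the discretized objective, its gradient, the step size, and the closed-form minimizer, assuming \(\boldsymbol{R}\) is an n-by-m array and \(\boldsymbol{F}\) a length-n array; it is illustrative only and does not reproduce pylit's internal implementation:

    import numpy as np

    def objective(alpha, R, F, lambd):
        # f(alpha) = 1/(2n) ||R alpha - F||_2^2 + lambda/(2n) ||alpha||_2^2
        n = F.shape[0]
        residual = R @ alpha - F
        return 0.5 / n * (residual @ residual) + 0.5 * lambd / n * (alpha @ alpha)

    def gradient(alpha, R, F, lambd):
        # grad f(alpha) = 1/n R^T (R alpha - F) + lambda/n alpha
        n = F.shape[0]
        return (R.T @ (R @ alpha - F) + lambd * alpha) / n

    def learning_rate(R, lambd):
        # eta = n / ||R^T R + lambda I||, using the spectral norm
        n = R.shape[0]
        H = R.T @ R + lambd * np.eye(R.shape[1])
        return n / np.linalg.norm(H, 2)

    def solution(R, F, lambd):
        # alpha* = (R^T R + lambda I)^{-1} R^T F
        H = R.T @ R + lambd * np.eye(R.shape[1])
        return np.linalg.solve(H, R.T @ F)

With this step size, gradient descent alpha ← alpha − η ∇f(alpha) converges to the minimizer returned by solution for λ > 0, since η is the reciprocal of the gradient's Lipschitz constant ‖RᵀR + λI‖ / n.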

Return type: Method
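
A minimal usage sketch using the documented call (the value 0.1 is an arbitrary placeholder; what the returned Method exposes is described in Methods):

    from pylit.methods.l2_reg import l2_reg

    method = l2_reg(lambd=0.1)  # construct the L2-regularized method with lambda = 0.1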