Total Variation Regularization
- pylit.methods.tv_reg.tv_reg(E, lambd)
This is the total variation regularization method. The interface is described in Methods. Minimal numerical sketches of the gradient-descent update and the closed-form solve follow the symbol list below.
The objective function

\[f(u, w, \lambda) = \frac{1}{2} \| \widehat u - \widehat w\|^2_{L^2(\mathbb{R})} + \frac{1}{2} \lambda \left\| \frac{du}{d\omega} \right\|_{L^2(\mathbb{R})}^2\]

is implemented as

\[f(\boldsymbol{\alpha}) = \frac{1}{2} \frac{1}{n} \| \boldsymbol{R} \boldsymbol{\alpha} - \boldsymbol{F} \|^2_2 + \frac{1}{2} \lambda \left\| \boldsymbol{V}_\boldsymbol{E} \boldsymbol{\alpha} \right\|_{2}^2\]

with the gradient

\[\nabla_{\boldsymbol{\alpha}} f(\boldsymbol{\alpha}) = \frac{1}{n} \boldsymbol{R}^\top(\boldsymbol{R} \boldsymbol{\alpha} - \boldsymbol{F}) + \lambda \boldsymbol{V}_\boldsymbol{E}^\top \boldsymbol{V}_\boldsymbol{E} \boldsymbol{\alpha},\]

the learning rate

\[\eta = \frac{1}{\| \boldsymbol{R}^\top \boldsymbol{R} \| + \lambda n \|\boldsymbol{V}_\boldsymbol{E}^\top \boldsymbol{V}_\boldsymbol{E}\|},\]

and, by setting the gradient to zero, the closed-form solution

\[\boldsymbol{\alpha}^* = \left(\boldsymbol{R}^\top \boldsymbol{R} + \lambda n \, \boldsymbol{V}_{\boldsymbol{E}}^\top \boldsymbol{V}_{\boldsymbol{E}} \right)^{-1} \boldsymbol{R}^\top \boldsymbol{F},\]

where
\(\boldsymbol{R}\): Regression matrix,
\(\boldsymbol{F}\): Target vector,
\(\boldsymbol{V}_{\boldsymbol{E}}\): Variation matrix of the evaluation matrix,
\(\boldsymbol{\alpha}\): Coefficient vector,
\(\lambda\): Regularization parameter,
\(n\): Number of samples.
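As a rough illustration of the iterative scheme above, here is a minimal NumPy sketch of plain gradient descent using the stated gradient and learning rate. The names `R`, `F`, `V_E`, and `lambd` follow the symbols above; the function name `tv_reg_gradient_descent` and the parameter `n_iter` are illustrative assumptions, not part of the pylit API.

```python
import numpy as np

def tv_reg_gradient_descent(R, F, V_E, lambd, n_iter=1000):
    """Minimize f(alpha) by plain gradient descent with the fixed
    learning rate eta = 1 / (||R^T R|| + lambda * n * ||V_E^T V_E||)."""
    n = R.shape[0]
    alpha = np.zeros(R.shape[1])
    # Spectral norms (largest singular values) of the two Gram matrices.
    eta = 1.0 / (np.linalg.norm(R.T @ R, 2)
                 + lambd * n * np.linalg.norm(V_E.T @ V_E, 2))
    for _ in range(n_iter):
        # Gradient: (1/n) R^T (R alpha - F) + lambda V_E^T V_E alpha.
        grad = R.T @ (R @ alpha - F) / n + lambd * V_E.T @ (V_E @ alpha)
        alpha = alpha - eta * grad
    return alpha
```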
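The closed-form solution amounts to solving the regularized normal equations directly. The sketch below assumes the \(\lambda n\) scaling that follows from setting the gradient to zero; `tv_reg_closed_form` and the first-difference construction of `V_E` in the usage example are hypothetical and need not match pylit's internal implementation.

```python
import numpy as np

def tv_reg_closed_form(R, F, V_E, lambd):
    """Solve (R^T R + lambda * n * V_E^T V_E) alpha = R^T F directly."""
    n = R.shape[0]
    A = R.T @ R + lambd * n * (V_E.T @ V_E)
    # The system matrix is symmetric positive (semi-)definite, so a
    # direct solve of the regularized normal equations suffices.
    return np.linalg.solve(A, R.T @ F)

# Usage with synthetic data and a first-difference variation matrix:
rng = np.random.default_rng(0)
n, m = 50, 20
R = rng.standard_normal((n, m))      # regression matrix
F = rng.standard_normal(n)           # target vector
V_E = np.diff(np.eye(m), axis=0)     # (m-1) x m first differences
alpha_star = tv_reg_closed_form(R, F, V_E, lambd=0.1)
```

For small coefficient vectors the direct solve is usually preferable; the gradient-descent variant avoids forming and factoring the \(m \times m\) system when \(m\) is large.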
- Return type: