NNLS#

pylit.optimizer.nnls.nnls(R, F, x0, method, maxiter=None, tol=None, svd=False, protocol=False)#

This is the NNLS optimization method. The interface is described in Optimizer.

Description#

Solves the optimization problem (1) using the active-set method popularized by Bro and De Jong. The algorithm maintains a partition of the indices into an active set (variables that are free to vary) and an inactive set (variables fixed at zero). At each step, the active set is updated according to the gradient, and a least-squares subproblem is solved over the active variables. This keeps every iterate feasible with respect to the non-negativity constraint.
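
Problem (1) is assumed here to be the standard non-negative least-squares formulation (see Optimizer for the exact statement used by pylit):

\[
\min_{x \,\ge\, 0} \;\; \tfrac{1}{2} \left\lVert R x - F \right\rVert_2^2 .
\]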

Algorithm#

The procedure follows these labeled steps (corresponding to the implementation):

A. Initialization
  • A.1 Initialize the active set as empty, \(P = \mathbf{0}\).

  • A.2 All variables start in the inactive set.

  • A.3 Set initial guess \(x = x_0\).

  • A.4 Compute the negative gradient \(w = R^T (F - R x)\).

B. Main Loop
  • B.1 While there exists an inactive variable \(i\) with \(w_i > 0\),

  • B.2 Select the inactive index with the largest entry of \(w\).

  • B.3 Move this index to the active set.

  • B.4 Solve the least-squares subproblem on the active set, yielding a trial solution \(s\) (inactive variables are held at zero).

C. Inner Loop
  • C.1 If the trial solution \(s\) has negative components on the active set,

  • C.2 Compute the step length to the nearest feasible point, \(\alpha = \min_{i:\, s_i \le 0} x_i / (x_i - s_i)\).

  • C.3 Update iterate \(x \leftarrow x + \alpha (s - x)\).

  • C.4 Remove variables with absolute value below the tolerance from the active set.

  • C.5 Recompute the least-squares solution on the reduced active set.

  • C.6 Set inactive variables to zero.

The algorithm terminates when the Karush–Kuhn–Tucker (KKT) conditions are satisfied, i.e. when no inactive variable has a positive entry in \(w\), or when the maximum number of iterations maxiter is reached.
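
The following is a minimal NumPy sketch of steps A–C above. It illustrates the active-set procedure itself, not the pylit implementation: the function name nnls_active_set, the boolean mask active, and the default maxiter = 3 n are assumptions made for the example.

import numpy as np

def nnls_active_set(R, F, maxiter=None, tol=1e-10):
    """Sketch of active-set NNLS: minimize ||R x - F||_2 subject to x >= 0."""
    m, n = R.shape
    maxiter = 3 * n if maxiter is None else maxiter

    x = np.zeros(n)                   # A.3: start from the feasible point x = 0
    active = np.zeros(n, dtype=bool)  # A.1/A.2: active (free) set starts empty

    for _ in range(maxiter):
        w = R.T @ (F - R @ x)         # A.4: negative gradient of the objective
        if not np.any(w[~active] > tol):
            break                     # B.1 / KKT: no inactive variable can improve
        i = int(np.argmax(np.where(~active, w, -np.inf)))
        active[i] = True              # B.2/B.3: free the most promising variable

        while True:
            # B.4 / C.5: unconstrained least squares on the active columns only
            s = np.zeros(n)
            s[active] = np.linalg.lstsq(R[:, active], F, rcond=None)[0]
            if np.all(s[active] >= -tol):
                break                 # C.1: trial solution is already feasible
            # C.2: largest step toward s that keeps the iterate non-negative
            blocking = active & (s <= tol)
            alpha = np.min(x[blocking] / (x[blocking] - s[blocking]))
            x = x + alpha * (s - x)   # C.3: move to the boundary
            active &= x > tol         # C.4: variables that hit zero become inactive
            x[~active] = 0.0          # C.6: keep inactive variables exactly at zero
        x = np.clip(s, 0.0, None)     # accept the feasible subproblem solution

    return x

# Tiny self-check: recover a non-negative vector from noiseless data.
rng = np.random.default_rng(0)
R = rng.standard_normal((30, 8))
x_true = np.abs(rng.standard_normal(8))
print(np.allclose(nnls_active_set(R, R @ x_true), x_true, atol=1e-6))

The inner loop mirrors C.1–C.6: it repeatedly shrinks the active set until the restricted least-squares solution is non-negative, which is what keeps every iterate feasible.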

Return type:

Solution

Returns:

A Solution object containing the final iterate.
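
A call might look like the following sketch. It relies only on the signature shown above; the value passed for method and the way the final iterate is read from the returned Solution object are assumptions to be checked against the Optimizer and Solution documentation.

import numpy as np
from pylit.optimizer.nnls import nnls

rng = np.random.default_rng(0)
R = rng.standard_normal((50, 8))
F = R @ np.abs(rng.standard_normal(8))   # data consistent with a non-negative model
x0 = np.zeros(8)                         # feasible starting point

# "method" is forwarded as described in Optimizer; None is only a placeholder here.
solution = nnls(R, F, x0, method=None, maxiter=200, tol=1e-8)
# solution is a Solution object holding the final iterate (the attribute name is
# not specified in this section -- see the Solution documentation).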

Notes

  • This implementation does not support SVD preconditioning.

References

  • Rasmus Bro and Sijmen de Jong. A Fast Non-Negativity-Constrained Least Squares Algorithm. Journal of Chemometrics, Vol. 11, pp. 393–401, 1997.