A Penalty-free Infeasible Approach for a Class of Nonsmooth Optimization Problems over the Stiefel Manifold

Transforming optimization problems with orthogonality constraints into an exact penalty function model with convex compact constraints yields efficient infeasible approaches. For smooth and L21-norm regularized cases, these infeasible approaches adopt simple, orthonormalization-free updating schemes and show high efficiency in numerical experiments. However, to avoid orthonormalization while enforcing the feasibility of the final … Read more
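
For orientation, the problem class running through these entries can be written in the following standard form (the notation here is illustrative, not taken from the paper): minimize a possibly nonsmooth objective over matrices with orthonormal columns,
\[
\min_{X \in \mathbb{R}^{n \times p}} \ f(X) \quad \text{s.t.} \quad X^\top X = I_p ,
\]
where the feasible set $\{X \in \mathbb{R}^{n\times p} : X^\top X = I_p\}$ is the Stiefel manifold. "Infeasible" approaches iterate outside this set and enforce feasibility only approximately or at termination, which is where the tension described in the truncated last sentence arises.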

Exact Penalty Function for L21 Norm Minimization over the Stiefel Manifold

L21 norm minimization with orthogonality constraints, whose feasible region is called the Stiefel manifold, has wide applications in statistics and data science. State-of-the-art approaches adopt a proximal gradient technique on either the Stiefel manifold or its tangent spaces. The resulting subproblem has no closed-form solution and hence requires an iterative procedure to solve, which is … Read more
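
As background for the abstract above, the L21-regularized model can be written (notation assumed here, with the nonsmooth term grouping rows) as
\[
\min_{X^\top X = I_p} \ f(X) + \mu \sum_{i=1}^{n} \|X_{i,:}\|_2 .
\]
In the unconstrained Euclidean setting, the proximal mapping of the L21 term alone is the row-wise soft-thresholding
\[
\big(\mathrm{prox}_{\mu\|\cdot\|_{2,1}}(X)\big)_{i,:} = \max\!\Big(0,\ 1 - \frac{\mu}{\|X_{i,:}\|_2}\Big)\, X_{i,:} ,
\]
but, as the abstract notes, coupling the proximal step with the manifold or its tangent spaces removes this closed form, so the subproblem must be solved iteratively.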

A Class of Smooth Exact Penalty Function Methods for Optimization Problems with Orthogonality Constraints

Updating the augmented Lagrangian multiplier by a closed-form expression yields an efficient first-order infeasible approach for optimization problems with orthogonality constraints; hence, parallelization becomes tractable for this type of problem. Inspired by this closed-form updating scheme, we propose an exact penalty function model with compact convex constraints (PenC). We show that PenC can act as an … Read more
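
The "closed-form updating scheme" can be motivated from the first-order optimality system: at a stationary point of $\min f(X)$ subject to $X^\top X = I_p$, the multiplier satisfies $\Lambda = X^\top \nabla f(X)$. A common choice at an infeasible iterate (assumed here; the paper's exact expression may differ) is the symmetrization
\[
\Lambda(X) = \tfrac12\big(X^\top \nabla f(X) + \nabla f(X)^\top X\big),
\]
which requires no orthonormalization and reduces to dense matrix products that parallelize well.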

On the Complexity of an Augmented Lagrangian Method for Nonconvex Optimization

In this paper, we study the worst-case complexity of an inexact Augmented Lagrangian method for nonconvex constrained problems. Assuming that the penalty parameters are bounded, we prove a complexity bound of $\mathcal{O}(|\log(\epsilon)|)$ outer iterations for the algorithm to generate an $\epsilon$-approximate KKT point, for $\epsilon\in (0,1)$. When the penalty parameters are unbounded, we prove … Read more
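
For reference, the classical augmented Lagrangian for an equality-constrained problem $\min f(x)$ s.t. $c(x) = 0$ (the generic form; the paper's setting may also include inequalities and algorithmic safeguards) is
\[
\mathcal{L}_\rho(x,\lambda) = f(x) + \lambda^\top c(x) + \frac{\rho}{2}\,\|c(x)\|^2 ,
\]
where $\rho > 0$ is the penalty parameter whose boundedness or unboundedness separates the two complexity regimes mentioned in the abstract.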

Parallelizable Algorithms for Optimization Problems with Orthogonality Constraints

Constructing a parallel approach for solving optimization problems with orthogonality constraints is usually regarded as extremely difficult, owing to the poor scalability of the orthonormalization procedure. However, the demand for such approaches is particularly strong in application areas such as materials computation. In this paper, we propose a proximal linearized augmented Lagrangian algorithm (PLAM) … Read more
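
A schematic of the kind of update the name PLAM suggests (a sketch in illustrative notation, not the paper's exact scheme): linearize the augmented Lagrangian at the current iterate and add a proximal term, which yields an explicit gradient-type step
\[
X_{k+1} = \arg\min_{X} \ \big\langle \nabla_X \mathcal{L}_{\rho}(X_k,\Lambda_k),\, X - X_k \big\rangle + \frac{\eta_k}{2}\,\|X - X_k\|_F^2
        \;=\; X_k - \frac{1}{\eta_k}\,\nabla_X \mathcal{L}_{\rho}(X_k,\Lambda_k) .
\]
Because the step is explicit and entrywise separable, it avoids the orthonormalization bottleneck that limits scalability.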

A New First-order Algorithmic Framework for Optimization Problems with Orthogonality Constraints

In this paper, we consider a class of optimization problems with orthogonality constraints, the feasible region of which is called the Stiefel manifold. Our new framework combines a function value reduction step with a correction step. Unlike existing approaches, the function value reduction step of our algorithmic framework searches along the standard Euclidean … Read more
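
A minimal sketch of the two-step template described above, with the assumptions made explicit: the reduction step is taken here to be a plain Euclidean gradient step, and the correction step a projection back onto the Stiefel manifold via the polar decomposition; the paper's actual choices for both steps may differ.

```python
import numpy as np

def correction(X):
    # Project X onto the Stiefel manifold via the polar decomposition:
    # the nearest matrix with orthonormal columns is U @ Vt from the thin SVD.
    U, _, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ Vt

def two_step_template(grad_f, X0, step=1e-2, iters=100):
    # Generic "function value reduction + correction" loop (illustrative only).
    X = correction(X0)
    for _ in range(iters):
        X = X - step * grad_f(X)   # reduction step in the ambient Euclidean space
        X = correction(X)          # correction step restoring feasibility
    return X
```

For example, with $f(X) = -\mathrm{tr}(X^\top A X)$ (dominant eigenspace of a symmetric $A$), one can pass grad_f = lambda X: -(A + A.T) @ X.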

ALGORITHM XXX: SC-SR1: MATLAB SOFTWARE FOR SOLVING SHAPE-CHANGING L-SR1 TRUST-REGION SUBPROBLEMS

We present a MATLAB implementation of the shape-changing symmetric rank-one (SC-SR1) method that solves trust-region subproblems when a limited-memory symmetric rank-one (L-SR1) matrix is used in place of the true Hessian matrix. The method takes advantage of two shape-changing norms [4, 3] to decompose the trust-region subproblem into two separate problems. Using one of … Read more
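
The underlying subproblem, in its standard form (notation assumed), is
\[
\min_{s \in \mathbb{R}^n} \ g^\top s + \tfrac12\, s^\top B s \quad \text{s.t.} \quad \|s\| \le \Delta ,
\]
where $B$ is the L-SR1 approximation of the Hessian and $\Delta$ the trust-region radius. Replacing the norm in the constraint by a shape-changing norm is what allows the subproblem to split into two smaller problems, roughly one on the subspace carrying the quasi-Newton curvature information and one on its orthogonal complement.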

On the Global Optimality for Linear Constrained Rank Minimization Problem

Rank minimization with linear equality constraints has two closely related models: the low-rank approximation model, which finds the best rank-k approximation of a matrix satisfying the linear constraints, and its corresponding factorization model. The latter is an unconstrained nonlinear least squares problem and hence admits several fast first-order methods … Read more
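
One common way to write the two models just described (illustrative notation; $\mathcal{A}$ is the linear map defining the equality constraints) is
\[
\min_{X}\ \tfrac12\,\|\mathcal{A}(X) - b\|_2^2 \ \ \text{s.t.}\ \ \operatorname{rank}(X) \le k ,
\qquad\text{and}\qquad
\min_{U \in \mathbb{R}^{m\times k},\, V \in \mathbb{R}^{n\times k}}\ \tfrac12\,\|\mathcal{A}(UV^\top) - b\|_2^2 ,
\]
where the factorization $X = UV^\top$ removes the rank constraint at the price of nonconvexity, giving the unconstrained nonlinear least squares problem mentioned above.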

A Parallel Line Search Subspace Correction Method for Composite Convex Optimization

In this paper, we investigate a parallel subspace correction framework for composite convex optimization. The variables are first divided into a few blocks according to certain rules. At each iteration, the algorithms solve a suitable subproblem on each block simultaneously, construct a search direction by combining the solutions from all blocks, and then identify a new … Read more
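
A schematic of one iteration, with the exact subproblem and step-size rule left unspecified since the abstract is truncated (illustrative notation): solve, in parallel over the blocks $i = 1,\dots,N$,
\[
\hat x_i^{\,k} \approx \arg\min_{x_i}\ F\big(x_1^k,\dots,x_{i-1}^k,\, x_i,\, x_{i+1}^k,\dots,x_N^k\big),
\]
assemble the search direction $d^k$ whose $i$-th block is $\hat x_i^{\,k} - x_i^{\,k}$, and set $x^{k+1} = x^k + \alpha_k d^k$ with $\alpha_k$ chosen by a line search.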

Penalty Methods with Stochastic Approximation for Stochastic Nonlinear Programming

In this paper, we propose a class of penalty methods with stochastic approximation for solving stochastic nonlinear programming problems. We assume that only noisy gradients or function values of the objective function are available via calls to a stochastic first-order or zeroth-order oracle. In each iteration of the proposed methods, we minimize an exact penalty … Read more
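
For context, an exact penalty function of the kind minimized at each iteration takes, in a standard (assumed) form for $\min f(x)$ s.t. $c(x) = 0,\ g(x) \le 0$, the shape
\[
P_\rho(x) = f(x) + \rho\,\big(\|c(x)\|_1 + \|\max(g(x),0)\|_1\big),
\]
which is exact in the sense that, under suitable conditions and for sufficiently large finite $\rho$, its minimizers solve the constrained problem; in the stochastic setting described above, only noisy estimates of $\nabla f$ or of $f$ itself are available from the oracle.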