
Randomized Subspace Derivative-Free Optimization with Quadratic Models and Second-Order Convergence

We consider model-based derivative-free optimization (DFO) for large-scale problems, based on iterative minimization in random subspaces. We provide the first worst-case complexity bound for such methods for convergence to approximate second-order critical points, and show that this bound has significantly improved dimension dependence compared to standard full-space methods, provided low-accuracy solutions are desired and/or … Read more
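To make the subspace idea concrete, here is a minimal sketch of one random-subspace, model-based DFO iteration. The subspace dimension p, the Gaussian sketch, the regression quadratic model, and the crude Cauchy-type accept/reject step are all illustrative assumptions; this is a first-order sketch, not the authors' algorithm or its second-order variant.

import numpy as np

def subspace_dfo_step(f, x, delta, p=5, rng=None):
    """One illustrative iteration; f: R^n -> R, x: current point,
    delta: trust-region radius, p: subspace dimension (p << n)."""
    rng = np.random.default_rng(rng)
    n = x.size
    P = rng.standard_normal((n, p)) / np.sqrt(p)    # random subspace basis

    # Sample f on a small interpolation set in the subspace.
    m = (p + 1) * (p + 2) // 2                      # points for a full quadratic model
    S = rng.standard_normal((m, p)) * delta         # subspace displacements
    vals = np.array([f(x + P @ s) for s in S])

    # Least-squares fit of a quadratic model m(s) = c + g.s + s'Hs terms.
    feats = [np.concatenate(([1.0], s, np.outer(s, s)[np.triu_indices(p)])) for s in S]
    coef, *_ = np.linalg.lstsq(np.array(feats), vals, rcond=None)
    g = coef[1:p + 1]                               # model gradient in the subspace

    # Step of length delta along the negative model gradient (crude Cauchy-type step).
    s = -delta * g / (np.linalg.norm(g) + 1e-12)
    x_trial = x + P @ s
    return x_trial if f(x_trial) < f(x) else x      # simple accept/reject

# Example: a few iterations on a smooth test function in R^50.
f = lambda z: np.sum((z - 1.0) ** 2) + 0.1 * np.sum(z ** 4)
x = np.zeros(50)
for _ in range(20):
    x = subspace_dfo_step(f, x, delta=0.5)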

Tricks from the Trade for Large-Scale Markdown Pricing: Heuristic Cut Generation for Lagrangian Decomposition

In automated decision-making processes in the online fashion industry, the ‘predict-then-optimize’ paradigm is frequently applied, particularly for markdown pricing strategies. This typically involves a mixed-integer optimization step, which is crucial for maximizing profit and merchandise volume. In practice, the size and complexity of the optimization problem are prohibitive for using off-the-shelf solvers for mixed … Read more
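As a hedged illustration of the Lagrangian idea only (not the paper's decomposition or its heuristic cuts), the toy below relaxes a single linking sell-through constraint in an invented markdown problem with a multiplier, so the relaxed problem separates product by product and the multiplier is updated by a subgradient step. All data and the demand model are made up for illustration.

import numpy as np

rng = np.random.default_rng(0)
n_products, levels = 8, np.array([0.0, 0.1, 0.2, 0.3])    # candidate discount levels
price = rng.uniform(20, 60, n_products)
base_demand = rng.uniform(5, 15, n_products)
elasticity = rng.uniform(1.0, 3.0, n_products)

# demand[i, k]: units sold of product i at discount level k (toy demand model)
demand = base_demand[:, None] * (1 + elasticity[:, None] * levels[None, :])
revenue = (price[:, None] * (1 - levels[None, :])) * demand
target_units = 90.0                                         # linking constraint: total units >= target

lam, step = 0.0, 0.5
for it in range(100):
    # Relaxed objective separates by product: revenue + lam * units.
    scores = revenue + lam * demand
    choice = scores.argmax(axis=1)                          # best discount level per product
    units = demand[np.arange(n_products), choice].sum()
    lam = max(0.0, lam - step / (it + 1) * (units - target_units))   # subgradient step on the dual

print("discount per product:", levels[choice])
print("total units:", round(units, 1), " multiplier:", round(lam, 3))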

Efficient Proximal Subproblem Solvers for a Nonsmooth Trust-Region Method

In [R. J. Baraldi and D. P. Kouri, Mathematical Programming, (2022), pp. 1-40], we introduced an inexact trust-region algorithm for minimizing the sum of a smooth nonconvex function and a nonsmooth convex function. The principal expense of this method is in computing a trial iterate that satisfies the so-called fraction of Cauchy decrease condition, a bound that ensures … Read more
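For context, the Cauchy-type quantity referenced here is typically obtained from a proximal-gradient step. The sketch below is an assumed setup, not the solvers developed in the paper: the nonsmooth term is taken to be an l1 penalty purely for illustration, and the step is crudely scaled back into the trust region.

import numpy as np

def prox_l1(v, t):
    # Proximal operator of t*||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def cauchy_type_step(x, grad_f, lam, t, delta):
    """Proximal-gradient step for min f(x) + lam*||x||_1, scaled back
    so it does not leave a trust region of radius delta (illustrative only)."""
    s = prox_l1(x - t * grad_f(x), t * lam) - x
    norm_s = np.linalg.norm(s)
    return s if norm_s <= delta else (delta / norm_s) * s

# Example: f(x) = 0.5*||A x - b||^2 with random data.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((30, 80)), rng.standard_normal(30)
grad_f = lambda x: A.T @ (A @ x - b)
s = cauchy_type_step(np.zeros(80), grad_f, lam=0.1, t=0.01, delta=1.0)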

Splitted Levenberg-Marquardt Method for Large-Scale Sparse Problems

We consider large-scale nonlinear least squares problems with sparse residuals, each of which depends on a small number of variables. A decoupling procedure, which splits the original problem into a sequence of independent problems of smaller size, is proposed and analysed. The smaller problems are modified in a way that … Read more
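To illustrate the problem class (not the splitting method itself), the snippet below sets up a nonlinear least-squares problem in which each residual couples only two neighbouring variables and passes the sparsity pattern to SciPy's trust-region reflective solver; the chain-structured residuals are an invented example.

import numpy as np
from scipy.optimize import least_squares
from scipy.sparse import lil_matrix

n = 200

def residuals(x):
    # r_i couples only x[i] and x[i+1]: a chain of small, local residuals.
    return x[:-1] ** 2 + x[1:] - 1.0

sparsity = lil_matrix((n - 1, n))
for i in range(n - 1):
    sparsity[i, i] = sparsity[i, i + 1] = 1

sol = least_squares(residuals, x0=np.zeros(n), jac_sparsity=sparsity)
print(sol.cost, sol.optimality)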

Shape-Changing Trust-Region Methods Using Multipoint Symmetric Secant Matrices

In this work, we consider methods for large-scale, nonconvex unconstrained optimization. We propose a new trust-region method whose subproblem is defined using a so-called “shape-changing” norm together with densely initialized multipoint symmetric secant (MSS) matrices to approximate the Hessian. Shape-changing norms and dense initializations have been successfully used in the context of traditional quasi-Newton … Read more
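Schematically, the subproblem in question has the generic trust-region form (the specific MSS-based construction is in the paper):

\[
\min_{s \in \mathbb{R}^n} \; g_k^\top s + \tfrac12\, s^\top B_k s
\quad \text{s.t.} \quad \|s\|_{P_k} \le \Delta_k,
\]

where g_k is the gradient, B_k the quasi-Newton Hessian approximation, and the shape-changing norm \|\cdot\|_{P_k} is built from a (partial) eigendecomposition of B_k so that the subproblem separates and can be solved essentially in closed form.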

Solution Strategies for Integrated Distribution, Production, and Routing Problems Arising in Modular Manufacturing

Recently, there has been a paradigm shift by certain energy and chemical companies towards modular manufacturing, whereby transportable modular production units can be relocated between production facilities to meet the spatial and temporal changes in the availabilities, demands, and prices of the underlying commodities. We refer to the optimal distribution, production, and storage of commodities, … Read more

On solving large-scale multistage stochastic problems with a new specialized interior-point approach

A novel approach based on a specialized interior-point method (IPM) is presented for solving large-scale multistage stochastic continuous optimization problems, which represent the uncertainty by strategic multistage and operational two-stage scenario trees, the latter being rooted at the strategic nodes. This new solution approach considers a split-variable formulation of the strategic and operational structures, for … Read more
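As a schematic, much simplified picture of a split-variable formulation (not the paper's model): for a strategic decision x with operational two-stage subtrees j = 1, …, J rooted at it, introduce a copy x^j for each subtree plus linking constraints x^j = x, which gives the block-angular structure a specialized IPM can exploit:

\[
\min_{x,\;\{x^j,\,y^j\}} \; c^\top x + \sum_{j=1}^{J} p_j\, q_j^\top y^j
\quad \text{s.t.} \quad A x = b, \qquad T_j x^j + W_j y^j = h_j, \qquad x^j = x, \quad j = 1,\dots,J.
\]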

New interior-point approach for one- and two-class linear support vector machines using multiple variable splitting

Multiple variable splitting is a general technique for decomposing problems by using copies of variables and additional linking constraints that equate their values. The resulting large optimization problem can be solved with a specialized interior-point method that exploits the problem structure and computes the Newton direction with a combination of direct and iterative solvers (i.e., … Read more
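In its simplest form, the variable-splitting device mentioned here replaces a variable shared by k blocks with k copies plus linking constraints that equate them; a generic version (not the paper's SVM formulation) reads:

\[
\min_{x} \; \sum_{j=1}^{k} f_j(x)
\quad \Longleftrightarrow \quad
\min_{x^0,\,x^1,\dots,x^k} \; \sum_{j=1}^{k} f_j(x^j)
\quad \text{s.t.} \quad x^j = x^0, \quad j = 1,\dots,k,
\]

and the linking constraints give the constraint matrix the structure exploited by the specialized interior-point solver.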

An extended delayed weighted gradient algorithm for solving strongly convex optimization problems

The recently developed delayed weighted gradient method (DWGM) is competitive with the well-known conjugate gradient (CG) method for the minimization of strictly convex quadratic functions. Like the CG method, DWGM has key optimality and orthogonality properties that justify its practical performance. The main difference from the CG method is that, instead of … Read more
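For orientation, the problem class and gradient recursion common to these methods are (a standard statement, not the DWGM update itself):

\[
f(x) = \tfrac12\, x^\top A x - b^\top x, \qquad A = A^\top \succ 0, \qquad g_k = A x_k - b,
\]

and for any gradient-type iteration x_{k+1} = x_k - \alpha_k g_k the gradients satisfy g_{k+1} = (I - \alpha_k A)\, g_k; CG and DWGM differ in how past gradient information enters the step.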

Minimization over the l1-ball using an active-set non-monotone projected gradient

The l1-ball is a nicely structured feasible set that is widely used in many fields (e.g., machine learning, statistics and signal analysis) to enforce sparsity in model solutions. In this paper, we devise an active-set strategy for efficiently dealing with minimization problems over the l1-ball and embed it into a tailored algorithmic scheme … Read more
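For reference, projecting onto the l1-ball is itself cheap. The sketch below uses the standard sort-based projection inside a plain projected-gradient loop on a least-squares objective; it is not the active-set, non-monotone scheme of the paper, and the data and step size are illustrative.

import numpy as np

def project_l1_ball(v, tau=1.0):
    """Euclidean projection of v onto {z : ||z||_1 <= tau} (sort-based)."""
    if np.abs(v).sum() <= tau:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u - (css - tau) / np.arange(1, v.size + 1) > 0)[0][-1]
    theta = (css[rho] - tau) / (rho + 1)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

# Projected gradient on 0.5*||A x - b||^2 over the l1-ball of radius 5.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((40, 100)), rng.standard_normal(40)
x, step = np.zeros(100), 1.0 / np.linalg.norm(A, 2) ** 2
for _ in range(200):
    x = project_l1_ball(x - step * A.T @ (A @ x - b), tau=5.0)
print("objective:", 0.5 * np.linalg.norm(A @ x - b) ** 2, " ||x||_1:", np.abs(x).sum())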