Self-tuned, block-coordinate, and incremental mirror descent methods with applications in machine learning and wireless communications
Abstract
Uncertainty, high dimensionality, and matrix structure of the decision variables are among the main challenges that arise in a wide range of stochastic optimization and equilibrium problems in machine learning and signal processing. Accordingly, the main goal of this dissertation is the development of computational methods that can cope with these challenges. To this end, we consider stochastic mirror descent (SMD) methods, which are among the popular avenues for solving stochastic optimization and variational inequality problems. Despite significant advances in the convergence and complexity analysis of SMD methods over the past two decades, much remains to be learned about reducing the sensitivity of their performance to the choice of the step-size rule. Motivated by this research gap, in the first part of this dissertation we develop a unifying self-tuned randomized block-coordinate SMD method for solving high-dimensional stochastic optimization problems. The proposed method is unifying in the sense that it addresses both smooth and nonsmooth regimes. We establish almost sure convergence of the iterates generated by the scheme. Importantly, we show that the self-tuned step sizes minimize a mean-squared error bound of the method, resulting in faster convergence than standard SMD methods. Numerical experiments on training support vector machines show that the self-tuned schemes are robust with respect to the choice of problem parameters and data sets. The second part of this dissertation focuses on multi-user optimization problems over semidefinite matrix spaces. The motivation arises in wireless communication networks composed of transmitters and receivers that generate and detect signals, respectively. The competition among the transmitters in the network can be characterized as a non-cooperative Nash game with positive semidefinite matrix variables.
To compute the equilibrium, we develop an SMD method equipped with a convergence rate statement. For cooperative regimes, we develop an incremental mirror descent method in which each user communicates with its adjacent users over the network. We establish convergence of the proposed algorithm and also derive a non-asymptotic convergence rate.
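To make the central primitive of the abstract concrete, the following is a minimal sketch of one classical mirror descent variant: the entropic (exponentiated-gradient) update over the probability simplex, applied to a hypothetical linear objective. The objective, step size, and iteration count are illustrative choices, not from the dissertation, and the fixed step size is exactly what the self-tuned schemes described above would replace with adaptive values.

```python
import math

def mirror_descent_simplex(grad, x0, step, iters):
    """Entropic mirror descent on the probability simplex.

    Each iteration applies a multiplicative-weights update
    x_i <- x_i * exp(-step * g_i), then renormalizes so the
    iterate stays a probability vector.
    """
    x = list(x0)
    for _ in range(iters):
        g = grad(x)  # (possibly stochastic) gradient at the current iterate
        w = [xi * math.exp(-step * gi) for xi, gi in zip(x, g)]
        s = sum(w)
        x = [wi / s for wi in w]
    return x

# Hypothetical linear objective f(x) = c^T x over the 3-simplex;
# its minimizer concentrates all mass on the smallest coordinate of c.
c = [0.9, 0.1, 0.5]
x = mirror_descent_simplex(lambda x: c, [1/3, 1/3, 1/3], step=0.5, iters=200)
```

After enough iterations the iterate places nearly all mass on the coordinate with the smallest cost, illustrating how the mirror map (here, negative entropy) keeps iterates feasible without an explicit projection step.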
Collections
- OSU Dissertations