
dc.contributor.advisor Yousefian, Farzad
dc.contributor.author Kaushik, Harshal D.
dc.date.accessioned 2023-09-20T22:17:38Z
dc.date.available 2023-09-20T22:17:38Z
dc.date.issued 2021-09
dc.identifier.uri https://hdl.handle.net/11244/339594
dc.description.abstract Traditionally, constrained optimization models include constraints in the form of equations and inequalities. In this dissertation, we consider a unifying class of optimization problems with variational inequality (VI) constraints that captures a wide range of applications that may not be formulated within the existing standard constrained models. The main motivation arises from the notion of efficiency of equilibria in multi-agent networks. To this end, we first consider a class of optimization problems with Cartesian variational inequality (CVI) constraints, where the objective function is convex and the CVI is associated with a monotone mapping and a convex Cartesian product set. Motivated by the absence of performance guarantees for addressing this class of problems, we develop an averaged randomized block iteratively regularized gradient scheme. The main contributions are as follows: (i) when the set of the CVI is bounded, we derive new non-asymptotic rate statements for suboptimality and infeasibility error metrics; (ii) when the set of the CVI is unbounded, we establish global convergence in both an almost-sure and a mean sense. We numerically validate the proposed method on a networked Nash-Cournot competition. We also implement our scheme on classical image-deblurring applications and numerically demonstrate that the proposed scheme outperforms the standard sequential regularization method. In the second part, we consider a class of constrained multi-agent optimization problems where the goal is to cooperatively minimize the sum of agent-specific objectives. In this framework, the objective function and the VI mappings are only locally known. We develop an iteratively regularized incremental gradient method in which the agents communicate over a cycle graph, and we derive new non-asymptotic agent-wise convergence rates for suboptimality and infeasibility metrics. We numerically validate the proposed scheme on a transportation network problem. We also apply the proposed scheme to a special case of this distributed formulation in which the VI constraint characterizes a feasible set, and we show its superiority over existing incremental gradient methods. A potential future direction is to extend the results of this dissertation by employing gradient tracking techniques and by addressing multi-agent systems with asynchronous communications under weaker assumptions on the network topology.
dc.format application/pdf
dc.language en_US
dc.rights Copyright is held by the author who has granted the Oklahoma State University Library the non-exclusive right to share this material in its institutional repository. Contact Digital Library Services at lib-dls@okstate.edu or 405-744-9161 for the permission policy on the use, reproduction or distribution of this material.
dc.title On distributed optimization problems with variational inequality constraints: Algorithms, complexity analysis, and applications
dc.contributor.committeeMember Heragu, Sunderesh
dc.contributor.committeeMember Buchanan, Austin
dc.contributor.committeeMember Rudra, Pratyaydipta
osu.filename Kaushik_okstate_0664D_17388.pdf
osu.accesstype Open Access
dc.type.genre Dissertation
dc.type.material Text
dc.subject.keywords convergence and rate analysis
dc.subject.keywords first-order optimization methods
dc.subject.keywords large-scale optimization
dc.subject.keywords multi-agent distributed optimization
dc.subject.keywords non-cooperative Nash games and variational inequalities
dc.subject.keywords nonlinear programming
thesis.degree.discipline Industrial Engineering and Management
thesis.degree.grantor Oklahoma State University
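
The abstract above refers to optimization problems whose feasible region is the solution set of a (Cartesian) variational inequality and to iteratively regularized gradient updates. As a rough orientation only, a generic member of this problem class and a prototypical iteratively regularized projected-gradient step can be sketched as follows; the symbols f, F, X, the step size \gamma_k, and the regularization parameter \eta_k are illustrative notation, and the dissertation's actual schemes (with averaging, randomized block selection, and incremental agent-wise updates) and parameter choices are as developed therein.

\[
\min_{x}\ f(x) \quad \text{subject to} \quad x \in \mathrm{SOL}(X,F),
\qquad
\mathrm{SOL}(X,F) \;=\; \left\{\, x^{*} \in X \;:\; F(x^{*})^{\top}(x - x^{*}) \ge 0 \ \ \text{for all } x \in X \,\right\},
\]

\[
x_{k+1} \;=\; \Pi_{X}\!\left[\, x_{k} \;-\; \gamma_{k}\left( F(x_{k}) + \eta_{k}\, \nabla f(x_{k}) \right) \right],
\]

where \Pi_{X} denotes the Euclidean projection onto X and the sequences \gamma_{k} and \eta_{k} diminish at suitable relative rates.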

