dc.contributor.advisor          Yousefian, Farzad
dc.contributor.author           Yevale, Jayesh Vinayak
dc.date.accessioned             2022-05-13T19:05:29Z
dc.date.available               2022-05-13T19:05:29Z
dc.date.issued                  2021-12
dc.identifier.uri               https://hdl.handle.net/11244/335799
dc.description.abstract         Distributed optimization has been a trending research topic over the past few decades, mainly due to recent advances in wireless sensor technology and to emerging applications in machine learning. Traditionally, optimization problems were addressed using centralized schemes in which all the data is assumed to be available in one place. The main reasons motivating distributed implementations include: (i) the collected data may not be available in a centralized location, (ii) the privacy of the data held by individual agents must be preserved, and (iii) data processors face memory and computational power limitations. To address these challenges, distributed optimization provides a framework where agents (e.g., data processors or sensors) communicate their local information with each other over a network and seek to minimize a global objective function. In some applications, the data may have a huge sample size or a large number of attributes; problems associated with this type of data are often known as big data problems. In this thesis, our goal is to address such high-dimensional distributed optimization problems, where the computation of the local gradient mappings may become expensive.
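(For context: the global objective mentioned in the abstract is conventionally posed as a finite sum over the agents' local functions. The formulation below is only this standard template; the symbols m, n, f_i, and x are our notation and do not appear in this record.)

    \min_{x \in \mathbb{R}^n} \; f(x) := \frac{1}{m} \sum_{i=1}^{m} f_i(x)

Here each of the m agents holds only its local function f_i and exchanges information solely with its neighbors over the network, while all agents jointly minimize f.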
dc.description.abstract         Recently, a distributed optimization algorithm called Distributed Stochastic Gradient Tracking (DSGT) has been developed to address possibly large-scale problems under stochasticity. We develop a novel iterative method called Distributed Randomized Block Stochastic Gradient Tracking (DRBSGT), a randomized block variant of the existing DSGT method. We derive new non-asymptotic convergence rates of the order 1/k and 1/k^2 in terms of an optimality metric and a consensus violation metric, respectively. Importantly, while block-coordinate schemes have been studied for distributed optimization problems before, the proposed algorithm appears to be the first randomized block-coordinate gradient tracking method equipped with such convergence rate statements. We validate the performance of the proposed method on the MNIST data set and a synthetic data set under different network settings. A potential future research direction is to extend the results of this thesis to an asynchronous variant of the proposed method, which would allow for the consideration of communication delays.
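(For context: the sketch below illustrates the generic gradient-tracking template with a randomized coordinate-block update, the family of methods the abstract describes. It is a minimal illustration under our own assumptions; the function name, parameters, and update order are ours, and the thesis's exact DRBSGT recursion, step-size rules, and stochastic gradient model are not reproduced in this record.)

import numpy as np

def block_gradient_tracking(grads, W, n, num_blocks, alpha, iters, seed=0):
    """Toy randomized-block gradient tracking loop (illustrative sketch only).

    grads      : list of m callables; grads[i](x) returns a (possibly
                 stochastic) gradient estimate of agent i's local objective f_i
    W          : (m, m) doubly stochastic mixing matrix encoding the network
    n          : dimension of the shared decision variable
    num_blocks : number of coordinate blocks the variable is split into
    alpha      : constant step size
    """
    rng = np.random.default_rng(seed)
    m = W.shape[0]
    blocks = np.array_split(np.arange(n), num_blocks)

    x = np.zeros((m, n))                                # one local iterate per agent
    grad = np.stack([grads[i](x[i]) for i in range(m)])
    y = grad.copy()                                     # trackers of the average gradient
    for _ in range(iters):
        b = blocks[rng.integers(num_blocks)]            # sample one coordinate block
        # consensus-plus-descent step, applied only on the sampled block
        x[:, b] = W @ (x[:, b] - alpha * y[:, b])
        # tracking update on the same block: mix trackers, add gradient increment
        new_grad = np.stack([grads[i](x[i]) for i in range(m)])
        y[:, b] = W @ y[:, b] + new_grad[:, b] - grad[:, b]
        grad = new_grad
        # full gradients are recomputed here for brevity; the point of a block
        # scheme is that only the sampled block's partial gradient is needed
    return x.mean(axis=0)

For instance, with quadratic local losses f_i(x) = ||A_i x - b_i||^2 one could pass grads[i] = lambda x, A=A_i, b=b_i: 2 * A.T @ (A @ x - b); under a connected network and a suitably small alpha, the agents' iterates approach consensus on a minimizer of the average loss.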
dc.format                       application/pdf
dc.language                     en_US
dc.rights                       Copyright is held by the author who has granted the Oklahoma State University Library the non-exclusive right to share this material in its institutional repository. Contact Digital Library Services at lib-dls@okstate.edu or 405-744-9161 for the permission policy on the use, reproduction or distribution of this material.
dc.title                        Distributed randomized block stochastic gradient tracking methods: Rate analysis and numerical experiments
dc.contributor.committeeMember  Yao, Bing
dc.contributor.committeeMember  Liu, Chenang
osu.filename                    Yevale_okstate_0664M_17406.pdf
osu.accesstype                  Open Access
dc.type.genre                   Thesis
dc.type.material                Text
dc.subject.keywords             distributed optimization
dc.subject.keywords             gradient tracking
dc.subject.keywords             large-scale optimization
dc.subject.keywords             multi-agent optimization
dc.subject.keywords             multi-agent systems
dc.subject.keywords             stochastic gradient tracking
thesis.degree.discipline        Industrial Engineering and Management
thesis.degree.grantor           Oklahoma State University

