dc.contributor.advisor	Yousefian, Farzad
dc.contributor.author	Patel, Vandan
dc.date.accessioned	2018-06-13T16:19:39Z
dc.date.available	2018-06-13T16:19:39Z
dc.date.issued	2017-07-01
dc.identifier.uri	https://hdl.handle.net/11244/300037
dc.description.abstract	This research is motivated by challenges in addressing optimization models arising in Big Data. Such models are often formulated as large-scale stochastic optimization problems. When the probability distribution of the data is unknown, the Sample Average Approximation (SAA) scheme can be employed, which results in an Empirical Risk Minimization (ERM) problem. In addressing this class of problems, deterministic solution methods such as the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method face high per-iteration computational cost and memory requirements due to the presence of uncertainty and the high dimensionality of the solution space. To cope with these challenges, stochastic methods with limited-memory variants have recently been developed. However, the solutions generated by such methods may be dense, requiring high memory capacity. To generate sparse solutions, the literature employs the standard L1 regularization technique, in which an L1 penalty with a constant regularization parameter is added to the objective function; this changes the original problem, so the solutions obtained by solving the regularized problem are only approximate solutions to the original one. Moreover, limited guidance is available in the literature for obtaining sparse solutions to the original problem. To address this gap, in this research we develop an iterative L1 Regularized Limited-memory Stochastic BFGS (iRLS-BFGS) method in which the L1 regularization parameter and the step-size parameter are updated simultaneously at each iteration. Our goal is to find suitable decay rates for these two sequences. To address this research question, we first implement the iRLS-BFGS algorithm on a Big Data text classification problem and provide a detailed numerical comparison of its performance under different choices of the update rules. Our numerical experiments indicate that the best convergence is achieved when both the step-size and the L1 regularization parameter decay at a rate of order 1/√k. To further support this finding, we apply the method, with the same update rule, to a large-scale image deblurring problem arising in signal processing. We obtain much clearer deblurred images than the classical algorithm's output when both the step-size and the L1 regularization parameter decay at a rate of order 1/√k. (A minimal illustrative sketch of this update rule follows this record.)
dc.format	application/pdf
dc.language	en_US
dc.rights	Copyright is held by the author who has granted the Oklahoma State University Library the non-exclusive right to share this material in its institutional repository. Contact Digital Library Services at lib-dls@okstate.edu or 405-744-9161 for the permission policy on the use, reproduction or distribution of this material.
dc.title	Iterative L1 Regularized Limited Memory Stochastic BFGS Algorithm and Numerical Experiments for Big Data Applications
dc.contributor.committeeMember	Heragu, Sunderesh
dc.contributor.committeeMember	Zhao, Chaoyue
osu.filename	Patel_okstate_0664M_15289.pdf
osu.accesstype	Open Access
dc.description.department	Industrial Engineering & Management
dc.type.genre	Thesis
dc.type.material	text
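
The abstract's central algorithmic idea, updating the step-size and the L1 regularization parameter together with both decaying at a rate of order 1/√k, can be illustrated with a short sketch. The Python below is only a simplified stand-in, assuming a stochastic proximal-gradient step with soft-thresholding in place of the thesis's limited-memory BFGS machinery; the names irls_bfgs_sketch and stochastic_grad, and all parameter values, are hypothetical.

    import numpy as np

    def soft_threshold(v, tau):
        # Elementwise soft-thresholding: the proximal operator of tau * ||.||_1.
        return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

    def irls_bfgs_sketch(stochastic_grad, x0, n_iters=5000, gamma0=0.1, lam0=0.1):
        # Hypothetical loop illustrating the simultaneous decay of the
        # step-size gamma_k and the L1 parameter lambda_k, both O(1/sqrt(k)).
        # The limited-memory BFGS curvature scaling of the thesis is omitted
        # (identity Hessian approximation), so this is plain stochastic
        # proximal-gradient rather than the full iRLS-BFGS method.
        x = np.asarray(x0, dtype=float).copy()
        rng = np.random.default_rng(0)
        for k in range(1, n_iters + 1):
            gamma_k = gamma0 / np.sqrt(k)   # step-size decays at rate 1/sqrt(k)
            lam_k = lam0 / np.sqrt(k)       # L1 parameter decays at rate 1/sqrt(k)
            g = stochastic_grad(x, rng)     # stochastic gradient of the smooth loss
            # Gradient step, then soft-thresholding applies the (vanishing)
            # L1 regularization and keeps the iterate sparse.
            x = soft_threshold(x - gamma_k * g, gamma_k * lam_k)
        return x

    # Toy usage: sparse least squares with smooth part (1/2m)||Ax - b||^2.
    rng0 = np.random.default_rng(1)
    A = rng0.standard_normal((200, 50))
    x_true = np.zeros(50)
    x_true[:5] = 1.0
    b = A @ x_true

    def stochastic_grad(x, rng):
        i = rng.integers(A.shape[0])        # sample one row: mini-batch of size 1
        return (A[i] @ x - b[i]) * A[i]     # unbiased estimate of the average-loss gradient

    x_hat = irls_bfgs_sketch(stochastic_grad, np.zeros(50))
    print("nonzeros in solution:", np.count_nonzero(np.round(x_hat, 3)))

Because lambda_k vanishes as k grows, the bias introduced by the L1 term decays over the iterations, which is what lets a scheme of this kind target sparse solutions of the original, unregularized problem rather than of a fixed regularized surrogate.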

