
Date

2023-05-12

Creative Commons
Except where otherwise noted, this item's license is described as Attribution-NonCommercial-NoDerivatives 4.0 International

Abstract

My thesis focuses on designing scalable machine learning algorithms that leverage theoretical advances in mathematics. In particular, I investigate two directions in which scalability plays an important role: fair machine learning and randomized feature representations. In fair machine learning, my research concentrates on achieving individual fairness in the single-model and decoupled-model settings with minimal data-labeling budgets. For randomized feature representations, I propose a model-agnostic framework for designing computationally efficient randomized machine learning algorithms with provable performance guarantees, which demonstrates that individual models need not be weakly trained before they are optimally ensembled. Furthermore, I contribute to the scalable estimation of the kernel matrix spectral norm: I propose applying sketching techniques to estimate the spectral norm efficiently, theoretically derive the estimation error, and empirically demonstrate the efficiency of the estimation in a time-constrained setting.
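To make the sketching idea concrete, the snippet below is a minimal, hypothetical illustration rather than the algorithm from the thesis: it forms a dense RBF kernel matrix and estimates its spectral norm from a Gaussian random sketch using a few rounds of randomized subspace iteration. The functions rbf_kernel and sketched_spectral_norm, and all parameter choices, are assumptions made for illustration; the thesis's actual estimator and error analysis may differ.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """Dense RBF kernel matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def sketched_spectral_norm(K, sketch_size=64, n_iter=5, seed=0):
    """Estimate ||K||_2 from a Gaussian sketch of K (illustrative only).

    A few rounds of subspace (power) iteration on the sketched columns give
    a lower bound on the true spectral norm that tightens as sketch_size
    and n_iter grow.
    """
    rng = np.random.default_rng(seed)
    n = K.shape[0]
    S = rng.standard_normal((n, sketch_size)) / np.sqrt(sketch_size)
    Y = K @ S                          # initial sketch: n x sketch_size
    for _ in range(n_iter):
        Q, _ = np.linalg.qr(Y)         # orthonormalize to keep iteration stable
        Y = K @ (K.T @ Q)              # power iteration (K is symmetric PSD)
    Q, _ = np.linalg.qr(Y)             # basis for the approximate top subspace
    # Largest singular value of the projected matrix approximates ||K||_2.
    return np.linalg.norm(K @ Q, ord=2)

# Usage: compare the sketched estimate against the exact spectral norm.
X = np.random.default_rng(1).standard_normal((1000, 20))
K = rbf_kernel(X, gamma=0.5)
print("sketched estimate:", sketched_spectral_norm(K))
print("exact norm:       ", np.linalg.norm(K, ord=2))
```

The sketched estimate is always a lower bound on the exact spectral norm; in a time-constrained setting one would trade sketch_size and n_iter against accuracy rather than computing the full decomposition.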

Keywords

Machine Learning, Computer Science, Scalability
