Sensitive Attribute Association Bias in Latent Factor Recommendation Algorithms: Theory and In Practice
dc.contributor.advisor | Hougen, Dean | |
dc.contributor.author | Beattie, Alexandra | |
dc.contributor.committeeMember | Nicholson, Charles | |
dc.contributor.committeeMember | Razzaghi, Talayeh | |
dc.contributor.committeeMember | Diochnos, Dimitrios | |
dc.contributor.committeeMember | Harris-Watson, Alexandra | |
dc.date.accessioned | 2023-12-06T22:27:22Z | |
dc.date.available | 2023-12-06T22:27:22Z | |
dc.date.issued | 2023-12-15 | |
dc.date.manuscript | 2023-12-06 | |
dc.description.abstract | This dissertation presents methods for evaluating and mitigating a relatively unexplored form of bias in recommendation systems, which we refer to as attribute association bias (AAB). AAB can be introduced when leveraging latent factor recommendation models because they entangle explicit and implicit entity attributes in the trained latent space. This type of bias occurs when entity embeddings show significant association with specific explicit or implicit entity attributes, which can introduce representational harms for both consumer and provider stakeholders. We present a novel analysis framework to help practitioners evaluate their latent factor recommendation models for AAB. The framework consists of three main techniques for gaining insight into sensitive AAB in the recommendation latent space: bias direction creation, bias evaluation metrics, and multi-group evaluation. The methods in our evaluation framework were inspired by techniques from the natural language processing research community for measuring gender bias in learned language representations. Additionally, we explore how this bias can be reinforced and produce feedback loops through retraining. Finally, we explore possible mitigation techniques for addressing this bias. We demonstrate our methodology primarily through two case studies that evaluate user gender association bias in latent factor recommendation. With our methods, we uncover the existence of user gender association bias and compare the methods we propose, guiding practitioners in how best to apply our techniques to their own systems. In addition to user gender, we experiment with measuring user age association bias as a means of evaluating non-binary AAB. | en_US |
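The abstract does not give the exact formulation of these techniques, but since it describes them as inspired by NLP methods for measuring gender bias in learned representations, the following is a minimal sketch of one plausible instantiation: a Bolukbasi-style bias direction between two attribute groups plus cosine-similarity association scores over entity embeddings. All function names, array shapes, and groupings here are hypothetical illustrations, not the dissertation's actual implementation.

# Illustrative sketch only; the names and groupings below are assumptions,
# not the methods defined in the dissertation.
import numpy as np

def bias_direction(group_a_embeddings: np.ndarray, group_b_embeddings: np.ndarray) -> np.ndarray:
    """Bias direction as the normalized difference of the two group centroids."""
    direction = group_a_embeddings.mean(axis=0) - group_b_embeddings.mean(axis=0)
    return direction / np.linalg.norm(direction)

def association_scores(entity_embeddings: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Cosine similarity of each entity embedding with the bias direction."""
    norms = np.linalg.norm(entity_embeddings, axis=1, keepdims=True)
    return (entity_embeddings / norms) @ direction

# Toy usage with random embeddings standing in for trained latent factors,
# grouped by a sensitive user attribute (e.g., two gender groups).
rng = np.random.default_rng(0)
group_a = rng.normal(size=(100, 32))
group_b = rng.normal(size=(100, 32))
d = bias_direction(group_a, group_b)
print(association_scores(rng.normal(size=(10, 32)), d))

In this kind of approach, the distribution of association scores (rather than any single value) is what would be compared across groups or across retraining iterations.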
dc.identifier.uri | https://hdl.handle.net/11244/340008 | |
dc.language | en_US | en_US |
dc.subject | Computer Science | en_US |
dc.subject | Recommender Systems | en_US |
dc.subject | Responsible AI | en_US |
dc.subject | Machine Learning Evaluation | en_US |
dc.thesis.degree | Ph.D. | en_US |
dc.title | Sensitive Attribute Association Bias in Latent Factor Recommendation Algorithms: Theory and In Practice | en_US |
ou.group | Gallogly College of Engineering | en_US |
shareok.orcid | 0000-0003-3351-5951 | en_US |
Files
License bundle (1 file)
- Name: license.txt
- Size: 1.71 KB
- Format: Item-specific license agreed upon to submission