Sensitive Attribute Association Bias in Latent Factor Recommendation Algorithms: Theory and In Practice

dc.contributor.advisor: Hougen, Dean
dc.contributor.author: Beattie, Alexandra
dc.contributor.committeeMember: Nicholson, Charles
dc.contributor.committeeMember: Razzaghi, Talayeh
dc.contributor.committeeMember: Diochnos, Dimitrios
dc.contributor.committeeMember: Harris-Watson, Alexandra
dc.date.accessioned: 2023-12-06T22:27:22Z
dc.date.available: 2023-12-06T22:27:22Z
dc.date.issued: 2023-12-15
dc.date.manuscript: 2023-12-06
dc.description.abstract: This dissertation presents methods for evaluating and mitigating a relatively unexplored bias in recommendation systems, which we refer to as attribute association bias. Attribute association bias (AAB) can be introduced when leveraging latent factor recommendation models due to their ability to entangle explicit and implicit attributes into the trained latent space. This type of bias occurs when entity embeddings exhibit significant levels of association with specific explicit or implicit entity attributes, and it thus has the potential to introduce representational harms for both consumer and provider stakeholders. We present a novel evaluation framework to help practitioners assess their latent factor recommendation models for AAB. This framework consists of three main techniques for gaining insight into sensitive AAB in the recommendation latent space: bias direction creation, bias evaluation metrics, and multi-group evaluation. The methods in our evaluation framework were inspired by techniques presented by the natural language processing research community for measuring gender bias in learned language representations. Additionally, we explore how this bias can be reinforced and produce feedback loops via retraining. Finally, we explore possible mitigation techniques for addressing said bias. We demonstrate our methodology primarily with two case studies that evaluate user gender association bias in latent factor recommendation. With our methods, we uncover the existence of user gender association bias and compare the various methods we propose, to guide practitioners in how best to apply our techniques to their systems. In addition to exploring user gender, we experiment with measuring user age association bias as a means of evaluating non-binary AAB.
dc.identifier.uri: https://hdl.handle.net/11244/340008
dc.language: en_US
dc.subject: Computer Science
dc.subject: Recommender Systems
dc.subject: Responsible AI
dc.subject: Machine Learning Evaluation
dc.thesis.degree: Ph.D.
dc.title: Sensitive Attribute Association Bias in Latent Factor Recommendation Algorithms: Theory and In Practice
ou.group: Gallogly College of Engineering
shareok.orcid: 0000-0003-3351-5951
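The abstract above describes building a bias direction from trained embeddings and scoring entities against it, following gender-bias measurement techniques from NLP. The snippet below is a minimal, hypothetical sketch of that general idea only, not the dissertation's actual implementation: the function names, array shapes, and the centroid-difference construction are assumptions made for illustration. It assumes user embeddings from a latent factor model grouped by a binary sensitive attribute, forms a bias direction as the normalized difference of the two group centroids, and scores entity embeddings by cosine similarity with that direction.

# Illustrative sketch only (assumed names and shapes; not the dissertation's
# actual method): an NLP-style sensitive-attribute bias direction computed
# from trained user embeddings, plus cosine-similarity association scores.
import numpy as np

def bias_direction(group_a_embeddings, group_b_embeddings):
    # Normalized difference of the two groups' centroid embeddings.
    direction = group_a_embeddings.mean(axis=0) - group_b_embeddings.mean(axis=0)
    return direction / np.linalg.norm(direction)

def association_scores(entity_embeddings, direction):
    # Cosine similarity of each entity vector with the bias direction.
    norms = np.linalg.norm(entity_embeddings, axis=1, keepdims=True)
    return (entity_embeddings / norms) @ direction

# Stand-in random data; real inputs would be trained latent factors grouped
# by a sensitive attribute (e.g., self-reported user gender).
rng = np.random.default_rng(0)
users_a = rng.normal(size=(500, 64))
users_b = rng.normal(size=(500, 64))
items = rng.normal(size=(1000, 64))

scores = association_scores(items, bias_direction(users_a, users_b))
print(scores.mean(), scores.std())

The centroid-difference construction mirrors the "gender direction" idea from NLP word-embedding bias work referenced in the abstract; the dissertation itself compares several ways of creating the bias direction and several evaluation metrics.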

Files

Original bundle
Name: 2023_Beattie_Alexandra_Dissertation.pdf
Size: 1.25 MB
Format: Adobe Portable Document Format

Name: 2023_Beattie_Alexandra_Dissertation.zip
Size: 1.85 MB
Format: Unknown data format

License bundle

Name: license.txt
Size: 1.71 KB
Description: Item-specific license agreed upon to submission