
dc.contributor.advisor	Edwards, Bryan D.
dc.contributor.author	Erdmann, Marjorie A.
dc.date.accessioned	2023-05-31T17:22:34Z
dc.date.available	2023-05-31T17:22:34Z
dc.date.issued	2022-12
dc.identifier.uri	https://hdl.handle.net/11244/337739
dc.description.abstract	The primary objective of this research was to gain an understanding of affective trust in AI (how comfortable individuals feel with various AI applications). This dissertation tested a model of affective trust in AI grounded in interpersonal trust theories, focusing on the effects of the perceived benevolence of AI, an overlooked factor in AI trust research. In Study 1a, online survey participants evaluated 20 AI applications with single-item measures. In Study 1b, four AI applications were evaluated with multi-item measures. Perceived benevolence was significantly and positively associated with affective trust, over and above cognitive trust and familiarity, in 21 of 24 AI tests. Confirmatory factor analysis suggested four factors, supporting the theory that cognitive trust and affective trust in AI are distinct factors. The secondary objective was to test the utility of manipulating the perceived benevolence of AI. In Study 2, online survey participants were randomly assigned to one of two groups: one group evaluated 10 AI applications described as “augmented intelligence” that “collaborates with” a specific human, while the other group evaluated the exact same AI applications described as “artificial intelligence.” The augmentation manipulation did not matter; there were no significant direct or indirect effects on benevolence or affective trust. These results imply that “augmented intelligence” positioning has no significant effect on affective trust, counter to practitioners’ beliefs. In Study 3, online survey participants were randomly assigned to one of two groups: one received benevolence messaging (a message informing the participant that the AI was intended for human welfare) for five AI applications, and the other did not.
dc.description.abstract	Benevolence messaging was also tested to see whether it moderated contexts expected to diminish affective trust (likelihood of worker replacement and likelihood of death from error). Perceived benevolence was not influenced by the manipulation. Surprisingly, likelihood of worker replacement had no significant association with affective trust, and likelihood of death from error had only one significant association. People may be more ambivalent about these contexts than previously thought. This research expanded the understanding of affective trust in AI by identifying the importance of perceived benevolence. Until benevolence messaging can be shown to boost perceptions of benevolence, the success of that strategy remains unknown.
dc.format	application/pdf
dc.language	en_US
dc.rights	Copyright is held by the author who has granted the Oklahoma State University Library the non-exclusive right to share this material in its institutional repository. Contact Digital Library Services at lib-dls@okstate.edu or 405-744-9161 for the permission policy on the use, reproduction or distribution of this material.
dc.title	Understanding affective trust in AI: The effects of perceived benevolence
dc.contributor.committeeMember	Delen, Dursun
dc.contributor.committeeMember	Pappas, James M.
dc.contributor.committeeMember	Wheeler, Denna L.
osu.filename	Erdmann_okstate_0664D_17890.pdf
osu.accesstype	Open Access
dc.type.genre	Dissertation
dc.type.material	Text
dc.subject.keywords	affective trust
dc.subject.keywords	artificial intelligence
dc.subject.keywords	cognitive trust
dc.subject.keywords	perceived benevolence
thesis.degree.discipline	Business Administration
thesis.degree.grantor	Oklahoma State University

