
dc.contributor.advisor  Ebert, David
dc.contributor.author  Donner, Catherine
dc.date.accessioned  2024-04-29T19:29:15Z
dc.date.available  2024-04-29T19:29:15Z
dc.date.issued  2024-05-10
dc.identifier.uri  https://hdl.handle.net/11244/340251
dc.description.abstract  Misinformation has emerged as a pressing public policy concern, prompting transdisciplinary research in the data science field. News journalism provides a foundation for free speech in modern society, yet misinformation in mainstream and independent media, spread through opinionated or biased news, can have dangerous consequences ranging from misunderstanding of basic facts to emboldened extremism. Currently, the preeminent tool for misinformation detection is the large language model (LLM), renowned for its ability to capture the context and meaning of textual data. In addition, generative artificial intelligence (AI) models, namely OpenAI's ChatGPT and Google's Pathways Language Model (PaLM), are accessible through application programming interfaces (APIs), which also provide opportunities for automated misinformation detection. Despite advancements in developing effective data science tools for identifying misinformation, few options are available, and it is crucial to assess pre-existing tools to determine which model is best suited for future field research and open-source use in automated misinformation detection. This thesis evaluates fine-tuned supervised LLMs, AI model frameworks, and unsupervised learning methods to propose an explainable, automated misinformation detection tool that incorporates multiple natural language processing (NLP) dimensions and holistically evaluates trustworthiness in news articles. The study revealed that the Hugging Face LLM RoBERTa with added NLP dimensions as features was the most effective model. Furthermore, unsupervised learning methods provided valuable insights that eliminated some ambiguity between trustworthy and fake news articles, and the AI models tended to inflate the trustworthiness values of news articles. Keywords: misinformation, large language models (LLMs), unsupervised learning, application programming interfaces (APIs)  en_US
dc.language  en_US  en_US
dc.subject  Information Science.  en_US
dc.subject  Computer Science.  en_US
dc.subject  Mass Communications.  en_US
dc.title  Misinformation Detection Methods Using Large Language Models and Evaluation of Application Programming Interfaces  en_US
dc.contributor.committeeMember  Hougen, Dean
dc.contributor.committeeMember  Tsetsura, Katerina
dc.contributor.committeeMember  Kim, Jeong-Nam
dc.contributor.committeeMember  Kumar, Naveen
dc.date.manuscript  2024-04-23
dc.thesis.degree  Master of Science  en_US
ou.group  Gallogly College of Engineering  en_US
shareok.orcid  0009-0005-1946-4870  en_US

