Reliability and usability of ChatGPT for library metadata
Abstract
At the end of November 2022, OpenAI launched ChatGPT, an artificial intelligence chatbot, and it quickly became a worldwide phenomenon. Instantly, it became a subject of controversy and concern as well as praise. Schoolteachers and professors grew worried as ChatGPT was used to create content for everything from high school assignments to scholarly works. Lazy writers aside, ChatGPT’s output has often proved to be inaccurate to the point of complete fabrication. ChatGPT has also regularly misattributed the sources of its information, even giving the wrong author for large blocks of text. Given all these weaknesses, does ChatGPT have any beneficial uses for catalogers and metadata professionals? As a field, the information professions are regularly challenged to do more work, more accurately, in less time. Does ChatGPT currently offer any reliable, accurate services to assist these professionals in completing their tasks? This study seeks to evaluate the strengths and weaknesses of ChatGPT as it attempts three common cataloging/metadata tasks: 1) assigning classification numbers, 2) choosing Library of Congress subject headings, and 3) harvesting keywords. Over the course of four months, the study will ask ChatGPT a standardized list of questions on these topics, then collate and evaluate ChatGPT’s performance. In the end, this study will offer its findings as well as best practices for using ChatGPT in cataloging and metadata tasks.
Citation
Bodenhamer, J. (2023). The Reliability and usability of ChatGPT for library metadata.