Undergrad 2023F: Jacob Choi

Religious Text Training in Large Language Models

Jacob Choi

Date: 2023-11-10 / 3:30 ~ 4:30 PM
Location: MSC E306


The existing literature lacks a comprehensive exploration of how large language models (LLMs) contextually understand religious texts. Notably, LLMs such as GPT-4 have been trained on religious texts with adjustments intended to ensure unbiased output. This prompts the question of whether a language model repeatedly trained on a religious text, such as the Bible or the Quran, develops a measurable "religiosity," and it also raises the question of how effectively LLMs can discern relationships among entities extracted from these texts. This study addresses these questions through iterative training of an LLM on religious texts, yielding noteworthy outcomes. The methodology applies Named Entity Recognition (NER) to extract entities from the texts and then leverages GPT to elucidate the connections between those entities.
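The NER-then-GPT pipeline described above can be sketched as follows. This is a minimal illustrative sketch, not the study's actual implementation: the gazetteer-based entity matcher is a hypothetical stand-in for a trained NER model, and the final GPT step (sending each entity pair with its sentence context to the model for relation description) is only indicated in comments.

```python
from itertools import combinations

# Hypothetical stand-in for NER: a small gazetteer of entities.
# A real pipeline would use a trained NER model instead.
ENTITIES = {"Moses", "Aaron", "Pharaoh", "Egypt"}

def extract_entities(sentence):
    """Return gazetteer entities appearing in the sentence."""
    return [tok.strip(".,;") for tok in sentence.split()
            if tok.strip(".,;") in ENTITIES]

def candidate_relations(text):
    """Pair co-occurring entities within each sentence.

    Each (entity_a, entity_b, sentence) triple would then be sent
    to GPT as a prompt asking it to describe the relationship.
    """
    relations = []
    for sentence in text.split("."):
        ents = sorted(set(extract_entities(sentence)))
        for a, b in combinations(ents, 2):
            relations.append((a, b, sentence.strip()))
    return relations

text = "Moses and Aaron went to Pharaoh. Pharaoh ruled Egypt."
for a, b, ctx in candidate_relations(text):
    print(f"{a} - {b} (context: {ctx})")
```

Sentence-level co-occurrence is used here purely for illustration; the talk's method may pair entities over larger spans or use different context windows.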