Evaluating ChatGPT as a Question Answering System: a Comprehensive Analysis and Comparison with Existing Models
Journal of Computing and Security
Article in Press, Accepted, Available Online from 7 Mehr 1403 (28 September 2024)
Article Type: Research Article
DOI: 10.22108/jcs.2024.140680.1141
Authors
Hossein Bahak 1; Farzaneh Taheri 1; Zahra Zojaji* 2; Arefeh Kazemi 3
1 Faculty of Computer Engineering, University of Isfahan, Iran
2 Department of Computer Engineering, University of Isfahan, Iran
3 ADAPT Research Centre, Dublin City University, Ireland
Abstract
In the current era, a multitude of language models have emerged to answer user inquiries. Notably, the GPT-3.5-Turbo language model has gained substantial attention as the technology underlying ChatGPT. Leveraging an extensive number of parameters, this model responds adeptly to a wide range of questions. However, because it relies on internal knowledge, the accuracy of its responses is not guaranteed. This article examines ChatGPT as a Question Answering System (QAS) and compares its performance to existing QASs. The primary focus is on evaluating ChatGPT's efficiency in extracting answers from provided paragraphs, a core QAS capability; its performance is also compared in scenarios without a surrounding passage. Multiple experiments were conducted with ChatGPT, exploring response hallucination and accounting for question complexity. The evaluation employed well-known Question Answering (QA) datasets, including SQuAD, NewsQA, and PersianQuAD, covering English and Persian, and used metrics such as F-score, exact match, and accuracy. The study reveals that, while ChatGPT demonstrates competence as a generative model, it is less effective at question answering than task-specific models. Providing context improves its performance, and prompt engineering improves precision, particularly for questions lacking explicit answers in the provided paragraphs. ChatGPT excels at simpler factual questions compared to "how" and "why" question types. The evaluation also highlights occurrences of hallucination, where ChatGPT answers questions that have no available answer in the provided context.
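For readers unfamiliar with the metrics named in the abstract, the sketch below shows how SQuAD-style exact match and token-level F-score are typically computed between a predicted answer and a gold answer. This is a minimal illustration, not the authors' evaluation code; the function names and example strings are assumptions made for clarity.

```python
# Minimal sketch of SQuAD-style QA metrics (illustrative; not the paper's code).
import re
import string
from collections import Counter


def normalize(text: str) -> str:
    """Lowercase, drop punctuation and English articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())


def exact_match(prediction: str, gold: str) -> int:
    """1 if the normalized prediction equals the normalized gold answer, else 0."""
    return int(normalize(prediction) == normalize(gold))


def f_score(prediction: str, gold: str) -> float:
    """Token-overlap F-score between the prediction and the gold answer."""
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)


# Hypothetical example: scoring a generated answer against a gold answer.
print(exact_match("the Eiffel Tower", "Eiffel Tower"))      # 1
print(round(f_score("in the city of Paris", "Paris"), 2))   # 0.4
```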
Keywords
ChatGPT; Question Answering Systems; Large Language Models; Performance Evaluation; Hallucination