| Author | Khalil A Nagi (1), Elham Alzain (2), Ebrahim Naji (3) |
| Peer-reviewed | Yes |
| Country | Yemen |
| Publication year | 2024 |
| Month | July |
| Volume | 11 |
| Issue | 98 |
| DOI | https://doi.org/10.35781/1637-000-098-007 |
| ISSN | 2410-1818 |
| Content type | Research and articles |
| Language | Arabic |
| Database | HumanIndex |
The aim of this study is to investigate the quality of ChatGPT translation and the effectiveness of using informed prompts to improve it. The research team built a dataset of 150 English complex sentences of various types, selected from several news sites. The sentences were translated into Arabic using a default ChatGPT translation prompt ("Translate the following sentences into Arabic"). The translated sentences were annotated by three professional annotators, and an error taxonomy was constructed based on the Multidimensional Quality Metrics (MQM) framework. The error analysis showed a high error frequency of 2.73 errors per sentence, indicating that ChatGPT falls short when translating English complex sentences into Arabic and still needs to be trained effectively. The sentences whose translation outputs contained the most errors were translated again using informed prompts that instruct the model to correct its original translation. Both the original and the new translation outputs were evaluated manually by the professional annotators and automatically using the BLEU metric. The study therefore identifies the effectiveness of the adopted prompt strategies in improving translation quality and recommends further research on informed prompts.

Keywords: ChatGPT, prompts, error taxonomy, English-Arabic, translation output, complex sentences
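For readers unfamiliar with the automatic metric used above, the following is a minimal sketch of sentence-level BLEU in pure Python: modified n-gram precision combined with a brevity penalty. It is an illustration of the metric's mechanics, not the study's evaluation pipeline; production work would use an established implementation (e.g. sacrebleu or NLTK) with proper tokenization, which matters especially for Arabic. The simple whitespace tokenization and the floor value used to avoid log(0) are assumptions of this sketch.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    """Minimal sentence-level BLEU against a single reference.

    Uses uniform weights over 1..max_n modified n-gram precisions
    and the standard brevity penalty. Whitespace tokenization only
    (an assumption of this sketch; real Arabic evaluation needs a
    proper tokenizer).
    """
    cand = candidate.split()
    ref = reference.split()
    log_prec_sum = 0.0
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        ref_counts = Counter(ngrams(ref, n))
        # "Modified" precision: clip each candidate n-gram count
        # by its count in the reference.
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        prec = max(overlap / total, 1e-9)  # floor avoids log(0)
        log_prec_sum += math.log(prec) / max_n
    # Brevity penalty: punish candidates shorter than the reference.
    if len(cand) > len(ref):
        bp = 1.0
    else:
        bp = math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(log_prec_sum)
```

A perfect match scores 1.0, and a translation sharing no n-grams with the reference scores near 0; comparing the default-prompt output and the informed-prompt output against the same reference is how a BLEU-based improvement check would be framed.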