Assalamo 'Alaikom,
Dear Community,
I’m currently writing my PhD dissertation on the nature and impact of AI hallucinations, with a particular focus on how this phenomenon is defined, interpreted, and addressed across different languages and disciplinary contexts.
In English-language research, the term hallucination is widely used, though increasingly debated. Scholars have proposed alternative terms such as confabulation, delusion, or even bullshit to better capture the epistemological dimensions of the issue.
As part of this work, I’m also exploring how the concept is rendered in Arabic.
Which term is most widely used (or most appropriate) in Arabic to refer to what is known in English as “AI hallucination”?
I’ve come across translations such as هلوسة الذكاء الاصطناعي and هلاوس الذكاء الاصطناعي, but I’m also considering whether terms like اختلاق (fabrication) or تلفيق (falsification) might offer more semantic precision depending on the context.
I’d greatly appreciate any insights, especially from colleagues working in Arabic NLP, translation studies, or digital humanities.
With thanks in advance,
Amina
I think it’s a solid translation; the two terms also sound similar.
For those outside the AI world: when people say LLMs “hallucinate,” they mean the model generates outputs that sound confident but are factually wrong, much as a person might hallucinate in a psychological sense. For those inside the AI field: the term (and its translation) is not always comfortable, because LLMs aren’t actually hallucinating; they are simply making incorrect predictions due to limitations in their training data, their probabilistic architecture, and so on.
On 9 Jun 2025, at 5:50 PM, Kareem Darwish <kareem...@live.com> wrote:
Though هلوسة may have a specific dictionary meaning, I have heard it used repeatedly for hallucination in the LLM context, so much so that I think it is becoming a jargon term for the phenomenon.
From: sig...@googlegroups.com <sig...@googlegroups.com> on behalf of Emad Nawfal (عمـ نوفل ـاد) <emadn...@gmail.com>
Sent: Monday, June 9, 2025 1:31:20 PM
To: Eric Atwell <E.S.A...@leeds.ac.uk>
Cc: Nizar Habash <nizar....@nyu.edu>; Amina EL GANADI <amina.e...@gmail.com>; SIGARAB: Special Interest Group on Arabic Natural Language Processing <sig...@googlegroups.com>
Subject: Re: [SIGARAB] AI Hallucinations in Arabic: Which term is most accurate?
Further to what Nizar and Eric said, the word هلوسة may have some other justification. While I don't think they're etymologically related, the Arabic root ه ل س means to become weak, physically or mentally. In Lisan al-Arab: ورجل مَهْلُوسُ العقل أي مسلوبه، ورجل مهتلس العقل ذاهبه (a man described as مهلوس العقل is one robbed of his mind; one described as مهتلس العقل is one whose mind is gone). And in Maqayis al-Lugha: الْمَهْلُوسُ: الضَّعِيفُ الْعَقْلِ (al-mahlus: the one weak of mind).
On Mon, Jun 9, 2025 at 2:21 PM 'Eric Atwell' via SIGARAB: Special Interest Group on Arabic Natural Language Processing <sig...@googlegroups.com> wrote:
An alternative is that "hallucination" is an AI/NLP technical term, distinct in meaning from the general English language word, and so, like many other Computer Science terms, can be used in other languages without translation.
Eric Atwell, Professor of Artificial Intelligence for Language
School of Computer Science, Uni of LEEDS, LS2 9JT, UK
From: sig...@googlegroups.com <sig...@googlegroups.com> on behalf of Nizar Habash <nizar....@nyu.edu>
Sent: 09 June 2025 10:01 AM
To: Amina EL GANADI <amina.e...@gmail.com>
Cc: SIGARAB: Special Interest Group on Arabic Natural Language Processing <sig...@googlegroups.com>
Subject: Re: [SIGARAB] AI Hallucinations in Arabic: Which term is most accurate?
Hi Amina - my immediate reaction is that اختلاق or فبركة (fabrication) and تلفيق (falsification) all imply intent (specifically, intent to deceive)... which risks anthropomorphizing the machine...
Hallucination feels equally out of control for humans and machines. Another term in English is confabulations: سرد تخيلي أو استرسال وهمي (imaginative narration or illusory rambling) >> توهمات (delusions)?... Perhaps اختلاق غير متعمد (unintentional fabrication) can work... but it is unnecessary.... The word هلوس/هلوسة/مهلوس is already in the Arabic dictionary: https://www.almaany.com/ar/dict/ar-ar/%D9%87%D9%84%D9%88%D8%B3/
Best,
Nizar
Nizar Habash
Professor of Computer Science
New York University Abu Dhabi
https://www.nizarhabash.com/
--
Hamdy S. Hussein
Principal Software Engineer
Qatar Computing Research Institute
+974 445 41679

Dr Abdusalam Nwesri,
Associate Professor,
Faculty of Information Technology,
University of Tripoli,
P.O. Box: 5760 Hai Alandalus,
Tripoli - Libya.
Tel: +218922307021
Email: a.nw...@uot.edu.ly
مَا سَمِعْنَا بِهَذَا فِي الْمِلَّةِ الْآخِرَةِ إِنْ هَذَا إِلَّا اخْتِلَاقٌ {7}
[Shakir 38:7] We never heard of this in the former faith; this is nothing but a forgery:
[Yusuf Ali 38:7] "We never heard (the like) of this among the people of these latter days: this is nothing but a made-up tale!"
[Pickthal 38:7] We have not heard of this in later religion. This is naught but an invention.
I feel the word اختلاق is more appropriate than hallucination, since it describes what happens when an LLM generates an answer (and it IS programmed to do so systematically, without being allowed to say "I do not know" or to state its confidence level).
As to the "risk of anthropomorphizing the machine" by using the word اختلاق, I feel it does so less than hallucination, which I see as more strongly attributing human characteristics to the machine.
Thank you for an interesting discussion.
Ahmed
Many thanks to all who took the time to engage with my question and share their thoughtful insights. Your contributions have provided essential perspectives that I’m integrating into my broader research on AI-generated errors.
Building on this exchange, I’ve been reflecting more closely on the term at the centre of our discussion. While hallucination has become a widely used and technically convenient label for AI-generated errors, particularly in NLP, its usage is far from neutral. The term is both semantically and ethically charged, and its application to LLMs raises several concerns that deserve thoughtful consideration.
First, hallucination originates in psychological and clinical contexts, where it denotes perceptual errors experienced by sentient beings, typically the sensing of stimuli that do not exist. When applied to machines, even metaphorically, the term risks anthropomorphizing systems that neither perceive nor misperceive. It subtly suggests the presence of cognitive malfunction or sensory distortion, when in fact LLMs generate outputs based solely on token prediction within statistical patterns derived from training data. They possess no perception, no experience, and no internal state.
This point is underscored in both the academic literature (e.g., Edwards, 2023; Ji et al., 2022) and in contributions to this discussion, which emphasize the importance of avoiding the attribution of agency or intent to non-sentient computational models.
Second, the term can obscure questions of accountability. AI hallucinations arise not from psychological malfunction but from well-known technical limitations: training data gaps, architectural constraints, decoding errors, or prompt ambiguities. To say a model “hallucinates” may inadvertently deflect attention from system design flaws, shifting blame away from developers, evaluators, or deployment contexts. Framing such outputs as “hallucinations” can normalize or excuse them, whereas describing them as fabrications, ungrounded responses, or generative errors would signal a need for correction and transparency.
There are also linguistic and cultural implications, particularly in Arabic. The term هلوسة carries strong clinical or pathological connotations, often evoking mental illness or cognitive disorder. Introducing it into AI discourse can inadvertently stigmatize or sensationalize the phenomenon, especially for non-specialist audiences who may not share the technical framing. As has been noted in this discussion, metaphors imported from English into Arabic do not come empty-handed: they reshape local semantic fields and bring with them new assumptions, tensions, and interpretive layers.
Compounding this is the semantic drift of the term hallucination across AI subfields. In computer vision, it once referred to constructive inference, adding plausible detail to degraded images (e.g., face hallucination). In NLP, the term has acquired a sharply negative meaning: confident but factually incorrect output. This shift introduces ambiguity for interdisciplinary researchers and challenges efforts to maintain consistent terminology across linguistic and technical domains.
From the perspective of digital humanists, this ambiguity is especially problematic. We often work at the intersection of language, knowledge, and cultural authority, engaging with sources and traditions that demand careful framing. Calling an AI-generated false citation a hallucination risks reducing complex epistemological failures to a technical shorthand that flattens interpretive nuance and obscures institutional accountability.
None of this is to suggest that the term hallucination / هلوسة should be discarded. As many of you have pointed out, it is now firmly embedded in both English and Arabic AI discourse. Its use is also expanding into other languages (this year, Treccani included “allucinazione” in its Neologismi list, marking its growing presence in Italian contexts as well). While the term remains a convenient shorthand within technical communities, I believe it is especially important, in interdisciplinary, educational, or general-audience discussions, to clarify both what is meant and what is not meant when describing a model as “hallucinating.”
It may therefore be helpful to retain هلوسة as the established term of art in Arabic NLP, while also introducing more mechanism-focused alternatives such as فبركة (fabrication), خرابيط (informal for nonsense/gibberish), or اختلاق (fabrication/invention). These may more accurately emphasize the generative and synthetic nature of such outputs while avoiding clinical or anthropomorphic connotations. I have been exploring possible substitutes, and these seem among the more acceptable options currently available.
At the same time, I recognize that terms like fabrication can themselves be problematic, as they may suggest intentionality or deception, concepts inapplicable to computational systems. This only underscores the broader difficulty of identifying language that captures the nature of these outputs without resorting to misleading human analogies.
This remains an open inquiry, and I truly appreciate all the insights shared so far. I welcome any further thoughts as I continue my research.
Best,
Amina El Ganadi
Visiting PhD Student, University of St Andrews
Doctoral Researcher, Universities of Modena–Reggio Emilia & Palermo