
Unveiling the Linguistic Comprehension Limitations of ChatGPT

AI's ability to interpret meaning from nonsensical words was assessed in a recent study, shedding light on the linguistic comprehension capabilities of models like ChatGPT.

ChatGPT's language comprehension unveiled through its peculiar interpretations


In a study led by psycholinguist Michael Vitevitch, the language-processing abilities of the AI model ChatGPT have been put under scrutiny. The research, published in PLOS One, explores how ChatGPT handles language when faced with nonsense words, extinct words, and requests to coin words for modern concepts.

One of the extinct words tested was 'upknocking', a 19th-century job in which workers tapped on windows to wake people before alarm clocks existed. In a separate test, ChatGPT was given Spanish words and asked for similar-sounding English words; rather than answering with English words, as human participants typically do, it switched languages and drew its responses from other languages entirely. This behaviour suggests that, unlike humans, ChatGPT processes language through pattern recognition rather than genuine understanding.

Vitevitch's study aimed to determine whether AI processes language in the same way humans do. To find out, he fed ChatGPT a series of "nonwords" and observed how it handled linguistic nonsense. The findings showed that ChatGPT excels at pattern recognition but does not process nonsense the way humans do: humans draw on deeper cognitive and contextual cues, while ChatGPT matches patterns statistically, without semantic understanding.
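The article does not give the exact prompts or model settings, so the sketch below is purely illustrative: a minimal way to present invented nonwords to a chat model via the OpenAI Python client, with hypothetical nonwords and prompt wording.

```python
# Minimal sketch of probing a chat model with invented nonwords.
# Assumes the official OpenAI Python client is installed and the
# OPENAI_API_KEY environment variable is set; the nonwords, prompt
# wording, and model choice are illustrative, not the study's own.
from openai import OpenAI

client = OpenAI()

# Hypothetical nonwords that follow English sound patterns.
nonwords = ["blick", "frulp", "dax"]

for word in nonwords:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{
            "role": "user",
            "content": f"'{word}' is not a real English word. "
                       f"What might it mean, and what makes you think so?",
        }],
    )
    print(word, "->", response.choices[0].message.content)
```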

When it comes to extinct words, ChatGPT's performance is limited by its training data coverage. It may attempt to infer or generate plausible context based on partial knowledge but cannot truly "know" or interpret meanings beyond what was encoded in the training corpus. In the study, out of 52 archaic terms, ChatGPT correctly defined 36, acknowledged uncertainty for 11, drew from other languages for 3, and made things up for 2.
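For readers who want to check the breakdown, the counts reported above can be tallied in a few lines; the percentages below are derived here from those counts, not taken from the study.

```python
# Breakdown of ChatGPT's responses to the 52 archaic terms,
# using the counts reported in the paragraph above.
outcomes = {
    "correctly defined": 36,
    "acknowledged uncertainty": 11,
    "drew from other languages": 3,
    "made something up": 2,
}

total = sum(outcomes.values())
assert total == 52  # the four categories account for every term

for label, count in outcomes.items():
    print(f"{label}: {count} ({count / total:.0%})")
```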

Interestingly, ChatGPT was also asked to invent new English words for modern concepts, in the spirit of 'sniglets', humorous coinages for things that lack a name. It often relied on a predictable strategy of combining two existing words, offering, for example, 'carperpetuation' as a name for a thread that does not get sucked up by a vacuum cleaner.
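That compounding strategy is easy to imitate, which is part of what makes it shallow. The sketch below, with entirely hypothetical word lists, shows that gluing two existing words together produces plausible-looking coinages without any model of what the target concept means.

```python
# Naive two-word compounding, the predictable strategy described above.
# The word lists are hypothetical examples, not the study's data.
import itertools

first_parts = ["carpet", "dust", "sleep"]
second_parts = ["thread", "glare", "debt"]

# Every pairing looks like a plausible English coinage, but nothing here
# represents the meaning of any concept; it is surface recombination only.
for a, b in itertools.product(first_parts, second_parts):
    print(a + b)
```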

Vitevitch believes that AI should be engineered to provide a safety net for the things humans need help with. He prompted the bot with nonsense to better understand the unique and sometimes strange ways in which AI processes language. The study's findings underscore both the advances and current limitations of AI language models in handling language complexity compared to human cognition.

The study was published under the title "What nonsense reveals about ChatGPT's understanding of language." Vitevitch, a professor in the Speech-Language-Hearing Department at the University of Kansas, conducted the research to compare AI and human language processing. His work offers useful guidance for future AI language models, emphasizing the need for them to better approximate human semantic and contextual understanding.

Artificial-intelligence models like ChatGPT excel at pattern recognition in language processing, but they do not truly understand language the way humans do, particularly when confronted with extinct words or asked to invent words for modern concepts. When such models do coin new English words, they tend to fall back on the predictable strategy of combining two existing words, underscoring the gap between statistical pattern matching and human semantic understanding.
