Investigating the Source of Uncanniness in Human-Inspired Machines

As artificial intelligence (AI) and robotics increasingly mirror human capabilities, some people find the resulting machines eerie. Karl F. MacDorman, an associate professor at Indiana University's Luddy School of Informatics, Computing, and Engineering, has investigated this phenomenon, aiming to explain why certain robots and AI systems evoke such feelings.
MacDorman’s recent study, published in Computers in Human Behavior: Artificial Humans, examines the theory of “mind perception,” which suggests that people find humanoid robots eerie because they attribute minds to them. His research dissects past experiments and introduces new findings challenging the validity of this theory.
The proliferation of large language models (LLMs), such as ChatGPT, adds complexity to this discussion. Because these models are trained on vast amounts of human-written text, they often respond in ways that resemble human communication, blurring the line between machine and sentient being.
MacDorman’s skepticism of the mind perception theory led him to reanalyze previous experiments and conduct new ones. His investigation reveals that attributing sentience to robots does not necessarily correlate with finding them eerie. Instead, automatic perceptual processes seem to underpin the uncanny valley phenomenon.
Through meta-regression analysis, a technique that pools effect sizes from prior studies and tests whether study-level variables explain differences among them, and through new experiments, MacDorman sheds light on the disconnect between mind perception and perceived eeriness in human-inspired machines. While attributions of mind may play a role, they are not the primary driver of uncanniness.
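To illustrate how a meta-regression of this kind works, the sketch below pools hypothetical effect sizes, here correlations between mind attribution and eeriness ratings, and regresses them on a study-level moderator, weighting each study by the inverse of its variance. The numbers, variable names, and moderator are invented for illustration and are not drawn from MacDorman's data or analysis.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical effect sizes (correlations between mind attribution and
# eeriness ratings) from five studies, with their standard errors.
# These values are illustrative only, not MacDorman's data.
effect_sizes = np.array([0.05, 0.12, -0.03, 0.20, 0.08])
std_errors = np.array([0.06, 0.08, 0.05, 0.10, 0.07])

# Hypothetical study-level moderator: how human-like the robot
# stimuli were, on a 0-1 scale.
human_likeness = np.array([0.2, 0.5, 0.3, 0.9, 0.6])

# Fixed-effect meta-regression: weight each study by inverse variance,
# then regress effect size on the moderator.
X = sm.add_constant(human_likeness)
weights = 1.0 / std_errors**2
model = sm.WLS(effect_sizes, X, weights=weights).fit()

print(model.summary())
```

In a sketch like this, a pooled intercept and moderator slope both near zero would be consistent with the conclusion that attributing a mind to a robot does not reliably predict how eerie it seems.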
This study offers valuable insights into human-robot interaction and challenges prevailing theories in the field. By understanding the factors influencing perceptions of AI and robots, researchers can better inform the design of future systems, potentially mitigating feelings of unease and advancing human-robot collaboration.
