A groundbreaking computing architecture from China, inspired by the human brain, may bring us closer to achieving Artificial General Intelligence (AGI). This innovative design offers a new approach to building AI models by emphasizing internal complexity rather than merely scaling up existing neural networks.
Currently, advanced AI models, such as large language models (LLMs) like ChatGPT and Claude 3, rely on extensive neural networks. These networks simulate brain-like processes to analyze data and make decisions. However, these models are constrained by their training data and lack human-like reasoning abilities. AGI, in contrast, would be capable of reasoning, contextualizing, self-modifying, and mastering any intellectual task that a human can.
While scaling up neural networks might seem like a pathway to AGI, it also comes with challenges related to energy consumption and resource demands. The new study, published on August 16 in *Nature Computational Science*, proposes an alternative approach: focusing on enhancing the internal complexity of artificial neurons rather than increasing the external size of neural networks.
Inspired by the human brain, which packs roughly 100 billion neurons and 1,000 trillion synaptic connections into an organ that consumes only about 20 watts, researchers developed a Hodgkin-Huxley (HH) network model with rich internal complexity. The aim is to capture some of the brain’s efficiency and power in an artificial system.
The HH model, known for its accuracy in simulating neuronal activity, was shown to handle complex tasks effectively and efficiently. Remarkably, smaller models based on this architecture performed as well as larger, traditional models. This suggests that increasing internal complexity within artificial neurons could lead to more powerful and efficient AI systems.
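To make the idea of "internal complexity" concrete, here is a minimal sketch of a single Hodgkin-Huxley neuron, using the classic squid-axon parameters and simple forward-Euler integration. This is textbook HH, not the network architecture from the study itself; each neuron carries four coupled differential equations (voltage plus three gating variables), which is far richer internal dynamics than the single weighted sum of a standard artificial neuron.

```python
import math

# Classic single-compartment Hodgkin-Huxley neuron (standard squid-axon
# parameters). A sketch only: the study's network model is more elaborate.
C_M = 1.0                                # membrane capacitance, uF/cm^2
G_NA, G_K, G_L = 120.0, 36.0, 0.3        # max conductances, mS/cm^2
E_NA, E_K, E_L = 50.0, -77.0, -54.387    # reversal potentials, mV

# Voltage-dependent opening/closing rates for the m, h, n gating variables.
def alpha_m(v): return 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
def beta_m(v):  return 4.0 * math.exp(-(v + 65.0) / 18.0)
def alpha_h(v): return 0.07 * math.exp(-(v + 65.0) / 20.0)
def beta_h(v):  return 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
def alpha_n(v): return 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
def beta_n(v):  return 0.125 * math.exp(-(v + 65.0) / 80.0)

def simulate(i_ext=10.0, t_max=50.0, dt=0.01):
    """Voltage trace (mV) under a constant injected current (uA/cm^2)."""
    v = -65.0
    # Start gating variables at their steady state for the resting voltage.
    m = alpha_m(v) / (alpha_m(v) + beta_m(v))
    h = alpha_h(v) / (alpha_h(v) + beta_h(v))
    n = alpha_n(v) / (alpha_n(v) + beta_n(v))
    trace = [v]
    for _ in range(int(t_max / dt)):
        i_na = G_NA * m**3 * h * (v - E_NA)  # sodium current
        i_k = G_K * n**4 * (v - E_K)         # potassium current
        i_l = G_L * (v - E_L)                # leak current
        v += dt * (i_ext - i_na - i_k - i_l) / C_M
        m += dt * (alpha_m(v) * (1.0 - m) - beta_m(v) * m)
        h += dt * (alpha_h(v) * (1.0 - h) - beta_h(v) * h)
        n += dt * (alpha_n(v) * (1.0 - n) - beta_n(v) * n)
        trace.append(v)
    return trace

trace = simulate()
# Count action potentials as upward crossings of 0 mV.
spikes = sum(1 for a, b in zip(trace, trace[1:]) if a < 0.0 <= b)
print(f"{spikes} spikes, peak voltage {max(trace):.1f} mV")
```

With a sustained 10 µA/cm² input, the model fires repetitive action potentials, illustrating the kind of biophysically detailed dynamics the researchers packed inside each artificial unit.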
Although AGI remains a long-term goal, some experts believe that advancements like these could bring us closer to realizing it within a few years. Alternative proposals, such as SingularityNET’s distributed supercomputing network, also explore different methods to develop AGI.
If this approach holds up at scale, it could influence future AI development and reshape the ongoing quest for AGI.