
Korean Scientists Develop AI That Can Question Its Own Assumptions


Researchers at the Korea Advanced Institute of Science and Technology (KAIST) have developed a novel AI model that can question and verify its own assumptions. (Image courtesy of KAIST)


SEOUL, Feb. 28 (Korea Bizwire) — In a significant advance toward more reliable artificial intelligence systems, researchers at the Korea Advanced Institute of Science and Technology (KAIST) have developed a novel AI model that can question and verify its own assumptions, mimicking human cognitive processes.

The research team, led by professors Lee Sang Wan and Jung Min-hwan, announced their findings on February 27, presenting a potential solution to one of AI’s most persistent challenges: the tendency to generate plausible-sounding but false information, a phenomenon known as hallucination.

While large language models like ChatGPT have demonstrated remarkable ability to generate human-like text, they remain prone to confidently presenting incorrect information. Current AI systems, though adept at reinforcement learning, lack the ability to evaluate their own outputs, leading to overconfidence and errors.

“We’ve presented a principle of hypothesis-based adaptive learning in the brain that cannot be explained by artificial intelligence reinforcement learning theory alone,” said Lee. The new model showed superior performance in predicting animal behavior in unexpected situations, outperforming existing AI models by up to 31%, with an average improvement of 15%.

The research delves into what neuroscientists call the “stability-flexibility dilemma,” a characteristic of animal brains that allows them to test hypotheses even when failure seems likely — a stark contrast to current AI systems that prioritize known successful paths.
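To make that contrast concrete, here is a minimal, hypothetical sketch in Python of the trade-off the dilemma describes; the two-option task, the payoff numbers, and the agents are illustrative assumptions, not the KAIST team's experiment or model. One agent only repeats its currently best-known option, while the other occasionally tests the hypothesis that the alternative might be better.

```python
import random

random.seed(0)

# Toy two-option task used purely for illustration (an assumption, not the
# paper's setup): option "B" is actually better, but a purely exploitative
# agent can lock onto "A" after its first lucky payoff.
PAYOFF = {"A": 0.3, "B": 0.8}  # probability of reward for each option

def pull(option):
    return 1.0 if random.random() < PAYOFF[option] else 0.0

def run_agent(test_rate, trials=2000, lr=0.1):
    """test_rate = chance of testing a hypothesis by picking an option the
    agent currently believes is worse (0.0 = rigidly stick to the known best)."""
    values = {"A": 0.0, "B": 0.0}
    total = 0.0
    for _ in range(trials):
        if random.random() < test_rate:
            choice = random.choice(list(values))      # test an unlikely hypothesis
        else:
            choice = max(values, key=values.get)      # exploit the known best (ties -> "A")
        reward = pull(choice)
        values[choice] += lr * (reward - values[choice])  # nudge the estimate
        total += reward
    return total / trials

print("rigid agent      :", round(run_agent(0.0), 2))  # tends to get stuck near 0.3
print("hypothesis-tester:", round(run_agent(0.1), 2))  # approaches the better 0.8 option
```

In this toy setting the rigidly exploitative agent tends to lock onto the first option that ever pays off, while the agent that keeps testing hypotheses recovers the genuinely better option, which is the intuition behind the stability-flexibility trade-off.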

In a key finding, the team identified that dopamine receptors in the striatum region of the basal ganglia encode experiences of both predicted and unexpected events, using this information to adjust behavioral strategies. This understanding of how the brain processes and verifies hypotheses could help reduce hallucination phenomena in AI systems.
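As a rough illustration of that principle (a generic prediction-error sketch in Python, not the model or data from the study), an estimate of expected reward can be adjusted in proportion to how surprising each outcome is:

```python
# Generic prediction-error learning sketch (illustrative only, not the KAIST model):
# the running estimate moves in proportion to how unexpected each outcome was.
expected = 0.2       # current prediction of reward for some action
learning_rate = 0.2

for observed in [0.0, 1.0, 1.0, 0.0, 1.0]:     # a few hypothetical outcomes
    prediction_error = observed - expected      # large when the outcome is surprising
    expected += learning_rate * prediction_error
    print(f"observed={observed:.0f}  error={prediction_error:+.2f}  new estimate={expected:.2f}")
```

Expected outcomes produce small errors and barely move the estimate, while unexpected ones produce large corrections, which is the sense in which predicted and unexpected events carry distinct learning signals.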

The implications extend beyond artificial intelligence. “This research could contribute to understanding the neurological basis of mental illnesses such as addiction and obsessive-compulsive disorder, which are related to the reward learning circuit in the basal ganglia,” Lee explained.

The research, published in Nature Communications on February 20, represents a significant step toward creating more reliable AI systems that can question their own conclusions, much like humans do.

Kevin Lee (kevinlee@koreabizwire.com) 


