SCL Paradigm Introduced to Prevent LLM Hallucinations
April 23, 2026

Professor Myung-ho Kim of Jae-neung University explains SCL technology at a technical presentation.
“Until now, the field of Large Language Models (LLMs) has focused on building larger and more complex models, yet reliability issues persist. Future progress depends not on simple scaling, but on securing reliability through structural design.”
These are the words of Professor Myung-ho Kim of Jae-neung University. During a technical presentation held on the 25th at the Press Center in Jung-gu, Seoul, he proposed Structured Cognitive Loop (SCL) technology as a solution to the reliability problems of LLMs.
Professor Kim identified three primary limitations of current LLMs:
- **Hallucinations:** Models presenting non-existent information as fact.
- **Memory Loss:** Repeating tasks because they fail to retain previous data.
- **Goal Drift:** Losing sight of initial objectives to produce irrelevant results.
He illustrated this with a stark analogy: “It’s like a self-driving car hitting the accelerator when it should brake, circling the same spot, or stopping at the wrong destination. If this happened, who could trust and use an autonomous vehicle?”
In a real-world test, Professor Kim prompted ChatGPT-5: “Given a reference temperature T, check the weather in Incheon, Daejeon, and Jeju, and create a travel plan. If all three regions are above the reference temperature, I want to go to the coolest one.” Despite the clear logic, ChatGPT suggested an incorrect location, failing at basic conditional reasoning.
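The prompt's underlying logic is deterministic and small enough to write out directly. The sketch below is illustrative only (not Professor Kim's test harness); the region temperatures and the reference value are hypothetical example data:

```python
# Illustrative sketch: the conditional logic the prompt asks for.
# Temperatures below are hypothetical example values.

def choose_destination(temps: dict, reference: float):
    """If every region is above the reference temperature,
    return the coolest one; otherwise the condition is not met."""
    if all(t > reference for t in temps.values()):
        return min(temps, key=temps.get)  # coolest of the regions
    return None

# Example: reference T = 20 degC; all three regions exceed it,
# so the coolest (Jeju in this made-up data) should be chosen.
weather = {"Incheon": 27.0, "Daejeon": 29.5, "Jeju": 24.0}
print(choose_destination(weather, 20.0))  # Jeju
```

A few lines of ordinary code handle this reliably, which is precisely the point of the demonstration: the failure is not one of knowledge but of consistent conditional reasoning.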
The SCL Solution: Thinking Inside the Box
According to Professor Kim, SCL is an approach designed to solve these issues fundamentally. Instead of the conventional method of making the AI model larger to handle everything, SCL narrows the AI’s role strictly to 'judgment.'
Functions such as memory, execution, control, and norm management are separated into independent systems. The core of the structure is a loop in which these five modules (judgment plus the four separated functions) interact, ensuring that an AI's faulty judgment does not immediately result in action, while keeping the entire process transparent and traceable.
“We approached this with the philosophy that while you cannot entirely eliminate hallucinations, you can avoid them,” he explained. “It’s about keeping the LLM ‘inside a box’ so that its errors do not translate into dangerous behaviors.”
Introducing 'Chat Wonder'
The event also saw the unveiling of ‘Chat Wonder,’ a system that implements SCL. Professor Kim described it as a “no-code/low-code platform that allows anyone to develop customized AI agents without complex programming knowledge.” According to his findings, the SCL-applied system outperformed existing AI services in accuracy, transparency, and reproducibility.
“While ChatGPT and Gemini showed logical errors or goal drift during the same tasks, Chat Wonder consistently provided accurate results,” he stated. He concluded by emphasizing that “SCL is a significant step toward making AI more trustworthy. It will be meaningful in building an AI that is accurate, transparent, and reproducible.”

