Interpreter

Language and “real world knowledge” are inextricably connected, and neither functions well without the other. This is why, in my opinion, natural language processing (NLP) initiatives that focus exclusively, or even primarily, on language structure have significant limitations. The theory of language structure, grammar, or syntax is necessary for understanding how language works, but it is most often insufficient for determining a person’s intent when they produce written or spoken language. The classic sentence “I saw the man with the telescope,” for example, has two perfectly grammatical parses, and only knowledge about telescopes and seeing can select the intended one.

Language Model Formalism

If the goal is language comprehension approaching human competence, then knowledge-based approaches that can resolve ambiguity based on meaning in context are essential. I therefore see a need for a universal theory of knowledge suitable for supporting knowledge-based language-understanding automata. That theory begins by defining concepts and the language symbols used to express them.
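To make the distinction concrete, here is a minimal sketch of one way concepts and symbols could be separated, with ambiguity resolved by knowledge overlap rather than by syntax. The names Concept, Symbol, and disambiguate are my own hypothetical illustrations, not the formalism defined later in this work.

```python
# A minimal sketch, assuming concepts carry knowledge "features" and a
# symbol (word) may express several concepts. Hypothetical names throughout.
from dataclasses import dataclass, field


@dataclass(frozen=True)
class Concept:
    """A unit of knowledge, independent of any particular word for it."""
    id: str
    features: frozenset  # knowledge the concept is connected to


@dataclass
class Symbol:
    """A language symbol that may express several concepts (polysemy)."""
    text: str
    senses: list = field(default_factory=list)


def disambiguate(symbol: Symbol, context: set) -> Concept:
    """Pick the sense whose features overlap the context most --
    resolution comes from knowledge, not from grammar."""
    return max(symbol.senses, key=lambda c: len(c.features & context))


bank = Symbol("bank", [
    Concept("bank/finance", frozenset({"money", "account", "loan"})),
    Concept("bank/river", frozenset({"water", "shore", "fishing"})),
])

# "She sat on the bank, fishing" -> the river sense wins on shared knowledge.
print(disambiguate(bank, {"fishing", "water"}).id)  # bank/river
```

A syntactic parser assigns both readings of “bank” the same structure; only the stored knowledge lets the program prefer one.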

For a computer to engage in a dialog with a human, it needs good language skills and a large amount of knowledge. To bring maximum value to the human in this interaction, it should adapt to the human rather than forcing the human to adapt to the computer. This requires systematizing knowledge and making it computable, but does that mean replicating a whole brain, or the entire body of human knowledge? In a way, Large Language Models (LLMs) implicitly replicate the parts of the body of human knowledge they are trained on as patterns in a neural network. Empathi AI does so explicitly.
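To illustrate what “explicit” could mean here, the sketch below holds knowledge as inspectable subject–predicate–object triples. This is my own assumed example of the contrast with weights in a neural network, not a description of Empathi AI’s actual representation.

```python
# A minimal sketch, assuming a plain triple store stands in for "explicit"
# knowledge. TripleStore and its methods are hypothetical illustrations.
from collections import defaultdict


class TripleStore:
    """Knowledge held as discrete (subject, predicate, object) facts."""

    def __init__(self):
        self.by_subject = defaultdict(set)

    def add(self, subj: str, pred: str, obj: str) -> None:
        self.by_subject[subj].add((pred, obj))

    def about(self, subj: str) -> set:
        # Every fact can be read back, audited, and corrected directly --
        # unlike knowledge distributed implicitly across network weights.
        return self.by_subject[subj]


kb = TripleStore()
kb.add("whale", "is_a", "mammal")
kb.add("mammal", "breathes", "air")
print(kb.about("whale"))  # {('is_a', 'mammal')}
```

The design point is auditability: an explicit store can be queried, extended, or corrected fact by fact, whereas the same knowledge inside an LLM can only be probed indirectly through its outputs.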