From 2e6350e775b22a41e97c5373a4baa1325f35ff11 Mon Sep 17 00:00:00 2001
From: Zijian Zhang
Date: Tue, 10 Oct 2023 15:33:02 -0400
Subject: [PATCH] docs: add "Failure of discrete knowledge"

---
 .../4. Continuous and discrete knowledge.md   | 28 +++++++++++++------
 ....1 Interface of continuous and discrete.md |  2 +-
 2 files changed, 20 insertions(+), 10 deletions(-)

diff --git a/docs/writings/4. Continuous and discrete knowledge.md b/docs/writings/4. Continuous and discrete knowledge.md
index 3c11bf9..7301528 100644
--- a/docs/writings/4. Continuous and discrete knowledge.md
+++ b/docs/writings/4. Continuous and discrete knowledge.md
@@ -14,11 +14,14 @@ Discrete knowledge is the ones whose state is defined in a discrete space. Varia
 For example, a coin has two states: head and tail. The state of a coin is discrete knowledge.

 More importantly, logic deductions are operating discrete knowledge. All the system with a flavour of **logic** and have a clear border of what is true and what is wrong, e.g., knowledge graph and symbolic deductions, are mainly operating discrete knowledge.
+
 ### What is the property of discrete knowledge?

 Discrete knowledge is clear and easy to operate with computers. They can ensure 100% correctness given correct assumptions. For fields that have a concrete assumption, e.g., mathematics, discrete knowledge and its deduction will suffice.

-However, not all fields have concrete assumptions. In the long debate of rationalism and empiricism, people found that it is absolutely not easy to find reliable and non-trivial assumption to reason from (See Kant and Hume).
+### Failure of discrete knowledge
+
+However, not all fields have concrete assumptions. The long debate between rationalism and empiricism showed that it is by no means easy to find reliable, non-trivial assumptions to reason from (see Kant and Hume). My explanation for this failure is that the world is too complex to be described by a few pieces of discrete knowledge. Even if such a set of discrete knowledge exists, it is not affordable to the human brain. For example, I admit that the world might be discrete if you look at it at a very small scale. However, the number of discrete states is far too large for humans to make any useful deduction with, except in cosmology or particle physics. Most useful knowledge does not change its essence when you vary it a little bit.

 ## Continuous knowledge

@@ -37,7 +40,7 @@ More importantly, neural networks hold continuous knowledge. The state of a neur
 It might be tricky to check whether a piece of knowledge is continuous or not. The key is to imagine whether the knowledge can have a very small variation and still remain mostly true. For example, when you try to recall a voice of someone, you can never ensure that your memory today is the same as your memory yesterday. It also works for smell, visual or kinetic memory.

-Most importantly, though also containing discrete knowledge like grammar, a large part of our **knowledge about language** is also continuous. For example, your **feeling** about a certain word is continuous. The most obvious example is brands. You must have a certain feeling about Coca-cola, Pepsi, Tesla and BMW; and they don't have a clear border of correctness, nor you can check your feeling is stable.
+Most importantly, though it also contains discrete knowledge like grammar, a large part of our **knowledge about words** is continuous. Your **feeling** about a certain word is continuous. The most obvious example is brands. You must have a certain feeling about Coca-Cola, Pepsi, Tesla and BMW; these feelings have no clear border of correctness, nor can you check whether they are stable.

 ### What is the property of continuous knowledge?

@@ -45,18 +48,25 @@ The representation power of continuous knowledge is much stronger than discrete
 Continuous knowledge is more natural for human to process. Most of the physics theory also assume that the space is continuous or its discreteness is negligible for human. The power of continuous knowledge can also be proved by the success of neural network. There was a shift of the paradigm of *artificial intelligence* in the 1990s from discrete to continuous and then follows the triumph of neural networks in nearly all the field.

+### Natural language carries continuous knowledge
+
+Admittedly, the symbols of a language are discrete. However, they are meaningless without an interpreter. The history of natural language processing has shown that purely discrete approaches to understanding natural language have failed: parsing sentences into syntax trees is hard, and the result is not as useful as letting neural networks process the natural language directly.
+
+> A syntax tree can never represent the accurate meaning. For example, I can pose a question:
+> "Suppose 'apple' means 'eat' in the next sentence: 'Mike apple an apple.' What did Mike ingest?"
+> This question is easy for a human to answer but will break any natural language parser.
+
+
+### Failure of continuous knowledge
+
 However, the intrinsic drawbacks of continuous knowledge are still there. Even in 2023, we still cannot handle math, logic and coding satisfactorily with neural networks. This is surely because of the discrete nature of these tasks. How to bridge continuous knowledge with discrete knowledge will be the main challenge of building AI.

 ## How all this related to EvoNote?

+::: tip Insight
 EvoNote is trying to add more discrete structure to the continuous knowledge.
+:::

-Here, we first claim that the knowledge need to be interpreted by large language models are continuous. Though they might look like discrete because they are symbols, but they are meaningless symbols without an interpreter.
-
-> Admittedly, you can parse a sentence into a syntax tree. But syntax tree can never represent the accurate meaning. For example, I can set a question:
-> "If apple means eat in the next sentence. 'Mike apple an apple.' What did Mike intake?"
->This question is easy for human to answer but will break any natural language parser.
-
-The way we want to do this, is to use the tree structure to organize the natural languages in a macro scale (Recall the section: Tree indexing). This can assign the continuous knowledge a discrete structure (tree), which we believe can help building a continuous-discrete hybrid knowledge to help making AI capable at discrete tasks.
+EvoNote uses a tree structure to organize natural language at a macro scale (recall the section on tree indexing). This gives the continuous knowledge a discrete structure (a tree), which we believe can help build a continuous-discrete hybrid knowledge base and make AI more capable at discrete tasks.
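+
+To make the tree-indexing idea concrete, here is a minimal sketch of what "giving continuous text a discrete tree structure" could look like. It is an illustration only: the `Note` class and its fields below are hypothetical and are not EvoNote's actual API.
+
+```python
+from dataclasses import dataclass, field
+
+@dataclass
+class Note:
+    """Hypothetical node: continuous content (free text) attached to a discrete structure (a tree)."""
+    title: str
+    content: str = ""  # continuous knowledge: free-form natural language
+    children: list["Note"] = field(default_factory=list)  # discrete knowledge: tree edges
+
+    def add_child(self, child: "Note") -> "Note":
+        self.children.append(child)
+        return child
+
+    def path_index(self, prefix: str = "") -> dict[str, str]:
+        """Flatten the tree into a discrete index that maps each path to its note text."""
+        path = f"{prefix}/{self.title}"
+        index = {path: self.content}
+        for child in self.children:
+            index.update(child.path_index(path))
+        return index
+
+# Usage: the tree supplies a discrete, navigable skeleton for otherwise continuous notes.
+root = Note("knowledge")
+ai = root.add_child(Note("AI", "Notes on continuous and discrete knowledge."))
+ai.add_child(Note("neural networks", "Neural networks hold continuous knowledge."))
+print(root.path_index())  # maps '/knowledge', '/knowledge/AI', '/knowledge/AI/neural networks' to their text
+```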
diff --git a/docs/writings/4.1 Interface of continuous and discrete.md b/docs/writings/4.1 Interface of continuous and discrete.md
index d615209..9b19e47 100644
--- a/docs/writings/4.1 Interface of continuous and discrete.md
+++ b/docs/writings/4.1 Interface of continuous and discrete.md
@@ -57,7 +57,7 @@ Math provides some concrete bridge between continuous and discrete. This kind of
 The space where the continuous knowledge lives might have a symmetry described by a certain Lie group. Group theory offers a way to analyze these continuous knowledge by analyzing its Lie group. For example, the Lie group might have a countable number of generators, which gives a discrete way to analyze the continuous knowledge. We can also analyze the representation of the Lie group, which will make the representation of it more discrete if we can decompose it into irreducible representations.

-Using the discrete knowledge found by group theory have been applied to design neural networks. [Equivariant neural networks](https://arxiv.org/abs/2006.10503) are one example.
+The discrete knowledge found by group theory has been applied to neural network design. [Equivariant neural networks](https://arxiv.org/abs/2006.10503) are one example.

 ### Topology