This repository has been archived by the owner on Jun 15, 2024. It is now read-only.

Commit

docs: add "Failure of discrete knowledge"
doomspec committed Oct 10, 2023
1 parent 5dd5905 commit 2e6350e
Showing 2 changed files with 20 additions and 10 deletions.
28 changes: 19 additions & 9 deletions docs/writings/4. Continuous and discrete knowledge.md
Discrete knowledge is knowledge whose state is defined in a discrete space.
For example, a coin has two states: heads and tails. The state of a coin is discrete knowledge.

More importantly, logical deduction operates on discrete knowledge. All systems with a flavour of **logic** and a clear border between what is true and what is false, e.g., knowledge graphs and symbolic deduction, mainly operate on discrete knowledge.
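The claim that logic operates on discrete knowledge can be made concrete with a minimal sketch of symbolic deduction. All the fact and rule names below are invented for illustration:

```python
# A minimal sketch of logical deduction over discrete knowledge:
# a fact is either present (true) or absent (false) -- no in-between.

def forward_chain(facts, rules):
    """Repeatedly apply rules (premises -> conclusion) until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Illustrative rules about the coin example above.
rules = [
    (("coin_flipped", "landed_heads"), "show_heads"),
    (("show_heads",), "player_wins"),
]
derived = forward_chain({"coin_flipped", "landed_heads"}, rules)
print(sorted(derived))
# -> ['coin_flipped', 'landed_heads', 'player_wins', 'show_heads']
```

Given correct assumptions (the initial facts and rules), the conclusions are 100% correct, which is exactly the property discussed in the next subsection.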

### What is the property of discrete knowledge?

Discrete knowledge is clear and easy to manipulate with computers. It can ensure 100% correctness given correct assumptions. For fields built on concrete assumptions, e.g., mathematics, discrete knowledge and its deduction suffice.

### Failure of discrete knowledge

However, not all fields have concrete assumptions. In the long debate between rationalism and empiricism, people found that it is by no means easy to find reliable and non-trivial assumptions to reason from (see Kant and Hume). My explanation for the failure is that the world is too complex to be described by a few pieces of discrete knowledge. Even if such a set of discrete knowledge exists, it is not affordable to the human brain. For example, I admit that the world might be discrete at a very small scale. However, the number of discrete states is too large for humans to make any useful deduction outside cosmology or particle physics. Most useful knowledge does not change its essence when you vary it a little.

## Continuous knowledge

More importantly, neural networks hold continuous knowledge.

It might be tricky to check whether a piece of knowledge is continuous. The key is to imagine whether the knowledge can undergo a very small variation and still remain mostly true. For example, when you try to recall someone's voice, you can never ensure that your memory today is the same as your memory yesterday. The same holds for smell, visual, or kinetic memory.

Most importantly, though it also contains discrete knowledge like grammar, a large part of our **knowledge about words** is continuous. Your **feeling** about a certain word is continuous. The most obvious example is brands: you surely have a certain feeling about Coca-cola, Pepsi, Tesla, and BMW, yet those feelings have no clear border of correctness, nor can you check that they are stable.
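The "small variation" test can be sketched in code: perturb a continuous representation slightly and it stays essentially the same, which would not hold for flipping a discrete state. The toy vector standing in for a "feeling" about a brand is entirely made up:

```python
# A sketch of the small-variation test for continuous knowledge:
# a tiny perturbation of a continuous representation leaves it mostly unchanged.
import math
import random

def cosine(a, b):
    """Cosine similarity between two vectors (1.0 means identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

random.seed(0)
feeling_cola = [0.9, 0.1, 0.4]  # made-up toy embedding of a "feeling" about a brand
perturbed = [x + random.uniform(-0.01, 0.01) for x in feeling_cola]

# The perturbed feeling is still essentially the same feeling.
print(cosine(feeling_cola, perturbed) > 0.999)  # -> True
```

By contrast, flipping a coin's discrete state from heads to tails is not a small variation: the knowledge changes completely.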

### What is the property of continuous knowledge?

The representational power of continuous knowledge is much stronger than that of discrete knowledge. It is very hard to imagine representing the feeling of skiing, or the memory of a picture, in a discrete format.

Continuous knowledge is more natural for humans to process. Most physical theories also assume that space is continuous, or that its discreteness is negligible at human scales. The power of continuous knowledge is also demonstrated by the success of neural networks: the paradigm of *artificial intelligence* shifted from discrete to continuous in the 1990s, followed by the triumph of neural networks in nearly every field.

### Natural language carries continuous knowledge

Admittedly, the symbols of a language are discrete. However, they are meaningless without an interpreter. The development of natural language processing has shown that discrete approaches to understanding natural language fail: history has seen that parsing sentences into syntax trees is hard and not as useful as processing natural language directly with neural networks.

> A syntax tree can never represent the accurate meaning. For example, I can set a question:
> "If apple means eat in the next sentence. 'Mike apple an apple.' What did Mike intake?"
> This question is easy for a human to answer but will break any natural language parser.
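The breakage can be sketched with a toy fixed-lexicon tagger. This is not a real parser; the lexicon and tag names are invented for illustration:

```python
# A toy "parser" that assigns each word exactly one part of speech.
# It cannot handle "Mike apple an apple.", where the surrounding
# instruction has redefined "apple" as a verb.

LEXICON = {"mike": "NOUN", "apple": "NOUN", "an": "DET", "eat": "VERB"}

def parse(sentence):
    """Tag each word from the fixed lexicon; fail if no verb is found."""
    tags = [LEXICON[w.lower().strip(".")] for w in sentence.split()]
    if "VERB" not in tags:
        raise ValueError("no verb found: cannot build a syntax tree")
    return tags

print(parse("Mike eat an apple"))  # -> ['NOUN', 'VERB', 'DET', 'NOUN']

try:
    parse("Mike apple an apple.")  # the redefined verb breaks the rigid parser
except ValueError as e:
    print("parser failed:", e)
```

A human (or a neural interpreter of the whole context) resolves the redefinition effortlessly; the discrete lexicon cannot.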

### Failure of continuous knowledge

However, the intrinsic drawbacks of continuous knowledge remain. Even in 2023, we still cannot handle math, logic, and coding satisfactorily with neural networks, surely because of the discrete nature of these tasks. Bridging continuous and discrete knowledge will be the main challenge in building AI.


## How does all this relate to EvoNote?

::: tip Insight
EvoNote is trying to add more discrete structure to the continuous knowledge.
:::

Here, we first claim that the knowledge to be interpreted by large language models is continuous. Though it might look discrete because it consists of symbols, those symbols are meaningless without an interpreter.

EvoNote uses a tree structure to organize natural language at a macro scale (recall the section on tree indexing). This assigns the continuous knowledge a discrete structure (a tree), which we believe can help build a continuous-discrete hybrid knowledge base that makes AI capable at discrete tasks.
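A minimal sketch of what assigning continuous knowledge a discrete structure could look like, assuming a hypothetical `Note` class that is not EvoNote's actual API:

```python
# Continuous knowledge (free text) lives at the nodes;
# the discrete structure is the tree of titles on top of it.

class Note:
    def __init__(self, title, content="", children=None):
        self.title = title              # discrete handle used for navigation
        self.content = content          # continuous knowledge: free natural language
        self.children = children or []

    def paths(self, prefix=()):
        """Enumerate the discrete paths through the continuous content."""
        here = prefix + (self.title,)
        yield here, self.content
        for child in self.children:
            yield from child.paths(here)

root = Note("knowledge", children=[
    Note("continuous", "feelings about words, voices, images..."),
    Note("discrete", "logic, grammar, symbolic deduction..."),
])
for path, content in root.paths():
    print(" / ".join(path), "->", content)
```

The tree gives discrete operations (indexing, navigation, deduction over paths) a foothold, while each node's content remains continuous.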

2 changes: 1 addition & 1 deletion docs/writings/4.1 Interface of continuous and discrete.md
Math provides some concrete bridges between continuous and discrete.

The space where the continuous knowledge lives might have a symmetry described by a certain Lie group. Group theory offers a way to analyze such continuous knowledge through its Lie group. For example, the Lie group might have a countable number of generators, which gives a discrete handle on the continuous knowledge. We can also analyze the representations of the Lie group; decomposing them into irreducible representations makes the description more discrete.
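As one standard illustration of the decomposition just described (an example chosen here, not one the text names), the tensor product of two spin-1 representations of SO(3) splits into a discrete sum of irreducibles:

```latex
% Decomposing a continuous (9-dimensional) representation into
% discrete, irreducible pieces: spin 0, spin 1, and spin 2.
V_{1} \otimes V_{1} \;=\; V_{0} \,\oplus\, V_{1} \,\oplus\, V_{2},
\qquad 3 \times 3 = 1 + 3 + 5 .
```

The continuous object on the left is fully described by the discrete list of labels on the right.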

Discrete knowledge found by group theory has been applied to neural network design; [equivariant neural networks](https://arxiv.org/abs/2006.10503) are one example.
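Equivariance itself is easy to check numerically. Below is a sketch of a permutation-equivariant layer satisfying f(Px) = P f(x); this is an illustrative construction, not the architecture of the linked paper:

```python
# A permutation-equivariant layer: an elementwise transform plus a
# mean aggregation. Permuting the input permutes the output identically.

def layer(xs, a=2.0, b=0.5):
    """Map each element to a*x + b*mean(xs); the mean is permutation-invariant."""
    mean = sum(xs) / len(xs)
    return [a * x + b * mean for x in xs]

def permute(xs, perm):
    return [xs[i] for i in perm]

x = [1.0, 3.0, 2.0]
perm = [2, 0, 1]

lhs = layer(permute(x, perm))   # f(Px)
rhs = permute(layer(x), perm)   # P f(x)
print(all(abs(u - v) < 1e-12 for u, v in zip(lhs, rhs)))  # -> True
```

The discrete symmetry group (here, permutations) constrains the continuous function the network can learn, which is exactly the bridge the text describes.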

### Topology

