From 0d81e71ee37134b579ff4713b86574064ef05c9a Mon Sep 17 00:00:00 2001
From: Zijian Zhang
Date: Mon, 16 Oct 2023 18:15:55 -0400
Subject: [PATCH] docs: update

---
 docs/writings/0. TL;DR.md                     | 17 -----
 docs/writings/1. Indexing is non-trivial.md   | 62 ----------------
 docs/writings/2. Tree indexing.md             | 49 -------------
 .../2.1 Method of Loci and sparsity.md        | 39 ----------
 docs/writings/3. Copernicus and indexing.md   | 64 -----------------
 .../4. Continuous and discrete knowledge.md   | 72 -------------------
 ....1 Interface of continuous and discrete.md | 66 -----------------
 docs/writings/4.2 Tyranny of science.md       | 13 ----
 docs/writings/Comments on previous work.md    | 23 ------
 docs/writings/index.md                        |  4 --
 10 files changed, 409 deletions(-)
 delete mode 100644 docs/writings/0. TL;DR.md
 delete mode 100644 docs/writings/1. Indexing is non-trivial.md
 delete mode 100644 docs/writings/2. Tree indexing.md
 delete mode 100644 docs/writings/2.1 Method of Loci and sparsity.md
 delete mode 100644 docs/writings/3. Copernicus and indexing.md
 delete mode 100644 docs/writings/4. Continuous and discrete knowledge.md
 delete mode 100644 docs/writings/4.1 Interface of continuous and discrete.md
 delete mode 100644 docs/writings/4.2 Tyranny of science.md
 delete mode 100644 docs/writings/Comments on previous work.md
 delete mode 100644 docs/writings/index.md

diff --git a/docs/writings/0. TL;DR.md b/docs/writings/0. TL;DR.md
deleted file mode 100644
index 2c146fa..0000000
--- a/docs/writings/0. TL;DR.md
+++ /dev/null
@@ -1,17 +0,0 @@

# TL;DR

- Indexing and understanding have many similarities. Both require adding context to the object in question.
- Therefore, we want to implement better understanding in AI through super-fine indexing with a rich structure, which allows the AI to find the context easily and precisely.
- Our plan approaches this from two sides: one for the structure of the knowledge (indexing), one for the retrieval of the knowledge (searching).

# Indexing

- We want to use a tree structure to index the knowledge, because the path from the root to the node of interest forms a natural context.
- We want to use an LLM (or agent) to generate characterizing strings as the vector index of the knowledge, instead of using the knowledge's embedding directly (see the sketch below).
- We want to construct multiple trees at different levels, with each level corresponding to a different level of abstraction.

# Searching

- We want to use the tree structure to search the knowledge. An agent travelling on the tree will be developed.
- We want to use an LLM (or agent) to generate characterizing strings for the queries, instead of using their embeddings directly.
- We want to use multiple levels of trees to search the knowledge. The agent should go from the most abstract level to the most concrete level.
\ No newline at end of file
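As a purely illustrative sketch of the characterizing-string idea: `complete` and `embed` below are hypothetical stand-ins for an LLM completion call and an embedding call, not EvoNote's actual API.

```python
# A minimal sketch of characterizing-string indexing. The helpers passed in
# (`complete`, `embed`) are hypothetical stand-ins, not a real API.
from typing import Callable, List

def characterize(note: str, complete: Callable[[str], str]) -> str:
    """Ask the LLM for a short string characterizing the note."""
    prompt = ("Describe in one short phrase what questions the following "
              f"note can answer:\n\n{note}")
    return complete(prompt)

def index_note(note: str,
               complete: Callable[[str], str],
               embed: Callable[[str], List[float]]) -> List[float]:
    # Embed the characterizing string rather than the raw note text.
    return embed(characterize(note, complete))
```

The point of the indirection is that the index vector describes what the note is *for*, not merely which words it contains.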
diff --git a/docs/writings/1. Indexing is non-trivial.md b/docs/writings/1. Indexing is non-trivial.md
deleted file mode 100644
index c795e30..0000000
--- a/docs/writings/1. Indexing is non-trivial.md
+++ /dev/null
@@ -1,62 +0,0 @@

# Indexing is understanding

What does it mean to understand something? Usually, understanding is used as the opposite of mere memorization. When you memorize something, you just remember it. But when you understand something, a little more magic happens:

- You know which part of the knowledge is relevant to the context.

This is quite difficult because the context usually doesn't match the knowledge exactly. For example, if the doctor says "don't drink any water", you may think that you can drink juice. But you can't drink juice either; you can't drink anything.

- You know how different parts of the knowledge are related.

This is even more difficult. The reason is two-fold:

- Nearly every piece of knowledge can be related to another in some way. For example, PNAS sounds like peanuts. Gravity and computers are both related to apples, through Newton and Turing. All these relations are true, but they are merely dry humor and not useful.
- Important relations are usually not obvious. For example, the observation of the movement of celestial bodies proves the existence of gravity, but no one noticed this relation before Newton.

# Ancient approaches

The most important example of the use of indexing is the search engine. Search engines collect the keywords in documents and index them. When you search for a keyword, the search engine returns the documents that contain it. This is the most basic form of indexing.

Search engines provide an efficient way to find a webpage with some keywords. However, you cannot imagine discovering gravity by searching "apple" in Google. This is because the search engine doesn't understand the relations between the keywords; it only knows that the keywords appear in the same document. This approach is already very powerful, but we obviously want more.

A way to analyze the relations and the context of the keywords is to use a knowledge graph. A knowledge graph is a graph that contains objects and the relations between them. For example, `juice` and `water` can be two nodes linked by an edge `has ingredient`. In this way, it might help to decide whether you can drink juice when the doctor says "don't drink any water".

However, none of these ancient approaches has a chance to draw a relation between celestial bodies and gravity. Even after decades of development, they still struggle with anything that is even slightly abstract. Certainly we need something new.

# In the large language model (LLM) era

I will not go into the history of why LLMs work. But I believe everyone reading this article has some sense that LLMs can understand abstract mathematical concepts. If you ask ChatGPT:
```
When I dress myself, I can put on my shirt and then my pants.
I can also put on my pants and then my shirt. It won't make a difference.

What mathematical concept is this?
```
ChatGPT will recognize that the order of the two actions doesn't matter, so the situation is related to commutativity. The answer won't change much if you modify the situation, as long as it still represents the concept of commutativity. It's quite hard to imagine how a search engine or a knowledge graph could do this: the word "commutativity" doesn't even appear in the question.

However, this good performance comes at a cost. The most important limitation is that the size of an LLM's input is limited, nowhere near the scale of knowledge graphs and traditional search engines. You have to decide what the most important context is that the LLM has to know before you ask the question. This, again, requires some understanding of the knowledge. **An LLM helps you most when you already have some understanding of the knowledge.**

## Embedding-based indexing

The good news is that LLMs do not only help us by directly giving answers. They also help us index existing knowledge. Notice that LLMs are built with deep learning technology, in which neural networks are used to process the knowledge. In the intermediate layers of the neural network, the knowledge is represented as vectors called **embeddings**.

These embeddings carry all the information about the input and have already been processed by the neural network for abstract understanding. Therefore, if two inputs to the LLM have similar embeddings, they are likely to be related, even in an abstract way. This is the key idea of **embedding-based indexing**.

Given a few pieces of knowledge, we can use an LLM to generate their embeddings as their index. Whenever a context is given, we generate the embedding of the context and find similar embeddings in the knowledge base. This lets the model gain essential knowledge before answering a question. Importantly, embedding similarity-based indexing is totally scalable, meaning that you have the chance to index the knowledge of astronomy and gravity together!
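To make this concrete, here is a minimal sketch of embedding-based retrieval. The `embed` function is a hypothetical stand-in for whatever embedding model is actually used.

```python
# A minimal sketch of embedding-based retrieval with cosine similarity.
# `embed` is a hypothetical stand-in for a real embedding model or API.
from typing import Callable, List
import numpy as np

def build_index(notes: List[str],
                embed: Callable[[str], np.ndarray]) -> np.ndarray:
    # One row per note; the matrix is the whole index.
    return np.stack([embed(note) for note in notes])

def retrieve(query: str, notes: List[str], index: np.ndarray,
             embed: Callable[[str], np.ndarray], k: int = 3) -> List[str]:
    q = embed(query)
    # Cosine similarity between the query and every indexed note.
    sims = index @ q / (np.linalg.norm(index, axis=1) * np.linalg.norm(q) + 1e-9)
    return [notes[i] for i in np.argsort(-sims)[:k]]
```

With a good embedding model, the query "don't drink any water" should rank a note like "don't drink juice" high even though the two strings share few words.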
# Wrap up

Though there may still be many steps before we let a model rediscover gravity, we have already seen the potential of LLMs in indexing. Importantly, we have found a good roadmap for solving the two problems posed at the beginning. For the first, embedding similarity gives us a tool for finding the knowledge relevant to the context and retrieving it: "don't drink water" will have a high similarity with "don't drink juice". For the second, with the abstract understanding ability of LLMs, we can extract the relation between two pieces of knowledge; it can discover that "don't drink water" actually means "don't drink any liquid".

With embedding-based search in hand, it may seem that all that is left for us to build is simply to improve its performance. However, you will find this a surprisingly tricky task that involves much philosophical effort. Let's discuss it in another article.

# Related works

[History of retrieval](https://dl.acm.org/doi/pdf/10.1145/3486250)

[LlamaIndex](https://www.llamaindex.ai/)

[ACL 2023 Tutorial on Retrieval-based Language Models](https://acl2023-retrieval-lm.github.io/)
\ No newline at end of file
diff --git a/docs/writings/2. Tree indexing.md b/docs/writings/2. Tree indexing.md
deleted file mode 100644
index 1f09a04..0000000
--- a/docs/writings/2. Tree indexing.md
+++ /dev/null
@@ -1,49 +0,0 @@
# Tree indexing

The tree might be the most common traditional index adopted by human beings before the advent of computers. People have practiced organizing books into nested sections for thousands of years. This has two advantages:

- The tree structure helps readers locate knowledge when they have already understood part of it.
- The tree structure helps readers understand the relations between different pieces of knowledge. Similar knowledge is usually put in the same section or subsection.

These are the most obvious reasons, and they are why we call it indexing. However, there is a more subtle reason:

- The tree structure helps readers understand the context of the knowledge.

This reason matters most when the reader is not familiar with the knowledge. It makes it possible to understand the knowledge better without reading the whole book. It works by the following mechanism:

- The path from the root section to the current section forms a natural context for the knowledge.
- The readers can choose which section to move to if they find the current section too trivial, too difficult or too irrelevant to what they want. The tree structure offers a natural path to move along.

## What does it mean to LLM?

The reason why we underscore the importance of tree indexing is obviously not that we want to make a better book. Our question is whether an LLM can benefit from tree indexing. The answer is obviously yes, for the following reasons:

- We only want an LLM to read a book when it does not understand the material well. Its situation therefore resembles that of a reader who is not familiar with the knowledge, which means it needs more hints about the context. The path can be a good hint.
- When we carry out an embedding search on the knowledge, it is always more reasonable to include the context. The path can be a good context that makes the embedding better (see the sketch after the next list).

Further, if we turn the LLM into an agent that can actively travel over the book and add new content to it:

- The tree structure helps the agent explore related knowledge along the tree and filter out the useful pieces. This improves the search results, especially when the query is abstract and only implicitly related to the knowledge.
- The existing tree structure offers a good reference for creating new branches, keeping the book organized and easy to read even when it grows very large.
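Here is a minimal sketch of the path-as-context idea. The `Node` class and the `embed` helper are illustrative assumptions, not EvoNote's actual data structures.

```python
# A minimal sketch: embed a note together with its root-to-node path, so the
# tree position enriches the embedding. `embed` is a hypothetical stand-in.
from dataclasses import dataclass, field
from typing import Callable, List, Optional
import numpy as np

@dataclass
class Node:
    title: str
    content: str = ""
    parent: Optional["Node"] = None
    children: List["Node"] = field(default_factory=list)

    def path(self) -> str:
        # Walk up to the root to recover the natural context of this node.
        parts, node = [], self
        while node is not None:
            parts.append(node.title)
            node = node.parent
        return " / ".join(reversed(parts))

def embed_with_context(node: Node,
                       embed: Callable[[str], np.ndarray]) -> np.ndarray:
    # Two notes with identical text but different positions in the tree
    # now receive different embeddings.
    return embed(f"{node.path()}\n\n{node.content}")
```

An agent travelling on the tree can use the same `path()` string as the hint it reads before deciding whether to descend into a child or back out.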
## What is the difference between tree indexing for humans and for LLMs?

Usually, in human-made books, the tree structure is quite coarse. One of the reasons might be this: an explicit fine-grained tree structure is hard to make and hard to read. Though human writers may use lots of lists and asides that make the tree structure effectively more fine-grained, it is laborious to give a name to every small section, and human readers are not willing to read a book with too many small sections.

However, this does not hold for LLMs, because:

- The cost for an LLM to write or read is much lower than for a human. Writing or reading a lot of small sections is not a problem.
- Because an LLM does not come with a long-term memory system, it is more important to make the context explicit. Books for humans do not assume that each section is read in isolation; readers can use their long-term memory to supply the context. An LLM has no such ability.

# Wrap up

With the discussion above, we know that:

- Tree indexing is a time-tested way to organize knowledge.
- Tree indexing is important for LLMs to understand knowledge better.
- Tree indexing for LLMs can be more fine-grained than for humans.

As we discussed in the first article, indexing is closely related to understanding. Surely, we can see that tree indexing helps organize and retrieve knowledge. However, could it really help an LLM understand abstract things like science?

# Related works

[Walking Down the Memory Maze: Beyond Context Limit through Interactive Reading](https://arxiv.org/abs/2310.05029)
\ No newline at end of file
diff --git a/docs/writings/2.1 Method of Loci and sparsity.md b/docs/writings/2.1 Method of Loci and sparsity.md
deleted file mode 100644
index 1b85b48..0000000
--- a/docs/writings/2.1 Method of Loci and sparsity.md
+++ /dev/null
@@ -1,39 +0,0 @@
# Method of Loci

The Method of Loci, also known as the memory palace, is a method for memorizing things by associating them with a place. It is a very old method that has been used for thousands of years, and a very effective one.

When you try to memorize a list of things, you can simply imagine a place you are familiar with and put the things in that place. When you want to recall the things, you imagine the place and the things will come to your mind.

## Why is the method good?

Why is this method efficient? Here is the claim:
::: tip Claim
The Method of Loci is efficient because it creates a graph of knowledge in which each node has only a limited number of edges. That is, it is a sparse graph.
:::
Here, in the graph of knowledge, the nodes are the contexts (situations) and the edges lead to a memory or to another situation. The whole point of the Method of Loci is to turn a list of things, which is densely indexed, into a sparsely connected structure.

## Why is a sparse graph good?

Here is the claim:

::: tip Claim
A sparse graph performs better because it fits the context window of the human brain better.
:::

Thinking with a sparse graph limits the number of things you need to think about at one time. At the same time, because the knowledge is still interconnected, you can still think about the whole body of knowledge.

## What does it mean to LLM?

An LLM also has a limited number of tokens in its context window. Current technology still struggles to make the context window large, and when it appears to be large, the performance is usually not good (see [Lost in the Middle: How Language Models Use Long Contexts](https://arxiv.org/abs/2307.03172)).

Maybe this will improve in the future, but I strongly doubt it will happen very fast. We can use the sparsity of the graph to decrease the number of things the LLM needs to think about at one time and enhance the performance.
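A minimal sketch of this idea: fold a flat list into a tree with a bounded branching factor, so that any single step confronts the reader, or the LLM, with only a few choices. The naive fixed-size grouping below stands in for a real topical grouping, which an LLM could provide.

```python
# A minimal sketch of sparsification: bound the branching factor of the
# knowledge structure. Grouping by fixed-size chunks is a placeholder for
# grouping by topic.
from typing import List, Union

Tree = Union[str, List["Tree"]]

def sparsify(items: List[Tree], branching: int = 4) -> List[Tree]:
    # Regroup until the root has at most `branching` children.
    while len(items) > branching:
        items = [items[i:i + branching]
                 for i in range(0, len(items), branching)]
    return items

notes = [f"note {i}" for i in range(20)]
tree = sparsify(notes)
# At every level, a reader (or an LLM) now faces at most 4 choices
# instead of all 20 notes at once.
```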
## How about EvoNote?

EvoNote uses a tree structure to index the knowledge. It has a natural advantage in keeping the connections at each node (note) sparse. Compared to approaches that index the knowledge with a flat list (e.g., chunks) or a dense graph (e.g., a knowledge graph), it is more efficient.

## How about DocInPy?

DocInPy provides a way to add sections to your Python code, separating the functions and classes and arranging them into a tree structure. This makes it possible to keep the tree sparse.

A lot of Python projects put tons of functions in one file. This has long been a barrier for both humans and LLMs trying to understand the code. DocInPy can help solve this problem.
\ No newline at end of file
diff --git a/docs/writings/3. Copernicus and indexing.md b/docs/writings/3. Copernicus and indexing.md
deleted file mode 100644
index a8bc68c..0000000
--- a/docs/writings/3. Copernicus and indexing.md
+++ /dev/null
@@ -1,64 +0,0 @@
# What are different understandings?

In the previous article, we found that indexing is deeply related to understanding. However, can we somehow give a definition of understanding? We mentioned in the first article that memorization is different from understanding. So explicit knowledge alone cannot be understanding; understanding must be something beyond the knowledge itself. Naturally, because this "other" thing must be related to the stand-alone knowledge, we can call it the **implicit context** of the knowledge. With this definition, I claim that

::: tip Claim
To understand a thing, you must know the implicit context of it.
:::

and consequently,

::: tip Claim
The way you understand a thing is the way you assign implicit context to it.
:::

Let's illustrate this with a few examples.

::: tip Example
Modern educated humans think they understand earthquakes and treat them as a result of the movement of tectonic plates. They think they understand because they can fit the phenomenon of earthquakes into their existing knowledge of geology and use it as a context.
:::
::: tip Example
Ancient Japanese people thought they understood earthquakes and treated them as the result of the movement of a giant catfish supporting the Japanese islands. They thought they understood because they could fit the phenomenon of earthquakes into their existing knowledge of mythology and use it as a context.
:::

These examples show that different people have different understandings of the same thing. They assign different implicit contexts to one thing, and each strongly believes their own.

Here, I want to emphasize that we do not care whether the context they assign is correct. We only care about the fact that they assign different contexts. Importantly, both assigned contexts might even be correct; they are simply different.

::: tip Example
A person who believes the geocentric model thinks he understands the movement of the planets because it fits perfectly into his existing knowledge of astronomy, which he uses as a context, even though in that context the planets move in a very complicated way.
:::

::: tip Example
A person who believes the heliocentric model thinks he understands the movement of the planets because it fits perfectly into his existing knowledge of astronomy, which he uses as a context. The context is different from the previous one, and in it the planets move in a very simple way.
:::

## Tree indexing as an understanding

As we introduced in the previous article, tree indexing can help assign a context to the knowledge. With a tree index, we can find existing knowledge that is similar to the incoming knowledge. With the help of the paths of the existing knowledge, a new path, namely a new context, can be created. This is how tree indexing helps an LLM understand the knowledge. Specifically, the understanding can be carried out in the following way (a sketch in code follows the procedure):

::: tip Procedure
- Search for similar knowledge in the knowledge base.
- Gather the paths of the similar knowledge.
- Synthesize new paths for the incoming knowledge.
- Use the new paths as the context of the incoming knowledge for rephrasing it.
- Put the rephrased knowledge into the knowledge base.
:::
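A minimal sketch of this procedure. The `retrieve` and `complete` helpers are hypothetical (an embedding search in the spirit of the earlier sketch, and an LLM completion call); none of this is EvoNote's actual implementation.

```python
# A minimal sketch of the understanding procedure above. `retrieve` performs
# an embedding search over the knowledge base, `complete` is an LLM call, and
# `paths` maps each stored note to its tree path. All three are hypothetical.
from typing import Callable, Dict, List, Tuple

def understand(new_note: str,
               paths: Dict[str, str],
               retrieve: Callable[[str, int], List[str]],
               complete: Callable[[str], str]) -> Tuple[str, str]:
    # 1. Search for similar knowledge in the knowledge base.
    similar = retrieve(new_note, 3)
    # 2. Gather the paths of the similar knowledge.
    similar_paths = [paths[note] for note in similar]
    # 3. Synthesize a new path for the incoming knowledge.
    new_path = complete("Given these section paths:\n"
                        + "\n".join(similar_paths)
                        + f"\n\npropose a path for this note:\n{new_note}")
    # 4. Rephrase the note with the new path as its context.
    rephrased = complete(f"Rephrase the note below so that it reads naturally "
                         f"under the section '{new_path}':\n{new_note}")
    # 5. The caller then puts (new_path, rephrased) into the knowledge base.
    return new_path, rephrased
```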
## Tree transformation as a transformation of understanding

The generation of implicit context in tree indexing relies on the logic of the paths. Therefore, we can say that the logic of path generation characterizes an understanding, and people can change the path logic to achieve a new understanding. For example: TODO.

## Understanding inside LLM

An LLM can understand sentences. Where is the implicit context? My interpretation is that the layers of the LLM are responsible for adding these implicit contexts, including the grammar, the meaning of tokens and world knowledge. After being processed by the layers, the hidden state, which is the embedding of the sentence, contains the implicit context. Therefore, we can say that even the simplest embedding-based search provides a way to understand the knowledge, even when no tree structure is involved.

# Related works

[A Contextual Approach to Scientific Understanding](https://link.springer.com/article/10.1007/s11229-005-5000-4)

[Memory is a modeling system](https://doi.org/10.1111/mila.12220)

diff --git a/docs/writings/4. Continuous and discrete knowledge.md b/docs/writings/4. Continuous and discrete knowledge.md
deleted file mode 100644
index 7301528..0000000
--- a/docs/writings/4. Continuous and discrete knowledge.md
+++ /dev/null
@@ -1,72 +0,0 @@

# Continuous and discrete knowledge

Surely, there are many criteria for classifying knowledge. The important thing is how much insight we can get from the classification. In this article, I will introduce how to use the idea of continuous and discrete to classify knowledge.

## Discrete knowledge

### What is discrete knowledge?

::: tip Definition
Discrete knowledge is knowledge whose state is defined in a discrete space. Variations of it cannot be infinitesimal.
:::

For example, a coin has two states: head and tail. The state of a coin is discrete knowledge.

More importantly, logical deduction operates on discrete knowledge. All systems with a flavour of **logic** and a clear border between what is true and what is wrong, e.g., knowledge graphs and symbolic deduction, mainly operate on discrete knowledge.

### What is the property of discrete knowledge?

Discrete knowledge is clear and easy to operate on with computers. It can ensure 100% correctness given correct assumptions. For fields that have concrete assumptions, e.g., mathematics, discrete knowledge and its deduction suffice.

### Failure of discrete knowledge

However, not all fields have concrete assumptions. In the long debate between rationalism and empiricism, people found that it is absolutely not easy to find reliable and non-trivial assumptions to reason from (see Kant and Hume). My explanation for this failure is that the world is too complex to be described by a few pieces of discrete knowledge. Even if such a set of discrete knowledge exists, it is not affordable to the human brain. For example, I admit that the world might be discrete if you look at it at a very small scale. However, the number of discrete states is too large for humans to make any useful deduction outside cosmology or particle physics. Most useful knowledge does not change its essence when you vary it a little.

## Continuous knowledge

### What is continuous knowledge?

::: tip Definition
Continuous knowledge is knowledge whose state is defined in a continuous space. It allows infinitesimal variations.
:::

For example, the probability that a coin will land heads is continuous knowledge. The probability is a real number between 0 and 1.

More importantly, neural networks hold continuous knowledge. The state of a neural network is defined by the weights of the connections between neurons. The weights are real numbers, which form a continuous space.
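The contrast can be stated in a few lines of Python, using the coin from the definitions above; this is only a restatement of the definitions, not new machinery.

```python
# A minimal sketch of the two definitions. A coin's face is a discrete state:
# no infinitesimal variation is possible. A belief about the coin is
# continuous: it can vary by an arbitrarily small amount.
from typing import Literal

CoinFace = Literal["head", "tail"]   # discrete: exactly two states

face: CoinFace = "head"              # changing it is a jump, never a drift
p_head: float = 0.5                  # continuous: any value in [0, 1]
p_head += 1e-9                       # an infinitesimal variation still counts
```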
### How to tell whether knowledge is continuous?

It might be tricky to check whether a piece of knowledge is continuous or not. The key is to imagine whether the knowledge can undergo a very small variation and still remain mostly true. For example, when you try to recall someone's voice, you can never be sure that your memory today is the same as your memory yesterday. The same goes for smell, visual or kinetic memory.

Most importantly, though it also contains discrete knowledge like grammar, a large part of our **knowledge about words** is continuous. Your **feeling** about a certain word is continuous. The most obvious example is brands. You must have a certain feeling about Coca-Cola, Pepsi, Tesla and BMW; these feelings have no clear border of correctness, nor can you check that they are stable.

### What is the property of continuous knowledge?

The representational power of continuous knowledge is much stronger than that of discrete knowledge. It is very hard to imagine how to represent the feeling of skiing, or the recollection of a picture, in a discrete format.

Continuous knowledge is more natural for humans to process. Most physical theories also assume that space is continuous, or that its discreteness is negligible for humans. The power of continuous knowledge is also demonstrated by the success of neural networks. There was a paradigm shift in *artificial intelligence* in the 1990s from discrete to continuous, followed by the triumph of neural networks in nearly every field.

### Natural language carries continuous knowledge

Admittedly, the symbols in a language are discrete. However, they are meaningless without an interpreter. The development of natural language processing has witnessed the failure of the discrete approaches to understanding natural language. History has shown that parsing sentences into syntax trees is hard and not as useful as using neural networks to process the natural language directly.

> A syntax tree can never represent the accurate meaning. For example, I can pose a question:
> "If apple means eat in the next sentence: 'Mike apple an apple.' What did Mike intake?"
> This question is easy for a human to answer but will break any natural-language parser.

### Failure of continuous knowledge

However, the intrinsic drawbacks of continuous knowledge are still there. Even in 2023, we still cannot handle math, logic and coding satisfactorily with neural networks. This is surely because of the discrete nature of these tasks. How to bridge continuous knowledge with discrete knowledge will be the main challenge in building AI.

## How is all this related to EvoNote?

::: tip Insight
EvoNote is trying to add more discrete structure to continuous knowledge.
:::

EvoNote uses a tree structure to organize natural language at a macro scale (recall the section on tree indexing). This assigns the continuous knowledge a discrete structure (the tree), which we believe can help build a continuous-discrete hybrid knowledge base and make AI capable at discrete tasks.

diff --git a/docs/writings/4.1 Interface of continuous and discrete.md b/docs/writings/4.1 Interface of continuous and discrete.md
deleted file mode 100644
index 9b19e47..0000000
--- a/docs/writings/4.1 Interface of continuous and discrete.md
+++ /dev/null
@@ -1,66 +0,0 @@
# Interface of continuous and discrete

## Heap attack

### Paradox of the heap

::: tip Story
If you remove one grain of sand from a heap of sand, it is still a heap. If you keep removing grains of sand, eventually you will have only one grain of sand left. Is it still a heap?
:::

The key point of this paradox lies in the continuous nature of the concept `heap`. The concept `heap` is never actually well-defined when we use it in natural language, nor does it have a formal definition in any other way. Your knowledge about the heap might vary infinitesimally.

::: tip Observation
For the same object, you might think it is a heap this second and not a heap the next second. However, you do not think your knowledge about heaps is changing even though you give different answers. This shows that the concept `heap` is continuous, because infinitesimal variations do not matter.
:::

### Attack on any continuous concept

For any continuous concept that does not have a clear definition, you can always attack it in the following way:

::: tip Protocol
- Find an object that is an instance of the concept.
- Find a way to vary the object so that it remains an instance of the concept.
- Show that along the way, the object will eventually stop being an instance of the concept.
- Along the way there must then be a "sweet point". However, the sweet point should not exist, because an infinitesimal variation should not change the concept, even at the sweet point. Therefore, the concept is not well-defined.
:::

For example, you can attack the concept `machine learning` in the following way:

::: tip Example
- Optimizing a neural network on machines is an instance of `machine learning`.
- You can adjust the process by replacing some of the learning steps with human-made steps. It is still an instance of `machine learning`.
- You can adjust the process by replacing all the learning steps with human-made steps except for one machine-made noise step. It is ridiculous when there are 100,000 human-made steps and 1 useless machine-made step; however, it is still an instance of `machine learning`.
:::

Importantly, nearly all the big concepts are continuous, including `philosophy`, `science`, `understanding`, `knowledge`, `freedom`, `democracy`, etc. They are all vulnerable to this attack. Though they are useful concepts, people should keep in mind that they can never have a clear definition. History has proved this impossibility.

## Mixture of continuous and discrete

There is knowledge that is neither purely continuous nor purely discrete; it is a mixture of both. For example, though I claimed that neural networks mainly carry continuous knowledge, they also carry discrete knowledge in their network structure. The network structure is discrete and the weights are continuous.

The tree of knowledge in EvoNote is the same. The tree structure is discrete, and the knowledge in the nodes is continuous because it is natural language.

### Machine learning engineers

Machine learning engineers do a funny job. They design the discrete structure of the neural network and train the continuous weights. In this way, they mix discrete and continuous knowledge together.

In this process, one interesting point is the roles of human and machine. The discrete structure is designed by the human and the continuous weights are trained by the machine. Therefore, the machine only deals with continuous knowledge, and the discrete knowledge is handled by the human. This matches the performance of the models as products: they are good at continuous tasks and bad at discrete tasks.
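A minimal sketch of this division of labor, using an assumed toy two-layer network rather than any production architecture:

```python
# The architecture (layer sizes) is discrete knowledge chosen by the human;
# the weights are continuous knowledge adjusted by the machine.
import numpy as np

layer_sizes = [4, 16, 2]       # discrete: designed by the engineer

rng = np.random.default_rng(0)
weights = [rng.normal(size=(m, n))   # continuous: trained by the machine
           for m, n in zip(layer_sizes, layer_sizes[1:])]

def forward(x: np.ndarray) -> np.ndarray:
    for w in weights[:-1]:
        x = np.tanh(x @ w)     # infinitesimal weight changes shift the output
    return x @ weights[-1]

y = forward(np.ones(4))
# Training nudges `weights` continuously; changing `layer_sizes` is a
# discrete jump that no gradient step can make.
```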
> Neural Architecture Search (NAS) tries to automate the design of the discrete structure. However, it failed heavily in the competition with the transformer structure. This proves once again that machine learning is not good at discrete tasks.

## Math that bridges continuous and discrete

Math provides some concrete bridges between the continuous and the discrete. Such bridges are hard to find in daily-life knowledge, which makes math beautiful and holy.

### Group theory

The space where the continuous knowledge lives might have a symmetry described by a certain Lie group. Group theory offers a way to analyze this continuous knowledge by analyzing its Lie group. For example, the Lie group might have a countable number of generators, which gives a discrete way to analyze the continuous knowledge. We can also analyze the representations of the Lie group, and the picture becomes more discrete still if we can decompose them into irreducible representations.
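As a small worked illustration (a textbook example, not specific to EvoNote), consider the rotation group SO(2): the whole continuous family of rotations is produced by a single generator, and its irreducible representations are labeled by integers.

```latex
% Every rotation is the exponential of one generator J, a single discrete object:
\[
R(\theta) = e^{\theta J}, \qquad
J = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}
\]
% The irreducible (complex, unitary) representations are one-dimensional,
% labeled by an integer n, a discrete index for a continuous symmetry:
\[
\rho_n\bigl(R(\theta)\bigr) = e^{i n \theta}, \qquad n \in \mathbb{Z}
\]
```

The discrete data (the generator J and the labels n) completely organize the continuous family of rotations.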
The discrete knowledge found by group theory has been applied to neural network design. [Equivariant neural networks](https://arxiv.org/abs/2006.10503) are one example.

### Topology

Topology is an example where continuous entities can be unambiguously represented by discrete entities.

Condensed matter physics loves topology very much. Part of the reason is that topological properties are nearly the only way to describe condensed matter systems in a discrete way.
\ No newline at end of file
diff --git a/docs/writings/4.2 Tyranny of science.md b/docs/writings/4.2 Tyranny of science.md
deleted file mode 100644
index be13a29..0000000
--- a/docs/writings/4.2 Tyranny of science.md
+++ /dev/null
@@ -1,13 +0,0 @@
# Tyranny of science

See Nietzsche and Foucault.

## Scientific discourse

Science is meant to be accurate, and the cost of this accuracy is ignoring the rich context that real-life situations have. By making science the absolute premium way to think, we put a coercive discretizer on the human world, which leads to deviations from the facts and to many tragedies.

diff --git a/docs/writings/Comments on previous work.md b/docs/writings/Comments on previous work.md
deleted file mode 100644
index e9fe81c..0000000
--- a/docs/writings/Comments on previous work.md
+++ /dev/null
@@ -1,23 +0,0 @@
# Comments on previous philosophical works

My comments are mainly made on the following paper:

What is Understanding? An Overview of Recent Debates in Epistemology and Philosophy of Science: https://philpapers.org/rec/BAUWIU

## A decisive question

My decisive question for philosophers: do Pavlov's dog and Russell's chicken understand the bell and the coming of the farmer?

My answer: yes! They understand to the extent that they can draw a context. We should not require the context to be always true. Even humans believe mythology.

My observation: according to the reference, most philosophers do not agree with me.

## Chapter-wise comments

4.1 Understanding and the facts

My comment: factivity, to any extent, is not related to understanding. Understanding can easily be totally wrong. Mythology is a good example.

4.2.2. Grasping

My comment: it seems the philosophers care about knowing causality and think it might be a criterion of understanding. In my view, causality is just a piece of knowledge. The fact that people usually form an implicit context with it does not make it a necessary part of understanding.
\ No newline at end of file
diff --git a/docs/writings/index.md b/docs/writings/index.md
deleted file mode 100644
index d9ede4c..0000000
--- a/docs/writings/index.md
+++ /dev/null
@@ -1,4 +0,0 @@

# Writings

Here are writings reflecting the ideas of EvoNote.
\ No newline at end of file