So here is the gist of how something like this might be implemented:

1.) Persistently store the results of every simple implication.

2.) When you generate a proof that something is impossible, you are trying to prove something like $\neg(P_1 \wedge \dots \wedge P_n)$ for some combination of properties. Now you can do a breadth first search through the graph of all states of known properties, joined by applicable theorems, to look for a contradictory state. (Note this is abstractly what you're doing; of course we don't need to store said graph anywhere. Every node in this graph could be represented pretty compactly, e.g. as a small set or bitmask of known property values.) My hot take is it ought to be feasible. There are by my count about 150 complex theorems in $\pi$-base. (Every theorem is itself a negated conjunction, say e.g. $\neg(A \wedge B \wedge \neg C)$.)

3.) After you get the list of theorems used in a shortest path, alongside the list of all "states of known properties" between each theorem, work backwards to generate the proof, recording at each step the complex theorem applied and the simply-implied facts it unlocked. (A rough sketch of 1.)-3.) in code follows after this list.)
4.) The breadth first search is obviously only important for generating proofs. For generating the list of properties of a single space you can do what it seems you already do, possibly slightly more efficiently: just repeatedly apply theorems until no more are applicable, but still add all of the persistently stored simply-implied properties after each theorem (a sketch of this pass is also included below). On the other hand, if it turns out to be efficient enough (and I suspect it might), generating a single shortest possible chain of deductions for a given space might also be valuable. One could do the breadth first search for the property generation as well, taking the shortest path to a state where no theorems are applicable. This could give a user a nice overview on the main page of a space of how all the important properties are deduced - again only showing the strongest properties deduced and the complex theorems used, and hiding the rest inside chains of simple deductions that can be expanded by users if more detail is desired.
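To make 1.)-3.) concrete, here is a minimal Python sketch of the search. Everything in it is hypothetical: I'm assuming a "fact" is a `(property, value)` pair, a theorem is stored as `(hypotheses, conclusion)` with the hypotheses a frozenset of facts (the negated conjunction split into an implication), and `simple_closure` is the persistently stored table of simple implications from 1.). None of these names correspond to anything in the actual $\pi$-base code.

```python
from collections import deque

# Hypothetical representation (not the actual pi-base schema):
#   fact    = (property_id, bool)            e.g. ("metrizable", True)
#   theorem = (hypotheses, conclusion)       hypotheses: frozenset of facts
#   state   = frozenset of facts             a "state of known properties"

def contradictory(state):
    """A state is contradictory if some property is both asserted and denied."""
    return any((prop, not value) in state for (prop, value) in state)

def saturate(state, simple_closure):
    """Add every fact the persistently stored simple implications give us.
    `simple_closure` maps a fact to the set of facts reachable from it
    through chains of simple theorems alone."""
    known, todo = set(state), list(state)
    while todo:
        fact = todo.pop()
        for implied in simple_closure.get(fact, ()):
            if implied not in known:
                known.add(implied)
                todo.append(implied)
    return frozenset(known)

def shortest_impossibility_proof(assumed_facts, complex_theorems, simple_closure):
    """Breadth first search through states of known properties.

    Each edge applies one complex theorem and then saturates with simple
    implications for free, so the path length found is exactly the number
    of complex theorems used.  Returns the (theorem, resulting state)
    pairs along a shortest path to a contradictory state, or None."""
    start = saturate(frozenset(assumed_facts), simple_closure)
    parent = {start: None}        # state -> (previous state, theorem applied)
    queue = deque([start])
    while queue:
        state = queue.popleft()
        if contradictory(state):
            # 3.) Work backwards through the parent links to recover the
            # chain of complex theorems and the intermediate states.
            chain, cur = [], state
            while parent[cur] is not None:
                prev, theorem = parent[cur]
                chain.append((theorem, cur))
                cur = prev
            chain.reverse()
            return chain
        for hypotheses, conclusion in complex_theorems:
            if hypotheses <= state and conclusion not in state:
                nxt = saturate(state | {conclusion}, simple_closure)
                if nxt not in parent:
                    parent[nxt] = (state, (hypotheses, conclusion))
                    queue.append(nxt)
    return None   # the assumed combination of properties was not refuted
```

Only the complex theorems create edges; the simple ones are folded into `saturate`, which is what makes the path length coincide with the proof-length metric proposed in the original post.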
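And for 4.), the "list all the properties of a single space" pass is just forward chaining to a fixed point. This sketch reuses `saturate` and the hypothetical theorem representation from the sketch above:

```python
def deduce_all(known_facts, complex_theorems, simple_closure):
    """Repeatedly apply complex theorems until no more are applicable,
    adding all of the persistently stored simply-implied facts after
    each application."""
    state = saturate(frozenset(known_facts), simple_closure)
    changed = True
    while changed:
        changed = False
        for hypotheses, conclusion in complex_theorems:
            if hypotheses <= state and conclusion not in state:
                state = saturate(state | {conclusion}, simple_closure)
                changed = True
    return state
```

The "shortest chain of deductions for a given space" variant would instead reuse the breadth first search above, with the stopping condition changed from "state is contradictory" to "no complex theorem is applicable".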
-
Just to make things clear, each of the
-
I was reading this thread the other day talking about optimizing the proofs generated by $\pi$-base.
I had an idea which I thought to post as a comment there, but it got kind of long-winded, and I didn't want to hijack @prabau's thread with a long word-salad.
So instead I decided to hash out my ideas here, in a separate thread that can be safely muted :)
In the other thread, @jamesdabbs wrote:
What happens if you define the length of a deduction as the smallest number of complex theorems needed to prove the result? Here I consider a theorem of the form $A\implies B$ (or $A\implies \neg B$, or $\neg A \implies B$) to be simple, and all others complex.
The philosophy for this definition is that a long chain of simple implications $A\implies B\implies C \implies D$ is often artificial complexity, and many mathematicians probably don't need to even see the whole chain - a default view of a more complicated proof using such a chain could just show $A\implies D$, with a little "+" icon to expand the chain of implications for anyone who needs to see them.
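For what it's worth, here is one way the simple/complex split and the collapsed default view could look in code. The representation is purely hypothetical (hypotheses as a set of signed properties, a single signed property as the conclusion), matching the sketches in the comments elsewhere in this thread:

```python
def is_simple(hypotheses):
    """Simple theorem: exactly one hypothesis, i.e. one of
    A => B, A => ~B, ~A => B, ~A => ~B.  Everything else is complex."""
    return len(hypotheses) == 1

def proof_length(steps):
    """Proposed length of a deduction: count only the complex theorems;
    chains of simple implications are free."""
    return sum(1 for hypotheses, _ in steps if not is_simple(hypotheses))

def collapse_simple_runs(steps):
    """Group maximal runs of consecutive simple steps, so a default view
    can render each run as a single implication (A => D) with the full
    chain hidden behind a "+" expander."""
    view, run = [], []
    for hypotheses, conclusion in steps:
        if is_simple(hypotheses):
            run.append((hypotheses, conclusion))
        else:
            if run:
                view.append(("simple_chain", run))
                run = []
            view.append(("complex_step", (hypotheses, conclusion)))
    if run:
        view.append(("simple_chain", run))
    return view
```

A renderer would then show only the first hypothesis and the last conclusion of each "simple_chain" entry by default.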
I have a suspicion that if you define things this way, and persistently store all the simple implications, then a breadth first search might be feasible.
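Concretely, by "persistently store all the simple implications" I mean something like precomputing the transitive closure of the single-hypothesis theorems whenever the theorem list changes, and caching that table. A rough sketch, with made-up names and the same `(property, value)` fact representation as the sketches elsewhere in this thread:

```python
def simple_closure_table(simple_theorems):
    """For every signed property (p, value), precompute the set of signed
    properties reachable through chains of simple theorems alone.  This is
    the table the saturation step in the search sketches elsewhere in the
    thread looks up; it only needs rebuilding when the theorems change."""
    direct = {}                                  # fact -> directly implied facts
    for hypotheses, conclusion in simple_theorems:
        (fact,) = tuple(hypotheses)              # simple = exactly one hypothesis
        direct.setdefault(fact, set()).add(conclusion)
        # (If desired, the contrapositive edge not-conclusion => not-fact
        #  could be added here as well.)

    closure = {}
    for start in direct:
        reachable, stack = set(), [start]
        while stack:
            fact = stack.pop()
            for implied in direct.get(fact, ()):
                if implied not in reachable:
                    reachable.add(implied)
                    stack.append(implied)
        closure[start] = reachable
    return closure
```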
I also have some ideas for how to implement this, though as I'm still digesting how the site works currently, maybe they should be taken with a grain of salt. In any case, I'll elaborate in a comment.