Create some unit tests for Doc syntax #5376
Conversation
Previously it would trim the first and last `Lexeme` from the actual result, to avoid having to include the extra “file” `Open`/`Close` in the expected value. However, when lexing failed, you’d just get a mismatch against an empty list of tokens. This now adds `Open`/`Close` to expected before comparing, and reports lexing failures differently.
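A minimal sketch of the comparison change described above, using hypothetical types and names (not Unison's actual lexer API): the expected value is wrapped in the implicit file-level `Open`/`Close` before comparing, and lexing failures are reported distinctly rather than as a mismatch against an empty list.

```haskell
{-# LANGUAGE LambdaCase #-}

module Main where

-- Hypothetical lexeme type for illustration only.
data Lexeme = Open String | Close | Tok String | Err String
  deriving (Eq, Show)

-- Wrap the expected lexemes in the implicit file-level Open/Close,
-- instead of trimming them from the actual result.
wrapExpected :: [Lexeme] -> [Lexeme]
wrapExpected expected = Open "file" : expected ++ [Close]

-- Report lexing failures differently from ordinary mismatches.
checkLexer :: [Lexeme] -> [Lexeme] -> Either String ()
checkLexer expected actual
  | any isErr actual = Left ("lexer failed: " ++ show (filter isErr actual))
  | actual == wrapExpected expected = Right ()
  | otherwise = Left ("mismatch: " ++ show actual)
  where
    isErr = \case Err _ -> True; _ -> False

main :: IO ()
main = do
  print (checkLexer [Tok "hello"] [Open "file", Tok "hello", Close])
  print (checkLexer [] [Open "file", Err "bad char", Close])
```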
This does all `Doc` annotations with `Cofree`. As a consequence, some types have been broken apart and constraints that were mentioned in comments before are now encoded in the types. The only remaining direct recursion is now between `Column` and `List`. This is a precursor for unit testing Doc parsing, as this allows us to drop all annotations from the Doc structure, making it much less fragile.
Have them alongside the Doc parser tests, rather than embedded in `Main`.
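A sketch of the annotation pattern described above, under assumed names: the `DocF` pattern functor here is a toy stand-in for Unison's real Doc structure, but it shows how a `Cofree'`-style base functor lets annotations be stripped with a single fold, which is what makes annotation-free test cases easy to write.

```haskell
{-# LANGUAGE DeriveFoldable, DeriveFunctor, DeriveTraversable #-}

module Main where

newtype Fix f = Fix (f (Fix f))

-- Toy Doc-like pattern functor (hypothetical, not Unison's real one).
data DocF r = Word String | Paragraph [r]
  deriving (Foldable, Functor, Traversable)

-- The base functor of Cofree: one annotation layer per node.
data Cofree' f a x = a :<< f x
  deriving (Foldable, Functor, Traversable)

type Ann a = Fix (Cofree' DocF a) -- annotated Doc
type Doc   = Fix DocF             -- bare Doc

cata :: Functor f => (f a -> a) -> Fix f -> a
cata alg (Fix f) = alg (fmap (cata alg) f)

-- Dropping all annotations is a single fold over the structure.
stripAnn :: Ann a -> Doc
stripAnn = cata (\(_ :<< d) -> Fix d)

-- Example fold over the bare structure: count the nodes.
size :: Doc -> Int
size = cata $ \d -> case d of
  Word _       -> 1
  Paragraph rs -> 1 + sum rs

main :: IO ()
main = do
  let ann = Fix ((0 :: Int) :<< Paragraph [Fix (1 :<< Word "hi")])
  print (size (stripAnn ann))
```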
```haskell
data Cofree' f a x = a :<< f x
  deriving (Foldable, Functor, Traversable)
```
Just curious, any reason why not use `CofreeF` from free?
I should have. I think I got confused and tried to use the isomorphic `EnvT` … but that’s from transformers, and I didn’t find it in free (of course), so I was like “oh, maybe the pattern functor isn’t here.”

In my own code, I don’t use those because they’re not strict in the functor parameter, so they don’t result in lawful fixed points. But my `Cofree'` here isn’t strict either, so …
```haskell
type Algebra f a = f a -> a

class Recursive t f | t -> f where
```
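A sketch of how this `Algebra`/`Recursive` encoding can be used, under assumptions not in the diff (the `Functor` superclass, the `project` method, and the `ListF` example instance are illustrative, not necessarily the PR's actual definitions): the functional dependency lets `cata` fold any type that names its pattern functor.

```haskell
{-# LANGUAGE DeriveFunctor, FlexibleInstances, FunctionalDependencies #-}

module Main where

type Algebra f a = f a -> a

-- Assumed shape: the fundep determines the pattern functor from the type.
class Functor f => Recursive t f | t -> f where
  project :: t -> f t

-- A catamorphism over any Recursive type.
cata :: Recursive t f => Algebra f a -> t -> a
cata alg = alg . fmap (cata alg) . project

-- Example instance: ordinary lists with their pattern functor.
data ListF a r = NilF | ConsF a r
  deriving (Functor)

instance Recursive [a] (ListF a) where
  project []       = NilF
  project (x : xs) = ConsF x xs

sumAlg :: Algebra (ListF Int) Int
sumAlg NilF        = 0
sumAlg (ConsF x r) = x + r

main :: IO ()
main = print (cata sumAlg [1, 2, 3 :: Int])
```

This is essentially the same shape as recursion-schemes' `Recursive`/`cata`, with the `Base` type family replaced by a functional dependency.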
Just curious why not use recursion-schemes directly? I personally find your encoding here less confusing than all the `Base` type-family nonsense, but it's also nice to maintain compatibility with the ecosystem where possible 😄
Yeah, my plan was to replace this with a recursion scheme library eventually. At this point, I was just trying to centralize the (mostly) existing copies of this pattern.

I’m happy to put in recursion-schemes, but I also have my own recursion scheme library, Yaya, which makes some different decisions than recursion-schemes¹. One of them is the use of `FunctionalDependencies`, as you pointed out. Another is that Yaya tries to provide totality (especially when paired with `NoRecursion`).

So I just punted on that discussion for now.
Footnotes
1. There’s a more in-depth comparison with recursion-schemes in Yaya’s README. ↩
Overview
The goal of this is faster testing cycles and more precise test cases than using transcripts. However, there is a bunch of background work to make it work. Most notably, it adds a `unison-util-recursion` package to unify the recursion-schemey code around the repo, and `Token` is removed from the lexer tests.

Implementation notes
This takes more advantage of fixed-point operators to allow changing the structure more easily (dropping annotations for test cases).
The Doc restructuring also encodes more of the restrictions in types (which had previously been in comments). We don’t want to keep those restrictions, but they reflect what the parser is currently able to parse.
This PR also changes the lexer unit tests a bit to catch some failures that could previously be missed. E.g., since it dropped the first and last token of the actual result (to match the provided expected result), an `Err` lexeme could be dropped, making a failure look like a success when compared against an empty expected result, or at least obscuring the reason for a failure.

Interesting/controversial decisions
I don’t know … recursion schemes? Although they’re already used in a few places, like the ABTs.
Test coverage
This adds a bunch of new unit tests, but none of them are particularly interesting, just a few to justify the existence of the new test suite.
Loose ends
Moar tests in the suite, that focus on edge cases. But with upcoming changes like #5255, it seems premature to try to explore the boundaries of the current syntax.