Introduce guessed receiver types #2210
Conversation
Another thing we could do, especially for the benefit of tests, is to match on a type name followed by a number, e.g. |
Should we add the script you used to benchmark somewhere?
To be very honest, I'm not convinced that this approach is worth the extra complexity based on the accuracy metrics. But let's see how users react to it first 👍
Additionally, is it possible to identify the guessed type responses from our metrics? It'd be interesting to see how many completion requests are based on it vs not.
Do you think we're able to package the script into a flag, like |
Talked to Stan and we agreed to ship this and follow up with an executable to estimate the type accuracy of guessed types.
Motivation
This PR adds an experiment for guessed receiver types, where we try to guess the type of a receiver based on its identifier.
Implementation
The relevant part of the implementation is all in `TypeInferrer`; everything else is just displaying to users why we picked a certain type. The idea is to guess the type from the identifier name (stripping `@` symbols for instance variables). More details in the Markdown documentation.
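As a rough illustration of this kind of identifier-based heuristic (not the actual `TypeInferrer` code — the method and constant-lookup strategy here are assumptions), one could strip the `@` sigils, camelize the identifier, and check whether a matching constant exists:

```ruby
require "pathname"

# Hypothetical sketch of an identifier-based type guess:
# strip instance/class variable sigils, camelize the name,
# and return the constant if one with that name is defined.
def guess_type(identifier)
  name = identifier.to_s.delete_prefix("@@").delete_prefix("@")
  const_name = name.split("_").map(&:capitalize).join
  Object.const_defined?(const_name) ? Object.const_get(const_name) : nil
end

guess_type("pathname")   # => Pathname
guess_type("@user_name") # nil unless a UserName constant is defined
```

The real implementation lives in `TypeInferrer` and records why a given type was picked so it can be surfaced to the user.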
Validation
I used Spoom's access to the Sorbet LSP to compare the guessed types against the actual types reported by Sorbet. I also compared 4 approaches:
In the Ruby LSP repo, these are the accuracy results for each approach
In Core, the analysis script took way too long to finish, so I sampled a subset of the codebase. The results there were worse than in the Ruby LSP codebase, peaking at about 5% of correct types.
The level of accuracy will surely vary a lot between codebases. That said, I still believe the experiment is worth a try, and I would love to hear feedback from users about its usefulness.
Script:
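The original validation script is not reproduced above. As a hedged sketch of the comparison step only (the pair format and method name are assumptions), the accuracy metric boils down to counting how often the guessed type matches the type Sorbet reports:

```ruby
# Hypothetical sketch: given [guessed_type, actual_type] pairs
# collected from the LSP, compute the percentage of correct guesses.
def accuracy(pairs)
  return 0.0 if pairs.empty?

  correct = pairs.count { |guessed, actual| guessed == actual }
  (correct.to_f / pairs.size * 100).round(2)
end

pairs = [["Pathname", "Pathname"], ["User", "T.untyped"], ["String", "String"]]
accuracy(pairs) # => 66.67
```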
Automated Tests
Added tests.
Manual Tests
Type any existing class name as a variable. After typing a dot, you should see completion options for that type (e.g. `pathname.`).