[FTheoryTools] Improve computation of refined Tate fiber types #4062
Conversation
Codecov Report

```
@@            Coverage Diff             @@
##           master    #4062      +/-   ##
==========================================
+ Coverage   84.58%   84.62%   +0.04%
==========================================
  Files         612      612
  Lines       83125    83191      +66
==========================================
+ Hits        70312    70404      +92
+ Misses      12813    12787      -26
```
```julia
num_gens = length(gens(parent(f)))
first_coord_inds = collect(1:num_gens - 2)
last_coord_inds = collect(3:num_gens)
rand_ints_ab2 = rand(-100:100, num_gens - 2)
```
Are we guaranteed that the method always works? Or could these random numbers sometimes fail us? If so, we would have random failures, which is likely not good.
(There were discussions elsewhere regarding fixed seeds etc., so as to make tests/failures reproducible. If random numbers are truly needed, then we should investigate more.)
Discussed this in triage.
If you really need randomness here, I would strongly suggest adding a comment that explains the background.
Also: are the given ranges really always sufficient? If so, this should be justified. If not, then one should at least describe what might go wrong if the random value is "insufficient" (whatever that means). You could also add checks, or loop over multiple random values until some criterion proves the result is correct, or until a consistent result has been produced for several different random choices (a minimal sketch of such a consistency loop follows after this comment).
More was said and discussed with @HereAround but that's the gist.
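As an illustration of the "consistent result over several random choices" suggestion above, here is a minimal, self-contained sketch. The helper name, the acceptance criterion, and the toy check are assumptions for illustration only, not part of the PR.

```julia
using Random

# Run a randomized check several times and accept the answer only if all trials agree;
# `nothing` signals an inconclusive outcome that callers could retry or report.
function consistent_random_check(check::Function; trials::Int = 5, rng = Random.default_rng())
    results = [check(rand(rng, -100:100)) for _ in 1:trials]
    return all(==(first(results)), results) ? first(results) : nothing
end

# Toy usage: x^2 - x is always even, so all trials agree and `true` is returned
consistent_random_check(x -> iseven(x^2 - x))
```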
@apturner, @emikelsons and I just discussed this further. Here is the summary:
What this function is actually deciding is whether certain ideals are radical or not. However, for the geometries encountered in F-theory, these computations are too hard/time consuming. The random numbers serve as a shortcut, as follows:
- Keep exactly two variables (in the polynomial generators of the ideals in question) and evaluate all remaining variables at integer values. Then check whether the resulting ideal is radical or not (see the sketch after this list).
- Currently, this step is executed twice: once keeping the very first two variables as symbols (and evaluating all other variables at random integer values), and once keeping the very last two variables as symbols.
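To make the shortcut concrete, here is a minimal sketch on a toy ideal. The ring, the ideal, and all names are illustrative assumptions, not the FTheoryTools implementation: two variables are kept symbolic, the rest are sent to random integers, and the radical check is run on the specialized ideal.

```julia
using Oscar

# Toy ideal in four variables (illustrative only)
R, (x1, x2, x3, x4) = polynomial_ring(QQ, ["x1", "x2", "x3", "x4"])
I = ideal(R, [x1^2 * x3, x2 * x4])

# Keep the first two variables as symbols, send x3 and x4 to random integers
S, (y1, y2) = polynomial_ring(QQ, ["y1", "y2"])
vals = rand(-100:100, 2)
phi = hom(R, S, [y1, y2, S(vals[1]), S(vals[2])])

# Radical check on the (much smaller) specialized ideal as a cheap proxy for is_radical(I)
I_spec = ideal(S, [phi(g) for g in gens(I)])
is_radical(I_spec)
```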
We just agreed on the following:
- Instead of always keeping the first two/last two variables as symbols, do so for random pairs of variables.
- Repeat the evaluation and subsequent radical check five times (instead of the current two times).
- Add tests to gauge whether this algorithm succeeds.
- The current implementation corrects for a tendency of this algorithm to favor non-split over split. This could be made stricter. But then we need to test whether such a stricter algorithm still yields correct answers or tends to fail.
- @apturner and @emikelsons seemed to favor the choice of a seed. So I understand @apturner will work this in as an optional argument to this method. Thereby, the tests should become reproducible (see the seeding sketch after this comment).
- We were also thinking of using `put_over_concrete_base(P3, some_model_over_a_non_concrete_base)`. Then we can run the new algorithm also for models that are defined over non-concrete bases. However, there is unfortunately always a chance that this step introduces a gauge enhancement, so that the results computed from this method would disagree with the "generic model". This is to be played with/tested. Our feeling is that the projective space P3 minimizes this potential for failure, though.
I believe these are all the points we came up with, and @apturner will work on this. Let us know once this is ready to be reviewed again. Thank you for working on this!
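As a follow-up to the seeding point in the list above, here is a minimal sketch of how an optional seed keyword can make the random choices reproducible. The helper name is hypothetical; in the PR, the keyword is ultimately exposed as `rand_seed` in the internal function `Oscar._kodaira_type` (see the example further below).

```julia
using Random

# Hypothetical helper: with a seed, the random evaluation points become deterministic
function random_evaluation_points(n::Int; rand_seed::Union{Int, Nothing} = nothing)
    rng = isnothing(rand_seed) ? Random.default_rng() : Random.Xoshiro(rand_seed)
    return rand(rng, -100:100, n)
end

# Same seed, same points, so tests built on top become reproducible
random_evaluation_points(4; rand_seed = 1234) == random_evaluation_points(4; rand_seed = 1234)  # true
```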
```julia
if f_ord == 0 && g_ord == 0
  q_first = quotient(ideal([9 * g_first_2, w_first_2]), ideal([2 * f_first_2, w_first_2]))
  q_last = quotient(ideal([9 * g_last_2, w_last_2]), ideal([2 * f_last_2, w_last_2]))
```
Just for my curiosity: why is there a line break here (and in similar places below)? I am not sure it enhances readability.
Only for readability. I generally prefer to break up the logical steps of the calculation rather than have a solid block of code. For some of these cases, the first step is many lines long, and for consistency I add a line break even in the shorter cases.
Thank you for working on this @apturner ! Generally speaking, this looks great. However, I believe all of the following points should be addressed before we merge this PR:
See above.
Let me ping @benlorenz. Maybe @lgoettgens knows as well. @apturner wants to add a function whose outcome depends on random numbers. In the worst-case scenario, this leads to random test failures. I am certain similar situations have arisen and been discussed before. Suggestions?
I just realized that my response/status of this PR is somewhat hidden, as I added it to an above conversation. So, for convenience, the current (Friday, September 6) status is the summary posted above: the random numbers serve as a shortcut for radical checks that are otherwise too expensive, and we agreed on the list of changes described there. @apturner will work on this. Let us know once this is ready to be reviewed again. Thank you for working on this!
Branch updated from 71390bf to 3994826.
I believe I have addressed everything in the recent commit.
Currently, the optional keyword argument to provide a seed is only available in the internal function `Oscar._kodaira_type`. For example:

```julia
julia> B3 = projective_space(NormalToricVariety, 3)
Normal toric variety

julia> Kbar = anticanonical_divisor(B3)
Torus-invariant, non-prime divisor on a normal toric variety

julia> foah16_B3_weier = literature_model(arxiv_id = "1408.4808", equation = "3.203", type = "weierstrass", base_space = B3, defining_classes = Dict("s7" => Kbar, "s9" => Kbar), completeness_check = false)
Construction over concrete base may lead to singularity enhancement. Consider computing singular_loci. However, this may take time!
Weierstrass model over a concrete base -- F-theory weierstrass model dual to hypersurface model with fiber ambient space F_16 based on arXiv paper 1408.4808 Eq. (3.203)

julia> sl = singular_loci(foah16_B3_weier)
4-element Vector{Tuple{MPolyIdeal{<:MPolyRingElem}, Tuple{Int64, Int64, Int64}, String}}:
 (Ideal with 1 generator, (0, 0, 1), "I_1")
 (Ideal with 1 generator, (0, 0, 3), "Split I_3")
 (Ideal with 1 generator, (0, 0, 3), "Split I_3")
 (Ideal with 1 generator, (0, 0, 3), "Split I_3")

julia> Oscar._kodaira_type(sl[2][1], sl[2][2], foah16_B3_weier, rand_seed = 1234)
"Split I_3"
```
```julia
@test length(singular_loci(foah1_B3_weier)) == 1
@test length(singular_loci(foah2_B3_weier)) == 1
@test length(singular_loci(foah3_B3_weier)) == 1
@test length(singular_loci(foah4_B3_weier)) == 2
@test length(singular_loci(foah5_B3_weier)) == 1
@test length(singular_loci(foah6_B3_weier)) == 2
@test length(singular_loci(foah7_B3_weier)) == 1
@test length(singular_loci(foah8_B3_weier)) == 3
@test length(singular_loci(foah9_B3_weier)) == 2
@test length(singular_loci(foah10_B3_weier)) == 3
@test length(singular_loci(foah11_B3_weier)) == 3
@test length(singular_loci(foah12_B3_weier)) == 3
@test length(singular_loci(foah13_B3_weier)) == 4
@test length(singular_loci(foah14_B3_weier)) == 4
@test length(singular_loci(foah15_B3_weier)) == 5
@test length(singular_loci(foah16_B3_weier)) == 4
```
For my curiosity: how long do you think those tests take now? I believe we even raise a warning somewhere about `singular_loci` being potentially super slow, so I just want to gauge how much this slows down the OSCAR tests. The tests are green, at least for now, which is of course a good indication.
On my laptop, running all of the tests for FTheoryTools, including these new ones, takes 6.5 minutes.
That sounds good to me.
```julia
istar0_s_auxiliary_base_ring, (a1pp, a2pp, a3pp, a4pp, a6pp, vp, wp, mp) = QQ["a1pp", "a2pp", "a3pp", "a4pp", "a6pp", "vp", "wp", "mp"];
tate_auxiliary_base_ring, (a1p, a2p, a3p, a4p, a6p, v, w) = QQ["a1p", "a2p", "a3p", "a4p", "a6p", "v", "w"];

# construct Tate models over arbitrary base
t_istar0_s = global_tate_model(istar0_s_auxiliary_base_ring, [1 2 3 4 6 0 2; -1 -2 -2 -3 -4 1 -1], 3, [a1pp * (vp^2 + wp)^1, mp * (vp^2 + wp)^1 + a2pp * (vp^2 + wp)^2, a3pp * (vp^2 + wp)^2, mp^2 * (vp^2 + wp)^2 + a4pp * (vp^2 + wp)^3, a6pp * (vp^2 + wp)^4]);
```
Why have the gradings changed? (At least it looks so to me right now.)
All of the gradings in these tests were incorrect in multiple ways:
- The number of columns in the grading matrices did not match the number of coordinates of the base. This also points to an "error" in the functions that construct arbitrary bases, as they should probably require that this be the case. I didn't fix that issue in this PR, but I did update all the gradings in these tests.
- Even setting that aside, the gradings were not updated when we changed the tests to use the gauge divisor `v^2 + w = 0` rather than just `w = 0`. Where previously a given variable had grade `n` under `[w]`, it should now have grade `2n` under `[v]`, and `w` itself should have grade `2` under `[v]`. I've updated the gradings to make this the case (see the toy check below).
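A toy check of the grading argument in the last bullet, with illustrative values rather than the test file's actual grading matrices: choosing `n = 1`, a variable of former grade `n` under `[w]` now carries grade `2n = 2` under `[v]`, and `v^2 + w` is then homogeneous.

```julia
using Oscar

# Illustrative grading: deg(v) = 1 forces deg(w) = 2, and a variable of former
# grade n = 1 under [w] now carries grade 2n = 2 under [v]
R, (a, v, w) = QQ["a", "v", "w"]
S, (A, V, W) = grade(R, [2, 1, 2])

is_homogeneous(V^2 + W)   # true: the gauge divisor v^2 + w = 0 is well defined
```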
I see. Thx!
```julia
@test_throws ArgumentError global_tate_model(tate_auxiliary_base_ring, [1 2 3 4 6 0; 0 -1 -2 -3 -5 1], 3, [a1p])
@test_throws ArgumentError global_tate_model(tate_auxiliary_base_ring, [1 2 3 4 6 0; 0 -1 -2 -3 -5 1], 3, [a1p * v^0, a2p * v^1, a3p * v^2, a4p * v^3, sec_a6])
@test_throws ArgumentError global_tate_model(tate_auxiliary_base_ring, [1 2 3 4 6 0; 0 -1 -2 -3 -5 1], -1, [a1p * v^0, a2p * v^1, a3p * v^2, a4p * v^3, a6p * v^5])
@test_throws ArgumentError global_tate_model(tate_auxiliary_base_ring, [1 2 3 4 6 0 0; 0 -1 -2 -3 -5 1 2], 3, [a1p])
```
Here the gradings also changed. Why?
See above.
Thank you for working on this delicate matter, @apturner. This looks very good now!
I left a few minor questions (see above).
For which models will we now be able to work out the (refined) singularity type? For instance, will it work for a hypersurface model without a known associated Weierstrass/Tate model? What about hypersurface models with an associated Tate model? Do we maybe want to add more `@req` statements to catch cases which are currently not (yet) covered?
Another question: will the "identify the singularity type" functionality benefit from `put_over_concrete_base` working for hypersurface models as well?
(I will not merge this PR just yet, because I am curious about the responses of @apturner. Other than that, there is no reason, as I see it, to hold this PR.)
(Another question: are you using #4075 in this PR? I assume not, as I cannot see it, but I just wanted to double-check.)
In regard to your top-level questions:
I wonder if we might want to add this as a comment? Or even make it part of the documentation?
Is this something to be mentioned in the documentation of
Ok.
This already works for any model that has a known Weierstrass model, but doesn't automatically work if it has a known Tate model but no directly known Weierstrass model. That is a simple fix I can add here.
```julia
@req has_attribute(h, :weierstrass_model) || has_attribute(h, :global_tate_model) "No corresponding Weierstrass model or global Tate model is known"
return has_attribute(h, :weierstrass_model) ? singular_loci(weierstrass_model(h)) : singular_loci(global_tate_model(h))
```
This change allows hypersurface models with known Weierstrass or global Tate models to utilize `singular_loci`. I think this is all ready to merge, once the tests pass again.
Thank you, @apturner. Awesome!
This adds Mohab's proposed calculation method, except in the case of Type I_0^* fibers, addressing the issue that models over concrete bases are often too complicated and seem "accidentally nonsplit". However, models over arbitrary bases are typically too simple and appear "accidentally split", so the code currently checks whether the base is concrete or not and then decides whether to use the old or the new version of the calculation. This is not ideal (a sketch of this dispatch follows below).
Tests should also be added to more thoroughly check this.
@HereAround, @emikelsons
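To illustrate the concrete-versus-arbitrary-base dispatch described in the PR summary above, here is a minimal control-flow sketch. All function names and the stub predicate are hypothetical, introduced only to make the sketch runnable; the PR implements this decision inside the actual fiber-type computation.

```julia
# Hypothetical stubs, only to make the control-flow sketch runnable:
base_space_is_concrete(m) = m.concrete_base                       # assumption: some such check exists
fiber_type_via_random_evaluation(m) = "new calculation (random evaluation)"
fiber_type_generic(m) = "old calculation (arbitrary bases)"

# Dispatch: use the new random-evaluation method over concrete bases,
# fall back to the old generic method otherwise.
function refined_fiber_type(m)
    return base_space_is_concrete(m) ? fiber_type_via_random_evaluation(m) : fiber_type_generic(m)
end

refined_fiber_type((concrete_base = true,))    # "new calculation (random evaluation)"
refined_fiber_type((concrete_base = false,))   # "old calculation (arbitrary bases)"
```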