Find parsing bottlenecks. #137
I suspect this is all

```
>>> p.sort_stats(SortKey.CUMULATIVE, SortKey.TIME).print_stats(20, 'sly')
Mon Aug 26 08:32:48 2024    Profile1

         521393177 function calls (502227196 primitive calls) in 157.902 seconds

   Ordered by: cumulative time, internal time
   List reduced from 1903 to 20 due to restriction <20>
   List reduced from 20 to 1 due to restriction <'sly'>

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
    19497    2.517    0.000   15.187    0.001 sly/yacc.py:2064(parse)

>>> p.sort_stats(SortKey.CUMULATIVE, SortKey.TIME).print_stats(20, 'montepy')
Mon Aug 26 08:32:48 2024    Profile1

         521393177 function calls (502227196 primitive calls) in 157.902 seconds

   Ordered by: cumulative time, internal time
   List reduced from 1903 to 20 due to restriction <20>
   List reduced from 20 to 14 due to restriction <'montepy'>

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.000    0.000  149.982  149.982 montepy/input_parser/input_reader.py:6(read_input)
        1    0.113    0.113  149.981  149.981 montepy/mcnp_problem.py:236(parse_input)
        1    0.006    0.006   74.486   74.486 montepy/mcnp_problem.py:304(__update_internal_pointers)
     1396    0.810    0.001   59.804    0.043 montepy/data_inputs/material.py:173(update_pointers)
   378833    0.143    0.000   57.314    0.000 montepy/data_inputs/material.py:259(__eq__)
   378833    8.856    0.000   56.984    0.000 montepy/data_inputs/material.py:243(__hash__)
    34037    2.443    0.000   49.055    0.001 montepy/numbered_object_collection.py:192(append)
 21970084   10.758    0.000   46.409    0.000 montepy/numbered_object_collection.py:75(numbers)
 52957827   25.651    0.000   45.834    0.000 montepy/utilities.py:76(getter)
 24587278   11.843    0.000   16.076    0.000 montepy/data_inputs/isotope.py:221(__lt__)
    97257    0.161    0.000   15.782    0.000 montepy/mcnp_object.py:37(__init__)
  7322705   10.445    0.000   15.606    0.000 montepy/data_inputs/isotope.py:213(__str__)
    19497    0.051    0.000   15.258    0.001 montepy/input_parser/parser_base.py:133(parse)
    10368    0.083    0.000   11.501    0.001 montepy/cells.py:22(__setup_blank_cell_modifiers)
```

Updating internal pointers takes about half the runtime. For my ATR experiment depletion model, the lion's share of that was the material pointers, which makes sense given the large number of complicated materials. This involves a huge number of hashing operations, each of which presently requires sorting the material's isotopes. This is a bottleneck that we could address in or after #507.

Another thing that sticks out is the time spent in `append` and the `numbers` property of the numbered object collections.

Third, there are the magic property utilities. The decorator `getter` was called 53 million times in the example above. Heavy usage of these generated properties adds up.
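The isotope-sorting cost inside the material hash could plausibly be amortized by computing the sorted key once and caching it, invalidating the cache only when the composition changes. Below is a rough sketch of that idea using hypothetical names (`CachedMaterial` and its methods are illustrations, not MontePy's actual `material.py` API):

```python
class CachedMaterial:
    """Hypothetical material that caches its sorted-isotope key (illustration only)."""

    def __init__(self, isotopes):
        # isotopes is assumed to be an iterable of (isotope_id, fraction) pairs.
        self._isotopes = list(isotopes)
        self._key_cache = None

    def add_isotope(self, isotope_id, fraction):
        self._isotopes.append((isotope_id, fraction))
        self._key_cache = None  # composition changed; invalidate the cached key

    def _key(self):
        # Sort only on the first use after a change, not on every hash/compare.
        if self._key_cache is None:
            self._key_cache = tuple(sorted(self._isotopes))
        return self._key_cache

    def __hash__(self):
        return hash(self._key())

    def __eq__(self, other):
        return isinstance(other, CachedMaterial) and self._key() == other._key()
```

With 378,833 hash calls in the profile above, avoiding a re-sort on every call is where most of that ~57 seconds would be recovered.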
So my first thought: it would be interesting to run this profile for every release and see if we can find specific changes that are very costly.
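A minimal sketch of how one such profile could be captured and inspected; re-running it in an environment with each release installed would give the per-release comparison. This assumes `montepy.read_input` as the entry point and uses a placeholder input path:

```python
import cProfile
import pstats
from pstats import SortKey

import montepy  # assumes the MontePy release under test is installed in this environment

# Profile parsing of a large model and dump the stats to disk.
# "big_model.imcnp" is a placeholder path for a large MCNP input deck.
profiler = cProfile.Profile()
profiler.enable()
problem = montepy.read_input("big_model.imcnp")
profiler.disable()
profiler.dump_stats("Profile1")

# Load the dump and show the most expensive MontePy calls, as in the listings above.
p = pstats.Stats("Profile1")
p.sort_stats(SortKey.CUMULATIVE, SortKey.TIME).print_stats(20, "montepy")
```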
My thoughts so far on these results:

4. Isotopes are sorted every time the parent material is hashed.
6. VERA or BEAVRS are two possibilities.
Do you know when

Ohh dang, I missed that in the

Are there MCNP models freely available for BEAVRS/VERA? My two-second search yielded nothing.
Oooofff, yikes. So I'm going to add that to the scope of #510.
I have noticed that
With #518, there is a marked decrease in MontePy runtime, largely in `__update_internal_pointers`:

```
>>> p.sort_stats(SortKey.CUMULATIVE, SortKey.TIME).print_stats(20, 'montepy')
Tue Aug 27 20:37:09 2024    Profile2

         334195605 function calls (329836095 primitive calls) in 103.127 seconds

   Ordered by: cumulative time, internal time
   List reduced from 1919 to 20 due to restriction <20>
   List reduced from 20 to 10 due to restriction <'montepy'>

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.000    0.000   82.291   82.291 montepy/input_parser/input_reader.py:6(read_input)
        1    0.097    0.097   82.290   82.290 montepy/mcnp_problem.py:236(parse_input)
    34037    2.280    0.000   44.285    0.001 montepy/numbered_object_collection.py:192(append)
 21970084   10.021    0.000   41.825    0.000 montepy/numbered_object_collection.py:75(numbers)
 45370107   19.781    0.000   33.022    0.000 montepy/utilities.py:76(getter)
        1    0.005    0.005   14.499   14.499 montepy/mcnp_problem.py:304(__update_internal_pointers)
    97257    0.149    0.000   14.066    0.000 montepy/mcnp_object.py:37(__init__)
        1    0.000    0.000   13.797   13.797 montepy/__init__.py:2(<module>)
    19497    0.043    0.000   13.627    0.001 montepy/input_parser/parser_base.py:133(parse)
        1    0.000    0.000   13.200   13.200 montepy/input_parser/__init__.py:2(<module>)
```
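Even after #518, `append` (34,037 calls, ~44 s cumulative) and the `numbers` property (~22 million calls) in `numbered_object_collection.py` dominate, which suggests each append re-enumerates the collection's numbers, e.g. for duplicate-number checks. A rough sketch of one possible mitigation, using a hypothetical collection class (not the actual implementation in `numbered_object_collection.py`) that caches the set of used numbers so each append is an O(1) check:

```python
class NumberedCollection:
    """Hypothetical collection keyed by object number (illustration only)."""

    def __init__(self):
        self._objects = []
        self._numbers = set()  # cached numbers, kept in sync with _objects

    @property
    def numbers(self):
        # O(1) access to the cached set instead of re-reading every object's
        # .number attribute on each call.
        return self._numbers

    def append(self, obj):
        # Duplicate check is a set lookup rather than a scan of the collection.
        if obj.number in self._numbers:
            raise ValueError(f"Number {obj.number} is already in use")
        self._objects.append(obj)
        self._numbers.add(obj.number)
```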
I think
Currently, an ATR whole-core model takes 40 seconds to parse. This would be a good task to identify what the bottleneck is in reading large models, and then implement some optimizations.