Problem Description
As an engineer, I need a clear and comprehensive overview of the new data types being supported and the methods involved, along with detailed feedback on any failures when running the data types benchmark once #2206 is merged.
Expected behavior
The spreadsheet generated after running the benchmarks should meet the following criteria:
Visual Marking of Changes:
Mark cells with new True values (indicating successful support for a data type) in green.
Mark cells with new False values (indicating failures or lack of support for a data type) in red.
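The green/red marking could be sketched as follows with openpyxl (assumed here as the spreadsheet library; the sheet layout and the `mark_changes` helper are illustrative, not the benchmark's actual code):

```python
# Sketch: color benchmark cells by comparing new results against the
# previous run. Cells that flipped to True turn green, cells that
# flipped to False turn red; unchanged cells keep the default style.
from openpyxl import Workbook
from openpyxl.styles import PatternFill

GREEN = PatternFill(start_color="C6EFCE", end_color="C6EFCE", fill_type="solid")
RED = PatternFill(start_color="FFC7CE", end_color="FFC7CE", fill_type="solid")

def mark_changes(sheet, new_results, old_results):
    """new_results/old_results map (row, column) -> True/False support flag."""
    for (row, col), new_value in new_results.items():
        if old_results.get((row, col)) == new_value:
            continue  # unchanged: leave the cell unstyled
        cell = sheet.cell(row=row, column=col, value=new_value)
        cell.fill = GREEN if new_value else RED

wb = Workbook()
ws = wb.active
# (2, 2) flipped False -> True (green); (3, 2) flipped True -> False (red)
mark_changes(ws,
             new_results={(2, 2): True, (3, 2): False},
             old_results={(2, 2): False, (3, 2): True})
```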
Summary Sheet with Key Metrics:
Include columns for:
Dtype: The data type.
Sdtype: The semantic data type.
3.8: The % support for this Python version
3.9: The % support for this Python version
3.10: The % support for this Python version
3.11: The % support for this Python version
3.12: The % support for this Python version
Total % Support: The percentage of supported methods for each dtype and sdtype combination, averaged across all Python versions.
Percentage Calculation:
For each dtype and sdtype combination, compute the percentage of True values across all tested methods for each Python version; this represents the support level.
Compute the total percentage of support as the average of the per-version percentages across all Python versions.
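The two calculations above could look like this (a pure-Python sketch; the `results` structure is an assumed shape, not the benchmark's actual schema):

```python
# Sketch: per-version and total support percentages for one
# dtype/sdtype combination. `results` maps a Python version to the
# list of True/False flags, one per tested method.
def support_percentages(results):
    per_version = {
        version: 100.0 * sum(flags) / len(flags)
        for version, flags in results.items()
    }
    # Total is the average of the per-version percentages.
    total = sum(per_version.values()) / len(per_version)
    return per_version, total

results = {
    "3.8": [True, True, False, False],   # 2 of 4 methods supported
    "3.12": [True, True, True, False],   # 3 of 4 methods supported
}
per_version, total = support_percentages(results)
# per_version == {"3.8": 50.0, "3.12": 75.0}; total == 62.5
```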
Conditional Summing for Edge Cases:
Implement logic to adjust the percentage calculation for cases where non-support is expected. For example, FixedCombinations is currently only supported when the sdtype is categorical or boolean; expected non-supported cases (e.g., numerical sdtypes here) should not negatively impact the percentage.
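One way to implement this is to drop expected-unsupported method/sdtype pairs from the denominator before computing the percentage (a sketch; the expectation table and helper names are illustrative):

```python
# Sketch: exclude method/sdtype pairs where non-support is expected,
# so they don't drag down the support percentage.
EXPECTED_UNSUPPORTED = {
    # FixedCombinations only applies to categorical and boolean sdtypes
    ("FixedCombinations", "numerical"): True,
}

def adjusted_support(sdtype, method_results):
    """method_results maps method name -> True/False support flag."""
    counted = {
        method: flag
        for method, flag in method_results.items()
        if not EXPECTED_UNSUPPORTED.get((method, sdtype), False)
    }
    if not counted:
        return 100.0  # nothing applicable counts as fully supported
    return 100.0 * sum(counted.values()) / len(counted)

# FixedCombinations failing on a numerical sdtype does not hurt the score:
score = adjusted_support("numerical",
                         {"FixedCombinations": False, "GaussianCopula": True})
# score == 100.0
```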
Order of the sheets:
Ideally, the workbook should open on the Summary sheet, followed by the previously_unseen sheet, and then the per-Python-version sheets.
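With openpyxl (assumed here), the simplest way to get this order is to create the sheets in the desired sequence, since the first sheet is the one shown when the workbook opens:

```python
# Sketch: create sheets in the order the workbook should present them.
from openpyxl import Workbook

wb = Workbook()
wb.active.title = "Summary"  # first sheet is shown on open
for name in ["previously_unseen", "3.8", "3.9", "3.10", "3.11", "3.12"]:
    wb.create_sheet(title=name)
```

If the sheets already exist in a different order, `Workbook.move_sheet` can reposition them instead.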
Overall Goal:
Provide a clear, actionable view of data type support, summarized across all Python versions, while highlighting areas of improvement and exceptions.