Commit: Update design-document

* Also add a tool to visualize the results, and examples of results.

clopez committed Oct 5, 2022
1 parent d5ea4f8 commit ab8791c
Showing 21 changed files with 154 additions and 25 deletions.
79 changes: 54 additions & 25 deletions docs/design-document.md
The timestamp is the date (unix epoch) when the result was received by the server.
About the json files
--------------------

If you check the JSON data files generated by the bots you will see the
following format:

The value for a test is as follows. There are two possibilities:

```pre
{ ${NameOfTest}: {metrics: { ${NameOfMetric} : { current : [ list of values ] } } } }
{ ${NameOfTest}: {metrics: { ${NameOfMetric} : [ AggregationAlgorithm ] }}}

```

The first option indicates that this specific test has real values for that metric
produced from the browser while running it.
A test can also contain subtests. It's mandatory that a test with aggregated values
contains subtests.
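The two forms can be told apart by their JSON shape. A minimal sketch (the helper name is hypothetical, not part of the application):

```python
# Hypothetical helper: classify a metric entry from a results file.
def metric_kind(metric_value):
    # Raw form: {'current': [list of values]} measured directly by the browser.
    if isinstance(metric_value, dict) and 'current' in metric_value:
        return 'raw'
    # Aggregated form: ['Total'] / ['Arithmetic'] / ['Geometric'], computed from subtests.
    if isinstance(metric_value, list):
        return 'aggregated'
    raise ValueError('unrecognized metric value: %r' % (metric_value,))
```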



This is explained better with an example:


Here we have:

* Main test suite with name SpeedometerExample with two metrics: Score and Time.
* The metric "Score" for SpeedometerExample is calculated directly: we take the list of values in current and we calculate the mean (arithmetic mean).
* The metric "Time" for SpeedometerExample is an aggregated value that is calculated using the "Total" aggregation algorithm with the values of the subtests.

* Subtest AngularJS-TodoMVC has one metric Time with aggregated value Total computed from the values of its subtests.

* Subtest Sync has one metric Time with current values: we calculate the mean of the values.

Check the attached file [sample-files/speedometerexample.result](sample-files/speedometerexample.result), which contains
the above example, and run the example script ```read-json-results``` from this directory (```docs```):

* To generate a Python dictionary with all the values calculated and print it:

```pre
./read-json-results -print-results-dict sample-files/speedometerexample.result
```

* To print the results as plain text:

```pre
$ ./read-json-results -print-results-text sample-files/speedometerexample.result
SpeedometerExample:Score: 142.000pt stdev=0.7%
:Time:Total: 674.220ms stdev=9.8%
AngularJS-TodoMVC:Time:Total: 674.220ms stdev=9.8%
Sync:Time: 429.160ms stdev=12.3%
```
* To print the results in a format like what is inserted into this application's database, use:
```pre
./read-json-results -print-results-db sample-files/speedometerexample.result
Name=SpeedometerExample Metric=Score\None Unit=pt Value=142.0 Stdev=0.00704225352113
Name=SpeedometerExample Metric=Time\Total Unit=ms Value=674.22 Stdev=0.098284410446
Name=SpeedometerExample\AngularJS-TodoMVC Metric=Time\Total Unit=ms Value=674.22 Stdev=0.098284410446
Name=SpeedometerExample\AngularJS-TodoMVC\Adding100Items Metric=Time\Total Unit=ms Value=217.81 Stdev=0.0290934151339
Name=SpeedometerExample\AngularJS-TodoMVC\Adding100Items\Async Metric=Time\None Unit=ms Value=11.25 Stdev=0.173561103909
Name=SpeedometerExample\AngularJS-TodoMVC\Adding100Items\Sync Metric=Time\None Unit=ms Value=206.56 Stdev=0.0294749686776
Name=SpeedometerExample\AngularJS-TodoMVC\Adding200Items Metric=Time\Total Unit=ms Value=456.41 Stdev=0.136262489719
Name=SpeedometerExample\AngularJS-TodoMVC\Adding200Items\Async Metric=Time\None Unit=ms Value=27.25 Stdev=0.395773479085
Name=SpeedometerExample\AngularJS-TodoMVC\Adding200Items\Sync Metric=Time\None Unit=ms Value=429.16 Stdev=0.122973388375
```
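Note that the Stdev column is printed as a fraction of the mean, while the text output above shows it as a percentage (0.00704… vs 0.7%). A sketch of how a relative standard deviation could be computed from the raw values — the exact convention used by benchmark_results.py (sample vs population variance) is an assumption here:

```python
import math

# Sketch: mean and relative standard deviation of a list of raw values.
# Assumes sample (n-1) variance; benchmark_results.py may use a different convention.
def mean_and_rel_stdev(values):
    mean = sum(values) / float(len(values))
    variance = sum((v - mean) ** 2 for v in values) / (len(values) - 1)
    return mean, math.sqrt(variance) / mean
```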

In the ```sample-files``` directory there are more examples of JSON files that were generated on the bots.

You can use the ```read-json-results``` script in this directory to see how to parse and process them.


Frontend
Format of the JSON files and what to store in the database
-----------------------------------------------------------

These are the design considerations that were taken into account when
designing the format for storing the data in the database.

We want the application to be fast, so that it can draw the graphs quickly.

I do this with the "-print-results-dict" output of ./read-json-results.
By using the keyword "None" for real values, and indicating the aggregation
algorithm for the others.

```pre
./read-json-results -print-results-dict sample-files/speedometerexample.result
{u'SpeedometerExample': {'metrics': {u'Score': {None: {'mean_value': 142.0,
'raw_values': [142.0,
u'Time': {u'Total': {'mean_value': 674.22,
u'Time': {u'Total': {'mean_value': 674.22,
'raw_values': [642.0,
[....]

```
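The nested dict above can be flattened into backslash-joined DB-style rows with a recursive walk. A simplified sketch, assuming each test node has optional ```metrics``` and ```tests``` keys as in the output above (the real conversion lives in ```benchmark_results.py```):

```python
# Sketch: flatten the -print-results-dict structure into DB-style rows.
# Assumes the dict layout shown above; the application's own conversion
# may differ in ordering and fields.
def flatten_results(tests, prefix=''):
    rows = []
    for test_name, node in sorted(tests.items()):
        full_name = prefix + '\\' + test_name if prefix else test_name
        for metric_name, aggregations in sorted(node.get('metrics', {}).items()):
            # Aggregation keys can be None (raw values), so sort by str().
            for aggregation, data in sorted(aggregations.items(), key=lambda kv: str(kv[0])):
                rows.append('Name=%s Metric=%s\\%s Value=%s'
                            % (full_name, metric_name, aggregation, data['mean_value']))
        rows.extend(flatten_results(node.get('tests', {}), full_name))
    return rows
```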

Here we see:


There are 3 aggregation algorithms:

```pre
aggregators = {
'Total': (lambda values: sum(values)),
'Arithmetic': (lambda values: sum(values) / len(values)),
'Geometric': (lambda values: math.exp(sum(map(math.log, values)) / len(values))),
}

```
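For example, applying the aggregators above to a small list of values:

```python
import math

# The three aggregation algorithms, as defined in the design document.
aggregators = {
    'Total': (lambda values: sum(values)),
    'Arithmetic': (lambda values: sum(values) / len(values)),
    'Geometric': (lambda values: math.exp(sum(map(math.log, values)) / len(values))),
}

values = [2.0, 8.0]
total = aggregators['Total'](values)            # 10.0
arithmetic = aggregators['Arithmetic'](values)  # 5.0
geometric = aggregators['Geometric'](values)    # ~4.0 (geometric mean)
```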

So I think that we can store this in the DB, with an aggregation field that
indicates how the value was generated. For real values we can use a keyword

We have two options:

* Do this step on the bots, and send the already calculated JSON file to the server.
Something similar to what the command -print-results-dict-json already outputs.
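Whichever option is chosen, each stored entry would carry the fields shown in the -print-results-db output. A hypothetical sketch of one such row; the field names are assumptions, not the application's actual schema:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of one DB row, mirroring the -print-results-db fields.
@dataclass
class ResultEntry:
    name: str                   # e.g. 'SpeedometerExample\\AngularJS-TodoMVC'
    metric: str                 # e.g. 'Time'
    aggregation: Optional[str]  # 'Total', 'Arithmetic', 'Geometric', or None for raw values
    unit: str                   # e.g. 'ms'
    value: float
    stdev: float                # relative standard deviation (fraction of the mean)
```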

Final implementation:

* In the ```benchmark_results.py``` file the method ```_generate_db_entries``` is used to
convert the data from the JSON files to the database format. You can see its output
by executing the script ```./read-json-results``` from this directory (```docs```) as follows:

```pre
./read-json-results -print-results-db sample-files/speedometerexample.result
Name=SpeedometerExample Metric=Score\None Unit=pt Value=142.0 Stdev=0.00704225352113
Name=SpeedometerExample Metric=Time\Total Unit=ms Value=674.22 Stdev=0.098284410446
Name=SpeedometerExample\AngularJS-TodoMVC Metric=Time\Total Unit=ms Value=674.22 Stdev=0.098284410446
Name=SpeedometerExample\AngularJS-TodoMVC\Adding100Items Metric=Time\Total Unit=ms Value=217.81 Stdev=0.0290934151339
Name=SpeedometerExample\AngularJS-TodoMVC\Adding100Items\Async Metric=Time\None Unit=ms Value=11.25 Stdev=0.173561103909
Name=SpeedometerExample\AngularJS-TodoMVC\Adding100Items\Sync Metric=Time\None Unit=ms Value=206.56 Stdev=0.0294749686776
Name=SpeedometerExample\AngularJS-TodoMVC\Adding200Items Metric=Time\Total Unit=ms Value=456.41 Stdev=0.136262489719
Name=SpeedometerExample\AngularJS-TodoMVC\Adding200Items\Async Metric=Time\None Unit=ms Value=27.25 Stdev=0.395773479085
Name=SpeedometerExample\AngularJS-TodoMVC\Adding200Items\Sync Metric=Time\None Unit=ms Value=429.16 Stdev=0.122973388375
```
71 changes: 71 additions & 0 deletions docs/read-json-results
#!/usr/bin/env python
import sys
import os
import json
import argparse
utils_dir = os.path.join(os.path.dirname(os.path.realpath(__file__)), '..', 'dashboard', 'core', 'bots', 'reports', 'utils')
sys.path.append(utils_dir)
from benchmark_results import BenchmarkResults
from pprint import pprint


if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Read the results from the browser based performance benchmarks.')
    group = parser.add_mutually_exclusive_group(required=True)
    group.add_argument('-cleanfile', dest='clean_file', action='store_true',
                       help='Rewrite the JSON file without the debugOutput entries.')
    group.add_argument('-print-raw', dest='print_raw', action='store_true',
                       help='Print the JSON file in raw format.')
    group.add_argument('-print-results-dict', dest='print_results_dict', action='store_true',
                       help='Parse the JSON file, generate aggregated results, store them in a dict and print it.')
    group.add_argument('-print-results-dict-json', dest='print_results_dict_json', action='store_true',
                       help='Parse the JSON file, generate aggregated results, store them in a dict, convert to JSON and print it.')
    group.add_argument('-print-results-text', dest='print_results_text', action='store_true',
                       help='Parse the JSON file, generate aggregated results, print them to stdout.')
    group.add_argument('-print-results-db', dest='print_results_db', action='store_true',
                       help='Parse the JSON file, generate aggregated results, and output a format appropriate for the DB.')
    group.add_argument('-print-results-db-noaggregated', dest='print_results_db_noaggregated', action='store_true',
                       help='Parse the JSON file, generate aggregated results, and output a format appropriate for the DB, skipping the aggregated results.')
    group.add_argument('-print-results-text-scaled', dest='print_results_text_scaled', action='store_true',
                       help='Parse the JSON file, generate aggregated results, print them to stdout with the metric unit scaled.')
    parser.add_argument('json_file', type=str, help='The JSON results file to process.')
    args = parser.parse_args()

    if not os.path.isfile(args.json_file):
        print('ERROR: Cannot find the file %s' % args.json_file)
        sys.exit(1)

    results_json = json.load(open(args.json_file, 'r'))

    if 'debugOutput' in results_json:
        del results_json['debugOutput']
        if args.clean_file:
            json.dump(results_json, open(args.json_file, 'w'))
            print('Wrote new file without debugOutput: %s' % args.json_file)
            sys.exit(0)
    else:
        if args.clean_file:
            print('File already clean from debugOutput')
            sys.exit(0)

    if args.print_raw:
        pprint(results_json)
        sys.exit(0)

    # Generate the aggregated results
    results = BenchmarkResults(results_json)

    if args.print_results_dict:
        pprint(results.format_dict())
    elif args.print_results_dict_json:
        print(results.format_json())
    elif args.print_results_text:
        print(results.format(False))
    elif args.print_results_text_scaled:
        print(results.format(True))
    elif args.print_results_db:
        results.print_db_entries()
    elif args.print_results_db_noaggregated:
        results.print_db_entries(skip_aggregated=True)
    else:
        raise RuntimeError('This should not have been reached')
1 change: 1 addition & 0 deletions docs/sample-files/chromium/animometer.result
{"Animometer": {"metrics": {"Score": ["Geometric"]}, "tests": {"Paths": {"metrics": {"Score": {"current": [5800.795054279303]}}}, "Canvas Arcs": {"metrics": {"Score": {"current": [1529.5914541827824]}}}, "Canvas Lines": {"metrics": {"Score": {"current": [6873.021198301258]}}}, "Leaves": {"metrics": {"Score": {"current": [998.5338266024603]}}}, "Focus": {"metrics": {"Score": {"current": [114.31134208429782]}}}, "Suits": {"metrics": {"Score": {"current": [1165.9160558579822]}}}, "Multiply": {"metrics": {"Score": {"current": [817.6559338497497]}}}, "Design": {"metrics": {"Score": {"current": [132.6417027974514]}}}, "Images": {"metrics": {"Score": {"current": [140.60988923798013]}}}}}}
1 change: 1 addition & 0 deletions docs/sample-files/chromium/dromaeo.result
{"Dromaeo": {"tests": {"DOM Core Tests": {"metrics": {"Runs": {"current": [3402.1613029819678]}}, "tests": {"DOM Modification": {"metrics": {"Runs": {"current": [956.6854850452467]}}, "tests": {"appendChild": {"metrics": {"Runs": {"current": [[2451, 2485, 2487, 2775, 2818]]}}}, "insertBefore": {"metrics": {"Runs": {"current": [[2161, 2192, 2194, 2220, 2451]]}}}, "innerHTML": {"metrics": {"Runs": {"current": [[285.42914171656685, 294.7052947052947, 299.4011976047904, 301, 307]]}}}, "cloneNode": {"metrics": {"Runs": {"current": [[434, 440, 451, 458, 461.53846153846155]]}}}, "createElement": {"metrics": {"Runs": {"current": [[1036, 1055, 1105, 1143.7125748502995, 1150]]}}}, "createTextNode": {"metrics": {"Runs": {"current": [[878, 880, 898, 903, 917.0829170829171]]}}}}}, "DOM Attributes": {"metrics": {"Runs": {"current": [1175.4404688619516]}}, "tests": {"element.expando": {"metrics": {"Runs": {"current": [[651.3486513486514, 665, 673, 676, 676.3236763236763]]}}}, "getAttribute": {"metrics": {"Runs": {"current": [[2862, 2908, 3342, 3350, 3433]]}}}, "setAttribute": {"metrics": {"Runs": {"current": [[890, 890, 893, 901, 907]]}}}, "element.expando = value": {"metrics": {"Runs": {"current": [[289.4211576846307, 349.65034965034965, 352.64735264735265, 360.9172482552343, 378]]}}}, "element.property = value": {"metrics": {"Runs": {"current": [[1065, 1066, 1068, 1069, 1080]]}}}, "element.property": {"metrics": {"Runs": {"current": [[3112, 3120, 4077, 4166, 4233]]}}}}}, "DOM Query": {"metrics": {"Runs": {"current": [38354.1053584363]}}, "tests": {"getElementsByTagName(div)": {"metrics": {"Runs": {"current": [[254920, 256896, 258120, 258123, 260343]]}}}, "getElementsByTagName(*)": {"metrics": {"Runs": {"current": [[257542, 259110, 259930, 260917, 261255]]}}}, "getElementsByName": {"metrics": {"Runs": {"current": [[1547, 1565, 1580, 1713, 1723]]}}}, "getElementsByName (not in document)": {"metrics": {"Runs": {"current": [[5475, 5736, 5771, 5810, 5846]]}}}, 
"getElementsByTagName(a)": {"metrics": {"Runs": {"current": [[253046, 256726, 258291, 258494, 263442]]}}}, "getElementById (not in document)": {"metrics": {"Runs": {"current": [[5718, 5780, 5811, 5813, 5833]]}}}, "getElementById": {"metrics": {"Runs": {"current": [[1480, 1500, 1501, 1508, 1516]]}}}, "getElementsByTagName(p)": {"metrics": {"Runs": {"current": [[260104, 265495, 267041, 270858, 271925]]}}}, "getElementsByTagName (not in document)": {"metrics": {"Runs": {"current": [[438372, 485741, 492375, 493133, 496124]]}}}}}, "DOM Traversal": {"metrics": {"Runs": {"current": [713.0177669831319]}}, "tests": {"nextSibling": {"metrics": {"Runs": {"current": [[901, 905, 914, 915, 920]]}}}, "lastChild": {"metrics": {"Runs": {"current": [[531, 533.9321357285429, 534, 537, 542]]}}}, "childNodes": {"metrics": {"Runs": {"current": [[764, 804.1958041958042, 815, 819, 827]]}}}, "previousSibling": {"metrics": {"Runs": {"current": [[858, 877, 878, 882.1178821178821, 887]]}}}, "firstChild": {"metrics": {"Runs": {"current": [[528, 529, 537, 539, 541]]}}}}}}}}}}
1 change: 1 addition & 0 deletions docs/sample-files/chromium/es6bench.result
{"ES6SampleBench": {"metrics": {"Time": ["Geometric"]}, "tests": {"Air": {"metrics": {"Time": ["Geometric"]}, "tests": {"firstIteration": {"metrics": {"Time": {"current": [107.415, 99.51000000000002, 104.96500000000002, 93.09, 100.35000000000001, 97.72500000000001, 95.08000000000001, 104.81, 99.72999999999999, 92.15, 97.705, 95.2, 107.57000000000002, 98.85500000000002, 97.00500000000001, 88.11500000000001, 104.395, 82.285, 81.72500000000001, 82.57000000000001]}}}, "averageWorstCase": {"metrics": {"Time": {"current": [49.64000000000002, 54.64000000000016, 64.95499999999994, 57.88624999999996, 45.2950000000003, 63.459999999999965, 53.60249999999969, 54.008750000000106, 65.31875000000002, 56.163750000000036, 46.97000000000011, 57.50499999999993, 56.28999999999999, 56.22000000000008, 56.70375000000005, 60.73999999999981, 51.97875000000002, 57.09999999999999, 54.542499999999876, 56.809999999999974]}}}, "steadyState": {"metrics": {"Time": {"current": [6092.549999999999, 6067.549999999998, 6772.594999999993, 6097.7899999999945, 6007.975000000007, 6712.109999999992, 6048.034999999993, 6325.635000000001, 6423.775, 6377.940000000007, 6192.425000000004, 6300.775000000001, 6266.119999999997, 6117.800000000002, 6115.4800000000005, 6106.674999999999, 5993.119999999999, 5959.774999999998, 6011.970000000001, 6085.980000000002]}}}}}, "Basic": {"metrics": {"Time": ["Geometric"]}, "tests": {"firstIteration": {"metrics": {"Time": {"current": [60.06, 62.04000000000001, 63.575, 47.565, 50.89, 48.395, 50.825, 47.980000000000004, 48.83000000000001, 49.905, 52.265, 62.08500000000001, 50.845000000000006, 49.074999999999996, 50.30500000000001, 47.545, 48.47500000000001, 57.97500000000001, 52.21000000000001, 50.86000000000001]}}}, "averageWorstCase": {"metrics": {"Time": {"current": [35.652500000000245, 42.625, 40.2, 36.33124999999984, 59.980000000000246, 42.4699999999999, 39.88499999999987, 40.551249999999754, 38.663750000000164, 39.42999999999985, 56.42500000000007, 44.93875000000021, 
33.836250000000064, 36.63749999999994, 43.381249999999625, 41.43250000000003, 34.932499999999806, 35.02374999999996, 46.68375000000011, 33.50374999999976]}}}, "steadyState": {"metrics": {"Time": {"current": [6217.300000000001, 6615.145, 6118.130000000005, 6167.294999999999, 7033.114999999998, 6615.430000000004, 6455.940000000006, 6583.979999999998, 6395.294999999998, 6367.709999999994, 6682.0400000000045, 6401.580000000004, 6240.095, 6144.97, 6165.3250000000035, 6169.5700000000015, 6157.165000000003, 6147.645000000006, 6177.770000000002, 6264.240000000001]}}}}}}}}