diff --git a/DEPRECATED_DOCS/DEPRECATED_Background.md b/DEPRECATED_DOCS/DEPRECATED_Background.md deleted file mode 100644 index e232ec2b8..000000000 --- a/DEPRECATED_DOCS/DEPRECATED_Background.md +++ /dev/null @@ -1,27 +0,0 @@
---
layout: default
title: Background
nav_order: 7
---

# Background Information

## Detecting parallel patterns and parallelization suggestions

Figure 1 demonstrates a simplified view of the computational units and the relevant data dependencies in the loops of function “initialize” in our test program. Based on this information, the pattern detector component (see [DiscoPoP Explorer](Pattern_Detection/DiscoPoP_Explorer.md)) identifies that the loops in the function are doall loops. Further, it suggests wrapping the loops with OpenMP parallel for constructs and the related data-sharing clauses. Figure 1 also contains the parallelization suggestions.

![A simplified view of the CUs, data dependences, parallel patterns and parallelization suggestions which are identified in function “initialize” of our test program.](img/init1.svg)
*(Figure 1: A simplified view of the CUs, data dependences, parallel patterns and parallelization suggestions which are identified in function “initialize” of our test program.)*

Moreover, Figure 2 shows the analysis information for function “compute” in the test program. Unlike in the function “initialize”, there is an inter-iteration dependence, which can be resolved with the OpenMP reduction clause.

![A simplified view of the CUs, data dependences, parallel patterns and parallelization suggestions which are identified in function “compute” of our test program.](img/reduction1.svg)
*(Figure 2: A simplified view of the CUs, data dependences, parallel patterns and parallelization suggestions which are identified in function “compute” of our test program.)*

Please note that, as in many scientific applications which work on arrays or matrices, the suggestions do not change if we change the input size. Thus, it is possible to analyze the program with small inputs, obtain the parallelization suggestions and execute the parallelized version with larger inputs. However, this is merely a recommendation, and whether it is applicable depends highly on the code.

Furthermore, we need to mention that DiscoPoP takes an optimistic approach towards parallelization, and thus programmers need to validate the final suggestions. Considering the example above, it can easily be confirmed by looking at the loops that there are no inter-iteration dependences.

## Running serial and parallel codes

You can execute the parallel code by inserting the parallelization suggestions into the source code as described in the pages for the respective parallel patterns (see [parallel patterns](Pattern_Detection/Patterns/Patterns.md)). You need to compile the parallelized program with `-fopenmp`. The speedup gained by parallelizing the code highly depends on the hardware platform on which you execute the serial and parallel codes.

diff --git a/DEPRECATED_DOCS/DEPRECATED_Manual_Quickstart/Manual_Example.md b/DEPRECATED_DOCS/DEPRECATED_Manual_Quickstart/Manual_Example.md deleted file mode 100644 index 7ab98ae6a..000000000 --- a/DEPRECATED_DOCS/DEPRECATED_Manual_Quickstart/Manual_Example.md +++ /dev/null @@ -1,126 +0,0 @@
---
layout: default
title: Manual Example
parent: Manual Quickstart
nav_order: 3
---

# Manual Quickstart Example

## Prerequisites
We assume that you have finished the [manual setup](Manual_Setup.md) already.
-In order to follow the example, please make sure you know the paths to the following executables, files and folders. -Occurrences of names in capital letters (e.g. `DP_SOURCE`) should be replaced by the respective (absolute) paths. - -- From installation of prerequisites: - - `clang` / `clang++` -- From DiscoPoP Profiler installation: - - `DP_SOURCE`: Path to the DiscoPoP source folder - - `DP_BUILD`: Path to the DiscoPoP build folder - -## Important Note -For the sake of simplicity, this example only shows the process to profile a single file manually. -In case you want to analyse a more complex project, please refer to the respective tutorial pages for detailed instructions regarding the [manual profiling](../Tutorials/Manual.md), the assisted profiling using the [Execution Wizard script](../Tutorials/Execution_Wizard.md) or the assisted profiling via the [graphical Configuration Wizard](../Tutorials/Configuration_Wizard.md). - - - - -## Step 0: Enter the Example Directory -As a first step, please change your working directory to the `DP_SOURCE/example` directory: - - cd DP_SOURCE/example - -## Step 1: Profiling - -### Step 1.1: Create File Mapping -Please refer to [this site](../Profiling/File_Mapping.md) for further details and instructions. - -### Step 1.2: Compile the Target -To prepare the following instrumentation and profiling steps, we first have to compile the target source code to LLVM-IR. - - clang++ -g -c -O0 -S -emit-llvm -fno-discard-value-names example.cpp -o example.ll - -The additional specified flags ensure the addition of debug information into `example.ll`, which is required for the later analyses steps. - -### Step 1.3: Instrumentation and Static Analysis -After creating the LLVM-IR representation of our target source code, the `DiscoPoP` optimizer pass is loaded and executed. - - opt-11 -S -load=DP_BUILD/libi/LLVMDiscoPoP.so --DiscoPoP example.ll -o example_dp.ll --fm-path FileMapping.txt - -In this process, the static analyses are executed and the instrumented version of `example.ll`, named `example_dp.ll` is created in order to prepare the dynamic profiling step. - -### Step 1.4: Linking -In order to execute the profiled version of the target source code, we have to link it into an executable, in this case named `out_prof`. -Specifically, we have to link the DiscoPoP Runtime libraries in order to allow the profiling. - - clang++ example_dp.ll -o out_prof -LDP_BUILD/rtlib -lDiscoPoP_RT -lpthread - -### Step 1.5: -To execute the profiling and finish the collection of the required data, the created executable `out_prof` needs to be executed. - - ./out_prof - -As a major result of the profiling, a file named `out_prof_dep.txt` will be created which contains information on the identified data dependencies. -Further details regarding the gathered data can be found [here](../Profiling/Data_Details.md). - - -## Step 2: Creating Parallelization Suggestions -In order to generate parallelization suggestions from the [gathered data](../Profiling/Data_Details.md), the [DiscoPoP Explorer](../Pattern_Detection/DiscoPoP_Explorer.md) has to be executed. Since all gathered data is stored in the current working directory and no files have been renamed, it is sufficient to specify the `--path` and the `--dep-file` arguments in order to generate the parallelization suggestions. -`--dep-file` has to be specified since it's name depends on the name of the original executable. - - discopop_explorer --path=. 
--dep-file=out_prof_dep.txt

By default, the DiscoPoP Explorer outputs parallelization suggestions to the console.
It should show three different suggestions in total.
* The first should be a `Do-all` with the `Start line 1:19`.
* The second should be a `Reduction` with the `Start line 1:23`.
* The third should be a `Geometric Decomposition`.

## Step 3: Interpreting and Implementing the Suggestions
For detailed information on how to interpret and implement the created suggestions, please refer to the [pattern wiki](../Pattern_Detection/Patterns/Patterns.md).

Looking at the created suggestions, there are two ways in which the code can be parallelized (the parallelized code can be found inside the example directory as `solution1.cpp` and `solution2.cpp`):

* **Implementing the Do-All and Reduction Patterns**
  To implement the `Do-All` and `Reduction` patterns, all we have to do is add the suggested pragma before each loop. Make sure to add the `reduction` clause when implementing Reductions! The DiscoPoP Explorer also suggests classifications for the used variables to ensure correctness and improve performance. These should be added as clauses to the pragma. Note that OpenMP implicitly makes some variables, such as the loop index, private, so we can omit the corresponding clauses.

  To implement the `Do-All` suggestion we add the following line before the corresponding loop.

      #pragma omp parallel for shared(Arr,N)

  To implement the `Reduction` suggestion we add the following line before the corresponding loop.

      #pragma omp parallel for reduction(+:sum) shared(Arr,N)

  For this specific example, when we implement both patterns, it is better to open only one parallel region with `#pragma omp parallel` and use `#pragma omp for` before each loop:

      #pragma omp parallel shared(Arr,N)
      {
      #pragma omp for
      for(...) {...} // first loop
      #pragma omp for reduction(+:sum)
      for(...) {...} // second loop
      }

* **Implementing the geometric decomposition**
The Geometric Decomposition Pattern requires some code rewriting. A more in-depth example with explanation of the Geometric Decomposition Pattern interpretation can be found in the [pattern wiki](../Pattern_Detection/Patterns/Patterns.md). -A possible solution is provided in `solution2.cpp` in the example directory. - -Please note that it is not possible to implement both the Geometric Decomposition together with one of the other suggestions at the same time. - -## Step 4: Compile the parallelized application -After changing the source recompile the edited source code with the `-fopenmp` flag to enable openMP. -We provide example solutions on how to parallelize the patterns in the files `solution1.cpp` and `solution2.cpp`. You can compile them using: - - clang++ solution1.cpp -fopenmp -o solution1 - clang++ solution2.cpp -fopenmp -o solution2 - -## Final Remarks - -- When using openMP library functions (like `omp_get_thread_num()` in the Geometric Decomposition) make sure to include the `omp.h` header. -- Make sure to check the correctness of each suggestion! The approach of DiscoPoP can find a lot more opportunities than most tools that use only regular static analysis. However there is the possibility of false positives. These advantages and disadvantages are due to the fact that data dependencies are recorded during an instrumented run of the application. Usually the data dependencies do not change much if a representative input is selected during the instrumentation but it depends on the application! -- DiscoPoP analyzes data dependencies, not expected speedup. This means that some suggestions - especially for small problem sizes - are in fact likely to cause a slowdown due to the overhead of parallelization. diff --git a/DEPRECATED_DOCS/DEPRECATED_Manual_Quickstart/Manual_Setup.md b/DEPRECATED_DOCS/DEPRECATED_Manual_Quickstart/Manual_Setup.md deleted file mode 100644 index dabdfa2fd..000000000 --- a/DEPRECATED_DOCS/DEPRECATED_Manual_Quickstart/Manual_Setup.md +++ /dev/null @@ -1,81 +0,0 @@ ---- -layout: default -title: Manual Setup -parebt: Manual Quickstart -nav_order: 2 ---- - -# Manual Setup -## Pre-requisites -Before doing anything, you need a basic development setup. We have tested DiscoPoP on Ubuntu, and the prerequisite packages can be installed using the following command: - - sudo apt-get install git build-essential cmake - -Additionally, you need to install LLVM on your system. Currently, DiscoPoP only supports LLVM versions between 8.0 and 11.1. Due to API changes, which lead to compilation failures, it does not support lower and higher versions. Please follow the [installation tutorial](https://llvm.org/docs/GettingStarted.html), or install LLVM 11 via a package manager as shown in the following snippet, if you have not installed LLVM yet. - - sudo apt-get install libclang-11-dev clang-11 llvm-11 - -If you want to make use of the [Configuration](../Tutorials/Configuration_Wizard.md) or [Execution Wizard](../Tutorials/Execution_Wizard.md) for a simplified analysis of your project, you additionally need a working installation of [gllvm](https://github.com/SRI-CSL/gllvm) and [go](https://go.dev/doc/install). - -The Configuration Wizard uses Tkinter for its GUI functionality. It can be installed using - - sudo apt-get install python3-tk - - -## DiscoPoP profiler installation -First, clone the source code into a designated folder. 
- - git clone https://github.com/discopop-project/discopop.git - -Then, create a build directory, for example inside the source folder: - - cd discopop - mkdir build; cd build; - -Next, configure the project using CMake. - -If you have installed LLVM from the source please specify the preferred LLVM installation path for DiscoPoP. This can be done using the `-DLLVM_DIST_PATH=` CMake variable. - - cmake -DLLVM_DIST_PATH= .. - -If you have installed LLVM using the package manager, specifying this variable should not be necessary. In this case, please just use: - - cmake .. - -Note: In case you want to use a specific Version of LLVM, it is possible to specify the `-DUSE_LLVM_VERSION=` flag. - -Note: In case your application uses PThreads, please specify `-DDP_PTHREAD_COMPATIBILITY_MODE=1`. Note, however, that this can influence the runtime of the profiling. - -Note: In case you require a more verbose output of the runtime library, specify the `-DDP_RTLIB_VERBOSE=1` flag. - -Note: In case you want to specify the number of Workers available for the profiling step, specify the `-DDP_NUM_WORKERS=` flag. - -Once the configuration process is successfully finished, compile the DiscoPoP libraries using `make`. All created shared objects will be stored in the build directory and can be found inside a folder named `libi/`. - - make - - -## Installation of Python Modules -The included Python modules `discopop_explorer` and `discopop_wizard` will be installed during the `cmake` build process, -but they can also be installed using `pip` by executing the following command in the base directory of DiscoPoP: - - pip install . - -Installing the modules allows the simple invocation of those via - - discopop_explorer - -and - - discopop_wizard - -respectively. The [manual quickstart example](Manual_Example.md) will assume this kind of installation. -However, if you do not want to install the modules, they can be invoked using: - - python3 -m discopop_explorer - -and - - python3 -m discopop_wizard - -respectively. diff --git a/DEPRECATED_DOCS/DEPRECATED_Manual_Quickstart/Quickstart.md b/DEPRECATED_DOCS/DEPRECATED_Manual_Quickstart/Quickstart.md deleted file mode 100644 index aa9e7356c..000000000 --- a/DEPRECATED_DOCS/DEPRECATED_Manual_Quickstart/Quickstart.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -layout: default -title: Manual Quickstart -has_children: true -permalink: /Manual_Quickstart -nav_order: 2 ---- - -# Quickstart -The following pages contain brief summaries of the necessary steps to perform the most important, basic actions of different stages during the setup and usage of DiscoPoP. -To keep the pages as short as possible, detailed explanations are omitted. These can be found if necessary on the respective linked wiki pages. diff --git a/DEPRECATED_DOCS/DEPRECATED_Pattern_Detection/DiscoPoP_Explorer.md b/DEPRECATED_DOCS/DEPRECATED_Pattern_Detection/DiscoPoP_Explorer.md deleted file mode 100644 index f1d329861..000000000 --- a/DEPRECATED_DOCS/DEPRECATED_Pattern_Detection/DiscoPoP_Explorer.md +++ /dev/null @@ -1,63 +0,0 @@ ---- -layout: default -title: DiscoPoP Explorer -parent: Pattern Detection -nav_order: 1 ---- - -# DiscoPoP Explorer -The DiscoPoP Profiler is accompanied by a Python framework, specifically designed to analyze the profiler output files, generate a CU graph, detect potential parallel patterns, and suggest OpenMP parallelizations. 
-Currently, the following five patterns can be detected: -* [Reduction](Patterns/Reduction.md) -* [Do-All](Patterns/Do-All.md) -* [Pipeline](Patterns/Pipeline.md) -* [Geometric Decomposition](Patterns/Geometric_Decomposition.md) -* [Task Parallelism](Patterns/Tasks.md) - -## Getting started -We assume that you have already executed the DiscoPoP profiler on the sequential target application, and the following files have been created in the current working directory: -* `Data.xml` (CU information in XML format) -* `_dep.txt` (Data dependences) -* `reduction.txt` and `loop_counter_output.txt` (identified reduction operations and counted loop iterations) - -In case any of the files mentioned above are missing, please follow the [profiler instructions](../Profiling/Profiling.md) to generate them. - -### Task parallelism - TODO -Currently, the task parallelism detection is not supported, but it will be re-included in the near future. - - - - -### Pre-requisites -To use the DiscoPoP Explorer, you need to have Python 3.6+ installed on your system. Further Python dependencies should have been installed during the [manual setup](../Manual_Quickstart/Manual_Setup.md), but can be installed using the following command: -`pip install -r requirements.txt` - -### Usage -If you followed the installation instructions in the [manual setup](../Manual_Quickstart/Manual_Setup.md), you can execute the DiscoPoP Explorer by simply calling the module: - -`discopop_explorer --path ` - -If you have not installed the modules using `pip`, you can execute them manually by specifying the `-m` flag. - -`python3 -m discopop_explorer --path ` - -By specifying the `--path` flag, you can set the path to the DiscoPoP Profiler [output files](../Profiling/Data_Details.md). Then, the Python script searches within this path to find the required files. Nevertheless, if you are interested in passing a specific location for each file, please refer to the available command line options. These can be show by specifying the `-h` or `--help` flags: - -`discopop_explorer -h` - -By default, running the DiscoPoP Explorer will print out the list of patterns along with OpenMP parallelization suggestions to the standard output. You can also obtain the results in JSON format by passing the `--json` argument to the Python script. - -Detailed instructions on how to interpret the suggested patterns can be found on the pages of the respective [patterns](Patterns/Patterns.md). - -A simple walk-through example for the execution of the DiscoPoP Explorer and the interpretation and implementation of the results can be found as part of our [manual quickstart example](../Manual_Quickstart/Manual_Example.md). diff --git a/DEPRECATED_DOCS/DEPRECATED_Pattern_Detection/Pattern_Detection.md b/DEPRECATED_DOCS/DEPRECATED_Pattern_Detection/Pattern_Detection.md deleted file mode 100644 index 2a7d14e0a..000000000 --- a/DEPRECATED_DOCS/DEPRECATED_Pattern_Detection/Pattern_Detection.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -layout: default -title: Pattern Detection -has_children: true -permalink: /Pattern_Detection -nav_order: 5 ---- - -# Pattern Detection and Parallelization Suggestions -The Pattern Detection is the next step after executing the [DiscoPoP Profiler](../Profiling/Profiling.md). -In this step, a set of parallel patterns can be identified on the basis of the profiling results. -In case that one or more parallel patterns have been identified, a potential parallel implementation is suggested via OpenMP pragmas. 
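For a first intuition, the following sketch shows the kind of transformation such a suggestion describes: a simple loop without inter-iteration dependences (a doall) and the OpenMP pragma that a corresponding suggestion would typically map to. The function and variable names are purely illustrative and are not taken from an actual analysis result.

```
#include <cstddef>
#include <vector>

// Hypothetical doall candidate: no iteration reads data written by another iteration.
void scale(std::vector<double> &data, double factor) {
    // Implementing the suggestion essentially means inserting the pragma below,
    // together with the data-sharing clauses reported for the loop.
    #pragma omp parallel for shared(data, factor)
    for (std::size_t i = 0; i < data.size(); ++i) {
        data[i] *= factor;
    }
}
```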
-Both steps, the identification and the so called implementation of the parallel patterns are executed by the [DiscoPoP Explorer](DiscoPoP_Explorer.md), and are described in more detail on the linked page. -An overview of the supported parallel patterns as well as detailed explanations how to interpret the respective suggestions can be found [here](Patterns/Patterns.md). -To simplify the process, the [graphical interface](../Tutorials/Configuration_Wizard.md) can be used to browse, highlight and preview the identified parallelization suggestions. diff --git a/DEPRECATED_DOCS/DEPRECATED_Pattern_Detection/Patterns/Do-All.md b/DEPRECATED_DOCS/DEPRECATED_Pattern_Detection/Patterns/Do-All.md deleted file mode 100644 index 6e8935e35..000000000 --- a/DEPRECATED_DOCS/DEPRECATED_Pattern_Detection/Patterns/Do-All.md +++ /dev/null @@ -1,75 +0,0 @@ ---- -layout: default -title: Do-All Loop -parent: Patterns -grand_parent: Pattern Detection ---- - -# Do-All Loop - -## Reporting -Do-All Loops are reported in the following format: -``` -Do-all at: 1:2 -Start line: 1:7 -End line: 1:9 -pragma: "#pragma omp parallel for" -private: [] -shared: [] -first private: [] -reduction: [] -last private: [] -``` - -## Interpretation -The reported values shall be interpreted as follows: -* `Do-all at: :`, where the respective parent file can be looked up in the `FileMapping.txt` using `file_id` and `cu_id` can be used for a look up in `Data.xml` -* `Start line: :`, where `line_num` refers to the source code line of the parallelizable loop. -* `End line: :`, where `line_num` refers to the last line of the parallelizable loop. - -* `pragma:`shows which type of OpenMP pragma shall be inserted before the target loop in order to parallelize it. -* `private: []` lists a set of variables which have been identified as thread-`private` -* The same interpretation applies to the following values aswell: - * `shared` - * `first_private` - * `last_private` -* `reduction: [:]` specifies a set of identified reduction operations and variables. For `Do-All` suggestions, this list is always empty. - -## Implementation -In order to implement a suggestion, first open the source code file corresponding to `file_id` and navigate to line `Start line -> `. -Insert `pragma` before the loop begins. -In order to ensure a valid parallelization, you need to add the following clauses to the OpenMP pragma, if the respective lists are not empty: -* `private` -> clause: `private()` -* `shared` -> clause: `shared()` -* `first_private` -> clause: `firstprivate()` -* `last_private` -> clause: `lastprivate()` -* `reduction`-> clause: `reduction(:)` - -### Example -As an example, we will analyze the following code snippet for parallelization potential. All location and meta data will be ignored for the sake of simplicity. 
- - for (int i = 0; i < 10; ++i) { - local_array[i] += 1; - } - -Analyzing this code snippet results in the following parallelization suggestion: - - pragma: "#pragma omp parallel for" - private: ["i"] - shared: ["local_array"] - first private: [] - reduction: [] - last private: [] - - -After interpreting and implementing the suggestion, the resulting, now parallel, source code could look as follows: - - #pragma omp parallel for private(i) shared(local_array) - for (int i = 0; i < 10; ++i) { - local_array[i] += 1; - } diff --git a/DEPRECATED_DOCS/DEPRECATED_Pattern_Detection/Patterns/Geometric_Decomposition.md b/DEPRECATED_DOCS/DEPRECATED_Pattern_Detection/Patterns/Geometric_Decomposition.md deleted file mode 100644 index c2101530e..000000000 --- a/DEPRECATED_DOCS/DEPRECATED_Pattern_Detection/Patterns/Geometric_Decomposition.md +++ /dev/null @@ -1,115 +0,0 @@ ---- -layout: default -title: Geometric Decomposition -parent: Patterns -grand_parent: Pattern Detection ---- - -# Geometric Decomposition - -## Reporting -Possible geometric decompositions are reported in the following format: -``` -Geometric decomposition at: 1:9 -Start line: 1:26 -End line: 1:36 -Do-All loops: ['1:11'] -Reduction loops: [] - Number of tasks: 24 - Chunk limits: 1000 - pragma: for (i = 0; i < num-tasks; i++) #pragma omp task] - private: [] - shared: [] - first private: ['i'] - reduction: [] - last private: [] -``` - -## Interpretation -The reported values shall be interpreted as follows: -* `Geometric decomposition at: :`, where the respective parent file can be looked up in the `FileMapping.txt` using `file_id` and `cu_id` can be used for a look up in `Data.xml` -* `Start line: :`, where `line_num` refers to the first source code line of the potential geometrically decomposable code. -* `End line: :`, where `line_num` refers to the last line of the suggested pattern. -* `Do-All loops: [:]` specifies which [Do-all loops](Do-All.md) can be part of the geometric decomposition. -* `Reduction loops: [:]` specifies which [Reduction loops](Reduction.md) can be part of the geometric decomposition. -* `Number of tasks: ` specifies the number of tasks which should or can be spawned in order to process the geometric decomposition. -* `Chunk limits: ` determine the size of a workload package (amount of iterations) for each individual spawned task. -* `private, shared, first_private` and `last_private` indicate variables which should be mentioned within the respective OpenMP data sharing clauses. -* `reduction: [:]` specifies a set of identified reduction operations and variables. - - -## Implementation -In order to implement a geometric decomposition, first open the source code file corresponding to `file_id` and navigate to line `Start line -> `. -Insert `pragma` before each of the loops mentioned in `Do-all loops` and `Reduction loops`. Make sure to replace `num-tasks` with the specified `Number of tasks`, or insert a respective variable into the source code. -Modify the loop conditions of the original source code in order to allow a geometric decomposition. Each task should be responsible for processing a chunk of the size `Chunk limits`. 
-In order to ensure a valid parallelization, you need to add the following clauses to the OpenMP pragma, if the respective lists are not empty: -* `private` -> clause: `private()` -* `shared` -> clause: `shared()` -* `first_private` -> clause: `firstprivate()` -* `last_private` -> clause: `lastprivate()` -* `reduction`-> clause: `reduction(:)` - -### Example -As an example, we will analyze the following code snippet for parallelization potential. Some location and meta data will be ignored for the sake of simplicity. - - int main( void) - { - int i; - int d=20,a=22, b=44,c=90; - for (i=0; i<100; i++) { - a = foo(i, d); - b = bar(a, d); - c = delta(b, d); - } - a = b; - return 0; - } - -Analyzing this code snippet results in the following geometric decomposition suggestion: -``` -Geometric decomposition at: 1:1 -Start line: 1:2 -End line: 1:12 -Type: Geometric Decomposition Pattern -Do-All loops: ['1:3'] // line 5 -Reduction loops: [] - Number of tasks: 10 - Chunk limits: 10 - pragma: for (i = 0; i < num-tasks; i++) #pragma omp task] - private: [] - shared: [] - first private: ['i'] - reduction: [] - last private: [] -``` - -After interpreting and implementing the suggestion, the resulting, now parallel, source code could look as follows. -Since `i` has been used in the original source code already, the inserted `pragma` uses `x` instead. -As a last modification, the loop conditions in the original source code need to be modified slightly in order to allow the decomposition. -For a simpler interpretation of the example we have added the `chunk_size` and `tid` variables. -Note: Since the geometric decomposition relies on the identification of the thread number, the outermost `for` loop should be located inside a `parallel region`. However, depending on the specific analyzed source code, a surrounding `parallel region` might already exist or a different location for the surrounding `parallel region` may be more beneficial. - - int main( void) - { - int i; - int d=20,a=22, b=44,c=90; - - #pragma omp parallel - #pragma omp single - for (int x = 0; x < 10; x++ ) { - #pragma omp task - { - int tid = omp_get_thread_num(); - int chunk_size = 10; // value of Chunk limits - - for (i = tid*chunk_size; i < tid*chunk_size + chunk_size; i++) { - a = foo(i, d); - b = bar(a, d); - c = delta(b, d); - } - } - } - - a = b; - return 0; - } diff --git a/DEPRECATED_DOCS/DEPRECATED_Pattern_Detection/Patterns/Patterns.md b/DEPRECATED_DOCS/DEPRECATED_Pattern_Detection/Patterns/Patterns.md deleted file mode 100644 index 5b45758fc..000000000 --- a/DEPRECATED_DOCS/DEPRECATED_Pattern_Detection/Patterns/Patterns.md +++ /dev/null @@ -1,9 +0,0 @@ ---- -layout: default -title: Patterns -parent: Pattern Detection -has_children: true -permalink: /Pattern_Detection/Patterns ---- - -# Parallel Patterns diff --git a/DEPRECATED_DOCS/DEPRECATED_Pattern_Detection/Patterns/Pipeline.md b/DEPRECATED_DOCS/DEPRECATED_Pattern_Detection/Patterns/Pipeline.md deleted file mode 100644 index 08cc36684..000000000 --- a/DEPRECATED_DOCS/DEPRECATED_Pattern_Detection/Patterns/Pipeline.md +++ /dev/null @@ -1,139 +0,0 @@ ---- -layout: default -title: Pipeline -parent: Patterns -grand_parent: Pattern Detection ---- - -# Pipeline - -## Reporting - -### Pipelines -Pipelines are reported in the following format: -``` -Pipeline at: 1:11 -Start line: 1:30 -End line: 1:34 -Stages: - - - - - ... 
-``` -The reported values shall be interpreted as follows: -* `Pipeline at: :`, where the respective parent file can be looked up in the `FileMapping.txt` using `file_id` and `cu_id` can be used for a look up in `Data.xml` -* `Start line: :`, where `line_num` refers to the first source code line of the identified pipeline. -* `End line: :`, where `line_num` refers to the last line of the pipeline loop. -* `Stages` defines a list of stages contained in the identified pipeline. The specific format of the stages is described in the following. - -### Pipeline Stages -Individual stages of a pipeline are reported in the following format: -``` -Node: 1:13 -Start line: 1:31 -End line: 1:31 -pragma: "#pragma omp task" -first private: ['i'] -private: [] -shared: ['d', 'in'] -reduction: [] -InDeps: [] -OutDeps: ['a'] -InOutDeps: [] -``` - -The reported values shall be interpreted as follows: -* `Node: :`, where the respective parent file can be looked up in the `FileMapping.txt` using `file_id` and `cu_id` can be used for a look up in `Data.xml` -* `Start line: :`, where `line_num` refers to the first source code line of the identified pipeline stage. -* `End line: :`, where `line_num` refers to the last line of the stage. -* `pragma:`shows which type of OpenMP pragma shall be inserted before the `start line`. -* `private: []` lists a set of variables which have been identified as thread-`private` -* The same interpretation applies to the following values aswell: - * `shared` - * `first_private` -* `reduction: [:]` specifies a set of identified reduction operations and variables. -* `InDeps: []` specifies `in`-dependencies according to the [OpenMP depend clause](https://www.openmp.org/spec-html/5.0/openmpsu99.html). -* `OutDeps: []` specifies `out`-dependencies according to the [OpenMP depend clause](https://www.openmp.org/spec-html/5.0/openmpsu99.html). -* `InOutDeps: []` specifies `inout`-dependencies according to the [OpenMP depend clause](https://www.openmp.org/spec-html/5.0/openmpsu99.html). - - -## Implementation -In order to implement a suggested pipeline, first navigate to the source code location specified by `Pipeline at:`. -For each individual stage the following OpenMP pragmas and closes need to be added to the source code, if the respective lists are not empty: -* Insert `pragma` prior to the `start line` mentioned by the stage. -* If `private` is not empty, add the clause `private()`, where vars are separated by commas to the pragma. -* Do the same for: - * `shared` -> clause: `shared()` - * `first_private` -> clause: `firstprivate()` - * `reduction`-> clause: `reduction(:)` - * `InDeps` -> clause: `depend(in:)` - * `OutDeps` -> clause: `depend(out:)` - * `InOutDeps` -> clause: `depend(inout:)` - - -### Example -As an example, we will analyze the following code snippet for parallelization potential. Some location and meta data will be ignored for the sake of simplicity. 
- - int i; - int d=20,a=22, b=44,c=90; - for (i=0; i<100; i++) { - a = foo(i, d); - b = bar(a, d); - c = delta(b, d); - } - a = b; - -Analyzing this code snippet results in the following parallelization suggestion: -``` -Pipeline at: -Start line: 1:3 -End line: 1:7 -Stages: -Node: 1:13 - Start line: 1:4 - End line: 1:4 - - shared: ['d', 'in'] - reduction: [] - InDeps: [] - OutDeps: ['a'] - InOutDeps: [] - - Start line: 1:5 - End line: 1:5 - pragma: "#pragma omp task" - first private: [] - private: [] - shared: ['d', 'in'] - reduction: [] - InDeps: ['a'] - OutDeps: ['b'] - InOutDeps: [] - - Start line: 1:6 - End line: 1:7 - pragma: "#pragma omp task" - first private: [] - private: ['c'] - shared: ['d', 'in'] - reduction: [] - InDeps: ['b'] - OutDeps: [] - InOutDeps: [] -``` - -After interpreting and implementing the suggestion, the resulting, now parallel, source code could look as follows: - - int i; - int d=20,a=22, b=44,c=90; - for (i=0; i<100; i++) { - #pragma omp task firsprivate(i) shared(d, in) depend(out:a) - a = foo(i, d); - #pragma omp task shared(d, in) depend(in:a) depend(out:b) - b = bar(a, d); - #pragma omp task private(c) shared(d, in) depend(in: b) - c = delta(b, d); - } - a = b; diff --git a/DEPRECATED_DOCS/DEPRECATED_Pattern_Detection/Patterns/Reduction.md b/DEPRECATED_DOCS/DEPRECATED_Pattern_Detection/Patterns/Reduction.md deleted file mode 100644 index 3f2f970d0..000000000 --- a/DEPRECATED_DOCS/DEPRECATED_Pattern_Detection/Patterns/Reduction.md +++ /dev/null @@ -1,69 +0,0 @@ ---- -layout: default -title: Reduction -parent: Patterns -grand_parent: Pattern Detection ---- - -# Reduction Loop - -## Reporting -Reduction Loops are reported in the following format: -``` -Reduction at: 1:2 -Start line: 1:7 -End line: 1:9 -pragma: "#pragma omp parallel for" -private: [] -shared: [] -first private: [] -reduction: [] -last private: [] -``` - -## Interpretation -The reported values shall be interpreted as follows: -* `Reduction at: :`, where the respective parent file can be looked up in the `FileMapping.txt` using `file_id` and `cu_id` can be used for a look up in `Data.xml` -* `Start line: :`, where `line_num` refers to the source code line of the parallelizable loop. -* `End line: :`, where `line_num` refers to the last line of the parallelizable loop. -* `pragma:`shows which type of OpenMP pragma shall be inserted before the target loop in order to parallelize it. -* `private: []` lists a set of variables which have been identified as thread-`private` -* The same interpretation applies to the following values aswell: - * `shared` - * `first_private` - * `last_private` -* `reduction: [:]` specifies a set of identified reduction operations and variables. - -## Implementation -In order to implement a suggestion, first open the source code file corresponding to `file_id` and navigate to line `Start line -> `. -Insert `pragma` before the loop begins. -In order to ensure a valid parallelization, you need to add the following clauses to the OpenMP pragma, if the respective lists are not empty: -* `private` -> clause: `private()` -* `shared` -> clause: `shared()` -* `first_private` -> clause: `firstprivate()` -* `last_private` -> clause: `lastprivate()` -* `reduction`-> clause: `reduction(:)` - -### Example -As an example, we will analyze the following code snippet for parallelization potential. All location and meta data will be ignored for the sake of simplicity. 
- - for (int i = 0; i < N; i++) { - local_var *= global_array[i]; - } - -Analyzing this code snippet results in the following parallelization suggestion: -``` -pragma: "#pragma omp parallel for" -private: ["i"] -shared: [] -first private: ["global_array"] -reduction: ["*:local_var"] -last private: [] -``` - -After interpreting and implementing the suggestion, the resulting, now parallel, source code could look as follows: - - #pragma omp parallel for private(i) firstprivate(global_array) reduction(*:local_var) - for (int i = 0; i < N; i++) { - local_var *= global_array[i]; - } diff --git a/DEPRECATED_DOCS/DEPRECATED_Pattern_Detection/Patterns/Tasks.md b/DEPRECATED_DOCS/DEPRECATED_Pattern_Detection/Patterns/Tasks.md deleted file mode 100644 index 5b10a4482..000000000 --- a/DEPRECATED_DOCS/DEPRECATED_Pattern_Detection/Patterns/Tasks.md +++ /dev/null @@ -1,9 +0,0 @@ ---- -layout: default -title: Task Parallelism - TODO -parent: Patterns -grand_parent: Pattern Detection ---- - -# Task Parallelism -Currently, the task parallelism detection is not supported, but it will be re-included in the near future. diff --git a/DEPRECATED_DOCS/DEPRECATED_Profiling/Data_Details.md b/DEPRECATED_DOCS/DEPRECATED_Profiling/Data_Details.md deleted file mode 100644 index d832862cd..000000000 --- a/DEPRECATED_DOCS/DEPRECATED_Profiling/Data_Details.md +++ /dev/null @@ -1,75 +0,0 @@ ---- -layout: default -title: Gathered Data -parent: DiscoPoP Profiler -nav_order: 2 ---- - -# Gathered Data - -## Computational Units (CUs) - -When you apply the `DiscoPoP` optimizer pass, a file named `Data.xml` will be created. -It contains the identified computational units of the specified target project. - -The xml file contains much information about the program. There are four types of nodes in the xml file including: functions, loops, CUs, and dummies. Each node has an ID (consisting of `file_id:cu_id`), a type, a name (some nodes have empty names) and the start and end line of the node in the source code. - -Function nodes which are represented by type 1 contain information about functions in each file of the source code. Function nodes contain children nodes which can be CUs, loop nodes, and dummies. Also, you can find the list of function arguments there. - -Nodes with type 0 are CUs. They follow a read-after-write pattern. They are the atoms of parallelization; meaning that we do not look inside a CU for parallelization opportunities. The information that we report for CUs are the following: - -- `BasicBlockID`: The ID of the basic block that the CU happens in. A basic block is a block of code with single entry and exit points. A basic block may contain multiple CUs but a CU may not span over multiple basic blocks. -- `readDataSize`: number of bytes which is read in this CU. We consider LLVM-IR load instructions to compute this value. -- `writeDataSize`: Number of bytes written in this CU. It is computed like `readDataSize`. -- `instructionsCount`: Number of LLVM-IR instructions in the CU. -- `instructionLines`: The line numbers in which the CU appears. -- `readPhaseLines`: LLVM-IR load instructions which happen within the CU boundaries. -- `writePhaseLines`: LLVM-IR store instructions in the CU. -- `returnInstructions`: It indicates the line number of return instructions if the CU contains return instructions. -- `Successors`: The succeeding CU when analyzing the source code top-down in the source code. -- `localVariables`: the variables which appear within the CU. 
We also report the line number where the variable is defined, its name and its type. -- `globalVariables`: variables which break the read-after-write rule for the CU. They cause the creation of a new CU which will succeed the CU. -- `callsNode`: It indicates the line number of a called function if the CU contains a call instruction. - -Loop nodes have type 2. They contain children nodes which can be CUs, other loops, or dummy nodes. - -Dummy nodes are usually library functions whose source code is not available. We cannot profile them and thus do not provide parallelization suggestions for them. - -Please note that DiscoPoP appends CUs to an existing `Data.xml` file and thus if you need to extract computational units of the program again, you need to remove the existing `Data.xml` file. - -## Data Dependencies - -DiscoPoP uses a signature to store data dependences. You can configure the settings of this signature by creating a dp.conf file in the root directory of your program. The contents of the config file usually contains the following parameters: - -- `SIG_ELEM_BIT`: Size of each element in the signature in bits. -- `SIG_NUM_ELEM`: Size of the signature. The bigger it is, the less false positives/negatives are reported. -- `SIG_NUM_HASH`: Number of signatures. A value of two indicates that one signature is used for read accesses and one signature for write accesses. -- `USE_PERFECT`: When it is set to one, DiscoPoP uses a perfect signature. The default value is one. - -To find parallelization opportunities, we need to extract data dependencies inside the program. For that, we need to instrument the memory accesses, link the program with DiscoPoP run-time libraries, and finally execute the program with several representative inputs. The necessary steps are described [here](../Tutorials/Tutorials.md). -After executing the instrumented program, you find a text file which ends with `_dep.txt` which contains the data dependences identified using the provided input. -A data dependence is represented as a triple ``. `type` denotes the dependence type and can be any of `RAW`, `WAR` or `WAW`. Note that a special type `INIT` represents the first write operation to a memory address. `source` and `sink` are the source code locations of the former and the latter memory access, respectively. `sink` is further represented as a pair ``, while source is represented as a triple ``. The keyword `NOM` (short for "NORMAL") indicates that the source line specified by aggregated `sink` has no control-flow information. Otherwise, `BGN` and `END` represent the entry and exit points of a control region. - -## Loop Counters -As part of the DiscoPoP profiling loops are instrumented with the purpose to count the total amount of executed iterations per loop as well as the average iteration count per loop entry. - -### Total Loop Counters -The total observed iteration counts per loop will be stored in a file named `loop_counter_output.txt`. -Each line of the file contains the summed count of iterations for the specified loop. -The used format is as follows: ` `. - -### Total Counters, Amount of Loop Entries, Averages and Maxima -The total, summed iteration counters per loops as well as the amount of loop entries, average iteration count per loop and the maximum observed iterations per single loop entry can be found in the previously described dependency file, which stores all information gathered during the profiling. -The used format is as follows: `: BGN loop `. 
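To relate the dependence types described above to concrete source code, the following hypothetical snippet marks memory-access pairs that a profiling run would record as `RAW`, `WAR` and `WAW` entries. The function, variable names and comments are illustrative only and are not the output of an actual run.

```
// Hypothetical example: each iteration touches a[i-1], which was written by the
// previous iteration, and a[i], so all three dependence types are observed.
void dependence_kinds(int *a, int n) {
    int t = 0;
    for (int i = 1; i < n; ++i) {
        t = a[i - 1];  // read after the write to a[i-1] in the previous iteration -> RAW
        a[i - 1] = 0;  // write after the read directly above                      -> WAR
                       // and after the write in the previous iteration            -> WAW
        a[i] = t;      // the first write to each address is recorded as INIT
    }
}
```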
- -## Reduction Instructions -Identified reduction instructions are stored in a file named `reduction.txt`. -Each line of the file describes one identified reduction instruction in the code. -The format is quite simple and will be explained using the following example: - - FileID : 1 Loop Line Number : 10 Reduction Line Number : 12 Variable Name : sum Operation Name : + - -`FileID` specifies the id of the file, as stored in `FileMapping.txt`, which contains the identified reduction operation. -`Loop Line Number` refers to the source code line of the loop which contains the identified operation. -`Reduction Line Number` refers to the source code line where the operation is located. -The name of the affected reduction variable is presented by `Variable Name` and `Operation Name` shows which operation is used for the reduction. diff --git a/DEPRECATED_DOCS/DEPRECATED_Profiling/File_Mapping.md b/DEPRECATED_DOCS/DEPRECATED_Profiling/File_Mapping.md deleted file mode 100644 index a6af9fecf..000000000 --- a/DEPRECATED_DOCS/DEPRECATED_Profiling/File_Mapping.md +++ /dev/null @@ -1,30 +0,0 @@ ---- -layout: default -title: File Mapping -parent: DiscoPoP Profiler -nav_order: 2 ---- - -# File Mapping -Several steps of DiscoPoP's pipeline require a mapping of unique file ids to file paths, which is referred to as the filemapping. -To create this mapping, the following utility scripts can be used: - -`/scripts/dp-fmap` - -## Execution -The scripts has to be executed from the top folder of the target project by simply executing the following command without any parameters: - -`/scripts/dp-fmap` - -When executed, the scripts saves the current working directory and assings ids to all found and relevant files. - -## Output -The script will create a file named `FileMapping.txt` in the current working directory. -Each line in this file will correspond to an individual file. -The format used to report the assigned file ids is as follows: - -``` - - -... -``` diff --git a/DEPRECATED_DOCS/DEPRECATED_Profiling/Profiling.md b/DEPRECATED_DOCS/DEPRECATED_Profiling/Profiling.md deleted file mode 100644 index 51883d284..000000000 --- a/DEPRECATED_DOCS/DEPRECATED_Profiling/Profiling.md +++ /dev/null @@ -1,17 +0,0 @@ ---- -layout: default -title: DiscoPoP Profiler -has_children: true -permalink: /Profiling -nav_order: 4 ---- - -# DiscoPoP Profiler -The DiscoPoP pattern detection requires different sets of information on the structure and characteristics of the target source code. -This data is gathered using a mixture of static and dynamic code analyses. -The static code analyses as well as the instrumentation of the target code for the dynamic analyses are conveniently bundled into a single `DiscoPoP` optimizer pass. -Detailed instructions on how to apply the pass, typically referenced to as `DiscoPoP Profiler`, can be found at the [Tutorials](../Tutorials/Tutorials.md) pages. - -A detailed explanation of the gathered data, and in particular the used formats can be found [here](Data_Details.md). - -An explanation of the filemapping required by several steps of the DiscoPoP pipeline can be found [here](File_Mapping.md). 
diff --git a/DEPRECATED_DOCS/DEPRECATED_Profiling/Wrapper_Scripts.md b/DEPRECATED_DOCS/DEPRECATED_Profiling/Wrapper_Scripts.md deleted file mode 100644 index e69de29bb..000000000 diff --git a/DEPRECATED_DOCS/DEPRECATED_Quickstart.md b/DEPRECATED_DOCS/DEPRECATED_Quickstart.md deleted file mode 100644 index abf9eea89..000000000 --- a/DEPRECATED_DOCS/DEPRECATED_Quickstart.md +++ /dev/null @@ -1,48 +0,0 @@ ---- -layout: default -title: Quickstart - GUI -nav_order: 2 ---- - -# Quickstart - GUI -The fastest and most convenient way to using DiscoPoP is provided by the [graphical user interface](Tutorials/Configuration_Wizard.md). -It can be installed from `PyPi` and offers the option to perform the necessary profiling steps using a docker container by default. -Thus, no manual setup of the environment apart from installing `docker` is required. - -## Prerequisites -* `docker` is installed -* `python3` and `pip` are installed -* Projects which shall be analyzed provide a `Makefile` which fulfills the criteria mentioned [here (under "Running DiscoPoP")](index.md). - -## Installation -DiscoPoP can be downloaded and installed via `pip` using the following command: `pip install discopop`, or installed from the source directory using `pip install .` - - -## Initial Start -The DiscoPoP GUI can be started by simply calling: - - discopop_wizard - -The first start will take some time due to downloading and setting up the docker container for the profiling. -Since the results can be cached, successive executions of the `discopop_wizard` will be faster. - -## Provided Quickstart Example -To execute the profiling and analysis for the example provided in the repository, first create a new run configuration by clicking `New..`. -Only three of the possible arguments are required to execute and analyze the provided example: -* `Label`: an arbitrary name for the configuration (e.g. "Walk-through example") -* `Executable name`: Name of the executable created by the make file - * For the provided example, use `my_exe` -* `Project path`: Path to the project which shall be analyzed - * For the provided example, use your `ABSOLUTE_PATH_TO_DISCOPOP/example` - -![Example Configuration](img/quickstart_example_1.png) - -Click `Save` to persist the changes and `Execute` to start the profiling and analysis pipeline. - -Once everything is finished, the identified parallelization suggestions will be shown automatically. -Clicking on any of them opens the code preview, which highlights the parallelizable source code section and shows the suggested OpenMP pragma. -Details regarding a specific suggestion can be shown by hovering over the respective button. - -![Example suggestions](img/quickstart_example_2.png) - -For further details and explanations, please refer to the [Tutorial](Tutorials/Tutorials.md) as well as the pages dedicated to the [Profiling](Profiling/Profiling.md) and [Pattern Detection](Pattern_Detection/Pattern_Detection.md). diff --git a/DEPRECATED_DOCS/DEPRECATED_Quickstart_CMake.md b/DEPRECATED_DOCS/DEPRECATED_Quickstart_CMake.md deleted file mode 100644 index f22193a17..000000000 --- a/DEPRECATED_DOCS/DEPRECATED_Quickstart_CMake.md +++ /dev/null @@ -1,61 +0,0 @@ ---- -layout: default -title: Quickstart - CMake -nav_order: 2 ---- - -# Quickstart - CMake based projects -The following Instructions are intended to aid in profiling and analyzing CMake based projects. 
- -## Prerequisites -Please refer to the [setup instructions](Manual_Quickstart/Manual_Setup.md) for guidance regarding the installation and environment setup. -During the profiling and pattern analysis, a clear identification of individual source files is required. -For this, all scripts require a [FileMapping](Profiling/File_Mapping.md) as an input. Please make sure this file exists. -In the following, `` will represent the path to this file. - -## Profiling -DiscoPoP provides wrapper scripts for `CMake, clang, clang++` and `clang for linking`. -
-A detailed description of the individual scripts can be found [here](Profiling/Wrapper_Scripts.md). -
-The following instructions are based on the provided example, which can be found in the `example` folder. -To follow the example, please set the current working directory to `discopop/example`. -
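Before running the commands below, it can help to picture the kind of loops DiscoPoP is expected to flag in such a target. The following sketch is hypothetical and is not the actual source of the shipped `cmake_example`; it merely shows typical doall and reduction candidates.

```
#include <cstdio>

int main() {
    const int N = 10000;
    static double arr[N];

    // Independent iterations: a typical doall candidate.
    for (int i = 0; i < N; ++i) {
        arr[i] = 0.5 * i;
    }

    // Inter-iteration dependence on sum only: a typical reduction candidate.
    double sum = 0.0;
    for (int i = 0; i < N; ++i) {
        sum += arr[i];
    }

    std::printf("sum = %f\n", sum);
    return 0;
}
```

After the instrumented executable has been run, the data dependences of such loops end up in the `cmake_example_dep.txt` file that the DiscoPoP Explorer consumes in the pattern detection step below.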

-When all prerequisites are met, instrumenting a CMake based project with DiscoPoP is possible by simply using the -provided `CMake_wrapper.sh`, located in `build/scripts`, instead of the standard `cmake` command during the build, and specifying the `DP_FM_PATH` environment variable for use during the `make` process. - -``` -# create Filemapping -/scripts/dp-fmap - -mkdir build -cd build - -# cmake build -/scripts/CMAKE_wrapper.sh .. - -# make phase. Important: specify environment variable -DP_FM_PATH= make -``` - -After execution of `make`, the static analysis of your project as well as the instrumentation of the code should be finished and the build should result in the executable (e.g. `cmake_example`) just as expected. -To obtain the dynamic data dependencies, please execute the executable. Due to expected runtime overhead, choose small to very small but representative input sizes if possible. - -In case of the example: -``` -./cmake_example -``` - -After the execution has finished, all necessary files are available and the [DiscoPoP Explorer](Pattern_Detection/DiscoPoP_Explorer.md) can be invoked to identify suggestions for parallelization. -A detailed overview of the gathered data can be found [here](Profiling/Data_Details.md). - -## Pattern detection - -``` -discopop_explorer --dep-file=cmake_example_dep.txt --fmap=../FileMapping.txt --json=patterns.json -``` - -The identified patterns are stored in `patterns.json` as well as `detection_result_dump.json`, in the latter case together with the created PET graph for later use. -Please refer to [this site](Pattern_Detection) for detailed explanations how to interpret the results. - -An executable version of this introduction can be found in `example/execute_cmake_example.sh`. diff --git a/DEPRECATED_DOCS/DEPRECATED_Tutorials/Configuration_Wizard.md b/DEPRECATED_DOCS/DEPRECATED_Tutorials/Configuration_Wizard.md deleted file mode 100644 index 6e3df1ab5..000000000 --- a/DEPRECATED_DOCS/DEPRECATED_Tutorials/Configuration_Wizard.md +++ /dev/null @@ -1,129 +0,0 @@ ---- -layout: default -title: Configuration Wizard - GUI -parent: Tutorials -nav_order: 1 ---- - -# Configuration Wizard - -The DiscoPop Configuration Wizard acts as a wrapper for the [Execution Wizard](Execution_Wizard.md) and provides a graphical user interface in a terminal to simplify the management of execution configurations. - - -## Important Note - Prerequisites -If you want to make use of the [Configuration](Configuration_Wizard.md) or [Execution Wizard](Execution_Wizard.md) for a simplified analysis of your project, you additionally need a working installation of [gllvm](https://github.com/SRI-CSL/gllvm) and [go](https://go.dev/doc/install). - -## Execution -The Wizard is provided via a python module. After successfully following the [manual setup](../Manual_Quickstart/Manual_Setup.md) it can be executed by: - - discopop_wizard - - -## Initial Setup -When you first start the Wizard, the `Setup` will automatically be started. -You will be prompted if you want to make use of a docker container for profiling. -If you select yes, no setup of the environment except for setting up docker is required. -Otherwise, you will be prompted for paths to folders and executables which shall be known after completing the [manual setup](../Manual_Quickstart/Manual_Setup.md) and installing [gllvm](https://github.com/SRI-CSL/gllvm) and [go](https://go.dev/doc/install). -If default values for all required paths can be determined automatically, you will be forwarded to the main menu. 
-If not, the `Settings` screen will be opened. -In this case, please enter the missing information and use the `Save` button to save your changes and proceed to the main menu. -You can modify the provided paths and settings at any time using the `Options->Settings` button in the main menu. - -## Execution Configurations -The main menu provides an overview of the stored execution configurations. -New ones can be created using the `New..` button. -The menu to show execution results, modify, delete or execute configurations can be opened by simply clicking on a configuration. -The following figure shows the opened menu for a sample configuration. -A detailed explanation for the required settings can be found in the next section. - -![Figure 1: Execution Configuration](../img/wizard_execution_configuration_screen.png) - -### Settings -#### Mandatory -* `Label`: Name of the configuration. Used to distinguish configurations in the main menu. -* `Executable name`: Name of the executable which is created when building the target project. The Name will be used to execute the configuration. -* `Project path`: Path to the project which shall be analyzed for potential parallelism. -* `Project linker flags`: Linker flags which need to be passed to the build system in order to create a valid executable. - -#### Optional -* `Description`: Brief description of the configuration. Used to distinguish configurations in the main menu. -* `Executable arguments`: Specify arguments which will be forwarded to the call of the created executable for the profiling. -* `Make flags`: Specified flags will be forwarded to `Make` during the build of the target project. -* `Make target`: TODO DESCRIPTION -* `Additional notes`: Can be used to store notes regarding the configuration. This information will not be used during the execution of the configuration in any way. - -## Executing the DiscoPop Pipeline -When you have entered the necessary data, use the `Save` button in order to save your changes. -The stored configuration can be executed by simply pressing the `Execute` button afterwards. -In the background, a command to call the [Execution Wizard](Execution_Wizard.md) is assembled using the stored settings and the information stored in the current execution configuration. - -## Results -As of right now, the created [parallelization suggestions](../Pattern_Detection/Patterns) will be printed to a file named `patterns.txt` in the project folder. -This is not an ideal solution and will be improved in the future. -The DiscoPoP Wizard provides a simple overview of the identified suggestions. The respective view can be opened by clicking on the `Results` tab of a configuration on the main screen and will be opened automatically after an execution has finished. -The identified parallelization suggestions are shown via buttons in the middle column of the screen and allow to open a code preview, which highlights the target code section and shows the suggested OpenMP pragma. -All details regarding the suggestion can be made visible by hovering over the respective buttons. -An example for this view can be found in the following figure: -![Parallelization Suggestions](../img/wizard_results_screen.png) - -## Data Storage -All created metadata (settings and execution configurations) will be stored in a folder named `.config`, located within the installation directory of the `Configuration Wizard`. -Settings are stored within a file named `SETTINGS.txt` in a JSON format. 
-Execution configurations are stored as executable scripts named `_