Updated the table of contents in documentation
anibalinn committed Sep 24, 2024
1 parent 44c73d6 commit 6d8f20f
Showing 1 changed file (docs/README.md) with 80 additions and 34 deletions.
- [Test Logs and Metrics](#test-logs-and-metrics)
- [Muting Test Scenarios](#muting-test-scenarios)
- [Handling Failing Scenarios](#handling-failing-scenarios)
- [Displaying Progress Bar in Console](#displaying-progress-bar-in-console)
- [Show Your Support](#show-your-support)

## Introduction
BehaveX provides the following features:
- **Evidence Collection**: Include images/screenshots and additional evidence in the HTML report.
- **Test Logs**: Automatically compile logs generated during test execution into individual log reports for each scenario.
- **Test Muting**: Add the `@MUTE` tag to test scenarios to execute them without including them in JUnit reports.
- **Execution Metrics**: Generate metrics in the HTML report for the executed test suite, including Automation Rate, Pass Rate, Steps execution counter and average execution time.
- **Dry Runs**: Perform dry runs to see the full list of scenarios in the HTML report without executing the tests. It overrides the `-d` Behave argument.
- **Auto-Retry for Failing Scenarios**: Use the `@AUTORETRY` tag to automatically re-execute failing scenarios. Also, you can re-run all failing scenarios using the **failing_scenarios.txt** file.

![Test Execution Report](https://github.com/hrcorval/behavex/blob/master/img/html_test_report.png?raw=true)
![Test Execution Report](https://github.com/hrcorval/behavex/blob/master/img/html_test_report_2.png?raw=true)
Execute BehaveX in the same way as Behave from the command line, using the `behavex` command.
## Constraints

- BehaveX is currently implemented on top of Behave **v1.2.6**, and not all Behave arguments are yet supported.
- Parallel execution is implemented using concurrent Behave processes, so any hooks defined in the `environment.py` module run in each parallel process, including the **before_all** and **after_all** hooks. The same applies to the **before_feature** and **after_feature** hooks when parallel execution is organized by scenario (see the sketch below).
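
For illustration, the following `environment.py` sketch (with hypothetical hook bodies) shows hooks that will run once in every parallel process:

```python
# environment.py -- a minimal, illustrative sketch (hook bodies are hypothetical)

def before_all(context):
    # Runs at the start of EVERY parallel Behave process, not once per BehaveX run,
    # so keep global setup idempotent (e.g., creating folders, opening connections).
    context.session_data = {}

def before_feature(context, feature):
    # With --parallel-scheme=scenario, scenarios from the same feature may be
    # distributed across processes, so this hook can run in several of them.
    pass
```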

## Supported Behave Arguments

```bash
behavex -t=@<TAG> --parallel-processes=5 --parallel-scheme=feature
behavex -t=@<TAG> --parallel-processes=5 --parallel-scheme=feature --show-progress-bar
```

### Identifying Each Parallel Process

BehaveX populates the Behave context with the `worker_id` user data entry. This variable contains the ID of the current Behave process.

For example, if BehaveX is started with `--parallel-processes 2`, the first Behave instance will receive `worker_id=0`, and the second instance will receive `worker_id=1`.

This variable can be accessed within the Python tests using `context.config.userdata['worker_id']`.
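
For example, the following sketch (the hook and log message are illustrative) tags each scenario's log output with the worker that executed it:

```python
# environment.py -- illustrative sketch: log which parallel worker runs each scenario
import logging

def before_scenario(context, scenario):
    # 'worker_id' is injected by BehaveX into the Behave user data ('0', '1', ...)
    worker_id = context.config.userdata.get('worker_id', '0')
    logging.info("Scenario '%s' is running on parallel process %s", scenario.name, worker_id)
```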


## Test Execution Reports

### HTML Report
Expand All @@ -183,16 +192,19 @@ One JUnit file per feature, available at:
```bash
<output_folder>/behave/*.xml
```
The JUnit reports have been replaced by the ones generated by the test wrapper, in order to support muting test scenarios on build servers.


## Attaching Images and Evidence
You can attach images or screenshots to the HTML report using your own mechanism to capture screenshots or retrieve images. Utilize the **attach_image_file** or **attach_image_binary** methods provided by the wrapper.

These methods can be called from hooks in the `environment.py` file or directly from step definitions.

### Example 1: Attaching an Image File
```python
from behave import given
from behavex_images import image_attachments

@given('I take a screenshot from the current page')
def step_impl(context):
    image_attachments.attach_image_file(context, 'path/to/image.png')
```

### Example 2: Attaching an Image Binary
```python
# environment.py
from behavex_images import image_attachments

def after_step(context, step):
    # Assumes a Selenium WebDriver instance is available as 'selenium_driver'
    image_attachments.attach_image_binary(context, selenium_driver.get_screenshot_as_png())
```

By default, images are attached to the HTML report only when the test fails. You can modify this behavior by setting the condition using the **set_attachments_condition** method.
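
As a sketch (assuming the `AttachmentsCondition` values exposed by the behavex-images package), this condition can be set from a hook in `environment.py`:

```python
# environment.py -- sketch assuming behavex-images exposes AttachmentsCondition
from behavex_images import image_attachments
from behavex_images.image_attachments import AttachmentsCondition

def before_all(context):
    # Attach images for every scenario, not only for failing ones
    image_attachments.set_attachments_condition(context, AttachmentsCondition.ALWAYS)
```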

![Test Execution Report](https://github.com/abmercado19/behavex-images/blob/master/behavex_images/img/html_test_report.png?raw=true)
![Test Execution Report](https://github.com/abmercado19/behavex-images/blob/master/behavex_images/img/html_test_report_2.png?raw=true)
![Test Execution Report](https://github.com/abmercado19/behavex-images/blob/master/behavex_images/img/html_test_report_3.png?raw=true)

For more information, check the [behavex-images](https://github.com/abmercado19/behavex-images) library, which is included with BehaveX 3.3.0 and above.

If you are using BehaveX < 3.3.0, you can still attach images to the HTML report by installing the **behavex-images** package with the following command:

> pip install behavex-images
## Attaching Additional Execution Evidence to the HTML Report

Providing ample evidence in test execution reports is crucial for identifying the root cause of issues. Any evidence file generated during a test scenario can be stored in a folder path provided by the wrapper for each scenario.

The evidence folder path is automatically generated and stored in the `context.evidence_path` context variable. This variable is updated by the wrapper before executing each scenario, and all files copied into that path will be accessible from the HTML report linked to the executed scenario.
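
For example, a step or hook could copy generated files into that folder; the helper below is hypothetical:

```python
# Hypothetical helper: store a file as evidence for the current scenario
import os
import shutil

def save_evidence(context, file_path):
    # context.evidence_path is set by BehaveX before each scenario
    os.makedirs(context.evidence_path, exist_ok=True)
    shutil.copy(file_path, os.path.join(context.evidence_path, os.path.basename(file_path)))
```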

## Test Logs per Scenario

The HTML report includes detailed test execution logs for each scenario. These logs are generated using the **logging** library and are linked to the specific test scenario. This feature allows for easy debugging and analysis of test failures.
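
For instance, standard `logging` calls made from step definitions are captured into the scenario's log report; the step below is illustrative:

```python
# steps/login_steps.py -- illustrative step using the standard logging module
import logging

from behave import when

@when('the user submits the login form')
def step_impl(context):
    logging.info("Submitting the login form")
    logging.debug("Current scenario tags: %s", context.tags)
```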

## Metrics and Insights

The HTML report provides a range of metrics to help you understand the performance and effectiveness of your test suite. These metrics include:

* **Automation Rate**: The percentage of scenarios that are automated.
* **Pass Rate**: The percentage of scenarios that have passed.
* **Steps Execution Counter and Average Execution Time**: These metrics provide insights into the execution time and frequency of steps within scenarios.

## Dry Runs

BehaveX enhances the traditional Behave dry run feature to provide more value. The HTML report generated during a dry run can be shared with stakeholders to discuss scenario specifications and test plans.

To execute a dry run, we recommend using the following command:

> behavex -t=@TAG --dry-run
## Muting Test Scenarios

In some cases, you may want to mute test scenarios that are failing but are not critical to the build process. This can be achieved by adding the `@MUTE` tag to the scenario. Muted scenarios will still be executed, but their failures will not be reported in the JUnit reports. However, the execution details will be visible in the HTML report.

## Handling Failing Scenarios

### @AUTORETRY Tag

For scenarios that are prone to intermittent failures or are affected by unstable infrastructure, you can use the `@AUTORETRY` tag, which can be applied to any scenario or feature. This tag enables automatic re-execution of the scenario in case of failure.

### Rerunning Failed Scenarios

After executing tests, if there are failing scenarios, a **failing_scenarios.txt** file will be generated in the output folder. This file allows you to rerun all failed scenarios using the following command:

```bash
behavex -rf=./<OUTPUT_FOLDER>/failing_scenarios.txt
```
or
```bash
behavex --rerun-failures=./<OUTPUT_FOLDER>/failing_scenarios.txt
```

To avoid overwriting the previous test report, it is recommended to specify a different output folder using the **-o** or **--output-folder** argument.

Note that the **-o** or **--output-folder** argument does not work with parallel test executions.

## Displaying Progress Bar in Console

When running tests in parallel, you can display a progress bar in the console to monitor the test execution progress. To enable the progress bar, use the **--show-progress-bar** argument:

> behavex -t=@TAG --parallel-processes=3 --show-progress-bar

If you are printing logs in the console, you can configure the progress bar to display updates on a new line by adding the following setting to the BehaveX configuration file:

> [progress_bar]
>
> print_updates_in_new_lines="true"
## Show Your Support

**If you find this project helpful or interesting, we would appreciate it if you could give it a star** (:star:). It's a simple way to show your support and let us know that you find value in our work.

By starring this repository, you help us gain visibility among other developers and contributors. It also serves as motivation for us to continue improving and maintaining this project.

Thank you in advance for your support! We truly appreciate it.
