
How do evaluators collect statistics?

Esta Nagy edited this page Dec 30, 2020 · 4 revisions

To understand how evaluators will behave in certain scenarios, we need to understand a couple of basic things first.

Basics

What kind of options do we have to configure our matchers?

  • Test class name/package based matching
  • Feature file URI or Gherkin scenario name (since v2.4.0)
  • Annotation
    • on the test class
    • on the test method
  • System property based matching
  • Environment variable based matching
  • A composite of these using one or more logical operators (and, or, not operators supported)
  • Custom matchers can be added easily using the provided CUSTOM type; please see this example. (since v2.4.0)
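To make the composition option above more tangible, here is a minimal sketch of how matchers combine with and/or/not operators. The `Matcher` and `TestDescriptor` names and the factory methods are illustrative assumptions, not the real Abort-Mission API:

```java
import java.util.regex.Pattern;

// Hypothetical sketch of matcher composition; Matcher, TestDescriptor and the
// factory methods below are illustrative, not the real Abort-Mission types.
public class MatcherSketch {

    record TestDescriptor(String className, String methodName) {}

    @FunctionalInterface
    interface Matcher {
        boolean matches(TestDescriptor descriptor);

        // the "and", "or" and "not" operators mentioned in the list above
        default Matcher and(Matcher other) {
            return d -> this.matches(d) && other.matches(d);
        }
        default Matcher or(Matcher other) {
            return d -> this.matches(d) || other.matches(d);
        }
        default Matcher negate() {
            return d -> !this.matches(d);
        }
    }

    // class name / package based matching
    static Matcher classNameMatches(String regex) {
        Pattern p = Pattern.compile(regex);
        return d -> p.matcher(d.className()).matches();
    }

    // system property based matching
    static Matcher systemPropertyMatches(String key, String regex) {
        Pattern p = Pattern.compile(regex);
        return d -> p.matcher(System.getProperty(key, "")).matches();
    }

    public static void main(String[] args) {
        Matcher composite = classNameMatches("com\\.example\\..*IntegrationTest")
                .and(classNameMatches(".*Slow.*").negate());

        System.out.println(composite.matches(
                new TestDescriptor("com.example.FastIntegrationTest", "shouldPass"))); // true
        System.out.println(composite.matches(
                new TestDescriptor("com.example.SlowIntegrationTest", "shouldPass"))); // false
    }
}
```

The same composition idea applies regardless of which matcher types (annotation, environment variable, custom) you combine.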

When are we looking for the matching evaluators?

  • Before test instance post-processing starts
  • When instance post-processing fails
  • Before a test case starts
  • When a test case fails
  • When a test case passes
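The five lookup points above can be sketched as a small dispatch loop: at each event, the framework integration finds the matching evaluators and lets each of them update its statistics. The `LookupEvent`, `Evaluator` and `notifyMatching` names are assumptions for illustration, not the real Abort-Mission API:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the five lookup points; names are illustrative.
public class LookupEventsSketch {

    enum LookupEvent {
        BEFORE_INSTANCE_POST_PROCESSING,
        INSTANCE_POST_PROCESSING_FAILURE,
        BEFORE_TEST_CASE,
        TEST_CASE_FAILURE,
        TEST_CASE_SUCCESS
    }

    interface Evaluator {
        boolean matches(String testId);
        void collect(LookupEvent event, String testId);
    }

    // at each event, only the matching evaluators collect statistics
    static void notifyMatching(List<Evaluator> all, LookupEvent event, String testId) {
        all.stream()
                .filter(e -> e.matches(testId))
                .forEach(e -> e.collect(event, testId));
    }

    public static void main(String[] args) {
        List<String> log = new ArrayList<>();
        Evaluator integrationOnly = new Evaluator() {
            public boolean matches(String testId) {
                return testId.endsWith("IntegrationTest");
            }
            public void collect(LookupEvent event, String testId) {
                log.add(event + ":" + testId);
            }
        };
        notifyMatching(List.of(integrationOnly), LookupEvent.BEFORE_TEST_CASE, "FooIntegrationTest");
        notifyMatching(List.of(integrationOnly), LookupEvent.TEST_CASE_SUCCESS, "BarUnitTest"); // no match
        System.out.println(log); // [BEFORE_TEST_CASE:FooIntegrationTest]
    }
}
```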

Which evaluators will match when?

Evaluators can behave differently based on how their matchers are set up and what we are using for the lookup. For example, if the execution knows about the test method, we can rightfully expect it to use the method-level annotations as well, while if we only have a test instance at that moment, the matching can only use class-level configuration. Please refer to the table below for a comprehensive overview.

Evaluator selection approach based on configuration type

As you can see, we have quite a few cells marked as "Implementation dependent".

In case of instance post-processing failures, the differences between test frameworks can be significant. There are simply no guarantees that we can even differentiate these events from regular test failures, so we can only trust that each implementation will do its best to perform the lookup in these cases, but this is far from guaranteed.

The situation is different for composite matchers before test instance post-processing starts. This depends more on how you define your composite matcher: your composite might rely on method-level matchers, which won't be able to match at this stage. Just keep this in mind.
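The class-level versus method-level distinction can be illustrated with a small sketch: before post-processing, only the test instance is known, so a method-level annotation lookup has nothing to inspect yet. The `@SuppressAbort` annotation and the helper names are hypothetical, chosen only for this example:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.Optional;

// Hypothetical sketch: a method-level matcher cannot match while only the
// test instance is known. @SuppressAbort and these helpers are illustrative.
public class LookupContextSketch {

    @Retention(RetentionPolicy.RUNTIME)
    @Target({ElementType.TYPE, ElementType.METHOD})
    @interface SuppressAbort {}

    /** Class-level lookup works with just the test instance. */
    static boolean classLevelMatch(Object testInstance) {
        return testInstance.getClass().isAnnotationPresent(SuppressAbort.class);
    }

    /** Method-level lookup can only match once the method is known. */
    static boolean methodLevelMatch(Optional<Method> testMethod) {
        return testMethod.map(m -> m.isAnnotationPresent(SuppressAbort.class)).orElse(false);
    }

    @SuppressAbort
    static class AnnotatedTest {
        void plainCase() {}
    }

    public static void main(String[] args) throws Exception {
        AnnotatedTest instance = new AnnotatedTest();
        System.out.println(classLevelMatch(instance));          // true
        System.out.println(methodLevelMatch(Optional.empty())); // false: method not known yet
        Method m = AnnotatedTest.class.getDeclaredMethod("plainCase");
        System.out.println(methodLevelMatch(Optional.of(m)));   // false: method not annotated
    }
}
```

A composite matcher built purely from class-level, property, or environment matchers does not suffer from this limitation.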

What will happen if multiple evaluators match at the same time?

This is a very interesting topic. If you are wondering about this question, you have probably considered adding fine-grained configuration that defines each of the dependencies on its own. There is no need to worry: even if your test matches multiple evaluators, only those evaluators which are already making abort decisions on their own will mark the test as aborted. This is illustrated in the diagram below.

Since v2.0.0

Evaluation filtering logic in case of abort decisions
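The filtering shown in the diagram can be sketched as follows: among the evaluators matching a test, only those whose own statistics already justify an abort decision mark it as aborted. The `Evaluator` record and the simple failure-threshold rule are illustrative assumptions, not the real Abort-Mission implementation:

```java
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical sketch of the abort filtering; the names and the
// failure-threshold rule are illustrative, not the real implementation.
public class AbortFilteringSketch {

    record Evaluator(String name, int failures, int threshold) {
        // each evaluator makes its abort decision based on its own statistics
        boolean shouldAbort() {
            return failures >= threshold;
        }
    }

    // only the evaluators already deciding to abort will mark the test aborted
    static List<String> evaluatorsAborting(List<Evaluator> matching) {
        return matching.stream()
                .filter(Evaluator::shouldAbort)
                .map(Evaluator::name)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Evaluator> matching = List.of(
                new Evaluator("per-dependency", 3, 2), // past its threshold -> aborts
                new Evaluator("catch-all", 0, 5));     // healthy -> keeps counting
        System.out.println(evaluatorsAborting(matching)); // [per-dependency]
    }
}
```

The healthy evaluators keep collecting statistics for the same test; they simply do not abort it.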

If you have started to think about how this will affect the statistics, you are not alone. You can read more about the calculations on our dedicated page here.