Quick start

Create your first test scenario

The example below shows how to describe a test scenario with step methods.

# -*- coding: utf-8 -*-

import scenario


class CommutativeAddition(scenario.Scenario):

    SHORT_TITLE = "Commutative addition"
    TEST_GOAL = "Addition of two members, swapping orders."

    def __init__(self, a=1, b=3):
        scenario.Scenario.__init__(self)
        self.a = a
        self.b = b
        self.result1 = 0
        self.result2 = 0

    def step000(self):
        self.STEP("Initial conditions")

        if self.ACTION(f"Let a = {self.a}, and b = {self.b}"):
            self.evidence(f"a = {self.a}")
            self.evidence(f"b = {self.b}")

    def step010(self):
        self.STEP("a + b")

        if self.ACTION("Compute (a + b) and store the result as result1."):
            self.result1 = self.a + self.b
            self.evidence(f"result1 = {self.result1}")

    def step020(self):
        self.STEP("b + a")

        if self.ACTION("Compute (b + a) and store the result as result2."):
            self.result2 = self.b + self.a
            self.evidence(f"result2 = {self.result2}")

    def step030(self):
        self.STEP("Check")

        if self.ACTION("Compare result1 and result2."):
            pass
        if self.RESULT("result1 and result2 are the same."):
            self.assertequal(self.result1, self.result2)
            self.evidence(f"{self.result1} == {self.result2}")

Start by importing the scenario module:

# -*- coding: utf-8 -*-

import scenario

Within your module, declare a class that extends the base scenario.Scenario class:

class CommutativeAddition(scenario.Scenario):

Depending on your configuration (see ScenarioConfig.expectedscenarioattributes()), define your scenario attributes:

    SHORT_TITLE = "Commutative addition"
    TEST_GOAL = "Addition of two members, swapping orders."

Optionally, define an initializer that declares member attributes, which may condition the way the scenario works:

    def __init__(self, a=1, b=3):
        scenario.Scenario.__init__(self)
        self.a = a
        self.b = b
        self.result1 = 0
        self.result2 = 0
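Such initializer parameters make the scenario reusable with other operands (the subscenario mechanism, introduced later, builds on this). The snippet below only illustrates how the parameters feed the member attributes; a plain stand-in class replaces scenario.Scenario so that it is self-contained:

```python
class Scenario:
    """Stand-in for scenario.Scenario, for illustration only."""
    def __init__(self):
        pass

class CommutativeAddition(Scenario):
    def __init__(self, a=1, b=3):
        Scenario.__init__(self)
        # The constructor arguments condition the way the scenario works.
        self.a = a
        self.b = b

default = CommutativeAddition()
custom = CommutativeAddition(a=-2, b=7)
print(default.a, default.b)  # 1 3
print(custom.a, custom.b)    # -2 7
```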

Then, define the test steps. Test steps are defined with methods starting with the step pattern:

    def step000(self):
    def step010(self):
    def step020(self):
    def step030(self):

The steps are executed in the alphabetical order of their method names, which is why regular steps are usually numbered.
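Because the ordering is alphabetical rather than numeric, zero-padding the step numbers matters. A quick check, independent of the library:

```python
# Alphabetical (lexicographic) ordering of method names:
# without zero-padding, "step10" sorts before "step2".
unpadded = sorted(["step2", "step10", "step1"])
padded = sorted(["step020", "step100", "step010"])
print(unpadded)  # ['step1', 'step10', 'step2']
print(padded)    # ['step010', 'step020', 'step100']
```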

Give the step descriptions at the beginning of each step method by calling the StepUserApi.STEP() method:

        self.STEP("Initial conditions")

Define actions by calling the StepUserApi.ACTION() method:

        if self.ACTION(f"Let a = {self.a}, and b = {self.b}"):

Define expected results by calling the StepUserApi.RESULT() method:

        if self.RESULT("result1 and result2 are the same."):

Actions and expected results shall be used as the condition of an if statement. The related test script should be placed below these if statements:

        if self.ACTION("Compute (a + b) and store the result as result1."):
            self.result1 = self.a + self.b

This makes it possible for the scenario library to call the step methods for different purposes:

  1. to peek at all the action and expected result descriptions, without executing the test script:

    in that case, the StepUserApi.ACTION() and StepUserApi.RESULT() methods return False, which prevents the test script from being executed;

  2. to execute the test script:

    in that case, these methods return True, which lets the test script be executed.
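This two-pass gating can be sketched as follows. This is a minimal illustration, not the library's actual implementation; the point is only that a single boolean return value both collects the descriptions and decides whether the script under the if statement runs:

```python
class MiniStepApi:
    """Minimal sketch of the two execution modes (assumption: the real
    ACTION()/RESULT() methods record descriptions as a side effect;
    only the boolean gating is illustrated here)."""

    def __init__(self, doc_only):
        self.doc_only = doc_only
        self.descriptions = []

    def ACTION(self, description):
        # Always collect the description...
        self.descriptions.append(("ACTION", description))
        # ...but only let the test script run in execution mode.
        return not self.doc_only

executed = []

# Documentation pass: description collected, script skipped.
doc = MiniStepApi(doc_only=True)
if doc.ACTION("Compute (a + b) and store the result as result1."):
    executed.append("a + b")
print(executed)  # []

# Execution pass: the script under the if statement runs.
run = MiniStepApi(doc_only=False)
if run.ACTION("Compute (a + b) and store the result as result1."):
    executed.append("a + b")
print(executed)  # ['a + b']
```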

Expected result sections usually use the assertion methods provided by the scenario.Assertions class:

        if self.RESULT("result1 and result2 are the same."):
            self.assertequal(self.result1, self.result2)

Finally, StepUserApi.evidence() calls register test evidence with the test results. Such calls may be placed under either an action or an expected result if block.

        if self.ACTION("Compute (a + b) and store the result as result1."):
            self.result1 = self.a + self.b
            self.evidence(f"result1 = {self.result1}")
        if self.RESULT("result1 and result2 are the same."):
            self.assertequal(self.result1, self.result2)
            self.evidence(f"{self.result1} == {self.result2}")

Your scenario is now ready to execute.

Scenario execution

A scenario must be executed with a launcher script.

A default launcher script is provided within the ‘bin’ directory (from the main directory of the scenario library):

$ ./bin/run-test.py --help
usage: run-test.py [-h] [--config-file CONFIG_PATH] [--config-value KEY VALUE]
                   [--debug-class DEBUG_CLASS] [--doc-only]
                   [--issue-level-error ISSUE_LEVEL]
                   [--issue-level-ignored ISSUE_LEVEL]
                   [--json-report JSON_REPORT_PATH]
                   [--extra-info ATTRIBUTE_NAME]
                   SCENARIO_PATH [SCENARIO_PATH ...]

Scenario test execution.

positional arguments:
  SCENARIO_PATH         Scenario script(s) to execute.

optional arguments:
  -h, --help            Show this help message and exit.
  --config-file CONFIG_PATH
                        Input configuration file path. This option may be
                        called several times.
  --config-value KEY VALUE
                        Single configuration value. This option may be called
                        several times.
  --debug-class DEBUG_CLASS
                        Activate debugging for the given class.
  --doc-only            Generate documentation without executing the test(s).
  --issue-level-error ISSUE_LEVEL
                        Define the issue level from and above which known
                        issues should be considered as errors. None by
                        default, i.e. all known issues are considered as
                        warnings.
  --issue-level-ignored ISSUE_LEVEL
                        Define the issue level from and under which known
                        issues should be ignored. None by default, i.e. no
                        known issue ignored by default.
  --json-report JSON_REPORT_PATH
                        Save the report in the given JSON output file path.
                        Single scenario only.
  --extra-info ATTRIBUTE_NAME
                        Scenario attribute to display for extra info when
                        displaying results. Applicable when executing several
                        tests. This option may be called several times to
                        display more info.

Tip

See the launcher script extension section in order to define your own launcher if needed.

Give your scenario script as a positional argument to execute it:

$ ./bin/run-test.py ./demo/commutativeaddition.py
SCENARIO 'demo/commutativeaddition.py'
------------------------------------------------


STEP#1: Initial conditions (demo/commutativeaddition.py:18:CommutativeAddition.step000)
------------------------------------------------
    ACTION: Let a = 1, and b = 3
  EVIDENCE:   -> a = 1
  EVIDENCE:   -> b = 3

STEP#2: a + b (demo/commutativeaddition.py:25:CommutativeAddition.step010)
------------------------------------------------
    ACTION: Compute (a + b) and store the result as result1.
  EVIDENCE:   -> result1 = 4

STEP#3: b + a (demo/commutativeaddition.py:32:CommutativeAddition.step020)
------------------------------------------------
    ACTION: Compute (b + a) and store the result as result2.
  EVIDENCE:   -> result2 = 4

STEP#4: Check (demo/commutativeaddition.py:39:CommutativeAddition.step030)
------------------------------------------------
    ACTION: Compare result1 and result2.
    RESULT: result1 and result2 are the same.
  EVIDENCE:   -> 4 == 4

END OF 'demo/commutativeaddition.py'
------------------------------------------------
             Status: SUCCESS
    Number of STEPs: 4/4
  Number of ACTIONs: 4/4
  Number of RESULTs: 1/1
               Time: HH:MM:SS.mmmmmm

Note

The output presented above is a simplified version for documentation concerns. By default, test outputs are colored, and log lines give their timestamp (see log colors and log date/time sections).

Test code reuse

In order to quickly get a first test case running, the example above defines a scenario with step methods.

As introduced in the purpose section, the scenario framework is best used with step objects for test code reuse.

If you’re interested in test code reuse, go straight to the step objects or subscenarios sections.

Otherwise, dive into the advanced menu for further information on scenario features.