Type-safe StrictYAML Python integration testing framework. Tests written with it can be run on their own or as pytest tests.
Project | Storytests | Python code | Doc template | Autogenerated docs |
---|---|---|---|---|
Website | add todo, correct spelling | engine.py | docstory.yml | Add todo, Correct my spelling |
REST API | add todo, correct spelling | engine.py | docstory.yml | Add todo, Correct my spelling |
Interactive command line app | add todo, correct spelling | test_integration.py | docstory.yml | Add todo, Correct my spelling |
A Python API | add todo, correct spelling | test_integration.py | docstory.yml | Add todo, Correct my spelling |
example.story:
Logged in:
  given:
    website: /login               # preconditions
  steps:
  - Form filled:
      username: AzureDiamond
      password: hunter2
  - Clicked: login

Email sent:
  about: |
    The most basic email with no subject, cc or bcc
    set.
  based on: logged in             # inherits from and continues from test above
  following steps:
  - Clicked: new email
  - Form filled:
      to: Cthon98@aol.com
      contents: |                 # long form text
        Hey guys,
        I think I got hacked!
  - Clicked: send email
  - Email was sent
engine.py:
from hitchstory import BaseEngine, GivenDefinition, GivenProperty
from hitchstory import Failure, strings_match
from strictyaml import Str


class Engine(BaseEngine):
    given_definition = GivenDefinition(
        website=GivenProperty(Str()),
    )

    def __init__(self, rewrite=False):
        self._rewrite = rewrite

    def set_up(self):
        print(f"Load web page at {self.given['website']}")

    def form_filled(self, **textboxes):
        for name, contents in sorted(textboxes.items()):
            print(f"Put {contents} in {name}")

    def clicked(self, name):
        print(f"Click on {name}")

    def failing_step(self):
        raise Failure("This was not supposed to happen")

    def error_message_displayed(self, expected_message):
        """Demonstrates steps that can rewrite themselves."""
        actual_message = "error message!"
        try:
            strings_match(expected_message, actual_message)
        except Failure:
            if self._rewrite:
                self.current_step.rewrite("expected_message").to(actual_message)
            else:
                raise

    def email_was_sent(self):
        print("Check email was sent!")
>>> from hitchstory import StoryCollection
>>> from pathlib import Path
>>> from engine import Engine
>>>
>>> StoryCollection(Path(".").glob("*.story"), Engine()).named("Email sent").play()
RUNNING Email sent in /path/to/working/example.story ... Load web page at /login
Put hunter2 in password
Put AzureDiamond in username
Click on login
Click on new email
Put Hey guys,
I think I got hacked!
 in contents
Put Cthon98@aol.com in to
Click on send email
Check email was sent!
SUCCESS in 0.1 seconds.
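The rewrite flag accepted by Engine.__init__ above is what drives self-rewriting steps. A minimal sketch of switching it on, using the same collection API as the run above:

>>> StoryCollection(Path(".").glob("*.story"), Engine(rewrite=True)).named("Email sent").play()

For a story that exercises error_message_displayed, a mismatched expected_message is then rewritten in the story file to the actual message instead of raising Failure.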
$ pip install hitchstory
Help is available if you ask questions in these places: GitHub discussions | GitHub issues (not just for bugs) | Slack channel
Every feature of this library is documented and listed below. The library is tested and documented with itself.
If you already have pytest set up, you can quickly and easily write a test using hitchstory that runs alongside your other pytest tests:
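Roughly like this, as a minimal sketch assuming the example.story and engine.py shown above sit next to the test file (the test file and function names are illustrative):

# test_stories.py (illustrative name)
from pathlib import Path

from engine import Engine
from hitchstory import StoryCollection

collection = StoryCollection(Path(__file__).parent.glob("*.story"), Engine())


def test_email_sent():
    # Plays the "Email sent" story; pytest reports a failure if any step raises.
    collection.named("Email sent").play()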
How to use the different features of the story engine:
- Hiding stacktraces for expected exceptions
- Given preconditions
- Gradual typing of story steps
- Match two JSON snippets
- Match two strings and show diff on failure
- Extra story metadata - e.g. adding JIRA ticket numbers to stories
- Story with parameters
- Story that rewrites given preconditions
- Story that rewrites itself
- Story that rewrites the sub key of an argument
- Raising a Failure exception to conceal the stacktrace
- Arguments to steps
- Strong typing (see the sketch after this list)
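As a rough illustration of typed steps, the sketch below assumes hitchstory provides a validate decorator for step arguments (an assumption here; the step and argument names are hypothetical), paired with StrictYAML validators like the Str() used in engine.py above:

from hitchstory import BaseEngine, validate
from strictyaml import Int, Str


class TypedEngine(BaseEngine):
    # The decorator maps StrictYAML validators onto step arguments, so
    # number_of_attachments arrives as an int rather than a YAML string.
    @validate(to=Str(), number_of_attachments=Int())
    def email_composed(self, to, number_of_attachments):
        print(f"Compose email to {to} with {number_of_attachments} attachments")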
How to autogenerate documentation from your tests:
Inheriting stories from each other:
- Inherit one story from another simply
- Story inheritance - given mapping preconditions overridden
- Story inheritance - override given scalar preconditions
- Story inheritance - parameters
- Story inheritance - steps
- Variations
Running the stories in different ways (a sketch follows this list):
- Continue on failure when playing multiple stories
- Flaky story detection
- Play multiple stories in sequence
- Run one story in collection
- Shortcut lookup for story names
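As a rough sketch, the same collection shown earlier can play every story in sequence; ordered_by_name is an assumption about the collection API, the rest matches the run above:

>>> StoryCollection(Path(".").glob("*.story"), Engine()).ordered_by_name().play()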
Best practices, how the tool was meant to be used, etc.
- Is HitchStory a BDD tool? How do I do BDD with hitchstory?
- Complementary tools
- Domain Appropriate Scenario Language (DASL)
- Executable specifications
- Flaky Tests
- The Hermetic End to End Testing Pattern
- ANTIPATTERN - Analysts writing stories for the developer
- Separation of Test Concerns
- Snapshot Test Driven Development (STDD)
- Test Artefact Environment Isolation
- Test concern leakage
- Tests as an investment
- What is the difference between a test and a story?
- The importance of test realism
- Testing non-deterministic code
- Specification Documentation Test Triality
Design decisions are justified here:
- Declarative User Stories
- Why does hitchstory mandate the use of given but not when and then?
- Why is inheritance a feature of hitchstory stories?
- Why does hitchstory not have an opinion on what counts as interesting to "the business"?
- Why does hitchstory not have a command line interface?
- Principles
- Why does HitchStory have no CLI runner - only a pure python API?
- Why Rewritable Test Driven Development (RTDD)?
- Why does HitchStory use StrictYAML?
HitchStory is not the only integration testing framework. This is how it compares with the others:
- Why use Hitchstory instead of Behave, Lettuce or Cucumber (Gherkin)?
- Why not use the Robot Framework?
- Why use hitchstory instead of a unit testing framework?
If you want to use HitchStory without pytest:
Miscellaneous docs about behavior of the framework: