Lettuce - scenario based tests for Django and other frameworks
Lettuce is a scenario-driven test system used in behavior driven development (BDD). Scenarios are written in expressive, plain English and can be read and understood even by non-programmers. A test system like Lettuce parses the scenarios and executes the required test code for each step.
Lettuce can be used with Django, with other web frameworks, or with plain scripts. Installation is the usual pip install lettuce; for Django you also have to add 'lettuce.django' to INSTALLED_APPS.
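The Django setup boils down to a one-line settings change; in this fragment every app other than 'lettuce.django' is just a placeholder for whatever your project already lists:

```python
# settings.py (fragment) -- apps other than 'lettuce.django' are placeholders
INSTALLED_APPS = (
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'myapp',              # your own app with a features/ folder
    'lettuce.django',     # enables the "harvest" management command
)
```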
For each app you want to write Lettuce tests for, create a subfolder called features. In that folder, create pairs of identically named *.feature and *.py files; for example index.feature and index.py would do. The ".feature" file contains scenarios, while the ".py" file contains the Python code executed for the steps used in those scenarios. Here is a basic .feature file:
```
Feature: Main Page should show custom welcome messages to anonymous and logged in users.

    Scenario: Main Page works
        Given I access url "/"
        Then Server sends a 200 response
```
Each feature file starts with a feature description, which is not executed by Lettuce; the scenarios are. In our example the scenario states that the main page works. To check it we have two steps: when I access the main page, the server must respond with a 200 response code.
Scenarios may be, and usually are, longer. They start with a "Given" part and end with a "Then" part that checks the result of the code executed for the preceding steps:
```
Scenario: ...
    Given ...
    Then ...

Scenario: ...
    Given ...
    And ...
    And ...
    Then ...
```
```python
from lettuce import *
from django.test.client import Client
from nose.tools import assert_equals

@before.all
def set_browser():
    world.browser = Client()

@step(r'I access url "(.*)"')
def access_url(step, url):
    world.response = world.browser.get(url)

@step(r'Server sends a ([0-9]+) response')
def compare_server_response(step, expected_code):
    code = world.response.status_code
    assert_equals(int(expected_code), code)
```
Each step must be mapped to a Python function. To do that we use the @step decorator, which takes a regular expression matching the step text as an argument. Variables from the step text are captured via regular expression groups; in the example the URL and the response code are captured that way. There can also be functions run before/after each scenario or before/after all tests.
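The regular-expression mapping can be illustrated with plain re. This is a standalone sketch of the mechanism, not Lettuce's actual internals:

```python
import re

# Pattern identical to the one passed to @step in the example above.
STEP_PATTERN = r'Server sends a ([0-9]+) response'

def match_step(sentence):
    """Return the regex groups Lettuce would pass to the step function."""
    match = re.match(STEP_PATTERN, sentence)
    if match is None:
        raise ValueError('no step matches: %r' % sentence)
    return match.groups()

# Captured values arrive as strings, hence the int() cast in the step above.
print(match_step('Server sends a 200 response'))  # ('200',)
```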
The "world" object is persistent between step calls. In @before.all we store the Django test client under world.browser, and in the scenario steps we use it to perform our tests.
To run the tests use the python manage.py harvest command. By default Lettuce runs its own server on port 8000 and will fail if the development server already occupies that port. Lettuce searches for tests in every app listed in INSTALLED_APPS and executes those it finds:
```
Feature: Main Page should show custom welcome messages to anonymous and logged in users. # myapp/features/index.feature:1

  Scenario: Main Page works for anonymous user  # myapp/features/index.feature:4
    Given I access url "/"                      # myapp/features/index.py:13
    Then Server sends a 200 response            # myapp/features/index.py:19

1 feature (1 passed)
1 scenario (1 passed)
2 steps (2 passed)
```
To avoid duplicating basic or frequently used steps, you can use a terrain.py file located either in a features folder (available to all scenarios in that folder) or in the project main folder (available to every application's tests). I can put the @before.all call in such a file, as it's always needed:
```python
from django.test.client import Client
from lettuce import before
from lettuce import world

@before.all
def initial_setup():
    world.browser = Client()
```
Because of Google App Engine and other limiting environments, Lettuce by default does not use the Django test database or test environment. You may find various create/teardown configurations for Lettuce on the net, but they don't seem to work nicely with current Django and Lettuce versions. You can create a separate test settings file and use it for Lettuce runs, or use/wait for the Django testserver pull request to be merged; with that change no create/teardown actions are needed.
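One possible shape for such a test settings file — the file name, the sqlite database, and the way it is passed to harvest are my assumptions, not something Lettuce mandates:

```python
# test_settings.py (assumed name) -- run Lettuce against it with:
#   python manage.py harvest --settings=test_settings
from settings import *  # start from the regular project settings

# Point at a throwaway database so Lettuce runs never touch real data.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': 'lettuce_test.db',
    },
}
```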
Using Steven's changes I could easily extend the tests to use the database:
```
Feature: Main Page should show custom welcome messages to anonymous and logged in users.

    Scenario: Main Page works for anonymous user
        Given I access url "/"
        Then Server sends a 200 response
        And Page displays "Hi!" response

    Scenario: Main Page works for authenticated user
        Given I am a logged-in user
        And I access url "/"
        Then Server sends a 200 response
        And Page displays "Welcome back!" response
```
```python
from django.contrib.auth.models import User
from lettuce import *
from nose.tools import assert_equals

@step(r'I access url "(.*)"')
def access_url(step, url):
    world.response = world.browser.get(url)

@step(r'Server sends a ([0-9]+) response')
def compare_server_response(step, expected_code):
    code = world.response.status_code
    assert_equals(int(expected_code), code)

@step(r'Page displays "(.*)" response')
def compare_response_content(step, expected_response):
    assert_equals(expected_response, world.response.content)

@step(r'I am a logged-in user')
def login_user(step, **kwargs):
    User.objects.create_user('test', 'email@example.com', 'testpass')
    world.browser.login(username='test', password='testpass')
```
In this example I'm creating a "test" user (which could be done better with Factory Boy) and logging him in. I'm also using world.response.content to inspect the server response (a full HTML page in a real application; in this case the view just returns an HttpResponse with some text).
Vanilla Lettuce would use the database from the settings file and would not clean up the inserted data, so you would quite quickly run into integrity errors on the user table.
Splinter is a handy tool that can replace the Django test Client with a similar object that uses Selenium to do browser-based tests:
```python
from lettuce import after
from lettuce import before
from lettuce import world
from splinter import Browser

@before.all
def initial_setup():
    world.browser = Browser()

@after.all
def teardown_browser(total):
    world.browser.quit()
```
Selenium tests are handy for acceptance and frontend testing, where you can check what the user would actually see. With Splinter the steps could look like this:
```python
from lettuce import *
from lettuce.django import django_url
from lxml import html
from nose.tools import assert_true

@step(r'I access url "(.*)"')
def access_url(step, url):
    world.response = world.browser.visit(django_url(url))

@step(r'Page displays "(.*)" response')
def compare_response_content(step, expected_response):
    assert_true(world.browser.is_text_present(expected_response))
```
I had to remove a few steps, as you can no longer just log a user in from the test code when driving a real browser.
django-jenkins can handle Lettuce tests. Just add 'django_jenkins.tasks.lettuce_tests' to JENKINS_TASKS in your settings file. In the Jenkins job configuration, under "Publish JUnit test result report", add "reports/lettuce.xml" (so that you end up with something like "reports/junit.xml, reports/lettuce.xml").
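The settings side of that is a one-entry addition to JENKINS_TASKS; the other task shown is only an example of what may already be there:

```python
# settings.py (fragment) -- django-jenkins task list
JENKINS_TASKS = (
    'django_jenkins.tasks.run_pyflakes',   # example of an existing task
    'django_jenkins.tasks.lettuce_tests',  # writes reports/lettuce.xml
)
```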
That should connect the Lettuce tests with Jenkins - but do verify that Jenkins actually notices when a Lettuce test fails.
- Web development fun with Lettuce and Django
- The Biggest Mistakes Django Developers Make When Using Lettuce
- Django Full Stack Testing and BDD with Lettuce and Splinter
- Testing Django - part 2 - lettuce