This test suite is deprecated, but you can still use its underlying concept and borrow the library for other, similar purposes.
XLT is a tool for functional test automation as well as load and performance testing. This test suite is meant to be the base for functional test automation with visual verification and can be easily combined with any existing XLT test suite.
Validation is essential for test automation. Do not simply click around and be satisfied as long as buttons and form fields are there. Really verify that the content of the page matches your expectations.
Of course this creates a certain dilemma. The more you validate, the more easily your test suite breaks because of small changes. But the less you validate, the fewer regressions you find, and the more likely you are to get a wrong impression of the quality of your implementation.
Normal test automation usually does not verify visually; it just verifies content or certain properties of elements. So you will notice that a button disappeared, but you will not see that the button is suddenly right-aligned instead of left-aligned, larger or smaller, or has changed its color.
Visual regression testing to the rescue!
You might have already read about using Selenium for this: Visual Regression Testing with PhantomCSS, Catch CSS curve balls, or CSS regression testing.
We got inspired by these ideas, but wanted to take the idea a little further.
This test suite offers a way to take a screenshot, organize it for later, train masks for data that is changing all the time, and compare the baseline to the screenshots with applied masks in future runs.
This is a list of the test suite features you can use to visually verify your implementation during automated testing.
- Screenshots: The suite takes screenshots where the visual assertion module is inserted. These can be different from the regular screenshot automatically taken by XLT, or in case you disabled automatic screenshots, could be the only area where you take a visual look.
- Any Browser: Supports any browser that can take screenshots and is supported by WebDriver (see Multi-Browser-Suite).
- Difference Image: Creates a difference image to easily see the changes.
- Marked Image: Creates an image with marked areas to highlight changes in the screenshot (there are two ways to mark things).
- Training Mode: Learns dynamic areas of the screen and automatically blacks them out/masks them, such as names, search phrases, addresses, and so forth. You can manually update the mask to include other areas as well.
- Thresholds: The fuzzy modes support thresholds to control the sensitivity of the comparison.
- Compare Modes: Visual Assert features three modes to compare: Exact, Color Fuzzy, and Fuzzy.
- Open Source: See and improve the code, share it with others. MIT license.
This test suite requires that you download XLT from https://www.xceptance.com/en/xlt/. Don't be shy, it is freely available for test automation purposes; we don't even ask for your email. Make sure you download the full package. The XLT Script Developer alone is not sufficient.
Learn the basics of automating with XLT before you take it to the next level with Visual Assertions. Please study the user manual and take a look at the installation and configuration part of the SiteGenesis-Community-TestSuite.
We will use the bundled demo applications Posters to demonstrate how visual assertions work and can be used.
- Posters: So fire up the Posters test suite and open this test suite in your XLT Script Developer.
- Demo Test Case: You will find a slightly modified copy of the TGuestOrder test case of the XLT demo test suite Posters under `posters > functional > scenarios`.
- VisualAssertion Module: Check `posters > functional > modules > VisualAssertion`. This module is a Java module that will later take care of calling the Java class `com.xceptance.xlt.visualassertion.util.VisualAssertion`, which does the heavy lifting. Please note that this module won't be executed in Script Developer; you have to run it as a JUnit test.
- Now import the test suite into Eclipse or any other of your favourite IDEs. Make sure you follow the guides already mentioned before to get it working.
- Open `src/posters/functional/scenarios/TGuestOrder.java`. This is the JUnit test we are about to execute. It references the script we designed before, so it is supposed to look pretty empty (see the sketch after this list).
- Run it as a JUnit test. The test suite should be configured to open Firefox and run the test case.
- Check the `results` folder of your test suite and you will find the usual result browser folder `TGuestOrder` and `TGuestOrder.html` with the default screenshots.
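For orientation, such a generated JUnit wrapper typically looks roughly like the following. This is a minimal sketch based on the XLT scripting API; the exact content of the file in your copy of the suite may differ.

```java
package posters.functional.scenarios;

import com.xceptance.xlt.api.engine.scripting.AbstractScriptTestCase;
import com.xceptance.xlt.api.engine.scripting.ScriptName;

/**
 * Thin JUnit wrapper that runs the scripted test case. All actions and
 * visual assertions live in the referenced script, hence the empty body.
 */
@ScriptName("posters.functional.scenarios.TGuestOrder")
public class TGuestOrder extends AbstractScriptTestCase
{
}
```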
So let's get to the interesting pieces that differentiate this test suite from any other. There is now a folder called `visualassertion`. The directory structure underneath `visualassertion` is composed of subdirectories and files. Please see this example.
- localmachine: This is a subdirectory name you can specify in the properties to distinguish your machine from others. See [Challenges and Pitfalls](Challenges and Pitfalls).
- TGuestOrder: This is the test case you just executed.
- firefox: The browser used, because different browsers render screens differently. The WebDriver interface does not tell us the OS, so we have to use the first directory to differentiate that.
- 45.0: The version of Firefox used. Even small version changes of a browser can change the way it renders the page, and often this is very hard for the human eye to spot.
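Putting the levels together, the resulting layout might look like this (illustrative only; the file names below the version directory depend on your test case):

```
visualassertion/
└── localmachine/
    └── TGuestOrder/
        └── firefox/
            └── 45.0/
                └── ... (baseline screenshots and trained masks)
```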
Visual Assert has its own config file `visualassertion.properties`. The project property file `project.properties` includes it with the statement:

```
com.xceptance.xlt.propertiesInclude.1 = visualassertion.properties
```

If you include other files using the same mechanism, please make sure you change the number `1` to the next valid one.
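For example, with one additional include the numbering could look like this (`my-other.properties` is just a placeholder name):

```
com.xceptance.xlt.propertiesInclude.1 = visualassertion.properties
com.xceptance.xlt.propertiesInclude.2 = my-other.properties
```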
The file lists a couple of important properties we want to discuss next. The full property documentation is part of the property file itself.
You can pick from three algorithms right now: exact, color fuzzy, and fuzzy. See the details below.
If you set the training mode property to `true`, the suite will learn masks and not raise assertion failures.
If you have masks with open gaps, this setting can help to close them and cover a larger area (dilation).
The mask size determines how much black is applied around every detected difference. The default is 10x10 pixels, so the mask covers a 10x10 area for every differing pixel. The masked spots can overlap, so two adjoining differing pixels produce overlapping masks that merge into one slightly larger area.
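To make these settings concrete, here is a sketch of what such a configuration could look like in `visualassertion.properties`. The keys for the algorithm, training mode, and mask closing are assumptions for illustration; only the block size and tolerance keys are quoted elsewhere in this document, so verify all names against the documented keys in the property file itself.

```
# Assumed key names for illustration only; check visualassertion.properties.
com.xceptance.xlt.visualassertion.algorithm = FUZZY      # assumed key and values (EXACT, COLORFUZZY, FUZZY)
com.xceptance.xlt.visualassertion.trainingsmode = true   # assumed key: learn masks instead of asserting
com.xceptance.xlt.visualassertion.mask.close = true      # assumed key: close gaps in trained masks

# These keys appear later in this document:
com.xceptance.xlt.visualassertion.fuzzy.blocksize.xy = 10
com.xceptance.xlt.visualassertion.tolerance.pixels = 0.1
```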
To best utilize visual verification, we suggest either creating a dedicated test case or copying one of your most important ones and modifying it. Most important is that you add dedicated points where visual verification should take place.
These places should represent typical screens. Avoid screens that are dynamic in the sense of animations. Please keep in mind that the screenshots and the comparisons take time, hence do not add too many verification points.
After the first training round, you should verify the screenshots taken and check whether they are roughly correct. Visually inspect them before you start to train the masks.
In the next step, you let the test cases run a number of times in training mode and let the tool create the masks. Depending on the amount of changing data and its structure, one run could be enough or you might have to run it as often as 10 times.
Also, try to play with the mask closing setting as well as the closing size in case you do not achieve satisfying training results.
You might not want to trust the masks blindly at first, especially masks that are set where you do not expect any mask to exist. In most cases this indicates an application or test case issue.
Disable the training mode and start running some tests to verify that the masks are properly trained and nothing unexpected comes up.
Because the baseline images can differ when you run the tests in your CI environment compared to your local machine, you might want to activate training for some runs there. Make sure the application under test does not change in the meantime. Once the masks as well as the initial baseline screenshots are set, you can disable training mode and see how it works out.
Any difference will be reported.
The fuzziness is achieved by tolerating a small number of differing pixels within every small block of the picture. The property `com.xceptance.xlt.visualassertion.fuzzy.blocksize.xy` determines the size of each of these blocks; the entire screenshot is divided into them.
So with 10x10 blocks, a fuzzy factor (`com.xceptance.xlt.visualassertion.tolerance.pixels`) of 0.1 indicates that 10% of the pixels can be different, i.e. we tolerate a difference of up to 10 pixels per block.
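As a plain illustration of this block logic, a comparison along these lines could look like the following Java sketch. It shows the concept only and is not the suite's actual implementation:

```java
import java.awt.image.BufferedImage;

/**
 * Minimal sketch of a block-wise fuzzy comparison as described above.
 * Illustrative only; the suite's real code may differ.
 */
public final class FuzzyCompareSketch
{
    // Corresponds to com.xceptance.xlt.visualassertion.fuzzy.blocksize.xy
    private static final int BLOCK_SIZE = 10;
    // Corresponds to com.xceptance.xlt.visualassertion.tolerance.pixels
    private static final double TOLERANCE = 0.1;

    public static boolean matches(final BufferedImage baseline, final BufferedImage current)
    {
        if (baseline.getWidth() != current.getWidth() || baseline.getHeight() != current.getHeight())
        {
            return false; // differing sizes always fail (see the pitfalls section)
        }

        for (int by = 0; by < baseline.getHeight(); by += BLOCK_SIZE)
        {
            for (int bx = 0; bx < baseline.getWidth(); bx += BLOCK_SIZE)
            {
                // count differing pixels inside this block
                int differing = 0;
                int total = 0;
                for (int y = by; y < Math.min(by + BLOCK_SIZE, baseline.getHeight()); y++)
                {
                    for (int x = bx; x < Math.min(bx + BLOCK_SIZE, baseline.getWidth()); x++)
                    {
                        total++;
                        if (baseline.getRGB(x, y) != current.getRGB(x, y))
                        {
                            differing++;
                        }
                    }
                }
                // a full 10x10 block with tolerance 0.1 permits up to 10 differing pixels
                if (differing > total * TOLERANCE)
                {
                    return false;
                }
            }
        }
        return true;
    }
}
```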
The color comparison uses an idea from http://www.compuphase.com/cmetric.htm to include the human factor, so smaller color differences yield smaller percentage values than larger ones. Play with the settings to see what fits best.
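The referenced page describes a low-cost approximation of perceptual color distance, often called the "redmean" metric. Here is a Java sketch of that formula; whether the suite implements it exactly this way is an assumption:

```java
/**
 * Weighted color distance from http://www.compuphase.com/cmetric.htm.
 * Weights shift with the mean red value to approximate human perception.
 */
static double colorDistance(final int rgb1, final int rgb2)
{
    final int r1 = (rgb1 >> 16) & 0xFF, g1 = (rgb1 >> 8) & 0xFF, b1 = rgb1 & 0xFF;
    final int r2 = (rgb2 >> 16) & 0xFF, g2 = (rgb2 >> 8) & 0xFF, b2 = rgb2 & 0xFF;

    final double rMean = (r1 + r2) / 2.0; // "red mean" weighting factor
    final int dr = r1 - r2, dg = g1 - g2, db = b1 - b2;

    return Math.sqrt((2 + rMean / 256) * dr * dr
                     + 4 * dg * dg
                     + (2 + (255 - rMean) / 256) * db * db);
}
```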
Let's talk briefly about masks and training. When you have a test case with variable data (and you definitely should have these), the screenshots often differ between runs: data such as names, prices, and emails is displayed, so every comparison would fail.
The training mode will automatically create masks so that data variance matters less. Of course there are limits to what masking can do, but we will discuss this later.
Here is an example: you see two variants of the changing data and how one pass of masking has blackened them out.
By varying the size of the mask box (`...mark.blocksize.x` and `...mark.blocksize.y`) you can adjust the amount of masking per training run. Don't make it too big, to avoid covering areas that are not changing. Don't set it too small either, because then you will need a lot of passes to get the masks trained and your test cases stable.
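To make the mechanics concrete, here is a rough Java sketch of how one training pass could black out a block around a differing pixel. The block sizes correspond to the `...mark.blocksize.x/y` properties; the actual code in the suite may differ:

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

/**
 * Sketch: paint a black mask block centered on a differing pixel,
 * clipped to the image bounds. Illustrative only.
 */
static void maskAround(final BufferedImage mask, final int x, final int y,
                       final int blockSizeX, final int blockSizeY)
{
    final Graphics2D g = mask.createGraphics();
    try
    {
        g.setColor(Color.BLACK);
        // Center the block on the differing pixel and clip it to the image.
        final int left = Math.max(0, x - blockSizeX / 2);
        final int top = Math.max(0, y - blockSizeY / 2);
        final int width = Math.min(blockSizeX, mask.getWidth() - left);
        final int height = Math.min(blockSizeY, mask.getHeight() - top);
        g.fillRect(left, top, width, height);
    }
    finally
    {
        g.dispose();
    }
}
```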
If your browser changes, visual assert automatically takes care of that and starts a new subdirectory. You might want to train it again afterwards.
If you update your application under test and the changes to the screens are legit, you have to redo the baseline screenshots and the masks.
Simply delete the baseline directory and the masks, or use a new identifier (`com.xceptance.xlt.visualassertion.ID`). Make sure you don't forget to train again: because the application has changed, you probably cannot reuse the old masks.
If you know which screen changed, you can clear only that data and keep the rest.
If your test case changed, and hence the naming and numbering of the screenshots, you should remove all baseline screenshots and masks from the changed position onward, including the new position of course.
All baseline screenshots before that position can be reused, because their names and numbers are still the same.
If screenshots keep changing when you run the test multiple times, and the library starts to mask pixels in strange areas during training, you might be seeing problematic subpixel anti-aliasing by Windows. The ClearType font rendering seems to introduce sporadic pixel differences.
Make sure that baseline, masking, and later runs are done on the same machine, same OS, and same browser version to prevent typical problems due to rendering differences at the OS or browser level. Later versions of this library might solve this problem, or if you like, solve it and send us a pull request.
Because masking excludes changing areas, you might want to pick screens that don't tend to be masked completely. For instance, when your homepage is content-heavy and changes often, you are left with only the header section and the rest is masked; that header is probably identical to other pages.
You can edit the masks and simply add areas manually, in case the training takes too long or is not precise enough.
Differences in size cannot be masked, hence a changing screenshot size will always cause assertions to fail.
This test suite is work in progress and we are looking for active participation and ideas.
Future adaptations and additions to the comparison algorithms are planned, allowing for a more consistent page comparison that adjusts better to dynamic content changes.