To shorten the development cycle of your web application, you need to start testing it in the early stages of the project. It seems obvious, but many enterprise IT organizations haven’t adopted agile testing methodologies, which costs them dearly. JavaScript is a dynamically typed, interpreted language; there is no compiler to help identify errors, as there is in compiled languages such as Java. This means that a lot more time should be allocated for testing JavaScript web applications. Moreover, programmers who don’t introduce testing techniques into their daily routine can’t be 100 percent sure that their code works properly.
Static code analysis and code quality tools such as Esprima and JSHint will help reduce the number of syntax errors and improve the quality of your code.
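For example, a minimal .jshintrc file could look like the following sketch. The option names are real JSHint options, but the particular choices are just an illustration (JSHint permits JavaScript-style comments in this file):

```json
{
  "curly": true,      // require braces around all control-flow blocks
  "eqeqeq": true,     // require === and !== instead of == and !=
  "undef": true,      // warn about the use of undeclared variables
  "unused": true,     // warn about declared-but-unused variables
  "browser": true,    // predefine browser globals such as window and document
  "globals": {
    "jQuery": false,  // read-only global provided by the jQuery library
    "$": false
  }
}
```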
Tip
We demonstrate how to set up JSHint for your JavaScript project and automate the process of checking your code for syntax errors in [productivity_tools].
To switch to a test-driven development mode, make testing part of your development process in its early stages rather than scheduling testing after the development cycle is complete. Introducing test-driven development can substantially improve your code quality. It is important to receive feedback about your code on a regular basis. That’s why tests must be automated and should run as soon as you’ve changed the code.
There are many testing frameworks in the JavaScript world, but we’ll give you a brief overview of two of them: QUnit and Jasmine. The main goal of each framework is to test small pieces of code, a.k.a. units.
We will go through basic testing techniques known as test-driven development and Test First. You’ll learn how to automate the testing process in multiple browsers with Testem Runner or by running tests in so-called headless mode with PhantomJS.
The second part of this chapter is dedicated to setting up a new Save The Child project in the IDE with selected test frameworks.
All software has bugs. But in interpreted languages such as JavaScript, you don’t have the help of compilers that could point out potential issues in the early stages of development. You need to continue testing code over and over again to catch regression errors, to be able to add new features without breaking the existing ones. Code that is covered with tests is easy to refactor. Tests help prove the correctness of your code. Well-tested code leads to better overall design of your programs.
This chapter covers the following types of testing:
- Unit testing
- Integration testing
- Functional testing
- Load (a.k.a. stress) testing
Although quality assurance (QA) and user acceptance testing (UAT) are far beyond the scope of this chapter, you need to understand their differences.
Software QA (a.k.a. quality control, or QC) is a process that helps identify the correctness, completeness, security compliance, and quality of the software. QA testing is performed by specialists (testers, analysts). The goal of QA testing is to ensure that the application complies with a set of predefined behavior requirements.
UAT is performed by business users or subject-area experts. UAT should result in an endorsement that the tested application/functionality/module meets the agreed-upon requirements. The results of UAT give the confidence to the end user that the system will perform in production according to specifications.
During the QA process, the specialist performs exhaustive tests, trying to break the application. This approach helps find errors undiscovered by developers. By contrast, during UAT the user runs business-as-usual scenarios and makes sure that business functions are implemented in the application.
Let’s go over the strategies, approaches, and tools that will help you in test automation.
A unit test is a piece of code that invokes a method being tested. It asserts some assumptions about the application logic and behavior of the method. Typically, you’ll write such tests by using a unit-testing framework of your choice. Tests should run fast and be automated with clear output. For example, you can test that if a function is called with particular arguments, it should return an expected result. We take a closer look at unit-testing terminology and vocabulary in Test-Driven Development.
Integration testing is a phase in which already tested units are combined into a module to test the interfaces between them. You might want to test the integration of your code with the code written by other developers; for example, a third-party framework. Integration tests ensure that any abstraction layers we build over the third-party code work as expected. Both unit and integration tests are written by application developers.
Functional testing is aimed at finding out whether the application properly implements business logic. For example, if the user clicks a row in a grid with customers, the program should display a form view with specific details about the selected customer. In functional testing, business users should define what has to be tested, unlike unit or integration testing, for which tests are created by software developers.
Functional tests can be performed manually by a real person clicking through each and every view of the web application, confirming that it operates properly or reporting discrepancies with the functional specifications. But there are tools to automate the process of functional testing of web applications. Such tools allow you to record users' actions and replay them in automatic mode. The following are brief descriptions of two such tools—Selenium and CasperJS:
- Selenium
Selenium is an advanced browser automation tool suite that has capabilities to run and record user scenarios without requiring developers to learn any scripting languages. Also, Selenium has an API for integration with many programming languages such as Java, C#, and JavaScript. Selenium uses the WebDriver API to talk to browsers and receive running context information. WebDriver is becoming the standard API for browser automation. Selenium supports a wide range of browsers and platforms.
- CasperJS
CasperJS is a scripting framework written in JavaScript. CasperJS allows you to create interaction scenarios such as defining and ordering navigation steps, filling in and submitting forms, or even scraping web content and taking web page screenshots. CasperJS works on top of the PhantomJS and SlimerJS headless browsers, which limits the testing runtime environment to WebKit-based and Gecko-based browsers. Still, it’s a useful tool when you want to run tests in a continuous integration (CI) environment.
PhantomJS is a headless WebKit-based rendering engine and interpreter with a JavaScript API. Think of PhantomJS as a browser that doesn’t have any graphical user interface. PhantomJS can execute HTML, CSS, and JavaScript code. Because PhantomJS is not required to render a browser’s GUI, it can be used in display-less environments (for example, a CI server) to run tests. SlimerJS follows the same idea of a headless browser, similar to PhantomJS, but it uses the Gecko engine, instead.
PhantomJS is built on top of WebKit and JavaScriptCore (like Safari), and SlimerJS is built on top of Gecko and SpiderMonkey (like Firefox). You can find a comprehensive list of differences between the PhantomJS and SlimerJS APIs on SlimerJS’s documentation site.
In our case, Grunt automatically spawns the PhantomJS instance, executes the code of our tests, reads the execution results using the PhantomJS API, and prints them out in the console. If you’re not familiar with Grunt tasks, refer to the online bonus chapter for additional information about using Grunt in our Save The Child project.
Load testing is a process that can help answer the following questions:
- How many concurrent users can work with your application without bringing your server to its knees?
- Even if your server is capable of serving a thousand users, is your application’s performance in compliance with the service-level agreement (SLA), if any?
It all comes down to two factors: availability and response time of your application. Ideally, these requirements should be well defined in the SLA document, which should clearly state what metrics are acceptable from the user’s perspective. For example, the SLA can include a clause stating that the initial download of your application shouldn’t take longer than 10 seconds for users with a slow connection (under 1 Mbps). An SLA can also state that the query to display a list of customers shouldn’t run for more than 5 seconds, and the application should be operational 99.9 percent of the time.
To avoid surprises after going live with your new mission-critical web application, don’t forget to include in your project plan an item to create and run a set of heavy stress tests. Do this well in advance, before your project goes live. With load testing, you don’t need to hire a thousand interns to play the roles of concurrent users to find out whether your application will meet the SLA.
Automated load-testing software allows you to emulate the required number of users, set up throttling to emulate a slower connection, and configure the ramp-up speed. For example, you can simulate a situation in which the number of users logged on to your system grows at the speed of 50 users every 10 seconds. Stress-testing software also allows you to prerecord user interactions, and then you can run these scripts emulating a heavy load.
Professional stress-testing software allows simulating the load close to real-world usage patterns. You should be able to create and run mixed scripts simulating a situation in which some users are logging on to your application, while others are retrieving the data and performing data modifications. The following are some tools worth considering for load testing.
Apache Benchmark is a simple-to-use command-line tool. For example, with the command ab -n 10 -c 10 -t 60 http://savesickchild.org:8080/ssc_extjs/, Apache Benchmark will open 10 concurrent connections to the server and keep sending requests through them for 60 seconds to simulate 10 visitors working with your web application. The number of concurrent connections is the actual number of concurrent users. You can find an Apache Benchmark sample report in A sample Apache Benchmark report.
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Server Software: GlassFish
Server Hostname: savesickchild.org
Server Port: 8080
Document Path: /ssc_extjs/
Document Length: 306 bytes
Concurrency Level: 10
Time taken for tests: 60.003 seconds
Complete requests: 17526
Failed requests: 0
Total transferred: 11988468 bytes
HTML transferred: 5363262 bytes
Requests per second: 292.09 [#/sec]
Transfer rate: 195.12 [Kbytes/sec] received
Connection Times (ms)
min avg max
Connect: 10 13 1305
Processing: 11 14 12
Total: 21 27 1317
Apache JMeter is a tool with a graphical user interface (see JMeter test results output example). You can use it to simulate heavy load on a server, a network, or an object to test its strength or to analyze overall performance under different load types. You can find more about testing web applications by using JMeter in the official documentation.
Refer to What Is PhantomJS and SlimerJS? to familiarize yourself with PhantomJS. The slide deck titled "Browser Performance metering with PhantomJS" is yet another good resource for seeing how you can use PhantomJS for performance testing.
The methodology known as test-driven development (TDD) substantially changes the way traditional software development is done. This methodology requires you to write tests even before writing the application code. Instead of just using testing to verify your work after it’s done, TDD moves testing into the earlier application design phase. You should use these tests to clarify your ideas about what you are about to program. Here is the fundamental mantra of TDD:
- Write a test and make it fail.
- Make the test pass.
- Refactor.
- Repeat.
This technique is also referred to as red-green-refactor because IDEs and test runners use red to indicate failed tests and green to indicate those that pass.
When you are about to start programming a class with business logic, ask yourself, "How can I ensure that this function works as expected?" After you know the answer, write a test JavaScript class that calls this function to assert that the business logic gives the expected result.
An assertion is a true-false statement that represents what a programmer assumes about program state. For example, customerID > 0 is an assertion. According to Martin Fowler, an assertion is a section of code that assumes something about the state of the program. Failure of an assertion results in test failure.
Run your test, and it will immediately fail because no application code is written yet! Only after the test is written should you start programming the business logic of your application.
You should write the simplest possible piece of code to make the test pass. Don’t try to find a generic solution at this step. For example, if you want to test a calculator that needs to return 4 as the result of 2 + 2, write code that simply returns 4. Don’t worry about performance or optimization at this point. Just make the test pass. After that, you can refactor your application code to make it more efficient. Now you might want to introduce a real algorithm for implementing the application logic without worrying about breaking the contract with other components of your application.
A failed unit test indicates that your code change introduces regression, which is a new bug in previously working software. Automated testing and well-written test cases can reduce the likelihood of regression in your code.
TDD allows you to receive feedback from your code almost immediately. It’s better to find that something is broken during development rather than in an application deployed in production.
Note
Learn by heart the Golden Rule of TDD: never write new functionality without a failing test.
In addition to business logic, web applications should be tested for proper rendering of UI components, changing view states, dispatching, and handling events.
With any testing framework, your tests will follow the same basic pattern. First, you need to set up the test environment. Second, you run the production code and check that it works as it is supposed to. Finally, you need to clean up after the test runs—remove everything that your program has created during setup of the environment.
This pattern for authoring unit tests is called arrange-act-assert-reset (AAAR).
- In the Arrange phase, you set up the unit of work to test. For example, create JavaScript objects and prepare dependencies.
- In the Act phase, you exercise the unit under test and capture the resulting state. You execute your production code in a unit-test context.
- In the Assert phase, you verify the behavior through assertions.
- In the Reset phase, you reset the environment to the initial state. For example, erase the DOM elements created in the Arrange phase. Most frameworks provide a "teardown" function that is invoked after the test is done.
Later in this chapter, you’ll see how different frameworks implement the AAAR pattern.
In the next sections, we will dive into testing frameworks for JavaScript.
We’ll start our journey into JavaScript testing frameworks with QUnit, which was originally developed by John Resig as part of jQuery. QUnit is now completely standalone and doesn’t have any jQuery dependencies. Although it’s still used by the jQuery project itself for testing jQuery, jQuery UI, and jQuery Mobile code, QUnit can be used to test any generic JavaScript code.
In this section, you’re going to learn how to automatically run QUnit tests using Grunt. Let’s set up our project by adding the QUnit framework and test file. Begin by downloading the latest version by using Bower, as shown in Installing QUnit with Bower.
bower install qunit
You need to get only two files: qunit.js and qunit.css, as shown in QUnit framework in our project.
The code fragment shown in Our first QUnit test demonstrates a simple test function.
link:include/ch7/first_qunit_test.js[role=include]
You’ll also need a test runner for the test setup. A test runner is an HTML file that contains links to a QUnit framework JavaScript file, as shown in A test runner.
link:include/ch7/qunit-runner.htm[role=include]
- In this section, we continue working on the jQuery-based version of the Save The Child application. Hence, our "production environment" depends on the availability of jQuery, so we need to include jQuery in the test runner.
- Test files are included, too.
- QUnit fills this block with results.
- Any HTML you want to be present in each test is placed here. It will be reset for each test.
To run all our tests, we need to open qunit-runner.html in a browser. (See Test results in a browser.) Grunt config for QUnit test runner shows a sample Grunt task for executing the QUnit tests by using the provided HTML runner.
link:include/ch7/gruntfile_for_qunit.js[role=include]
- Grunt loads the task from the local npm repository. To install this task in the node_modules directory, use the command npm install grunt-contrib-qunit.
Now let’s briefly review QUnit API components. A sample QUnit test shows a typical QUnit script.
link:include/ch7/sample_qunit_test.js[role=include]
- A `module` function allows you to combine related tests as a group.
- Here, we can run the Arrange phase. A `setup` function is called before each test.
- A `teardown` function is called after each test, respectively. This is our Reset phase.
- You need to place the code of your test in a corresponding `test` function.
- Typically, you need to use assertions to make sure the code being tested gives expected results. The function `ok` examines whether its first argument is `true`.
- A pair of functions, `equal` and `notEqual`, check for the equivalence of the first and second arguments, which could be expressions as well.
- The test code is wrapped in an IIFE, which receives the `jQuery` object as the `$` parameter.
You can find more details about QUnit in its product documentation and QUnit Cookbook.
The idea behind behavior-driven development (BDD) is to use the natural language constructs to describe what you think your code should be doing, or more specifically, what your functions should be returning.
Similarly to unit tests, with BDD you write short specifications that test one feature at a time. Specifications should be sentences. For example, "Calculator adds two positive numbers." Such sentences will help you easily identify the failed test by simply reading this sentence in the resulting report.
Now we’ll demonstrate this concept using Jasmine—the BDD framework for JavaScript. Jasmine provides a nice way to group, execute, and report JavaScript unit tests.
Now let’s learn how to execute a Jasmine specification with Grunt. We cover Jasmine basics in the next section, but for the moment think of Jasmine as a piece of code that should be executed by Grunt.
Let’s begin by downloading the latest version of Jasmine by using Bower:
bower install jasmine
Unzip jasmine-standalone-2.0.0.zip in the dist directory. Jasmine comes with an example spec (spec folder) and an HTML test runner, SpecRunner.html. Let’s open this file in a browser, as shown in Running Jasmine specs in a browser.
SpecRunner.html, shown in The test runner SpecRunner.html, is structured similarly to the QUnit HTML runner. You can run specifications by opening the runner file in a browser.
link:include/ch7/SpecRunner.htm[role=include]
- Required Jasmine framework library.
- Initialize Jasmine and run all specifications when the page is loaded.
- Include the source files.
- Include the specification code. It’s not required, but files that contain specification code can have the suffix *Spec.js.
Now let’s update the Gruntfile to run the same sample specifications with the PhantomJS headless browser. Copy the content of the src folder of your Jasmine distribution into the app/js folder of our project, and then copy the content of the spec folder into the test/spec folder of your project. Also create a folder test/lib/jasmine and copy the content of the Jasmine distribution lib folder there. (See Jasmine specifications in our project.)
Now you need to edit Gruntfile_jasmine.js to activate Jasmine support, as shown in Gruntfile_jasmine.js with Jasmine running support.
link:include/ch7/gruntfile_for_jasmine.js[role=include]
- Configuring the Jasmine task.
- Specifying the location of the source files.
- Specifying the location of Jasmine specs.
- Specifying the location of Jasmine helpers (Jasmine helpers are covered later in this chapter).
- Grunt loads the task from the local npm repository. To install this task in the node_modules directory, use the command npm install grunt-contrib-jasmine.
To execute tests, run the command `grunt --gruntfile Gruntfile_jasmine.js jasmine`, and you should see something like this:
Running "jasmine:src" (jasmine) task
Testing jasmine specs via phantom
.....
5 specs in 0.003s.
>> 0 failures
Done, without errors.
In this example, Grunt successfully executed with PhantomJS all five specifications defined in PlayerSpec.js.
Continuous integration (CI) is a software development practice whereby members of a team integrate their work frequently, which results in multiple integrations per day. Introduced by Martin Fowler and Matthew Foemmel, the theory of CI recommends creating scripts and running automated builds (including tests) of your application at least once a day. This allows you to identify issues in the code early.
The authors of this book successfully use the open source framework called Jenkins, shown in Jenkins CI server running at the savesickchild.org website and used to build the sample applications for this book, for establishing a continuous build process. (There are other similar CI servers.) With Jenkins, you can have scripts that run either at a specified time interval or on each source code repository check-in of the new code. You may also force an additional build process whenever you like. The Grunt command-line tool should be installed and be available on a CI machine to allow the Jenkins server to invoke Grunt scripts and publish test results.
We use it to ensure continuous builds of internal and open source projects.
In the next section, you will learn how to write your own specifications.
After we’ve set up the tools for running tests, let’s begin developing tests and learn the Jasmine framework constructs. Every specification file has a set of suites defined in the `describe` function. ExampleSpec.js shows a specification file that describes two test suites.
link:include/ch7/ExampleSpec.js[role=include]
- The function `describe()` accepts two parameters: the name of the test suite and the callback function. The function is a block of code that implements the suite. If for some reason you would like to skip the suite’s execution, you can use the method `xdescribe()`, and the whole suite will be excluded until you rename it back to `describe()`.
- The function `it()` also accepts similar parameters: the name of the test specification and the function that implements this specification. As with suites, Jasmine has a corresponding `xit` method to exclude the specification from execution.
- Each suite can have any number of nested suites.
- Each suite can have any number of specifications.
- The code checks whether `2 + 2` equals `4`. We used the function `toEqual()`, which is a matcher. Define expectations with the function `expect()`, which takes a value, called the actual. It’s chained with a matcher function, which takes the expected value (in our case, it’s 4) and checks whether it satisfies the criterion defined in the matcher.
Various flavors of matchers are shipped with the Jasmine framework, and we’re going to review a few of the frequently used matcher functions:
- Equality
The function `toEqual()` checks whether two things are equal.
- True or False?
The functions `toBeTruthy()` and `toBeFalsy()` check whether something is true or false, respectively.
- Identity
The function `toBe()` checks whether two things are the same object.
- Nullness
The function `toBeNull()` checks whether something is `null`.
- Is Element Present
The function `toContain()` checks whether an actual value is an element of an array: `expect(["James Bond", "Austin Powers", "Jack Reacher", "Duck"]).toContain("Duck");`
- Negate Other Matchers
To reverse a matcher and ensure that it isn’t `true`, simply prefix it with `.not`: `expect(["James Bond", "Austin Powers", "Jack Reacher"]).not.toContain("Duck");`
Here, we’ve listed only some of the existing matchers. You can find the complete documentation with code examples at the official Jasmine website and wiki.
Tip
A large set of jQuery-specific matchers is available at link:https://github.com/velesin/jasmine-jquery.
The Jasmine framework has an API to arrange your specification (based on the [AAAR] concept). It includes two methods, `beforeEach()` and `afterEach()`, which allow you to execute code before and after each spec, respectively. This is useful for instantiating shared objects or cleaning up after the tests complete. If you need to supply your test with some common dependencies or set up the environment, just place the code inside the `beforeEach()` method. Such dependencies and environments are known as fixtures. Specification setup with beforeEach includes a beforeEach function that prepares a fixture.
A test fixture refers to the fixed state used as a baseline for running tests. The main purpose of a test fixture is to ensure that there is a well-known and fixed environment in which tests are run so that results are repeatable. Sometimes a fixture is referred to as a test context.
link:include/ch7/beforeEach.js[role=include]
- This method will be called before each specification.
- In the `beforeEach()` method, we create two input fields. These two inputs will be available in all specifications of this suite.
- You can place additional cleanup code inside the `afterEach()` function.
- A `beforeEach()` function helps implement the Don’t Repeat Yourself principle in our tests. You don’t need to create the dependency elements inside each specification manually.
- You can change defaults inside each specification without worrying about affecting other specifications. Your test environment will be reset for each specification.
The Jasmine framework is easily extensible, and it allows you to define your own matchers if for some reason you’re unable to find the appropriate matchers in the Jasmine distribution. In such cases, you’d need to write a custom matcher. Custom toBeSecretAgent matcher shows a sample custom matcher that checks whether a string contains the name of a "secret agent" from the defined list of agents.
link:include/ch7/jasmine_custom_matcher.js[role=include]
- We need to implement the function `compare`, which accepts two parameters from the `expect` call: the actual and expected values.
- The function `compare` should return the `result` object.
- This function checks whether `agentsList` contains the actual value.
- The `pass` property of the `result` object indicates success or failure of the matcher execution.
- We can customize the error message (the `message` property of the `result` object) if the test fails.
The invocation of this helper can look like this:
it("part of super agents", function () {
expect("James Bond").toBeSecretAgent(); (1)
expect("Jason Bourne").toBeSecretAgent();
expect("Austin Powers").not.toBeSecretAgent(); (2)
expect("Austin Powers").toBeSecretAgent(); (3)
});
- Calling the custom matcher.
- Custom matchers can be used together with the `.not` modifier.
- This expectation will fail because "Austin Powers" is not in the list of secret agents.
The following custom failure message displays on the console.
grunt --gruntfile Gruntfile_jasmine.js test
Running "jasmine:src" (jasmine) task
Testing jasmine specs via PhantomJS
My function under test should
✓ return on
another suite
✓ spec1
✓ my another spec
✓ 2+2 = 4
X part of super agents
Austin Powers is not a secret agent (1)
5 specs in 0.01s.
>> 1 failures
Warning: Task "jasmine:src" failed. Use --force to continue.
Aborted due to warnings.
"Austin Powers is not a secret agent (1)" is a custom failure message.
Test spies are objects that replace actual functions with code that records information about how those functions are used throughout the system under test. Spies are useful when a function’s success cannot easily be determined by inspecting its return value or changes to the state of the objects it interacts with.
Consider the login functionality shown in Production code of the login function. A showAuthorizedSection()
function will be invoked within the login
function after the user enters the correct username and password. We need to test that the invocation of showAuthorizedSection()
is happening in this sequence.
var ssc = {};
(function() {
'use strict';
ssc.showAuthorizedSection = function() {
console.log("showAuthorizedSection");
};
ssc.login = function(usernameInput, passwordInput) {
// username and password check logic is omitted
this.showAuthorizedSection();
};
})();
And here is how we can test it using Jasmine’s spies:
describe("login module", function() {
it("showAuthorizedSection has been called", function() {
spyOn(ssc, "showAuthorizedSection"); (1)
ssc.login("admin", "1234"); (2)
expect(ssc.showAuthorizedSection).toHaveBeenCalled(); (3)
});
});
- The `spyOn` function will replace the `showAuthorizedSection()` function with the corresponding spy.
- The `showAuthorizedSection()` function will be invoked within the `login()` function in the case of a successful login.
- The assertion `toHaveBeenCalled()` would not be possible without a spy.
The previous section was about executing your test and specification in headless mode by using Grunt and PhantomJS, which is useful for running tests in CI environments. Although PhantomJS uses the WebKit rendering engine, some browsers don’t use WebKit. It’s obvious that running tests manually in each browser is tedious and not productive. To automate testing in all web browsers, you can use the Testem runner. Testem executes your tests, analyzes their output, and then prints the results on the console. In this section, you’ll learn how to install and configure Testem to run Jasmine tests.
Testem will just pick up any JavaScript file in your project directory. If Testem can identify any test among the .js files, it will run it. But Testem tasks can be customized by using a configuration file. You can configure Testem to specify which files should be included in testing. Testem starts by trying to find the configuration file testem.json in the project directory. A sample testem.json file is shown in A Testem configuration file.
{
"framework": "jasmine2", (1)
"src_files": [ (2)
"ext/ext-all.js",
"test.js"
]
}
- The `framework` directive is used to specify the test framework. Testem supports QUnit, Jasmine, and many more frameworks. You can find a full list of supported frameworks on the Testem GitHub page.
- The list of test and production code source files.
Testem supports two running modes: test-driven development mode (tdd-mode) and continuous integration (ci-mode). (For more about continuous integration, see the note on CI). In tdd-mode, shown in Testem tdd-mode, Testem starts the development server.
In tdd-mode, Testem doesn’t spawn any browser automatically. Instead, you need to open a URL in each browser you want to run tests against to connect it to the Testem server. From this point, Testem executes tests in all connected browsers. In Testem is running the tests on multiple browsers, you can see that we added several browsers, including a mobile version of Safari (running on an iOS simulator).
Because the Testem server itself is an HTTP server, you can connect remote browsers to it as well. For example, Using Testem to test code on remote Internet Explorer 10 shows Internet Explorer 10 running on a Windows 7 virtual machine connected to the Testem server.
You can combine running the tests with the Testem runner with the previously introduced Grunt tool. Using Testem and grunt watch side by side shows two tests in parallel: Testem runs tests on the real browsers, and Grunt runs tests on the headless PhantomJS.
Testem supports live reloading mode. This means that Testem will watch the filesystem for changes and will execute tests in all connected browsers automatically. You can force a test to run by switching to the console and pressing the Enter key.
In CI mode, Testem examines the system for all available browsers and executes tests on them. You can get a list of the browsers that Testem can use to run tests by using the testem launchers command. The following shows sample output after running this command:

# testem launchers
Have 5 launchers available; auto-launch info displayed on the right.

Launcher      Type     CI  Dev
------------  -------  --  ---
Chrome        browser  ✔
Firefox       browser  ✔
Safari        browser  ✔
Opera         browser  ✔
PhantomJS     browser  ✔
Now you can run your tests simultaneously in all browsers installed on your computer—Google Chrome, Safari, Firefox, Opera, and PhantomJS—with one command:
testem ci
Sample output of the testem ci command is shown in Output of the testem ci command.
# Launching Chrome # (1)
#
# Launching Firefox # (2)
# ....
TAP version 13
ok 1 - Firefox Basic Assumptions: Ext namespace should be available loaded.
ok 2 - Firefox Basic Assumptions: ExtJS 4.2 should be loaded.
ok 3 - Firefox Basic Assumptions: SSC code should be loaded.
ok 4 - Firefox Basic Assumptions: something truthy.
# Launching Safari # (3)
#
# Launching Opera # (4)
# ....
ok 5 - Opera Basic Assumptions: Ext namespace should be available loaded.
ok 6 - Opera Basic Assumptions: ExtJS 4.2 should be loaded.
ok 7 - Opera Basic Assumptions: SSC code should be loaded.
ok 8 - Opera Basic Assumptions: something truthy.
# Launching PhantomJS # (5)
#
1..8
# tests 8
# pass 8
# ok
....
-
The tests are run on Chrome…
-
… Firefox
-
… Safari
-
… Opera
-
… and on headless WebKit—PhantomJS
Testem uses the TAP format to report test results.
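Because TAP is a plain-text protocol, it's easy to postprocess. As an illustration (not part of Testem; parseTapLine is a name we made up for this sketch), here is how a single TAP result line of the shape shown above could be parsed:

```javascript
// Sketch: parse one TAP line of the form "ok N - description" or
// "not ok N - description". Plans ("1..8"), comments, and diagnostics
// are ignored. This is an illustration, not part of Testem.
function parseTapLine(line) {
    var match = /^(not )?ok (\d+) - (.*)$/.exec(line);
    if (!match) {
        return null;
    }
    return {
        passed: !match[1],              // a "not " prefix means a failure
        number: parseInt(match[2], 10), // the test's sequence number
        description: match[3]
    };
}

// Usage with a line from the output above:
var result = parseTapLine('ok 4 - Firefox Basic Assumptions: something truthy.');
// result.passed === true, result.number === 4
```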
As is discussed in [mocking_up_the_app], the Document Object Model (DOM) is a standard browser API that allows a developer to access and manipulate page elements. Often, your JavaScript code needs to access and manipulate the HTML page elements in some way. Testing the DOM is a crucial part of testing your client-side JavaScript. By design, the DOM standard defines a browser-agnostic API. But in the real world, if you want to make sure that your code works in a particular browser, you need to run the test inside this browser.
Earlier in this chapter, we introduced the Jasmine method beforeEach(), which is the right place for setting up all required DOM elements and making them available in the specifications. Using jQuery APIs to create DOM elements before running the spec illustrates the programmatic creation of the required DOM element <input>.
describe("spec", function() {
var usernameInput;
beforeEach(function() { (1)
usernameInput = $(document.createElement("input")).attr({ (2)
type: 'text',
id: 'username',
name: 'username'
})[0];
});
});
-
Inside the beforeEach() method, we use the jQuery API to manipulate the DOM programmatically. Alternatively, if you're using an HTML test runner, you can add the fixture by using HTML tags. We don't recommend this approach, because soon you will find that the test runner becomes unmaintainable and clogged with tons of fixture HTML code.
-
Create an <input> element by using jQuery APIs, which will turn into the following HTML:
<input type="text" id="username" name="username">
The jQuery selectors API is more convenient for working with the DOM than a standard JavaScript DOM API. But in future examples, we will use the jasmine-fixture library for easier setup of the DOM fixture. Jasmine-fixture uses syntax that is similar to that of jQuery selectors for injecting HTML fixtures. With this library, you will significantly decrease the amount of repetitive code while creating the fixtures.
Using jasmine-fixture to set up the DOM before spec run shows how the example from the previous code snippet looks with the jasmine-fixture library.
describe("spec", function() {
    var usernameInput, passwordInput;
    beforeEach(function() {
        usernameInput =
            affix('input[id="username"][type="text"][name="username"]')[0]; (1)
        passwordInput =
            affix('input[id="password"][type="password"][name="password"]')[0];
    });
    it("should not allow login with empty username and password and return code equals 0", function() {
        var result = ssc.login(usernameInput, passwordInput); (2)
        expect(result).toBe(0);
    });
});
-
By using the affix() function provided by the jasmine-fixture library and the expressiveness of CSS selectors, we can easily set up the required DOM elements. You can find more examples of possible selectors on the jasmine-fixture documentation page.
-
When all requirements for our production code (the login() function) are satisfied, we can run it in the context of a test and assert the results.
As you can see, testing the DOM manipulation code is much like any other type of unit testing. You need to prepare a fixture (a.k.a., the testing context), run the production code, and assert the results.
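To make the example concrete, here is a minimal sketch of what the production login() function could look like. The real SSC implementation is not shown in this chapter, so the return-code convention below (0 for invalid input) is our assumption, derived from the expectation in the spec above:

```javascript
// Sketch only: a login() function that satisfies the spec above.
// The return codes are assumptions; the real SSC code may differ.
var ssc = ssc || {};

ssc.login = function (usernameInput, passwordInput) {
    // Return code 0: validation failed (empty username or password),
    // which is exactly what the Jasmine spec above expects
    if (!usernameInput.value || !passwordInput.value) {
        return 0;
    }
    // Return code 1: the input is valid and can be submitted
    return 1;
};
```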
We assume that you’ve read [modularizing_javascript_projects], and in this section you’ll apply your newly acquired Ext JS skills. As a reminder, the Ext JS framework encourages using MVC architecture. The separation of responsibilities between models, views, and controllers makes an Ext JS application a perfect candidate for unit testing. In this section you’ll learn how to test the Ext JS version of the Save The Child application from [modularizing_javascript_projects].
Let’s create a skeleton application that can provide a familiar environment for our classes that should be tested (see An HTML runner for Jasmine and Ext JS application).
link:include/ch7/jasmine_runner_for_extjs.htm[role=include]
-
Adding Ext JS framework dependencies.
-
Adding the Jasmine framework dependencies.
-
This is our skeleton Ext JS application that will set up a "friendly" environment for components under test. You can see the content of test.js in An Ext JS testing endpoint.
link:include/ch7/jasmine_extjs_runner.js[role=include]
-
Ext JS loader needs to know the location of the testing classes…
-
… and about the location of production code.
-
Create a skeleton application in the namespace of the production code to provide the execution environment.
-
The AllSpecs class will request the loading of the rest of the specs. We show the code for this class in The AllSpec class.
-
The skeleton application will test the controllers from the production application code.
By placing our spec names in a requires config property, we delegate the loading of specified specs to the Ext JS loader during a fixture application startup.
Ext.define('Test.spec.AllSpecs', {
requires: [ (1)
'Test.spec.BasicAssumptions'
]
});
-
The requires property includes an array of Jasmine suites. All further tests will be added to this array. Ext JS will be responsible for loading and instantiating all test classes.
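As the suite grows, new spec classes are simply appended to that array. The sketch below shows the idea; the extra spec names match classes defined later in this chapter, and the tiny Ext.define shim exists only so the snippet can run outside the browser (the real Ext JS loader does much more):

```javascript
// Shim for illustration only: lets this snippet run without Ext JS.
// In the real application, Ext JS provides Ext.define and its loader.
var Ext = typeof Ext !== 'undefined' ? Ext : {
    classes: {},
    define: function (name, config) {
        this.classes[name] = config;
        return config;
    }
};

// Each new Jasmine suite is registered by adding its class name here;
// the Ext JS loader fetches and instantiates them at startup.
Ext.define('Test.spec.AllSpecs', {
    requires: [
        'Test.spec.BasicAssumptions',
        'Test.spec.DonateControllerSpec',
        'Test.spec.ViewsAssumptions'
    ]
});
```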
A BasicAssumptions class shows how our typical test suite will look.
Ext.define('Test.spec.BasicAssumptions', {}, function() { (1)
describe("Basic Assumptions: ", function() { (2)
it("Ext namespace should be available loaded", function() {
expect(Ext).toBeDefined();
});
it("SSC code should be loaded", function() {
expect(SSC).toBeDefined();
});
});
});
-
Wrap the Jasmine suite into an Ext JS class.
-
The rest of the code is similar to the Jasmine code sample shown earlier in this chapter.
After setting up the testing harness for the Save The Child application, we will suggest a testing strategy for Ext JS applications. Let’s begin by testing the models and controllers, followed by testing the views.
The SaveSickChild.org home page displays information about fundraising campaigns by using chart and table views backed by a collection of Campaign models. A Campaign model should have three properties: title, description, and location. The title property of the model should have a default value: Default Campaign Title. The location property of the model is a required field.
In the spirit of TDD, let’s write a specification that will meet the requirements described, as shown in CampaignModelAssumptions specification.
link:include/CampaignModelAssumptions.js[role=include]
-
By default, Ext.data.Model caches every model created by the application in a global in-memory array. We need to clean up the Ext JS model cache after each test run.
-
Instantiate the Campaign model class to check that it exists.
-
We need to check whether the model has all required properties.
-
The property title has a default value.
-
Validation will fail on the empty location property:

link:include/Campaign.js[role=include]
Controllers in Ext JS are classes like any others and should be tested the same way. In Donate controller specification, we test the Donate Now functionality. When the user clicks the Donate Now button on the Donate panel, the controller’s code should validate the user input and submit the data to the server. Because we are just testing the controller’s behavior, we’re not going to submit the actual data. We’ll use Jasmine spies, instead.
Ext.define("Test.spec.DonateControllerSpec", {}, function () {
describe("Donate controller", function () {
beforeEach(function () {
// controller's setup code is omitted
});
it('should exist', function () { (1)
var controller = Ext.create('SSC.controller.Donate');
expect(controller.$className).toEqual('SSC.controller.Donate');
});
describe('donateNow button', function () {
it('calls donate on DonorInfo if form is valid', function () {
var donorInfo = Ext.create('SSC.model.DonorInfo', {});
var donateForm = Ext.create('SSC.view.DonateForm', {});
var controller = Ext.create('SSC.controller.Donate');
spyOn(donorInfo, 'donate'); (2)
spyOn(controller, 'getDonatePanel').and.callFake(function () { (3)
donateForm.down = function () {
return {
isValid: function () {
return true;
},
getValues: function () {
return {};
}
};
};
return donateForm;
});
spyOn(controller, 'newDonorInfo').and.callFake(function () { (4)
return donorInfo;
});
controller.submitDonateForm();
expect(donorInfo.donate).toHaveBeenCalled(); (5)
});
});
});
});
-
First, you need to test whether the controller’s class is available and can be instantiated.
-
With the help of Jasmine's spyOn() function, substitute the DonorInfo model's donate() function.
-
We're not interested in the view's interaction—only the contract should be tested. At this point, some methods can be substituted with a fake implementation to let the test pass. In this case, the specification tests the situation when the form is valid.
-
You need to inject emulated controller dependencies. The function donate() was replaced by the spy.
-
Finally, you can assert whether the function was called by the controller.
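To demystify what spyOn() is doing for us, here is a simplified, hand-rolled version of the pattern. This is not Jasmine's actual implementation (spyOnMethod is our own name), just an illustration of how a spy records calls and can fake a return value:

```javascript
// Illustration only: a minimal hand-rolled test spy, showing the idea
// behind Jasmine's spyOn(). Jasmine's real implementation is richer.
function spyOnMethod(obj, name) {
    var original = obj[name];
    var spy = function () {
        // record the arguments of every call
        spy.calls.push(Array.prototype.slice.call(arguments));
        if (spy.fake) {
            return spy.fake.apply(obj, arguments);
        }
    };
    spy.calls = [];
    spy.fake = null; // assign a function here to emulate and.callFake()
    spy.restore = function () { obj[name] = original; };
    obj[name] = spy;
    return spy;
}

// Usage: replace donate() so no real network call happens, then assert
var donorInfo = { donate: function () { /* talks to the server */ } };
var spy = spyOnMethod(donorInfo, 'donate');
donorInfo.donate({ amount: 50 });
// spy.calls.length is now 1, and spy.calls[0][0].amount is 50
```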
The testable Donate controller shows what the function looks like under test.
Ext.define('SSC.controller.Donate', {
extend: 'Ext.app.Controller',
refs: [{
ref: 'donatePanel',
selector: '[cls=donate-panel]'
}
],
init: function() {
'use strict';
this.control({
'button[action=donate]': {
click: this.submitDonateForm
}
});
},
newDonorInfo: function() { (1)
return Ext.create('SSC.model.DonorInfo', {});
},
submitDonateForm: function() {
var form = this.getDonatePanel().down('form');
if (form.isValid()) { (2)
var donorInfo = this.newDonorInfo();
Ext.iterate(form.getValues(), function(key, value) { (3)
donorInfo.set(key, value);
}, this);
donorInfo.donate(); (4)
}
}
});
-
The factory method for creating a new instance of the SSC.model.DonorInfo class.
-
If the form is valid, read data from the form fields…
-
…and populate the properties of the corresponding object.
-
DonorInfo can be submitted by calling the donate() method.
UI tests can be divided into two constituent parts: interaction tests and component tests. Interaction tests simulate real-world scenarios of application usage, as if a user were using the application. It's better to delegate interaction tests to functional testing tools such as Selenium or CasperJS.
Tip
|
Another UI testing tool is worth mentioning, especially in the context of testing Ext JS applications: Siesta. Siesta allows you to perform testing of the DOM and simulate user interactions. Written in JavaScript, Siesta uses unit and UI testing. There are two editions of Siesta: lite and professional. |
Component tests isolate independent and reusable pieces of your application to verify their display, behavior, and contract with other components (see the section Testing the Controllers). Let’s see how we can do that. Consider Testing the view.
Ext.define('Test.spec.ViewsAssumptions', {}, function () {
function prepareDOM(obj) { (1)
Ext.DomHelper.append(Ext.getBody(), obj);
}
describe('DonateForm ', function () {
var donateForm = null; (2)
beforeEach(function () {
prepareDOM({tag: 'div', id: 'test-donate'}); (3)
donateForm = Ext.create('SSC.view.DonateForm', { (4)
renderTo: 'test-donate'
});
});
afterEach(function () {
donateForm.destroy(); (5)
donateForm = null;
});
it('should have donateform xtype', function () {
expect(donateForm.isXType('donateform')).toEqual(true); (6)
});
});
});
-
A helper function for creating a fixture for DOM elements.
-
A reusable scoped variable.
-
Create a fixture for the div test element.
-
Create a fresh form for every test to avoid test pollution.
-
Destroy the form after every test so we don’t pollute the environment.
-
In this test, you need to make sure that the DonateForm component has the donateform xtype.
In this section, we will set up WebStorm to use the previously described tools inside this IDE. We will show how to integrate Grunt with WebStorm to run Grunt tasks from there.
Let’s begin with the Grunt setup. Currently, the WebStorm IDE has no native support for the Grunt tool. Because Grunt is a command-line tool, you can use a general launching feature of the WebStorm IDE and configure it as an external tool. Open the WebStorm preferences and navigate to the External Tools section to access the external tools configuration, as shown in The External Tools configuration window in WebStorm.
Click the +
button to create a new external tool configuration, and you’ll see the window shown in External tool configuration.
To configure an external tool in WebStorm (Grunt, in this case), you need to do the following:
-
Specify the full path to the application executable.
-
Some tools require command-line parameters. In this example, we explicitly specify the task runner configuration file (with the --gruntfile command-line option) and the task to be executed.
-
Specify the Working Directory to run the Grunt tool. In our case, the Grunt configuration file is located in the root of our project. WebStorm allows you to use macros to avoid hardcoded paths. Most likely, you don't want to set up external tools for each new project, but rather create a universal setup. In our example, we use the $ProjectFileDir$ macro, which resolves to the root folder of the current WebStorm project.
-
WebStorm allows you to organize related tasks into logical groups.
-
You can configure how to access the external tool launcher.
When all of these steps are complete, you can find the Grunt launcher under the Tools menu, as shown in Grunt launcher available under the Tools→grunt menu.
Unit tests are really important as a means of getting quick feedback from your code. You can work more efficiently if you manage to minimize context switching during your coding flow. Also, you don’t want to waste time digging through the menu items of your IDE, so assigning a keyboard shortcut for launching external tools is a good idea.
Let's assign a keyboard shortcut for our newly configured external tool launcher. In WebStorm Preferences, go to the Keymap section. Use the filter to find our created launcher, jasmine: grunt test. Specify either the Keyboard or the Mouse shortcut by double-clicking the appropriate list item, as shown in Setting up a keyboard shortcut for Grunt launcher.
By pressing the key combination specified on the previous screen, you can launch Grunt for Jasmine tests with a single keystroke. WebStorm redirects all the output from the Grunt tool into its Run window, as shown in Grunt output in WebStorm.
Testing is one of the most important processes in software development. Well-organized testing helps keep your code in a working state. It's especially important in interpreted languages such as JavaScript, which has no compiler to help catch many errors at the early stages of development.
In this situation, static code analysis tools, such as JSHint (discussed in [productivity_tools]) could help identify typos and enforce best practices accepted by the JavaScript community.
In enterprise projects developed with compiled languages, people often debate whether TDD is really beneficial. With JavaScript, it’s nondebatable unless you have unlimited time and budget and are ready to live with unmaintainable JavaScript.
The enterprises that have adopted test-driven development (as well as behavior-driven development) routines make the application development process safer by including test scripts in the continuous integration build process.
Automating unit tests reduces the number of bugs and decreases the amount of time developers need to spend manually testing their code. If automatically launched test scripts (unit, integration, functional, and load tests) don't reveal any issues, you can rest assured that the latest code changes did not break the application logic, and that the application performs according to its SLA.