- Light green: 3rd party code
- Dark green: part of this tool
- Purple: your code
- Gray: generated data
When your app is running normally on the phone, it uses react-native-ble-plx to communicate with devices over Bluetooth Low Energy (BLE).
When testing the app with this tool, the react-native-ble-plx module is automatically mocked with a version that plays back traffic from a recording. This mock implements the same interface as the original module, plus a few extra methods, so that in your test you can specify which recording to use, control when to play back events, and optionally verify that the entire recording has been used once a test completes. In this way you can use Jest and Testing Library as usual to test components and services that interact with the device.
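To make the idea concrete, here is a minimal sketch of the extra methods the mock might expose on top of the react-native-ble-plx interface. The names (`mockWith`, `playUntil`, `expectFullCoverage`) and the recording shape are assumptions for illustration, not the tool's actual API.

```typescript
// Hypothetical sketch of the playback mock's extra methods.
type Recorded =
  | { type: 'event'; payload: string }
  | { type: 'label'; name: string };

class BlePlayerMock {
  private queue: Recorded[] = [];
  readonly received: string[] = []; // what the app under test would observe

  // Point the mock at a recording before the test begins.
  mockWith(recording: Recorded[]) {
    this.queue = [...recording];
  }

  // Replay recorded events, in order, up to the named label.
  playUntil(label: string) {
    while (this.queue.length > 0) {
      const item = this.queue.shift()!;
      if (item.type === 'label') {
        if (item.name === label) return;
      } else {
        this.received.push(item.payload);
      }
    }
    throw new Error(`Label not found in recording: ${label}`);
  }

  // Optionally verify the whole recording was consumed by the test.
  expectFullCoverage() {
    if (this.queue.length > 0) throw new Error('Recording not fully used');
  }
}
```

A Jest test would call `mockWith` in its setup, `playUntil` at the points where the scenario expects device traffic, and `expectFullCoverage` in a teardown hook.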
When you write scenarios for the recorder app, you will use a version of the react-native-ble-plx module wrapped in a recorder, so that all commands and events are not only propagated to and from the original module, but also persisted in a recording file. The wrapper implements the same interface as the original module, plus a few methods so your recording scenarios can insert labels into recordings. The recorder app can run through a number of scenarios, and create recordings for each.
To prevent recordings from having device-specific values in the recorded traffic, the wrapper also enables configuration of mappings for device names, characteristic values, etc. Via the wrapper you can also specify names for services and characteristics, which will be added to the recordings for easier debugging.
To create a recorder app, you can use the React Native template provided by this project. This will create a Mocha test runner app that can run on a phone connected to the device. You then write each of your recording scenarios as a Mocha test. I chose Mocha for its ease of embedding into a React Native app. When Jest Core has matured enough, it would make sense to add the option of using Jest instead of Mocha.
You will typically add the recordings to git, so they are available for running app tests on your CI server. You will want to run the recording app whenever you change the set of scenarios, or when the BLE protocol of the device changes. A good habit might be to generate new recordings on a weekly basis, plus as needed per feature branch.
Since the recording files are normally consumed by the same mock recording tool in its two modes (recording and playback), I was free to choose a file format tailored to this purpose. Some level of interoperability with other tools would be an added benefit.
Mock recording tools for HTTP traffic can use the HAR file format (a W3C draft specification), also used by browser debugging tools to save traffic.
For BLE traffic, I could have chosen the Bluetooth HCI log format, which is produced by Android and consumed by tools such as Wireshark. However, this format is quite low-level and time-consuming to implement correctly. Instead I have chosen a file format that closely mimics the API of the react-native-ble-plx module, making the tool easier to apply.
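Since the format mirrors the module's API, a recording can be sketched as a sequence of entries, each an outgoing command, an incoming event, or a label inserted by the scenario. The field names below are assumptions for illustration, not the tool's actual file format.

```typescript
// Hypothetical shape of a recording entry.
type RecordEntry =
  | { type: 'command'; command: string; args?: unknown } // app → device
  | { type: 'event'; event: string; args?: unknown }     // device → app
  | { type: 'label'; label: string };                    // inserted by the scenario

// A tiny illustrative recording: scan until a device is found.
const recording: RecordEntry[] = [
  { type: 'command', command: 'createClient' },
  { type: 'command', command: 'startDeviceScan', args: { uuidList: null } },
  { type: 'event', event: 'scan', args: { id: 'device-1', name: 'SensorTag' } },
  { type: 'label', label: 'deviceFound' },
];
```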
While recording tests run on a real phone, the tool needs to store the recording file as an artifact on the developer machine. So after completing a recording, the tool has to transfer the recording file from the phone to the developer machine. It could do this in various ways, for example via the phone's file system, or by sending it to a server running on the developer machine. Since the phone is already attached to the developer machine running the recording test, I simply used the system logging capabilities of phones (`console.log` and similar in React Native), which can be collected via tools such as `adb logcat` on Android and `idevicesyslog` on iOS.
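One way this transfer could work is to emit the recording over the system log in marked, numbered chunks and reassemble them on the developer machine. The marker string, chunk size, and line layout below are assumptions, not the tool's actual wire format; syslog lines are length-limited, hence the chunking.

```typescript
// Hypothetical sketch: split a recording into marked log lines.
const MARKER = 'BLE_RECORDING';
const CHUNK_SIZE = 800; // syslog lines are length-limited

function emitAsLogLines(name: string, json: string): string[] {
  const lines: string[] = [];
  const total = Math.ceil(json.length / CHUNK_SIZE);
  for (let i = 0; i < total; i++) {
    const chunk = json.slice(i * CHUNK_SIZE, (i + 1) * CHUNK_SIZE);
    // On the phone, each line would go through console.log.
    lines.push(`${MARKER} ${name} ${i + 1}/${total} ${chunk}`);
  }
  return lines;
}

// On the developer machine, filter the logcat/syslog output and rebuild the file.
function reassemble(logLines: string[]): string {
  return logLines
    .filter((line) => line.startsWith(MARKER))
    .map((line) => line.split(' ').slice(3).join(' '))
    .join('');
}
```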
Mock recording tools for HTTP traffic can rely on the inherent request-response nature of HTTP traffic: When the app sends a request, the tool will look in the recording for a matching request and will mock the corresponding response.
Since BLE traffic is inherently bi-directional, the approach needs to be different. Just as the test controls the user actions during the scenario being tested, it must also control the device actions. The tool uses two mechanisms to accomplish this: first, the recording file is sequential - traffic is simulated strictly in the order it was recorded. Second, interesting moments can be labelled during capture, allowing the test to simulate all traffic up to a specified label.
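The two mechanisms above can be sketched together: outgoing commands from the app are checked against the next recorded command (strict ordering), and incoming events are replayed up to a label. Class and method names here are assumptions chosen to illustrate the mechanism, not the tool's API.

```typescript
// Hypothetical sketch of strict-order playback with labels.
type Entry =
  | { kind: 'command'; name: string }
  | { kind: 'event'; name: string; payload: string }
  | { kind: 'label'; name: string };

class SequentialPlayback {
  private index = 0;
  readonly delivered: string[] = [];

  constructor(private recording: Entry[]) {}

  // The app issued a command: it must match the next recorded entry.
  expectCommand(name: string) {
    const next = this.recording[this.index++];
    if (next?.kind !== 'command' || next.name !== name) {
      throw new Error(`Out-of-order command: ${name}`);
    }
  }

  // Simulate all recorded device traffic up to the given label.
  playUntil(label: string) {
    while (this.index < this.recording.length) {
      const entry = this.recording[this.index++];
      if (entry.kind === 'label' && entry.name === label) return;
      if (entry.kind === 'event') this.delivered.push(entry.payload);
    }
    throw new Error(`Label not found: ${label}`);
  }
}
```

Rejecting out-of-order commands keeps the mock honest: a test cannot pass if the app talks to the device in a sequence that was never recorded.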