Original Software
Last Updated:
Analyst Coverage: Daniel Howard
Original Software is a privately held software company, founded in 1996, with offices in both the UK and the USA.
Original Software TestDrive
Last Updated: 5th February 2020
TestDrive is a test automation platform that allows you to build automated, executable and resilient tests without writing any code.
Original Software also offers two additional automated testing products: TestDrive-Assist, for advanced, ‘dynamic’ manual testing, and TestDrive-UAT, for user acceptance testing. The former in particular is positioned as a method for transitioning from manual to automated testing, and notably includes the ability to generate automated tests from manual ones.
Customer Quotes
“We are executing about 650 different test scripts as part of our Regression packs using TestDrive and to have that in place is of enormous value to the business. The alternative would be to get people out of their day jobs and run tests manually and we are just not in a position where we can do that.”
Marston’s
“TestDrive has proven itself to be a flexible and robust testing tool during the proof of concept phase at one of our major clients. We have been able to test full end to end processes covering a large percentage of the clients applications including SAP, SRM, CRM, BW, various web portals and other 3rd party applications. The support provided by Original Software has been excellent and very hands on.”
Enzen
TestDrive provides a number of features relevant to test automation. These include dashboards for testing insight and analysis, test management, test/task assignment to individual testers, and automated test execution. Most importantly, it provides significant facilities for creating and maintaining automated tests.
Creating an automated test in TestDrive does not require any coding. Instead, you record a series of actions taken against your system (whether via a web browser, desktop application or green-screen), along with the context (for example, the web pages) in which you have taken them. Your recording is displayed alongside your web browser as you create it, as seen in Figure 1, and you can annotate it as you go (for example, to flag an error). Within your recording, your actions are labelled for both the specific action taken and the page element that action was applied to. The latter is named by TestDrive intelligently, automatically and in plain English via a patented process. This is shown in Figure 2. Your recordings also include a full content analysis for each page you’ve visited, which in turn enables a full impact analysis if those pages are changed.
Once you have your recording, you can use it to automatically generate a variety of testing assets: a corresponding, executable test script; a distributable simulation that will replay your actions (useful for demonstration or training); or a test case, which in the context of TestDrive is a highly scripted (but not executable) manual test. The latter is of limited usefulness for actual testing, but may be helpful for accountability, issue/defect reporting, and so on.
You can also combine multiple scripts, test data, and control flow elements into a playlist. This allows you to create end-to-end tests by stitching several test scripts together, and enables you to supply your scripts with multiple sets of test data. For example, a standard pattern would be to import a selection of test data for each of your test scripts and then, for each script, loop through that data, testing each data set in turn. This effectively parameterises the data in your tests, and moreover does so without modifying the underlying scripts. This means that you can freely change either your test data or your process flow – which is to say, your test scripts – individually, without needing to modify the other. Playlists can also be used for collaboration. For example, you might ask your non-technical users to record the actions they normally take in their browser (which is possible since TestDrive is completely codeless) but leave it to your dedicated testers to assemble those recordings into playlists. In addition, any changes to your test scripts will automatically carry over to your playlists, making them relatively resilient.
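The data-driven pattern described above – looping each script over imported data sets without touching the scripts themselves – can be illustrated with a short sketch. The Python below is a conceptual analogue only; run_script, the playlist and the data sets are hypothetical stand-ins, not TestDrive APIs.

```python
# Conceptual sketch of a data-driven playlist: each script is replayed once
# per data set, so test data and process flow can change independently.
# All names here (run_script, playlist, test_data) are hypothetical.

def run_script(script_name: str, data: dict) -> bool:
    """Stand-in for replaying one recorded script against one data set."""
    print(f"Running '{script_name}' with {data}")
    return True  # pretend the run passed

playlist = ["Create Customer", "Place Order", "Check Invoice"]

test_data = [
    {"customer": "A. Smith", "item": "Widget", "qty": 3},
    {"customer": "B. Jones", "item": "Gadget", "qty": 1},
]

results = []
for script in playlist:            # process flow: the ordered scripts
    for data_set in test_data:     # test data: looped over per script
        results.append((script, data_set["customer"], run_script(script, data_set)))

print(f"{sum(passed for *_, passed in results)}/{len(results)} runs passed")
```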
Playlists can be executed just as a test script would be, and once execution has finished TestDrive provides you with run results. These compare each page of your application, after each action in your playlist, to a predefined ‘baseline’: the expected state of the page, which is automatically generated (and may later be regenerated) based on your current application. If there is a discrepancy, the test will fail. In this case, you can decide that the discrepancy is the result of an intentional change and update your baseline accordingly, thus allowing the test to pass. Notably, these discrepancies will not be triggered by something as simple as a cosmetic change to the UI. Alterations to, for example, where a particular item appears on the page are dealt with intelligently and automatically, without interrupting the testing flow. This can be seen in Figure 3, where, despite the dramatically different appearances of the actual and expected screens, the test has passed. The run results also display several performance metrics for each screen, such as response time, and thereby provide a measure of performance testing as well.
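The baseline check can be thought of as comparing the content of each page against its expected state while ignoring purely cosmetic attributes such as an element's position. The sketch below is a simplified, hypothetical illustration of that idea in Python; TestDrive's actual comparison and baseline management are more sophisticated than this.

```python
# Simplified, hypothetical sketch of a baseline comparison that ignores
# layout-only attributes (such as position) and fails only on content changes.

LAYOUT_ONLY = {"x", "y", "width", "height"}  # cosmetic attributes to ignore

def compare_to_baseline(actual: dict, baseline: dict) -> list[str]:
    """Return a list of content discrepancies between the actual page and its baseline."""
    discrepancies = []
    for element, expected_attrs in baseline.items():
        actual_attrs = actual.get(element)
        if actual_attrs is None:
            discrepancies.append(f"missing element: {element}")
            continue
        for attr, expected in expected_attrs.items():
            if attr in LAYOUT_ONLY:
                continue  # moved or resized elements do not fail the test
            if actual_attrs.get(attr) != expected:
                discrepancies.append(f"{element}.{attr}: expected {expected!r}, "
                                     f"got {actual_attrs.get(attr)!r}")
    return discrepancies

baseline = {"Total field": {"text": "£42.00", "x": 10, "y": 200}}
actual   = {"Total field": {"text": "£42.00", "x": 300, "y": 50}}  # moved, same value

issues = compare_to_baseline(actual, baseline)
print("PASS" if not issues else f"FAIL: {issues}")
# On an intentional change, the baseline would simply be regenerated
# from the current application, and subsequent runs would pass.
```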
The most important selling point for TestDrive is that it allows you to create highly resilient, automated tests without writing code. Resilience is particularly important here: change will happen, and your tests will need to deal with it. More to the point, maintaining tests in response to change is often expensive, so it makes sense to make that maintenance a priority. This is exactly what TestDrive does, both by enabling your individual test scripts to adapt to your UI and find elements on it intelligently, and by assembling your end-to-end tests from existing test scripts and test data. The former lets you modify the look and feel of your application without worrying about breaking your tests, while the latter allows you to focus on updating your testing assets individually rather than as a whole. This is all supported by full impact analysis functionality, providing you with the means to quickly evaluate what has changed when something goes wrong. This can be essential for quickly diagnosing and fixing the problem, whether it lies in your tests or in your underlying system.
The Bottom Line
TestDrive enables you to build automated tests easily and without code. Moreover, it significantly reduces the difficulty of maintaining those tests. If maintenance is one of your primary concerns (and there’s every reason it should be), it is certainly worth a look.
Commentary
Coming soon.