Automated testing makes the whole testing process easier and faster.
Record and replay testing aims to make that process easier still.
Record and Replay, otherwise known as codeless automation, is a way to run tests without programming knowledge.
Record and replay testing has been around for quite some time, and while teams are finally getting comfortable with it, it has been quite a ride.
In this blog, I’ll be talking about what Record and Replay testing is and why it developed a bad reputation.
What is Record and Replay Testing?
Record and replay testing is a type of automated testing where the tool records the user's activity and then imitates it.
These tools let the tester hit record and manually step through the real-user actions of a pre-scripted test case. When the tester is done, the tool will have created a script that can automatically run those exact same actions.
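Conceptually, a recorded session lands on disk as a flat, literal list of captured actions that a replayer later walks through. Here is a minimal sketch of that idea; the step format, selectors, and the dict-based stand-in for a browser page are all hypothetical, not the output of any specific tool:

```python
# Hypothetical sketch: a recorded session as a flat list of
# (action, target, value) steps, exactly as the user performed them.
recorded_steps = [
    ("open", "/login", None),
    ("type", "id=username", "testuser"),
    ("type", "id=password", "s3cret"),
    ("click", "id=submit-btn", None),
]

def replay(steps, page):
    """Re-run the captured steps in order against a page object
    (here a plain dict standing in for a real browser DOM)."""
    for action, target, value in steps:
        if action == "open":
            page["url"] = target
        elif action == "type":
            page[target] = value
        elif action == "click":
            page["last_clicked"] = target
    return page

page = replay(recorded_steps, {})
```

The key point is that the script is pure data about one specific walkthrough: replaying it repeats those literal steps, nothing more.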
Record and Replay is the lightweight solution for test automation. The value is most prominent for teams that are transitioning from mostly manual testing to include some automation, in order to speed up testing and integrate it earlier in the software development lifecycle.
A few situations Record and Replay is good for:
- Individuals with little or no programming knowledge
- Filling in the gaps of Selenium tests and transitioning from mostly manual testing
- Lightweight automation for smaller tests
- Non-technical roles doing one-off tests
- Teams where members outside of QA take part in some testing
Why Did Record and Replay Testing Get a Bad Reputation?
The issue is that application code, especially UI code, can change frequently. In that situation, record and replay tests break often, negating any time savings over just going with manual testing.
Unless your developers make changes only rarely, record and replay will leave you with a pile of fragile automation scripts that duplicate lines of code, images, and objects each time they execute. All that duplicated code also makes failing scripts harder to debug.
In other words, the tool records more than simply the steps you take, and it's difficult to tell which part of the code belongs to your test steps and which is extra data the tool collected.
Some more issues with record and replay testing:
1. Very high maintenance cost
These tools often store procedural steps or generate procedural code. Procedural tests are a problem because even minor application changes can require updating or rerecording all of your tests. This largely defeats the purpose of having automation in the first place; in many cases the cost of maintenance outweighs any realized value.
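To see why maintenance costs explode, consider that every recorded test hard-codes its own copy of the same selectors. The following sketch is a deliberately simplified, hypothetical illustration (the test names, selectors, and renamed button id are invented), contrasting that with a hand-written suite that shares one locator:

```python
# Hypothetical illustration: three recorded tests each embed their own
# copy of the same hard-coded button selector.
RECORDED_TESTS = {
    "login":    [("click", "id=submit-btn")],
    "checkout": [("click", "id=submit-btn")],
    "signup":   [("click", "id=submit-btn")],
}

# Developers rename the button; the old id no longer exists in the DOM.
CURRENT_DOM_IDS = {"id=submit-button"}

def broken_tests(tests, dom_ids):
    """Return the names of tests whose recorded selectors no longer exist."""
    return sorted(name for name, steps in tests.items()
                  if any(target not in dom_ids for _, target in steps))

# One UI change breaks every recorded test, and each must be
# rerecorded separately. A hand-written suite that keeps the locator
# in one shared place needs only a one-line fix.
```

This duplication is exactly what a page-object style hand-written suite avoids, which is why procedural recorded scripts age so badly.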
2. Limited test coverage
The main thing to understand about record and replay tools is that they typically follow the exact steps you recorded, no more and no less. That means these tests usually do little more than basic navigation testing. Navigation testing is important, but navigation testing alone is low-value automation. You are also typically limited to testing against the user interface.
3. Poor understanding of the tools
Most testers have an incomplete understanding of what exactly these tools are doing, which can lead to huge gaps in your test coverage. I have found that testers who use these tools often assume the tools do things, like verifying that a page actually loaded, that they are not in fact doing.
4. Poor integration
These tools by and large don't integrate well with your SDLC process. They typically have their own interface and test runners, which often means you need to kick off tests manually. It can also mean tests need exclusive control over the machine they run on, so the tester running them doesn't have much else to do while they execute. Once you do have test results, they are often in a flat file or "fancy" report that follows a custom format for that specific tool, which makes uploading the results to your test management or build tool challenging.
So what are they good for?
"Record and replay tools are good for getting one's feet wet in test automation, but the files created are extremely large, they execute slowly, and they get slower over time, if they work at all," says Hector Diaz de Leon, a test automation expert and development engineer.
In short, it's not something you should rely on for tests you expect to maintain over the long haul.