In auto-recorded tests, verification is also often forgotten.
Of course, many record-and-playback tools can verify text within a webpage. However, the verification criteria are often quite limited. For example, they can only check whether the page contains a specified text or whether an element is displayed. Problems that are obvious to humans, such as broken designs, are completely invisible to the CSS selectors used to identify elements. For instance, automatically recorded scripts don't check whether the company logo is displayed correctly, or whether the layout has unintentionally changed from a three-column design to a two-column one.
People have hoped that test scripts auto-recorded with record-and-playback tools would exactly replicate what humans do and free us from repetitive tasks. However, the original purpose and perspective of the test scenario get lost in overly simplistic automation, turning the script into a rigid, unintelligent bot. Of course, you could rewrite the auto-recorded locators to better reflect your intentions, or add comments to the script to make it easier to understand later. But at that point, how different is this really from writing test code by hand?
You need experience to write readable test code
Should we give up using record-and-playback tools and just go back to writing test code? It’s not that simple.
Personally, one of my biggest pain points with writing test code is that you often have to maintain both the application code and the test code. For example, every time you change an element's id or class, you also have to change the corresponding test code. You could save yourself some of the trouble by using text or an accessibility ID as the locator, or by giving the element a dedicated test attribute such as data-test. However, this doesn't change the fact that you still have to manage both the application code and the test code.
In addition, sometimes you can't tell what the test code means from the test code alone. For example, if you have a locator like button[type="submit"], it's difficult to guess which element on the screen it points to just by looking at the test code, because nothing in the test code shows what the screen looks like at that point in time. To solve these issues, we use practices like the Page Object pattern to manage UI components and their locators together. However, the more you structure your test code to reflect the structure of the actual UI, the more it overlaps with your application.
We could refine automated tests by sharing the parts that are common between the application code and the test code (Storybook is a handy tool for this). Some teams may consider investing in such code-sharing initiatives. However, this requires considerable resources, and more importantly, it isn't possible without engineers who have sufficient skill, experience, and passion on both the development and testing teams.
One of the biggest advantages of record-and-playback tools is that they are well suited to testing existing software. Writing test code makes sense if you start the development process by writing tests for software that does not yet exist (BDD, or Behavior-Driven Development, is one such approach). If not, you have to repeatedly look up the locator of the target element in the browser, copy and paste it into the test code, then run the test and see whether it works as expected. The more lines the test code has, the more tedious this process gets.
How we can improve record-and-playback tools
To summarize, simply recording user actions gives you unstable, unreadable test scripts, while writing readable and maintainable test code requires significant investment, including improvements to the development process. In my experience, many user companies take the latter approach: for example, they hire a dedicated test automation engineer (a Software Engineer in Test) who works with the other developers and the QA team to build an automated test architecture.
What about record-and-playback tools? Have they remained unchanged for the past five years? Not at all! Let’s look at some of the challenges that record-and-playback tools face and how Autify has overcome them.
No visualization required
Whether it's test code or a record-and-playback script, maintenance becomes more and more tedious if users have to use their imagination to decipher it. In other words, users shouldn't have to visualize the actual screen in their head when reading the test code.
As the name suggests, record-and-playback tools let you replay recorded actions, so you could always "read" the code by executing the steps yourself. However, I'm more interested in improving the readability of the code itself. As an example, scripts recorded with Puppeteer Recorder used accessibility IDs like this:
const { open, click, type, submit, expect, scrollToBottom } = require('@puppeteer/recorder');

open('https://github.com', {}, async (page) => {
  await click('aria/link[name="Sign up"]');
  await type('aria/textbox[name="Enter your email e.g. [email protected]"]', '[email protected]');
  await click('aria/button[name="Continue"]');
  await type('aria/textbox[name="Create a password"]', 'asdffdsa');
});
Unfortunately, Puppeteer Recorder was an experimental project and is no longer maintained. However, you can still write test scripts that are visually easy to understand by using accessibility attributes such as accessibility IDs, which has the added benefit of raising awareness of accessibility. I hope other projects and open-source tools inherit this practice in the future.