Thursday, March 3, 2016


    If you have some experience in UI automation and testing, you are probably already familiar with the bitter taste of the ever-changing DOM: most of your code maintenance boils down to constantly updating WebElement locators and applying minor fixes to scenario flows. At first this may look like a Sisyphean task. I guess at some point we all ask ourselves: is there a better way? Let's set aside everything we have been taught in the numerous courses, books and articles about Selenium's location strategies. I will use website automation and XPath in the examples below.
    One of the important principles of programming, the Dependency Inversion Principle, states:
 Abstractions should not depend upon details. Details should depend upon abstractions.
So why do we bend our thinking in favour of bots, desperately trying to enforce the rules of a low-level implementation instead of working with abstractions? If you use Selenium-like libraries to build your frameworks and/or crawlers, you probably locate your DOM elements like this:
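A minimal sketch of such a structure-based locator, using the standard library's `xml.etree` as a lightweight stand-in for a browser DOM (the markup and the locator here are made up for illustration):

```python
import xml.etree.ElementTree as ET

# Version 1 of a page: the login form lives in the second <div> (made-up markup).
page_v1 = "<body><div /><div><form><input /><input name='user' /></form></div></body>"
# Version 2: a banner <div> was added, so the same absolute path misses the form.
page_v2 = "<body><div /><div /><div><form><input /><input name='user' /></form></div></body>"

# A brittle, structure-based locator (xml.etree understands a subset of XPath):
brittle = "./div[2]/form/input[2]"

print(ET.fromstring(page_v1).find(brittle) is not None)  # True  - found in v1
print(ET.fromstring(page_v2).find(brittle) is not None)  # False - broken in v2
```

One cosmetic layout change is enough to invalidate the locator, even though the form itself did not change at all.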

    Nothing wrong here: it works, and the tests are back up to speed. For a very long time I stopped there, happy that I had fixed the tests. But after a while (usually too soon), the same scenario repeats. I agree that my Selenium-based tests require such locators, but the approach is somehow backwards. Clearly, something is not as it should be. I used to blame development, processes and whatever else crossed my path. But I am the owner of my code, so I am the one to blame.
Focus on end-users.
    I guess you have all heard about Look & Feel in web design. It can be explained simply as the aspects of UI design, including elements such as colors, shapes, layout and typefaces (the "look"), as well as the behavior of dynamic elements such as buttons, boxes and menus (the "feel"). Without going further into UX design details, we could say that this represents the brand and the users - two of the things we care about most. After all, the product (the SUT) is made for humans, not for crawlers. If you work in any kind of Agile process, you probably have to develop tests in parallel with the functionality. In a good company you can find mock-ups describing how the GUI will look. Once agreed upon by the business people, those (almost) never change.
Stop thinking like a bot.
    We have found our abstraction, insensitive to the HTML implementation details. Humans need text to understand a site. This is the main pillar of our new and upgraded location strategy: use labels, input value attributes and visible text to locate your elements. We can actually infer a pattern here - between text and web elements. After a label with the text "Username", there will most likely always be an input for your username.
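The label-to-input pattern can be sketched like this, again with `xml.etree` standing in for the browser DOM. The markup and the auto-generated input names are assumptions for the sake of the example:

```python
import xml.etree.ElementTree as ET

# Hypothetical login form: each field wraps a label and its input in a <div>.
# The input names are auto-generated and change between builds.
form = ET.fromstring(
    "<form>"
    "<div><label>Username</label><input name='fld_7231' /></div>"
    "<div><label>Password</label><input name='fld_9154' /></div>"
    "</form>"
)

# Locate the input through the human-visible label text instead of its name:
username_input = form.find(".//div[label='Username']/input")
print(username_input.get("name"))  # fld_7231
```

The generated `name` attribute can change on every build, yet the locator keeps working because it is anchored to text the end-user actually sees.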
    We can make good use of XPath's more advanced, text-oriented functionality.
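Assuming the functions in question are the usual XPath 1.0 string helpers - contains(), starts-with() and normalize-space() - here is what they do, emulated in plain Python (an illustration of their semantics, not Selenium code):

```python
import re

# Plain-Python stand-ins for three XPath 1.0 string functions.
def normalize_space(s):
    # XPath normalize-space(): strip leading/trailing whitespace and collapse
    # internal runs of whitespace into a single space.
    return re.sub(r"\s+", " ", s).strip()

label_text = "  User name \n"
print(normalize_space(label_text))                     # 'User name'
print("User" in label_text)                            # contains() -> True
print(normalize_space(label_text).startswith("User"))  # starts-with() -> True
```

These helpers make text-anchored locators resilient to the stray whitespace and formatting churn that HTML templates tend to accumulate.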

    Let's look again at the Username input field example.
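A sketch of what such a label-anchored locator could look like - the exact expression here is my own assumption, combining the text anchor with the string functions above:

```python
# Build a "smart" locator for the Username field: match the <label> by its
# visible text (whitespace-proof via normalize-space), then take the first
# <input> anywhere after it in document order.
label = "Username"
smart_xpath = (
    f"//label[contains(normalize-space(.), '{label}')]"
    "/following::input[1]"
)
print(smart_xpath)
# //label[contains(normalize-space(.), 'Username')]/following::input[1]

# With Selenium this would be used along the lines of:
#   driver.find_element(By.XPATH, smart_xpath)
```

The `following::` axis frees the locator from any assumptions about the container structure between the label and its input.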

    It might look a bit complex, but once you start using this approach you will see that the stability of your tests increases significantly. Making your XPath locators smarter will save you a lot of maintenance: they expect changes and handle them elegantly. I gathered some metrics, and it turns out that for a build of 200 unique test runs taking 54 minutes, execution was only 44 milliseconds slower than before. A small price to pay, since the browser's XPath engine performs the calculations, not your code.
