Thursday, May 30, 2019

Grapefruit series - Editor's choice



Going forward with the Get to the Point, bite-sized content blog posts. Today, I want to talk about other professions where testing is given a more rightful place. Just think about it! If you are a newspaper journalist, who reviews, corrects and approves what should be delivered to the customer? You are right - your editor!

Imagine you are a gourmet chef and you want your restaurant among the top "must visit" spots. Who helps customers understand the quality of the food, service and culture at your place? Someone who analyses all this and publishes their findings? The food writer/critic does!

Police detectives are by nature investigators. They often collect information to solve crimes by talking to witnesses and informants, collecting physical evidence, or searching records in databases. As we all know, this is quite a respected and honourable role in law enforcement!

Nowadays, in "young people's music", most songs feature rap. A rapper is someone who finds the minimum of rhymes to express tacit knowledge. You need deep thinking to shine!

I have been around long enough to see that professional software testers are not treated with even half the respect we deserve. Is it our fault that we let others decide our importance!?

Monday, April 1, 2019

Grapefruit series - My EGO in software testing



It's time for the next Get to the Point, bite-sized post. Today I really want to put my (problems with) ego in the spotlight. After all, it is April Fools' Day, so what better time to do that!?
 


Let's talk QA shamanism

"Look ma, no hands! I can sit on the keyboard with my ass - that's a test, right?" I see people who claim they are skillful and knowledgeable, so-called gurus, put a veil of mystery over what testing is. Maybe they hope to start a cult and sell snake oil. You and I are clearly NOT in the same (QA) business!

No! Software testing is not rocket science; it is natural. And it should be fun! Testing is everything but voodoo, and we don't need witch doctors. The product or service we are testing is not caused by witchcraft. So, let's say I'm a tech lead who has known only one single technology in his entire career. Why, then, do I preach to others how automation should be done in testing, and form and propose solutions to all of other people's problems?


I have only 50 visits on my blog today. Should I post content that is just an eye-candy aggregate of information I found on the web? Would this offend the knowledgeable reader!? On the other hand, as a receiver of such great knowledge - how intelligent am I, really? You want an example? Great minds that have had a huge impact on development and DevOps in the past decades advocate the Page Object Model as The solution. POM is great when applied correctly, in a very narrow scope. One such case is pywinauto and desktop apps, where your selectors are an integral part of your GUI abstraction.
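To make that narrow-scope point concrete, here is a minimal Page Object sketch with entirely hypothetical names (LoginPage, a duck-typed driver exposing find()): the selectors live inside the page class, so tests never touch raw locators. This is an illustration of the pattern, not any particular framework's API.

```python
class LoginPage:
    """Page object: wraps the login screen. The driver only needs a
    find(selector) -> element method, so any backend (Selenium,
    pywinauto, a fake) can sit underneath."""
    USERNAME = "#username"   # selectors are part of the GUI abstraction
    PASSWORD = "#password"
    SUBMIT = "#login-btn"

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, user, password):
        self.driver.find(self.USERNAME).type(user)
        self.driver.find(self.PASSWORD).type(password)
        self.driver.find(self.SUBMIT).click()


class FakeElement:
    """Stand-in element that records every interaction."""
    def __init__(self, log, selector):
        self.log, self.selector = log, selector

    def type(self, text):
        self.log.append((self.selector, "type", text))

    def click(self):
        self.log.append((self.selector, "click"))


class FakeDriver:
    """Stand-in driver, useful for checking the page object itself."""
    def __init__(self):
        self.log = []

    def find(self, selector):
        return FakeElement(self.log, selector)
```

Because the selectors are private to the page class, a layout change touches one file, not every test.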


What's the point here?

We don't need fancy-schmancy terms, but simpler ones, so that more and more people can engage with the software testing domain. All I'm saying is that I shouldn't overengineer testing just because my EGO needs it.

Tuesday, February 5, 2019

Grapefruit series - Harness AI in your Testing mindset



Let's first look at startups and their Dev-centric approach. Cold facts show that besides core development, there are few (dedicated) gravitating roles: PM, SysAdmin, QA, Support.

In such fast-paced and agile environments, one can see future trends. For instance, how AI affects our daily IT jobs. As automation testers, at some point I guess we have all asked ourselves: "Am I automating myself out of a job?". I know I did automate away an automation test engineer in the past. And the scary part is: it took just a couple of days, an image recognition library and Selenium. The solution was imperfect and slow, but good enough for the business. She had to go. More and more IT tasks will be done by lines of code written automatically by a machine.
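The image-recognition part of that story can be reduced to a tiny, library-free sketch. The naive pixel-diff below is my own illustration: a real setup would take screenshots through Selenium and compare them with something like Pillow or OpenCV, but the idea is the same.

```python
def image_diff_ratio(expected, actual):
    """Fraction of differing pixels between two same-size pixel grids.
    Here the grids are plain lists of rows, so the sketch stays
    library-free; in practice they would come from screenshots."""
    total = diffs = 0
    for row_e, row_a in zip(expected, actual):
        for pe, pa in zip(row_e, row_a):
            total += 1
            if pe != pa:
                diffs += 1
    return diffs / total if total else 0.0


def looks_the_same(expected, actual, tolerance=0.01):
    """'Good enough for the business': allow up to 1% pixel drift
    for anti-aliasing and rendering noise."""
    return image_diff_ratio(expected, actual) <= tolerance
```

Imperfect and slow, exactly as described, yet a loop of "navigate, screenshot, compare" covers a surprising amount of regression checking.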

For a very long time now, we have been at our first stop on the way to AI: automation. Ask a veteran sysadmin what their daily life was like 10 years ago, and you will realize just how much automation has already happened in the field. Some things you might not even think of, simply because we now take them for granted. Need an example? In the "old days", a website of medium complexity needed a team; now a single resource (a freelancer) can do a job that took a team of three or four just a decade ago. The technologies require fewer and fewer system administrators to set them up, configure, maintain and scale them. You no longer need to take extra care of monitoring your environments or scaling the system; it is done automatically according to expected usage (predictive provisioning).

Technology has been steadily impacting the jobs of project managers for years now, too. Just a few quick examples: offloading truly routine tasks to increase value, coordinating tasks to increase efficiency, and collecting updates from the team to produce reports or raise triggers. Budget is a big part of any project, and AI tools can now chalk out the most optimal and financially viable schedule and budget for any kind of project based on projection modelling techniques. Automatic project trackers (like Timely) are showcasing more and more benefits.

Support divisions and workflows have also changed rapidly in the past years. I have witnessed the first levels of a service desk blown away by chatbots and FAQ automation. The digital transformation took only a few weeks, and there was no significant gap in operations. Users got faster answers, reduced research and predictive insights for just a fraction of the old cost. The company looked beyond reaping first-contact efficiencies and invested in spotting patterns that help the team perform root cause analysis to prevent issues.

There are countless ways that machine learning can benefit a software tester. The thing is, one cannot simply take AI algorithms and apply them to another game as-is (even though they are more general-purpose than in the past). As engineers, we should embrace this power and put our human touch into the new service we provide. Advances in the AI field are exponential.
New challenges are rising, and the trick is to figure out what skills those challenges require. Here is one: utilising chatbots in software testing! How can we use the domain knowledge incorporated in them!? How do we harvest the user interactions (expectations, questions) and convert them into test cases!?
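One possible shape for that harvesting step, as a hedged sketch: the log format and field names below are my own assumptions, not any real chatbot's API. Answered questions become draft test-case stubs; unanswered ones become candidates for exploratory charters.

```python
def interactions_to_test_cases(chat_log):
    """Turn harvested user interactions into draft test artifacts.

    chat_log: list of (user_utterance, bot_answer) pairs, where
    bot_answer is None when the bot had no answer. A missing answer
    marks a gap in the encoded domain knowledge, so it is routed to
    an exploratory-charter list instead of a scripted test case."""
    cases, charters = [], []
    for question, answer in chat_log:
        if answer is None:
            charters.append(question)
        else:
            cases.append({
                "title": f"User asks: {question}",
                "expected": answer,
            })
    return cases, charters
```

The real work is in cleaning and clustering the utterances, but even this trivial split shows how user expectations can seed a test suite.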

Thursday, January 10, 2019

Grapefruit series - Challenge roles



Get to the Point, bite-sized content is what this and the next posts will be all about. After all, this is blogging, not scientific research. Looking back at the last two years, honestly, I don't have the time to continue with detailed articles. The idea is to write more regularly (thanks to the narrow topic coverage) and to share my raw thoughts and opinions.

The very first thing I would like to address is the roles we all have in our teams. Having a single role in a single team - that works too. Growing is part of our career, and one way to grow is moving into new roles. Taking ownership of roles that were not held by us before (whether they already exist in the team or not). You need to deserve a role and keep it; making it official is just a detail. So, if you are not in the place you wanna be - challenge the roles (NOTE: not the positions).

At some point, this should become a culture. The strongest roles rule the team through leadership and competence. The strongest roles have everything to prove, every day. Foster a way to learn and grow alongside your teammates. Never assume that your title gives you comfort and the final say. The more important your role is, the more you have to prove to keep it. Look for the ones who challenge and level up your game.

Thursday, June 8, 2017

HTTP API Backend tests with Ruby and NodeJS



No more outdated API documentation. This is the promise Dredd makes. If you have ever had your hands on a regression test suite for a RESTful backend, you know how tiresome it can be just to keep up with all the changing models. Furthermore, until now there wasn't really a testing framework that set standards in this domain. In general, Dredd is a language-agnostic command-line tool for validating an API description document against the backend implementation of the API. It reads the description and, step by step, validates whether your API implementation replies with responses as they are described. Just enough to call it our core engine. Dredd supports two API Description Formats:
• API Blueprint
• OpenAPI 2 (formerly known as Swagger)


The latter one is my favorite, since it works with yml files (as do the modern CI servers). We can work outside the NodeJS tech stack, so let's try Ruby. The hooks and the documentation can be found here. Going into Ruby's world, we will try to keep the good practices: chruby as the default Ruby version manager, Rake as our build utility, and a couple of linters to keep us from potential (newbie) errors. Configuring TravisCI is straightforward for our GitHub repo.
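Stripped of all tooling, the core check Dredd performs - "does the actual response match the described one?" - can be sketched in a few lines. This is a deliberately tiny illustration of the idea, not Dredd's actual implementation, and the dict shapes are my own invention.

```python
def validate_response(described, actual):
    """Compare an actual HTTP response against the one promised in an
    API description. Returns a list of human-readable failures; an
    empty list means the docs still tell the truth.

    described: {"status": int, "body_schema": {field: expected_type}}
    actual:    {"status": int, "body": {field: value}}"""
    failures = []
    if described["status"] != actual["status"]:
        failures.append(
            f"status: expected {described['status']}, got {actual['status']}"
        )
    for field, expected_type in described.get("body_schema", {}).items():
        if field not in actual["body"]:
            failures.append(f"body: missing field '{field}'")
        elif not isinstance(actual["body"][field], expected_type):
            failures.append(f"body: '{field}' should be {expected_type.__name__}")
    return failures
```

Run that against every documented endpoint on each build and outdated documentation fails the pipeline instead of rotting silently.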



Dredd can work as a standalone tool, but as we know, our test harness should be layered. As with most software, the backend we are going to test is quite complex, so having a ubiquitous language is a must. As a battle-tested solution, I will use Gherkin. There are two major options here - Cucumber and Robot Framework. Either will do, but since I'm already on Ruby, I will go with the native BDD framework support (setting up Robot with Ruby is a bit tricky, too).


I call the DSL layer we need to implement the domain model layer. Since this is unique to every system, there is no way to provide working out-of-the-box samples; context is king, as you know. In the end, all that is left for us is some glue code that will assemble the user journeys (flows).
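A minimal illustration of such glue code, with made-up domain steps (register, add_to_cart, checkout) standing in for a real system's domain model:

```python
# Hypothetical domain-model layer: each step is a small named function
# that receives and returns the journey context.
def register(ctx):
    ctx["user"] = "new-user"
    return ctx

def add_to_cart(ctx):
    ctx.setdefault("cart", []).append("grapefruit")
    return ctx

def checkout(ctx):
    # An order goes through only when a user and a non-empty cart exist.
    ctx["order_placed"] = bool(ctx.get("user")) and bool(ctx.get("cart"))
    return ctx


def run_journey(*steps):
    """The glue code: assemble a user flow from domain steps and run it
    against a fresh context."""
    ctx = {}
    for step in steps:
        ctx = step(ctx)
    return ctx
```

The Gherkin scenarios then map one step definition to one domain function, and new journeys are just new compositions.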
Another big concern of ours is the test environment and the infrastructure we need to set up and maintain. Utilizing the IaC concept and Docker containers, we can keep everything in a single repository under version control.


Pro tip: don't forget to set the container port on the host, instead of your local one, when running on your CI server via the Docker executor. At the same time, we should keep the Ruby hooks-worker handler bound to the same container.

Monday, December 5, 2016

Zombie lands: Selenium recorders



Sometimes even the best intents and plans may end up being just that. I'm not going to emphasize how bad an idea it is to rely on capture-replay tools, codeless tests and code generators in your automation testing strategy. Especially in a product company with a long-lived platform and production environments. Enough said. But those are not the only companies where you may work as a QA engineer. Some of them provide software services, virtual teams, or whatever marketing calls outsourcing in order to be different. Not better. This has its own advantages, such as the vast variety of technologies and teams to work with. Experience matters. Also, boring testing work is a rare sight there.

I will try to stay on the business side for this post, even if this blog is technically oriented. So, what is the main purpose of having QAs on your project? Is it about testing developers' code? Automating tests? Or infrastructure? Reporting? Someone to ask why we released bugs to the client? None of this really matters when you realize that QAs don't assure quality.

    Yes, you heard me saying it.

QAs simply report your software's health at the end of the day. That's it. They don't fix bugs or implement your new features. Why do I say it!? Quality should be built in, not bolted on. Like it or not, quality is a responsibility we all share. Yes, Mr. PM - yours too. There is a slim chance that your team is truly cross-functional and every team member can finish any simple backlog task (regardless of its type). If this is your case - congrats! You've made it to the big league. What happens if you, as a QA, work in a company where the majority of projects are short-lived, no more than 3 months, with no real change requests or production environment to support? At the same time, it's quite possible that you are also responsible for projects that are the exact opposite. This is normal - business wants to make good use of your skills. Not taking "small" projects seriously is a poor decision; only their budget is different. Automation done right is an expensive luxury few can afford. Being agile, even if your environment is not, should be a must. After all, our responsibilities for both types of projects are the same.

As advocates of the end user, we should understand what the core business is all about. And hopefully do our best to assure it is delivered with acceptable quality. Holding the golden hammer of automation, we sometimes desire to bend everything with it. Is it really profitable to have a complex and powerful framework and setup if no one else (besides you) is willing to learn, use and maintain it? Good chances are you will find such people in time, but the project's deadline is there, now. It is important to deliver, and in many cases (I've witnessed) quality is the first priority to be cut. And this is a real-life story for many of us.

What are the alternatives!? I hate to admit it, but my best course of action would be a Selenium recorder. I do think there is a perfect storm in which this makes sense. I know many of you will stop reading after this line, but bear with me. Consider this:

• only one manual QA, with no coding skills whatsoever
• website with little or no 3rd-party integrations
• fairly straightforward business flow
• static web pages, as templates are prepared long before back-end coding kicks in
• reviews were made and the layout was approved as final
• no QA environments, just a single Staging one
• no meaningful CI setup
• no real time for full cross-browser and platform runs
• lack of a mature front-end unit testing culture
• short testing time frame (a couple of weeks tops, out of a budget with months dedicated to development); this was already agreed with the client, and he is happy

If all of the above are present, I would go with a Selenium recorder. Every automated test case should justify its price at any given time. I prefer 70% of the functionality covered by capture-replay tests over 20% covered via a complex UI testing framework and setup. Code coverage is always a sensitive topic, but if you keep to the test pyramid you should be fine. I know what some of you may say:

– If your framework were that good, a manual QA could create the tests at the BDD level. Simple as that.

Fair enough, but if your framework is designed well, your tests shouldn't rely on imperative BDD, right? OK, I lost another five good QAs reading this post. What I am trying to convince you (and myself) of is that sometimes mindless tests are not so evil. No one can deny that recorded tests are cheap: easy to create and execute, everyone can do it, and no complex setup is required. Plug-and-play solutions have their place under the sun. I will borrow a good practice from DevOps, which states that creating a new environment should be easier than fixing an existing one. Do you see how this applies in our context? The recorded tests should not be fixed! That is not in their nature. They should be used only while they fit. The moment you start adding configuration to them, you'd better start using your complex, layered test framework; it is better suited and designed for this anyway. Don't fall into the trap of adding too much gunpowder to your Selenium IDE setup. I know how tempting it is to add sugary plugins, but they only hide the problem. If you need more complex tests, recorders are not for this project. The same goes for Selenium Builder. I don't consider this a retrograde way of thinking, just pragmatic.

As a bottom line, I would like to go back to the roots of automation: it is designed to free up time and assist your testing. Recorded scripts will spare the manual QA staff the boring and repetitive tests, and they will have a bit of extra time for the more important exploratory testing.

Monday, July 25, 2016

Shift right: Monitoring made easy

It is all about reliability and scalability. While one of our servers may go down, the application can't, so testing has to be in real time. Mainly we are looking at two KPIs: functionality and performance. Most importantly, we also have real-time feedback to raise issues that might not be detected by previous testing or tools. With TestOps rising, testing in production becomes an essential piece of our overall quality plan.
Many companies are already giving a lot of attention to the Shift-left transformation, but I think that making a Shift-right, with proactive monitoring and alerting after our releases into the wild, is just as important. A while ago I did a similar task, but using NewRelic Synthetics.
There are enough tools and platforms in the open-source stack to do it ourselves quickly and easily. This particular implementation is more of a suggestion, and I'm sure you can do it with your own toolbox. The architecture is really quite simple, but backed by the powerful IaC concept.



    I prefer to use three types of monitors:

• Ping: simply checks whether an application is online. It uses an HTTP client to make requests to our service.
• API: the HTTP client is not a full browser and does not execute JavaScript. Used to monitor our API endpoints; this can ensure that our app server works, in addition to the website.
• Virtual browser: used for more sophisticated, customized monitoring. With these monitors, we can set up a check that navigates our website and ensures specific resources are present.
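The ping monitor is little more than a wrapper around an HTTP client. In this sketch the client is injected as a callable, so the check stays trivial to test and to schedule from a CI job; the function and field names are my own, and in practice `fetch` would wrap something like `urllib.request.urlopen`.

```python
def ping_monitor(fetch, url, timeout=5):
    """Ping-style check: the service is 'up' if it answers with a 2xx.

    fetch: callable (url, timeout) -> HTTP status code; injected so the
    monitor can run against a real HTTP client or a fake one."""
    try:
        status = fetch(url, timeout)
    except Exception as exc:
        # Connection refused, DNS failure, timeout - the service is down.
        return {"url": url, "up": False, "reason": str(exc)}
    return {"url": url, "up": 200 <= status < 300, "reason": f"HTTP {status}"}
```

A scheduled job can then run a list of these checks and fail the build (triggering the email alert) whenever any result has `"up": False`.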


The central part is dedicated to the engine, which is composed of Jenkins, Docker and GitLab. Deciding which (cloud) servers should be used is up to you. Integrating those three is really straightforward. The major benefits are powerful execution, clean environments and a central repository. Alerts go out via email, thanks to Jenkins' built-in functionality.
For the first two types of monitors, we need a REST client like Postman. With this tool we can easily create and organize our tests in collections. The execution and reporting are handled by Newman. We can output the results in HTML, JSON and XML. The last one is JUnit-formatted, so it can be plugged into Jenkins dashboards.
In order to get a virtual browser, we will need a separate container with NodeJS, Xvfb and browsers on it. I use my own Dockerfiles to build the containers I need; it turns out that sometimes it is better to have a custom solution. The orchestration of our containers is done via Docker Swarm.

    That’s all. Thanks for reading.