Hi guys, in this article I'll try to share with all of you my approach to SQA for mobile applications. I'm still a novice with mobile applications, so if you find any gaps, please tell me about them so I can correct them.
In this "Custom" Mobile Testing Framework I use the MVC design pattern to help me with the Testing system. I believe that the QA Engineers should create Software Systems for each Software under test. So basically first I'll show you the "big picture" of the framework
Features:
- Automate user interaction: Simulate user interaction programmatically by using the SeeTest software tool
- Specify expectations with screenshots
- Document test results (passed/failed tests, for each test: stepwise documentation with screenshot and description)
- Separation between the ImageRepository and the testing logic, to support multiple languages while keeping the same test logic
- Determine code coverage (line coverage, function coverage, branch coverage, relative coverage contribution per test case)
- Analyze system calls (e.g. list writes to file system)
In general, we must understand the platform. Testing on Android or Windows Phone is not the same as testing on iOS. The testing tools and frameworks available for each platform are significantly different (e.g. Android uses Java while iOS uses Objective-C, UI layouts are built differently on each platform, and UI testing frameworks also work very differently between platforms). I'm currently working on a Windows Phone 8-based application.
A mobile testing strategy is not complete without testing the integration between server backends and mobile clients. This is especially true when the release cycles of the mobile clients and backends are very different. A replay test strategy can be very effective at preventing backends from breaking mobile clients. The theory behind this strategy is to simulate mobile clients by having a set of golden request and response files that are known to be correct. The replay test suite should then send golden requests to the backend server and assert that the response returned by the server matches the expected golden response. Since client/server responses are often not completely deterministic, you will need to utilize a diffing tool that can ignore expected differences. To make this strategy successful you need a way to seed a repeatable data set on the backend and make all dependencies that are not relevant to your backend hermetic. Using in-memory servers with fake data or an RPC replay to external dependencies are good ways of achieving repeatable data sets and hermetic environments.
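To make this more concrete, here is a minimal sketch of what one such replay test could look like in C# with NUnit. The golden file names, the backend URL and the Normalize() helper are my own illustrative assumptions; you would replace them with your recorded requests/responses and with the non-deterministic fields of your own payloads.

```csharp
using System.IO;
using System.Net.Http;
using System.Text;
using System.Text.RegularExpressions;
using NUnit.Framework;

[TestFixture]
public class BackendReplayTests
{
    private static readonly HttpClient Http = new HttpClient();

    [Test]
    public void GoldenLoginRequest_ReturnsGoldenResponse()
    {
        // Golden files recorded from a known-correct client session (hypothetical paths).
        string goldenRequest = File.ReadAllText(@"Golden\login_request.json");
        string goldenResponse = File.ReadAllText(@"Golden\login_response.json");

        // Replay the recorded client request against the backend under test.
        var content = new StringContent(goldenRequest, Encoding.UTF8, "application/json");
        var reply = Http.PostAsync("http://backend-under-test/api/login", content).Result;
        string actual = reply.Content.ReadAsStringAsync().Result;

        // Diff after normalizing the fields we expect to differ on every run.
        Assert.AreEqual(Normalize(goldenResponse), Normalize(actual));
    }

    // Strips the expected non-deterministic parts so the diff only fails on
    // real contract changes; adjust the patterns to your own responses.
    private static string Normalize(string json)
    {
        json = Regex.Replace(json, "\"timestamp\"\\s*:\\s*\"[^\"]*\"", "\"timestamp\":\"<ignored>\"");
        json = Regex.Replace(json, "\"sessionId\"\\s*:\\s*\"[^\"]*\"", "\"sessionId\":\"<ignored>\"");
        return json;
    }
}
```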
Let's get back to the MVC design pattern. We need to determine how we are going to use our Models, Views and Controllers.
As I said, I am using C#, Visual Studio 2012 and SeeTest. After everything is ready (download and setup), I connect the real WP8 mobile device via USB and record a test via SeeTest Studio. Depending on our QA team, we can divide the roles to suit us best. If we are two QA Engineers (one manual, one automation), I'd give the test case design and the generation of Views to the more experienced manual tester. The test implementation and the environment setup (.NET, VS2012, Tortoise SVN and SeeTest) I'd do myself. The heavy documentation is divided equally between the team members.
In order to get our Views, we use the SeeTest Object Spy and divide each WP8 Panorama, according to the business logic, into separate object repositories (Login, News...). This way they can be easily reused and maintained.
The Models I get from the SeeTest Visual Studio plug-in, such as Client and ProjectBaseDirectory.
More details can be found if you navigate to Export Code to C# (MSTest, NUnit). By doing this I am able to extend this framework with behavior-driven development (BDD) and use SpecFlow, as sketched below.
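For illustration, this is roughly what such a SpecFlow extension could look like. The scenario text and the LoginController wrapper are hypothetical; only the [Binding]/[Given]/[When]/[Then] attributes come from SpecFlow itself.

```csharp
// Feature (Gherkin), kept here as a comment for brevity:
//   Scenario: Valid user logs in
//     Given the application is started on the device
//     When the user logs in with name "qa_user" and password "secret"
//     Then the News panorama is displayed
using TechTalk.SpecFlow;

[Binding]
public class LoginSteps
{
    // Hypothetical wrapper around the SeeTest client; not part of
    // SpecFlow or SeeTest themselves.
    private readonly LoginController _controller = new LoginController();

    [Given(@"the application is started on the device")]
    public void GivenTheApplicationIsStarted()
    {
        _controller.StartApp();
    }

    [When(@"the user logs in with name ""(.*)"" and password ""(.*)""")]
    public void WhenTheUserLogsIn(string name, string password)
    {
        _controller.Login(name, password);
    }

    [Then(@"the News panorama is displayed")]
    public void ThenTheNewsPanoramaIsDisplayed()
    {
        _controller.AssertNewsPanoramaVisible();
    }
}
```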
In the Controller I implement the actions/steps that use the Views and the Models. Below is sample test case code. In this Login functionality scenario, the test navigates through different panorama views (using different object repositories and elements), so in order to keep the code quality high I've added two properties, one for each repository name (the SeeTest Model uses dictionaries to keep the data).
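Here is a hedged reconstruction of such a test case in C# with MSTest. The repository names, element names, device name and paths are my assumptions, and the way a repository name is combined with an element name is illustrative; the Client calls follow the pattern of the code SeeTest exports, but double-check the signatures against your SeeTest version.

```csharp
// Add the using for the SeeTest client namespace from your exported code.
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class LoginTests
{
    private Client _client;

    // One property per object repository, so the steps below never
    // hard-code a repository name (the SeeTest Model keeps the data
    // in dictionaries, as mentioned above).
    private string LoginRepository { get { return "Login"; } }
    private string NewsRepository { get { return "News"; } }

    [TestInitialize]
    public void SetUp()
    {
        // Constructor arguments follow the code SeeTest exports.
        _client = new Client("localhost", 8889, true);
        _client.SetProjectBaseDirectory(@"C:\QA\WP8Project");    // hypothetical path
        _client.SetReporter("xml", @"C:\QA\Reports", "LoginTest");
        _client.SetDevice("MyWP8Device");                        // hypothetical device name
    }

    [TestMethod]
    public void Login_WithValidCredentials_ShowsNewsPanorama()
    {
        // How the repository name maps onto the element lookup is an
        // assumption; adjust it to how your repositories are addressed.
        _client.ElementSendText("DEFAULT", LoginRepository + ".UserName", 0, "qa_user");
        _client.ElementSendText("DEFAULT", LoginRepository + ".Password", 0, "secret");
        _client.Click("DEFAULT", LoginRepository + ".LoginButton", 0, 1);

        // The scenario hands over from the Login panorama to the News
        // panorama, so the last step switches to the second repository.
        Assert.IsTrue(_client.WaitForElement("DEFAULT", NewsRepository + ".Headline", 0, 10000));
    }

    [TestCleanup]
    public void TearDown()
    {
        _client.GenerateReport();
        _client.ReleaseClient();
    }
}
```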
Now let's talk a little bit about hybrid test automation frameworks. They are so important to me because I see in them two of my favourite concepts: software evolution and autonomic computing. Basically, a hybrid testing framework combines keyword-driven testing, data-driven testing and modularity-driven testing, and in this Custom framework I take and extend exactly that, as shown below:
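Here is a small sketch of how the three styles can be combined: the keyword table gives the keyword-driven part, the externally supplied step data gives the data-driven part, and the repository/View references give the modularity-driven part. The TestStep shape and the keyword names are illustrative; in the real framework the actions would delegate to the SeeTest Client instead of printing.

```csharp
using System;
using System.Collections.Generic;

public class TestStep
{
    public string Keyword { get; set; }     // keyword-driven part
    public string Repository { get; set; }  // modularity-driven part (which View)
    public string Element { get; set; }
    public string Data { get; set; }        // data-driven part (external test data)
}

public class HybridRunner
{
    private readonly Dictionary<string, Action<TestStep>> _keywords;

    public HybridRunner()
    {
        _keywords = new Dictionary<string, Action<TestStep>>
        {
            { "Click",    s => Console.WriteLine("Click {0}.{1}", s.Repository, s.Element) },
            { "TypeText", s => Console.WriteLine("Type '{0}' into {1}.{2}", s.Data, s.Repository, s.Element) },
            { "WaitFor",  s => Console.WriteLine("Wait for {0}.{1}", s.Repository, s.Element) }
        };
    }

    // Each row of the step table is dispatched to the matching action.
    public void Run(IEnumerable<TestStep> steps)
    {
        foreach (var step in steps)
        {
            _keywords[step.Keyword](step);
        }
    }
}
```

In practice the steps would be loaded from an external source (CSV, Excel...), which is what makes the data-driven part pluggable without touching the test logic.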
As for the test data we will be using: a good option is to create a test data generator tool! This will increase our flexibility to test different scenarios in the future. Let's take as an example a custom random emails/names/phones/addresses generator, sketched below.
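A minimal sketch of such a generator; the sample names, domains and phone format are made up, and the important detail is the explicit seed, so a failing run can be replayed with exactly the same generated data.

```csharp
using System;

public class TestDataGenerator
{
    private static readonly string[] FirstNames = { "Anna", "Boris", "Clara", "Dimitar" };
    private static readonly string[] Domains = { "example.com", "test.local" };
    private readonly Random _random;

    // Pass (and log) an explicit seed so failures are reproducible.
    public TestDataGenerator(int seed)
    {
        _random = new Random(seed);
    }

    public string NextName()
    {
        return FirstNames[_random.Next(FirstNames.Length)];
    }

    public string NextEmail()
    {
        return string.Format("{0}.{1}@{2}",
            NextName().ToLowerInvariant(),
            _random.Next(1000, 9999),
            Domains[_random.Next(Domains.Length)]);
    }

    public string NextPhone()
    {
        // Made-up format; adapt it to the locales the app supports.
        return string.Format("+1 555 {0:D4}", _random.Next(10000));
    }

    public string NextAddress()
    {
        return string.Format("{0} {1} Street", _random.Next(1, 200), NextName());
    }
}
```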
And this is the workflow of the Framework:
This approach is still in "development", so I plan to update the article and improve it.