What tests need to be automated?
While practically everything can be automated, it is important to understand what needs to be automated in order to get more value from the effort spent on automation. It is equally important to understand which tests need to be automated first.
Over 30% of automation projects in the industry end up being thrown away for various reasons, the most common being the selection of the wrong features / tests for automation.
Frequently executed tests
Everyone accepts that these are the best tests to start automation with. If you have 10 features in your application with 1000 total test cases, you need to sort the features based on their frequency of execution to pick the right tests for automation. It is a good idea to start by calculating the frequency from historical data. This data gives us an insight into how many regression iterations were executed for each feature in the last year. It is also important to understand the trend so that we can estimate the frequency for the upcoming year.
Total regression executions per quarter:

| Feature   | Q1 2013 | Q2 2013 | Q3 2013 | Q4 2013 |
|-----------|---------|---------|---------|---------|
| Feature 1 | 8       | 6       | 6       | 5       |
| Feature 2 | 3       | 4       | 5       | 5       |
| Feature 3 | 6       | 3       | 2       | 5       |
| Feature 4 | 5       | 4       | 5       | 4       |
| Feature 5 | 1       | 2       | 1       | 1       |
In the above table, I would probably give high importance to Feature 2. Though its execution frequency is lower than that of Feature 1, the trend shows that the total executions are expected to increase in the coming quarters. The execution count for Feature 3 fluctuates a lot, indicating that the feature went through an unexpected level of change in the last few quarters, which makes it difficult to estimate future regression cycles. I would probably give a little less importance to Feature 3.
Once we prioritize the features, we can pick the top 2-3 features (based on the resources available) and initiate automation.
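As a rough illustration (not from the original article), the sketch below ranks features by combining total regression executions with a simple last-quarter-minus-first-quarter trend, so a rising feature like Feature 2 scores higher than its raw total alone suggests. The `trend_weight` value is an arbitrary assumption and would need tuning for real data.

```python
# Minimal sketch: rank features for automation using quarterly
# regression-execution counts. Numbers mirror the sample table above;
# the scoring formula and weight are assumptions, not a standard method.

quarterly_executions = {
    "Feature 1": [8, 6, 6, 5],
    "Feature 2": [3, 4, 5, 5],
    "Feature 3": [6, 3, 2, 5],
    "Feature 4": [5, 4, 5, 4],
    "Feature 5": [1, 2, 1, 1],
}

def score(counts, trend_weight=2.0):
    """Combine total executions with a simple trend estimate.

    The trend is the last quarter minus the first quarter; a rising
    trend adds to the score, a falling one subtracts from it.
    """
    total = sum(counts)
    trend = counts[-1] - counts[0]
    return total + trend_weight * trend

ranked = sorted(quarterly_executions.items(),
                key=lambda item: score(item[1]),
                reverse=True)

for feature, counts in ranked:
    print(f"{feature}: total={sum(counts)}, "
          f"trend={counts[-1] - counts[0]}, score={score(counts):.1f}")
```

With these numbers, Feature 2 ranks above Feature 1 despite its lower total, which matches the reasoning above.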
Top priority tests first
If the identified features have more than 100 test cases each, it takes a few person-weeks to complete the automation for the current feature before we can pick up the next one. In such cases, it adds more value to first automate all high-priority cases from all the identified features and only then automate the medium- and low-priority cases.
If a high-priority test fails, it has a great impact on the business, whereas we can still manage customers even if there are multiple UI issues in the application. It is therefore critical to ensure that no high-priority test fails in the application.
| Feature   | Total Cases | High Priority | Medium Priority | Low Priority |
|-----------|-------------|---------------|-----------------|--------------|
| Feature 1 | 200         | 60            | 100             | 40           |
| Feature 2 | 200         | 50            | 120             | 30           |
| Feature 3 | 200         | 30            | 110             | 60           |
We recommend automating the high-priority tests for Features 1, 2 & 3 before automating the medium-priority cases from Feature 1.
Tests that take more execution time
If, for some reason, priorities are not set for the test cases, or we do not wish to follow that approach, another good way is to prioritize the test cases based on their execution time. The foremost purpose of automation is to save time, so we obviously want to quickly automate the tests that take a very long time to execute manually.
We might have already performed regression multiple times by the time we decide to automate (except in the case of TDD), so we are well aware of the time it takes to execute each test. If it is not possible (or too time consuming) to identify the execution time for each individual test case, we can estimate the effort for each scenario, sub-feature, or group of related tests and divide that effort by the total number of cases to arrive at a per-test-case execution time. Once we have the execution time for each test case (or set of cases), we can sort them by the time they take and start automation with the top 100 test cases. We can pull the next set of 100 cases once the initial set is automated.
Execution Time Based on Test Cases:
| Feature   | Test Case   | Execution Time (in Minutes) |
|-----------|-------------|-----------------------------|
| Feature 1 | Test Case 1 | 15                          |
| Feature 1 | Test Case 2 | 10                          |
| Feature 1 | Test Case 3 | 5                           |
| Feature 1 | Test Case 4 | 25                          |
| Feature 1 | Test Case 5 | 18                          |
| Feature 1 | Test Case … | …                           |
| Feature 2 | Test Case 1 | 35                          |
| Feature 2 | Test Case 2 | 6                           |
| Feature 2 | Test Case 3 | 12                          |
| Feature 2 | Test Case 4 | 25                          |
| Feature … | Test Case … | …                           |
Execution Time Based on Test Sets:
| Feature   | Test Set   | Total Cases | Total Execution Time (in Minutes) | Execution Time per Case (in Minutes) |
|-----------|------------|-------------|-----------------------------------|--------------------------------------|
| Feature 1 | Test Set 1 | 25          | 450                               | 18                                   |
| Feature 1 | Test Set 2 | 10          | 225                               | 23                                   |
| Feature 1 | Test Set 3 | 30          | 510                               | 17                                   |
| Feature 1 | Test Set … | …           | …                                 | …                                    |
| Feature 2 | Test Set 1 | 5           | 40                                | 8                                    |
| Feature 2 | Test Set 2 | 55          | 1050                              | 19                                   |
| Feature 2 | Test Set 3 | 20          | 330                               | 17                                   |
| Feature … | Test Set … | …           | …                                 | …                                    |
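The calculation described above is simple enough to script. The sketch below (an assumed example, not from the article) derives a per-case execution time from test-set level estimates like those in the second table, expands the sets into individual cases, and picks the longest-running 100 cases as the first automation batch.

```python
# Minimal sketch: derive per-test-case execution time from test-set level
# estimates and pick the slowest cases to automate first. Numbers mirror the
# sample table above; everything else is an illustrative assumption.

test_sets = [
    # (feature, test set, total cases, total execution time in minutes)
    ("Feature 1", "Test Set 1", 25, 450),
    ("Feature 1", "Test Set 2", 10, 225),
    ("Feature 1", "Test Set 3", 30, 510),
    ("Feature 2", "Test Set 1", 5, 40),
    ("Feature 2", "Test Set 2", 55, 1050),
    ("Feature 2", "Test Set 3", 20, 330),
]

# Expand each set into individual cases carrying the averaged execution time.
cases = []
for feature, test_set, total_cases, total_minutes in test_sets:
    per_case = total_minutes / total_cases
    for i in range(1, total_cases + 1):
        cases.append((feature, f"{test_set} / Case {i}", per_case))

# Sort by execution time (longest first) and take the first batch of 100.
cases.sort(key=lambda c: c[2], reverse=True)
first_batch = cases[:100]

print(f"Cases in first automation batch: {len(first_batch)}")
print("Slowest case:", first_batch[0])
```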
Tests that break frequently
In addition to the above, we can also consider automating a few additional cases from the remaining features that have failed frequently in the last few releases / builds. If you are using a good defect tracking tool and have a practice of tagging a test case id to each defect, this task will not take much time. Tests that were fixed in a previous build and reopened multiple times in subsequent builds have a higher probability of failing again. Such tests need to be rechecked as often as possible, since the re-test effort keeps growing as the product grows.
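If the defect tracker can export defects with their tagged test case ids, a short script can surface these candidates. The sketch below assumes a hypothetical CSV export with `test_case_id` and `reopen_count` columns; the file name and column names are assumptions, not any real tool's format.

```python
# Minimal sketch: find test cases linked to frequently reopened defects.
# Assumes a hypothetical defect-tracker export "defects_export.csv" with
# "test_case_id" and "reopen_count" columns.

import csv
from collections import Counter

reopen_counts = Counter()

with open("defects_export.csv", newline="") as f:  # hypothetical export file
    for row in csv.DictReader(f):
        reopen_counts[row["test_case_id"]] += int(row["reopen_count"])

# Test cases whose defects were reopened most often are strong automation
# candidates, since they are the most likely to fail again.
for test_case_id, reopens in reopen_counts.most_common(20):
    print(test_case_id, reopens)
```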
By Automation Mentor
We provide hands-on training on automation tools and frameworks.