Friday, March 7, 2014

Is manual testing dead?

Personal disclaimer - I work in a team where every developer follows TDD and is capable of writing browser automation tests in BDD style using Cucumber/Ruby. We have a manual tester on the team and no automation tester. Other teams around me that I closely interact with have no manual testers on the team, only automation testers.

TL;DR - The world of programming is moving very fast towards automating everything under the sun. But IMHO we are far from calling manual testing dead.

The objective of this article is not to reason for or against the above argument. Automation is a big investment and a good investment on most counts, but it may not pay off all the time. It can be useful to understand in which situations that investment does not pay off. There is no rule of thumb for these situations; they are purely based on my experience with software I have built over the years. But before we get into that, let's try to understand what it costs to automate a test (or use case, user journey, scenario, whatever you name it). Formally this is called the "Total Cost of Ownership" of automating a test. It consists of the following:

  1. Cost of automating a test
  2. Cost of maintaining the automated test over time as your code changes
  3. Cost of manually testing edge cases not covered by the automated test

This inherently indicates that manual testing is still not out of the picture, but let's come back to that later. The first point is interesting - the cost of automating a test. If you are automating simple tests through unit tests or integration tests, then this cost is very minimal. The investment here pays off very well and you can afford to automate a large number of tests. The reduced cost is a direct outcome of the maturity of modern IDEs and testing tools. Today, you can hit the ground running with a unit or integration test in no time. It does not take long to build and run tests. It is becoming easier and easier to repeatedly run these tests on more than one machine (thus making continuous integration cheaper). These tests are almost entirely written in the same language that the software under test (SUT) is written in. That makes it even easier. Maintaining these tests as the SUT changes is not hard.
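To make "very minimal cost" concrete, here is what a complete, runnable unit test looks like using Minitest, which ships with Ruby. The `discounted_price` method is a made-up example, not from any real codebase:

```ruby
require "minitest/autorun"

# Hypothetical method under test: applies a percentage discount to a price.
def discounted_price(price, percent)
  raise ArgumentError, "percent out of range" unless (0..100).cover?(percent)
  (price * (100 - percent) / 100.0).round(2)
end

class DiscountTest < Minitest::Test
  def test_applies_percentage_discount
    assert_equal 90.0, discounted_price(100, 10)
  end

  def test_rejects_invalid_percent
    assert_raises(ArgumentError) { discounted_price(100, 150) }
  end
end
```

No browser, no fixtures, no extra infrastructure - the whole thing runs in milliseconds with `ruby discount_test.rb`, which is exactly why the investment in tests at this level pays off so quickly.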

Things become a little more difficult when we move towards browser automation tests that let you automate a complete user journey. Agreed, this field is no longer new and a lot of advancement has happened in this area in the last few years. We now have very stable tools like Watir and Selenium WebDriver that make programmatically interacting with browsers a breeze. But there are still some challenges:
  1. A different set of skills is required to build these tests. If your team is new to this kind of testing, they will take longer in the beginning to automate even simple user journeys
  2. If the SUT changes, it takes longer to make corresponding changes to these tests
  3. In my experience, people prefer using Ruby for automation. So if you are a .NET or Java developer, you need to invest in learning a new language
  4. These tests are not as reliable as unit tests. They may be flaky, and a lot of the time their reliability depends on factors beyond the tests themselves, e.g. load on the machine, response times of the web pages, etc.
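The usual mitigation for the flakiness in point 4 is to poll for a condition instead of asserting once against a page that may still be loading - browser automation tools provide explicit waits for exactly this reason. A minimal hand-rolled version of the idea (a sketch, not any particular library's API) fits in a few lines:

```ruby
require "timeout"

# Re-evaluate a condition until it holds or a timeout elapses, instead of
# asserting once. Polling absorbs variable page load times and machine load,
# which are the usual sources of flakiness in browser tests.
def wait_until(timeout: 5, interval: 0.1)
  deadline = Time.now + timeout
  loop do
    return true if yield
    raise Timeout::Error, "condition not met within #{timeout}s" if Time.now >= deadline
    sleep interval
  end
end

# Illustrative usage in a browser test (assumes a `browser` object):
#   wait_until { browser.div(id: "results").present? }
```

Helpers like this reduce flakiness but cannot eliminate it, which is part of why the TCO of browser tests stays higher than that of unit tests.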

By now, you should have some clarity on what I mean by TCO. A test might be cheap to automate in the first place, but maintaining it over time may be expensive.

Another important point of view when it comes to automated tests is how a test is faring in detecting defects - or how badly it is hiding them. As your code changes, your tests either do not change, change, or become completely irrelevant. If your tests are not changing, it is possible that they are failing to detect defects or are hiding defects. If a test becomes completely irrelevant after a code change, the investment you made in automating it is lost. From this standpoint, it is worthwhile to ask yourself how frequently the code being tested is going to change. You might need to talk to BAs and product owners to get a clear answer, but if you are confident that the code is likely to change a lot in the near future, then you need to be judicious about how far you go with automation (unit tests are exempt from this; I always write unit tests no matter what).

Having said all of the above, I feel there are situations where we come to the conclusion that manual testing is a better option. Below is a list of such situations from my experience:

  1. Testing long running batch jobs - It is difficult to completely automate the testing of long running batch jobs. The core logic of a batch job can be unit tested, but automating a single end-to-end run of the job, though possible, is not feasible and not worth the investment. Even when attempted, in my experience, it does not give enough confidence about the quality of the software and we resort to some level of manual testing in the end.
  2. Testing email content and delivery - Email delivery can be tested to some extent by writing emails to disk instead of delivering them, but then you are not really testing delivery. Also, testing the content of emails and how they render in browser-based email clients vs. desktop email clients is something that cannot be automated reliably enough. Moreover, there is no single standard when it comes to desktop and mobile phone clients. The combinations are just too many, and investing in automating this with the tools available today is not worth it.
  3. Testing of user interfaces - Browser automation tools can confirm the presence of a particular element on a page, but they cannot verify the look and feel of that element. There is some movement happening in this area and people are experimenting with different tools. But again, given the maturity of these tools, automating such testing is not reliable. It is best left for human eyes. Here is a list of tools that let you automate testing of the visual aspects of a web page, by the way.
  4. Browser compatibility testing - Gone are the days when everyone on the planet used IE6. People now use tens of different browsers, and every product owner wants their product to work on any browser out in the market. They do not want to lose revenue because we did not support a browser that a prospective client was using. So testing software on a vast range of browsers is inevitable. If you have some browser automation tests, you can go a step further and run your tests in multiple browsers. There are tools like BrowserStack that let you do exactly that. But this approach is not scalable, given the time it would take to run all your tests on all possible browser combinations. And usually, the number of automated user journeys is not large enough to build confidence that the software works in all browsers.
  5. Testing responsiveness of a website - "Responsive" is the buzzword today. Every website you build has to be responsive (well, not every site, but most sites). Testing the responsiveness of a website is very difficult to automate, because you need to render the site on different devices in order to see how your pages scale. You can use websites that render your site at different device resolutions to show you how your pages scale, but automating that is not easy. This kind of activity is mostly done manually on real devices.
  6. Testing user journeys spanning multiple systems - Enterprise software usually has more than one component intricately communicating with the others. In order to reliably test such systems, the data flowing from one component into another needs to be verified. It is possible to break the problem down into logical units and test each unit with the support of unit tests. But again, you feel the need for one test that runs an end-to-end user journey and ensures that everything is where it belongs. Automating such tests would be time consuming, and the tests would also take a long time to run.
Each of the above situations is something I have experienced. For some, I attempted automating the tests, got frustrated, and gave up. For others, I had to fall back on manual testing after getting issues from production users. All this makes me feel that manual testing is not dead. Rather, the role of manual testing has become more important. Automation has made it difficult for simple defects to creep in by mistake. What remains in the software are the defects that are not easy to find. So today's manual testers have a huge responsibility: finding the defects that automated tests have hidden.


  1. Very thoughtful and thorough analysis.

  2. Nice article... well, it took automation quite a while to catch up... but the momentum is there, and it may take some more time to catch up with technology/platforms. A few comments from me :)
    1. It's always good to write automation tests in the tool's native language (Ruby for Watir and Java for WebDriver), so I would say testers need to be language independent and quick to learn. More community support & documentation are the benefits.
    2. Regarding the email system... you can mock the SMTP server like we do in unit testing (not efficient, but a workaround).
    3. You already mentioned BrowserStack and automated CSS testing. Automated CSS testing is not up to the mark... and in a year or so, I feel these frameworks will become more mature, given CSS 3.0.
    4. I completely agree with your point on "Testing user journeys spanning multiple systems". Even if it's possible to automate such systems... I would say scripts can't be intelligent enough, compared to a manual tester, to check complex paths/flows.

    Hence Manual testing has become more important.

    1. Thanks Anurag.

      On CSS testing - most tools take the approach of comparing PNGs or the styles applied to elements. IMO, this approach is maintenance heavy when the product is going through alpha/beta and changing at a high rate. "Total Cost of Ownership (TCO)" of automated tests is an interesting topic and I will write about it in one of the future posts. TCO is quite high with these tools.

    2. On testing emails - there are several ways to confirm that the code under test is interfacing with the SMTP server correctly. Mocking the SMTP server is one way. We drop the emails in a folder and check that a file is created there every time the code is supposed to send an email. But that does not test whether the email renders correctly. For simple text emails this is not a problem, but for emails where a lot of HTML/CSS is at work, it becomes difficult. Desktop email clients add more complexity, as each one follows a slightly different standard when it comes to HTML/CSS.

  3. Thanks for a great post Suhas! I completely agree with your point that there is always a lot of room for manual testing, and with most of the key situations where manual testing is most appropriate. At the same time, I have some doubts about the first point on your list - the one about long running batch operations. In my opinion, the longer (and costlier) an operation is, the more important it is to automate its testing to free up human resources (which are likely the priciest ones), be able to scale testing, and so on. I acknowledge that there are problems with automated testing of such things, but I can't get rid of the idea that we should delegate time consuming tasks to machines - as much as possible.

    1. Thanks for taking time Alexander.

      There is one more aspect of automating tests that I chose not to talk about in this post - the Total Cost of Ownership of an automated test. This cost is driven by the time it takes to build and maintain a test and by how effectively the test helps in finding bugs in the code. Tests around long running batch processes are not very reliable at the moment and lack precision in finding bugs. So the TCO of such tests is quite high. The situation on every project is different, and you need to compare the cost of automating against the cost of manually testing the long running process. If I am confident that a particular long running batch process is not going to change for the foreseeable future, then I would not invest much in automating and maintaining the test. A test that covers the end-to-end positive user journey is enough.