Manual Testing vs Automated Testing

Testing software is inevitable, regardless of what you are building: if the software you build is used, then it's being tested. Even if you write no test code, you will still test what you build -- what would be the point otherwise? The question is whether you will automate those tests. Some would say without hesitation that we should absolutely have automated tests, regardless of what we are building or who's building it. I believe there is more nuance to the question, because automating tests takes engineering effort, and under certain conditions that effort may not be worth the reward.

Let's first set the stage: I work full time at LinkedIn, where more than 4,000 engineers worldwide work on the software that powers the LinkedIn site and applications. I also work part time at MySwimPro, a small Michigan-based startup focused on delivering a great software experience to swimmers, with only four full-time employees. Between these two positions, I have a direct view of how software is built under two very different sets of conditions: in the former, an abundance of engineering manpower powers services for millions of users worldwide; in the latter, just two full-time software engineers power services for a considerably smaller user base.

The Different Types of Testing


There are many forms of software testing. The first and most fundamental is manual testing: for an iOS application, this means opening the app on your device and using it, looking for bugs. It is the most basic form of software testing, and it is part of the process for most development teams in one form or another. Automated testing, by contrast, is software written to validate the user-facing software that reaches customers. There are many mechanisms for automated testing, but the two main types are (1) tests that run on every commit, triggered by the code change itself, and (2) tests that run on a schedule. Each has its pros and cons: running tests on every commit makes it easy to track down the root cause of a regression, but as the number of commits grows, running all of these validations becomes expensive. Running tests nightly is far less computationally expensive, but it is also far less granular about when an issue was introduced.
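To make the "automated" side concrete, here is a minimal sketch of what one such check can look like for an iOS app, written with XCTest. The `WorkoutDistanceFormatter` type is invented purely for this illustration; the point is that the assertion runs on its own, per commit or on a schedule, rather than being verified by hand.

```swift
import Foundation
import XCTest

// Hypothetical type under test, defined inline so the example is self-contained.
// In a real project it would live in the app target, not the test file.
struct WorkoutDistanceFormatter {
    func string(fromMeters meters: Double) -> String {
        String(format: "%.1f km", meters / 1000)
    }
}

final class WorkoutDistanceFormatterTests: XCTestCase {
    // This check runs automatically -- per commit or nightly -- replacing a step
    // someone would otherwise have to verify by hand in the running app.
    func testFormatsMetersAsKilometers() {
        XCTAssertEqual(WorkoutDistanceFormatter().string(fromMeters: 1500), "1.5 km")
    }
}
```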

At LinkedIn, we use both of these mechanisms: a lightened test suite runs for each commit, and an extremely comprehensive suite runs hundreds of times each night. At MySwimPro, we don't use automated testing yet. That's partly because, compared to manual validation, automated testing is much more costly: the mechanism that runs these automated tests is yet another service software engineers need to build and maintain, in addition to the tests themselves.

Why Put in the Effort to Automate Tests, Then?

While manual testing is cheap, it does not scale well. As a product's feature set expands, it quickly becomes impossible to manually validate all of the basic use cases that should behave a specific way. This is where automated tests save the day: we can create automated checks for the most common paths through our product, removing the need to check them by hand. This kind of automation also makes it much quicker to determine whether a change breaks something else, since a failing test highlights the breakage right away.
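As a sketch of what an automated check on a "common path" might look like, here is a hypothetical UI test written with XCUITest that walks a sign-in flow. The accessibility identifiers and screen elements are assumptions made up for this example, not taken from any real app.

```swift
import XCTest

final class CommonPathUITests: XCTestCase {
    // Covers one common path: sign in and land on the workout list.
    // If a change anywhere breaks this flow, the failing test points at it immediately.
    func testSignInReachesWorkoutList() {
        let app = XCUIApplication()
        app.launch()

        let email = app.textFields["emailField"]              // assumed identifier
        email.tap()
        email.typeText("swimmer@example.com")

        let password = app.secureTextFields["passwordField"]  // assumed identifier
        password.tap()
        password.typeText("correct-horse-battery")

        app.buttons["signInButton"].tap()                     // assumed identifier

        XCTAssertTrue(app.collectionViews["workoutList"].waitForExistence(timeout: 5))
    }
}
```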

How Does Testing Vary Between MySwimPro and LinkedIn?

While the goal in both of my roles is the same -- to produce a high-quality iOS product -- the means of achieving it are vastly different. At LinkedIn, each code change triggers a review from at least one other person and a barrage of automated tests. By the time code reaches production, it has been peer reviewed, automatically tested dozens of times, and put through a week of beta testing, and even then we still have a mechanism to disable new code paths if an issue comes up after all those other checks. At MySwimPro, most code is peer reviewed and each change goes through at least a day of beta testing, but that's it. Shouldn't MySwimPro try to be more like LinkedIn and create automated tests?
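That "disable new code paths" mechanism is essentially a feature flag, or kill switch. Below is a minimal sketch of the idea, assuming a set of flags fetched at app launch; the flag name, types, and call site are hypothetical and not a description of LinkedIn's actual system.

```swift
import Foundation

// A minimal kill-switch sketch: flags come from some remote source at launch,
// and new code paths only run while their flag is enabled.
struct FeatureFlags {
    private let enabled: Set<String>

    init(enabled: Set<String>) {
        self.enabled = enabled
    }

    func isEnabled(_ flag: String) -> Bool {
        enabled.contains(flag)
    }
}

func renderFeed(flags: FeatureFlags) {
    if flags.isEnabled("new_feed_layout") {
        // New, still-being-validated code path; can be turned off remotely
        // if an issue surfaces in production.
        print("Rendering new feed layout")
    } else {
        // Stable fallback that stays available.
        print("Rendering existing feed layout")
    }
}

// Hard-coded here for illustration; a real app would fetch these remotely.
let flags = FeatureFlags(enabled: ["new_feed_layout"])
renderFeed(flags: flags)
```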

Well, that's where the conditions at each company start to play a factor. To get automated test coverage, one of the two software engineers at MySwimPro would need to set it up, which is non-trivial. In addition, the automation itself is a service that needs to be maintained. There is therefore a large amount of additional work required to create and maintain automated test suites, and the question is whether that work is less than the work needed to keep up with manual testing. For MySwimPro at its current size, the answer is clearly no: it's much easier to be a bit more vigilant when using the product, and quick to act when finding issues, than to write and maintain automated tests. This will not be true forever, though, and judging by MySwimPro's current growth trajectory, we will need automation sooner rather than later.

Is There Such a Thing as Too Much Testing?

Yes: when the time it takes to maintain the test suite exceeds the time it would otherwise take to field customer support messages and manually check the product. Tests should be an aid upon which the software product is built, not a drag on the development process. Testing should slow development down just enough that the extra validation it provides is worth the slower output. Just as in Aristotle's philosophy of happiness and the other virtues, there is a golden mean for testing. Too little testing and your product's quality will suffer, along with your users and customers. Too much testing, and your development team will suffer. It has to be just right.


Benjamin Hendricks

Tech Expert

Benjamin is a passionate software engineer with a strong technical background and an ambition to deliver a delightful experience to as many users as possible. He previously interned at Google, Apple and LinkedIn. He built his first PC at 15, and has more recently moved on to iOS and cryptocurrency experiments. Benjamin holds a bachelor's degree in computer science from UCLA and is completing a master's degree in Software Engineering at Harvard University.

   