A Short History of Software Quality

The short history of software contains a handful of inevitable trends. Take any software asset, such as an app, a database, or an information system. With very high probability, its size and complexity have grown throughout its lifetime. Take any software asset that is still in active use. Chances are, the number of its active users and external dependencies has grown throughout its lifetime. Take any piece of software that is still actively maintained. Most likely, the time between subsequent releases is much shorter now than it was in the past.

In the good old times, the software developer and the software user were the same person, usually a scientist. The same person also did the testing, as part of his natural workflow. Quality assurance was, obviously, very efficient, very focused, and very user-centric, with the fastest possible feedback cycle.

As the use of software and the number of users grew, a new profession, the software developer, was gradually born. Later on came, one by one, project managers, business analysts, product managers, chief architects, engineering leads, testers, scrum masters, user experience designers, data scientists… you name it.

Are Quality Assurance and Testing Expensive and Inadequate, Then?

Software quality assurance has always been inadequate: too slow and too expensive, yet letting too many defects through. It has been widely known for at least 40 years that the total cost of a software defect is lowest if it is detected and corrected close to the time and place where it was made. For comparison, Mr Sakichi Toyoda knew the same principle already before World War II when he laid the foundations of the world-famous Toyota Way.

Conventional wisdom says that software developers should test their own code already at the unit level. Practical wisdom, however, says that software developers are mostly incompetent and demotivated when it comes to testing, so self-testing seldom happens. Experience has also taught that software developers are always busy writing new code and fixing old code, so in practice they never really have the time to do any testing. The debt of testing accumulates towards the later stages of the software flow. Still today, late testing is the most common approach to software quality assurance.

“Software quality assurance has always been inadequate: too slow and too expensive, yet letting too many defects through.”

Late testing comes with a bunch of expensive problems. First of all, it requires a large number of testers. Testing therefore needs to be planned, organized, and managed professionally, which adds a substantial amount of overhead. Second, late testing results in a huge amount of time spent on waiting and rework, i.e. pure waste. Testers wait for a software version to test. Developers wait for the test results in order to start debugging and fixing. Testers wait for the fixes to be re-tested. Meanwhile, instead of writing new code, developers spend most of their time fixing defects that could have been found much earlier.

Test Automation as Waste Reduction

Test automation was invented as a waste-reduction solution to late testing. The common belief was that if test execution were automated, testing times would be shorter and testing costs smaller. To a great extent, this is true. However, in most cases people have learned the hard way that automating tests is terribly expensive and demanding. Therefore, the payback of test automation efforts has, with a few notable exceptions, been negative.

For as long as software testing has existed as a discipline, it has revolved around the same basic questions: how to make developers test early, how to improve the efficiency and effectiveness of late testing, and how to automate testing.

Agile Methods and Exploratory Testing

The introduction of agile methods brought along a partial salvation. The whole paradigm of making software changed. Software was developed by small, self-contained teams in fast cycles and released frequently to customers. Changing requirements became a virtue rather than a vice. It suddenly began to make perfect sense to test early. Because the release cycles were fast, it also made perfect sense to automate the tests. It helped, indeed, that software developers possessed the programming skills necessary for test automation.

The clever agile evangelists and QA gurus came up with fancy names and methods for the age-old ideas, such as “definition of done”, “test-driven development”, and “behavior-driven development”. Suddenly, all the young people were excited about the idea that you can declare a piece of software ready only after you know it works. The same guys came up with the earth-shaking finding that for each piece of software functionality there should be a test case.
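In concrete terms, the test-driven idea is simply this: write the test that states what “works” means before, or at least alongside, the code. Here is a minimal, hypothetical sketch in Python; the discount function and its rules are made up purely for illustration.

```python
import unittest

# Hypothetical example: the test class below is written first and states what
# "done" means for a made-up discount function; the function itself exists
# only to make the tests pass.

def apply_discount(price, percent):
    """Return the price reduced by the given percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


class ApplyDiscountTest(unittest.TestCase):
    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(100.0, 10), 90.0)

    def test_invalid_percentage_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)


if __name__ == "__main__":
    unittest.main()
```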

Still today, I find it very amusing when agile aficionados evangelize that it is a really good idea for people working on the same thing to speak to each other. This all says something about the young age of our industry.

In parallel with agile, we saw the emergence of a new testing approach called “exploratory testing”. To be exact, the fundamentals of the approach had already been introduced in the literature in the 1970s, under the title “error guessing”. Exploratory testing is, by far, the most efficient form of software testing. It is also necessary in the era of rapid release cycles: automated testing is seldom fast enough for new functionality, although it is superior for regression testing.

The weakness of exploratory testing is that it does not scale. To be a great exploratory tester you need sound software testing skills plus quite a bit of business understanding and human understanding. Technical knowledge wouldn't hurt either. Obviously, you also need a tester's mindset and a systematic, disciplined way of working. Such individuals are rare, and most of them do not want to work as software testers.

“...DevOps is not a testing method. It is a way of organizing, automating, and measuring the software workflow.”

The evolutions of agile and exploratory testing reveal something interesting about our industry. They both began as rebellious movements. The message was clear: “Let's throw everything old away because it sucks; the only things we need are good people, customer focus, more communication, and a rudimentary method.” Gradually, over the years, the rebels have learned that there was a reason for the existence of the age-old best practices. Now these have been brought back under different names and, to be fair, often in a much improved form.

The salvation brought along by the agile methods was only partial. Still today, less than half of the agile software population is competent enough to apply the agile methods properly. Moreover, while the solutions have become better, the challenges have grown bigger. Many software assets are so large that they simply cannot be delivered by a single agile team.

Although software teams are nowadays able to release faster, the demand for ever-accelerating release cycles has also grown. The internet, SaaS, and the API economy have brought along ecosystems where the software developer cannot really know where, by whom, when, in what kinds of environments, with what dependencies, and for what purpose his or her software will be used.

The Big Change: DevOps

Quality assurance is, again, facing a new challenge. The solution already exists. The idea is simple and it makes sense. You take the people who previously worked in a separate agile development team, system QA team, and deployment team, put them together, tell them to talk to each other, use common tools, and automate everything possible. You call this effort DevOps.

DevOps has been as big a change as the introduction of agile methods was 15 years ago. DevOps is definitely the right way to go but, before the software development mainstream gets there, we will see many world-class disasters. Successful DevOps requires the right toolchain and its disciplined use, high-quality teamwork, and a lean operational culture. These are all rare virtues.

“The weakness of exploratory testing is that it does not scale.”

One of the typical misconceptions about DevOps is that it replaces QA and makes software testing irrelevant. But DevOps is not a testing method. It is a way of organizing, automating, and measuring the software workflow. Continuous improvement is an essential part of successful DevOps. DevOps alone does not produce good quality, but it sets a high pace requirement for software testing and demands great discipline in both exploratory testing and test automation.
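To make “automating and measuring the workflow” slightly more tangible, here is a minimal, hypothetical sketch of a quality gate that a pipeline might run on every commit. The gate names, commands, and directory layout are assumptions made for illustration, not a prescribed DevOps setup.

```python
#!/usr/bin/env python3
"""Hypothetical quality-gate script of the kind a DevOps pipeline might run
on every commit. Commands and paths are illustrative assumptions."""
import subprocess
import sys


def run_gate(name, command):
    """Run one automated check and report whether it passed."""
    print(f"== {name} ==")
    result = subprocess.run(command)
    passed = result.returncode == 0
    print(f"{name}: {'PASS' if passed else 'FAIL'}")
    return passed


def main():
    gates = [
        ("unit tests", [sys.executable, "-m", "pytest", "tests/unit", "-q"]),
        ("static analysis", [sys.executable, "-m", "pyflakes", "src"]),
    ]
    # Fail fast: a failing gate stops the workflow so feedback stays quick.
    for name, command in gates:
        if not run_gate(name, command):
            sys.exit(1)
    print("All gates passed - ready to deploy.")


if __name__ == "__main__":
    main()
```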

SaaS’ Telemetry Approach and AI

The picture wouldn’t be complete without mentioning software as a service, the business model that broke through big time with products such as salesforce.com some ten years ago. One of the great things about software as a service is that there is always exactly one instance of the software publicly available: there are no customer installations or end-user installations. This, in turn, means that the software can be updated as often as needed, even several times per hour.

Software as a service has enabled the use of a telemetry approach to software testing. Telemetry in software testing means that the software we run from the cloud is instrumented with a huge number of “software sensors” that collect data about what we do and how the software behaves. This data is sent back to the cloud and analyzed there. The benefits are tremendous. Testing work is crowdsourced to the end-users without them knowing it.

We get information on how the software behaves in many different user environments. We are able to measure not only the functionality and technical performance but also user behavior. We can also conduct A/B testing and decide which users get a test version and which ones don't – this, too, happens without the user knowing.
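As a minimal, hypothetical sketch of what such instrumentation might look like in Python (the event fields and the hash-based variant assignment are assumptions for illustration, not any particular vendor's API):

```python
import hashlib
import json
import time


def telemetry_event(user_id, event_name, payload):
    """Build one 'software sensor' reading to be sent back to the cloud."""
    return json.dumps({
        "timestamp": time.time(),
        "user": user_id,
        "event": event_name,
        "data": payload,
    })


def ab_variant(user_id, experiment, test_fraction=0.1):
    """Deterministically assign a user to the test or control variant.

    Hashing the user id keeps the assignment stable across sessions,
    without the user ever being asked or told.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF
    return "test" if bucket < test_fraction else "control"


# Example: record which checkout flow a user saw and how long it took.
variant = ab_variant("user-42", "new-checkout-flow")
print(telemetry_event("user-42", "checkout_completed",
                      {"variant": variant, "duration_ms": 840}))
```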

It is already possible today to use artificial intelligence to analyze the test results collected via the telemetry approach. One can easily imagine a machine-learning system where an artificial intelligence learns from telemetric test results and generates exploratory test cases that are readily automated. This is not a sci-fi vision but already a reality in a few places. If software testing has come this far, one may reasonably ask why programmers haven't been replaced by AI yet.

-----

This blog post was originally published on LinkedIn.