Wednesday, April 3, 2013

Testing - Start at the Start ...


Well, it seems like a good time to talk about testing.


I've touched on this subject in earlier posts, but over the next few blog entries, I want to take a deeper dive into testing.  I want to explore some of the shortcomings that I've seen and some of the things that I think work.  I'm not saying I have all the answers - heck, I'm not even going to say that I've got most of the answers.  But I've interacted with enough development teams across various development methodologies that I think I have some answers.  More importantly, if the ideas generated within these posts assist someone in taking a fresh look at how they approach testing within their organization, then I'll consider it a success.


Ok, let's rewind and take a look at things that I've seen along the way.  Yep, I've worked in all types of shops - very small development teams with seat-of-the-pants processes all the way up to multi-national organizations with "thou shalt not deviate from the master plan" processes.  None of them worked.  Yes, each of them got product out the door and into production, but every one of those processes allowed serious flaws to go unnoticed until the product had shipped and the end user caught them.  The team then had to divert from its current work, tackle the defect, test the fix, and move it into production or ship it out to the customer.  All this does is cause churn within the team and reduce the confidence of management and the end user in your ability to create a quality product.


I'm going to go back in time and talk about an early consulting job.  I had essentially been hired as the sole local resource to work on a marketing automation tool.  The primary contractor had multiple resources working on the project for this client, with responsibilities that ran from requirements and design through development and testing to implementation.  I had been brought on by the head of the client's technology team to act as a local resource through the development phase.  I ended up primarily building sub-portions of the thick client (think VB) that was to be used by the marketing team.  The primary contractor was responsible for creating the interface to the mainframe, handling the data transformation to a dedicated marketing database, the primary portions of the thick client, and the analysis and reporting tool.

By the time I came on board, the requirements had been completed, the design was partially complete, and they were working on mock-ups of the application.  They handed the mock-ups to me while they concentrated on the data transformation and finishing up the design work.  As we worked the project, several things began to bother me.  Query times to fetch, manipulate and save data were running long.  I discussed this with the technical lead from the primary contractor and was assured that his team would handle the issue before the system went live.  Additionally, I began to realize that many portions of the client were being built without validation routines.  An even larger concern was how little interaction there was with the marketing team.  Oh, they came in for "reviews" every few weeks, but from what I could tell, there was no effort by the consulting team to integrate them into the overall effort.

At some point during the process, the head of the technology team for the client pulled me to the side to ask me how things were going.  They were paying the bills, so I gave them my unvarnished opinion of the status of the project.  I walked through the concerns that I had along with my belief that the project was not going to be completed in the required timeframe, or, if it was delivered, it was going to fail.

The following day, we ended up having a reset of the project.  The company began hiring their own developers to add to the team, and interaction with the marketing team became more frequent.  Testing seemed to take on a more visible role - suddenly user tests were being identified and scripted, and there was more interaction with the marketing team to walk through the tests.

Unfortunately, it wasn't enough.  On the day of go-live, the system ate itself.  The response of the system was like molasses.  The users couldn't do their jobs.  We pulled the plug less than an hour after the system went live.

So what went wrong?  The fundamentals of the application were there - the primary contractor had identified what was to be captured and saved.  They had designed a system that, at least on paper, worked.  They could show the screen flows and how that mapped back to the data and then mapped back out to reports.

What had we all failed to do?  Any type of test planning - and I mean nothing.  The primary contractor did not focus on tests at any level.  I did not identify and build tests for the functionality I was building - outside of the “happy path”.  Where we really failed was the load testing of the application.  Boy did we fail!

The fact that the system could handle no load was what killed it.  Putting data validations in was the easy part.  Having to rescript the entire data access layer - that was a huge undertaking.  Ultimately, the primary contractor was removed from the project and the client built their own team to rework the application.  I stayed on through the rebuild and was around to see the product successfully rolled out through the organization's internal teams.

The moral of the story - testing is not easy, not even close to easy.  And the need for testing has only become more critical as the systems we build, and the interconnects between them, become more complex.  Just as important: load testing.  You can test all of the data and all of the entry screens, but if you're not testing to ensure that the system can scale to the number of expected users, you're going to fail.

Testing needs to address the following:

  1. Unit Testing - in my humble opinion, we need to be writing unit tests prior to writing the actual code.  The first sketch after this list shows the idea.
  2. Integration Testing - testing all of the individual pieces of code as they are assembled into the overall system.
  3. Exception Testing - anyone who thinks that in today's world all you need to test for is the happy path is wrong.  You need to plan on someone keying the wrong type of data into your fields, expect that someone will key in data outside of the boundaries you were counting on, and expect a connecting system to garble data on the way to your system.  If you're not testing for it, you're expecting your user to test it for you.  The first sketch after this list covers this, too.
  4. Load Testing - validating that the system stays usable under the number of concurrent users you actually expect.  The second sketch after this list is a bare-bones example.
  5. New Function Testing - if you’re adding functionality to a system that is already in place, you need to ensure that you are focusing testing on the new features.
  6. Regression Testing - yes, you need to look over your shoulder and ensure that you haven’t busted anything that already works.  Your end users usually frown when you ship code that busts something they are used to doing.
  7. System Testing - testing the final integrated system and ensuring that it meets the defined and agreed-upon requirements.  That means holding yourself accountable to the contract defined within the requirements.
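
To make items 1 and 3 concrete, here's a minimal sketch (in Python, purely for illustration - the project above was VB, and your shop's language will differ).  The parse_discount_percent routine and its rules are made up; the shape is what matters: the tests pin down the bad inputs as well as the happy path, and under a test-first approach they would exist before the routine itself.

    import unittest


    def parse_discount_percent(raw_value):
        """Hypothetical validation routine - turn user-keyed text into a whole-number percent."""
        try:
            value = int(raw_value)
        except (TypeError, ValueError):
            raise ValueError("discount must be a whole number")
        if not 0 <= value <= 100:
            raise ValueError("discount must be between 0 and 100")
        return value


    class ParseDiscountPercentTests(unittest.TestCase):

        # The happy path - the test most of us stop at.
        def test_valid_value(self):
            self.assertEqual(parse_discount_percent("25"), 25)

        # Exception testing - the wrong type of data keyed into the field.
        def test_non_numeric_input_is_rejected(self):
            with self.assertRaises(ValueError):
                parse_discount_percent("twenty five")

        # Exception testing - data outside the boundaries we expect.
        def test_out_of_range_input_is_rejected(self):
            with self.assertRaises(ValueError):
                parse_discount_percent("250")

        # Exception testing - nothing keyed in at all.
        def test_missing_input_is_rejected(self):
            with self.assertRaises(ValueError):
                parse_discount_percent(None)


    if __name__ == "__main__":
        unittest.main()

Run the file directly and unittest.main() takes care of the rest - nothing fancy required.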
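
And for item 4, load testing doesn't have to start with a heavyweight tool.  Here's a rough sketch, again in Python, that fires a batch of concurrent requests at a hypothetical test endpoint and reports the response times.  The URL, user count and request count are all assumptions - plug in your own.

    import time
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    # Hypothetical values - substitute your own test endpoint and expected concurrency.
    TARGET_URL = "http://test-env.example.com/campaign/search?q=spring"
    CONCURRENT_USERS = 50
    REQUESTS_PER_USER = 10


    def one_user_session(_):
        """Simulate one user hitting the search screen repeatedly; return each response time."""
        timings = []
        for _ in range(REQUESTS_PER_USER):
            start = time.time()
            with urlopen(TARGET_URL, timeout=30) as response:
                response.read()
            timings.append(time.time() - start)
        return timings


    if __name__ == "__main__":
        with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
            results = pool.map(one_user_session, range(CONCURRENT_USERS))
            all_timings = sorted(t for user_timings in results for t in user_timings)

        print("requests sent  :", len(all_timings))
        print("average (sec)  :", round(sum(all_timings) / len(all_timings), 3))
        print("95th pct (sec) :", round(all_timings[int(len(all_timings) * 0.95)], 3))

If the 95th percentile is already ugly with fifty simulated users against a test environment, you've just learned on a quiet afternoon what we learned on go-live day.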

That’s a lot of testing!  Yep, and if you’re doing it all manually - good luck!

The software that we use and build has only gotten more complex over time.  There really are no stand-alone systems anymore - all of these things that we are building interconnect with other things we have built over time.  We can no longer treat testing as a quick review before pushing the product out the door.  Test planning needs to occur early in the cycle - starting with requirements and building all the way through the lifecycle.

If you’re doing it right, you’re building test scripts that can be automated and report back out to the development team on a regular basis.  Why?  Well, the easy answer is why not.  No, seriously, you want to be able to monitor the status of what is going on.  Are your tests covering all of the code?  Are your tests succeeding or failing?  What does the current build look like compared to the last build from the perspective of testing?  Can you tell if your code is breaking generally accepted coding practices - things that will protect you from either the code failing or someone being able to crack into your code?
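
As a sketch of what "reporting back out on a regular basis" can look like, here's a small script a nightly job could run: it discovers the automated test suite, runs it, and prints a summary that could be mailed out or posted to a dashboard.  The tests directory name is an assumption, and coverage or static-analysis checks would bolt on in the same way with tools like coverage.py or a linter.

    import sys
    import unittest

    # Assumption: all of the automated tests live under a top-level "tests" directory.
    suite = unittest.TestLoader().discover("tests")
    result = unittest.TextTestRunner(verbosity=0).run(suite)

    print("tests run :", result.testsRun)
    print("failures  :", len(result.failures))
    print("errors    :", len(result.errors))

    # A non-zero exit code lets the nightly build (or CI job) flag the run as broken.
    sys.exit(0 if result.wasSuccessful() else 1)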

I've never met a developer that likes being told that an end user has found a bug in their code.  Heck, we hate it even when QA finds a bug.  So, with a little more control up front in defining what it is we need to test, and a little more follow-up on the back end to ensure that we are actually testing everything we said we would test, we should be able to get better at all of this.

Until next time ... happy testing!

Tags: #SDLC #softwaredevelopment #metrics #lifecycle #qa #qualityassurance  #applicationdevelopment


If you'd like more information on my background: LinkedIn Profile
