Tuesday, September 17, 2013

Project Managers: Tactical Ways to Push Testing

So, in my previous post, I talked about some of the high-level things that you, as the Project Manager, need to be thinking about when it comes to testing on your projects.  In this post, I'm going to step down into the tactical: specific things that you can begin to push within your projects, along with my recommendations for who should shoulder the responsibility.  These recommendations are based on things that I've seen work in my current environment, as well as things I've seen work in various other organizations.

Now, before we dive in, a word of caution.  You need to evaluate what I'm saying and throw it up against the reality of the environment within which you work.  Some of what I say will work for you; some of it would probably spin heads within your organization.  Take what works and throw out the rest.  We're currently reevaluating some of the test strategies used within my teams, and six months from now I may have a different take on the subject.  My biggest concern is ensuring that we put enough thought into the process to allow less experienced team members to be successful when working with unfamiliar subsystems or product features.

And, with that said, it's time to dig into the details and talk about things that I've seen work.  A note for readers willing to share: I'd love to get feedback on what is and isn't working in your environments.

First up, what are your end users (or the team members representing the end users) doing to drive the quality initiative?  These are the folks that are creating the user stories or requirements for the products that you're building.  Are they being held accountable at the start of the process to define what success means?  Are they telling you up front what they will do when performing user acceptance testing?  If not, it's time to sit down and have a conversation.  Why do they feel comfortable telling you what to build if they can't tell you how they will know it is right?  Specifically, what are the conditions and scenarios that they will use to validate that the functionality delivered is acting properly and producing the correct results?
  1. What set of information needs to exist within the system to allow them to create the scenarios that they will use to test the requested functionality?  What specific elements of data will be used in the testing - both from a positive and negative test perspective?
  2. What business logic is being used to assess whether the outcome is valid?  What will change within current processes?
  3. Ultimately, what does success mean to them?  How will they know that what you've delivered works or doesn't work?
Now, ask yourself: if these answers do not exist prior to the design and development of the solution, how will the team know that the proposed solution will actually solve the problem?  This function must be driven by the end users (or those team members representing the end users).  They can collaborate with others on the team, including business analysts, technical team members and dedicated testers.  But ownership of the acceptance test plan must fall on the shoulders of the end users.
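
To make this concrete, here's a minimal sketch of what one success criterion might look like once it's captured as an executable check.  The discount rule, the field names and the calculate_discount function are all hypothetical placeholders, not taken from any real project - the point is simply that the business team can state the inputs, the rule and the expected outcome before anything is designed.

    # Hypothetical acceptance criterion: "Orders over $500 from preferred
    # customers receive a 10% discount; all other orders receive none."
    # The business team supplies the rule and the example data; the function
    # under test stands in for whatever the team eventually builds.

    def calculate_discount(order_total, preferred_customer):
        """Stand-in implementation of the discount rule under discussion."""
        if preferred_customer and order_total > 500:
            return round(order_total * 0.10, 2)
        return 0.0

    def test_preferred_customer_large_order_gets_discount():
        assert calculate_discount(600.00, preferred_customer=True) == 60.00

    def test_regular_customer_large_order_gets_no_discount():
        assert calculate_discount(600.00, preferred_customer=False) == 0.0

    def test_preferred_customer_small_order_gets_no_discount():
        assert calculate_discount(200.00, preferred_customer=True) == 0.0

Even if the users never touch the code themselves, writing the rule and the example data in this shape forces the "how will we know it's right" conversation to happen early.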

One of the things that I like to do once the users have defined their needs and the design phase is underway is to have the technical subject matter experts (SMEs) begin to identify high-level integration test objectives.  This is built at the same time as the technical design and is a very brief document outlining the gotchas that the development teams need to be aware of when developing and exercising the new systems/features.  It is intentionally kept very high level - usually just descriptions of each item, to be fleshed out in later stages.
  1. Identify touch points between various sub-systems that will need to be tested.
  2. Identify data replication strategies that will be used and that will need to be tested.
  3. Identify data transformation strategies that will be used and that will need to be tested.
  4. Identify services that will be relied on that will need to be tested.
This acts as a roadmap for the entire technical team, identifying the data validation and functional validation that will be needed.  It can be used by those who are unfamiliar with specific subsystems to build an understanding of the interrelationships between the various subsystems and external/internal services.  The IT SMEs own this high-level integration test plan and pass it along to others in the process.
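
If it helps to picture the artifact, it really can be this terse.  Here is a rough sketch of the kind of outline I mean, expressed as a simple data structure so it can later be tracked or reported against; the subsystems and services named are hypothetical placeholders.

    # Hypothetical high-level integration test objectives, kept deliberately
    # brief -- each entry is a reminder to be fleshed out in detailed design.
    integration_test_objectives = [
        {"area": "touch point",    "item": "Order entry -> billing subsystem handoff"},
        {"area": "replication",    "item": "Customer master replicated nightly to reporting DB"},
        {"area": "transformation", "item": "Address normalization between CRM and shipping"},
        {"area": "service",        "item": "External tax-calculation service availability and contract"},
    ]

    for objective in integration_test_objectives:
        print(f"[{objective['area']}] {objective['item']}")

A plain bulleted document works just as well; what matters is that the touch points, replication, transformations and services are named while the design is still being shaped.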

As we dive down into the detailed design, several things begin to happen.  First, the development team begins to flesh out the high-level integration test strategy, and it becomes the Integration Test Plan.  This document details the specific actions that will be used to test the touch points previously identified and should include both data and functional validation - data should be traceable throughout the system, with the end points being validated.  This includes identifying the specific data elements that will be used to perform the tests as well as the expected positive/negative results.  While driven by individual development team members, the plan will be reviewed and validated by the Technical Leads.
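
As a rough illustration of what "traceable data with validated end points" can look like in practice, here is a hedged sketch.  The interfaces (submit_order, billing_record_for) and the data values are hypothetical stand-ins for whatever your subsystems actually expose.

    # Hypothetical integration test: trace a single, pre-agreed order from the
    # order-entry subsystem through to billing, validating both end points.

    def submit_order(order):
        """Stand-in for the order-entry subsystem."""
        return {"order_id": "ORD-1001", **order}

    def billing_record_for(order_id):
        """Stand-in for a lookup against the billing subsystem."""
        return {"order_id": order_id, "amount_due": 600.00, "status": "PENDING"}

    def test_order_flows_through_to_billing():
        # Specific data element chosen in the Integration Test Plan.
        order = {"customer_id": "CUST-42", "total": 600.00}
        submitted = submit_order(order)

        # End-point validation: the same order, with the same amount,
        # must be visible to billing with the expected initial status.
        billing = billing_record_for(submitted["order_id"])
        assert billing["order_id"] == submitted["order_id"]
        assert billing["amount_due"] == order["total"]
        assert billing["status"] == "PENDING"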

Additionally, the individual development team members begin to identify their individual unit test strategies.  Each functional unit of code being created or modified needs to be tested to ensure that it is performing to spec prior to moving into integration testing.  I believe in the paradigm that the tests should actually be created before the code is written, but I've seen development teams be successful using several different paradigms, so I won't harp on it.  The individual developers own this and are responsible for ensuring that they deliver clean code.
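
For anyone on the team who hasn't worked test-first, a minimal sketch of the idea: the tests below are written before the function exists and fail until the code satisfies them.  The parse_phone_number function is made up purely for illustration.

    # Written first, before parse_phone_number() exists anywhere -- running the
    # tests produces failures that the developer then writes code to satisfy.
    import re

    def parse_phone_number(raw):
        """Implementation written after (and driven by) the tests below."""
        digits = re.sub(r"\D", "", raw)
        if len(digits) != 10:
            raise ValueError(f"expected 10 digits, got {len(digits)}")
        return f"({digits[0:3]}) {digits[3:6]}-{digits[6:]}"

    def test_parse_phone_number_normalizes_common_formats():
        assert parse_phone_number("555-867-5309") == "(555) 867-5309"
        assert parse_phone_number("(555) 867 5309") == "(555) 867-5309"

    def test_parse_phone_number_rejects_short_input():
        try:
            parse_phone_number("867-5309")
            assert False, "expected ValueError"
        except ValueError:
            pass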

Once the developers have executed their individual unit tests, they work together and move into pre-Integration Test planning.  This time is spent ensuring that the necessary data has been identified and created, that the configuration changes needed in the system and supporting subsystems have been applied, that the network changes needed to open ports to internal/external services have been applied, and that all of this has been documented for production.  This particular block of activity is owned by the Development Team and the Technical Leads.  Infrastructure Teams may support the activity, but the Development Team and Technical Leads should ensure the configuration is complete and documented for the Implementation Plan.
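
Some teams go a step further and capture part of this setup work as a small, repeatable smoke check rather than a purely manual checklist.  A sketch of what that might look like, with hypothetical hosts and ports standing in for your actual subsystems and services:

    # Hypothetical pre-integration smoke check: confirm the environment is
    # actually wired up before integration testing starts.
    import socket

    REQUIRED_ENDPOINTS = [
        ("billing-db.internal", 5432),     # billing database
        ("order-api.internal", 8443),      # order-entry service
        ("tax-service.vendor.com", 443),   # external tax-calculation service
    ]

    def endpoint_reachable(host, port, timeout=3):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    def main():
        failures = [(h, p) for h, p in REQUIRED_ENDPOINTS if not endpoint_reachable(h, p)]
        for host, port in failures:
            print(f"NOT REACHABLE: {host}:{port}")
        if failures:
            raise SystemExit(1)
        print("All required endpoints reachable; ready for integration testing.")

    if __name__ == "__main__":
        main()

The same script, rerun in QA and production, doubles as evidence that the documented configuration was actually applied.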

Now we step into Integration Testing - executing the plan that was documented earlier.  It is inevitable that something will have been missed, but that is what this stage is for: finding the bumps before you move your changes into the quality assurance (QA) and production environments.  Here you will find the missing data elements, or calls to services/APIs that work differently than documented.  You will uncover the fact that the replication strategy for data you rely on wasn't documented properly and the underlying data you were expecting doesn't exist.  All sorts of unknowns will surface, and you'll spend time playing whack-a-mole to clean it all up to the point where you believe it is functional.  The Development Team and the Tech Leads own this piece of the puzzle and work to ensure that all of the individual pieces come together and work as a whole.

Now, let's take a few minutes to talk about Regression Testing.  I'm actually in the midst of transitioning our teams from the old style - moving code into QA and then running a full regression test alongside new feature/function testing - to a model where the code is built nightly and the suite of regression tests runs when the build finishes.  This has been a tectonic shift for the teams: developers get daily feedback on code that has been checked in to the repository, and we are building up a set of statistics that show us the overall state of the code.  I can now walk in on a daily basis and see the various package builds, the number of tests being executed and the success/failure rate, along with a whole suite of statistics on coverage and on the severity of failures.  Benefit number one: immediate feedback to the development team while the code is still fresh in their minds - if something needs to change because of a failing test, the developer doesn't have to spend time remembering what was done.  Benefit number two: a reduction of effort by our QA Team Members to get code through the testing process - their focus is on new feature/function testing, not regression testing.  Benefit number three: the developers have more buy-in on the testing process and begin to have a better understanding of the underlying data.  The Development Team is responsible for Regression Testing and must review the results with QA to validate that the code is ready to move into QA Testing.
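
A stripped-down sketch of the nightly loop, assuming a pytest-based suite (any runner that can emit a machine-readable summary works the same way); the results file and record fields are made up for illustration.

    # Hypothetical nightly regression wrapper: run the suite, capture the
    # outcome, and append it to a running history so trends are visible
    # day over day.  Assumes pytest is installed.
    import datetime
    import json
    import subprocess

    def run_nightly_regression(results_file="regression_history.jsonl"):
        completed = subprocess.run(
            ["pytest", "--tb=no", "-q", "--junitxml=nightly.xml"],
            capture_output=True, text=True,
        )
        record = {
            "date": datetime.date.today().isoformat(),
            "exit_code": completed.returncode,  # 0 means the whole suite passed
            "summary": completed.stdout.strip().splitlines()[-1] if completed.stdout else "",
        }
        with open(results_file, "a") as fh:
            fh.write(json.dumps(record) + "\n")
        return record

    if __name__ == "__main__":
        print(run_nightly_regression())

The accumulated history is what makes the daily walk-through possible: you're looking at a trend line, not a one-off report.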

QA Testing is focused on the new features/functionality to ensure that they are operating within defined constraints.  Here is where I want to hit the system hard with positive and negative testing associated with the new features/functionality being introduced into the overall system.  What happens when certain pieces of data are missing from messages exchanged between subsystems?  What happens when a user forgets to input data?  What happens when a web service that we are relying on isn't available?  What happens when the system thinks everything processed fine but it didn't?  I also want to look at load testing on some systems to ensure that we are staying within expected SLAs.  This testing is the responsibility of the QA Team, with input from the Development Team and Tech Leads and support from the Infrastructure Teams.
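
To show what I mean by hitting the system hard with negative cases, here is a hedged sketch using pytest parametrization to drop each required field from an inter-subsystem message in turn.  The validator and its required fields are hypothetical.

    # Hypothetical negative tests: every required field missing from an
    # inter-subsystem message should be rejected, not silently accepted.
    import pytest

    REQUIRED_FIELDS = ["order_id", "customer_id", "amount"]

    def validate_message(message):
        """Stand-in for the real message validator between subsystems."""
        missing = [f for f in REQUIRED_FIELDS if f not in message or message[f] in (None, "")]
        if missing:
            raise ValueError(f"missing required fields: {missing}")
        return True

    VALID = {"order_id": "ORD-1001", "customer_id": "CUST-42", "amount": 600.00}

    @pytest.mark.parametrize("dropped_field", REQUIRED_FIELDS)
    def test_message_missing_required_field_is_rejected(dropped_field):
        broken = {k: v for k, v in VALID.items() if k != dropped_field}
        with pytest.raises(ValueError):
            validate_message(broken)

    def test_complete_message_is_accepted():
        assert validate_message(VALID) is True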

As part of the work that is being done during QA Testing, we should also work with the team members representing the end users to execute the User Acceptance Tests (UATs).  This ties all the way back to the beginning stages of the project, where the business teams defined the success criteria of the effort and began building the definition of their UATs.  Throughout the project, these high-level UATs should have been fleshed out and reviewed to ensure that they align with the final requirements.  QA and the User Teams should jointly execute and review the results of the UATs and prepare to sign off and give final authorization to move the code into production.

As a Project Manager, you need to assign tasks to cover these activities within your overall project plan.  More importantly, you need to be following up with the responsible parties at various stages of the lifecycle to ensure that these activities are not being ignored.  Your organization may hold different team members responsible for the specific activities outlined, and it may skip some of the items above.  However, you need to ensure you are covering enough of the pieces to confirm that the system being built will satisfy the requirements originally identified at the beginning of the overall effort.

Well, we've reached the end of the discussion for today.  I hope that you've found some of what I've discussed useful and that you're able to take pieces back into your organization.  Feel free to share, and let me know your thoughts on what has been presented and what works in your organization.

Tags: SDLC, Software Development Lifecycle, Project Lifecycle, Project, Manager, Development, Paradigm, Security, Secure Development Lifecycle, Communication, QA, Quality Assurance, Testing

For more information on David L. Collison: LinkedIn Profile
