Monday, September 23, 2013

Management - Leading, Coaching, Mentoring!

Well, I think it's time to switch up topics again.  With this posting, I'd like to change lanes and look at the human aspect of management.  As Leaders and Managers within the technology field, we need to be able to shift from the technical aspects of our jobs to the individual needs of our team members at the drop of a hat.

Whether we realize it or not, we swing between different roles during the day as we move between issues and projects.  Depending on what is happening, we need to get into the technical details of a specific sub-system or project; an hour later we may need to display our ability to drive to a decision; in another moment we may need to handle a disagreement or communication issue between two members of our team; or, we may need to mentor or coach an employee to continue their growth as they prepare for new opportunities.  Day to day, our bearings shift as we navigate the various events that surface and threaten to take us off task.

As I look back on my career - and, yes, it has been an interesting ride with paths yet to be explored - I think the toughest shift I've made was when I went from being the programmer pounding out code to a manager no longer responsible for creating the code.  I've touched on this subject before, so I won't belabor the issue.  That said, hitting that particular brick wall made me assess what my role was, what was important for me to focus on and what I needed to "give up".  My boss at the time let it happen and I give him a lot of credit for giving me the space to fail and then sitting me down and forcing me to understand my role.

As my boss pointed out to me at the time - I was there not to be the technical expert, but to mentor and guide the people on my team; to remove the roadblocks, make sure my people knew what they were supposed to do, and ensure they were provided the proper resources to get it done.  I was there to keep them focused and to deliver results.  Along the way, I learned that there were limits to what I could do and that I would never be able to provide all of the resources that my teams wanted.  Nor could I feasibly address every project that was requested - it was up to me to find the balance between what was requested and what the company and the available resources could provide.

Now, all that said, I earned some bruises along the way.  There were times as a young manager when I didn't necessarily treat people the way they should have been treated - sometimes my driver personality got the better of me and I ended up doing or saying something that wasn't as well thought out as it should have been.  In those moments, I was lucky enough to have a boss that continued to coach me, to teach me how to smooth the edges.  These learning experiences through several mentors have shaped me into the role that I play today.  Do I still sometimes let the driver in me get the better of me?  Yep, but it happens a lot less these days than it used to.

People that I respected took the time to invest in me.  They knew when I was at a point that I needed a stretch goal; they knew when it was time to take me out of my comfort zone; they gave me room to make mistakes and fail; they let me come and ask questions; they let me make a difference.  And when needed, they pulled me back from the abyss.  Even today, I am mentored by someone that I respect and that I feel is pushing me and preparing me for that next step in my career.

Not every boss I've ever had was a mentor - I've had my share of clunkers along the way.  Case in point, the boss who flew into town right after my department had been moved under his to inform me, and I quote, "I don't know what it is you do, I don't want to know what it is you do, if you need something, tell me who I need to talk to and what I need to say".  Yikes!  Or, there was the owner of a company I worked for who felt it was appropriate to verbally abuse everyone on her staff - the rush for the door was quick once it started and to this day I'm sure she never understood the message.

Somewhere along the way, I learned that I could also be a mentor.  Not everyone wants a mentor, but to those that do, I attempt to be available.  Sometimes, I identify someone who I think has the potential to act in a different role and I initiate the conversation.  Sometimes individuals ask if I would be interested in giving them guidance.  Sometimes these people report to me or within my organization.  Sometimes these people don't even work for the same company I work for.  However, it's my turn to give something back.

What are you giving back to your teams?  Who is it that you're mentoring so that they can take that next step?

Tags: Leadership, Mentoring, Management, Lifecycle, Software Development

For more information on David L. Collison: LinkedIn Profile


Tuesday, September 17, 2013

Project Managers: Tactical ways to Push Testing

So, in my previous post, I talked about some of the high level things that you, as the Project Manager, need to be thinking about, as it relates to testing, when managing your projects.  In this post, I'm going to step down into the tactical: specific things that you can begin to push within your projects, along with my recommendations for who should shoulder the responsibility.  These recommendations are based on things that I've seen work in my current environment as well as in various other organizations.

Now, before we dive in, a word of caution.  You need to evaluate what I'm saying and throw it up against the reality of the environment within which you work.  Some of what I say will work; some of it would probably spin heads within your organization.  Take what works for you and throw out the rest.  We're currently reevaluating some of the test strategies used within my teams, and six months from now I may have a different take on the subject.  My biggest concern is ensuring that we put enough thought into the process to allow less experienced team members to be successful when working with unfamiliar subsystems or product features.

And, with that said, it's time to dig into the details and talk about things that I've seen work.  For those readers willing, I'd love to get feedback on what is and isn't working in your environments.

First up, what are your end users (or the team members representing the end users) doing to drive the quality initiative?  These are the folks that are creating the user stories or requirements for the products that you're building.  Are they being held accountable at the start of the process to define what success means?  Are they telling you up front what they will do when performing user acceptance testing?  If not, it's time to sit down and have a conversation.  Why do they feel comfortable telling you what to build if they can't tell you how they will know it is right?  Specifically, what are the conditions and scenarios that they will use to validate that the functionality delivered is acting properly and producing the correct results?
  1. What set of information needs to exist within the system to allow them to create the scenarios that they will use to test the requested functionality?  What specific elements of data will be used in the testing - both from a positive and negative test perspective?
  2. What business logic is being used to assess whether the outcome is valid?  What will change within current processes?
  3. Ultimately, what does success mean to them?  How will they know that what you've delivered works or doesn't work?
Now, ask yourself: if these answers do not exist prior to the design and development of the solution, how will the team know that the proposed solution will actually solve the problem?  This function must be driven by the end users (or those team members representing the end users).  They can collaborate with others on the team, including business analysts, technical team members and dedicated testers.  But ownership of the acceptance test plan must fall on the shoulders of the end users.
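To make this concrete, here is a minimal sketch of what one executable success criterion might look like; the discount feature, the apply_discount() stand-in, and the data values are all invented for illustration, not taken from any real system:

```python
# A minimal sketch of an executable acceptance criterion, assuming a
# hypothetical order-discount feature. apply_discount() is a stand-in
# for the functionality the end users asked for.

def apply_discount(order_total, customer_tier):
    """Stand-in implementation of the requested business rule."""
    if customer_tier == "gold":
        return round(order_total * 0.90, 2)
    return order_total

def test_gold_customers_receive_ten_percent_discount():
    # Positive scenario: the rule the end users defined up front.
    assert apply_discount(100.00, "gold") == 90.00

def test_standard_customers_pay_full_price():
    # Negative scenario: the discount must not leak to other tiers.
    assert apply_discount(100.00, "standard") == 100.00
```

The point isn't the code itself - it's that a criterion like "gold customers receive a 10% discount" can be captured as concrete positive and negative checks before design begins.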

One of the things that I like to do once the users have defined their needs and the design phase is underway is to have the technical subject matter experts (SMEs) begin to identify high level integration test objectives.  This is built at the same time that the technical design is being worked on, and it is a very brief document outlining the gotchas that the development teams need to be aware of when developing and exercising the new systems/features.  This is intentionally kept very high level - usually just descriptions of each item that will need to be fleshed out in later stages.
  1. Identify touch points between various sub-systems that will need to be tested.
  2. Identify data replication strategies that will be used and that will need to be tested.
  3. Identify data transformation strategies that will be used and that will need to be tested.
  4. Identify services that will be relied on that will need to be tested.
This acts as a roadmap for the entire technical team.  It identifies where both data validation and functional validation will be needed.  It can be used by those who may be unfamiliar with specific subsystems to gain a further understanding of the interrelationships between the various subsystems and external/internal services.  The IT SMEs own this high level integration test plan and pass it along to others in the process.
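One lightweight way to capture these objectives so they can be fleshed out later is a simple structured list; the subsystem and service names below are invented for illustration:

```python
# A hedged sketch of how high level integration test objectives might
# be recorded. Every name here is a placeholder; the real entries would
# come from the technical design under review.

integration_test_objectives = [
    {"area": "touch point",    "item": "orders -> billing message queue",
     "note": "verify message schema and delivery"},
    {"area": "replication",    "item": "customer table to reporting DB",
     "note": "verify latency and row counts"},
    {"area": "transformation", "item": "address normalization on import",
     "note": "verify mapping rules, positive and negative cases"},
    {"area": "service",        "item": "external tax-calculation API",
     "note": "verify availability handling and result validation"},
]
```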

As we dive down into the detailed design, several things begin to happen.  First, the development team begins to flesh out the high level integration test strategy and it becomes the Integration Test Plan.  This document details the specific actions that will be used to test the touch points previously identified and should include both data and functional validation - data should be traceable throughout the system, with the end points being validated.  This includes identifying the specific data elements that will be used to perform the tests as well as the expected positive/negative results.  While driven by individual development team members, the plan will be reviewed and validated by the Technical Leads.
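As a hedged sketch of what one entry in the Integration Test Plan might look like once fleshed out, the tests below trace a single data element across a touch point; the in-memory billing dictionary and the helper functions are stand-ins so the example runs on its own:

```python
import pytest

# Sketch of one integration test plan entry. The _billing dict stands
# in for the real downstream system; in practice these calls would
# cross the actual touch point.

_billing = {}  # stand-in for the billing subsystem

def publish_order(order):
    if "total" not in order:
        raise ValueError("order is missing a total")
    _billing[order["order_id"]] = {"total": order["total"]}

def fetch_billing_record(order_id):
    return _billing[order_id]

def test_order_total_survives_handoff():
    # Positive case: a traceable data element enters at one end...
    publish_order({"order_id": "IT-1001", "total": 250.00})
    # ...and is validated at the other end point.
    assert fetch_billing_record("IT-1001")["total"] == 250.00

def test_malformed_order_is_rejected():
    # Negative case: the touch point must refuse incomplete data.
    with pytest.raises(ValueError):
        publish_order({"order_id": "IT-1002"})  # total intentionally missing
```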

Additionally, the individual development team members begin to identify their individual Unit Test strategies.  Each functional unit of code being created or modified needs to be tested to ensure that it is performing to spec prior to moving into integration testing.  I believe in the paradigm that the tests should actually be created before the code is written, but I've seen development teams be successful using several different paradigms - so I won't harp on it.  The individual developers own this and are responsible for ensuring that they develop clean code.
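As a small illustration of the test-first paradigm, the test below is written before the function it exercises exists; the rounding rule and the round_to_cents() name are invented for the example:

```python
# Test-first sketch: the test is written first, expressing the spec,
# and the implementation is added afterward just to make it pass.

def test_round_to_cents_half_up():
    assert round_to_cents(10.005) == 10.01   # spec: halves round up
    assert round_to_cents(10.004) == 10.00

# Implementation written after the test, just enough to satisfy it.
from decimal import Decimal, ROUND_HALF_UP

def round_to_cents(value):
    return float(Decimal(str(value)).quantize(Decimal("0.01"),
                                              rounding=ROUND_HALF_UP))
```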

Once the developers have been able to execute their individual Unit Tests, they work together and move into pre-Integration Test planning.  This time is spent ensuring that the necessary data has been identified and created/set up, that the configuration changes needed for the system and supporting sub-systems have been applied, that the network configuration changes needed to open ports to internal/external services have been applied, and that all of this has been documented for production.  This particular block of activity is owned by the Development Team and the Technical Leads.  Infrastructure Teams may support the activity, but the Development Team and Technical Leads should ensure the configuration is complete and documented for the Implementation Plan.
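Part of that readiness checking can even be scripted; here's a sketch under the assumption that port reachability is one of the checks - the host names and ports are placeholders, and the real checks would be tailored to the project:

```python
import socket

# A hedged sketch of an environment readiness check run before
# integration testing begins. Hosts and ports are placeholders.

def port_is_open(host, port, timeout=2.0):
    """Confirm a required internal/external service port is reachable."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def run_readiness_checks():
    checks = {
        "billing service port": port_is_open("billing.internal.example", 8443),
        "partner API port":     port_is_open("api.partner.example", 443),
    }
    for name, ok in checks.items():
        print(f"{'PASS' if ok else 'FAIL'}: {name}")
    return all(checks.values())

if __name__ == "__main__":
    raise SystemExit(0 if run_readiness_checks() else 1)
```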

Now we step into Integration Testing - executing the plan that was documented earlier.  It is inevitable that something will have been missed - but that is what this stage is for: finding the bumps before you move your changes into the quality assurance (QA) and production environments.  Here you will find the missing data elements or calls to services/APIs that work differently than documented.  You will uncover the fact that the replication strategy for data you rely on wasn't documented properly and the underlying data you were expecting to see doesn't exist.  All sorts of unknowns will surface and you'll spend time playing whack-a-mole to clean it all up to a point where you believe it is functional.  The Development Team and the Tech Leads own this piece of the puzzle and work to ensure that all of the individual pieces come together and work as a whole.

Now, let's take a few minutes to talk about Regression Testing.  I'm actually in the midst of transitioning our teams from the old style - moving code into QA and then running a full regression test along with new function/feature testing - to a model where the code is built nightly and the suite of regression tests is run when the build has finished.  This has been a tectonic shift for the teams in that the developers are getting daily feedback on code that has been checked into the repository, and we are actually building up a set of statistics that shows us the overall state of the code.  I can now walk in on a daily basis and see the various package builds, the number of tests being executed and the success/failure rate - along with a whole suite of statistics on coverage and the severity of the failures.
  1. Benefit number one - immediate feedback to the development team while the code is still fresh in their minds.  If we need to change something due to a failing test, the developer doesn't need to spend time remembering what changes were made.
  2. Benefit number two - reduction of effort by our QA Team Members to get code through the testing process - their focus is on new feature/function testing, not regression testing.
  3. Benefit number three - the developers have more buy-in on the testing process and begin to have a better understanding of the underlying data.
The Development Team is responsible for the Regression Testing and must review the results with QA to validate that the code is ready to move into QA Testing.
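For illustration, the nightly loop can be glued together with something as small as the sketch below; the test path and the pytest invocation are assumptions, not a description of our actual build system:

```python
import datetime
import json
import subprocess

# A hedged sketch of the nightly feedback loop: run the regression
# suite after the build and append pass/fail data to a running log so
# day-to-day trends are visible.

def run_nightly_regression(log_path="regression_history.jsonl"):
    # The exit code is the simplest pass/fail signal: 0 means the whole
    # suite passed. A JUnit XML report would give richer per-test data.
    result = subprocess.run(["pytest", "tests/regression", "-q"],
                            capture_output=True, text=True)
    lines = result.stdout.strip().splitlines()
    record = {
        "date": datetime.date.today().isoformat(),
        "passed": result.returncode == 0,
        "summary": lines[-1] if lines else "",
    }
    with open(log_path, "a") as log:
        log.write(json.dumps(record) + "\n")
    return record

if __name__ == "__main__":
    print(run_nightly_regression())
```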

QA Testing is focused on the new features/functionality to ensure that they are operating within defined constraints.  Here is where I want to hit the system hard with positive and negative testing associated with the new features/functionality being introduced within the overall system.  What happens when certain pieces of data are missing from messages exchanged between subsystems?  What happens when a user forgets to input data?  What happens when a web service that we are relying on isn't available?  What happens when the system thinks everything processed fine?  I will also want to take a look at load testing on some systems to ensure that we are staying within expected SLAs.  This testing is the responsibility of the QA Team with input from the Development Team and Tech Leads, and support from the Infrastructure Teams.
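Here's a minimal sketch of one such negative scenario - a dependent web service being unavailable; the tax-service client and the fallback behavior are invented stand-ins:

```python
# A hedged sketch of one negative QA scenario: the feature must degrade
# cleanly when a web service it relies on is down. Everything here is a
# stand-in so the example runs on its own.

class ServiceUnavailable(Exception):
    pass

def lookup_tax_rate(region):
    # Stand-in for the real web-service client; here it is always down.
    raise ServiceUnavailable(f"tax service unreachable for {region}")

def quote_order(total, region):
    # Behavior under test: fall back to a flagged estimate rather than
    # failing the whole transaction.
    try:
        rate = lookup_tax_rate(region)
        return {"total": total * (1 + rate), "estimated": False}
    except ServiceUnavailable:
        return {"total": total, "estimated": True}

def test_quote_survives_tax_service_outage():
    quote = quote_order(100.00, "WI")
    assert quote["estimated"] is True      # user sees a flagged estimate
    assert quote["total"] == 100.00        # no silent wrong math
```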

As part of the work that is being done during QA Testing, we should also work with those team members representing the end users to execute the User Acceptance Tests (UATs).  This should tie all the way back to the beginning stages of the project, where the business teams defined the success criteria of the effort and began building the definition of their UATs.  During the entire project, these high level UATs should have been fleshed out and reviewed to ensure that they align with the final requirements of the project.  QA and the User Teams should jointly execute and review the results of the associated UATs and prepare to sign off and give final authorization to move the code into production.

As a Project Manager, you need to assign tasks to cover these activities within your overall project plan.  More importantly, you need to be following up with the responsible parties at various stages of the lifecycle to ensure that these activities are not being ignored.  Your organization may hold different team members responsible for the specific activities outlined.  It may also skip some of the items outlined above.  However, you need to ensure you are covering enough of the pieces to confirm that the system being built will satisfy the requirements originally identified at the beginning of the overall effort.

Well, we've reached the end of the discussion today.  I hope that you've found some of what I've discussed useful and that you're able to take pieces back into your organization.  Feel free to share your thoughts on what has been presented and let me know what works in your organization.

Tags: SDLC, Software Development Lifecycle, Project Lifecycle, Project, Manager, Development, Paradigm, Security, Secure Development Lifecycle, Communication, QA, Quality Assurance, Testing

For more information on David L. Collison: LinkedIn Profile

Wednesday, September 4, 2013

Project Managers, Bake Testing into all of your Projects!

Well, this week I'm going to jump back in on the topic of quality - more importantly, the need to build it in from the start.  This is a topic that many teams struggle with - how much testing is enough?  When do you call it good enough to move code into production and celebrate success?

While I can't give you a definitive answer, it is important for you to look at each project and assess the risk of failure.  More importantly, you need to look at the various functional areas and ask yourself: what if there is an error - what will the impact be?  How will the application be used?  Who is the audience?  What if this error is impacting one user?  What if this error is impacting multiple users?  Can downstream processes within the application - or dependent applications - continue to operate?

One of the questions that I've been asked by various developers: why are you making me do all this testing?  Isn't that the job of QA (Quality Assurance)?  Well, I hate to burst your bubble, but everyone is responsible for the quality of the end product.  Depending on the structure of the overall team involved in the project, the level of effort of the developers dials up or down.  If the organization has a strong QA Team, the developers may only be responsible for unit testing and some level of integration testing.  In other organizations, development may be responsible for everything up to and including user acceptance testing.  In web-facing projects, business representatives may be integrated into the Agile team and test directly with the developers - with the developers making changes on the spot.

I'm not going to try and convince you to pick one methodology or paradigm over another.  Each organization is unique and your organization has built the test structure that is being used for some set of reasons.  What I am going to do is explain some of the things that need to be done along the way - I can tell you who within the project I think is responsible, but this probably won't match up with your organization.  (I'll let you in on a little secret - it probably won't even match up with the way that my teams handle testing.  We're reworking how and who performs testing, so some of this is still up in the air as I write this blog entry.)

Whew, now that we've laid the groundwork, let's dig into the details and see if we can make some progress.  Regardless of what methodology you follow to develop software, the quality equation must be dealt with at the beginning of the project and must touch all parts of your lifecycle to ensure that you're delivering the highest quality software into the production environment.  This is just as true in an Agile environment as it is within a waterfall environment.

So, let's touch base on some of the key things that need to be addressed by the overall test plan associated with your development effort:
  1. Acceptance Criteria - when the team thinks it is done, what will be used to judge whether the software can move into the production environment?
  2. Integration Test Strategy - what are the application touch points that need to be addressed?  This includes system-to-system touch points (messaging); intra-application touch points between application sub-systems; testing of the replication mechanism, if you're replicating data; and testing of the security mechanisms used within the application.
  3. Regression Test Strategy - is it necessary to perform a full regression test against the application, or is it valid to consider a limited regression test based on the functionality being touched by the development team?
  4. Test Data - what data can be used to ensure that you can execute all of the tests needed to validate the functionality of the system?  This should include both positive and negative testing of the application.
  5. Recovery Test Strategy - depending on the system you are building, you may have included functionality to allow the system to recover from various types of failure points - if so, you need to ensure that you test the recovery functionality.
  6. Security Test Strategy - depending on your application, you may or may not need to include elements to secure the data being captured and/or manipulated by the system.  If this is the case, you need to ensure that you are testing the security elements so that individuals without the proper authority are not allowed access to sensitive and secure data (see the sketch below).
On top of all that, each individual developer should be responsible for developing and executing unit tests against the code that they are developing.
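As one illustration of a security test strategy item, the sketch below verifies that a role without the proper authority is denied access to sensitive data; the roles, resources, and the check_access() rule are invented for the example:

```python
# A hedged sketch of a security test: access to sensitive data must be
# refused for roles lacking the proper authority. All names invented.

SENSITIVE_RESOURCES = {"payroll", "customer_ssn"}

def check_access(role, resource):
    # Stand-in authorization rule: only auditors may read sensitive data.
    if resource in SENSITIVE_RESOURCES and role != "auditor":
        return False
    return True

def test_unauthorized_role_is_denied_sensitive_data():
    # Negative security test: access must actually be refused.
    assert check_access("developer", "payroll") is False

def test_authorized_role_is_allowed():
    # Positive counterpart: the rule must not lock everyone out.
    assert check_access("auditor", "payroll") is True
```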

All in all, there is quite a bit of activity that surrounds the overall test strategy associated with the development process.  You may not formally address each of the individual items identified above, but in some way, these will get addressed throughout your project.  Either by accident or by design, someone should be answering the above questions before you put your code into production.

As the Project Manager, it is to your benefit to ensure that these questions are being asked early.  This allows for the entire team to proactively address the test strategy and test requirements.  What good is it for development to be finalized if the development team doesn't understand how they will perform integration testing, or if the team doesn't understand how they will validate the acceptance criteria?

Next time, I'll dig in to some of the tactics that I've used - both within the current Teams that I work with and within previous organizations where I've worked.

In the meantime, feel free to chip in to the conversation and tell me how you manage your test strategies within your teams.  What are the components you feel are important, and who within your teams has responsibility?  I'm not here to judge what anyone does, but by sharing, we all might learn something new.

Tags: SDLC, Software Development Lifecycle, Project Lifecycle, Project, Manager, Development, Security, Secure Development Lifecycle, Communication, Testing, QA

For more information on David L. Collison: LinkedIn Profile