If you can't measure it, you can't fix it. More specifically, you need to be able to recognize the things that are slowing you down, interrupting the flow, introducing defects, raising costs, and frustrating your team members.
Some things you can figure out just by walking around, talking to your team, and taking a look at what is happening on a daily basis:
- Seeing that your technical teams are getting interrupted regularly and don't have time to concentrate on their assigned work.
- Seeing that your developers are the only ones testing their own code before moving it into production.
- Seeing that the project team is doing a lot of rework because nobody is willing to make a final decision.
At some point, though, you'll have reached up and plucked all the low-hanging fruit off of the tree. Then comes the tough part.
Our processes can't remain static. We need to look for ways to reduce the time needed to deliver products to production, instill a zero-defect mentality within our teams, improve communication between teams, reduce the amount of time needed to regression test our code, and focus more of our time on validating new functionality.
This is where metrics come into play. You need to begin to identify the key data points within your cycle that you can measure and monitor. Then you need to build in ways to capture those data points.
- How long does it take to move through any one phase of your lifecycle?
- How many elements require rework at any stage of the process?
- How many times do you have to back up in the lifecycle?
- How many defects, and what kinds of defects, are found within the various testing phases: unit, integration, regression, parallel, or UAT?
- How much of your code has valid tests that you can prove exercise the code?
- How many defects are found after the code is moved into production?
- How much project time is wasted waiting for various teams to be available?
- How much of your team's time is being wasted on unproductive activity: non-project meetings, administrative activity, or unapproved development activity?
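The first few questions above can be answered directly from lifecycle event data. As a minimal sketch, assuming a hypothetical export of phase-transition records from your ticketing system (the item names, phases, and dates here are all invented for illustration), you could compute time spent per phase and count how often work moves backward:

```python
from datetime import datetime

# Hypothetical lifecycle events: (work_item, phase, entered_at).
# In practice these would come from your ticketing or build system.
events = [
    ("FEAT-101", "development", "2024-01-02"),
    ("FEAT-101", "qa",          "2024-01-09"),
    ("FEAT-101", "development", "2024-01-12"),  # sent back: rework
    ("FEAT-101", "qa",          "2024-01-15"),
    ("FEAT-101", "production",  "2024-01-18"),
]

def phase_metrics(events):
    """Return days spent in each phase and the number of backward moves."""
    durations = {}
    rework = 0
    order = {"development": 0, "qa": 1, "production": 2}
    # Pair each event with the next one to get the interval spent in a phase.
    for (item, phase, start), (_, nxt, end) in zip(events, events[1:]):
        days = (datetime.fromisoformat(end) - datetime.fromisoformat(start)).days
        durations[phase] = durations.get(phase, 0) + days
        if order[nxt] < order[phase]:
            rework += 1  # the item moved backward in the lifecycle
    return durations, rework

durations, rework = phase_metrics(events)
print(durations)  # {'development': 10, 'qa': 6}
print(rework)     # 1
```

The same pattern scales to many work items: group the events by item, then aggregate durations and rework counts across the project.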
Now, the fine line is defining and recording these data points without impacting activity within the overall lifecycle. What systems do you have in place today that are used within the lifecycle, and can you augment them to capture the data points you have identified and automate the analysis of the data? There will be some level of manual analysis, but I would recommend automating as much of this activity as possible.
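Automating the analysis can start very small. As a sketch, assuming a hypothetical defect log exported from your tracker (the IDs and phase names are invented), you could summarize where defects are being caught and what share escape to production:

```python
from collections import Counter

# Hypothetical defect log: (defect_id, phase_found).
defects = [
    ("D-1", "unit"), ("D-2", "integration"), ("D-3", "regression"),
    ("D-4", "uat"), ("D-5", "production"), ("D-6", "integration"),
]

def defect_summary(defects):
    """Count defects by the phase in which they were found, and compute
    the escape rate: the share of defects that reached production."""
    by_phase = Counter(phase for _, phase in defects)
    escaped = by_phase.get("production", 0)
    return by_phase, escaped / len(defects)

by_phase, escape_rate = defect_summary(defects)
print(dict(by_phase))  # {'unit': 1, 'integration': 2, 'regression': 1, 'uat': 1, 'production': 1}
print(escape_rate)     # 0.166... (1 of 6 defects escaped to production)
```

Run on a schedule against a fresh export, a report like this answers the testing questions above without anyone filling in a spreadsheet by hand.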
How can you make the argument that you need to add new resources if you can't prove that your resources are saturated with approved project activity? How can you make decisions on your test processes if you don't know where your tests are failing? How can you hold your development team accountable if you don't know where they're spending their time and how many errors are getting past them and into the QA environment, or worse yet, production?
I'm not advocating that you immediately go out and create metrics across the lifecycle. I do challenge you to think through your lifecycle, begin to identify the parts that need adjusting, and then think about the metrics that could be captured to help you make decisions. If you are totally comfortable with your entire process, then I would gather feedback from your peers throughout the organization, asking them what is and is not working, and use that information to guide you on where improvements could be made to your lifecycle.
Tags: SDLC, Software Development, Metrics, Lifecycle, Application Development, Project Office
If you'd like more information on my background: LinkedIn Profile