Tuesday, April 9, 2013

Answers and The Way Back Machine ...


First, a couple of clarifications - answering questions from some of the comments/questions I have received on earlier postings:

  1. How did they get a room full of errors without anyone noticing?  Well, long story short, the CEO of the acquired company was aware of the errors being generated, but he was more concerned with selling us on the new system than with cleaning up the mess and holding the outside firm responsible for creating it.  I give a lot of credit to the individual who finally broke the silence and showed me the room.
  2. Are small businesses really at risk from crackers?  Let's say you're a small business.  You run a Windows network to connect your employees - say you have 3 or 4 employees plus yourself.  You store your customer information, contracts, payments, and bids on your network.  Do you store copies of payment information - actual checks, credit card numbers (storing credit card information is a no-no)?  Do you have tax IDs for your business or the companies you do business with?  Do you store social security numbers along with names and addresses?  Do you store any medical/mental health information on your staff or your clients?  Do you regularly patch your Windows server and your workstations?  Do you run anti-virus and anti-malware software?  Do you scan incoming emails for viruses, and does your solution also scan attachments?  My bet is that you're like thousands upon thousands of other small businesses that are not patching and not taking security seriously.  Crackers scan the internet for easy targets - once they get your IP address, it's off to the races.  They'll steal anything you have and look for anything they can sell.  You won't know until they've stolen the identities of you, your employees, or your clients - or drained your bank account.

Another question that I've received from several readers - how did I get started, and what did I do to make this a career?  So, let's hop into the way, way back machine and see what got me to where I am today.  (A little luck, a lot of hard work and some great mentors along the way.)

I touched my first computer keyboard when I was the ripe old age of 12.  It was a teletype terminal, in a closet at my middle school, hooked up to a DEC PDP-11/70 at the local community college.  (And with that, I've just dated myself.)  For those of you that aren't in the know - a teletype terminal was a keyboard attached to a printer.  There was no computer screen.  When you typed, it was printed on greenbar paper, and then the system response would follow.  I was very curious about what this machine was and what it could do.

One of my teachers was kind enough to get me an account on the system.  Now the really nice thing was that each account had access to BASIC and Fortran.  There was also a Star Trek game loaded on the system.  It would print out a grid on the paper showing where you were and where the Klingon battle cruiser was located.  You then fed commands into the computer to move your ship and to fire on the Klingons.  The program was written in either BASIC or Fortran, and you were also allowed to print out the program listing.  And that was, as they say, all she wrote!

I printed out the computer program and pored through it to understand how it worked.  Now, back then there was no structure to the program.  Both the BASIC and Fortran languages at that time were crude and ugly, but they worked.  Variable names were 1 character long and you moved through the program with GOTO statements.  But as a 12 year old - it was something new and shiny and it grabbed my attention.  I spent quite a bit of free time poring through the code and figuring out how to reprogram the game.  I ended up learning BASIC on my own - and now that I think about it, the game had to have been written in BASIC, because I was making changes to the game before I formally took the Fortran programming class.

Through high school, I ended up taking the formal Fortran programming class.  I learned quite a bit from the instructors about how to write code that used as few system resources as possible.  Remember, back then you were lucky if you had 64K of RAM available.  And if you really want a hard laugh, the Fortran class forced you to use punch cards to create your programs.  Yes, punch cards.  If you don't know what they are - do a search on the internet and have a good laugh.

When I was 16 years old, my dad ended up purchasing an IBM 5110 computer for his office.  He needed someone who could work on the programs that had been created to track his clients, produce mailing labels and invoice those clients.  And that became my first real programming job!  The IBM 5110 was advanced for its time.  If memory serves me correctly, it had a grand total of 64K RAM and relied on two 10" floppy drives - one floppy held the program, while you swapped disks in the second drive for the data the program needed.  The computer was the size of a 2 drawer file cabinet and it had a small 5" screen and keyboard built in to the top of the cabinet.  Now, I also worked at a fast food joint during high school.  But it was computers that had grabbed my attention.

When I left for college, I got a job at the local computer store doing delivery and setup.  So, for a couple of hours each afternoon, I delivered computers to local businesses, performed setup, and would also pick up equipment that needed to come back to the shop to be repaired.  The benefit of working for this computer store was that they had access to the first IBM luggable computer - about the size of a sewing machine.  The owner of the store actually allowed me to use one and take it to my dorm room.  My instructors let me use the Pascal compiler on the IBM luggable to compile my programs and produce the necessary assignments.  Joy of joys, for the most part, I didn't have to go down to the computer lab.

Formally, between high school and college, I learned 3 languages: FORTRAN, Pascal and Assembler.  Over the years, I have learned additional languages, scripting languages and frameworks on my own - BASIC, Visual BASIC, dBase, Clipper, Java, JavaScript, C, C++, and Ruby on Rails.  On top of that, I've also learned HTML and CSS to develop web applications.  There are probably a few others that I've learned - but you get the point.  I've learned way more languages on my own than sitting in a classroom.

Out of college, I got my first job - selling and installing IBM PCs and teaching courses on Lotus 1-2-3 and dBase III for an IBM VAR/VAD.  This was just as PCs were really starting to take off - if I remember correctly, the opening price for an IBM PC with a couple of floppy drives was in excess of $5,000.  And the monitors were green screen - all text, no graphics capabilities.

It took me a couple of years to get my first full time job as a programmer - and from there I've moved up through various companies as a developer and then as a manager of developers.  So, what were the secrets along the way?

  1. I’ve never stopped learning - I’ve purchased dozens upon dozens of books on programming, management, database architecture and other technical subjects.
  2. I’ve  always been comfortable asking questions - why are we doing it this way, what would happen if we chose to do it this way, how does that impact the results?
  3. I’ve had the pleasure of meeting excellent mentors along the way - people that took me under their wings and showed me how to code better or how to be a leader vs a manager.
  4. When my boss asked if there was anyone who wanted a “special assignment”, I always volunteered.
  5. Even when I became a manager and no longer coded during my day job, I’ve found ways to continue programming outside of work.  Sometimes as a volunteer - other times, creating small utility programs that I need.

So, that's how I got into software development.  I truly love what it is that I do and enjoy the creative side of making software.  I've kept learning through the years - whether at some of the smaller companies I've been associated with, or at the larger organizations with offices on multiple continents.

On my next blog entry, I’ll talk about the toughest thing that I had to learn along the way.

If you'd like more information on my background: LinkedIn Profile

Monday, April 8, 2013

Testing - Making Changes


So, in the last couple of blog posts, I've discussed incidents from my past that have driven me to take testing more seriously than when I first entered the field.  Today, it has become even more important for the teams that I lead to treat testing as a critical component of the software development lifecycle.  My teams produce applications that, at peak times, can process tens of thousands of transactions a minute.  If we make a mistake - we can propagate that mistake really quickly.

In the past, we have relied on brute force to test the applications that we release into production.  This included individual scripts, but they were kicked off by hand and the results were tracked by hand.  As our environment has grown more complex and the pace at which we release software has increased, it no longer makes sense to try to manage this activity manually.


What we are in the midst of doing is automating and exposing all of the test information.  If you've been reading along in my blog postings, you can probably figure out that I like process.  Process, when used properly, can actually accelerate the volume of activity within a development team and bring greater returns to the business.  I readily admit that process run amok can destroy a team.  It's a fine line, and I attempt to be cognizant of where the benefits can occur and where the pitfalls are that can wreak havoc across the team.


So what are we doing?  Well, baby steps first - and none of this is revolutionary:

  1. I've asked our Quality Assurance Teams to own the organizational test plans.  I've moved them forward in the process to work alongside the business teams as functional and nonfunctional requirements are identified.  They are responsible for documenting what testing will be needed and how it will be accomplished.  Is this an app that needs load testing, have we thought about security, is there a user interface component?  By doing this, we are working through the test requirements at the same time as the business requirements.
  2. I've asked our Development Teams to move towards a Test-Driven Development (TDD) methodology.  We are beginning to implement these strategies in individual project initiatives and, over time, this should spread to all of our ongoing project activity.  (A sketch of what test-first development can look like follows this list.)
  3. I've asked our Development Teams to move towards a continuous build/continuous test development paradigm where individual applications are built either nightly or weekly, with automated unit, integration and regression test results accumulated and published to a central location.  Developers are notified of errors in the code base.  Management has immediate reporting on key statistics, including code coverage, test success rate, and conformance to best practices.
  4. I've asked our Development Teams and Quality Assurance Teams to communicate more closely, so that QA has visibility into the tests being generated and executed during the development cycle and into the continuous build/continuous test results.  This allows our QA Teams to validate that development is properly exercising the code being generated and isn't breaking things that already work.
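
To make item 2 a little more concrete, here is a minimal sketch of what test-first development can look like, using JUnit 4.  The class names and the discount rule are invented for illustration - this is not code from our systems - but the rhythm is the point: the test is written first and fails, and the production class is then written to make it pass.

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Hypothetical example: the class names and the discount rule are invented.
// In TDD this test class is written first, fails ("red"), and the production
// class below is then written to make it pass ("green").
public class DiscountCalculatorTest {

    @Test
    public void standardCustomerGetsNoDiscount() {
        assertEquals(0.0, new DiscountCalculator().discountFor("STANDARD", 100.00), 0.001);
    }

    @Test
    public void preferredCustomerGetsTenPercentOnLargeOrders() {
        assertEquals(50.0, new DiscountCalculator().discountFor("PREFERRED", 500.00), 0.001);
    }
}

// Written after the tests above, with just enough logic to make them pass.
class DiscountCalculator {
    double discountFor(String customerType, double orderTotal) {
        if ("PREFERRED".equals(customerType) && orderTotal >= 500.00) {
            return orderTotal * 0.10;
        }
        return 0.0;
    }
}
```

The value isn't this particular example; it's that every new piece of behavior starts life with a failing test that then runs automatically in every build.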

If done properly, this will allow our Quality Assurance Team to focus on New Function testing - while validating the integration and regression testing performed via our automated build/test scripts.  Additionally, this should allow us to produce reliable test reports that can be reviewed with management and ultimately with the business teams as they complete their UAT.  This should give them a higher degree of confidence that the applications we are moving into production have been tested thoroughly.
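
As one hedged example of the kind of roll-up those automated runs can feed, JUnit 4 can be driven programmatically and its results summarized for a central report.  In practice the build tool (Ant, Maven, Jenkins and the like) usually produces this for you; the test class referenced below is the hypothetical one from the sketch above.

```java
import org.junit.runner.JUnitCore;
import org.junit.runner.Result;
import org.junit.runner.notification.Failure;

// Runs a test class programmatically and prints a one-line summary that a
// nightly build job could append to a central report.
public class NightlyTestReport {

    public static void main(String[] args) {
        Result result = JUnitCore.runClasses(DiscountCalculatorTest.class);

        System.out.println("Tests run: " + result.getRunCount()
                + ", failures: " + result.getFailureCount()
                + ", time: " + result.getRunTime() + " ms"
                + ", success: " + result.wasSuccessful());

        // Detail lines for the developers who get notified on a red build.
        for (Failure failure : result.getFailures()) {
            System.out.println("FAILED: " + failure.getTestHeader()
                    + " - " + failure.getMessage());
        }
    }
}
```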

Over the last few months the teams have made big strides in changing the paradigm of testing within our organization.  We have a long way to go, but I’m confident that the team members understand the benefits to their individual teams, to the organization as a whole and ultimately to the customers that we serve.

I'll admit that this is the tip of the iceberg - there is quite a bit more that we can do.  However, we are making the changes that can be implemented relatively quickly and that have the highest payback to the organization and to our customers.  As time moves on, the teams will evaluate what additional steps can be taken to improve the process and harden the delivery of applications to our production environment.  Our QA Teams have done a superb job to date of managing the applications we move through the test phase of the lifecycle; now it's time to expand the responsibilities associated with testing across the organization and to utilize tools that allow us to automate key portions of the test strategy and produce standardized test metrics across all project activity.

Tags: #SDLC #softwaredevelopment #metrics #lifecycle #qualityassurance #qa

If you'd like more information on my background: LinkedIn Profile

Thursday, April 4, 2013

A Room Full of Errors


At one point in my career I was involved in the merger of two organizations.  Both were roughly equal in size and I was working for the CIO of the acquiring company in an effort to evaluate systems across the two organizations and make decisions on which systems would be used going forward.

Let’s back up a ways and find out how we got into the mess - literally a room full of errors.


During the acquisition, the CEO of the acquired company let it be known that he had hired an outside consulting firm to create a new order entry/contract system that was beginning to be implemented within the company.  It was a new thick-client application, versus the application we were using internally, which ran on an IBM AS/400.  The argument being made was to ditch the green screens and use the sleek looking new interface of this new system.


So the acquisition is complete and we begin to send individuals down to review the systems, gain insight into where they are in the migration from their old system to the new system, and evaluate whether the new system might support the merged organization.  These trips occurred twice a week, and sometimes I stayed on-site for 3 or 4 days at a crack.

It took a while, but over time I began to hear rumblings that all was not well in the world.  Bugs that had disappeared would suddenly reappear, errors were being encountered in the migration process, contracts could not be generated, and staff were having to re-key data into the new system that everyone thought had already been migrated.

The developers who were on staff also began to feel comfortable with me and started asking if they could be put in control of the contractors who were driving the development of the new system.  They began to pull back the curtain and show me the bug reports that were being sent to the contractors, and how the bugs would recur on a regular basis.  They showed me the system and how data would disappear.  Not a good situation by any means.

The kicker finally came one week as we were preparing for a team meeting to discuss the status of the development work and the migrations currently being scheduled.  Immediately prior to that meeting, I was talking with some of the production folks and they said I needed to see something.  We walked through the building and arrived at an office - the door was closed and the lights were off.  As they opened the door, I could see boxes stuffed with paper lining the walls of the room.  I was curious and asked what I was looking at.  These, they indicated, were all the errors from all the migrations that had already occurred.  My jaw hit the floor!  To say that I was stunned would be an understatement.  I am not kidding when I say that the walls were lined.  The boxes were stacked in piles over 6' in height and wrapped around the room.

I quickly found my boss - the CIO - and let her in on what was happening.  Needless to say, our direction changed.  In that meeting, we pulled the plug on the new system.  The decision was made to roll the migrations back to the old system so that their sales teams could get reliable information.

Yikes, a lot went wrong here ... in hindsight, we found out quite a few things:


  1. IT had no management oversight of the contracting company.  The contractors reported directly to the acquired company's CEO.
  2. The contract development team was not using any type of source control system - bugs were reappearing either because developers were not validating who had which version of the code, or because they were intentionally making changes that reintroduced bugs.
  3. The test process used by the contract development team was woefully lacking - from what I was able to determine, there were no formalized test plans anywhere.  Each developer was responsible for writing the code, testing the code and moving it to production.
  4. Migration issues were largely ignored by the contracting company - in their eyes, those were not issues with the system itself.
  5. User issues were ignored by the contracting company - the CEO was satisfied with the system, so they didn’t feel any pressure to make corrections.
  6. The requirements documentation built by the contract company was mostly fluff - very little concrete documentation identifying what was or was not included.  The contractors were mostly flying by the seat of their pants - show me the screens you currently use, show me the reports you currently get, we'll do the rest.

First, I would like to put on the record - if you're generating enough errors to fill up an office, there is something seriously wrong with what you're doing!  Period - not even up for conversation.

During the actual negotiations, we had very limited ability to confirm what was being said.  However, after the sale was complete and I was able to get on site, I should have been more aggressive in reviewing the migration activity and the actual use of the system.  It ended up taking me several weeks to figure out what should have been noticed within the first few trips.

If memory serves me right, it took almost $1M to clean up the mess and get all of the data back into the original system.  Those were real dollars - not some estimate of lost productivity.  Ultimately, that part of the organization became a very successful part of the larger organization.  We moved the contractors out, hired staff to build up the internal development team, and put the few local resources they did have into leading the new hires.

Now, I'll circle back.  Look, if you're getting test results that tell you something is wrong, and you proceed anyway, you're a fool.  If you're ignoring the results, plugging away and propagating errors into your production environment - and you think that's OK - you're in the wrong job.  There were many warnings in this project, but nobody listened to the people on the front line who were finding all of the issues.  If I didn't understand it before I got involved in this situation, it became crystal clear to me afterwards: if you want to be able to claim success, you need to ensure that the end users of the system are happy with the results.  It really doesn't matter how clever a developer you are - if you're not testing the system and ensuring that the users can do their jobs, then you're not doing your job.

What lessons have you learned along the way that gave you better insight into the need for true testing within your lifecycle?

Tags: #SDLC #softwaredevelopment #metrics #lifecycle #qa #qualityassurance


If you'd like more information on my background: LinkedIn Profile

Wednesday, April 3, 2013

Testing - Start at the Start ...


Well, it seems like a good time to talk about testing.


I've touched on this subject in earlier posts, but over the next few blog entries, I want to take a deeper dive into testing.  I want to explore some of the shortcomings that I've seen and some of the things that I think work.  I'm not saying I have all the answers; heck, I'm not even going to say that I've got most of the answers.  But I've interacted with enough development teams across various development methodologies that I think I have some answers.  More importantly, if the ideas generated within these posts assist someone in taking a fresh look at how they approach testing within their organization, then I'll consider it a success.


Ok, let's rewind and take a look at things that I've seen along the way.  Yep, I've worked in all types of shops - from very small development teams with seat-of-the-pants processes all the way up to multi-national organizations with "thou shalt not deviate from the master plan" processes.  None of them worked.  Yes, each of them allowed product out the door and into production, but all of the processes allowed serious flaws to go unnoticed until the product was out the door and the flaws were caught by the end user.  The team then had to divert from its present activity, tackle the defect, test the fix, and move the fix into production or ship it out to the customer.  All this does is cause churn within the team and reduce the confidence of management and the end user in your ability to create a quality product.


I'm going to go back in time and talk about an early consulting job.  I had essentially been hired as the sole local resource to work on a marketing automation tool.  The primary contractor had multiple resources working the project for this client.  Their responsibilities included everything from requirements and design through development, testing and implementation.  I had been hired by the head of the client's technology team to act as a local resource through the development phase.  I ended up primarily working on building sub-portions of the thick client (think VB) that was to be used by the marketing team.  The primary contractors were responsible for creating the interface to the mainframe, handling the data transformation to a dedicated marketing database, the primary portions of the thick client, and the analysis and reporting tool.

By the time I came on board the requirements had been completed, the design was partially completed and they were working on mock-ups of the application.  They handed the mock-ups to me as they concentrated on handling the data transformation and finishing up the design work.  As we worked the project, several things began to bother me.  Query times to fetch, manipulate and save data were running long.  I discussed this issue with the technical lead from the primary contractor, and was assured that his team would handle the issue before the system went live.  Additionally, I began to realize that many portions of the client were being built without validation routines.  A larger concern to me was the fact that there was very little interaction with the marketing team.  Oh, they came in for "reviews" every few weeks, but from what I could tell, there was no effort by the consulting team to integrate them into the overall effort.

At some point during the process, the head of the technology team for the client pulled me to the side to ask me how things were going.  They were paying the bills, so I gave them my unvarnished opinion of the status of the project.  I walked through the concerns that I had along with my belief that the project was not going to be completed in the required timeframe, or, if it was delivered, it was going to fail.

The following day, we ended up having a reset of the project.  The company began hiring their own developers to add to the team, and interaction with the marketing team became more frequent.  Testing seemed to take on a more visible role - suddenly user tests were being identified and scripted, and there was more interaction with the marketing team to walk through the tests.

Unfortunately, it wasn't enough.  On the day of go-live, the system ate itself.  The response of the system was like molasses.  The users couldn't do their jobs.  We pulled the plug less than an hour after the system went live.

So what went wrong?  The fundamentals of the application were there - the primary contractor had identified what was to be captured and saved.  They had designed a system that, at least on paper, worked.  They could show the screen flows and how that mapped back to the data and then mapped back out to reports.

What had we all failed to do?  Any type of test planning - and I mean nothing.  The primary contractor did not focus on tests at any level.  I did not identify and build tests for the functionality I was building - outside of the “happy path”.  Where we really failed was the load testing of the application.  Boy did we fail!

The fact that the system could handle no load was ultimately what killed it.  Putting data validations in was the easy part.  Having to rewrite the entire data access layer - that was a huge undertaking.  Ultimately, the primary contractor was removed from the project and the client built their own team to rework the application.  I stayed on through the rebuild and was around to see the product successfully rolled out to the organization's internal teams.

The moral of the story - testing is not easy, not even close to easy.  And the reasons for testing have only become more critical as the systems we build and the interconnects between the systems become more complex.  And just as important, load testing - you can test all of the data and all of the entry screens, but if you’re not testing to ensure that the system can scale to the number of expected users - you’re going to fail.
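
For what it's worth, even a crude load check would have surfaced our problem long before go-live.  Below is a bare-bones sketch of that idea in Java - the URL, user counts and timeouts are placeholders, and a real effort would use a proper load testing tool rather than something hand-rolled like this.

```java
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// Crude concurrent load probe: N simulated users each issue M GET requests
// against a target URL, and we report the average response time and error count.
public class SimpleLoadProbe {

    public static void main(String[] args) throws Exception {
        final String target = "http://test-server.example.com/app/login";  // placeholder URL
        final int users = 50;            // simulated concurrent users
        final int requestsPerUser = 20;

        final AtomicLong totalMillis = new AtomicLong();
        final AtomicLong errors = new AtomicLong();

        ExecutorService pool = Executors.newFixedThreadPool(users);
        for (int u = 0; u < users; u++) {
            pool.submit(new Runnable() {
                public void run() {
                    for (int i = 0; i < requestsPerUser; i++) {
                        long start = System.currentTimeMillis();
                        try {
                            HttpURLConnection conn =
                                (HttpURLConnection) new URL(target).openConnection();
                            conn.setConnectTimeout(5000);
                            conn.setReadTimeout(10000);
                            if (conn.getResponseCode() >= 400) {
                                errors.incrementAndGet();
                            }
                            conn.disconnect();
                        } catch (Exception e) {
                            errors.incrementAndGet();
                        }
                        totalMillis.addAndGet(System.currentTimeMillis() - start);
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.MINUTES);

        long totalRequests = (long) users * requestsPerUser;
        System.out.println("Requests: " + totalRequests
                + "  avg ms: " + (totalMillis.get() / totalRequests)
                + "  errors: " + errors.get());
    }
}
```

Run something like this against realistic data volumes and a realistic number of users, and molasses-slow response times show up in an afternoon instead of on go-live day.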

Testing needs to address the following:

  1. Unit Testing - in my humble opinion, we need to be writing unit tests prior to writing the actual code.
  2. Integration Testing - testing all of the individual pieces of code as they are assembled into the overall system.
  3. Exception Testing - anyone who thinks that in today's world all you need to test for is the happy path is wrong.  You need to plan on someone keying the wrong type of data into your fields, you need to expect that someone will key in data outside the boundaries of the values you are expecting, and you can expect a connecting system to garble data on the way to your system.  If you're not testing for it, you're expecting your user to test it for you.  (A sketch of these unhappy-path tests follows this list.)
  4. Load Testing - validating that the system is even remotely usable is always a good idea.
  5. New Function Testing - if you’re adding functionality to a system that is already in place, you need to ensure that you are focusing testing on the new features.
  6. Regression Testing - yes, you need to look over your shoulder and ensure that you haven’t busted anything that already works.  Your end users usually frown when you ship code that busts something they are used to doing.
  7. System Testing - testing the final integrated system and ensuring that it meets the defined and agreed-upon requirements.  That means holding yourself accountable to the contract defined within the requirements.
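
To make the unit and exception testing items a bit more concrete, here is a small JUnit 4 sketch.  The OrderParser class and its rules are invented purely for illustration, but the shape is what matters: the unhappy paths - wrong data types and out-of-range values - get tests right alongside the happy path.

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Hypothetical example: exercising the unhappy paths of a small parser,
// not just the happy path.  The OrderParser below is invented for illustration.
public class OrderParserExceptionTest {

    @Test(expected = IllegalArgumentException.class)
    public void rejectsNonNumericQuantity() {
        new OrderParser().parseQuantity("twelve");   // wrong type of data keyed in
    }

    @Test(expected = IllegalArgumentException.class)
    public void rejectsQuantityOutsideAllowedRange() {
        new OrderParser().parseQuantity("99999");    // outside expected boundaries
    }

    @Test
    public void acceptsQuantityOnTheBoundary() {
        assertEquals(1000, new OrderParser().parseQuantity("1000"));
    }
}

// Minimal production class so the sketch compiles and runs.
class OrderParser {
    int parseQuantity(String raw) {
        final int quantity;
        try {
            quantity = Integer.parseInt(raw.trim());
        } catch (NumberFormatException e) {
            throw new IllegalArgumentException("Quantity is not a number: " + raw);
        }
        if (quantity < 1 || quantity > 1000) {
            throw new IllegalArgumentException("Quantity out of range: " + quantity);
        }
        return quantity;
    }
}
```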

That’s a lot of testing!  Yep, and if you’re doing it all manually - good luck!

The software that we use and build has only gotten more complex over time.  There really are no stand-alone systems anymore - all of these things that we are building interconnect with other things we have built over time.  We can no longer view testing as an exercise of a quick review and push it out the door.  Test planning needs to occur early in the cycle - starting with requirements and building all the way through the lifecycle.

If you’re doing it right, you’re building test scripts that can be automated and report back out to the development team on a regular basis.  Why?  Well, the easy answer is why not.  No, seriously, you want to be able to monitor the status of what is going on.  Are your tests covering all of the code?  Are your tests succeeding or failing?  What does the current build look like compared to the last build from the perspective of testing?  Can you tell if your code is breaking generally accepted coding practices - things that will protect you from either the code failing or someone being able to crack into your code?

I’ve never met a developer that likes being told that an end-user has found a bug in their code.  Heck, we hate it even when QA finds a bug.  So, with a little more control upfront in defining what it is we need to test and then the follow-up on the back-end to ensure that we are actually testing everything we said we would test - we should be able to get better at all of this stuff.

Until next time ... happy testing!

Tags: #SDLC #softwaredevelopment #metrics #lifecycle #qa #qualityassurance  #applicationdevelopment


If you'd like more information on my background: LinkedIn Profile

Monday, April 1, 2013

Security: Wrap Up



Just in case you haven’t been reading my past couple of posts – I’ve been discussing security.  Specifically, the need for organizations to take proper steps up-front within their development lifecycles to mitigate the risks of being hacked.


If you still don’t think this is something to be worried about, let me give you some information:

  • A Forbes article in 2011 states that the average cost of a private information leak was $6.3 million: Forbes: Data Breach
  • When crackers took down Sony in 2011, Sony reported that it cost $170M to clean up the mess and put their sites back on-line: The Bright Side of Being Hacked
  • As Verizon indicates in their report, many of the organizations hit did not experience direct losses.  They did end up spending money on forensics and recovery – how much can your company afford to pay an external forensics team to determine whether you’ve lost data or not? Verizon: Data Breach Report

Unfortunately, cracking is not as difficult as you might think.  Once crackers have stolen the password files off your system, exposing the actual passwords is largely a mechanical exercise.
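
As an illustration only - the hash and word list below are made up - here is roughly what a dictionary attack against unsalted, fast hashes looks like.  An attacker simply hashes candidate words and compares them to the stolen hashes; at millions of guesses per second, common passwords fall in seconds.

```java
import java.security.MessageDigest;
import java.util.Arrays;
import java.util.List;

// Illustration only: why stolen files of unsalted MD5 password hashes fall so
// quickly.  The "stolen" hash and tiny word list here are fabricated stand-ins.
public class DictionaryAttackDemo {

    public static void main(String[] args) throws Exception {
        // Pretend this hash was lifted from a compromised server
        // (it is simply the MD5 of a very common password).
        String stolenHash = md5Hex("password123");

        // Tiny stand-in for the multi-million-word lists traded on cracker forums.
        List<String> wordList = Arrays.asList("letmein", "qwerty", "dragon", "password123");

        for (String candidate : wordList) {
            if (md5Hex(candidate).equals(stolenHash)) {
                System.out.println("Cracked: " + candidate);
                break;
            }
        }
    }

    private static String md5Hex(String input) throws Exception {
        byte[] digest = MessageDigest.getInstance("MD5").digest(input.getBytes("UTF-8"));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }
}
```

The defensive takeaway is the mirror image: store passwords with a salted, deliberately slow algorithm (bcrypt, PBKDF2 and the like) so that every guess is expensive and precomputed lists are useless.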


If you really want to scare yourself – go to YouTube or Google and search on ‘How easy is it to hack someone’s computer’.


This is the easy stuff – there are dedicated web sites used by professionals where they pass around password lists, sell and purchase stolen data, or give each other access to networks that they’ve cracked.  With a few clicks they can purchase or give away thousands of “identities” – i.e., the customer data that they steal from you.  Are you keeping customers’ names, addresses, phone numbers, email addresses?  This is all stuff that the crackers want to get their hands on.  Even more so if you’re storing credit card information in your systems.  Do you keep bank account information and routing numbers so that you can process electronic checks?  Again, this is something the crackers will be after.


You may believe that you’re small potatoes and that the bad guys only concentrate on larger companies.  Nothing could be further from the truth.  They are scanning the internet and looking for weaknesses – they don’t care how big or small you are.  If you have a vulnerable system – they want to find it, they want to exploit it and they want to take as much as they can.  You need to build security in from the outset, you need to continually apply the latest security patches to your infrastructure, and you need to ensure that your development teams – internal as well as external – are using industry best practices to mitigate security risks.


So, if my previous articles didn’t make you think seriously about incorporating security best practices within your organization and the systems you develop, hopefully this article has helped tip the scales.  You need to review all of your systems – those built in-house and those purchased off the shelf or custom developed for your organization – and you need to ensure that they provide the necessary security protections.

Tags: #sdlc #softwaredevelopment #lifecycle #process #applicationdevelopment #security #webdevelopment #applicationsecurity

If you'd like more information on my background: LinkedIn Profile