

The Borges Library

(zix-42 Branch)

Geek Island #001

IPAFT-R/S - Yet Another Model for the SWDLC

(YAMS)


On this page: {OverView} {Objectives} {Suggested Reading} {The Inevitable Acronyms.} {Intro: Identify} {Intro: Prioritise} {Intro: Assess} {Intro: Fix} {Intro: Test} {Intro: Release} {Intro: Support} {Identify} {Prioritise} {Assess} {Fix} {Test} {Release & Support}

OverView

This document describes "yet another" model for the Software Development Life Cycle. A lot of the ideas apply to hardware development, and to a lot of other things as well. Mainly, it is built up from the hard-won lessons of the various development projects that your current narrator has been involved in. It isn't a magic bullet. It's just some ideas -- pretty well organised, I would hope. -- Share and enjoy. The author does not in any way guarantee or stand behind any of it. But comments and/or contracts are welcome. Frank. email: fleeding@hotmail.com

Objectives

There are no "magic bullets" to assure a company's or product's success. There are only safety nets, checkpoints, and a process (that must necessarily evolve over time). If the process is understood by everyone involved and where they fit in, and why certain steps are necessary (and when they can be side-steped) then the PLC (Product Life Cycle) will be more than just a pipe-dream. When the process is ill-defined, too many steps are side-steped or just "written off", the the product fails in the market place and disappears -- possibly after actually endangering people's lives, or at the very least their blood pressure.

Scope

This document covers, to a moderate degree, the concepts involved in the development and support of software-based products. It is based on a rather arbitrary (but commonly recognised) tier of "water-fall" processes -- although in reality, the modern world has all of these supposedly sequential, time-dependent processes happening at the same time.

Identify - What needs to be done? A new product? An upgrade?
Prioritise - Resources are limited; what are the most important things?
Assess - Figure out how to do what needs to be done; QUAD (Quick And Dirty) or detailed?
Fix - Actually fix the problem; program it, patch it, or ...?
Test - Test what was done; Does it work? Did it affect something already in place?
Release - Put it out for the world to use.
Quality Support - Keep the customer happy. Plan for the "even better next time" things.

That's pretty much it. Everything else is "just" (mere) details.

Suggested Reading

Some of the "classics" you might want to try and find. Note in many cases older books are more philosophical (there weren't any nice "off the shelf" software systems that purported to be a magic bullet to solve all your Q/C and Q/A concerns. Beiser, Software Testing Brooks, the Mythical Man Month. Kaner, Falk, and Nugent, Software Testing.

The Inevitable Acronyms.

SME - Subject Matter Expert; a permanent full-time staff member well versed in the operation of the equipment, systems, and processes in use at THE company, etc.

SUT - System Under Test; ie, the specific product and other hardware, as well as the specific software, that is to be delivered as part of this feature.

SWDLC - Software Development Life Cycle (also SWLC); a process model commonly used to describe the process of specifying, developing, programming, debugging, testing, releasing, and then supporting software-based products.

SWQA - Software Quality Assurance (do not confuse with SWQC); the over-all process of managing the development/testing/field-support of software products. Q/A is a MACRO process, Q/C is a MICRO process. Q/A usually monitors the Q/C sub-processes within each portion of the over-all product development/support process.

SWQC - Software Quality Control (do not confuse with SWQA); the micro-process of controlling the software release process. In general, Q/C is the specific part of the development/support process within each group. Thus, the micro-code (lowest level of software) will have its own Q/C process that will be radically different from the Q/C process that is carried out in (eg) Sales and Service Support.

Testing - The process whereby the degree of quality in a product is assessed. The usual progression (once design is finished) is: Unit Test, Function Test, Local Integration Test, Systems Test, Full Integration Test, Regression Test, Customer Acceptance Test. And then release, and then repeat as required.

TC - Test Case; a set of instructions written out in clear and unambiguous terms that detail the steps to be followed in testing a specific part of the operation of the SUT. A test case, when run, produces a result that is to be verified; if the results are "as expected" then the TC is said to have "passed", otherwise it is said to have failed.

TP - Test Plan; a document that is made up of background information and a list of the test cases. Note: Test Planning is a generic term for that part of the over-all SWQA process that deals with the testing portions of the QA process.
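To make the TC definition concrete, here is a minimal sketch in C of how a test-case record and its pass/fail check might be represented (the struct fields and the "TC-0042" entry are hypothetical, purely for illustration):

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical test-case record: an ID, the steps to follow,
       the expected result, and the actual result observed when run. */
    struct test_case {
        const char *id;        /* e.g. "TC-0042"                    */
        const char *steps;     /* clear, unambiguous instructions   */
        const char *expected;  /* expected result                   */
        const char *actual;    /* filled in when the TC is executed */
    };

    /* A TC "passes" if the actual result matches the expected one. */
    static int tc_passed(const struct test_case *tc)
    {
        return tc->actual != NULL && strcmp(tc->expected, tc->actual) == 0;
    }

    int main(void)
    {
        struct test_case tc = {
            "TC-0042",
            "Enter a 4-digit year in the date field and press OK",
            "Date accepted; record saved",
            "Date accepted; record saved"
        };
        printf("%s: %s\n", tc.id, tc_passed(&tc) ? "passed" : "FAILED");
        return 0;
    }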

Quick Walk Thru

So, here is a whirl-wind tour of the overall process.

Intro: Identify

This step is the old "getting ready to do" thing. It usually includes a lot of things that have "cropped up" from either previous releases of a product, or "new ideas" for a product that is seen as filling a niche in the market place. Usually the *initiation* of the step has come about due to things like:

Customer Complaints
New Feature Requests
Pressure from competitors' new products
etc.

Part of this will be a response in the form of:

Customer Reviews (formal meetings with customer/client)
Focus Groups, User Surveys, Requests for Change, Work Orders, etc.
Internal Reviews
Feasibility Studies
Proof Of Concept (POC) prototyping and/or research

At any rate, what needs to be done is to IDENTIFY what needs to be done: what S/W and/or H/W systems will need to be modified (if they are already in place) or written (if new work needs to be done). In many cases, no new programs are actually written, only modifications to existing "legacy" systems. In this case, the concern is that of introducing side effects to the existing functionality. As such, the emphasis will be on regression and negative-impact testing (both in the design and test groups). Finally, the main problem encountered in the identify stage is often simply locating the source code, documentation, schematics, etc. Often they were created without version or source-code control. Many times, the only documentation is the expert knowledge of the original designer. Getting that information down on paper will obviously impact their on-going work.

Intro: Prioritise

Obviously the resources for on-going efforts will have to be diverted (in full or in part) to the up-grading work to be done. It is during the "prioritization" step that the action plans are drawn up. The usual functions are:

Impact/Risk Assessment; "delaying vs. deploying", "Actual Cost to Fix", etc.
Business Impact Analysis; "market presence", "competitive edge", etc.
Criticality to Normal Operations; Critical, Major, High, Low, Future Enhancement.

Intro: Assess

During this phase, the goal is to answer the question: "How bad and how much?". The key is not to start fixing everything until you have a clear idea of the interactions between the components and a clear idea as to WHAT needs to be fixed. The usual "process tools" are applicable at this point:

Code Reviews & Walk-Throughs
Test Case Reviews; new vs regression test cases.
Tools Review/Assessment.
Review of Documentation
Release Planning; timing, press releases, damage control, & the usual field-support issues, training of customer-support staff, etc.
Configuration Management Reviews; specifically, new-release planning.
Data and Program Flow Analysis (detailed view of the systems)
Functional and Systems Analysis (more global, less detailed view of the systems)
Prototyping / Modeling
Performance Analysis
Dependency Charts, Flow Charts, Timing Diagrams
Other diagrams: Warnier/Orr, HIPO, Burr, etc.
Debuggers, Data Drivers, Test Equipment, Protocol Analyzers, etc, etc, etc.

Intro: Fix

Finally, we can start "fixing" things. The key issue here is that resources are limited. As such, there will be the need to share these resources. The key concepts are:

Optimizing Resources; schedulers, e-mail, working groups, COMMUNICATE.
Lab-time Planning; new installations, upgrades, routine maintenance.
Project Management; Assign/Do/Report/Release.
Configuration Management; check-in/check-out, the "development process".
Establish the Exit Criteria; number of open trouble reports, make/break tests, etc.
Tools Development.
Documentation Reviews; technical docs, field-support docs, updated customer-support procedures, new product literature, etc, etc, etc.

Intro: Test

The SEVERAL testing processes:

The DEVELOPMENT PROCESS: Unit Testing, White Box Testing, Systems Testing, Local Integration Testing.
The TESTING PROCESS: Regression Testing, Feature Testing, Customer Acceptance Testing, Field Testing (and INSTALLATION).

As the finished products become available from the design group, the independent test and validation process begins in earnest. (Prior to this, the test cases and the test equipment have been under development.) The usual stuff here:

Establish Entrance Criteria; open room vs clean room, Trouble Report review, fit-for-use criteria.
Make/Break Testing; ready-to-test assessment.
Multi-function Testing, Black Box Testing, Ease-of-use Assessment.
System Integration & Full Integration Testing.
Re-testing & Regression Testing.
Fit-for-Use Assessment.
Customer Acceptance Testing.
Beta Testing.
GOING LIVE -- issue the pagers!

Intro: Release

At this point, the documentation is reviewed, the field-testing (and often customer acceptance testing) occurs, and the field-support and customer service people have one more feature to support. A final "post-mortem" review should be done, as well as the final sigh of relief. Until the next project...

Post Mortem; What worked? What didn't work? How can we work smarter?
New Documentation is shipped.
New Product is shipped.
New Support Documentation ready for field-support/customer service.
Pagers are issued - it's Up and Out there!
On to the next!

Intro: Quality Support

(Field Support, UpGrades, New Products, etc) This includes all of the on-going processes that are the customers' main contact with the company:

Customer Service
Field Support (installation, repair, field upgrades)
Operations (when a technical question arises)

Of course the MAIN impact of a new product is to let each of these groups know, as soon as possible, what impacts any new development and/or repair/up-grades will have on their view of the product and on how the customer uses the product. And now, for our feature presentation....

IPAFT-R/S: Identify

For each software and hardware unit, the Identify step accomplishes three things:

Separate the various components into identifiable units
Locate the source code and documentation
Note any dependencies between the components

During the identification phase, it is easy to overlook components that we take for granted in normal day-to-day operations; for example: the Operating System in use on both the workstations and the SUTs; databases, editors, e-mail, v-mail, forums, etc... We now examine the two aspects (process & procedure) as they apply to the Identification step.

Identify: Process Aspects

Although component boundaries may in some cases be fuzzy and/or arbitrary, it is still useful to draw them, since this will indicate where the interfaces are. For example, if a given program MUST run on a specific system, then that aspect of the component is identified and we are aware of that dependency. Further, in many cases the source code and supporting documentation will need to be located. In some cases, only a patched version will be available. Everything should be placed in an easily accessible location (still respecting any security considerations). Documentation will need to be located and identified as pertaining to each component. In the case of hardware manuals, these will have product and version/release information as well. Ideally, every software product will have a simple, unique identifying number (usually referred to as the "sub-system number") - this is usually already the case for hardware components. This helps in the documentation process: it is much easier to refer to "SAM-124" than to the "Incremental Backup Scripts stored in the /admin/scripts/ directory". Of course, these must be tied together in a master list - as well as in all documentation that references them. This can conveniently be added to the "References" section of each document. At this point in the process, it would be helpful to find/create/update any high-level process-flow documents. This would also include dependency charts, etc. The most worrisome issue at this point is the fact that in many cases "there just isn't any documentation" -- and in many cases, not even the source code can be found.
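For example, the master list tying sub-system numbers to components can start out as nothing more than a lookup table. A minimal sketch in C, with made-up entries ("SAM-124" as in the text, plus a hypothetical "SAM-125"):

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical master list: sub-system number -> description/location. */
    struct component {
        const char *subsys;      /* e.g. "SAM-124"                      */
        const char *description; /* what it is                          */
        const char *location;    /* where the source/docs actually live */
    };

    static const struct component master_list[] = {
        { "SAM-124", "Incremental Backup Scripts", "/admin/scripts/" },
        { "SAM-125", "Nightly Report Generator",   "/admin/reports/" },
    };

    /* Look up a component by its sub-system number. */
    static const struct component *lookup(const char *subsys)
    {
        size_t i;
        for (i = 0; i < sizeof master_list / sizeof master_list[0]; i++)
            if (strcmp(master_list[i].subsys, subsys) == 0)
                return &master_list[i];
        return NULL;
    }

    int main(void)
    {
        const struct component *c = lookup("SAM-124");
        if (c)
            printf("%s: %s (%s)\n", c->subsys, c->description, c->location);
        return 0;
    }

The same table (however it is actually stored) is what every document's "References" section points back to.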

Identify: Procedural Aspects

The output from the Identify step is a document (in generic terms this would be the so-called "Problem Analysis Document" - see the discussion below, in "Documentation"). It will contain the following information:

The name and identifying sub-system number for each component
Version information and Revision History
System Requirements (memory, disk, I/O access, etc)
Platform Requirements (O/S version, Support Software Requirements)
Other-system dependencies (databases, hardware/software, interfaces)
Intended Use (what, how, when, why)
Input and Output descriptions
Business-specific information (criticality, security, and fall-back strategies)

Identify: Hardware Components

Hardware components have the following attributes that are specific to them:

Physical Location (Building, Floor, Room, Bay, Shelf, Card Slot)
Manufacturer Info
Model Number, Serial Number, Asset Number
Power & Space Requirements
Installation & Administration Documentation (manuals)
O/S Requirements
Capacity (memory, disk space, etc - minimum, maximum, and recommended)

Identify: Software Components

Software components have the following attributes that are specific to them:

Module/Program Names
Documentation Trail (manuals, development/test/field-support, etc)
Purpose
Inter-system dependency charts
Platform Requirements
Admin Info (security, access, backup frequency, etc)
User Info (sophistication, tutorial/training requirements, etc)
Data Access (files, databases, etc)
Network Access

Identify: Documentation & Code

Regardless of what process is used, it is essential to provide access to the documentation (still keeping in mind any security concerns). Further, the documentation should be placed under version control (if not already) - part of the up-grading process will be to update each document with one of:

The component will NOT require changes.
The component will require modification/replacement.
The component has been identified as needing updates, but this effort has been postponed.
Investigate: it is not known whether this component requires modifications or not.

Concerning the specifics of the process to be used: there are basically two different "philosophies" concerning the nature of the documents: 1) a separate document is developed at each stage, or 2) a "living" document is created that crosses several stages. In the first case the reasoning is that once a document has been "signed off on", it becomes the fixed blueprint for proceeding to the next stage of development. This has become the norm; however, there is much to be said for the second approach. If you create at least a "summary" document that spans all of the other documentation, then this can be used for training. Of course, this really creates "yet another document" (YAD), but the idea of a central, unifying document that ties all of the "issues" together pays for itself. Additionally, the "running minutes" of various meetings can be kept in this document to high-light where trade-offs were made during development at all stages. This is useful, since it means that later, "if" (read: when) revisions are to be made to the product, this "history" can be used to bring up the critical concerns that were traded away to "simplify" the deployment of the original product. Often it will be the case that the new revision requires changes (sometimes major ones) to the original design or the test strategies, etc.

IPAFT-R/S: Prioritise

The next step is to prioritize the various programs and components into groups, from highest to lowest priority. The most critical systems should be attacked first. Usually these consist of the operating system software, database systems, communications software, etc. These are the very highest priority, since if they ARE causing problems, then all of the applications running on them will be affected. Next come those application programs that are crucial to everyday operations. Chief among these must be such mundane programs as e-mail, the conference room scheduler, and any other office-ware that is used to manage things. Additionally, the specific programs that are used for the core of normal operations - as well as any "fix-up" utilities, editors, debuggers, or viewers that are used to inspect the actual files. Again, do not overlook the licensing software. Finally, there are the myriad "middle" priority components, and then, last but not least, "and the rest".

Prioritise: Process Aspects

The process aspects of prioritization are fairly straightforward: establish a "Process Office" that will be responsible for developing the official (excuse the phrase) "compliance policy". They will be responsible for sifting through the submitted documents and basically tracking the progress of the efforts. They can also be used as an inter-departmental co-ordination group - distributing information and tips as they are found in each department's efforts. Also, it is usually at THIS POINT that the exit criteria (from design) and the entrance criteria (to test) are established and agreed to. These consist of each group's idea as to what quality is. The exit criteria are the design group's point-of-view that "we are done". The entrance criteria are the test group's point-of-view that "it is actually ready to test."

Prioritise: Procedural Aspects

The easiest way for this to be done is for each group to assess its own situation with respect to the software/hardware that they have to maintain. Since they are most likely to be the experts on the system, they should have a fairly good feel for the relative importance of each component. The main problem here is that EVERYTHING tends to get prioritized as CRITICAL. This is where dependency or interface charts are useful (or at least a complete dependency/interface list in the program's header or supporting documentation). Obviously, if a given program is deemed critical, then the other programs and/or databases that it interfaces with will need to be examined as well. Further, in many cases the "IPAFT" process is a cyclical procedure. That is, the point (say, previously) where the critical apps were remediated, tested and released has now passed. We are then faced with the "second pass" through the less critical apps - in this case, we may need to "re-visit" systems that were thought to be "done and out the door". This may occur because, in our "n-th" cycle, we discover "something" that we hadn't thought of - and LO! it turns out to be a major problem; eg, a database system that we "thought" was compliant turns out not to be so. This may mean (worst case) down-loading a new version off of the vendor's web-site, then re-testing our own stuff. And then finally finding out that we need to write a couple of "extra" input filters to pre-process the dates before we feed them into the UPDATED database (which we had to re-build using a quick-and-dirty DB convert utility that we adapted from our earlier up-grading efforts) - so much for our own careful planning. The point is that, due to the cyclical nature of "peeling back the layers", we may need to do a significant amount of re-testing, and possibly even some new development - even if these require only slight to moderate resources. Regardless, more important than anything else is the need to document what has been done - even if the decision is taken to do nothing.

Prioritise: Documentation & Code

As the various components are ranked into the order in which they are going to be fixed, the documentation must be assessed as to its criticality -and- visibility. An important aspect at this stage is "damage control". Obviously, everyone (well, most everyone) will be concerned with the changes. And with the advent of the internationally-accessible web page, it is imperative that the efforts be "managed" for least impact to public image. The up side of this is that one can advertise the amount of time and effort being put in to keep up the high levels of quality. The down side is that other companies may be doing even more. Again, careful wording of any press releases can ease this concern. As to the documentation itself, the following is a check-list to go over for each document.

1) Co-ordinate efforts with the design group's priority list. If a major system is to be changed first, then its documentation needs to be done first as well, etc.

2) Also, in the face of the possibly severe impact of up-grade issues on business-related risks, it is very common for the documentation group to prepare several specific reports for management:

Impact/Risk Assessment; "delaying vs. deploying", "Actual Cost to Fix", etc. With input from the design and test groups, as they prioritize the various applications, this becomes a "tracking document" into the assessment phase.

Business Impact Analysis; "market presence", "competitive edge", etc. Again, this leads into the assessment phase a bit, but it will give an idea as to what the competition is doing about the problem. (An easy way to check this is to surf their web pages -- also making sure to update your own web page at the same time!) There is really nothing new about this.

Criticality to Normal Operations; Critical, Major, High, Low, Future Enhancement. These are usually handled by the design group(s) as a part of their prioritization planning.

3) Once the "complete" list is in from the design group's prioritization, immediately begin the assessment phase to determine staffing requirements. This will be input to the other groups that will have to test (test group), support (customer support, as well as field support), and document (documentation group) the new features. Again, do not overlook training issues.

IPAFT-R/S: Assess

During the assessment stage, we begin actually looking at the code and systems that need to be remediated. It may turn out that many of the components require no work to be done on them. However, a common problem is that the INPUT to them must still be checked. For example, a program may already be up-grade-compatible, but its input comes from an external source that is NOT controllable. This means that either an input filter program would have to be written to scan the unreliable input -or- the program itself would have to have the filtering logic put in.
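As a rough illustration of the filter approach, here is a minimal C sketch. It assumes line-oriented records that start with a two-digit-year date ("YY-MM-DD,...") and a 00-49/50-99 windowing rule; both the record layout and the windowing pivot are assumptions for illustration only. It reads the unreliable input on stdin, repairs or rejects each record, and writes clean records to stdout, so that neither the producing nor the consuming program has to change:

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical filter: reads "YY-MM-DD,..." records on stdin,
       expands the two-digit year with an assumed windowing rule, and
       rejects anything that does not parse.  Clean records go to
       stdout; rejected records go to stderr for later analysis. */
    int main(void)
    {
        char line[1024];

        while (fgets(line, sizeof line, stdin) != NULL) {
            int yy, mm, dd;

            if (sscanf(line, "%2d-%2d-%2d", &yy, &mm, &dd) != 3 ||
                yy < 0 || mm < 1 || mm > 12 || dd < 1 || dd > 31) {
                fprintf(stderr, "REJECT: %s", line);  /* un-reliable input */
                continue;
            }

            /* Assumed windowing rule: 00-49 => 2000-2049, 50-99 => 1950-1999. */
            int yyyy = (yy < 50) ? 2000 + yy : 1900 + yy;

            /* Re-emit the record with a four-digit year; the rest of the
               record (from the first comma onward) is passed through. */
            const char *rest = strchr(line, ',');
            printf("%04d-%02d-%02d%s", yyyy, mm, dd, rest ? rest : "\n");
        }
        return 0;
    }

Once debugged and tested, the same filter can simply be dropped between any other producer/consumer pair in the data flow.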

Assess: Process Aspects

The tools by which the assessment proceeds are probably already in place. The only difference is that the FOCUS changes to: is this component date-affected?

Code Reviews. A preliminary "reading" of the affected code will be beneficial in two ways: it identifies the specific functions/sub-systems that may or may not need to be remediated, -and- it also serves as part of the official review of the code later, during the release process from the design group.

Structured Walk-Throughs. This is especially important for critical and/or complicated sections of the code. Further, at each point, the question of testability should be addressed. The more critical the application, or the more complex the code & changes, the more testing (both white and black box) needs to be done.

Tools. Any new tools that will be needed must of course be developed in a timely manner so that they don't impact the design or test schedules - which is exactly the problem that they are trying to fix: working smarter, not harder.

Documentation Review. This includes all kinds of documents, from press releases (if applicable) to updated training manuals for customer-support. This will include reviewing the content of the affected documents, as well as user displays and the associated documentation for them. Also, reports and other output, including customer bills, etc, will need to be identified and marked for change.

Release Planning. This will need to be co-ordinated with the other efforts, especially if certain parts of the systems will be ready before others.

Configuration Management. Since the affected programs will have to be "duplicated", this will impact disk storage considerably. Further, in order to do side-by-side comparisons, it will be necessary to have the original and the modified code in clearly different directories. Also, since the various computers will need different clock dates on them at different times, it is essential to plan how to schedule the various, different activities that will all be demanding the limited resources available.

Data Analysis. This is always crucial. Usually this will involve either the creation of a NEW database (meaning one more thing to support), or updating/modifying an existing database; eg, adding new fields, or at least new SQL search queries, reports, etc. The latter means a DB conversion -- always "lots of fun". Regression testing (both in the DB design group as well as the integration test group) will be essential. Training and support docs will need close attention, since many screens may change.

Flow Analysis. Although this is not usually as central to the over-all effort as data analysis, it is still necessary to check "error returns", "exception handling", and any control blocks that have "fall through" logic in them. Since much of the up-grading will likely include new "if/then/else" or "switch" statements, this, coupled with the OLD code, can cause serious logic errors - even though the code will compile fine. This is especially true where the up-grading includes the NESTING of the new date-handling logic inside the existing programming logic. (A small example of this kind of fall-through error follows this list.)

Functional & Systems Analysis. Regardless of whether the traditional "data abstraction" approach or the newer "object-oriented" approach is used, it is still necessary to evaluate the changes from a functional point of view. A crucial change is in how the directory structure might be affected - again going back to Data Analysis. It's often easier to create a new directory that ONLY the new feature uses, and then allow it to use the other directories in the "usual" manner.

Prototyping. One way to "try out" a proposed up-grade fix is to take the existing code, simply "drop" the changes into it, re-compile, and see what happens. To a lesser extent, "modeling" will not be very useful other than to provide a "what-if" kind of view of the proposed changes.

Performance Analysis. This should at least be considered, since the new logic WILL introduce additional time into the processing of the information.

Document & Diagram It. (One picture is worth a thousand words, but when you have both!) Every time the code is changed, the opportunity to update the supporting documentation should not be ignored. The only thing worse than no comments is wrong comments. Even a hand-drawn diagram is better than none at all.

Etc. Debuggers, Test Equipment, etc. Again, these will need to be reviewed (as with any other tools). Also, there "are" off-the-shelf products that are designed to look at source code (in text mode) and try to find date-related variable names, functions, etc. (I only mention this since a few well-written scripts using plain-old text tools can do the same thing, and the cost of these programs is in the $5000-per-SEAT range.)
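As promised above, here is a small, contrived C sketch of the kind of fall-through error that flow analysis is meant to catch: the new two-digit-year handling is nested inside an existing switch, the code compiles cleanly, and the missing break silently hands control to the legacy branch. (The function and the windowing rule are made up for illustration.)

    #include <stdio.h>

    /* Contrived example of new logic nested inside old control flow.
       It compiles cleanly, but the missing "break" means the new
       century handling falls through into the legacy branch. */
    static int expand_year(int yy)
    {
        int yyyy = 0;

        switch (yy / 50) {
        case 0:                      /* NEW: 00-49 -> 2000-2049 */
            yyyy = 2000 + yy;
            /* missing "break" here: falls through! */
        case 1:                      /* OLD: 50-99 -> 1900-1999 */
            yyyy = 1900 + yy;
            break;
        default:
            yyyy = yy;               /* already a 4-digit year  */
            break;
        }
        return yyyy;
    }

    int main(void)
    {
        /* Expected 2007, but the fall-through silently yields 1907. */
        printf("expand_year(7) = %d\n", expand_year(7));
        return 0;
    }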

Assess: Procedural Aspects

To change or not? The fewer changes to the systems and programs, the better. Every change that is made introduces the possibility of problems. If possible, a set of common date routines should be written that can be used by ALL the applications. This way, if a flaw is found in their logic, then once it is corrected, most of the programs will begin working better. (A sketch of one such common routine follows after this list.)

Insertion of filters into the data flow. In many cases, if a component is already compatible, this won't need to be done. However, its input may come from an external (and possibly uncontrollable) source - this is especially true in the case of transactions and/or data that originate from an outside source; eg, the customers may be using another vendor's software. Inserting a filter between the two programs that simply checks and validates the input data means that neither program has to be changed. Also, once the filter has been debugged and tested, it can be used in the same way for other applications.

Now where did that DATA ITEM go? Again, even simple editor macros can be used to LOCATE the change-related variables in a program. Fixing them will be the challenge. The recommendation (since this IS the assess phase, NOT the fix phase) is: analyze the common aspects of each of the routines that will have to be changed, and see if a small number of common routines can be put together to accomplish the task. Again, if these new data-specific programs have bugs in them, then once they are located, at least the re-work is reduced by having a COMMON set of routines, as much as possible.

Keep a change diary. It is important, with so many massive changes being made to so many modules, programs, systems, etc, that there be a clear PAPER TRAIL. I guarantee that a year and a half from now you will NOT remember whether you checked the XYZ program for impact or not. Also, as it is VERY likely that there will be new people maintaining those programs, this will provide valuable "legacy" information for them as well.

Debugging. Again, as with the up-grading efforts in general: if you can have a common set of routines, test procedures, tools, debug scripts, etc, then you have already reduced the amount of work to do the testing - as well as the "training" time to learn how to use the tools, and the familiarity with how the results "should" look. And of course, if there are bugs or problems with the test tools, then the effort to fix 12 scripts is much less than for 120 scripts.
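Here is a sketch of what one such common date routine might look like in C (the routine names and the choice of checks are illustrative, not prescriptive); the point is that every application calls the same routine, so a flaw in the logic is corrected in exactly one place:

    #include <stdio.h>

    /* One shared date module for ALL applications: if a flaw is found
       here, it is corrected once and every caller benefits. */

    /* 2000 IS a leap year (divisible by 400) - a classic trap. */
    static int is_leap(int year)
    {
        return (year % 4 == 0 && year % 100 != 0) || (year % 400 == 0);
    }

    static int is_valid_date(int year, int month, int day)
    {
        static const int days_in_month[] =
            { 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31 };
        int max_day;

        if (month < 1 || month > 12 || day < 1)
            return 0;
        max_day = days_in_month[month - 1];
        if (month == 2 && is_leap(year))
            max_day = 29;
        return day <= max_day;
    }

    int main(void)
    {
        printf("2000-02-29 valid? %d\n", is_valid_date(2000, 2, 29)); /* 1 */
        printf("1900-02-29 valid? %d\n", is_valid_date(1900, 2, 29)); /* 0 */
        return 0;
    }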

Assess: Documentation

Each document that is affected will have to be reviewed and the necessary changes indicated - and these changes reviewed by the end-users of the document. Wherever possible, documents (especially those for field-support, customer-service, and system administrators) should have a clear and easy-to-use format (preferably on-line, preferably with an index). Make sure "contact" numbers are up to date as well. As to the types of documentation listed below, I have tried to make this list as complete as possible.

Document Numbers. This includes not just "bumping" the version number by one, but also the revision history. In some cases, it may be necessary to have separate versions of the program released (one prior to the new release, and the other for use after). In this case the so-called "generic problem" problem is essentially doubled.

Product Literature. Always nice to re-assure the customer that everything is under control. And to high-light the up-grade efforts that are being taken. (Quality just isn't Job One, Quality is the ONLY product.)

Web Pages. Given the new, on-line access that customers have to information, it might be worthwhile for the (considerable) efforts being expended on the up-grade to be high-lighted.

Programmer's Notes. Again, these will make excellent tutorials in the future for new-hires. And (by the way), they will be very, very useful a year and a half from now, when trying to figure out what was changed, and why a given function was "retired" even though it had been working fine up until last week.

Administrator's Guides. These (along with the customer-support and field-support manuals) are VITAL to be accurate and complete. Also, make sure the contact list is up to date. (It probably won't be necessary to list beeper numbers, since we will all probably be up here on December 31st anyway, checking "one more time").

Customer Support Manuals. This includes both the training guides, as well as any "at the help desk" guides that they use to do their job.

Field Support Manuals. This must necessarily include the technical guides that they specifically use, as well as any of the other (above) literature that they happen to have with them.

The UP-GRADE Documentation. The up-grade documentation is probably the most important aspect of the "legal" side of ANY new release. This documentation indicates a clear INTENT to provide a solution to problems. So even the fact that we are documenting the actions that we took is, itself, important.

IPAFT-R/S: Fix

Fix: Process Aspects

Scheduling. A major part of the headache is simply trying to get time on the test equipment. In some cases the designers can "roll the clock" on their own workstation and do some of the preliminary testing that way.

Configuration Management. The main headache here is that all of the code that is to be changed will need new "branches" in the version-control tree. For example: "We had planned that "XYZ" would be changed to fix bug #123, and that was from version 2.3 to 2.4, but then we decided to go ahead and plan to fix the up-grade stuff as well. After we started looking at the code, we realized that we would have to make a lot of other changes to make it work for the up-grade. So, version 2.5 will have the bug fix and also the up-grade stuff that WILL work. And then we are moving the changes that were planned for version 3.0 up anyway, and will fix all of that stuff at that time. Thus, 2.3 -> 2.4 -> 2.5 -> 3.0 -> 3.1, whereas we had originally planned: 2.3 -> 2.4 -> 3.0." This is probably a worst-case scenario. In general, the up-grade changes will be pretty minor and the most time-intensive effort is simply the testing.

Exit Criteria. The exit criteria determine the conditions by which we decide that the design is finished. These are developed by the design groups as their measure of quality. This includes a certain level of functionality, a limited number of open trouble reports (as well as, "usually", zero critical or major trouble reports), and the completion of some acceptable level of design-oriented tests.

Entrance Criteria. The entrance criteria determine the conditions by which we decide that the product is ready to test. These are developed by the independent test group as their measure of quality. These include such things as documentation, the availability of designers for hands-on training, load tapes, and a basic "white-box" sanity test to demonstrate basic functionality. Depending on the exact nature of the up-grade, the entrance criteria may be somewhat more relaxed than usual - even though the up-grade itself IS what the feature is all about. This is a judgement call and usually depends on how wide-spread the up-grade's effects were seen to be in the ASSESSMENT phase.

Fix: Procedural Aspects

Simply speaking, the main procedural concerns are the handling of the special dates. (Discussed in Section 5.5.2.1, below.) However, the key points to address during the fix-ups are:

Build unit-test drivers whenever possible. When creating new functions, or simply making changes to the existing code, it is important to build, and RETAIN, test drivers for each of these. For example, if you are creating a new function, then you should create a test program that exercises the function and self-checks the results; eg, using the ASSERT function (in C), or similarly installed specific checks. Of course a LOT of that can be handled using CONDITIONAL blocks, so that IF they are seen to be service-affecting, they can be disabled and only re-enabled when testing. Thus, the test-driver program could contain line after line of such statements that essentially validate EACH special data value, condition, or specific feature condition. (A minimal sketch of such a driver follows this list.)

New Algorithms or Access Methods, etc. All should be checked as thoroughly as possible, using stubs from the working code to simulate interactions with the rest of the system. Nothing new here.

And of course "and the rest": User Interfaces, Formatting & Reporting; File I/O & Databases; Sort/Merge Operations; Data & Applications; Interactions between Interfaces; etc, etc.

And again, you must have interface diagrams showing the data and processing flow - and then (having identified the interfaces), you have to have a plan of attack. In some cases, you may have to fix your system because someone else's is broken.
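As referenced in the list above, here is a minimal sketch of such a self-checking test driver in C: assert() validates each special value line by line, and a conditional block (the RUN_SELF_TEST macro name is hypothetical) lets the checks be compiled out when they would be service-affecting. The expand_year() stub and its 00-49/50-99 windowing rule are assumptions standing in for the real function under test:

    #include <assert.h>
    #include <stdio.h>

    /* The function under test - assume it comes from the common date
       module (here a stub so the driver is self-contained). */
    static int expand_year(int yy)
    {
        return (yy < 50) ? 2000 + yy : 1900 + yy;  /* assumed windowing rule */
    }

    int main(void)
    {
    #ifdef RUN_SELF_TEST        /* hypothetical conditional block: enable  */
                                /* only when testing, disable in service   */
        /* Line after line of self-checking statements, one per special value. */
        assert(expand_year(0)  == 2000);
        assert(expand_year(49) == 2049);
        assert(expand_year(50) == 1950);
        assert(expand_year(99) == 1999);
        printf("expand_year: all unit checks passed\n");
    #else
        printf("self-test disabled (compile with -DRUN_SELF_TEST)\n");
    #endif
        return 0;
    }

The point of RETAINING the driver next to the code is that the same checks can be re-run, unchanged, every time the function is touched.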

Fix: Documentation

Now we begin the effort of fixing each of the documents. In reality, these changes are in many cases pretty trivial -- especially when compared to creating a new document from scratch. However, there will be a multitude of small changes that affect EVERY document associated with each program/system to be remediated. Next, for each document an "impact checklist" should be developed. The following is taken from Section 5.4.3 (Assess: Documentation), above. The idea here is to analyze the document from each of the following points-of-view. That is, we re-read the entire document for the CONTENT aspects of each of the following topics: Document Numbers, Product Literature, Web Pages, Programmer's Notes, Administrator's Guides, Customer Support Manuals, Field Support Manuals, The Up-grade Documentation. For each pass through the document (ie, for each of the above topics), we then make sure that we perform the following:

a) Is the topic correctly handled in the document? If not, we fix it.
b) Are references to this topic (eg, document number, web-page ref, etc) correct for this document as viewed by EXTERNAL documents that reference this one?
c) Is the language, use of terms, AND style (font, bold, italic, etc) consistent?
d) Check all references to storage locations (web pages, disk-drives, etc).
e) Check ordering information for both internal and external customers.
f) Check contact information: web-page info (especially the web-master), 1-800 numbers, internal telephone & e-mail numbers, etc.
g) Recommend support teams for the various product changes.
h) Distribute internal documentation to all customer-support and field-support people.
i) Distribute on-call numbers (phones, beepers, e-mail, etc).
j) Plan new additions to the problem-reporting databases.
k) Distribute results from design & independent test as each remediated component is "fixed, tested, and released".

In addition, of course (as mentioned above), there will be a myriad of new reports to be generated for management at all levels (project, design, test, customer-support, field-support, etc). This is especially true for any new filter programs, batch processing programs, or administrative procedures that are required for the on-going support of, or the changes needed to actually accomplish, the up-grade-related fixes.

IPAFT-R/S: Test

There are of course MANY kinds of tests. These are usually driven by the FOCUS of the group running them. For example, the Design Department will often design WHITE BOX tests that examine specific internal variables, function calls, traces, etc. The Test Department, meanwhile, will be focusing on BLACK BOX testing - making sure that the thing LOOKS like it works and doesn't do odd things from a customer/user point of view. And of course the support departments (field support, customer support, trouble desk) will go through their paces to verify that the updated manuals and such are actually compatible with what shows up in the new version of the product.

Test: Process Aspects

As mentioned before, while the new design (and up-grading) is going on, the test case writing is going on, as well as any new tools development. However, once the design group has "blessed" the product by announcing that it has passed the exit criteria -- then it is time to test. The following summarizes the testing process (very generic here).

1) The system under test is "cleaned" of all of the left-overs from the design efforts. For example, if the product includes new device drivers, then these should be REMOVED from the System Under Test (SUT) - and re-installed via the actual load tape that is provided as part of the entrance criteria.

2) The installation is performed. At this point, we are literally testing the documentation. Is it clear? Is it correct? We must be VERY careful to be in dumb-monkey mode at this point -- we can't use our internal knowledge of the design (from all of the e-mails that we have been reading, the meetings we have been attending, and the "down-the-hall" discussions that we have been having with the very people designing the product). The customer will not have anything to go on other than what's in the box.

3) We begin the make/break testing as prescribed in the Entrance Criteria.

4) We write the first test report: the entrance criteria have been met and ## new trouble reports have been written (no critical TR's and a limited (if any) number of majors).

5) If the Entrance Criteria are not met, a meeting is held, and any/all of the following may be decided:
a) Ignore it for now, design will fix it later. TR entered into the database.
b) Works As Designed (WAD). Update documentation, etc.
c) Continue testing; design will fix and re-release in the next load-build.
d) Must stop testing. The bug must be fixed before we can resume. Go test something else.

6) Testing in earnest begins.

7) As bugs are found, they go through the same cycle as during the entrance criteria validation phase.

8) As fixes are released, they are re-tested. TR's are closed.

9) Regression Testing is performed. (Actually this can be run at just about any stage.) The scaled-down tests from the previous release are run and any "side effects" on the previously released features are FLAGGED. Finally, regression test is completed.

10) Finally, the final load-build is done. A "clean" install is done, and the system is re-tested as appropriate, possibly including: a full regression test, a 100% re-test of all test cases, intensive white-box testing of any critical/major TR's that were uncovered, break-and-destroy tests, etc, etc, etc.

11) Customer Acceptance Testing is performed on the version that is now essentially a beta version.

12) The final test report is written. The product is pronounced "fit for use".

Test: Procedural Aspects

Make/Break Tests. As mentioned previously, a set of basic "sanity checks" will be put together to form the testing part of the "entrance criteria". Further, they can be used to characterize the basic operation of the system after boot-up. If any of these "bread and butter" tests fail, then there is little point in continuing with the rest of the tests until the system is fixed. For example, if basic dialog boxes or pull-down menus are not working properly in a GUI, then there is little point in continuing any of the tests dealing with dialog boxes and pull-down menus; there MUST be an underlying cause (a code fragment, an actual bug, a system incompatibility - something). In any event, as each of these critical problems is found, make/break testing continues onward so as to locate any other major problems. Of course, in some cases (eg, the install program hangs up) testing has to halt until the blockage is cleared.

Negative Tests. One of the critical sets of tests that need to be run are the so-called "negative tests". These are often viewed as un-necessary, since we already know that if we put in bad data it "won't work". However, with respect to testing in general, negative tests will reveal MORE about the system than a positive (ie, "normal") test. For example, if the user is prompted for input and you VERY carefully type in what seems reasonable, then it is very likely that it will work. However, if you type in three garbage digits, and then hit break repeatedly, and then the tab key, and the back-tab key, and then go out of the dialog box, and click on another field, and then come back in and type valid data: more often than not, the system will lock up. And the response, "well, it's unreasonable to expect it to work in that case", is not acceptable -- since (in this particular case) the actual fact was that there was a MISSING initialization statement for the dialog box in question. "A positive test only tests what we expect to work, not what we expect to fail."

Boundary Tests. Another (obviously) critical area is that of boundary values. For example, it may be that a range of values was thought to be between 00 and 99, and now it may well be that the range could exceed 1000! Also, since many older systems relied upon "tight storage" techniques, their logic will undoubtedly have many "bit-saving" ideas embedded in them. So, it is important to put in REALLY un-reasonable numbers. These should be screened as "un-reasonable", or at least issue a warning - but they should NOT blow up the system! (See the sketch after this list.)

Local Integration Tests. Since it will probably be necessary to run and re-run tests, it is also a good idea to build a "mini" integration and systems test package. This would include essentially "black-box" test cases, but run in the design group as a sort of "sanity check" on the over-all product. This is an especially good idea where the product must interact with any modules that are other-vendor supplied (and, as such, the development group does not have direct control over their up-grade-compatibility).

Regression Tests. To a certain extent, all of the new tests on the updated system are in fact "negative impact" or regression tests. However, this becomes critical when we consider "historical" or "archived" data. This is especially true for any kinds of "year-to-date" reports, or those programs where the historical data can be retrieved on-line. Further, any kind of store/retrieve procedure should be carefully tested, since these will be your last line of defense when having to "re-build" databases so that you can recover after a data-related crash. Many, many companies never find out that the RESTORE function (the command, the physical medium, or the way the function works) does not in fact work - until they try to recover something. Many such systems are not thoroughly tested (even though they are vendor-supplied with the operating system!).
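As mentioned under Boundary Tests above, here is a small C sketch of boundary and negative checks against a hypothetical validation routine for a field originally assumed to hold 00-99: values at, just inside, and far outside the range are all exercised, and the requirement is that out-of-range input is rejected cleanly rather than blowing up. (The routine name and range are assumptions for illustration.)

    #include <assert.h>
    #include <stdio.h>

    /* Hypothetical validation routine for a field that was originally
       assumed to hold 00-99; it must reject (not crash on) anything else. */
    static int accept_count(long value)
    {
        return value >= 0 && value <= 99;
    }

    int main(void)
    {
        /* Boundary values: at and just beyond both edges of the range. */
        assert(accept_count(0)    == 1);
        assert(accept_count(99)   == 1);
        assert(accept_count(-1)   == 0);
        assert(accept_count(100)  == 0);

        /* Negative/"un-reasonable" values: must be screened, never blow up. */
        assert(accept_count(1000)     == 0);
        assert(accept_count(-999999L) == 0);

        printf("boundary and negative checks passed\n");
        return 0;
    }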

Test: Documentation

At this point, the main "testability" aspects of the documentation are the answers to the following questions:

Completeness: Have all the documents relevant to an application/system been updated? Have all the necessary changes been done/reviewed/signed-off? Have the version information and revision history been updated?

Fit for use: Are the changes not only correct, but understandable and as consistent as possible? Have all of the customer's requirements been met? Platform requirements updated? Installation guides? Training guides: external customer? customer-support? field-support? Press releases, web pages, product literature, etc, etc, etc, etc, etc.

IPAFT-R/S: Release & Support

At this point, the documentation is reviewed, the field-testing (and often customer acceptance testing) occurs, and the field-support and customer service people have one more feature to support. A final "post-mortem" review should be done, as well as the final sigh of relief. Until the next project...

Post Mortem; What worked? What didn't work? How can we work smarter?
New Documentation is shipped.
New Product is shipped.
New Support Documentation ready for field-support/customer service.
Pagers are issued.
On to the next!

Release & Support: Process Aspects

The primary goal of the release process is the on-going support of the new, improved (and hopefully up-grade-compatible) products. The following are involved:

Processing orders, customer requests, complaints.
Testing problems, creating patches, testing patches, releasing patches.
Configuration Management of the new generic loads, test data, etc.
Product & document upgrades are created & shipped.

Release & Support: Procedural Aspects

Mainly this will involve keeping a close eye on the on-going testing between now and the release. As each new set of tests is run (on the on-going changes that are not necessarily a part of the up-grade), any "peculiarities" are noted. Again, this means a small mix of up-grade-specific test cases, as well as adding an "Up-grade Issues" section to the analysis document for each new feature - and hopefully, the annotation will be "not applicable" for each of those. Further, it is of course important to retain "snap shots" of the configurations that were used to test for up-grade compliance:

Version & control information for:
a) The Operating System and other platform systems,
b) Hardware & equipment configurations,
c) The application software modules,
d) The test data that was used,
e) Test procedures, documents, results, trouble reports,
f) Test scripts, equipment setup scripts, etc.

Any of the ISO-9000 (or other SWQA-required) documents.
Test Results, Test Reports (Test Records in general).

Release & Support: Documentation

Again, the usual "on-going" maintenance issues & procedures apply: Updated & shipped? Updated on the web? On product literature? Consistent updates across ALL documents? Technical reviews were done? And then: On to the next (but not right now; hopefully;).