Defect/Bug Reporting

Here is what to keep in mind as you write bug reports that make debugging easier.


 

Topics this page:

  • Word Definitions
  • App Versions
  • Tester Authority
  • Run Conditions
  • Steps to Reproduce
  • Analysis
  • Severity/Priorities
  • Assigned To
  • Summary/Headline
  • Defect Status
  • Packages/Tools


    Word Definitions: The Anatomy of a Problem Report

      Certain words mean different things to different people and organizations, so here are definitions of some words used during the testing of systems and applications. Differences in meaning can become emotional when there is a misunderstanding about the concreteness and formality of a discussion.

      • AUT is an acronym for Application Under Test.
      • SUT is an acronym for System Under Test.
      • CUT is an acronym for Component Under Test.

      • AD1 is an acronym for Automation at Day-1, test automation assets developed in parallel with feature development to verify when DEV is "Done" with each feature.
      • BRT is an acronym for Basic Regression Test, a sanity test of integration (exercising the mainstream E2E workflow) run by DEV before releasing the build to QA.
      • BAT is an acronym for Build Acceptance Test, a basic, crucial, short set of stand-alone component regression tests run as part of the build process to determine whether the build passed or failed.
      • IBT is an acronym for Integrated Build and Test, which automatically builds and deploys the product, then tests it with the BAT suite.
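
      The IBT flow above is essentially "build, deploy, run the BAT suite, report pass/fail". A minimal driver sketch follows; the make targets and pytest-based BAT suite are assumptions for illustration, not anything defined on this page:

          import subprocess
          import sys

          # Hypothetical commands; substitute your own build, deploy, and BAT steps.
          STEPS = [
              ("build",  ["make", "build"]),        # compile the product
              ("deploy", ["make", "deploy"]),       # install into a test environment
              ("BAT",    ["pytest", "tests/bat"]),  # run the Build Acceptance Test suite
          ]

          def run_ibt():
              """Return True only if every step succeeds, i.e. the build is acceptable."""
              for name, cmd in STEPS:
                  if subprocess.run(cmd).returncode != 0:
                      print(f"IBT failed at step: {name}")
                      return False
              print("IBT passed: build is acceptable for hand-off")
              return True

          if __name__ == "__main__":
              sys.exit(0 if run_ibt() else 1)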

      Test results from running (executing) test cases are evaluated using a verdict. In test management systems, a verdict is a property of a test case within its test suite. Verdict values focus on whether test results give evidence for the correctness of the SUT:

      • Inconclusive = test results do not conclusively prove that a Pass or Fail verdict can be given. An example is when a test run stopped earlier than planned.
      • Test Error = the test case or test system itself cannot properly examine the SUT. An example is when a test script encounters a value not anticipated by the test case.
      • Fail = test results conclusively give evidence that the SUT does not meet the requirements and specifications examined by a specific valid test case.
      • Pass = test results conclusively give evidence for the correctness of the SUT examined by a specific valid test case.
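
      In a test management tool, these verdict values might be modeled as a simple enumeration. The sketch below is illustrative only; in particular, the roll-up rule (worst verdict wins) is an assumption, not something defined on this page:

          from enum import Enum

          class Verdict(Enum):
              """Verdict values as defined above."""
              PASS = "pass"                  # evidence of correctness
              FAIL = "fail"                  # evidence the SUT does not meet requirements
              INCONCLUSIVE = "inconclusive"  # run ended before a conclusion could be drawn
              TEST_ERROR = "test_error"      # the test itself could not examine the SUT

          def overall(verdicts):
              """Roll up per-step verdicts into one verdict for the test case,
              assuming the severity order PASS < INCONCLUSIVE < FAIL < TEST_ERROR."""
              order = [Verdict.PASS, Verdict.INCONCLUSIVE, Verdict.FAIL, Verdict.TEST_ERROR]
              return max(verdicts, key=order.index)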

    • An incident is an event occurring during testing that requires attention (investigation). [IEEE 1008] Incidents do not necessarily justify the filing of a formal Problem Report. However, information describing incidents is often attached to a Problem Report.
      Examples: "When a word is copied from a Microsoft Word 2003 document and pasted into a text field, the AUT does not recognize the entry." or "Reports are not being produced." or "Users must wait a minimum of 5 minutes for logins to authenticate."

    • A problem refers to one or more incidents with an unknown underlying cause. This definition is used by ITIL.
    • A Known Error is when the root cause of an incident is known and a temporary workaround or permanent alternative has been identified.
    • A failure is the misbehavior of a program or process that results in the AUT ending up in a failure mode under a given set of conditions.
      Examples: "The AUT does not recognize whitespace (a space character) at the beginning and end of text as extraneous and returns a soft error about text which visually appears correct to users."

      • The failure mode of an incident is the physical or functional manifestation of a failure.
        Examples: A system in failure mode may be characterized by slow operation, incorrect outputs, or complete termination of execution. [IEEE 610]

        • A soft error does not cause the AUT to terminate. The typical outcome is that processing continues after an error message is issued. Optionally, a prompt may be issued so that the tester can decide whether to continue processing.

        • A hard error causes the AUT to terminate. Also known as an abend (abnormal ending) or a “Show Stopper” for the testing effort. Such defects cannot typically be reported by the program which contains them.

      • The failure condition is what triggers a failure to occur. An example in functional testing is the input of a particular value or some specific user action such as clicking on an errant button. An example in load testing is the incremental number of simultaneous users or data items which causes the system under test to fail.

        • A bug is a slang term for the cause of faults. The term dates to the early history of computers, when a moth found inside a computer was identified as the cause of a fault in its circuits.

    • An issue is a point of disagreement requiring clarification and discussion. Issues are often about differences in mindset and philosophy that may be codified in a corporate Test Policy issued by management.
      Examples: "It's too much to ask users to remove the invisible space character automatically inserted when copying a block of text within Microsoft Word 2003." or
      "It is unacceptable for users to wait over 5 minutes to log in."

    • A contention is an assertion, such as a conjecture about the possible cause of a problem or a remedy to that problem. The word "should" usually appears in contentions.
      Examples: "Text fields such as UserID should be stripped of whitespace (spaces, tabs, etc.) before being matched against the database." or
      "The authentication module should not be single threaded."

      I avoid using this word because it could be confused with resource contention between systems in load testing.

    • A concern is a non-specific desire for improvement in, or for the avoidance of threats to, the AUT or a work process related to it.
      Examples: "I am concerned that login is too slow for our users".

    • A defect is a specific deviation from expectations that usually requires corrective action to the implementation (programming code) or design (requirements) of the AUT. A formal Problem Report or ticket is usually raised for each defect uncovered. A defect may be manifested in several occurrences (incidents). A defect statement should ideally be descriptive of the functionality not provided (or that should not be provided) rather than prescriptive.
      Examples: "There is no mechanism for controlling the verbosity of logging at run-time."

    • A fault is less specific than a defect in the AUT. Like a geologic fault line which may or may not be the location of an earthquake, a computer system fault may indicate a potential for error rather than the existence of a defect in the product.
      Examples: "The authentication module writes 250 lines into its std log detailing each of the 10 calls made to the authentication server per transaction. This slows processing time and can cause disk space overflow under load."
     

     

    Verification vs. Validation

      Testing is a process of reducing risk by comparing "what is" against "what should be".

      Software verification is often confused with software validation. The difference between verification and validation:

      • Asks: Verification asks "Are we building the product right?" (does the software conform to its specification?). Validation asks "Are we building the right product?" (is the software doing what the user really needs/wants?).
      • Focus and basis: Verification verifies that the final product satisfies or matches the original design (from low-level engineering). Validation validates that the product design satisfies the intended usage (from high-level marketing).
      • Conclusion from the Capability Maturity Model (CMMI-SW v1.1): Verification concludes that the work products properly reflect the requirements specified for them. Validation concludes that the product, as provided, will fulfill its intended use.
      • The aim of testing: Verification finds errors introduced by an activity, i.e. checks whether the product of the activity is as correct as it was at the beginning of the activity. Validation declares whether the product of an activity is indeed what is expected, i.e. whether the activity extended the product successfully.

      In the electronics industry:

        Near the end of the Prototyping stage, after engineers create actual working samples of the product they plan to produce, Engineering Verification Testing (EVT) uses prototypes to verify that the design meets pre-determined specifications and design goals. This is done to validate the design as is, or identify areas that need to be modified.

        After prototyping, and after the product goes through the Design Refinement cycle when engineers revise and improve the design to meet performance and design requirements and specifications, objective, comprehensive Design Verification Testing (DVT) is performed to verify all product specifications, interface standards, OEM requirements, and diagnostic commands.

        Process (or Pilot) Verification Test (PVT) is a subset of Design Verification Tests (DVT) performed on pre-production or production units to verify that the design has been correctly implemented into production.


    Application Version, Release, Build, Handoff

      The Application-under-test (AUT) is the software application that is currently being tested. But which part (components or features) of the AUT is being tested?

      Which release and build and handoff?

      • A Build is a single set of revisions to the AUT, created to correct defects in, or add new functionality to, a previous revision.
      • Developers may create several formal or informal builds before issuing a handoff of the AUT for testing.
      • Testing may be conducted on several handoffs before making a conclusion about a formal release of the AUT for duplication and mass distribution.

      What user work process does the AUT seem fit to support? To whom should it be delivered (the next test group, Business Users, etc.)?

      Specify the specific version of each component under discussion. Also specify the version of the requirements document which serves as the basis for determining whether the application is behaving as expected.
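
      As a rough illustration (not a schema from this page), a defect record might pin these versions down with fields like the following; all names and values are hypothetical:

          # Hypothetical version fields attached to a Problem Report.
          problem_report_versions = {
              "aut_release":        "2.3",           # formal release line under test
              "aut_build":          "2.3.0117",      # exact build in which the defect was observed
              "handoff":            "H-04",          # developer hand-off that delivered the build to QA
              "component_versions": {"auth": "1.8.2", "reports": "0.9.5"},
              "requirements_doc":   "REQ-2.3 rev C", # spec version used to judge expected behavior
          }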



      "Experience is something you don't get until just after you need it."

      “Program testing can be used to show the presence of bugs, but never to show their absence.” —Edsger W. Dijkstra

     

    Tester Authority

      What is the responsibility and authority assumed by testers? This may be indicated by the word the tester uses to describe (record) his/her conclusion about the AUT:

      • Approved (after approval) - the AUT satisfies the standards of the person or organization granting the approval.

      • Authorized (after authorization) - the AUT has been cleared for use by a person with authority.

      • Verified (after verification testing) - the AUT satisfies some quality standard.

      • Certified (after certification testing) - the AUT has passed a defined series of tests. Several experts in the testing field have advised testers to avoid using this formal term because it can be misinterpreted to mean that the AUT is “fault-free” or that the organization performing the certification accepts some liability.

      • Accepted (after acceptance testing) - the AUT satisfies some set of requirements, as in “Crest has been Accepted by the Dental Association”.



     

    Run Conditions

      Define the conditions of the test run. Examples of this are:

      • Platform characteristics (CPU type, amount of RAM, etc.).
      • Type of test run (maximum values, load test, time-out, etc.)
      • Focus of test run (negative test, etc.)
      • Date and time settings (all machines)
      • The people involved (Developer, Tester, etc.)

      This gives debuggers the information they need to narrow down the cause of the problem. Some problems take months to solve, bouncing from one person or team to another. So the more information the better.
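
      For illustration only, a small helper could snapshot these conditions automatically and attach them to the report; the field names are assumptions, not a standard:

          import datetime
          import platform

          def capture_run_conditions(test_type, focus, people):
              """Collect the run conditions listed above into a dict that can be
              attached to a Problem Report. All field names are illustrative."""
              return {
                  "platform": {
                      "cpu": platform.processor(),
                      "machine": platform.machine(),
                      "os": platform.platform(),
                  },
                  "test_type": test_type,    # e.g. "load test", "maximum values", "time-out"
                  "focus": focus,            # e.g. "negative test"
                  "timestamp": datetime.datetime.now().astimezone().isoformat(),
                  "people": people,          # developer, tester, etc.
              }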



     

    Steps to Reproduce

      Write step-by-step instructions to reproduce the bug. Number each step. Example:

      1. As user ___ from an NT4 Pro client, sign in to the ___ server with a blank database.
      2. Click menu item ...
      3. Select From any account.
      4. Enter ...
      5. Select Hungary.
      6. Click Next.
        Notice that ...

      Start each step with an active verb such as "click OK", "select checking account", "type value 99 in the Transaction Amount field", etc. The last step should reveal the concern stated in the Summary sentence ("Temp delay EOT"). Note whether the "Temp delay" screen results in an End of Transaction (EOT) or an End of Session (EOS), which causes another logon.

      To avoid needless repetition, produce a Test Specification document that defines the meaning of special words.

      If a special program not used by typical users is needed to view results (such as a log viewer), note its use.



      No one is listening until you make a mistake.

     

    Analysis of Symptoms

      Explore the extent of a bug's impact. State the results from the various conditions under which the bug was found/verified. Examples:

      • This occurred for all currencies.
      • This did not occur for other countries (Spain, etc.).
      • This occurred for account type A and B but not C.
      • This occurred only for users with guest rights, not users with admin rights.
      • This occurred only when values have not been specified. (Populated fields are OK)
      • This occurred only for values loaded with the batch back-end process. (Values entered interactively using the GUI are OK).
      • This occurred for all functions using the underlying subroutine xyz.

      Other examples:
      • different routes through the application to the same point of failure (pulling down a menu vs. pressing a shortcut key),
      • similar functions in the same application,
      • different browsers,
      • different software,
      • different hardware configurations,
      • different locales,
      • different security contexts,
      • previous versions,
      • different run dates (accounting cycle, etc.).
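
      One way to probe that extent systematically (a sketch only, not a prescription from this page) is a parameterized test matrix; convert_amount below is a hypothetical stand-in for the real AUT entry point:

          import pytest

          def convert_amount(amount, currency, account_type):
              """Hypothetical stand-in; replace with a call into the real AUT.
              Here it simulates the observed defect for account types A and B."""
              return None if account_type in ("A", "B") else amount

          @pytest.mark.parametrize("currency", ["USD", "EUR", "HUF"])
          @pytest.mark.parametrize("account_type", ["A", "B", "C"])
          def test_extent_by_condition(currency, account_type):
              result = convert_amount(100, currency, account_type)
              assert result is not None, f"fails for {currency} / account type {account_type}"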



     

    Severity and Priority Levels

      Indicate the impact each defect has on testing efforts or on users and administrators of the application under test. This information is used by developers and management as the basis for assigning the priority of work on defects.

      A sample guideline for assignment of Priority Levels during the product test phase includes:

      1. Critical / Show Stopper - an item that prevents further testing of the product or function under test. No workaround is possible. Examples of this include an installation process which does not load a component; a GPF (General Protection Fault) or other situation which freezes the system (requiring a reboot); or a missing menu option or security permission required to access a function under test.

        Within a production test context, this category could also include incorrect financial calculations and even cosmetic items.

      2. Major / High — a defect where a function does not work as expected/designed or causes other functionality to fail to meet requirements. The workaround may be to reboot the system or run a fix-up program (a hassle). Examples of this include inaccurate calculations; the wrong field being updated; the wrong rule, phrase, or data being retrieved; an update operation that fails to complete; slow system turn-around performance; or a transaction journal record which fails to occur as expected.

      3. Average / Medium — annoyances which do not conform to standards and conventions. Easy workarounds exist to achieve functionality objectives. Examples include incorrect/missing hot-key operation; an error condition which is not trapped; or matching visual and text links which lead to different end points.

      4. Minor / Low — cosmetic defects which do not affect the functionality of the system. Examples of this include misspelled or ungrammatical text; inappropriate or incorrect formatting (such as text font, size, alignment, color, etc.); or inconsistencies between the product and documented text or formatting.

      5. Enhancement — Additional features that would improve the product for users, administrators, or other stakeholders.

      6. Emergency is a term defined in ITIL.

      Management makes the decision whether an application should be shipped in light of the number and types of defects open.
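
      For illustration, these levels might be encoded in a defect tracker along these lines; the escalation rule at the end is an assumption, not a recommendation from this page:

          from enum import IntEnum

          class Priority(IntEnum):
              """Priority levels matching the sample guideline above (1 = most urgent)."""
              CRITICAL    = 1   # show stopper: blocks further testing, no workaround
              MAJOR       = 2   # fails requirements; painful workaround (e.g. reboot)
              AVERAGE     = 3   # annoyance; easy workaround exists
              MINOR       = 4   # cosmetic; functionality unaffected
              ENHANCEMENT = 5   # improvement request rather than a defect

          def escalate_immediately(p):
              """Illustrative rule only: treat Critical and Major items as urgent."""
              return p <= Priority.MAJOR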



      Success always occurs in private, and failure in full view.

     

    Owner / Assigned To

      Understand the organizational structure of those who will work on resolving your bugs. Consider entering the name of the person most likely to work on each particular bug first.

      Most organizations that value fast turnaround of bug fixes prefer this approach.

      However, some (usually complex) organizations prefer all bugs for a project to flow through a coordinator or manager who then assigns the individuals to work on each particular bug.

      For example, if the development team is divided between people who write reports and people who write GUI code, analyze bugs so that you can specify who should review each specific bug report.

      • "GUI dialog xxx does not ... "
      • "REPORT yyy does not ... "
      • "REQUIREMENT yyy.xx does not define ... "

    This avoids the common situation of a bug report being closed by one person for a single sub-item while additional sub-items go unresolved.

     




    Summary / Headline

      Follow these general rules when crafting statements:

    • Be specific. Don't use vague phrases such as "problems with", "is incorrect", or "issue with". State the expected behavior which did not occur (such as "after pop-up ___, ____ did not appear") and the behavior which occurred instead.

    • Use present or past tense. Say "appears" instead of "will appear", which may be confusing to readers.

    • Don't use unnecessary words such as "apparently". State observed facts.

    • Don't add multiple exclamation points!!!! (We do want to help.) End sentences with a period.

    • DON'T USE ALL CAPS (That's the same as shouting.) Format words in upper and lower case (mixed case).


     

    Status of Incident Resolution

      A defect item attains these status states:

      1. A new defect item is created with an Entered, Started, or Submitted status. NOTE: Avoid using the word "New" for this state because newness can be calculated from the date created.

      2. A test supervisor may confirm the need for the bug item and place it in Assigned or Open status.
        Alternatively, an item may be placed in Evaluation status while it is assigned for analysis by a developer or whoever "owns" the bug. Some organizations also add an intermediate "Prioritized" status.
      3. A tester may place into Withdrawn status items which duplicate another defect item already reported or which result from a tester's misunderstanding about how the application should operate.
      4. Management may allow the application to be shipped “as-is” with the defect by marking certain defect items as:

        • Refused/Rejected (developers do not recognize the reported item as a defect); some organizations automatically reassign such items to the submitter's boss;

        • Waivered (defect is accepted, but no action will be taken); or

        • Forwarded/Postponed/Deferred to a future release. This usually happens because of time or cost constraints.

      5. Analysts place a defect item into Development status when analysis is complete and changes are being made to the application by developers.
      6. Developers change the status of a defect item to Testing after unit testing. This signals to testers that application changes can be tested again. Such items are ideally noted on a Hand-off memo ("manifest") from developers to testers.
      7. If changes by developers did not fix the problem reported, that defect item can be Returned or Reopened to development. Another defect item is created if a fix introduces another problem.
      8. After a tester verifies that a defect has indeed been fixed, the defect item is marked Fixed. NOTE: Avoid the word Closed because it can be confused with other statuses such as Refused/Forwarded, etc.
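
      As a sketch only (the state names and allowed moves are illustrative, not a standard workflow), this life cycle can be written down as an explicit transition table so a tracking tool can reject illegal status changes:

          # The status workflow above as an explicit transition table.
          ALLOWED_TRANSITIONS = {
              "Submitted":   {"Assigned", "Evaluation", "Withdrawn", "Refused"},
              "Assigned":    {"Development", "Waivered", "Deferred", "Refused"},
              "Evaluation":  {"Development", "Refused", "Deferred"},
              "Development": {"Testing"},
              "Testing":     {"Fixed", "Returned"},
              "Returned":    {"Development"},
              "Deferred":    {"Assigned"},   # picked up again in a later release
              "Withdrawn":   set(),
              "Refused":     set(),
              "Waivered":    set(),
              "Fixed":       set(),
          }

          def change_status(current, new):
              """Validate a status change before recording it in the tracker."""
              if new not in ALLOWED_TRANSITIONS.get(current, set()):
                  raise ValueError(f"Illegal transition: {current} -> {new}")
              return new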

      Availability Metrics

      Compare these to the availability measurements defined by ITIL:

      Each status below marks the time at which ...
      • Incident start: the outage actually started (whether noticed or not).
      • Detection: the outage is discovered and reported (by users or others).
      • Response: the outage was escalated for support.
      • Diagnosis: the outage investigation/analysis began.
      • Repair: the CI (Configuration Item) at fault is repaired or replaced.
      • Recovery: the machine is brought back up to its original state.
      • Restoration: the business resumed normal operations.
      "Detection time" is the elapsed time between Incident start and Detection.

      "Response time" is the elapsed time between Detection and Response.

      etc.
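
      The arithmetic is simply the difference between successive timestamps; an illustrative calculation (the timestamps are invented for the example):

          from datetime import datetime

          incident_start = datetime(2014, 3, 1, 9, 0)    # outage actually began
          detection      = datetime(2014, 3, 1, 9, 20)   # outage discovered and reported
          response       = datetime(2014, 3, 1, 9, 35)   # escalated for support

          detection_time = detection - incident_start    # 0:20:00
          response_time  = response - detection          # 0:15:00
          print(f"Detection time: {detection_time}, Response time: {response_time}")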



    Root Cause Categorization

      It is dangerous to require entry of a defect cause when a defect is first entered.

      If the categories are too broad (such as "bug", "clarification", "Infrastructure", etc.) they become useless for taking action.

      If the categories are too narrow, they become unwieldy to analyze. People will select the first item to avoid having to read through all the options.

      "User error" or "User Training" are not a helpful category because it doesn't specify the remediation. When it comes down to, every error can be attributed to some human error. If some machine breaks, it's because equipment was not serviced correctly or the equipment was not replaced soon enough.

      I recommend root cause categorization by asset, such as a document:
      • Software Coding
      • Server Utilities
      • Server Hardware
      • Database
      • Network
      • Glossary
      • Requirements
      • Software Design
      • Deployment scripts
      • Installation Instructions
      • User Documentation
      • User Training

      Blame errors on inanimate objects. If a user makes a mistake because he failed to read the instructions, it's still an instruction document failure -- the failure to use the instructions.



    Bug Tracking Packages

      Among the tools listed at SQATester.com:

    • FogBUGZ, a web-based system designed by web development guru Joel Spolsky, tracks (for $99 per user) three kinds of cases:
      • Features that you want to add to your product,
      • Bugs or other possible flaws in your product, and
      • Inquiries, when someone has a question about how something should work, a suggestion for improvement, or an email from a customer.

    • TestTrack Pro from Seapine ($295 per named user)



      Rational ClearQuest

      Reminder: The id column must be the first column of every display. If, in the Display editor, you delete the id column, ClearQuest automatically displays the internal number, which will still appear even if you add the id column back.

      ClearQuest uses Crystal Reports. To create a new report, you need to select both a Report Format and a Query, so create them before defining the report.

      When you edit a Report Format, you can add additional fields, but they do not appear until (after you click Author Report and enter Crystal Designer) you select Database > Verify Database.



    Mechanical Problems

      After every flight, Qantas pilots fill out a form, called a "Gripe Sheet," which tells mechanics about problems with the aircraft. The mechanics correct the problems, document their repairs on the form, and then pilots review the gripe sheets before the next flight.

      Qantas is the only major airline that has never had an accident.

      But never let it be said that Qantas ground crews lack a sense of humor. Here are some actual maintenance complaints submitted by Qantas pilots (marked with a P) and the solutions recorded by maintenance engineers (marked with an S).

      P: Left inside main tire almost needs replacement.
      S: Almost replaced left inside main tire.

      P: Test flight OK, except auto-land very rough.
      S: Auto-land not installed on this aircraft.

      P: Something loose in cockpit.
      S: Something tightened in cockpit.

      P: Dead bugs on windshield.
      S: Live bugs on back-order.

      P: Autopilot in altitude-hold mode produces a 200 feet per minute descent.
      S: Cannot reproduce problem on ground.

      P: Evidence of leak on right main landing gear.
      S: Evidence removed.

      P: DME volume unbelievably loud.
      S: DME volume set to more believable level.

      P: Friction locks cause throttle levers to stick.
      S: That's what they're for.

      P: IFF inoperative.
      S: IFF always inoperative in OFF mode.

      P: Suspected crack in windshield.
      S: Suspect you're right.

      P: Number 3 engine missing.
      S: Engine found on right wing after brief search.

      P: Aircraft handles funny.
      S: Aircraft warned to straighten up, fly right, and be serious.

      P: Target radar hums.
      S: Reprogrammed target radar with lyrics.

      P: Mouse in cockpit.
      S: Cat installed.

      P: Noise coming from under instrument panel. Sounds like a midget pounding on something with a hammer.
      S: Took hammer away from midget.



    Portions © Copyright 1996-2014 Wilson Mar. All rights reserved.

    Related Topics:

  • Software Testing
  • Configuration Management
  • Free Training!
  • Technical Support
