Software Test Planning

Here are my notes about creating Test Plans in a way that reconciles several methodologies, including RUP and the UML 2.0 Test Profile (U2TP) language for designing, visualizing, specifying, analyzing, constructing, and documenting the artifacts of test systems.


 

Inputs to Testing:

  • Artifact Flow
  • Terminologies
  • Test Plan Sections
  • Participants' Roles
  • Test Scope
  • Test Risks
  • Subjects of Testing
  • Test Strategies
  • Test Priorities
  • Test Ideas



    Testing Artifact (Deliverable) Flow Sequence Diagram



      This shows the flow of deliverables among major participants (the stick figures in the use case diagram above).

      The darker vertical lines illustrate the principal exchanges of artifacts of information — the teamwork necessary among participants.

      Each artifact's color identifies one of the four metalayers (packages) of abstraction defined by the UML 2.0 Test Profile (U2TP) standard:

      1. Test Architecture, defining concepts related to test structure and test configuration (the relationships among the elements involved in a test project)
      2. Test Behaviors, defining concepts related to the dynamic aspects of test procedures (taking the structural/static model to test execution directives, the interface for testing)
      3. Test Data, defining the structures and meaning of the values to be processed in a test
      4. Test Time, defining concepts for a time-quantified definition of test procedures (constraints and time observation for test execution)

      The word "Profile" in U2TP means that it conforms to UML standards.

    • TMap Next, for result-driven testing, by Tim Koomen, Leo van der Aalst, Bart Broekman, and Michiel Vroon. Their Business Driven Test Management (BDTM) approach is more popular in northern Europe (the Netherlands, Germany) and is the basis for EXIN's TMap certification exam and for Sogeti/Cap Gemini's QA methodology.
    • Software Testing: A guide to the TMap Approach by Martin Pol, Ruud Teunissen, and Erik Van Veenendaal
    • "A Standard for Testing Application Software" (1991) by William E. Perry

      "Quality Essentials" by Jack B. Revelle


    Type of Testing

      The basic types of testing (as defined by SourceLab's "CERT7") are:

      1. Unit Testing
      2. Functional Testing
      3. Security Testing
      4. Stress Testing
      5. Scalability Testing
      6. Reliability Testing
      7. Integration Testing


    Testing Terminologies

      HP (Mercury Interactive)'s Quality Center (formerly TestDirector) product organizes each requirement for testing as a test subject for each AUT (application under test) under a Test Plan Tree hierarchy. Both manual and automated scripts can be specified in TestDirector. Each test script selected from the tree becomes a Test Step in a Test Set actually executed by TestDirector.

      In Rational's TestManager, a test plan contains test cases organized within test case folders. These are read by Rational's ClearQuest defect tracking system.

      Why Bother with UML Test Profiles?

      • You can describe a system precisely using UML; that is why UML was invented. The visual representation of test artifacts aims for a common and thus hopefully unambiguous interpretation of test designs.
      • UML is the language (lingua franca) "spoken" by professional system architects and developers. Testers need to understand the Model Driven Architectures (MDA) that architects and developers design and build.
      • The UML Testing Profile standard includes a specification of plain-text XML which (at this point, theoretically) enables tool-independent interchange of test profile information.

      • Soon, test tools will require testers to augment UML created by architects to specify testing at a higher level of abstraction instead of crafting scripts as automation testers do now. Testers will specify executable UML action semantics which are automatically compiled into platform-specific test components used to conduct testing.
      The problem with the MOF-based Metamodel for Testing (shown in the screen capture) is that its object-oriented approach is not presented in a sequential way. That is what this page aims to do.

     

    From the Rational Unified Process (RUP) tutorial:

  • Executable UML: A Foundation for Model-Driven Architecture by Stephen J. Mellor and Marc J. Balcer (executableumlbook.com)

  • Leon Starr, "Executable UML: The Elevator Case Study".

    Test Outputs

      The purpose of testing is to obtain information needed to make decisions about a System Under Test (SUT).

      Test Logs

      With UML: A Test Log is an interaction resulting from the execution of a test case. It represents (remembers) the different messages exchanged between test components and the SUT and/or the states of the test components involved.

      A log is associated with verdicts representing the adherence of the SUT to the test objective of the associated test case.

      The names of test log files usually differ by vendor (a sketch of a simple log follows this list):

      • Logs output by the testing tool:

        • WinRunner test logs

        • LoadRunner output.txt and run logs.

      • Logs output by scripts within test tools.

      • Java JVM verbose logs

      • Windows OS Application logs and Security logs.

      • An application's stdout and stderr files.
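
      The following is a minimal Java sketch of the log concept described above (not taken from any particular tool; the TestLog class, record() method, and verdict values are invented for illustration). It remembers the messages exchanged with the SUT and the verdict associated with the test case:

        import java.util.ArrayList;
        import java.util.List;

        // Minimal sketch of a test log: it "remembers" the messages exchanged with
        // the SUT and the verdict ultimately assigned to the associated test case.
        public class TestLog {
            public enum Verdict { PASS, FAIL, INCONCLUSIVE, ERROR }

            private final String testCaseName;
            private final List<String> entries = new ArrayList<>();
            private Verdict verdict = Verdict.INCONCLUSIVE;

            public TestLog(String testCaseName) { this.testCaseName = testCaseName; }

            public void record(String message) {       // one exchanged message or state
                entries.add(System.currentTimeMillis() + " " + message);
            }

            public void setVerdict(Verdict v) { this.verdict = v; }

            public void dump() {                       // real tools write files like those listed above
                System.out.println("Test case: " + testCaseName + " verdict=" + verdict);
                entries.forEach(System.out::println);
            }
        }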

      Test Results

      Test Results are

      Key Measures in Test

      • Number of
      • Time needed to run script

      Work Load Analysis Model

      See Performance Testing using LoadRunner

      Test Evaluation Summary

      Performance Reports, Enhancement Requests, and Defect Reports.

     


    Test Plan Sections

      Test Plan

      Test Interface Specs

      With U2TP, a test suite has a constraint: it must contain exactly one property realizing the Arbiter interface.

      A structured classifier acting as a grouping mechanism for a set of test cases. The composite structure of a test suite is referred to as test configuration. The classifier behavior of a test suite is used for test control.

      Test Environment Configuration

      Test Automation Architecture

      Test Classes

      Test Scripts


      Starting from the upper right corner of this diagram from the UML 2.0 Test standard document:

      The System Under Test (SUT)

      The system under test (SUT) is a part representing the system, subsystem, or component being tested. An SUT can consist of several objects.

      In the UML Testing Profile, the system under test (SUT) is not specified as part of the test model; instead it is marked with a stereotype, and the test architecture package imports the complete design (UML) model of the SUT in order to gain access to the elements to be tested.

      The SUT is exercised via its public interface operations and signals by the test components.

      It is assumed that no further information can be obtained from the SUT as it is a black-box.
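
      A minimal Java sketch of this black-box principle, assuming a hypothetical SUT that exposes an AccountService interface (the interface name, deposit(), and balance() are invented for illustration). The test component holds a reference typed only by the public interface, so it can send stimuli and make observations but cannot reach inside the SUT:

        // The SUT is visible to test components only through its public operations.
        interface AccountService {                         // public interface of the SUT
            void deposit(String accountId, long cents);
            long balance(String accountId);
        }

        class AccountTestComponent {
            private final AccountService sut;              // black-box reference to the SUT

            AccountTestComponent(AccountService sut) { this.sut = sut; }

            boolean checkDepositIsVisible() {
                sut.deposit("A-1", 500);                   // stimulus
                return sut.balance("A-1") >= 500;          // observation
            }
        }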

      Properties of the SUT


    Test Architecture

      Test Elements

      The architecture section defines two test elements: test suites and test components (see the sketch after this list).

      • A test suite contains :

        • a collection of test cases

        • an instance of the arbiter interface, normally an instance of the SUT, and

        • (optionally) high level behavior used to control the execution of the test cases.

      • Test components are the various elements which interact with the SUT to realize the test cases defined in the test suite.
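
      As a sketch of these two elements (the class and method names are assumptions, not part of the standard; a U2TP-based tool would generate something richer), a test suite can be modeled as a class that groups test cases, holds the arbiter, and optionally supplies control behavior that decides execution order:

        import java.util.List;

        interface Arbiter { String getVerdict(); }     // detailed in the Arbiter topic below

        // Sketch: a test suite groups test cases, holds the arbiter,
        // and (optionally) controls the order in which test cases run.
        class TestSuite {
            private final List<Runnable> testCases;    // each Runnable is one test case
            private final Arbiter arbiter;             // exactly one arbiter per suite

            TestSuite(List<Runnable> testCases, Arbiter arbiter) {
                this.testCases = testCases;
                this.arbiter = arbiter;
            }

            // Simple control behavior: run every test case in declaration order.
            void run() {
                for (Runnable testCase : testCases) {
                    testCase.run();
                }
                System.out.println("Suite verdict: " + arbiter.getVerdict());
            }
        }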

     



      Scheduler

      The Scheduler is a predefined interface defining operations used for controlling the tests and the test components.

        startTestCase()
        finishTestCase(t : TestComponent)
        createTestComponent(t : TestComponent)

      To AGEDIS [3], the Scheduler is a property of a test context used to control the execution of the different test components. The scheduler keeps information about which test components exist at any point in time, and collaborates with the arbiter to inform it when it is time to issue the final verdict. It keeps control over the creation and destruction of test components, and it knows which test components take part in each test case.
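
      A direct Java transcription of this interface, plus a toy implementation, might look like the following sketch (U2TP defines only the operations; the SimpleScheduler class and its bookkeeping are assumptions made here to illustrate the description above):

        interface TestComponent { }                        // placeholder type

        interface Scheduler {
            void startTestCase();                          // begin a test case
            void createTestComponent(TestComponent t);     // register a new component
            void finishTestCase(TestComponent t);          // a component reports completion
        }

        // Toy implementation: tracks which components are still active so it can
        // tell the arbiter when it is time to issue the final verdict.
        class SimpleScheduler implements Scheduler {
            private final java.util.Set<TestComponent> active = new java.util.HashSet<>();

            public void startTestCase()                      { active.clear(); }
            public void createTestComponent(TestComponent t) { active.add(t); }
            public void finishTestCase(TestComponent t) {
                active.remove(t);
                if (active.isEmpty()) {
                    System.out.println("All components done; arbiter may issue the final verdict");
                }
            }
        }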

     



      Test Validation Action

      Validation actions are performed by a test component; a validation action sets the local verdict of that test component. Evaluation actions evaluate the status of the execution of a test case. The action assesses the SUT observations and/or additional characteristics/parameters of the SUT.

      Every validation action causes the setVerdict operation on the arbiter implementation to be invoked.

      Test Verdict

      A Verdict is an assessment of the correctness of the SUT.

      AGEDIS notes that a verdict is a property of a test case or a test context to evaluate test results and to assign the overall verdict of a test case or test context respectively. Test cases yield verdicts.

      Verdicts can be used to report failures in the test system. Predefined verdict values are:

      • Pass - indicates that the test behavior gives evidence for correctness of the SUT for that specific test case.
      • Fail - describes that the purpose of the test case has been violated.
      • Inconclusive - used where neither a Pass nor a Fail can be given.
      • Error - used to indicate errors (exceptions) within the test system itself.

      Verdicts can be user-defined. The verdict of a test case is calculated by the arbiter.



      Arbiter

      With U2TP, an Arbiter is a predefined <<interface>> defining operations used for arbitration of tests.

      Test cases, test contexts, and the runtime system can use realizations of this interface to assign verdicts of tests and to retrieve the current verdict of a test. The arbitration algorithm can be user-defined. Some sample operations:

      • getVerdict() : Verdict Returns the current verdict.
      • setVerdict(v : Verdict) Sets a new verdict value.

      There is a default arbitration algorithm, based on functional conformance testing, which generates Pass, Fail, Inconc, and Error as verdicts, ordered as Pass < Inconc < Fail < Error.

      Every test suite must have an implementation of the arbiter interface, and the tool vendor constructing tools based on the Testing Profile will provide a default arbiter to be used if one is not explicitly defined in the test suite.
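
      A Java sketch of such a default arbiter (a stand-in for whatever a tool vendor would supply, not the normative U2TP definition): each setVerdict call can only move the overall verdict toward a more severe value, following the Pass < Inconc < Fail < Error ordering above:

        enum Verdict { PASS, INCONC, FAIL, ERROR }       // declaration order = severity order

        interface Arbiter {
            Verdict getVerdict();                        // returns the current verdict
            void setVerdict(Verdict v);                  // proposes a new verdict value
        }

        // Default arbitration sketch: the combined verdict never becomes less severe.
        class DefaultArbiter implements Arbiter {
            private Verdict current = Verdict.PASS;

            public Verdict getVerdict() { return current; }

            public void setVerdict(Verdict v) {
                if (v.ordinal() > current.ordinal()) {   // keep the more severe verdict
                    current = v;
                }
            }
        }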

     


      Starting from the middle of this diagram from ___ presenting at the U2TP Consortium:

      The Test Architecture

      Test Class

      Test Package

      Collaboration

      Test Cases

      A test case is a specification of one case to test the system, including what to test it with, which inputs, which results are expected, and under which conditions. It is a complete technical specification of how the SUT should be tested for a given test objective.

      A test case is defined in terms of sequences, alternatives, loops, and defaults of stimuli to and observations from the SUT. It implements a test objective. A test case may invoke other test cases. A test case uses an arbiter to evaluate the outcome of its test behavior.

      A test case is a property of a test context. It is an operation specifying how a set of cooperating test components interacting with a system under test realize a test objective. Both the system under test and the different test components are parts of the test context to which the test case belongs.

      Test Suite

      A collection of test cases is called a Test Suite.

      Test Operations

     


      Test Context

      With U2TP, each <<TestContext>> is a structured classifier acting as a grouping mechanism for a set of test cases.

      The composite structure of a test context is referred to as a test configuration.

      The classifier behavior of a test context is used for test control.

      Each test context must contain exactly one property realizing the Arbiter interface and the Scheduler interface.

      Test Configuration

      A Test Configuration is the collection of test component objects and of connections between the test component objects and to the SUT. The test configuration defines both (1) test component objects and connections when a test case is started (the initial test configuration) and (2) the maximal number of test component objects and connections during the test execution.
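
      A minimal Java sketch of these constraints (all names invented): the test context groups its test cases, owns exactly one arbiter property and one scheduler property, and its parts together with their connections form the test configuration. Its control behavior runs the test cases:

        import java.util.List;

        // Sketch of a <<TestContext>>: a grouping of test cases with exactly one
        // Arbiter property and one Scheduler property. Its parts and the
        // connections between them form the test configuration.
        class MyTestContext {
            interface Arbiter   { String getVerdict(); }
            interface Scheduler { void startTestCase(); }

            private final Arbiter arbiter;        // exactly one arbiter property
            private final Scheduler scheduler;    // exactly one scheduler property
            private final List<Runnable> testCases;

            MyTestContext(Arbiter a, Scheduler s, List<Runnable> cases) {
                this.arbiter = a;
                this.scheduler = s;
                this.testCases = cases;
            }

            // Classifier behavior used for test control: run each test case in turn.
            void control() {
                for (Runnable testCase : testCases) {
                    scheduler.startTestCase();
                    testCase.run();
                }
                System.out.println("Context verdict: " + arbiter.getVerdict());
            }
        }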

      Test Part

      A part of the test system representing miscellaneous components that help test components to realize their test behavior. Examples of utility parts are miscellaneous features of the test system.

      Test Utility

     



    Structured Classifier

     


      Test Components

      With U2TP, a test component is a structured classifier participating in test behaviors. A test component is commonly an active class with a set of ports and interfaces. Test components are used to specify test cases as interactions between a number of test components. The classifier behavior of a test component can be used to specify low level test behavior, such as test scripts, or it can be automatically generated by deriving the behavior from all test cases in which the component takes part.

      Test component objects realize the behavior of a test case.

      A test component has a set of interfaces via which it may communicate via connections with other test components or with the SUT.

      A test component object executes a sequence of behaviors against the SUT in the form of test stimuli and test observations. It can also perform validation actions, and can log information into the test trace. Whenever a test component performs a validation action it updates its local verdict.
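
      Pulling these points together, here is a hedged Java sketch of one test component (the AccountService interface and all other names are invented): it sends a stimulus to the SUT through a public operation, makes an observation, performs a validation action that updates its local verdict, and records the outcome in its trace:

        import java.util.ArrayList;
        import java.util.List;

        // Hypothetical test component: stimulates and observes the SUT through its
        // public interface, validates the observation, updates its local verdict,
        // and records the outcome in its trace.
        class DepositTestComponent {
            enum Verdict { PASS, FAIL, INCONCLUSIVE, ERROR }
            interface AccountService {                  // public interface of the SUT
                void deposit(String accountId, long cents);
                long balance(String accountId);
            }

            private final AccountService sut;
            private final List<String> trace = new ArrayList<>();
            private Verdict localVerdict = Verdict.INCONCLUSIVE;

            DepositTestComponent(AccountService sut) { this.sut = sut; }

            void run() {
                sut.deposit("A-1", 500);                // test stimulus
                long observed = sut.balance("A-1");     // test observation
                validate(observed >= 500);              // validation action
                trace.add("balance observed: " + observed + ", local verdict " + localVerdict);
            }

            private void validate(boolean ok) {
                // Updates the local verdict; a full component would also invoke
                // setVerdict() on the arbiter, as described in the Arbiter topic.
                localVerdict = ok ? Verdict.PASS : Verdict.FAIL;
            }
        }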

     


    Test Behaviors Package


      Test Defaults

      Test Defaults are among the set of concepts (in addition to the UML 2.0 behavioral concepts) used to specify test behaviors, their objectives, and the evaluation of systems under test.

      Test Controls

      A test control is a specification for the invocation of test cases within a test context. It is a technical specification of how the SUT should be tested with the given test context.

      Test Events

      Test Case Invocation

      A test case can be invoked with specific parameters and within a specific context. The test invocation leads to the execution of the test case. The test invocation is denoted in the test log.

      Test Objectives

      A test objective is a named element describing what should be tested. It is associated with a test case.

      Test Stimulus

      A stimulus is an item of test data sent to the SUT in order to control it and to make assessments about the SUT when receiving the SUT reactions to these stimuli.

      Test Observation

      An observation is test data reflecting the reactions from the SUT, used to assess those reactions, which are typically the result of a stimulus sent to the SUT.

      Coordination

      Concurrent (and potentially distributed) test components have to be coordinated both functionally and in time in order to assure deterministic and repeatable test executions resulting in well-defined test verdicts. Coordination is done explicitly with normal message exchange between components, or implicitly with general ordering mechanisms.

      Default

      A Default is a behavior triggered by a test observation that is not handled by the behavior of the test case per se. Defaults are executed by test components.

      Test Log Action

      An action to log information in the test log.
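
      To illustrate the default concept in particular (a sketch only; the dispatcher class and its names are invented, not tool or standard code): when a test component receives an observation that the test case behavior does not handle, the default behavior is triggered, which here logs the event, much like a test log action:

        import java.util.Map;
        import java.util.function.Consumer;

        // Sketch of a default: observations that the test case behavior does not
        // handle fall through to a default behavior executed by the test component.
        class ObservationDispatcher {
            private final Map<String, Consumer<String>> expectedHandlers;
            private final Consumer<String> defaultBehavior;

            ObservationDispatcher(Map<String, Consumer<String>> expectedHandlers) {
                this.expectedHandlers = expectedHandlers;
                this.defaultBehavior = detail ->
                        System.out.println("DEFAULT: unhandled observation '" + detail
                                + "', setting verdict to INCONCLUSIVE");   // log action
            }

            void onObservation(String kind, String detail) {
                expectedHandlers.getOrDefault(kind, defaultBehavior).accept(detail);
            }
        }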

     


      Test Case

     


    Test Data Package

      In general, test data refers to the specification of types and values that are received from or sent to the SUT. Data can be specified as being static or dynamic, where:

      • static data refers to the definition of types, and values given as arguments or read-only attributes.
      • dynamic data refers to the manipulation of values during the execution of a test behavior.

      UML 2.0 does not have a concrete syntax for values and expressions, but does allow the use of OCL (Object Constraint Language).
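
      Since UML 2.0 has no concrete value syntax, a sketch in Java terms may help (class and value names are invented): static test data corresponds to fixed types and read-only values supplied as arguments, while dynamic test data corresponds to values computed or manipulated while the test behavior executes:

        import java.util.List;
        import java.util.Random;

        // Static test data: a fixed type and read-only values chosen before execution.
        record DepositCase(String accountId, long cents) { }

        class DepositData {
            static final List<DepositCase> STATIC_CASES = List.of(
                    new DepositCase("A-1", 500),
                    new DepositCase("A-2", 0),        // boundary value
                    new DepositCase("A-3", -100));    // invalid value (negative test)

            // Dynamic test data: values manipulated during execution of the behavior.
            static DepositCase randomCase(Random rng) {
                return new DepositCase("A-" + rng.nextInt(1000), (long) rng.nextInt(10_000));
            }
        }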

     


    Test Timings


    Participants

      Test Team

      Focus: Roles (for each person, also record Last Name, First Name, Location, Phone, Cell, etc.)

      • Results Management: Test Manager; Functionality (Subject Matter) Specialists
      • Technical Management: Test Architect
      • Technical Processes: Test Lead; Test Automation Specialist
      • Technologists: GUI Tester; Website Tester; Wireless Tester; Database Tester; HW Driver Tester; Security Specialist

      Development Team

      Focus: Roles

      • Results Management: Development Manager (Executive); Project Manager
      • Technical Management: System/Product Architect
      • Technical Processes: Dev. Lead; Configuration/Build Manager; Security Specialist
      • Technologists: GUI Developer; Component Developer; Database Developer; Wireless Developer; HW Driver Developer

      Operations Team

      Focus: Roles

      • Results Management: Facility Manager (Executive); Operations Manager
      • Technical Management: NOC Architect
      • Technical Processes: Shift Leader; Configuration/Build Manager; Security Specialist
      • Technologists: Implementer; Scheduler; Backup/Recovery Specialist; DBA; Capacity/Performance Specialist


    Testing Participants' Role Summary Descriptions

      This definition does not imply the number of different people involved. The same person can take on several roles in a small project. In a larger project, several people can perform the same role.

      Test Manager

      Test Managers take care of managerial or business aspects of testing efforts. They define the cadence and organization of testing work in the Test Plan document.
      Test Managers organize production of the Test Evaluation Summary document as a way of reporting to upper levels of management and to assemble a historical archive of the testing effort for follow-on work.

      Test Designer

      Test Designers are also called Test Architects because they architect the Test Interface Specification, Test Guidelines, Test Automation Architecture, Test Environment Configuration, and Test Suites.

      Test Analyst

      Test Analysts analyze the needs of end users into Test Idea Checklists and Test Cases with variations in Test Data.
      Test Analysts also analyze Test Results and contribute to the Test Evaluation Summary.

      Test Class Designer

      Test Class Designers define the structure of test classes and their interfaces in relation to other classes (both those that exist and those to come).

      Test Component Implementer

      Test Component Implementers craft programming code that makes/responds to web server calls, looks up/updates data in databases, displays information, etc.

      Tester

      Testers run (invoke) test scripts developed by others.
      Testers organize and analyze Test Logs resulting from test runs.


    Test Scope

      Aspects that, if altered, could result in major differences in the performance perceived by end users:

      #  Aspect of Test Plan: Potential Options
      1. Networking Infrastructure: physical media; cabling topology (backbones & segments, hubs, routers & switches); protocols, bridges, gateways; T1 vs. DSL; proxies and firewalls vs. NAT; ...
      2. Server Infrastructure: peer-to-peer, or servers for file, print, web, database, mail, fax, telephony, remote access, load balancing, and other application services; ...
      3. Machine Platform: Compaq PCs, Sun, HP, etc., and different driver versions; ...
      4. Individual Machine Configuration: MB of RAM, capacity and speed of hard and CD drives, etc.; ...
      5. Operating System: Windows, Apple OS X, Sun, HP-UX, Linux, etc.; ...
      6. Middleware Vendors: Broadvision, IBM WebSphere, CORBA servers, etc.; ...
      7. File System (DBMS): Microsoft SQL Server, Oracle 8i, IBM DB2, etc.; ...
      8. App. Version: 1.0, 1.1, etc. / Handoff 1, 2, etc.; ...
      9. App. Installation Option: first-time vs. upgrade install; from local HD, CD, or network drive; ...
      10. App. Configuration: with/without module xxx; ...

      These identify limitations to testing.
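
      These aspects also multiply: a handful of options per row already yields more configurations than can be tested exhaustively, which is why the plan must state which combinations are in and out of scope. A small sketch (the option lists are invented for illustration) that counts the combinations:

        import java.util.List;

        // Counting configuration combinations across a few scope aspects shows why
        // the test plan must limit which combinations are actually tested.
        class ScopeMatrix {
            public static void main(String[] args) {
                List<List<String>> aspects = List.of(
                        List.of("Windows", "Apple OS X", "Linux"),          // operating system
                        List.of("SQL Server", "Oracle", "DB2"),             // DBMS
                        List.of("v1.0", "v1.1"),                            // app version
                        List.of("first-time install", "upgrade install"));  // install option

                long combinations = 1;
                for (List<String> options : aspects) {
                    combinations *= options.size();
                }
                System.out.println(combinations + " configurations from only 4 aspects");
                // Prints: 36 configurations from only 4 aspects
            }
        }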

     

      Binder, Robert V. Testing Object-Oriented Systems. Reading, MA: Addison-Wesley, 2000.

      Hetzel, Bill. The Complete Guide to Software Testing, 2nd Ed. New York: John Wiley & Sons, 1993.

      Jorgensen, Paul C. Software Testing: A Craftsman's Approach. CRC Press, 1995.

      Kaner, Cem, Jack Falk, and Hung Quoc Nguyen. Testing Computer Software, 2nd Ed. New York: John Wiley & Sons, 1999.

      Lewis, William. Software Testing and Continuous Quality Improvement. CRC Press, 2000.

      Marick, Brian. The Craft of Software Testing: Subsystems Testing Including Object-Based and Object-Oriented Testing. Prentice Hall, 1997.

     
    Test Risks

      Chart (from SmartDraw): Boehm's "Spiral" model, an approach to controlling risks through iterative design and development of prototypes built incrementally.

      Based on Function Points:

      • GUI Inputs, Edits
      • Batch (background) Outputs
      • Online/Batch Reports
      • Database (stored procedures) and edits

     

      ...

     
    Subjects of Testing

      This list of quality metrics should be meaningful to testers, developers, and managers, as it is often used by testers to report to management an estimate of the completeness of development of application features.

    • Product Integration
    • Product installability (From CD or other drive, CD copied to a local folder, CD copied to a network drive, etc.)
    • Application Testability
    • Application Functionality (detailed below)
    • Application Performance (Turnaround Cycle Time, Capacity of transactions handled per minute)
    • Product Security:
      • Authentication
        • Site Administrators (used for doing configurations)
        • Users (Customers) - Normal and SuperUsers
        • Backup operators
        • Corporate Administrators
        • Managers
        • Guests (temporary employees, consultants, auditors, etc.)
        • Automated agents (doing backup, restore, etc.)
      • Authorization
        • Login/Logon
        • Main menu and sub menu presentation
        • Logout/Logoff
        • Individual menu item invocation
        • Specific functions (Printing, Approval, etc.)
      • Audit
      • Secure Transmission

    • Backward and forward compatibility. Can files created in a prior version of the product be opened in the current version (backward compatibility)? Can files created in the current version be opened in a prior version (forward compatibility)?

    • Backward and forward interoperability. For two products that use data created with the other product, will files created in an older version of one product open in a newer version of its companion product?

    These may be separated into client and server tiers.

     

     
    Test Guidelines & Strategies

    • First, test for installability. From CD, from CD copied to a local folder, from CD copied to a network drive.

    • Test for main-line functionality (such as logon, menu selection, logoff)

    • It is usually not possible to test all menu items in sequence (such as "File, Edit, View, Format, Tools, Help," etc.) because scenarios of actions are necessary to establish pre-conditions for testing. The most efficient approach is to reuse the script that tests a predecessor function to establish the pre-conditions when automating tests of dependent functions (see the sketch below).
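
    A sketch of that reuse in plain Java (the method names are invented; in a commercial tool this would be a called script or reusable action): the logon routine written for its own test is reused as a pre-condition step by dependent tests:

        // Sketch of reusing a predecessor script to establish pre-conditions.
        class MenuTests {
            // Written once as its own test, then reused as a pre-condition step.
            static void logon(String user, String password) {
                System.out.println("logon as " + user);
            }

            static void testLogon() {
                logon("tester1", "secret");            // main-line functionality test
            }

            static void testFormatMenu() {
                logon("tester1", "secret");            // reused pre-condition
                System.out.println("open the Format menu and verify its items");
            }

            public static void main(String[] args) {
                testLogon();
                testFormatMenu();
            }
        }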


      Test Priorities

      When a requirement is added using Mercury TestDirector, the product requires specification of a Priority:

        1 - Low
        2 - Medium
        3 - High
        4 - Very High
        5 - Urgent

      Note: WinRunner does not recognize the F4 key normally used to drop an activated list.

      Do this! Define examples of what each priority level means in your organization and when each value is appropriate and not appropriate.
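
      For example, a minimal sketch of such definitions (the wording is illustrative only, not a recommendation; each organization should write its own):

        // Illustrative only: each organization should define its own meanings.
        enum TestPriority {
            LOW,        // 1 - cosmetic; test only if time remains after everything else
            MEDIUM,     // 2 - secondary features used by few users
            HIGH,       // 3 - main-line features used in normal daily work
            VERY_HIGH,  // 4 - features whose failure blocks other testing
            URGENT      // 5 - legal, safety, or revenue impact; test in every build
        }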


    Test Ideas

    • Single point of action (a single screen)
      • Action: Mnemonic “CRUD”
        • Create (Add) -- New data percolated to subsidiary tables?
          • Initial Add
          • Add same key again (catch duplicate?)
          • Add another key (to test for conflicts with previous add action)
        • Read
          • Search for keywords
          • Browse sequentially
            • Read Previous when already at the beginning (negative test)
            • Read Next
            • Read Next when already at end (negative test)
            • Read First
            • Read Last
        • Update
          • Percolated to subsidiary tables
        • Delete
          • Delete again (attempt)
          • Recreate (to see if subsidiary tables have removed effects of previous delete)

        These actions are performed under these different circumstances:

      • Instantiation
        • Empty condition
        • Single instance
        • Multiple instances (for tables allowing non-unique keys)

        All the above are repeated for each hierarchical configuration:

      • Hierarchy
        • No parent
          • no possible parents populated
          • possible parents populated
        • Single parent
        • Multiple parents
        • Skip generation (grandfather with no parent)
        • Dual generation (grandfather and father)
        • Cyclic (child is also parent of another parent)

    • Each additional origin of action (performed from another screen)
        Same actions as above

      All the above are repeated for data in each of these states (a combination sketch follows this list):

    • Validity:
      • All valid
      • Only Child valid
      • Only Parent valid
      • Only GrandParent valid
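
    As a sketch of how these checklists combine (the item strings are abbreviations of the lists above; the class is invented), a small generator that enumerates the Cartesian product of action, instantiation, hierarchy, and validity yields the raw set of test ideas to prioritize:

        import java.util.List;

        // Enumerate test ideas as the Cartesian product of the four checklists above.
        class TestIdeaGenerator {
            public static void main(String[] args) {
                List<String> actions       = List.of("Create", "Read", "Update", "Delete");
                List<String> instantiation = List.of("empty", "single instance", "multiple instances");
                List<String> hierarchy     = List.of("no parent", "single parent", "multiple parents",
                                                     "skip generation", "dual generation", "cyclic");
                List<String> validity      = List.of("all valid", "only child valid",
                                                     "only parent valid", "only grandparent valid");

                int count = 0;
                for (String a : actions)
                    for (String i : instantiation)
                        for (String h : hierarchy)
                            for (String v : validity) {
                                System.out.println(a + " / " + i + " / " + h + " / " + v);
                                count++;
                            }
                System.out.println(count + " raw test ideas");   // 4 * 3 * 6 * 4 = 288
            }
        }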



    Portions © Copyright 1996-2014 Wilson Mar. All rights reserved.

    Related Topics:

  • Software Testing
  • Test Analysis
  • Defect Reporting
  • Data Driven Testing
  • Transition Testing
