Unit Testing

Finds differences between specified units and their implementations

Unit = component (module, function, class, object, ...)

Dynamic Unit Testing

Control flow testing

  • Draw a control flow graph (CFG) from a program unit

  • Select a few control flow testing criteria

  • Identify a path in the CFG to satisfy the selection criteria

  • Derive the path predicate expression from the selected paths

  • By solving the path predicate expression for a path, one can generate the test data (see the sketch below)
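
A minimal sketch of path-predicate-based data generation in Python, using a hypothetical unit `classify` (not from the source): the true-branch path has the predicate x > 10, and solving it (or its negation) yields test data for each path.

```python
def classify(x):
    """Hypothetical unit under test with two control flow paths."""
    if x > 10:          # path predicate for the true branch: x > 10
        return "large"
    return "small"

# Solving the path predicate x > 10 yields test data such as x = 11;
# the negated predicate x <= 10 yields data (e.g. x = 3) for the other path.
assert classify(11) == "large"   # exercises the true-branch path
assert classify(3) == "small"    # exercises the false-branch path
```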

Data flow testing

  • Draw a data flow graph (DFG) from a program unit and then follow the procedure described in control flow testing.

Domain testing

  • Domain errors are defined and then test data are selected to catch those faults (see the sketch below)
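
A minimal illustration with a hypothetical `is_adult` unit: a classic domain error is a shifted or misclosed boundary (e.g. coding > instead of >=), and it is caught by selecting test points on and just off the boundary.

```python
def is_adult(age):
    """Hypothetical unit; the specified domain boundary is age >= 18."""
    return age >= 18

# Domain testing selects points on and near each boundary, so a fault
# such as mistakenly coding `age > 18` would be caught by the on-point.
assert is_adult(18) is True    # on-point of the boundary
assert is_adult(17) is False   # off-point just outside the domain
```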

Functional program testing

  • Input/output domains are defined to compute the input values that will cause the unit to produce expected output values

Mutation Testing

modifies a program by introducing a single small change to the code

modified program = mutant

mutant detected by a failing test case = dead/killed mutant

mutant that the existing test cases are insufficient to kill = stubborn/killable mutant (see the sketch below)
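
A minimal mutation-testing sketch, assuming a hypothetical `add` unit: the mutant introduces a single small change (+ becomes -), and any test input that distinguishes the two programs kills it.

```python
def add(a, b):
    return a + b          # original unit

def add_mutant(a, b):
    return a - b          # mutant: a single small change (+ replaced by -)

# A test case with b != 0 distinguishes the programs, so it kills the mutant.
assert add(2, 2) == 4
assert add_mutant(2, 2) != 4   # the mutant fails this test: dead/killed

# Note: add(0, 0) == add_mutant(0, 0), so a suite containing only that
# case would leave the mutant alive (stubborn with respect to the suite).
```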

Targets for Unit Test Cases

  • Module interface

    • Ensure that information flows properly into and out of the module
  • Local data structures

    • Ensure that data stored temporarily maintains its integrity during all steps in an algorithm's execution

  • Boundary conditions

    • Ensure that the module operates properly at boundary values established to limit or restrict processing (see the sketch after this list)
  • Independent paths (basis paths)

    • Paths are exercised to ensure that all statements in a module have been executed at least once

  • Error handling paths

    • Ensure that the algorithms respond correctly to specific error conditions
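
A minimal boundary-condition sketch, assuming a hypothetical `clamp` unit whose processing is restricted to the range [0, 100]:

```python
def clamp(x, lo=0, hi=100):
    """Hypothetical unit: restrict x to the range [lo, hi]."""
    return max(lo, min(x, hi))

# Boundary-value test cases: at and just outside each established limit.
assert clamp(0) == 0       # at the lower boundary
assert clamp(-1) == 0      # just outside the lower boundary
assert clamp(100) == 100   # at the upper boundary
assert clamp(101) == 100   # just outside the upper boundary
```
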
  1. Create unit tests as soon as object design is completed:

    • Black Box test: Test the use cases & functional model

    • White Box test: Test the dynamic model

    • Data-structure test: Test the object model

  2. Develop the test cases

    • Goal: Find the minimal number of test cases to cover as many paths as possible
  3. Cross-check the test cases to eliminate duplicates

    • Don't waste your time!
  4. Desk check your source code

    • Reduces testing time
  5. Create a test harness

    • Test drivers and test stubs are needed for integration testing
  6. Describe the test oracle

    • Often the result of the first successfully executed test
  7. Execute the test cases

    • Don’t forget regression testing

    • Re-execute test cases every time a change is made.

  8. Compare the results of the test with the test oracle (see the sketch after this list)

    • Automate as much as possible
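
A minimal sketch of steps 6-8 using Python's unittest, with a hypothetical `discount` unit: the oracle is written down as a table of predicted results before the run, and the comparison against it is automated by the framework.

```python
import unittest

def discount(total):
    """Hypothetical unit under test: 10% off orders of 100 or more."""
    return total * 0.9 if total >= 100 else total

class DiscountTest(unittest.TestCase):
    # Test oracle: predicted results, written down before the test runs.
    ORACLE = [(50, 50), (100, 90.0), (200, 180.0)]

    def test_against_oracle(self):
        for total, expected in self.ORACLE:
            # Automated comparison of actual output with the oracle.
            self.assertEqual(discount(total), expected)

if __name__ == "__main__":
    unittest.main()
```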

Integration Testing

systematic technique for constructing the software architecture

While integrating, conduct tests to uncover errors associated with interfaces

Objective: Take unit tested modules and build a program structure based on the prescribed design

  • exposes problems arising from the combination

  • obtains a working solution from components

Complete when:

  • All modules are fully integrated together

  • All the test cases have been executed

  • All the severe and moderate defects found have been fixed

Problem Areas

Internal: between components

  • Invocation: call/message passing/...

  • Parameters: type, number, order, value

  • Invocation return: identity (who?), type, sequence

External:

  • Interrupts (wrong handler?)

  • I/O timing

  • Interaction

Types

Structural: 2 approaches

  • Non-incremental Integration Testing: “Big bang” approach

    • All components are combined in advance

    • The entire program is tested as a whole

    • No error localization: many seemingly unrelated errors are encountered, and chaos results

    • Correction is difficult because isolation of causes is complicated

    • Once a set of errors is corrected, more errors occur, and testing appears to enter an endless loop

  • Incremental Integration Testing: the program is constructed and tested in small increments (a stub/driver sketch follows this list)

    • Errors are easier to isolate and correct

    • Interfaces are more likely to be tested completely

    • A systematic test approach is applied
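
A minimal sketch of the test harness used during incremental integration, with hypothetical modules: a test stub stands in for a component that is not yet integrated, and a test driver invokes the module under integration and checks the interface.

```python
def tax_stub(amount):
    """Test stub: stands in for the not-yet-integrated tax module."""
    return 0.0   # canned answer, just enough for the caller to run

def checkout(amount, tax_fn):
    """Module under integration; its collaborator is injected."""
    return amount + tax_fn(amount)

def driver():
    """Test driver: invokes the module and checks the interface."""
    assert checkout(100.0, tax_stub) == 100.0  # stubbed tax is 0.0
    print("interface between checkout and tax exercised")

if __name__ == "__main__":
    driver()
```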

Behavioral

System Testing

concerned with the application's externals

More than functional:

  • Reliability tests

  • Availability tests

  • Performance tests

  • Load\/stress tests

  • Scalability tests

  • Robustness tests

  • Usability tests

Functional testing

  • Objective: Assess whether the app does what it is supposed to do

  • Basis: Behavioral\/functional specification

  • Test case: A sequence of atomic system functions (ASFs), i.e., a thread

  • Coverage:

    • Event-based

      • PI1: each port input event occurs
      • PI2: common sequences of port input events occur
      • PI3: each port input event occurs in every relevant data context
      • PI4: for a given context, all possible input events occur
      • PO1: each port output event occurs
      • PO2: each port output event occurs for each cause
    • Data-based

      • DM1: Exercise cardinality of every relationship

      • DM2: Exercise (functional) dependencies among relationships

Stress Testing

  • push the system to its limits and beyond

Performance testing

  • Performance seen by

    • Users: delay, throughput

    • System owner: memory, CPU, communication

  • Performance

    • Explicitly specified or expected to do well

    • Unspecified: find the limit (a measurement sketch follows)
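
A minimal sketch of measuring user-visible performance (delay and throughput) for a hypothetical operation:

```python
import time

def operation():
    """Hypothetical unit whose performance is being measured."""
    sum(range(10_000))

N = 1_000
start = time.perf_counter()
for _ in range(N):
    operation()
elapsed = time.perf_counter() - start

print(f"mean delay: {elapsed / N * 1e3:.3f} ms")   # delay seen by a user
print(f"throughput: {N / elapsed:.0f} ops/s")      # operations per second
```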

Usability testing

  • Human element in system operation

  • GUI, messages, reports, ...

Acceptance Testing

Formal testing to determine whether the system satisfies its acceptance criteria

  • Purpose: ensure that end users are satisfied

    • Confirm that the system meets the agreed upon criteria

    • Identify and resolve discrepancies, if any

    • Determine the readiness of the system for cut-over to live operations

  • Basis: user expectations (documented or not)

  • Environment: real

  • Performed: for and by end users (commissioned projects)

  • Test cases:

    • May reuse from system test

    • Designed by end users

  • Types:

    • User Acceptance Testing (UAT)

      • Conducted by the customer to ensure the system satisfies the contractual acceptance criteria before being signed off as meeting user needs

    • Business Acceptance Testing (BAT)

      • Undertaken within the development organization of the supplier to ensure that the system will eventually pass user acceptance testing

Regression Testing

Part of the test cycle in which a program is tested to ensure that changes do not affect features that are not supposed to be affected

  • Types

    • Corrective regression testing: triggered by corrections made to the previous version

    • Progressive regression testing: triggered by new features added to the previous version

  • Whenever a system is modified (fixing a bug, adding functionality, etc.), the entire test suite needs to be re-run

    • Make sure that features that already worked are not affected by the change
  • 3 classes of test cases in the regression test suite (a sketch follows this list):

    • A representative sample of tests that will exercise all software functions

    • Additional tests that focus on software functions that are likely to be affected by the change

    • Tests that focus on the actual software components that have been changed
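
A minimal regression-suite sketch with a hypothetical `parse_quantity` function: one test pins a previously fixed bug, another is a representative sample, and both are re-executed on every change.

```python
import unittest

def parse_quantity(text):
    """Hypothetical function; an earlier version crashed on blank input."""
    text = text.strip()
    return int(text) if text else 0

class RegressionTests(unittest.TestCase):
    def test_blank_input_bug_stays_fixed(self):
        # Guards the earlier fix: blank input must not raise.
        self.assertEqual(parse_quantity("   "), 0)

    def test_representative_sample(self):
        # Representative case exercising the normal function.
        self.assertEqual(parse_quantity(" 42 "), 42)

if __name__ == "__main__":
    unittest.main()
```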

Smoke Testing

  • Power is applied and a technician checks for sparks, smoke, or other dramatic signs of fundamental failure

  • Pacing mechanism for time-critical projects: Allows the software team to assess its project on a frequent basis

  • Includes the following activities (a minimal smoke-suite sketch follows this section)

    • The software is compiled and linked into a build

    • A series of breadth tests is designed to expose errors that will keep the build from properly performing its function

      • The goal is to uncover “show stopper” errors that have the highest likelihood of throwing the software project behind schedule
  • Build is integrated with other builds and the entire product is smoke tested daily

    • Daily testing gives managers and practitioners a realistic assessment of the progress of the integration testing
  • When complete, detailed test scripts are executed

  • Benefits:

    • Integration risk is minimized

      • Daily testing uncovers incompatibilities and show-stoppers early in the testing process, thereby reducing schedule impact

    • The quality of the end product is improved

      • Smoke testing is likely to uncover functional errors as well as architectural and component-level design errors

    • Error diagnosis and correction are simplified

      • Smoke testing will probably uncover errors in the newest components that were integrated

    • Progress is easier to assess

      • As integration testing progresses, more software has been integrated and more has been demonstrated to work

      • Managers get a good indication that progress is being made
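
A minimal breadth-test sketch: a smoke suite that only checks that the build loads and its main function responds at all, intended to run against each daily build (the core module here is a stand-in for the product's own).

```python
import unittest

class SmokeTests(unittest.TestCase):
    """Breadth tests: fast checks that the daily build basically works."""

    def test_core_module_loads(self):
        # Show-stopper check: the build's core module must import at all.
        import json  # stand-in for the product's core module
        self.assertTrue(json)

    def test_main_function_responds(self):
        # One shallow pass through a main function, no depth.
        import json
        self.assertEqual(json.loads('{"ok": true}'), {"ok": True})

if __name__ == "__main__":
    unittest.main()
```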

Test Stopping Criteria

  • Meet deadline, exhaust budget, ... <- management

  • Achieved desired coverage

  • Achieved desired level of failure intensity

The 4 Testing Steps

  1. Select what has to be measured

    • Analysis: Completeness of requirements

    • Design: tested for cohesion

    • Implementation: Code tests

  2. Decide how the testing is done

    • Code inspection

    • Proofs (Design by Contract)

    • Black box, white box, ...

    • Select integration testing strategy (big bang, bottom up, top down, sandwich)

  3. Develop test cases

    • A test case is a set of test data or situations that will be used to exercise the unit (code, module, system) being tested or about the attribute being measured
  4. Create the test oracle

    • An oracle contains the predicted results for a set of test cases

    • The test oracle has to be written down before the actual testing takes place
