
Once Upon A Time In Software Testing

William Durand - December 18, 2013

About Me

  • PhD student at Michelin / LIMOS
  • Graduated from IUT and ISIMA
  • I Open Source

PhD Topic

Automated Test Generation for applications and production machines in a Model-based Testing approach.

Agenda

  • Introduction
  • Verification Quickly
  • Testing 101
  • Model-based Testing
  • Current Research
  • Conclusion

So... Software Testing

Software testing is the process of analyzing a software item to detect the differences between existing and required conditions (that is, bugs) and to evaluate the features of the software item.

It is a verification and validation process.

Validation → "Are we building the right software?"
Verification → "Are we building the software right?"

Why?

  • To find faults (G. Myers, The Art of Software Testing)
  • To provide confidence of reliability, correctness, and absence of particular faults

This does not mean that the software is completely free of defects. Rather, it must be good enough for its intended use.

How?

Industry

Unit Testing, Integration Testing, Functional Testing, System Testing, Stress Testing, Performance Testing, Usability Testing, Acceptance Testing, Regression Testing, Beta Testing, <Whatever You Want> Testing

People now understand the need for testing things

They mostly do testing by hand

Academia

Verification Quickly

Definition

Formal Verification is the act of proving or disproving the correctness of intended algorithms underlying a system with respect to a certain formal specification or property, using formal methods of mathematics.

Advantages

  • Powerful method for finding software errors
  • Mathematical proof of absence of errors in models relative to specifications

Techniques

  • Model Checking
  • Runtime Verification
  • Theorem Proving
  • Static Analysis
  • Simulation

Testing 101

Definition

Software Testing is the process of executing a program or system with the intent of finding errors. However, "Testing shows the presence, not the absence of bugs" (Dijkstra).

It is the validation part!

The Big Picture

Test Suite

A Test Suite (TS) is a set of Test Cases.

Test Case

A Test Case (TC) consists of:

  • Test Data (TD)
  • Expected behavior
  • Expected output

Test Data

Inputs which have been devised to test the system.

Tester

A tester is a mechanism used for determining whether a test has passed or failed.
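
As a concrete illustration (not from the original slides), here is a minimal Python sketch of these notions, where the SUT is a hypothetical add function:

```python
# A minimal Test Case sketch; the SUT function `add` is hypothetical.
def add(x, y):  # System Under Test (illustrative only)
    return x + y

def test_add():
    test_data = (1, 2)       # Test Data (TD): inputs devised to test the SUT
    expected_output = 3      # Expected output
    # The tester: decides whether the test has passed or failed
    assert add(*test_data) == expected_output, "test failed"

test_add()  # no exception raised: the test case passed
```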

Test Execution

Different Approaches

White-Box Testing

White-box testing is a method that tests the internal structure of a System Under Test (SUT).

Implementation: a realistic, executable piece of software or hardware that should provide desired behaviors.

It is usually done at the unit level.

Black-box Testing

Black-box testing is a method that tests the functionalities of a SUT without knowing its internal structure.

Specification: a description of the desired behaviors that define only what the system should do, not how it is done.

Also known as Functional Testing.

Grey-box Testing

The combination of White-box testing and Black-box testing.

You have access to the relevant internal parts of your SUT.

But...

Testing cannot guarantee the absence of faults.

How to select a subset of Test Cases, from all possible Test Cases, with a high chance of detecting most faults?

Test Selection (Strategies)

  • Black-box Testing: Combinatorial Testing (Pairwise), Equivalence Partitioning, Boundary Value Analysis, Function Coverage
  • White-box Testing: Fuzz Testing (Random), Statistical Testing, Statement Testing, Path Testing, Branch Testing, Condition Testing, Multiple Condition (MC) Testing, Loop Testing, Mutation Testing

http://people.cs.aau.dk/~bnielsen/TOV07/lektioner/whitebox-07.pdf
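
For illustration (not from the slides), two of the black-box strategies above, Equivalence Partitioning and Boundary Value Analysis, could be sketched as follows, assuming a hypothetical SUT is_adult that must accept ages from 18 to 120:

```python
# A sketch of Equivalence Partitioning and Boundary Value Analysis;
# the SUT `is_adult` and its valid range [18, 120] are assumptions.
def is_adult(age):
    return 18 <= age <= 120

# Equivalence Partitioning: one representative value per class of inputs
partitions = {"below range": 5, "in range": 40, "above range": 150}

# Boundary Value Analysis: values around the edges of the valid range
boundaries = [17, 18, 19, 119, 120, 121]

for age in list(partitions.values()) + boundaries:
    print(age, "->", is_adult(age))
```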

Automatic Test Generation

White-box Testing → Automatic Testing
Black-box Testing → Model-based Testing

Model-based Testing

Definition

Model-based Testing (MbT) is the application of Model-based design for designing, and optionally also executing, artifacts to perform software testing.

Models can be used to represent the desired behavior of an SUT, or to represent testing strategies and a test environment.

http://en.wikipedia.org/wiki/Model-based_testing

Why?

  • The need for automation
  • Formal methods

Goals

  • To bring the benefits of automation to new parts of the test cycle (test cases creation for instance)
  • To provide testers more effective tools
  • To reduce cost and cycle time

The Big Picture

Three Stages

  1. Formally modelling the requirements (specification);
  2. Generating test cases from the model;
  3. Running these test cases against an actual SUT and evaluating the results.

Combining 2. and 3. leads to On-The-Fly Testing.

Models

A Model is a description of a system that helps you understand and predict its behavior. It does not need to completely describe it to be effective.

  • Behavior/Control oriented: Finite Automata (FSM, LTS), Petri Nets, Synchronous Languages (Lustre, Scade)
  • Data oriented (pre/post): JML, Spec#, OCL, B-Method, Praspel

Observations

Executing a test case on a system yields a set of observations.

Every observation represents a part of the implementation model of the system.

Implementation Model

The set of all observations made with all possible test cases represents the complete implementation model of the system.

Testing Hypothesis

For every system, there is a corresponding, observationally equivalent implementation model:

\forall\ iut \in IMPS,\ \exists\ I_{iut} \in MODS

  • iut \in IMPS is a concrete Implementation Under Test (IUT)
  • IMPS is the universe of implementations
  • I_{iut} is a model of iut
  • MODS is the universe of the models of all IUT

Implementation Relation

To define conformance between an implementation under test iut and a specification Spec , we use the notion of an implementation relation imp :

imp \subseteq MODS \times SPECS

with SPECS the set of specifications.

Conformance

An implementation iut conforms to a specification Spec if the existing model I_{iut} of iut is imp-related to Spec .

Conformance Testing

Conformance Testing assesses the conformance of an unknown implementation under test ( iut ) to its specification ( Spec ) by means of test experiments. Experiments consist of stimulating iut in certain ways and observing its reactions. This process is called test execution.

Test Execution

Successful execution of a test case TC :

I_{iut}\ {\bf passes}\ TC

It is easily extended to a test suite TS :

I_{iut}\ {\bf passes}\ TS \Leftrightarrow \forall\ TC \in TS : I_{iut}\ {\bf passes}\ TC

I_{iut}\ {\bf fails}\ TC \Leftrightarrow I_{iut}\ \cancel{\bf passes} TC
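
In code, the extension to a test suite is direct (a sketch only; passes stands for any concrete verdict function):

```python
# Passing a test suite means passing every one of its test cases.
def passes_suite(model, test_suite, passes):
    """`passes(model, tc)` is any concrete pass/fail predicate."""
    return all(passes(model, tc) for tc in test_suite)
```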

Test Suite Properties

Soundness: \forall\ iut \in IMPS : [\ (iut\ {\bf conforms\ to}\ Spec) \Rightarrow (iut\ {\bf passes}\ TS)\ ]

Exhaustiveness: \forall\ iut \in IMPS : [\ (iut\ {\bf passes}\ TS) \Rightarrow (iut\ {\bf conforms\ to}\ Spec)\ ]

Completeness: \forall\ iut \in IMPS : [\ (iut\ {\bf conforms\ to}\ Spec) \Leftrightarrow (iut\ {\bf passes}\ TS)\ ]

TS \subseteq TESTS is a test suite.

Test Architecture

A test architecture is an abstract description of the environment in which an implementation under test ( iut ) is embedded, and where it communicates with a tester.

Test Generation

  • Based on Finite State Machines
  • Based on Symbolic Transition Systems
  • Based on Labelled Transition Systems

Labelled Transition System

A Labelled Transition System (LTS) describes the transitions from one state to another, caused by the execution of actions.

L = (S, Act, \rightarrow)

  • S is a set of states
  • Act is a set of actions
  • \rightarrow \subseteq S \times ( Act \cup \lbrace \tau \rbrace) \times S
  • τ is a silent, unobservable action

Example

L = (S, Act, \rightarrow) with S = \lbrace s_{1}, s_{2}, s_{3}, s_{4} \rbrace and Act = \lbrace COFFEE, TEA, BUTTON \rbrace
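
As plain data, this LTS could be written as below; the transition relation is an assumption, chosen to be consistent with the traces on the next slide, since the original diagram is not reproduced here:

```python
# The coffee-machine LTS L = (S, Act, ->) as plain Python data.
# The transitions are assumed (the slide showed them as a picture).
S = {"s1", "s2", "s3", "s4"}
Act = {"COFFEE", "TEA", "BUTTON"}
TAU = "tau"  # the silent, unobservable action

transitions = {
    ("s1", "BUTTON", "s2"),
    ("s2", "TEA", "s3"),
    ("s2", "COFFEE", "s4"),
    ("s3", "BUTTON", "s2"),
    ("s4", TAU, "s1"),
}
```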

Traces

Traces describe the observable behavior of LTS.

traces(s) = \lbrace \sigma | s \stackrel{\sigma}{\Longrightarrow} \rbrace

with

s \in S

The \Longrightarrow relation is used to abstract from \tau transitions.

Example

\begin{align} traces(s_{3}) = & \lbrace \\ & BUTTON, \\ & BUTTON \cdot TEA \cdot BUTTON, \\ & \dots \rbrace = traces(s_{1}) = traces(s_{4}) \\ \\ traces(s_{2}) = & \lbrace \\ & TEA, \\ & COFFEE, \\ & TEA \cdot BUTTON \cdot TEA, \\ & \dots \rbrace \end{align}
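
A bounded trace enumeration (a sketch only; the real trace sets are infinite) exhibits the same equalities:

```python
# Bounded trace enumeration on the assumed coffee-machine LTS,
# abstracting from tau transitions (the "=>" relation).
TAU = "tau"
transitions = {  # same assumed LTS as in the previous sketch
    ("s1", "BUTTON", "s2"), ("s2", "TEA", "s3"), ("s2", "COFFEE", "s4"),
    ("s3", "BUTTON", "s2"), ("s4", TAU, "s1"),
}

def tau_closure(state):
    """States reachable from `state` using only tau transitions."""
    closure, todo = {state}, [state]
    while todo:
        s = todo.pop()
        for (src, act, dst) in transitions:
            if src == s and act == TAU and dst not in closure:
                closure.add(dst)
                todo.append(dst)
    return closure

def traces(state, max_len=3):
    result = {()}  # the empty trace is always observable
    frontier = {((), s) for s in tau_closure(state)}
    for _ in range(max_len):
        step = set()
        for trace, s in frontier:
            for (src, act, dst) in transitions:
                if src == s and act != TAU:
                    t = trace + (act,)
                    result.add(t)
                    step |= {(t, d) for d in tau_closure(dst)}
        frontier = step
    return result

assert traces("s1") == traces("s3") == traces("s4")
assert ("TEA", "BUTTON", "TEA") in traces("s2")
```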

Input/Output LTS

By partitioning the action labels ( Act ) into inputs ( Act_{I} ) and outputs ( Act_{U} ), we can obtain an IOLTS:

Act = Act_{I} \cup Act_{U}

The names of input actions end with " ? ", and those of output actions with " ! ".

We introduce a special action δ to denote quiescence.

Example

\begin{align} Act_{I} = & \lbrace BUTTON?, COFFEE?, TEA? \rbrace \\ Act_{U} = & \lbrace COFFEE!, TEA! \rbrace \end{align}
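
A sketch of this partition, with the special quiescence action, in the same style as the previous sketches:

```python
# Input/output partition of the action labels, following the "?" / "!"
# naming convention, plus the special quiescence action delta.
Act_I = {"BUTTON?", "COFFEE?", "TEA?"}   # inputs: stimuli sent by the tester
Act_U = {"COFFEE!", "TEA!"}              # outputs: reactions observed
DELTA = "delta"                          # observation of quiescence (no output)

Act = Act_I | Act_U
assert Act_I.isdisjoint(Act_U)

def is_input(action):
    return action.endswith("?")
```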

Conformance Relation

Relating two LTS can be done in a variety of ways:

  • Equivalence Relations: Isomorphism, Bisimulation, Trace Equivalence, Testing Equivalence, Refusal Equivalence;
  • Preorder Relations: Observation Preorder, Trace Preorder, Testing Preorder, Refusal Preorder;
  • Input-Output Relations: Input-Output Testing, Input-Output Refusal, ioconf, ioco.

Not all relations are suited for testing purposes.

Trace Preorder

ioco

Common implementation relation for IOLTS:

i\ ioco\ s = \forall\ \sigma \in straces(s) : out(i\ after\ \sigma) \subseteq out (s\ after\ \sigma)

i\ ioco\ s

  • if i produces output x after trace \sigma , then s can produce x after trace \sigma
  • if i cannot produce any output after trace \sigma , then s cannot produce any output after trace \sigma (quiescence)

A few tools: TorX, TGV, Autolink, TestComposer.
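
A simplified, self-contained sketch of the ioco check (bounded traces, no tau transitions; real tools do much more):

```python
# A simplified ioco check over finite IOLTS given as transition sets.
# It ignores tau and bounds the trace length, so it is only a sketch of
# what tools like TorX or TGV implement.
Act_U = {"COFFEE!", "TEA!"}
DELTA = "delta"

def after(transitions, start, trace):
    """States reachable from `start` by executing `trace`."""
    states = {start}
    for act in trace:
        states = {dst for (src, a, dst) in transitions
                  if src in states and a == act}
    return states

def out(transitions, states):
    """Outputs enabled in `states`; delta when quiescent."""
    if not states:
        return set()
    outputs = {a for (src, a, dst) in transitions
               if src in states and a in Act_U}
    return outputs or {DELTA}

def bounded_traces(transitions, start, max_len=4):
    result, frontier = {()}, {((), start)}
    for _ in range(max_len):
        frontier = {(t + (a,), dst)
                    for (t, s) in frontier
                    for (src, a, dst) in transitions if src == s}
        result |= {t for (t, _) in frontier}
    return result

def ioco(impl, i0, spec, s0):
    """i ioco s: out(i after sigma) included in out(s after sigma)
    for every (bounded) trace sigma of the specification."""
    return all(out(impl, after(impl, i0, t)) <= out(spec, after(spec, s0, t))
               for t in bounded_traces(spec, s0))

# The specification allows tea or coffee after the button; an implementation
# that only ever serves tea still conforms to it.
spec = {("q0", "BUTTON?", "q1"), ("q1", "TEA!", "q0"), ("q1", "COFFEE!", "q0")}
impl = {("p0", "BUTTON?", "p1"), ("p1", "TEA!", "p0")}
print(ioco(impl, "p0", spec, "q0"))  # True
```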

Test Case

A Test Case is an IOLTS:
  • modeling the observation of quiescence
  • being tree-structured
  • being finite and deterministic
  • having final states pass and fail

Parallel Execution

A test case is executed by putting it in parallel with the implementation model, leading to a verdict.
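
Putting the last two slides together, a sketch with a made-up test case and SUT stub:

```python
# A tree-structured, deterministic test case with pass/fail verdicts,
# executed in parallel with a hypothetical SUT stub (both are assumptions
# made up for illustration).
test_case = {
    "t0": ("stimulate", "BUTTON?", "t1"),
    "t1": ("observe", {"TEA!": "pass", "COFFEE!": "fail"}),
}

def execute(test_case, sut):
    """`sut` is a callable mapping an input action to the SUT's next output."""
    node, stimulus = "t0", None
    while node not in ("pass", "fail"):
        step = test_case[node]
        if step[0] == "stimulate":
            _, stimulus, node = step
        else:
            _, branches = step
            observed = sut(stimulus)
            node = branches.get(observed, "fail")  # unexpected output: fail
    return node

# A stub implementation that always serves tea after the button is pressed.
print(execute(test_case, lambda action: "TEA!"))  # -> pass
```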

So... What?

Creating the specification model is complicated.

But then, it is possible to do cool stuff!

What about automatically generating it?

Automated Generation Of Specification Models

  • By leveraging the API documentation
  • By instrumenting the code (tracing)
  • By leveraging the logs
  • By monitoring the system

Current Research

Challenge

Based on software running in a production environment, would it be possible to:

extract a knowledge base that can be formalized by a model that can be used to generate tests and/or specifications?

Context (1/2)

Michelin relies on a method close to the Computer Integrated Manufacturing (CIM) approach to control its production:

  • L4: Business Software
  • L3: Factory Management (a virtual level, as it is not much used)
  • L2: Supervision / Workshop Management
  • L1: Automata

These levels can exchange data with each other.

Context (2/2)

We focus on Level 2 applications, but there are many differences between them, such as:

  • Programming Language
  • Framework
  • Design
  • Version

Hypotheses

  • Applications deployed in production behave as expected
  • Don't consider (existing) specifications

The Big Picture

Work In Progress

What Can We Do?

  • Test Data can be inferred from recorded data
  • "Easy" record & replay
  • Generation of a degraded model
  • Generation of documentation and/or specification
  • Generation of tests (code)
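
For instance, the record & replay idea could be sketched as follows; the trace format and field names are hypothetical:

```python
# A sketch of record & replay over recorded traces; the trace format
# (request/response pairs) and field names are assumptions.
recorded_traces = [
    ({"action": "start", "speed": 42}, {"status": "ok"}),
    ({"action": "stop"},               {"status": "ok"}),
]

def replay(traces, sut):
    """Re-send every recorded request to `sut` and compare the responses."""
    verdicts = []
    for request, expected_response in traces:
        actual = sut(request)
        verdicts.append("pass" if actual == expected_response else "fail")
    return verdicts

# A stub SUT that always answers "ok".
print(replay(recorded_traces, lambda request: {"status": "ok"}))  # ['pass', 'pass']
```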

Automatic Funktional Testing Tool

A monitor records incoming/outgoing data (traces).

An Expert System is used to generate models.

The tool communicates with an explorer to "feed" itself.

Based on the model, it is possible to generate test cases.

Automatic Funktional Testing Tool

Written in Java, PHP, Node.js, and JavaScript.

Distributed system thanks to RabbitMQ.

Service Oriented Architecture FTW!

This tool has been built for web applications. Michelin will get its own internal tool.

Perspectives

  • Formalizing the different generated models (WIP)
  • Proving the correctness of each model (WIP)
  • Test Data generation (WIP)
  • Adding more rules to the Expert System
  • Generating Test Cases
  • Improving generated code

Adapting this work for Michelin needs.

Conclusion

Model-based Testing is the next big change in the industry, and it has already begun!

Michelin gives me a great opportunity to validate my experiments, and to develop a realistic tool coming from academia for the industry.

Thank You.

Questions?