GitHub: willdurand/ouatist-slides
Automated Test Generation for applications and production machines in a Model-based Testing approach.
Software testing is the process of analyzing a software item to detect the differences between existing and required conditions (that is, bugs) and to evaluate the features of the software item.
It is a verification and validation process.
Validation → "Are we building the right software?" Verification → "Are we building the software right?"
This does not mean that the software is completely free of defects. Rather, it must be good enough for its intended use.
Unit Testing, Integration Testing, Functional Testing, System Testing, Stress Testing, Performance Testing, Usability Testing, Acceptance Testing, Regression Testing, Beta Testing, <Whatever You Want> Testing
People now understand the need for testing things
They mostly do testing by hand
Formal Verification is the act of proving or disproving the correctness of intended algorithms underlying a system with respect to a certain formal specification or property, using formal methods of mathematics.
Software Testing is the process of executing a program or system with the intent of finding errors. However, "Testing shows the presence, not the absence of bugs" (Dijkstra).
It is the validation part!
A Test Suite (TS) is a set of Test Cases.
A Test Case (TC) consists of:
Inputs which have been devised to test the system.
A tester is a mechanism used for determining whether a test has passed or failed.
White-box testing is a method that tests the internal structure of a System Under Test (SUT).
Implementation: a realistic, executable piece of software or hardware that should provide desired behaviors.
It is usually done at the unit level.
Black-box testing is a method that tests the functionalities of a SUT without knowing its internal structure.
Specification: a description of the desired behaviors that define only what the system should do, not how it is done.
Also known as Functional Testing.
The combination of White-box testing and Black-box testing.
You have access to the relevant internal parts of your SUT.
Testing cannot guarantee the absence of faults.
How to select a subset of Test Cases from all possible Test Cases with a high chance of detecting most faults?
Black-box Testing: Combinatorial Testing (Pairwise), Equivalence Partitioning, Boundary Value Analysis, Function Coverage
White-box Testing: Fuzz Testing (Random), Statistical Testing, Statement Testing, Path Testing, Branch Testing, Condition Testing, Multiple Condition (MC) Testing, Loop Testing, Mutation Testing
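Two of these black-box selection techniques can be shown together in a short sketch. The `is_valid_age` function and its accepted range are hypothetical, chosen only to illustrate Equivalence Partitioning and Boundary Value Analysis:

```python
# Sketch: selecting a small, high-yield subset of test cases for a
# hypothetical SUT that accepts ages in the range 18..65 inclusive.

def is_valid_age(age):   # stand-in SUT
    return 18 <= age <= 65

# Equivalence Partitioning: one representative per class
# (below the range, inside it, above it).
partition_cases = [10, 40, 80]

# Boundary Value Analysis: just outside, on, and just inside each boundary.
boundary_cases = [17, 18, 19, 64, 65, 66]

expected = {10: False, 40: True, 80: False,
            17: False, 18: True, 19: True,
            64: True, 65: True, 66: False}

selected = partition_cases + boundary_cases
results = {age: is_valid_age(age) for age in selected}
print(results == expected)  # True: 9 cases instead of all possible ages
```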
http://people.cs.aau.dk/~bnielsen/TOV07/lektioner/whitebox-07.pdf
White-box Testing → Automatic Testing
Black-box Testing → Model-based Testing
Model-based Testing (MbT) is the application of Model-based design for designing, and optionally also executing, artifacts to perform software testing.
Models can be used to represent the desired behavior of an SUT, or to represent testing strategies and a test environment.
Combining 2. and 3. leads to On-The-Fly Testing.
A Model is a description of a system that helps you understand and predict its behavior. It does not need to completely describe it to be effective.
Behavior/Control oriented: Finite Automata (FSM, LTS), Petri Nets, Synchronous Languages (Lustre, Scade) Data oriented (pre/post): JML, Spec#, OCL, B-Method, Praspel
Executing a test case on a system yields a set of observations.
Every observation represents a part of the implementation model of the system.
The set of all observations made with all possible test cases represents the complete implementation model of the system.
For every system there is a corresponding observationally equivalent implementation model:
\forall\ iut \in IMPS,\ \exists\ I_{iut} \in MODS
To define conformance between an implementation under test iut and a specification Spec, we use the notion of an implementation relation imp:
imp \subseteq MODS \times SPECS
with SPECS the set of specifications.
An implementation iut conforms to a specification Spec if the existing model I_{iut} of iut is imp-related to Spec .
Conformance Testing assesses the conformance of an unknown implementation under test (iut) to its specification (Spec) by means of test experiments. Experiments consist of stimulating iut in certain ways and observing its reactions. This process is called test execution.
Successful execution of a test case TC :
I_{iut}\ {\bf passes}\ TC
It is easily extended to a test suite TS :
I_{iut}\ {\bf passes}\ TS \Leftrightarrow \forall\ TC \in TS : I_{iut}\ {\bf passes}\ TC
I_{iut}\ {\bf fails}\ TC \Leftrightarrow I_{iut}\ \cancel{\bf passes}\ TC
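These pass/fail definitions translate directly into code. The sketch below is illustrative: the SUT, the oracles, and the representation of a test case as an (input, oracle) pair are all assumptions made for the example:

```python
# Sketch of the definitions above: an implementation passes a Test Suite
# iff it passes every Test Case in it.

def passes(implementation, test_case):
    inputs, oracle = test_case          # a TC = inputs + oracle (assumed shape)
    return oracle(implementation(inputs))

def passes_suite(implementation, test_suite):
    # I_iut passes TS  <=>  forall TC in TS : I_iut passes TC
    return all(passes(implementation, tc) for tc in test_suite)

# Hypothetical SUT and suite.
sut = lambda x: x * 2
suite = [(3, lambda out: out == 6),
         (0, lambda out: out == 0)]

print(passes_suite(sut, suite))  # True
```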
Soundness \forall\ iut \in IMPS : [ (iut\ conforms\ to\ Spec) \Rightarrow (iut\ passes\ TS) ]
Exhaustiveness \forall\ iut \in IMPS : [ (iut\ passes\ TS) \Rightarrow (iut\ conforms\ to\ Spec) ]
Completeness \forall\ iut \in IMPS : [ (iut\ conforms\ to\ Spec) \Leftrightarrow (iut\ passes\ TS) ]
TS \subseteq TESTS is a test suite.
A test architecture is an abstract description of the environment in which an implementation under test ( iut ) is embedded, and where it communicates with a tester.
A Labelled Transition System (LTS) describes the transitions from one state to another, caused by the execution of actions.
L = (S, Act, \rightarrow)
L = (S, Act, \rightarrow) with S = \lbrace s_{1}, s_{2}, s_{3}, s_{4} \rbrace and Act = \lbrace COFFEE, TEA, BUTTON \rbrace
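An LTS like this one fits naturally into a transition dict. The concrete transitions below are an assumption (the slides' figure is not reproduced here), chosen to be consistent with the traces listed on the next slides:

```python
# A minimal LTS L = (S, Act, ->) encoded as a transition map.
# The transition relation below is an assumed reconstruction.

S = {"s1", "s2", "s3", "s4"}
Act = {"COFFEE", "TEA", "BUTTON"}

# -> as a map: state -> list of (action, successor) pairs.
transitions = {
    "s1": [("BUTTON", "s2")],
    "s2": [("TEA", "s3"), ("COFFEE", "s4")],
    "s3": [("BUTTON", "s2")],
    "s4": [("BUTTON", "s2")],
}

def step(state, action):
    """Successors of `state` reachable by executing `action`."""
    return [succ for (a, succ) in transitions.get(state, []) if a == action]

print(step("s2", "TEA"))  # ['s3']
```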
Traces describe the observable behavior of LTS.
traces(s) = \lbrace \sigma | s \stackrel{\sigma}{\Longrightarrow} \rbrace
with
s \in S
The \Longrightarrow relation is used to abstract from \tau transitions.
\begin{align} traces(s_{3}) = & \lbrace \\ & BUTTON, \\ & BUTTON \cdot TEA \cdot BUTTON, \\ & \dots \rbrace = traces(s_{1}) = traces(s_{4}) \\ \\ traces(s_{2}) = & \lbrace \\ & TEA, \\ & COFFEE, \\ & TEA \cdot BUTTON \cdot TEA, \\ & \dots \rbrace \end{align}
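Bounded trace enumeration makes these sets concrete. The transition relation below is an assumption consistent with the traces listed above (the slides' automaton figure is not reproduced here):

```python
# Sketch: enumerate all traces of bounded length from a state of an LTS.
# The transition relation is an assumed reconstruction.

transitions = {
    "s1": [("BUTTON", "s2")],
    "s2": [("TEA", "s3"), ("COFFEE", "s4")],
    "s3": [("BUTTON", "s2")],
    "s4": [("BUTTON", "s2")],
}

def traces(state, max_len):
    """All action sequences of length <= max_len executable from `state`."""
    result = {()}                       # the empty trace is always observable
    if max_len == 0:
        return result
    for action, succ in transitions.get(state, []):
        for tail in traces(succ, max_len - 1):
            result.add((action,) + tail)
    return result

print(sorted(traces("s2", 2)))
```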
By partitioning the action labels ( Act ) into inputs ( Act_{I} ) and outputs ( Act_{U} ), we can obtain an IOLTS:
Act = Act_{I} \cup Act_{U}
The names of input actions end with " ? ", and those of output actions with " ! ".
We introduce a special action δ to denote quiescence.
\begin{align} Act_{I} = & \lbrace BUTTON?, COFFEE?, TEA? \rbrace \\ Act_{U} = & \lbrace COFFEE!, TEA! \rbrace \end{align}
Relating two LTS can be done in a variety of manners:
Not all relations are suited for testing purposes.
Common implementation relation for IOLTS:
i\ ioco\ s = \forall\ \sigma \in straces(s) : out(i\ after\ \sigma) \subseteq out (s\ after\ \sigma)
i\ ioco\ s
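The ioco definition can be sketched under simplifying assumptions: deterministic models, no internal (τ) actions, suspension traces passed in explicitly, and quiescence δ modeled as the output set when no real output is enabled. All models and names below are illustrative:

```python
# A simplified ioco check. Outputs end in "!", inputs in "?";
# DELTA marks quiescence (no output enabled).

DELTA = "delta"

def after(lts, state, trace):
    """State reached after executing `trace` (None if not executable)."""
    for action in trace:
        succ = [t for (a, t) in lts.get(state, []) if a == action]
        if not succ:
            return None
        state = succ[0]         # assumption: deterministic model
    return state

def out(lts, state):
    """Enabled outputs in `state`, or quiescence if there are none."""
    outs = {a for (a, _) in lts.get(state, []) if a.endswith("!")}
    return outs or {DELTA}

def ioco(impl, spec, straces):
    # i ioco s <=> forall sigma in straces(s):
    #              out(i after sigma) <= out(s after sigma)
    for sigma in straces:
        si, ss = after(impl, "s0", sigma), after(spec, "s0", sigma)
        if ss is None:
            continue
        if si is not None and not out(impl, si) <= out(spec, ss):
            return False
    return True

# Hypothetical spec: after BUTTON?, the machine may serve TEA! or COFFEE!.
spec = {"s0": [("BUTTON?", "s1")], "s1": [("TEA!", "s2"), ("COFFEE!", "s2")]}
# An implementation that only ever serves TEA! is allowed by ioco here.
impl = {"s0": [("BUTTON?", "s1")], "s1": [("TEA!", "s2")]}

print(ioco(impl, spec, [(), ("BUTTON?",)]))  # True
```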
A few tools: TorX, TGV, Autolink, TestComposer.
Executing a test case by putting it in parallel with the implementation model, leading to a verdict.
Creating the specification model is complicated.
But then, it is possible to do cool stuff!
What about automatically generating it?
Based on software running in a production environment, would it be possible to:
extract a knowledge base that can be formalized by a model that can be used to generate tests and/or specifications?
Michelin relies on a method close to the Computer Integrated Manufacturing (CIM) approach to control its production:
These levels can exchange data among them.
The focus is on Level 2 applications but, then again, there are a lot of differences between them, such as:
A monitor records incoming/outgoing data (traces).
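A minimal monitor can be sketched as a wrapper that records every incoming and outgoing message as a trace. The `Monitor` class, the stand-in SUT, and the trace format here are all illustrative assumptions, not the actual tool:

```python
# Sketch of a monitor that records incoming/outgoing data (traces),
# wrapping a hypothetical SUT callable.

import json

class Monitor:
    def __init__(self, sut):
        self.sut = sut
        self.trace = []  # recorded (direction, payload) events

    def call(self, message):
        self.trace.append(("in", message))     # incoming data
        response = self.sut(message)
        self.trace.append(("out", response))   # outgoing data
        return response

    def dump(self):
        # Serialized traces can later feed the model-inference step.
        return json.dumps(self.trace)

mon = Monitor(lambda msg: msg.upper())         # stand-in SUT
mon.call("start")
mon.call("stop")
print(mon.dump())
```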
An Expert System is used to generate models.
The tool communicates with an explorer to "feed" itself.
Based on the model, it is possible to generate test cases.
Written in Java, PHP, Node.JS, and JavaScript.
Distributed system thanks to RabbitMQ.
Service Oriented Architecture FTW!
This tool has been built for web applications. Michelin will get its own internal tool.
Adapting this work for Michelin needs.
Model-based Testing is the upcoming change in the industry, and it has already begun!
Michelin gives me a great opportunity to validate my experiments, and to develop a realistic tool coming from academia for the industry.