Overview
- Why we test software
- A quick overview of the testing landscape
- The anatomy of a unit test
- Patterns / Anti Patterns of unit testable JavaScript
-
If you are expecting a lot of code samples and hello-world testing stuff, then you will be sorely disappointed
-
Understanding the [context] of why we test is more important than the how. Your mileage on the [how] may vary,
but if you understand the [why] you can reason about how to implement the [how] in your own context.
Why we test software
"Just because you’ve counted all the trees doesn’t mean you’ve seen the forest."
Most software projects fail
“60-70% of software projects fail”
“...only about 3% of those can be attributed to technical challenges”
-
Failure can come in many forms, from a project taking too long or costing more than expected to flat out never launching.
What are some of the reasons our projects fail?
- Unrealistic or unarticulated project goals
- Badly defined system requirements (BDUF or none)
- Inaccurate estimates of needed resources
- Poor reporting of a project's status
- Sloppy development practices
The Iteration Cycle
-
Most of our failures come down to latency in the feedback loop. Communication.
The Iteration Cycle Explained
- To develop something, we first must understand what we are developing. We unearth a series of functional requirements that match our business requirements.
- To deliver something, we write code that implements our functional requirements.
- To evaluate something, we rely on automated or manual testing to prove the requirements have been met.
- To provide feedback, the stakeholders reflect on the current state of the software available to them.
- As the iteration cycle closes, stakeholders assess work delivered in steps 1-4 and evolve requirements where necessary.
-
Each iteration cycle is about implied or explicit, verbal or non-verbal communication.
-
Walk through each step of the loop and discuss situations in which it can break down
How does testing affect our feedback loop?
- Talk about the analogy of good, cheap and fast (you can only have two)
- The idea that you can choose two of these is somewhat misleading, because there is an implied level of quality expected by those who are paying for a project.
- I propose that in reality quality is a constant (and must remain so in order to practice agile). Only cost and time are variables we can actually control.
- Talk about the analogy of cosmetic surgery and how it relates to client services app development at large scale (Dr Nick)
The cost of poor communication
- Software Entropy
- The feedback loop is there whether we like it or not. We can choose how/when to speed up the process
- Psychologically, things that don't have our immediate attention tend to get deprioritized and ignored
- Entropy equals disorder, which equals unpredictability: the knowledge that changing one thing can have unintended consequences. Unpredictability in a system manifests itself in an inability to estimate properly, assess risk, and generally understand the state of the system.
- Entropy has a direct correlation with the cost/time curve in a long-running project. Quick and cheap seemed so great at first, until you have to maintain it.
As developers, how can we improve the feedback loop?
"Those who fail to learn from history are doomed to repeat it." ~ Winston Churchill
- Almost everything in our profession is about standing on the shoulders of our predecessors.
Extreme Programming
-
Extreme Programming is a set of best practices formulated by Kent Beck in 1996 while working on a large payroll redesign at Chrysler.
TODO: Explain the differences between the circles
Agile
-
Extreme Programming closely aligns with the goals of agile. In fact, XP could be thought of as the
developer-centric part of the greater agile methodology.
It's important to point out that this is probably a gross oversimplification of the evolution of agile/XP, but it
is fine for our context.
Key Principles of XP/Agile
- Code must be written to agreed upon standards (DDD)
- Welcome changing requirements
- Technical delivery (velocity) should remain consistent and predictable
- Deliver working software frequently
-
Working software is the primary measure of success and working software is defined as that which is tested
-
The super important piece here is the working software. Working software has been defined and redefined to imply quality proven through testing.
What are Unit Tests?
- A unit test is a piece of code that drives a unit of work and then checks a single assumption about the behavior of that work
- Must run in isolation from other units
- Must be fast (< 5ms)
- Must produce consistent results regardless of the order the tests were run or the number of times run
photo credit: Sam Hatoum
Isolating a Unit: Test Doubles
- Test doubles come in various forms, typically at minimum a stub and a mock (see the Sinon sketch after this list)
- Stubs wrap existing functions or objects with canned behavior that doesn't call the actual underlying method or object
- Mocks are stubs with pre-programmed expectations
- http://sinonjs.org
photo credit: Sam Hatoum
Can use the stunt man analogy
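A minimal sketch of the two, using Sinon as a browser global and a hypothetical userService object (the names here are illustrative only):
```
// userService is a made-up example object, not a real library
var userService = {
  fetch: function (id) { /* imagine a real HTTP call here */ },
  notify: function (msg) { /* imagine a real email being sent here */ }
};

// Stub: canned behavior, the real fetch is never called
var stub = sinon.stub(userService, "fetch").returns({ id: 1, name: "Ada" });
userService.fetch(1); // -> { id: 1, name: "Ada" }, no HTTP request made

// Mock: a stub with pre-programmed expectations, verified after the fact
var mock = sinon.mock(userService);
mock.expects("notify").once().withArgs("welcome");
userService.notify("welcome");
mock.verify();   // throws if notify was not called exactly once with "welcome"

stub.restore();
mock.restore();
```
The stub merely silences the real call; the mock additionally fails the test if the expected interaction never happens.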
What are Integration Tests?
- Integration tests confirm that two separate units can work together by testing the interfaces between them (see the sketch below)
photo credit: Sam Hatoum
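A minimal sketch with Mocha/Chai, using two hypothetical modules (cart and pricing) whose names are illustrative only:
```
// Hypothetical modules, used only for illustration
var pricing = {
  totalFor: function (items) {
    return items.reduce(function (sum, item) { return sum + item.price; }, 0);
  }
};

var cart = {
  items: [],
  add: function (item) { this.items.push(item); },
  checkout: function () { return pricing.totalFor(this.items); }
};

describe("cart + pricing integration", function () {
  beforeEach(function () {
    cart.items = []; // reset shared state between test cases
  });

  it("should total real prices at checkout", function () {
    cart.add({ price: 10 });
    cart.add({ price: 5 });
    // the real pricing module is exercised through cart's interface,
    // rather than being stubbed out as a unit test would do
    expect(cart.checkout()).to.equal(15);
  });
});
```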
What are Acceptance Tests?
photo credit: Sam Hatoum
This is a whole talk by itself. It is important to notice the difference between acceptance and unit tests
Common Misconceptions about TDD
- That unit testing is primarily about fixing or preventing bugs
- That it doesn't matter if we write the tests before or after we write the code
- That testing first somehow implies no up front design
- That testing will be a large time commitment and slow down velocity
- "Software testing proves the existing of bugs not their absence."
- "Everything should be taken in context. If your trying to introduce testing into a large legacy system where the code is already written then
yes it will be large time investment and perhaps not worth it. Otherwise testing saves countless hours/days in the long term."
Red
- Implied knowledge of the Domain Model and a list of functional requirements
- The minimum up-front design necessary to write tests that imply the inner workings of your unwritten code, based on the functional requirements
- The writing of a test spec to explore the desired functional behavior
- A failing test verifies that the desired functional behavior is not yet implemented or passing
Green
- We implement our functional design to make the code pass
- We reevaluate our design decisions if necessary
- We avoid making future assumptions about our code (KISS / YAGNI); a minimal red/green sketch follows below
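A minimal red/green sketch, assuming Mocha/Chai and a hypothetical slugify() requirement:
```
// Red: the spec is written first and fails, because slugify does not exist yet
describe("slugify()", function () {
  it("should lowercase and hyphenate a title", function () {
    expect(slugify("Hello World")).to.equal("hello-world");
  });
});

// Green: the simplest implementation that makes the spec pass
function slugify(title) {
  return title.toLowerCase().replace(/\s+/g, "-");
}
```
The refactor step then cleans up the passing implementation while the spec keeps it honest.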
Refactor
- We look for code smells
- Removal of code duplication
- Object, class, module, variable and method names should clearly represent their purpose and use
- Recognize common patterns and implement the necessary design patterns
- Rethink any slow-running tests (as they are unacceptable)
- "Premature performance optimization is the root of all evil." ~ Knuth
Anatomy of a Test Harness
- Test Framework (Mocha, Jasmine, QUnit)
- Test Runner (Karma, Testem, Mocha CLI, Jasmine CLI/Browser)
- Assertion Library (Chai)
- Test Doubles (SinonJS)
This is the bare minimum... move quickly through these
Test Framework
Responsible for defining the syntax for structuring test specs (BDD/TDD).
Test Runner
Responsible for running the test framework and displaying its results in either the CLI or the browser. A minimal Karma config is sketched after the list below.
- Karma
- Testem
- Jasmine CLI/Browser
- Mocha CLI/Browser
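For example, a minimal karma.conf.js might look like the following (assuming the karma-mocha, karma-chai and karma-chrome-launcher plugins are installed alongside Karma):
```
// karma.conf.js -- a minimal sketch, not a complete configuration
module.exports = function (config) {
  config.set({
    frameworks: ["mocha", "chai"],               // test framework + assertion library
    files: ["src/**/*.js", "test/**/*.spec.js"], // source and spec files to load
    browsers: ["Chrome"],                        // where the specs actually execute
    singleRun: true                              // run once and exit (handy for CI)
  });
};
```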
Assertion Library
Responsible for validating inputs/outputs in a boolean fashion. Typically used to make tests more human-readable.
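For example, with Chai's expect (BDD) interface; the same checks can also be written with Chai's assert or should interfaces:
```
expect(2 + 2).to.equal(4);
expect([1, 2, 3]).to.have.length(3);
expect({ name: "Ada" }).to.have.property("name");
expect(function () { throw new Error("boom"); }).to.throw(Error);
```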
Anatomy of a Test Case
```
// implied setup
it("human readable functional requirement", function() {
// explicit setup
// execute unit under test
// one or more assertions
// explicit teardown
});
// implied teardown
```
- an implied or explicit setup
- an explicit execution
- one or more assertions
- an implied or explicit teardown
Anatomy of a Test Suite
-
A test suite is a series of test cases that, when run together, confirm the validity of the piece of software
under test
- Typically provides grouping context (BDD) and helper methods for setup / teardown for groups of test cases
```
describe("foo", function() {
before(function() {
// setup test case
});
after(function() {
// teardown test case
});
it("human readable functional requirement", function() {
});
it("human readable functional requirement", function() {
});
it("human readable functional requirement", function() {
});
it("human readable functional requirement", function() {
});
});
```
Examples
```
describe("getClosestSupportedWidth()", function() {
it("should throw on invalid parameters", function() {
expect(function() {
Responsify.getClosestSupportedWidth('a');
}).to.throw;
expect(function() {
Responsify.getClosestSupportedWidth();
}).to.throw;
});
it("should calculate closest supported width if supportedWidths is set", function() {
Responsify.options.supportedWidths = [100, 500, 1000];
var width = Responsify.getClosestSupportedWidth(777);
expect(width).to.equal(1000);
});
it("should return provided width if no explicit supportedWidths are set", function() {
Responsify.options.supportedWidths = [];
var width = Responsify.getClosestSupportedWidth(777);
expect(width).to.equal(777);
});
});
```
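For reference, an implementation along these lines would satisfy the specs above (this is an illustrative sketch, not Responsify's actual source):
```
// Illustrative only -- one possible implementation that passes the specs above
Responsify.getClosestSupportedWidth = function (width) {
  if (typeof width !== "number" || isNaN(width)) {
    throw new TypeError("width must be a number");
  }
  var widths = this.options.supportedWidths;
  if (!widths || widths.length === 0) {
    return width; // no explicit supported widths configured
  }
  // pick the supported width closest to the requested width
  return widths.reduce(function (closest, candidate) {
    return Math.abs(candidate - width) < Math.abs(closest - width) ? candidate : closest;
  });
};
```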
Writing Testable JavaScript
The primary benefit of writing tests is not the tests themselves, but the act of writing code that can be tested
- We want to validate our initial design decisions
- We want to prove that our code delivers the desired functionality
- We want to assess that our code is malleable to change
- We want to confirm that our code is isolatable
The Single Responsibility Principle is a key tenet of writing testable code
The SRP states that a given context (class, function, variable) should have a single responsibility; that responsibility is defined
as a single reason to change
Common Opportunities for Responsibility Change
- Calculation
- Mutation
- Configuration
- Communication
- Presentation
Exercise: FizzBuzz
Write a program that prints the numbers from 1 to 100, but for multiples of three print Fizz instead
of the number and for multiples of five print Buzz. For numbers that are multiples of both three and five print FizzBuzz.
FizzBuzz: Simple Solution
```
var FizzBuzz = (function() {
  return {
    run: function() {
      for (var i = 1; i <= 100; i++) {
        if (i % 3 === 0 && i % 5 === 0) {
          console.log("FizzBuzz");
        } else if (i % 3 === 0) {
          console.log("Fizz");
        } else if (i % 5 === 0) {
          console.log("Buzz");
        } else {
          console.log(i);
        }
      }
    }
  };
})();
```
Testing FizzBuzz: First Try
```
describe("FizzBuzz", function() {
it("should return FizzBuzz for numbers that are multiples of 3 and 5", function() {
FizzBuzz.run(); // presentation and calculation are combined
// ?? impossible to test
});
it("should return Fizz for numbers that are multiples of 3", function() {
// ??
});
it("should return Buzz for numbers that are multiples of 5", function() {
// ??
});
});
```
FizzBuzz: A Better Solution
```
var FizzBuzz = (function() {
  return {
    // presentation
    run: function() {
      for (var i = 1; i <= 100; i++) {
        console.log(this.calculate(i));
      }
    },
    // calculation
    calculate: function(num) {
      if (num % 3 === 0 && num % 5 === 0) {
        return "FizzBuzz";
      } else if (num % 3 === 0) {
        return "Fizz";
      } else if (num % 5 === 0) {
        return "Buzz";
      } else {
        return num.toString();
      }
    }
  };
})();
```
Testing FizzBuzz: Second Try
```
describe("FizzBuzz", function() {
it("should return FizzBuzz for numbers that are multiples of 3 and 5", function() {
// presentation and calculation are seperated
var value1 = FizzBuzz.calculate(15);
var value2 = FizzBuzz.calculate(30);
expect(value1).to.equal("FizzBuzz");
expect(value2).to.equal("FizzBuzz");
});
it("should return Fizz for numbers that are multiples of 3", function() {
var value1 = FizzBuzz.calculate(3);
var value2 = FizzBuzz.calculate(9);
expect(value1).to.equal("Fizz");
expect(value2).to.equal("Fizz");
});
it("should return Buzz for numbers that are multiples of 5", function() {
var value1 = FizzBuzz.calculate(5);
var value2 = FizzBuzz.calculate(10);
expect(value1).to.equal("Buzz");
expect(value2).to.equal("Buzz");
});
});
```
FizzBuzz: An Even Better Solution
```
var FizzBuzz = (function() {
  return {
    // configuration (workflow)
    run: function(start, end) {
      for (var i = start; i <= end; i++) {
        this.print(this.calculate(i));
      }
    },
    // presentation
    print: function(str) {
      console.log(str);
    },
    // calculation
    calculate: function(num) {
      if (num % 3 === 0 && num % 5 === 0) {
        return "FizzBuzz";
      } else if (num % 3 === 0) {
        return "Fizz";
      } else if (num % 5 === 0) {
        return "Buzz";
      } else {
        return num.toString();
      }
    }
  };
})();
```
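With presentation isolated behind print(), even the run() workflow becomes testable: stub print() (with Sinon, for example) instead of asserting against console.log. A sketch:
```
describe("FizzBuzz.run()", function() {
  var printStub;

  beforeEach(function() {
    printStub = sinon.stub(FizzBuzz, "print"); // silence real console output
  });

  afterEach(function() {
    printStub.restore();
  });

  it("should print one value per number in the range", function() {
    FizzBuzz.run(1, 15);
    expect(printStub.callCount).to.equal(15);
    expect(printStub.calledWith("FizzBuzz")).to.be.true;
  });
});
```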
Unit Testing Best Practices
Test within your boundaries. Don't test libraries you don't control.
- Avoid unit testing the DOM
- Don't test 3rd-party libraries you don't control; assume they are already tested (jQuery, Backbone, etc.) and stub them at the boundary instead (see the sketch below)
- Test UI behavior at the acceptance level not at the unit level
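For example, one way to respect the jQuery boundary is to stub it at the seam with Sinon (loadProfile here is a hypothetical unit that wraps $.ajax):
```
describe("loadProfile()", function() {
  var ajaxStub;

  beforeEach(function() {
    // replace jQuery.ajax so no real request is made
    ajaxStub = sinon.stub(jQuery, "ajax");
  });

  afterEach(function() {
    ajaxStub.restore();
  });

  it("should request the profile endpoint", function() {
    loadProfile(42); // hypothetical unit under test
    expect(ajaxStub.calledOnce).to.be.true;
    expect(ajaxStub.firstCall.args[0].url).to.equal("/profiles/42");
  });
});
```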
Each method under test should have defined inputs and outputs
Automation is Important
"Imperfect tests, run frequently, are much better than perfect tests that are never written at all." ~ Martin Fowler
Happy Path
- Don't test only the happy path. It's important to exercise both passing and failing conditions, as well as exception handling.
Run Test Cases in Isolation
- To avoid test-specific race conditions or global variable leakage affecting your results, run individual test cases in isolation with Mocha's `.only` (see below)
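With Mocha this looks like appending .only (or .skip) to describe or it:
```
// Run a single suite or spec in isolation while debugging
describe.only("getClosestSupportedWidth()", function() {
  it("should return provided width if no explicit supportedWidths are set", function() {
    // only specs inside this describe.only block will run
  });
});

// The inverse: temporarily exclude a spec without deleting it
it.skip("flaky spec under investigation", function() {});
```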
Async Testing
- When testing async functionality, it's important to reduce the latency of callbacks to the next turn of the event loop
- Fake time by using Sinon's fake timers (see the sketch below)
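A sketch combining both ideas, assuming Sinon's fake timers and a hypothetical debounce() under test:
```
describe("debounce()", function() {
  var clock;

  beforeEach(function() {
    clock = sinon.useFakeTimers(); // take control of setTimeout and Date
  });

  afterEach(function() {
    clock.restore();
  });

  it("should invoke the callback only after the wait elapses", function() {
    var callback = sinon.spy();
    var debounced = debounce(callback, 200); // debounce is hypothetical

    debounced();
    clock.tick(199);
    expect(callback.called).to.be.false;

    clock.tick(1); // advance fake time past the 200ms threshold
    expect(callback.calledOnce).to.be.true;
  });
});
```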
Favor Pragmatism over Dogmatism
- Pragmatism implies an educated implementation of theories or beliefs put into practical application
- Dogmatism implies sticking to theoretical beliefs or theories without considering practicality
Understanding the [context] of why we test is infinitely more important than the how. Your mileage on the [how] may vary,
but if you understand the [why] you can reason about how to implement the [how] in your own context.