Humans' Deep Learning for Humans

Created by Maciej Sobczak

On GitHub: williamwallacebrave / pycon-pl-2015

deep + learning (learnable) + human

learning?

  • DATA: new words in French; LEARNING: remembering the words; GOAL: a high score on the vocabulary test
  • DATA: technical specs; LEARNING: TDD; GOAL: passing the specs

human?

data = neural network

learnable?

All generalizations are false, including this one.

Mark Twain

the environment and generalization

The acquired skills should also be useful in situations not encountered during the learning phase.

when can generalization occur?

  • Invariance Assumption
  • Learnable Regularity Assumption (detectable + detection algorithm is feasible)
Marble world - adapted from: Probably Approximately Correct

P(\text{no 1s in 100 picks}) \approx 2 \times 10^{-10} = \frac{2}{10\,000\,000\,000}

P(\text{no 3s in 100 picks}) \approx 2 \times 10^{-10} = \frac{2}{10\,000\,000\,000}
...

Mono world - adapted from: Probably Approximately Correct
Rare world - adapted from: Probably Approximately Correct

97% confidence that after 100 picks one has seen representatives of 80% of the contents of the urn.
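These numbers are easy to check numerically. Below is a minimal Monte Carlo sketch of the marble world; the urn contents (five colours, 20% each) are an assumption chosen to match the 2 × 10^-10 figure above, since 0.8^100 ≈ 2 × 10^-10:

import random

# Hypothetical marble-world urn: five colours, each 20% of the contents.
COLOURS = [1, 2, 3, 4, 5]

def misses_colour(colour, picks):
    """One experiment: True if `colour` never appears in `picks` draws."""
    return all(random.choice(COLOURS) != colour for _ in range(picks))

# The analytic value for 100 picks is far too small to hit by sampling:
print(0.8 ** 100)                      # ~2.04e-10, the figure quoted above

# ...so verify the model on a shorter run instead:
trials = 100_000
hits = sum(misses_colour(1, picks=10) for _ in range(trials))
print(hits / trials, 0.8 ** 10)        # empirical vs analytic, both ~0.107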

animal vs plant

20 features (has ears, has leaves, is blue, ...); number of possible species: 2^20 = 1048576 (exponentially many!)
To count as learnable, an algorithm should learn from a number of examples that is polynomial in the number of features, and one should be able to control the error at a cost that is again polynomial.

Only certain (very limited) classes of functions can fulfil the above: linear separators and conjunctions.
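Conjunctions are a good concrete case: the classic elimination algorithm learns them from only polynomially many positive examples. A minimal sketch, using the hypothetical feature names from the animal-vs-plant slide:

# Learning a conjunction over boolean features by elimination:
# start with every literal, drop any literal a positive example
# contradicts. The surviving literals form the learned conjunction.

FEATURES = ["has_ears", "has_leaves", "is_blue"]   # hypothetical features

def learn_conjunction(positive_examples):
    """positive_examples: list of dicts mapping feature -> bool."""
    # Start with all literals: each feature and its negation.
    literals = {(f, v) for f in FEATURES for v in (True, False)}
    for example in positive_examples:
        # Keep only the literals consistent with this positive example.
        literals = {(f, v) for (f, v) in literals if example[f] == v}
    return literals

positives = [
    {"has_ears": True, "has_leaves": False, "is_blue": False},
    {"has_ears": True, "has_leaves": False, "is_blue": True},
]
print(learn_conjunction(positives))
# {('has_ears', True), ('has_leaves', False)} -- set order may vary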

PAC
  • the learning process takes a limited number of steps
  • the computation requires only a limited number of interactions with the world
  • learning leads to categorization with a small error rate

teacher

evolution = learning?

why is the world around us learnable?

breeding intelligent systems = neuroevolution

Travelling Salesman!
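As a toy stand-in for the breeding idea, here is a minimal evolutionary loop for the Travelling Salesman Problem; the city coordinates, population size, and mutation-only breeding are illustrative assumptions, not the setup from the talk:

import random, math

# Each "genome" is a tour: a permutation of city indices.
CITIES = [(0, 0), (1, 5), (5, 2), (6, 6), (8, 3), (2, 8)]

def tour_length(tour):
    """Total length of the closed tour."""
    return sum(math.dist(CITIES[a], CITIES[b])
               for a, b in zip(tour, tour[1:] + tour[:1]))

def mutate(tour):
    """Swap two random cities: the simplest mutation operator."""
    i, j = random.sample(range(len(tour)), 2)
    child = tour[:]
    child[i], child[j] = child[j], child[i]
    return child

population = [random.sample(range(len(CITIES)), len(CITIES))
              for _ in range(30)]

for generation in range(200):
    # Selection: keep the shortest half, breed it back up by mutation.
    population.sort(key=tour_length)
    survivors = population[:15]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(15)]

best = min(population, key=tour_length)
print(best, tour_length(best))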

deep?

backpropagation

e = (2a + b)(a - b + 4)

computational-graph figures adapted from: colah's blog
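Backpropagation on this expression is just the chain rule applied node by node to its computational graph (as in colah's post). A worked sketch, taking u = 2a + b and v = a - b + 4 as the intermediate nodes:

# Backpropagation on e = (2a + b)(a - b + 4).

a, b = 3.0, 1.0

# forward pass
u = 2 * a + b          # u = 7
v = a - b + 4          # v = 6
e = u * v              # e = 42

# backward pass: chain rule, one node at a time
de_du = v                         # d(u*v)/du
de_dv = u                         # d(u*v)/dv
de_da = de_du * 2 + de_dv * 1     # du/da = 2, dv/da = 1
de_db = de_du * 1 + de_dv * (-1)  # du/db = 1, dv/db = -1

print(e, de_da, de_db)  # 42.0, 2v + u = 19.0, v - u = -1.0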

deep representations?

adapted from: Andrej Karpathy's demo
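What such demos show can be reproduced in a few lines. The stand-in below (not Karpathy's actual demo) trains a tiny network on XOR; the hidden layer learns a representation in which the two classes, inseparable in the input space, become linearly separable:

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # 4 hidden units
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for step in range(10000):
    h = np.tanh(X @ W1 + b1)               # hidden representation
    p = sigmoid(h @ W2 + b2)               # predicted probability
    # backward pass: cross-entropy + sigmoid gives this simple delta
    d_out = (p - y) / len(X)
    d_h = (d_out @ W2.T) * (1 - h ** 2)    # tanh'(z) = 1 - tanh(z)^2
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(0)

print(np.round(p, 2))   # should approach [[0], [1], [1], [0]]
print(np.round(h, 2))   # the learned hidden representation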

can the representation be useful on its own?

image embeddings

word embeddings

adapted from: colah's blog
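The idea in miniature: words become vectors, nearby vectors mean related words, and directions can encode relations (the classic king - man + woman example). The 4-dimensional vectors below are made up purely for illustration; real embeddings such as word2vec have hundreds of dimensions:

import numpy as np

# Toy, hand-made "embeddings" -- not trained on any corpus.
vectors = {
    "king":  np.array([0.9, 0.8, 0.1, 0.3]),
    "queen": np.array([0.9, 0.1, 0.8, 0.3]),
    "man":   np.array([0.1, 0.9, 0.1, 0.2]),
    "woman": np.array([0.1, 0.2, 0.9, 0.2]),
}

def cosine(u, v):
    """Cosine similarity: 1.0 for parallel vectors, 0.0 for orthogonal."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

target = vectors["king"] - vectors["man"] + vectors["woman"]
for word, vec in vectors.items():
    print(word, round(float(cosine(target, vec)), 3))
# "queen" scores highest among the non-query words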

human?

adapted from: colah's blog

use case: foreign language acquisition

  • hierarchically driven: phonemes, chunking, words
  • naturally unfolding examples
  • don't be lazy: engage e.g. the motor cortex
  • interest driven: fight with the attention-selecting mechanism

A big, big thank you to Christopher Olah for his blog and for permission to adapt some of his examples for this talk.

Just as there are odors that dogs can smell and we cannot, as well as sounds that dogs can hear and we cannot, so too there are wavelengths of light we cannot see and flavors we cannot taste. Why then, given our brains wired the way they are, does the remark “Perhaps there are thoughts we cannot think,” surprise you? Evolution, so far, may possibly have blocked us from being able to think in some directions; there could be unthinkable thoughts.

Richard Hamming

References

  • Leslie Valiant, Probably Approximately Correct
  • Christopher Olah, colah's blog (colah.github.io)
  • Andrej Karpathy's demos