
Temporal Pooler Simultaneous Tandem Learning Experiments

Overview

These experiments explore the feasibility of training both the Temporal Memory and Temporal Pooler simultaneously in a TM-to-TP network. We used the Sensorimotor Capacity Experiment as the test bed, which trains on exhaustive sweeps of the world and tests on random sweeps. The primary performance measures were Temporal Pooler stability confusion and distinctness confusion.
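
For concreteness, here is a minimal sketch of how the two confusion measures can be computed, assuming the Temporal Pooler's output at each timestep is available as a set of active cell indices. The function names and exact formulas are illustrative; the experiment's actual metrics live in the repository linked at the bottom of this page.

```python
def stability_confusion(reps):
    """Mean fraction of active cells that change between consecutive
    timesteps while sweeping a single world (0.0 = perfectly stable).

    reps: list of sets of active TP cell indices, one per timestep.
    """
    diffs = []
    for prev, curr in zip(reps, reps[1:]):
        union = prev | curr
        if union:
            diffs.append(1.0 - len(prev & curr) / float(len(union)))
    return sum(diffs) / len(diffs) if diffs else 0.0


def distinctness_confusion(world_reps):
    """Mean pairwise overlap between the representations of different
    worlds (0.0 = perfectly distinct).

    world_reps: list of sets, the union of active TP cells per world.
    """
    overlaps = []
    for i in range(len(world_reps)):
        for j in range(i + 1, len(world_reps)):
            a, b = world_reps[i], world_reps[j]
            smaller = min(len(a), len(b))
            if smaller:
                overlaps.append(len(a & b) / float(smaller))
    return sum(overlaps) / len(overlaps) if overlaps else 0.0
```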

Initially we explored a number of variations on the basic Sensorimotor Capacity Experiment:

  • Vary number of worlds vs. Vary number of elements
  • Strict vs. reasonable parameter sets for Temporal Memory and Temporal Pooler
  • One-pass (simultaneous) learning vs. two-pass (TM first, then TP) learning (see the sketch after this list)
  • Regular vs. Slow (slower increments and more repetitions; 3x and 10x) Temporal Pooler learning
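
To make the one-pass versus two-pass distinction concrete, the sketch below contrasts the two training schedules. The tm and tp objects and their compute(..., learn=...) interface are stand-ins for the sensorimotor TM and TP classes, not their exact API.

```python
def one_pass_training(tm, tp, sequences, repetitions):
    """Tandem (one-pass): TM and TP both learn on every presentation."""
    for _ in range(repetitions):
        for seq in sequences:
            for pattern in seq:
                tm.compute(pattern, learn=True)
                tp.compute(tm.activeCells, learn=True)


def two_pass_training(tm, tp, sequences, repetitions):
    """Two-pass: train the TM alone first, then freeze it and train
    the TP on the trained TM's output."""
    for _ in range(repetitions):
        for seq in sequences:
            for pattern in seq:
                tm.compute(pattern, learn=True)
    for _ in range(repetitions):
        for seq in sequences:
            for pattern in seq:
                tm.compute(pattern, learn=False)
                tp.compute(tm.activeCells, learn=True)
```

Slow TP learning corresponds to scaling the TP's permanence increments down (by 3x or 10x) while scaling the repetitions up by the same factor.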

Given these options, we ran many experiments covering the possible combinations. We generally found:

  • Same (for vary worlds) to worse (for vary elements) performance for one-pass learning
  • Same (for vary worlds) to improved (for vary elements) performance for slow learning

At this point we saw that the reasonable parameters with one-pass learning gave reasonable performance, so we decided to focus on the Vary Worlds variation, reasonable parameters, and the regular TP learning rate. Below, we report more targeted experiments assessing tandem learning.

Temporal Pooler Stability Experiment

Question: Does the Temporal Pooler representation become stable in 2 training repetitions?

Experiment parameters: One pass learning, 2 worlds x 10 elements, Reasonable TM-TP params

Results: The TP representation does not become stable after one training repetition, but does after two (pictured below).

[Plot: Experiment 1, Temporal Pooler stability across training repetitions]

Distinctness Scalability Experiment

Question: Will two-repetition tandem learning fail when there are many worlds?

Experiment parameters: Vary worlds, reasonable TM-TP params

Results: Two-repetition, one-pass learning performs acceptably, and comparably to earlier results, in terms of distinctness. In the plot below, the left two columns show two-repetition results while the right two show three-repetition results. The first and third columns give means; the second and fourth give maximums (to capture the worst case).

[Plot: Experiment 2, distinctness confusion (means and maximums) for two- and three-repetition learning]

Permanence Learning Increments

Question: Is pooling overlap (the synPredictedInc parameter) doing most of the work in Temporal Pooler learning?

Experiment parameters: Vary worlds, reasonable TM-TP params, one-pass learning, 3 learning repetitions

Results: In the plot below we turn off the standard learning increment, synPermActiveInc. This leaves only synPredictedInc controlling learning, since the Temporal Pooler's standard decrement parameter is set to 0.0. If anything, performance improved when synPermActiveInc was turned off, suggesting that "pooling overlap" is doing the bulk of the work.
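
A hypothetical parameter sketch of this ablation: the increment names follow the text and standard NuPIC conventions, but the values and the way they would be passed to the TP are illustrative, not the experiment's exact configuration.

```python
# Baseline: both increments active; the standard decrement is already
# disabled (0.0) in these experiments.
baseline_params = dict(
    synPermActiveInc=0.001,  # standard per-active-synapse increment
    synPredictedInc=0.5,     # "pooling overlap" increment
    synPermInactiveDec=0.0,  # standard decrement, off in these experiments
)

# Ablation: zero out the standard increment, leaving synPredictedInc
# as the only remaining driver of Temporal Pooler learning.
ablation_params = dict(baseline_params, synPermActiveInc=0.0)
```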

[Plot: Experiment 3, distinctness performance with synPermActiveInc turned off]

Stability Convergence Time

Question: How does stability confusion behave over time?

Design: Generate a set of sequences based on exhaustive sweeps of worlds, as before. Then repeat a series of train-test runs (sketched after this list):

  • Run TM & TP with one-pass / tandem learning on a subsequence
  • Clear TP monitoring records (including stability)
  • Run test phase with random movements
  • Record stability mean and max
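
Reusing one_pass_training and stability_confusion from the sketches above, the schedule looks roughly like this. Here run_random_sweeps is a hypothetical callable returning one list of TP representations per random test sweep, and mmClearHistory follows the monitor-mixin naming used in nupic.research.

```python
from statistics import mean

def stability_convergence(tm, tp, sequences, piece_size, run_random_sweeps):
    """Alternate tandem training on sequence pieces with random-sweep
    tests, recording stability confusion after each test period."""
    history = []
    flat = [pattern for seq in sequences for pattern in seq]
    for start in range(0, len(flat), piece_size):
        piece = flat[start:start + piece_size]
        one_pass_training(tm, tp, [piece], repetitions=1)  # tandem learning
        tp.mmClearHistory()                 # clear TP monitoring records
        sweeps = run_random_sweeps(tm, tp)  # test phase: random movements
        scores = [stability_confusion(s) for s in sweeps]
        history.append((mean(scores), max(scores)))
    return history
```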

Experiment parameters: 20 worlds x 10 elements, Reasonable TM-TP, Many learning repetitions, Sequence piece size

Measure: Stability confusion mean and max after each test period

Results: We found comparable stability-confusion convergence between the one-pass and two-pass learning methods (see plot below).

[Plot: Experiment 4, stability confusion convergence for one-pass and two-pass learning]

Summary

Online (one-pass) learning is comparable to offline (two-pass) learning in many of the conditions tested:

  • Reasonable performance after two learning repetitions
  • Two-rep learning is robust to an increasing number of worlds
  • Online comparable to offline in stability convergence

“Pooling overlap” is doing most of the work in Temporal Pooler learning

Finally, the code for the experiments described on this page can be found here: https://github.com/numenta/nupic.research/tree/master/sensorimotor/experiments/capacity