Theme One Program • Motivation


Jon Awbrey

Aug 16, 2022, 11:11:48 AM
to Cybernetic Communications, Laws of Form, Ontolog Forum, Structural Modeling, SysSciWG
Cf: Theme One Program • Motivation 1
http://inquiryintoinquiry.com/2022/08/16/theme-one-program-motivation-1-2/

All,

The main idea behind the Theme One program is the efficient use
of graph-theoretic data structures for the tasks of “learning”
and “reasoning”.

I am thinking of “learning” in the sense of learning about an environment,
in essence, gaining information about the nature of an environment and
being able to apply the information acquired to a specific purpose.

Under the heading of “reasoning” I am simply lumping together all the
ordinary sorts of practical activities which would probably occur to
most people under that name.

There is a natural relation between the tasks. Learning the character of an
environment leads to the recognition of laws which govern the environment, and
making full use of that recognition requires the ability to reason logically
about those laws in abstract terms.

Resources
=========

• Theme One Program • Overview
https://oeis.org/wiki/Theme_One_Program_%E2%80%A2_Overview

• Theme One Program • Exposition
https://oeis.org/wiki/Theme_One_Program_%E2%80%A2_Exposition

• Theme One Program • User Guide
https://www.academia.edu/5211369/Theme_One_Program_User_Guide

• Theme One Program • Survey Page
https://inquiryintoinquiry.com/2022/06/12/survey-of-theme-one-program-4/

Regards,

Jon

Jon Awbrey

Aug 17, 2022, 10:10:13 AM
to Cybernetic Communications, Laws of Form, Ontolog Forum, Structural Modeling, SysSciWG
Cf: Theme One Program • Motivation 2
http://inquiryintoinquiry.com/2022/08/17/theme-one-program-motivation-2-2/

All,

A side-effect of working on the Theme One program over the course of
a decade was the measure of insight it gave me into the reasons why
empiricists and rationalists have so much trouble understanding each
other, even when those two styles of thinking inhabit the very same soul.

The way it came about was this. The code from which the program is
currently assembled initially came from two distinct programs, ones
I developed in alternate years, at first only during the summers.

In the Learner program I sought to implement a Humean empiricist style of
learning algorithm for the adaptive uptake of coded sequences of occurrences
in the environment, say, as codified in a formal language. I knew all the
theorems from formal language theory telling how limited any such strategy
must ultimately be in terms of its generative capacity, but I wanted to
explore the boundaries of that capacity in concrete computational terms.

In the Modeler program I aimed to implement a variant of Peirce's graphical
syntax for propositional logic, making use of graph-theoretic extensions
I had developed over the previous decade.

As I mentioned, work on those two projects proceeded in a parallel series
of fits and starts through interwoven summers for a number of years, until
one day it dawned on me how the Learner, one of whose aliases was “Index”,
could be put to work helping with sundry substitution tasks the Modeler
needed to carry out.

So I began integrating the functions of the Learner and the Modeler,
at first still working on the two component modules in an alternating
manner, but devoting a portion of effort to amalgamating their principal
data structures, bringing them into convergence with each other, and
unifying them over a common basis.

Another round of seasons and many changes of mind and programming
style later, I arrived at a unified graph-theoretic data structure, strung
like a wire through the far‑flung pearls of my programmed wit. But
the pearls I polished in alternate years maintained their shine along
axes of polarization whose grains remained skew in regard to each other.
To put it more plainly, the strategies I imagined were the smartest tricks
to pull from the standpoint of optimizing the program's performance on the
Learning task turned out, I found the next year, to be the dumbest moves to
pull from the standpoint of its performance on the Reasoning task. I gradually
came to appreciate that trade-off as a “discovery”.

Regards,

Jon

Jon Awbrey

Aug 18, 2022, 7:05:36 AM
to Cybernetic Communications, Laws of Form, Ontolog Forum, Structural Modeling, SysSciWG
Cf: Theme One Program • Motivation 3
http://inquiryintoinquiry.com/2022/08/18/theme-one-program-motivation-3-2/

All,

Sometime around 1970 John B. Eulenberg came from Stanford to direct
Michigan State's Artificial Language Lab, where I would come to spend
many interesting hours hanging out all through the 70s and 80s. Along
with its research program the lab did a lot of work on augmentative
communication technology for limited mobility users and the observations
I made there prompted the first inklings of my Learner program.

Early in that period I visited John's course in mathematical linguistics, which
featured “Laws of Form” among its readings, along with the more standard fare of
Wall, Chomsky, Jackendoff, and the Unified Science volume by Charles Morris which
credited Peirce with pioneering the pragmatic theory of signs. I learned about
Zipf's Law relating the lengths of codes to their usage frequencies and I named
the earliest avatar of my Learner program XyPh, partly after Zipf and playing
on the xylem and phloem of its tree data structures.

Jon Awbrey

Aug 19, 2022, 12:12:49 PM
to Cybernetic Communications, Laws of Form, Ontolog Forum, Structural Modeling, SysSciWG
Cf: Theme One Program • Motivation 4
http://inquiryintoinquiry.com/2022/08/19/theme-one-program-motivation-4-2/

All,

From Zipf's Law and the category of “things varying inversely with frequency”
I got a first brush with the idea that keeping track of usage frequencies is
part and parcel of building efficient codes.
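The inverse relation between frequency and code length can be sketched in a few lines. This is an illustrative example, not the Theme One code: it counts word frequencies in a sample and computes each word's ideal prefix-code length, −log₂(probability), so that more frequent words earn shorter codes.

```python
from collections import Counter
from math import log2

def ideal_code_lengths(words):
    """Map each word to the code length, in bits, an optimal
    prefix code would approach for its observed frequency."""
    counts = Counter(words)
    total = sum(counts.values())
    # -log2(p) is Shannon's bound on the code length for a
    # symbol of probability p: frequent symbols get short codes.
    return {w: -log2(n / total) for w, n in counts.items()}

sample = "the cat saw the dog and the dog saw the cat".split()
lengths = ideal_code_lengths(sample)
# "the" occurs most often, so its ideal code is shortest.
assert lengths["the"] < lengths["and"]
```

Keeping the counts current as usage accumulates is exactly the bookkeeping the paragraph above describes: the code assignments are only as good as the frequency record behind them.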

In its first application the environment the Learner has to learn is the
usage behavior of its user, as given by finite sequences of characters from
a finite alphabet which might as well be called “words” and as given by finite
sequences of those words which might as well be called “phrases” or “sentences”.
In other words, Job One for the Learner is the job of constructing a user model.
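A minimal sketch of that kind of user model, under assumed names and structure (not the actual Learner), might accumulate usage counts for words and for word-to-word transitions, updating as input arrives and anticipating the user's next word from the record so far:

```python
from collections import Counter, defaultdict

class UserModel:
    """Toy user model: tallies word usage and word transitions."""

    def __init__(self):
        self.word_counts = Counter()
        self.next_word = defaultdict(Counter)  # word -> following words

    def observe(self, sentence):
        """Take up one phrase of the user's behavior."""
        words = sentence.split()
        self.word_counts.update(words)
        for a, b in zip(words, words[1:]):
            self.next_word[a][b] += 1

    def predict(self, word):
        """Anticipate the most frequent follower of a word, if any."""
        following = self.next_word.get(word)
        return following.most_common(1)[0][0] if following else None

model = UserModel()
model.observe("open the file")
model.observe("open the door")
model.observe("close the door")
assert model.predict("the") == "door"
```

The point of the sketch is only that the model starts empty and improves with every observation, which is the sense of "constructing a user model" intended above.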

In that frame of mind we are not seeking anything so grand as a Universal Induction
Algorithm but simply looking for any approach that gives us a leg up, complexity-wise,
in Interactive Real Time.

Regards,

Jon

Jon Awbrey

Aug 20, 2022, 11:45:16 AM
to Cybernetic Communications, Laws of Form, Ontolog Forum, Structural Modeling, SysSciWG
Cf: Theme One Program • Motivation 5
http://inquiryintoinquiry.com/2022/08/20/theme-one-program-motivation-5-2/

All,

Since I'm working from decades-old memories of first inklings
I thought I might peruse the web for current information about
Zipf's Law. I see there is now something called the Zipf–Mandelbrot
(and sometimes –Pareto) Law and that was interesting because my wife
Susan Awbrey made use of Mandelbrot's ideas about self-similarity in
her dissertation and communicated with him about it. So there's more
to read up on ...

Just off-hand, though, I think my Learner is dealing with a different problem.
It has more to do with the savings in effort a learner gets by anticipating
future experiences based on its record of past experiences than the savings
it gets by minimizing bits of storage as far as mechanically possible.
There is still a type of compression involved but it's more like Korzybski's
“time-binding” than space-savings proper. Speaking of old memories ...

The other difference I see is that Zipf's Law applies to an established and
preferably large corpus of linguistic material, while my Learner has to start
from scratch, accumulating experience over time, making the best of whatever
data it has at the outset and every moment thereafter.


Jon Awbrey

Aug 21, 2022, 9:00:38 AM
to Cybernetic Communications, Laws of Form, Ontolog Forum, Structural Modeling, SysSciWG
Cf: Theme One Program • Motivation 6
http://inquiryintoinquiry.com/2022/08/20/theme-one-program-motivation-6-2/

All,

Comments I made in reply to a correspondent's questions
about delimiters and tokenizing in the Learner module
may be worth sharing here.

In one of the projects I submitted toward a Master's in psychology I used
the Theme One program to analyze samples of data from my advisor's funded
research study on family dynamics. In one phase of the study observers
viewed video-taped sessions of family members (parent and child) interacting
in various modes (“play” or “work”) and coded qualitative features of each
moment's activity over a period of time.

The following page describes the application in more detail and reflects on
its implications for the conduct of scientific inquiry in general.

• Exploratory Qualitative Analysis of Sequential Observation Data
https://oeis.org/wiki/User:Jon_Awbrey/Exploratory_Qualitative_Analysis_of_Sequential_Observation_Data

In this application a “phrase” or “string” is a fixed-length sequence
of qualitative features and a “clause” or “strand” is a sequence of
such phrases ending with what the observer considers a significant
pause in the action.
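The phrase-and-clause structure just described can be rendered as a small segmentation routine. This is a hypothetical reconstruction, with an assumed pause marker and assumed names: each fixed-length run of qualitative codes forms a phrase, and a significant pause closes off a clause of phrases.

```python
PAUSE = "--"  # assumed token an observer records for a significant pause

def segment(codes, phrase_len=2):
    """Group a flat stream of codes into clauses of fixed-length phrases."""
    clauses, clause, phrase = [], [], []
    for c in codes:
        if c == PAUSE:
            # A pause ends the current phrase (if any) and the clause.
            if phrase:
                clause.append(tuple(phrase))
                phrase = []
            if clause:
                clauses.append(clause)
                clause = []
            continue
        phrase.append(c)
        if len(phrase) == phrase_len:
            clause.append(tuple(phrase))
            phrase = []
    # Flush whatever remains at the end of the stream.
    if phrase:
        clause.append(tuple(phrase))
    if clause:
        clauses.append(clause)
    return clauses

stream = ["play", "calm", "play", "tense", "--", "work", "calm"]
assert segment(stream) == [[("play", "calm"), ("play", "tense")],
                           [("work", "calm")]]
```

Here a phrase is a pair (mode, affect) of coded features, and the pause marker splits the session into two clauses, mirroring the strings and strands described above.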

In the qualitative research phases of the study one is simply attempting to
discern any significant or recurring patterns in the data one possibly can.

In this case the observers are tokenizing the observations according to
a codebook that has passed enough intercoder reliability studies to afford
them all a measure of confidence it captures meaningful aspects of whatever
reality is passing before their eyes and ears.