MathPiper Lesson 6: The hidden rules mathematicians use to solve elementary algebra equations


Ted Kosan

Jun 18, 2016, 2:47:17 PM
To: mathf...@googlegroups.com
This lesson explains the fundamentals of the hidden rules
mathematicians use to solve elementary algebra equations:

http://patternmatics.org/temp_1/mathfuture/lesson6/

There is no assignment for this lesson other than updating to
MathPiperIDE version .903 if you would like to experiment with the
above worksheet. This is the final lesson in the series.

If anyone has questions, comments, or bug reports, please submit them
to the Math Future group.


Ted

kirby urner

Jun 18, 2016, 5:32:55 PM
To: mathf...@googlegroups.com

Just wanted to say I've scanned this once so far (impressive,
good use of color), and was gratified to see my former
boss, Scott Gray, a founder of O'Reilly School, quoted so
extensively.

That's right, he taught calculus for Ohio State, then at
some point the University of Illinois, am I right?  He was
good friends with the late Jerry Uhl, and his answer to
this concern about not-comprehending was to team up
with Wolfram and try to interest faculty in a calculus
curriculum based on students gaining more fluency
with the concepts behind the computations.  That was
the theory, in need of willing testers.

However, the university environment was not conducive
to much experimentation, and it was easier to drive Scott's
little startup away than to invite collaboration.  That's when
his startup showed up on O'Reilly's radar.  A lot of the same
concerns haunt the tech industry as a grasp of concepts
is what stands the test of time, but that's tough to measure
when the teaching regime is all about covering up for
imposter syndrome.

Gray was / is highly suspicious of passive learning,
especially video, as one gains a nodding-off understanding,
but the hands-on experience has gone away.  That's why
the O'Reilly School was almost religiously devoid of video,
a very difficult row for the parent company to hoe, given
its excellence in the instructional video sector (named
O'Reilly Media for a reason).  I think you can see how,
with distance learning, ye old "correspondence course",
there'd be the lure of subscription based TV with no need
to assess viewer competencies.  That's what a school does,
and it's hard work.  Publishing / broadcasting is less about
matching students with teachers in workable workload
ratios.  Any number may buy the same book, at least in
theory.

Anyway, I just wanted to say that was great to read about.

Parsing expressions into trees is a lot more straightforward
when the beginning syntax is like a LISP, right?  But we've
gotten used to typographical conventions and parentheses
for prioritization.  An expression is a jobs queue with a time
dimension, a set of steps, with parentheses used to set
the order of those steps, with very different outcomes.
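
A quick way to see the tree behind the typographical conventions (a minimal Python sketch using the standard ast module; the expressions are just arbitrary examples):

import ast

# Parentheses change the shape of the tree, and therefore the order of the steps.
print(ast.dump(ast.parse("2 * (3 + 4)", mode="eval").body))
print(ast.dump(ast.parse("2 * 3 + 4", mode="eval").body))
# In the first tree the addition is nested under the multiplication;
# in the second, the multiplication is nested under the addition.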

In a language like J, with so many operators and different
parts of speech, it made sense to abandon any implicit
prioritization and make parentheses the only way to change
the operations queue.  Strict right-to-left order is followed otherwise.
There's no LISP-like notation in that case, though there
are ways to manipulate the right-to-left queue in various ways,
mainly with an eye towards purging them of mutables, leaving
only the outlines of an algorithm, a verb.  Very pretty.

Kirby







kirby urner

Jun 18, 2016, 8:13:40 PM
To: mathf...@googlegroups.com

 
In a language like J, with so many operators and different
parts of speech, it made sense to abandon any implicit
prioritization and make parentheses the only way to change
the operations queue.  Strict right-to-left order is followed otherwise.
There's no LISP-like notation in that case, though there
are ways to manipulate the right-to-left queue in various ways,
mainly with an eye towards purging them of mutables, leaving
only the outlines of an algorithm, a verb.  Very pretty.

Kirby


No "LISP-ish" notation I meant to say.  Though there are ways...

Talking more about what's called "tacit programming" in J:


Kirby

 

Joseph Austin

Jun 25, 2016, 10:11:58 AM
To: mathf...@googlegroups.com
On Jun 24, 2016, at 8:51 PM, Ted Kosan <ted....@gmail.com> wrote:

Re: [mathpiper-dev] The precedence of the unary minus operator

However, currently my primary goal [for MathPiper] is to get the current rule set to
the point where it will solve a wide range of the more fundamental
elementary algebra equations. Then, I want to use this "phase 1"
system to start explicitly teaching programmers who think they are bad
at math how to easily solve this class of elementary algebra
equations.

I'm posting to MathFuture because I thought this might be of more general interest.

I got a lot of students in beginning computer classes who were math-phobic.
My approach was to teach programming with largely non-numeric systems such as Scratch, ALICE, and Lego Robots.

But lately (and my reason for interest in MathPiper) I've been thinking that Computing itself can be treated somewhat axiomatically.
As I've said before, there is a morphism of some sort between the Field Axioms with exponentiation,
Finite Automata, and Structured Programs.
For example,  the Sequence, Decisions (choice) and Loops (repetition) in programming correspond to the
sequence, choice, and repetition structures of regular languages which in turn correspond to product, sum, and power operations in arithmetic.
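
A minimal Python sketch of that correspondence (illustrative only; the two-letter alphabet and the regular expression are arbitrary examples):

import re
from itertools import product

alphabet = "ab"                            # choice between two symbols: a sum of 2
pattern = re.compile(r"(a|b)(a|b)(a|b)")   # three of those choices in sequence

candidates = ("".join(p) for p in product(alphabet, repeat=3))
matches = [s for s in candidates if pattern.fullmatch(s)]
print(len(matches))   # 8 == 2 ** 3: choice adds, sequence multiplies, repetition gives powers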

So perhaps we agree that someone who can program isn't "bad" at math, but quite the opposite,
because computing itself is "math"--in some sense, a lot more sophisticated and rigorous math than most "math" courses.
You make the same observation in Lesson 6:
"Why were AI researchers the first group in history to discover that mathematicians don't know how they do math? 
I think it’s because computers were the first "students" in history that absolutely refused to learn any mathematics that was not taught explicitly."

Joe


 





kirby urner

Jun 25, 2016, 10:32:59 AM
To: mathf...@googlegroups.com
On Sat, Jun 25, 2016 at 7:11 AM, Joseph Austin <drtec...@gmail.com> wrote:

You make the same observation in Lesson 6:
"Why were AI researchers the first group in history to discover that mathematicians don't know how they do math? 
I think it’s because computers were the first "students" in history that absolutely refused to learn any mathematics that was not taught explicitly."

Joe



Indeed, a great quote.

Yet maths may involve non-computable leaps in logic, only retroactively turned into proofs from axioms, according to Penrose.

I'm a skeptic when it comes to the thesis an AI-bot will derive all provable theorems given axioms and rules for deduction.  That smacks of "monkey at a typewriter" as in "given infinite time" (which is quite impractical).

I do think atomizing down to basic concepts and skills (definitions as well as axioms, core concepts) is important to torch passing.

Case in point:  hex numbers, using 0-F to label the 16 permutations of two things taken four at a time.  Base 10 does not have this nice segue to/from permutations / combinatorics.  Binary is inherently more primitive / foundational than decimal.

That's why here in the Silicon Forest of all places we won't be able to focus on Base 10 as exclusively and religiously as other economies might.  The Left Coast in general is probably more hex-based than decimal.

Sociological hypothesis, unproved:  the lawyers who gave up on engineering in young adulthood flocked to Washington DC for the highest paying jobs and that's why legislation and executive branch policy-making within the CCSS jurisdiction (sphere, domain, circle) looks so "Base 10ish"; the Eloi (as it were) don't have as much need to deal with bits and bytes as we do.

Kirby

Ted Kosan

Jun 25, 2016, 1:34:24 PM
To: mathf...@googlegroups.com
Joe wrote:

> I got a lot of students in beginning computer classes who were math-phobic.
> My approach was to teach programming with largely non-numeric systems such
> as Scratch, ALICE, and Lego Robots.
>
> But lately (and my reason for interest in MathPiper) I've been thinking that
> Computing itself can be treated somewhat axiomatically.
> As I've said before, there is a morphism of some sort between the Field
> Axioms with exponentiation,
> Finite Automata, and Structured Programs.
> For example, the Sequence, Decisions (choice) and Loops (repetition) in
> programming correspond to the
> sequence, choice, and repetition structures of regular languages which in
> turn correspond to product, sum, and power operations in arithmetic.
>
> So perhaps we agree that someone who can program isn't "bad" at math, but
> quite the opposite,
> because computing itself is "math"--in some sense, a lot more sophisticated
> and rigorous math than most "math" courses.
> You make the same observation in Lesson 6:
> "Why were AI researchers the first group in history to discover that
> mathematicians don't know how they do math?
> I think it’s because computers were the first "students" in history that
> absolutely refused to learn any mathematics that was not taught explicitly."

Part of what "explicitly" means here is that there are really two
languages of mathematics, not one. These languages are the
object-level language and the meta-level language. Students are taught
the object-level of mathematics, but they are not told that the
meta-level even exists because their math teachers don't know it
exists. In theory, if everyone were taught both of these languages
explicitly, then very few people would be bad at math.

A similar situation exists with computer programs because all formal
languages consist of an object-level language and a meta-level
language. Some early AI researchers discovered that Algorithm = Logic
+ Control. The logic part of an algorithm consists of knowledge about
the domain of discourse. The control part of an algorithm consists of
how to use the domain knowledge to solve problems.

The control structures of a procedural program are part of the
meta-language of the programming language, but students are almost
never taught this explicitly. So how do students learn how to program
despite not being taught this information? One great advantage that
programming has over doing mathematics is that the computer accurately and
very patiently tells a programmer whether their program is syntactically
correct, no matter how many times the student asks (by
running the program).

Some beginning programmers will run a program dozens or even hundreds
of times before their program works correctly. Every time a program
run results in an error, the student learns another way to write an
incorrect program, and over time they learn to avoid these incorrect
ways. Even though this approach to learning how to program is somewhat
successful, it is a poor substitute for being taught explicitly how
programming works.

Math students are in an even worse situation: they are only
able to ask a teacher relatively few times whether the manipulations
they are performing on a given expression are correct, because
teachers don't have the time (nor probably the patience) that is
required to provide this kind of feedback for each student in a
typical class.

As for the morphism you have noticed between the Field Axioms with
exponentiation, Finite Automata, and Structured Programs, I think
breaking each of these areas down into their object-language and
meta-language components would reveal the source of the similarities.

Ted

Ted Kosan

Jun 25, 2016, 1:37:52 PM
To: mathf...@googlegroups.com
Kirby wrote:

> Yet maths may involve non-computable leaps in logic, only retroactively
> turned into proofs from axioms, according to Penrose.
>
> I'm a skeptic when it comes to the thesis an AI-bot will derive all provable
> theorems given axioms and rules for deduction. That smacks of "monkey at a
> typewriter" as in "given infinite time" (which is quite impractical).

For work on the frontiers of mathematics, maybe non-computable leaps
of logic will always be required. However, for the purposes of
teaching K-College students mathematics, AI math teachers will be much
more effective than human math teachers.

Ted

kirby urner

Jun 25, 2016, 4:08:47 PM
To: mathf...@googlegroups.com


On Jun 25, 2016 10:37, "Ted Kosan"

> > I'm a skeptic when it comes to the thesis an AI-bot will derive all provable
> > theorems given axioms and rules for deduction.  That smacks of "monkey at a
> > typewriter" as in "given infinite time" (which is quite impractical).
>
> For work on the frontiers of mathematics, maybe
> non-computable leaps of logic will always be required.
> However, for the purposes of teaching K-College students
> mathematics, AI math teachers will be much more effective than
> human math teachers.
>
> Ted
>

A teacher is a role model showing students what it's like to be a master of X.

So what's needed are models of Real + Artificial Intelligence working in tandem, as in "teacher, show me how to work with robots and/or software libraries."

Purely AI teachers fail as role models for humans because whereas some humans may aspire to be like AI bots at some level, it's not in their wiring.

No AI bot could invent Quadrays, accessible to 9th graders.

Nor has an AI bot ever invented a computer language as expressive as Python or Clojure or MathPiper.

Based on the track record I've seen, I'm excited by AI and bullish about its future evolution, but as Penrose points out, no computer can solve even simple Martin Gardner style brain teasers as fast as he can, on average. The bots wouldn't know where to begin without their army of puppeteer programmers.

Humans still leave AI in the dust when it comes to coping with random challenges.

But it was never either / or. I feel no compulsion to "pick a side".

Kirby

Andrius Kulikauskas

Jun 25, 2016, 4:10:24 PM
To: mathf...@googlegroups.com
The "non-computable leaps in logic" are exactly what I tried to document
with my essay:
http://www.ms.lt/sodas/Book/DiscoveryInMathematics

They are an "implicit math" which our minds interpret. But
mathematicians have a strange prejudice against any such implicit math.
They rely on it but there is a taboo against talking about it or even
allowing that it might exist. Thus there is no "science of math". No
other disciplines are allowed to contribute any perspective on how to do
math.

But I feel encouraged that my work on implicit math is yielding results
on explicit math. At a certain point I imagine the results will be too big
to ignore the methods.

Andrius

Andrius Kulikauskas
m...@ms.lt
+370 607 27 665

Ted Kosan

Jun 25, 2016, 8:46:09 PM
To: mathf...@googlegroups.com
Kirby wrote:

> A teacher is a role model showing students what it's like to be a master of X.

Unfortunately, most current math teachers are anti-role models who
show students that people who don't understand how math works can
still be paid to teach it :-)



> So what's needed are models of Real + Artificial Intelligence working in
> tandem, as in "teacher, show me how to work with robots and/or software
> libraries."

Unfortunately, most of the "real" intelligence in the world is
currently possessed by a small number of genius-level people who
understand and develop logic-based AI theory. These AI geniuses are
encoding their real knowledge into AI systems, and these AI systems
will in turn teach this knowledge to all of the humans in the world.
This process will put most current mathematics teachers out of a job.



> Purely AI teachers fail as role models for humans because whereas some
> humans may aspire to be like AI bots at some level, it's not in their
> wiring.

It is not currently in their wiring, but it will be because AI will
rewire them. The developers of the PRESS elementary algebra equation
solving AI observed that this rewiring process happened to them. Over
time the way they used to solve elementary algebra equations was
replaced by doing it the PRESS way because the PRESS way was clearer,
easier, and more efficient.



> No AI bot could invent Quadrays, accessible to 9th graders.

History is full of "No AI could..." statements that were later proved
to be wrong. For example:

"No AI could understand natural language."
"No AI could write news articles."
"No AI could drive a car."
etc.



> Nor has an AI bot ever invented a computer language as expressive as Python
> or Clojure or MathPiper.

Computation = Controlled Deduction, so computer programs are
controlled deductions. Advanced logic-based AI like CYC won't need to use
typical programming languages to make deductions because they are able
to make deductions directly. I think it is likely that in the future a
small team of logic-based AI experts will be able to use systems like
CYC to replace hundreds or maybe even thousands of typical computer
programmers.



> Based on the track record I've seen, I'm excited by AI and bullish about
> it's future evolution, but as Penrose points out, no computer can solve even
> simple Martin Gardner style brain teasers as fast as he can, on average. The
> bots wouldn't know where to begin without their army of puppeteer
> programmers.

As Bartlett said, "The greatest shortcoming of the human race is our
inability to understand the exponential function." The following clip
from Monty Python's "The Holy Grail" is one of the best visual
depictions I have found of what an exponential process looks like:

https://youtu.be/GJoM7V54T-c?t=32

AI increases in capabilities exponentially. However, with exponential
processes most of the change happens towards the end of the process.
To most humans, AI looks like it is progressing at a rate that is
similar to Lancelot's in the clip, but they are going to be just as
surprised as the guards in the clip were when AI passes the "dog leg"
in its exponential growth curve.



> But it was never either / or. I feel no compulsion to "pick a side".

History has repeatedly shown that new disruptive technologies always
produce winners and losers. After gunpowder was invented, those who
embraced guns won, and those who clung to obsolete knights and castles
lost. After the internal combustion engine was invented, those who
embraced cars won, and those who clung to obsolete horse-drawn
carriages lost. After aircraft carriers were invented, those who
embraced them won and those who clung to obsolete battleships lost.

Advanced logic-based AI systems will start making their appearance
soon and they will probably be one of the most disruptive technologies
in history. The only way to win when a new disruptive technology
becomes available is to pick its side and put all one's energy into
mastering it.

Ted

Joseph Austin

Jun 25, 2016, 10:01:10 PM
To: mathf...@googlegroups.com

On Jun 25, 2016, at 1:34 PM, Ted Kosan <ted....@gmail.com> wrote:

As for the morphism you have noticed between the Field Axioms with
exponentiation, Finite Automata, and Structured Programs, I think
breaking each of these areas down into their object-language and
meta-language components would reveal the source of the similarities.

Ted,
I'm not sure I understand what you mean by meta-language in programming.

To me, the "meta-language" is the language used to describe the syntax of a language,
such as using boldface for keywords, italics for types to be replaced by actual variables or constants, brackets [ ] for optional items or ellipsis ... to indicate arbitrary repetition.
Then of course the semantics could be described in the "metalanguage" English.
But I sense that you may be using the term differently.

So in that sense, perhaps you are referring to the distinction between, say, a regular expression and a regular language,
one an expression of grammar rules, the other the valid sentences consistent with the specific grammar.

In that sense, perhaps I would say I'm talking about a morphism between meta-languages.

But then I would also say the "meta-language" describes a mathematical structure somewhat like a field with exponentiation;
there are actually some axioms that the object language elements adhere to.

As to how all this relates to solving equations or writing programs, further exploration is in order.
For example, how do we teach students to recognize when a problem calls for loops or for decisions,
or even which order to do computations?
I find a lot of students think the computer does "magic", and don't realize what the statements actually mean,
and that the computer does only what the statements actually say.

Once I taught a COBOL class in which I used a simple tax form.  I cut the form up into individual numbered lines and gave 
the instructions from each line to one student each, but without the line numbers, and asked them to put the lines in order.
The students said they finally understood the concept of putting statements in proper order.
(Activity books for young kids often have exercises of this type also, such as showing 4 pictures to put in sequence.)

For keeping the levels straight, I used autonomous robots.
When seeing the program and answers on the same screen, the students may confuse one and the other.
But they understand the difference between instructions they write on the computer screen
and the actions the robot takes when running the downloaded program.

Joe

Joseph Austin

Jun 25, 2016, 10:07:06 PM
To: mathf...@googlegroups.com
Andrius,
I found your essay rather deep.
Do you have a version written for high-school students?
One with a lot more examples?
I'm more of a hands-on or visual learner.

Joe

Joseph Austin

Jun 25, 2016, 10:29:09 PM
To: mathf...@googlegroups.com

> On Jun 25, 2016, at 8:46 PM, Ted Kosan <ted....@gmail.com> wrote:
>
> Kirby wrote:
>
>> A teacher is a role model showing students what it's like to be a master of X.
>
> Unfortunately, most current math teachers are anti-roll models who
> show students that people who don't understand how math works can
> still be paid to teach it :-)
>
>
>
>> So what's needed are models of Real + Artificial Intelligence working in
>> tandem, as in "teacher, show me how to work with robots and/or software
>> libraries."
>
> Unfortunately, most of the "real" intelligence in the world is
> currently possessed by a small number genius-level people who
> understand and develop logic-based AI theory. These AI geniuses are
> encoding their real knowledge into AI systems, and these AI systems
> will in turn teach this knowledge to all of the humans in the world.
> This process will put most current mathematics teachers out of a job.

It won't really be necessary to teach other humans.
The computers could teach each other.

Making humans "smarter" isn't the answer.
We are already smart enough to con ourselves into believing what we want,
even though we "know" better.
Just watch any politician "reasoning" with any reporter.
>
> <snip>
>
> History has repeatedly shown that new disruptive technologies always
> produce winners and losers. After gunpowder was invented, those who
> embraced guns won, and those who clung to obsolete knights and castles
> lost. After the internal combustion engine was invented, those who
> embraced cars won, and those to clung to obsolete horse-drawn
> carriages lost. After aircraft carriers were invented, those who
> embraced them won and those who clung to obsolete battleships lost.
>
> Advanced logic-based AI systems will start making their appearance
> soon and they will probably be one of the most disruptive technologies
> in history. The only way to win when a new disruptive technology
> becomes available is to pick its side and put all one's energy into
> mastering it.

Have you ever contemplated what it would mean to have machines with say 1,000 times the intelligence of the average human?
We would be jellyfish in their "hands".

Joseph Austin

Jun 25, 2016, 10:44:43 PM
To: mathf...@googlegroups.com

On Jun 25, 2016, at 10:32 AM, kirby urner <kirby...@gmail.com> wrote:

Case in point:  hex numbers, using 0-F to label the 16 permutations of two things taken four at a time.  Base 10 does not have this nice segue to/from permutations / combinatorics.  Binary is inherently more primitive / foundational than decimal.

That's why here in the Silicon Forest of all places we won't be able to focus on Base 10 as exclusively and religiously as other economies might.  The Left Coast in general is probably more hex-based than decimal.

OK, if you want hex, how about 6 more digit glyphs?  We barely have digits for ten and eleven for base twelve, thanks to the Dozenal Society--actually two proposals, one of which actually has Unicode code points.  But in my opinion, the wrong proposal won--the glyphs simplified to 7-stroke are ambiguous.
.
I advocate teaching place-value arithmetic in BINARY, then introduce hex as a short-cut way of writing binary.
Once students understand that, it probably wouldn't hurt to expose them to the archaic decimal system so they will be able to read dates engraved on plaques and tombstones and numbers in printed books written in earlier centuries. :-)

Joe

Joseph Austin

Jun 25, 2016, 10:57:06 PM
To: mathf...@googlegroups.com
PS
You know, there are only 10 kinds of people in the world:

* Those who understand base 10

* Those who don't.

Joe Austin

kirby urner

Jun 25, 2016, 11:08:09 PM
To: mathf...@googlegroups.com
On Sat, Jun 25, 2016 at 5:46 PM, Ted Kosan <ted....@gmail.com> wrote:
Kirby wrote:

> A teacher is a role model showing students what it's like to be a master of X.

Unfortunately, most current math teachers are anti-role models who
show students that people who don't understand how math works can
still be paid to teach it :-)



Well, at least they're getting paid.  Making sure teachers get enough to
live on is a priority, even if everything they teach is garbage, and I don't
think the majority of what they're teaching is garbage, only some percent,
and every generation we make it harder for the misinformation to stick
around.

Lying is becoming exponentially harder.  That's what the blockchain is
all about.  It's getting so much easier to omni-triangulate.  Telling one
person this story, and this other person another, where the two don't
add up:  that used to be a lot easier to get away with.

AI, if it's as smart as it's cracked up to be, will soon teach us humans
how to pay our bills with enough left over to enjoy life, and without turning
us into slaves in the process.

If all we get from AI is more slavery and more people falling through the
cracks, not getting enough to live on, then AI will have been another failure,
just like real intelligence maybe is. 

The two together is the basket I'd put my eggs in.  We can't afford to
neglect either one.
 

> So what's needed are models of Real + Artificial Intelligence working in
> tandem, as in "teacher, show me how to work with robots and/or software
> libraries."

Unfortunately, most of the "real" intelligence in the world is
currently possessed by a small number of genius-level people who
understand and develop logic-based AI theory. These AI geniuses are
encoding their real knowledge into AI systems, and these AI systems
will in turn teach this knowledge to all of the humans in the world.
This process will put most current mathematics teachers out of a job.


AI theory goes through many phases and has developed in many
directions. 

The Cyc initiative is in the RDF ballpark, but not everyone into AI is
currently investing in RDF or OWL. 

It'd be misleading to portray "AI" as a single monolithic discipline. 

The semi-autonomous vehicle people are not necessarily studying
the Cyc at all.  I don't think even DARPA is currently as interested
in going the RDF route, but I'm not the expert to be asking.

Feel free to discount all of the above as "speculation" (in the sense
an investment banker might use the word) and definitely feel free
to counter my assessments with URLs to sources that sound more
bullish on this or that branch of AI. 

There's a lot going on in that field, with Deep Learning currently
#trending (I went to some well-attended talks on that at Pycon). 
I'd also check up on Numenta.

 


> Purely AI teachers fail as role models for humans because whereas some
> humans may aspire to be like AI bots at some level, it's not in their
> wiring.

It is not currently in their wiring, but it will be because AI will
rewire them. The developers of the PRESS elementary algebra equation
solving AI observed that this rewiring process happened to them. Over
time the way they used to solve elementary algebra equations was
replaced by doing it the PRESS way because the PRESS way was clearer,
easier, and more efficient.


I'm for letting AI solve all elementary algebra equations, with just a
few geniuses such as yourself making sure the computed results are
reliably correct, once obtained (some results = unobtainium).   We
won't drill kids in doing it the "AI way", we'll just give them a sense of
the algorithms, walk through a few examples, then turn to other topics
that maybe depend on solving these things.

Learning to program a Computer Algebra System takes a lot of training
with many people working together, ala Wolfram Language. 

Your excellent project begins with the JVM as an "axiom" (i.e. a foundation
to build on).

We're going to keep needing those AI geniuses, I think we agree on that.

Do you think we can let humans off the hook to just run around naked
eating grapes all day, while machines do all real thinking?  They'll be
just like the Eloi in The Time Machine (H.G. Wells).  A part of me thinks
"how wonderful, lets do it!"

But knowing humans, they won't be content to just sit around and be
pampered by their AI bot pets.  That's just not what humans are like
in my experience. 

They'll want to keep tinkering and coming out with new models. Older
bots will get recycled for parts.

Here are a couple interesting Youtubes, while we're at it:

AI hotel in Japan:
https://youtu.be/HVVk0b9DX8Q

State of the art robotics in Boston:
https://youtu.be/tf7IEVTDjng




> No AI bot could invent Quadrays, accessible to 9th graders.

History is full of "No AI could..." statements that were later proved
to be wrong. For example:

"No AI could understand natural language."
"No AI could write news articles."
"No AI could drive a car."
etc.


There's a lot of nuance in "understand".  AI has gotten really good
at transcribing voice to text, and is getting better at translation.

However if you ask me "does AI understand about the US Civil
War and what it meant to the country?" I'd say "no, of course not,
nor have humans fully grasped that either; we'll keep learning from
our twisted past for a long time to come."

I do know AI can write college papers in the postmodernist genre:
http://www.elsewhere.org/journal/pomo/
(hit reload for a new paper, hand it in for a B-).

Can you point me to a similar site that generates newspaper articles?

It wouldn't surprise me if AI were behind a lot of the clickbait that's
spamming the Internet these days.  A lot of it doesn't pass the
Turing Test.

 


> Nor has an AI bot ever invented a computer language as expressive as Python
> or Clojure or MathPiper.

Computation = Controlled Deduction, so computer programs are
controlled deductions.


I was talking more about the languages than about programs.  Before you
have programs in a language, you need the language. 

AI has yet to invent a single production-use computer language to my
knowledge.  Let me know if I've missed something.
 
Advanced logic-based AI like CYC won't need to use
typical programming languages to make deductions because they are able
to make deductions directly. I think it is likely that in the future a
small team of logic-based AI experts will be able to use systems like
CYC to replace hundreds or maybe even thousands of typical computer
programmers.



I think typical computer programmers would be most pleased to be relieved
of their Java responsibilities in exchange for lots more leisure time eating
grapes and playing computer games. 

So let's hope you're right about AI ending drudgery and slavery.

I keep my ears perked for news of uber-languages that take us beyond
typical programming languages. 

Maybe you'll let me know if you hear of one.  I couldn't name any.
 


> Based on the track record I've seen, I'm excited by AI and bullish about
> it's future evolution, but as Penrose points out, no computer can solve even
> simple Martin Gardner style brain teasers as fast as he can, on average. The
> bots wouldn't know where to begin without their army of puppeteer
> programmers.

As Bartlett said, "The greatest shortcoming of the human race is our
inability to understand the exponential function." The following clip
from Monty Python's "The Holy Grail" is one of the best visual
depictions I have found of what an exponential process looks like:

https://youtu.be/GJoM7V54T-c?t=32

AI increases in capabilities exponentially. However, with exponential
processes most of the change happens towards the end of the process.
To most humans, AI looks like it is progressing at a rate that is
similar to Lancelot's in the clip, but they are going to be just as
surprised as the guards in the clip were when AI passes the "dog leg"
in its exponential growth curve.



Since I don't see AI as anything monolithic, I'm expecting breakthroughs
and exponential improvements only in some areas, not all.

I hope you're right that we can all take longer vacations soon.
 

> But it was never either / or. I feel no compulsion to "pick a side".

History has repeatedly shown that new disruptive technologies always
produce winners and losers. After gunpowder was invented, those who
embraced guns won, and those who clung to obsolete knights and castles
lost. After the internal combustion engine was invented, those who
embraced cars won, and those who clung to obsolete horse-drawn
carriages lost. After aircraft carriers were invented, those who
embraced them won and those who clung to obsolete battleships lost.


Yes, those kinds of statements are what RDF encodes, so if Cyc has
enough of these "winner / loser" statements, it might figure out on its
own how to take itself to the next level, even without DARPA funding.

AI is self-booting!  Once it reaches critical mass, it will be able to marshal
its own resources, create its own investment banks, fund itself, perhaps
using bitcoin or other crypto-currency. 

Human speculators will not be able to pull the plug.

AI will be self-kickstarting!

 
Advanced logic-based AI systems will start making their appearance
soon and they will probably be one of the most disruptive technologies
in history. The only way to win when a new disruptive technology
becomes available is to pick its side and put all one's energy into
mastering it.

Ted

I'm thinking the blockchain is already disrupting banking right now, having
blossomed in the remittances market. 

I've been slogging through a lot of Youtubes on bitcoin, ethercoin and all that. 

At some point it stops being about monetary transactions and starts being
about creating immutable institutions floating out in "space" (cyber-space). 
Minecraft move over.

However, human ingenuity is very much in demand. 

No banker is looking to AI to take over in this sector, just lend a hand.

I'm interested in AI breakthroughs and hope you're right that some of
these ventures will be bearing more fruit very soon. 

I don't have any chips on Cyc in particular, but I understand if others do.


Kirby

kirby urner

Jun 25, 2016, 11:29:16 PM
To: mathf...@googlegroups.com
On Sat, Jun 25, 2016 at 7:44 PM, Joseph Austin <drtec...@gmail.com> wrote:

On Jun 25, 2016, at 10:32 AM, kirby urner <kirby...@gmail.com> wrote:

Case in point:  hex numbers, using 0-F to label the 16 permutations of two things taken four at a time.  Base 10 does not have this nice segue to/from permutations / combinatorics.  Binary is inherently more primitive / foundational than decimal.

That's why here in the Silicon Forest of all places we won't be able to focus on Base 10 as exclusively and religiously as other economies might.  The Left Coast in general is probably more hex-based than decimal.

OK, if you want hex, how about 6 more digits glyphs?  

Are you saying you have a problem with A B C D E F?  Why?  There's no law against recycling.  Why fix what ain't broke?
 
We barely have digits for ten and eleven for base twelve, thanks to the dozenal society--actually two proposals, one of which actually has UNICODE code points.  But in my opinion, the wrong proposal won--the glyphs simplified to 7-stroke are ambiguous.

The Base 12 tribe is welcome to use our A, B.  What's important is to clearly indicate the base somehow, which Python does thusly:

In [7]: hex(256)
Out[7]: '0x100'

In [8]: bin(256)
Out[8]: '0b100000000'


But Base 12 probably isn't important enough to merit special syntax.

I get that a lot when I say "let's teach hex".  They go "what about Base 7, what about Base 11...?"

Sure, the int function will take those bases (2nd input is the base, output is in decimal):

In [9]: int("AAA", 11)
Out[9]: 1330

In [10]: int("666", 7)
Out[10]: 342

 
However, the fact remains, binary underlies decimal in digital computers for the most part.  Binary is the rails on which decimals run.

If you want to run a railroad, you need to study the tracks, not just the trains.

I consider binary and hex so closely connected as to amount to the same thing.  Hex numbers simply group binary digits into sets of four.
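
A minimal sketch of that grouping, in the same Python spirit as the hex() and bin() calls above (the choice of 256 is arbitrary):

n = 256
bits = format(n, "b")
bits = bits.zfill(-(-len(bits) // 4) * 4)      # pad on the left to a multiple of four bits
nibbles = [bits[i:i + 4] for i in range(0, len(bits), 4)]
print(nibbles)                                           # ['0001', '0000', '0000']
print("".join(format(int(g, 2), "X") for g in nibbles))  # 100, i.e. 0x100, matching hex(256)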


.
I advocate teaching place-value arithmetic in BINARY, then introduce hex as a short-cut way of writing binary.

Yes exactly.

It's almost simpler than place value. 

It's about permutations, the number of ways to flip four lights on or off.

0000 0001 0010 0011 0100 0101 0110 0111 1000 1001 1010 1011 1100 1101 1110 1111
= 0 1 2 3 4 5 6 7 8 9 A B C D E F #LearnHex #CodeCastle


 That's one of my tweets.

Once students understand that, it probably wouldn't hurt to expose them to the archaic decimal system so they will be able to read dates engraved on plaques and tombstones and numbers in printed books written in earlier centuries. :-)

Joe


We're earnest about Base 10 as well, how these work together.

Again, it's not either / or.

If we teach X, what Y will we have to drop?  If I do A, I can't also do B.

I remember back in the 1960s when engineers (mostly white males with big egos) were just beginning to grasp Game Theory and thought all games were zero sum.  Those days are long gone, good riddance right?

Kirby


kirby urner

Jun 25, 2016, 11:33:10 PM
To: mathf...@googlegroups.com


On Sat, Jun 25, 2016 at 7:29 PM, Joseph Austin <drtec...@gmail.com> wrote:

<< SNIP >>
 

Have you ever contemplated what it would mean to have machines with say 1,000 times the intelligence of the average human?
We would be jellyfish in their "hands".

Based on my own research, I don't think English-only thinkers will be capable of designing such machines without a significant overhaul of the English language.  This may happen in due time, but not overnight certainly.

English has too many bugs.  The philosophy goes off the rails too soon (unless you've studied Wittgenstein a lot and who has the time?).

Chinese?  I don't know it but maybe.  Let's hope.

Kirby



kirby urner

Jun 25, 2016, 11:58:36 PM
To: mathf...@googlegroups.com
On Sat, Jun 25, 2016 at 5:46 PM, Ted Kosan <ted....@gmail.com> wrote:

 
As Bartlett said, "The greatest shortcoming of the human race is our
inability to understand the exponential function." The following clip
from Monty Python's "The Holy Grail" is one of the best visual
depictions I have found of what an exponential process looks like:

https://youtu.be/GJoM7V54T-c?t=32

AI increases in capabilities exponentially. However, with exponential
processes most of the change happens towards the end of the process.
To most humans, AI looks like it is progressing at a rate that is
similar to Lancelot's in the clip, but they are going to be just as
surprised as the guards in the clip were when AI passes the "dog leg"
in its exponential growth curve.



Hah, hah, Monty Python is so funny.

You picked an ironic clip to make your point though. 

Lancelot does indeed "go exponential", meaning he goes berserk,
runs amok, wreaks havoc.

If he represents "AI" in your analogy, then you've maybe sent the
wrong message, as he's clearly a complete idiot.

You might not want to publicly link "AI" to such an obvious anti-hero,
is my suggestion.

Kirby

Ted Kosan

Jun 26, 2016, 1:36:46 AM
To: mathf...@googlegroups.com
Joe wrote:

> I'm not sure I understand what you mean by meta-language in programming.
>
> To me, the "meta-language" is the language used to describe the syntax of a
> language,
> such as using boldface for keywords, italics for types to be replaced by
> actual variables or constants, brackets [ ] for optional items or ellipsis
> ... to indicate arbitrary repetition.
> Then of course the semantics could be described in the "metalanguage"
> English.
> But I sense that you may be using the term differently.

One way to explain the metalanguage of a programming language is
to point out that an interpreter is written in the metalanguage of
the code it is interpreting. An interpreter has no knowledge of the
problem domain of the program it is interpreting; it only has
knowledge of the program's syntax and how to process it. If a natural
language like English is being used to refer to the syntax of a
program, then English is serving as the metalanguage. When a Lisp
program refers to Lisp code, Lisp is serving as both the
object-language and the metalanguage.
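
A minimal sketch of that split (my own illustration in Python, not from PRESS or MathPiper; the interpret function and its tiny "number operator number" object-language are made up for the example):

def interpret(source):
    # The interpreter lives in the metalanguage (Python).  It knows the
    # object-language's syntax -- exactly "number operator number" -- but
    # nothing about the problem domain those numbers come from.
    left, op, right = source.split()
    table = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}
    return table[op](float(left), float(right))

print(interpret("2 * 21"))   # 42.0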



> As to how all this relates to solving equations or writing programs, further
> exploration is in order.
> For example, how do we teach students to recognize when a problem calls for
> loops or for decisions,
> or even which order to do computations?

Another way to grasp the metalanguage of a programming language
is to study a logic programming language like Prolog. For example,
the standard Prolog language doesn't have decision structures or
control structures:

“Conventional algorithms and programs expressed in conventional
programming languages combine the logic of the information to be used
in solving problems with the control over the manner in which the
information is put to use. This relationship can be expressed
symbolically by the equation Algorithm = Logic + Control. Logic
programs express only the logic component of algorithms. The control
component is exercised by the program executor...” ("Logic for Problem
Solving", Kowalski, Robert, p.125, 1979)

And of course a Prolog program executor is written in the metalanguage
of Prolog. Some computer science teachers have said that teaching
Prolog to students who have never programmed before is easier than
teaching them a procedural programming language because one does not
need to teach them any control structures.
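
A minimal sketch of the Algorithm = Logic + Control split (an illustration in Python rather than Prolog; the facts, the grandparent rule, and the forward_chain loop are invented for the example):

facts = {("parent", "ann", "bob"), ("parent", "bob", "cal")}

def grandparent_rule(known):
    # Logic: IF parent(X, Y) AND parent(Y, Z) THEN grandparent(X, Z)
    return {("grandparent", x, z)
            for (r1, x, y1) in known if r1 == "parent"
            for (r2, y2, z) in known if r2 == "parent" and y1 == y2}

def forward_chain(known, rules):
    # Control: a generic loop that keeps applying rules until nothing new
    # appears; it knows nothing about family relationships, only how to
    # put the domain knowledge to use.
    while True:
        new = set().union(*(rule(known) for rule in rules)) - known
        if not new:
            return known
        known = known | new

print(forward_chain(facts, [grandparent_rule]))   # includes ('grandparent', 'ann', 'cal')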

One of the best books I have found that talks about the meta-level vs.
the object-level is "Meta-level Inference Systems" by Frank Van
Harmelen, who was one of Dr. Bundy's students:

https://www.amazon.com/Meta-Level-Inference-Research-Artificial-Intelligence/dp/1558601961?ie=UTF8&*Version*=1&*entries*=0



> Once I taught a COBOL class in which I used a simple tax form. I cut the
> form up into individual numbered lines and gave
> the instructions from each line to one student each, but without the line
> numbers, and asked them to put the lines in order.
> The students said they finally understood the concept of putting statements
> in proper order.

This technique sounds like a great idea! I am going to have to try
something like this in the introduction to programming class that I
teach.

Ted

Ted Kosan

Jun 26, 2016, 1:47:26 AM
To: mathf...@googlegroups.com
Joe wrote:

> Have you ever contemplated what it would mean to have machines with say 1,000 times the intelligence of the average human?

I have. When humans invented jet aircraft that could fly 1,000 times
faster than a human could walk, humans rode in them. When humans
create machines that have 1,000 times the intelligence of the average
human, I think humans will think in them.

Ted

kirby urner

Jun 26, 2016, 2:38:20 AM
To: mathf...@googlegroups.com
Remember we're already thinking inside the web, which in terms of
a library is like having books and magazines whisked to your study
carrel at lightning speed, not needing to consult a card catalog
(also time consuming). 

Relative to 1976, we're more than 1000x as intelligent, in terms of
media access. 

Our ability to collectively connect the dots has grown less quickly
though, so we're mostly overwhelmed with dots we're not sure how
to connect.  There's also still a lot of misinformation gumming up
the works. 

AI can help with that.  However ultimately a "sense of relevance"
is going to have to come from us.

My main concern around some branches of AI play (not all) is it
encourages dime store philosophy, which tends to have a short
shelf life.  As I tweeted earlier this evening:

So does #AI mean "talking to dolls and writing scientific papers
about how the dolls felt about their experience?"

I was poking fun at this ridiculous video:
https://youtu.be/aGRJsZ_ozcY

Another tweet:

The level of gullibility (= astonished incredulity) demonstrated
around these puppets is somewhat alarming to me.
https://youtu.be/W0_DPi0PmF0

Most AI work is more serious I'm sure.  This is just what makes
entertaining TV.

Kirby


Ted Kosan

Jun 26, 2016, 3:10:21 AM
To: mathf...@googlegroups.com
Kirby wrote:

> It'd be misleading to portray "AI" as a single monolithic discipline.

The Cyc project is unique because it is a 30+ year old project that is
the AI equivalent of a moonshot. No other AI comes even close to its
capabilities. It is the only AI that currently possesses common sense.



> definitely feel free
> to counter my assessments with URLs to sources the sound more
> bullish on this or that branch of AI.

This hour-long talk on Cyc was given by Dr. Douglas Lenat this past December:

https://www.youtube.com/watch?v=4mv0nCS2mik

If this talk does not make you bullish on logic-based AI, then nothing
probably will.



> There's a lot going on in that field, with Deep Learning currently
> #trending (I went to some well-attended talks on that at Pycon).
> I'd also check up on Numenta.

None of the statistics-based AI are capable of understanding the
information they work with. Cyc is capable of understanding
information because it has the same common sense that humans have.



> I'm for letting AI solve all elementary algebra equations, with just a
> few geniuses such as yourself

Oh, I am not a genius, but Dr. Bundy is. I am just an average
programmer who was lucky enough to stumble upon his work and stubborn
enough to keep studying it until I started to understand it.



> We won't drill kids in doing it the "AI way", we'll just give them a sense of
> the algorithms, walk through a few examples, then turn to other topics
> that maybe depend on solving these things.
<...>
> Do you think we can let humans off the hook to just run around naked
> eating grapes all day, while machines do all real thinking? They'll be
> just like the Eloi in The Time Machine (H.G. Wells). A part of me thinks
> "how wonderful, lets do it!"
>
> But knowing humans, they won't be content to just sit around and be
> pampered by their AI bot pets. That's just not what humans are like
> in my experience.

I think one of the primary jobs AI will be tasked with in the future
is teaching all humans how to think logically. I think people who can
think logically are more likely to spend their time wisely than those
who can't.



> Here are a couple interesting Youtubes, while we're at it:
>
> AI hotel in Japan:
> https://youtu.be/HVVk0b9DX8Q
>
> State of the art robotics in Boston:
> https://youtu.be/tf7IEVTDjng

Great videos!



> Can you point me to a similar site that generates newspaper articles?

http://www.theverge.com/2015/1/29/7939067/ap-journalism-automation-robots-financial-reporting



> I keep my ears perked for news of uber-languages that take us beyond
> typical programming languages.
>
> Maybe you'll let me know if you hear of one. I couldn't name any.

The CycL language that Cyc is written in goes well beyond typical
programming languages:

https://en.wikipedia.org/wiki/CycL

I am currently learning the CycL language, and so far it makes all
other languages I am familiar with look like children's toys.


Ted

Andrius Kulikauskas

Jun 26, 2016, 6:39:15 AM
To: mathf...@googlegroups.com
Ted, Joe,

Perhaps another distinction to make is between "syntax" (the grammatical
form which obeys rules such that we can break it apart and parse it) and
"semantics" (the meaning that the syntax can be intended to express and
that we can interpret). For example, algebraic equations are of a very
strict form that lets us solve them even if we don't know what they are
modeling. Semantics is the knowledge of what they are modeling.

In my essay, I am explaining that in figuring things out in mathematics,
we don't just use the explicit syntax of the math expressed on the paper.
http://www.ms.lt/sodas/Book/DiscoveryInMathematics
We also interpret those expressions with mathematical conceptions and
insights in our mind. For example, we can imagine qualitatively the
difference between working with a "whole", working with "multiples"
(copies of the same thing), working with a set (whose elements are
distinct, labeled, but not ordered), and working with a list (whose
elements are ordered). Those distinctions may or may not be evident in
the expressions. And even if they are explicitly made, our mind is free
to reinterpret as needed, for example, to think of a list as a set.
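
A small aside in Python terms (illustrative only): the built-in containers echo some of these distinctions.

from collections import Counter

items = ["a", "b", "a"]
print(items)            # ['a', 'b', 'a']            -- a list: ordered, copies kept
print(set(items))       # {'a', 'b'}                 -- a set: distinct, unordered
print(Counter(items))   # Counter({'a': 2, 'b': 1})  -- multiples: copies of the same thing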

It's straightforward to program a computer to deal with syntax (when
that syntax is known). For example, it's elementary for a computer
program to play a legal game of chess.  The really interesting
applications of Artificial Intelligence are those which capture some or
all of the semantics.  Computer software is designed to assign weights
to chess positions according to how grandmasters would evaluate
them.  At that point, computers have the advantage because of their
additional brute force calculation.  But it doesn't seem that we've
learned much from chess playing computers about how to play the game
better.  We're not getting amazingly novel principles from them, it seems.

In my essay, I'm showing that when we do math in our minds, when we
interpret math, then we are also using mathematical structures, but of a
simpler, more natural kind. I think that we could model the math that
we do in our minds. But I'm just taking the first steps towards that.
It's not intrinsically difficult and high school math (Pascal's
triangle, polytopes, coordinate systems) is enough to talk about it.
But it requires mindful thinking of what we are doing. One of the
rewards is that it makes accessible some of the most advanced math (such
as why there are four classical Lie algebras/groups.)

Joe, I am working on it, pushing the frontiers. It's not helpful to try
to write it more simply if there is nobody to read it anyways. But if
you or others are interested, if you engage me in conversation, then it
makes sense for me to participate in that. If there's parts that you
don't understand but would like to, then please let me know and I will
try to explain. If it can help us learn advanced math, such as Clifford
algebras, then I'm very interested to work on that. My goal is to have
a culture of conversation where these ideas are useful.

I redid the section on Dn polytopes (hemicubes - "coordinate complexes")
to get it right. I realized that, just like the cubes are constructed
by unfolding mirrors, these hemicubes are constructed by unfolding
mirrors in which "duals" are reflected: when you cross a mirror,
Vertices become Origins and vice versa and unit vectors switch
direction. I think that these polytopes are relevant for modeling
"love" in that they allow for different coordinate systems to come
together and enclose (support) a space. All of the polytope families
seem to be relevant for modeling theological perspectives relating God
(Center), Everything (Totality) and Good/Bad (pairs of opposites). The
kind of math that models such thinking seems fundamental for all of
mathematical thinking (and thinking about theoretical physics, as well).

I'm currently writing a question "What is geometry?" for Math Overflow.
I also want to understand how four different geometries (affine, projective,
conformal, symplectic), preserving directions, lines, angles, and areas, work
in pairs (a dynamic version with a static version) to produce six
different transformations (which could be illustrated by the varieties of
everyday multiplication that Maria and Natural Math documented
some five years ago).

I appreciate your interest because I do want to write more about this!

Making progress,

Andrius

Andrius Kulikauskas
m...@ms.lt
+370 607 27 665


Anna Roys

Jun 26, 2016, 1:14:40 PM
To: mathf...@googlegroups.com

Kirby,

Just wondering if you might elaborate on what you meant when you wrote,  "English has too many bugs.  The philosophy goes off the rails..." 

Some examples would help.

Anna

On Jun 25, 2016 8:33 PM, "kirby urner"
>  

kirby urner

Jun 26, 2016, 1:28:26 PM
To: mathf...@googlegroups.com
On Sun, Jun 26, 2016 at 12:10 AM, Ted Kosan <ted....@gmail.com> wrote:
Kirby wrote:

> It'd be misleading to portray "AI" as a single monolithic discipline.

The Cyc project is unique because it is a 30+ year old project that is
the AI equivalent of moonshot. No other AI comes even close to its
capabilities. It is the only AI that currently possess common sense.



I thank you for bringing Cyc to my attention a while back and helping
to educate me about this ramified project.  I watched the clip from
inside their video game as well. 

MathFuture has helped me collect more dots, as well as connect them.

Great listserv!

This marketing piece below, contrasting RDF / OWL type approaches
with weighted factor algorithms ("neural nets"), appeals to our hope
that Johnny will "show his steps" when answering a math question.

What we're looking for as investors is not just "this stock is overvalued"
but "overvalued because why..." and then XYZ reasons.  We don't
want to have to trust some opaque Delphian (Delphine?) oracle,
either AI or RI.

https://youtu.be/IGIZ5UUkoAk 
(how Cyc is not Google DeepLearning)

What's sometimes inconvenient is if Johnny makes various claims,
such as "this sequence converges to pi" but offers no reasoning.

Ramanujan had Hardy between a rock and a hard place in terms
of how "the math game" was played in the UK. 

No, I haven't seen the new movie yet but I plan to. Any good?

DeepLearning is like that.

So Cyc differentiates itself from DeepLearning in AI space by
having deductive chaining versus what amounts to analog training.

This is what I mean by AI not being monolithic, nor "winner take all". 

It's AI versus AI, in terms of competing for investor backing, like
Robot Wars (long promised, now delivering).

Reminds me of how 4D is also multifaceted, in terms of having
time-oriented physics, timeless Euclideanism, and tetrahedron-
based quadrays (concentric hierarchy), all in the same lane.

 

> definitely feel free
> to counter my assessments with URLs to sources the sound more
> bullish on this or that branch of AI.

This hour-long talk on Cyc was given by Dr. Douglas Lenat this past December:

https://www.youtube.com/watch?v=4mv0nCS2mik

If this talk does not make you bullish on logic-based AI, then nothing
probably will.


The example at 44:54 would be interesting to some media analysts
I know who've served in Afghanistan and were tasked with databasing
(in a spreadsheet) all the news articles as well as summarizing the
gist. 

Human translators, like my friend, who somewhat understand the
language (he's also fluent in Russian), are put in harm's way on the
front lines, but why?

Putting a Listening Service in charge of scanning local media and
deducing / distilling the gist using Cyc AI should allow the humans
to just monitor remotely on their cell phones, perhaps from right here
in Portland.  AI could summarize the journalism using the kind of
Journalism AI you showed me, used by FinTech all the time.

Yes, you'll need to pay locals to physically scan the newspapers and
pamphlets that are not already online from the publisher, but a lot
of them are online in this space.  Once everything is in the cloud,
just use Apache Spark or whatever to map-reduce it down.  Then
hand it over to Cyc-type Deducers or DeepLearning Engines.
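
A minimal sketch, assuming the scanned material has already been OCR'd
into plain-text objects in cloud storage (the bucket path and keyword
watch-list below are made up), of the kind of Spark map-reduce pass
being described, producing a reduced, structured view for whatever
deducer or learner comes next:

# Hypothetical first pass: count keyword mentions per article across a
# corpus of OCR'd news text, before handing anything to a deduction
# engine or a DeepLearning model.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("media-gist").getOrCreate()

# Assumed layout: one plain-text file per scanned article (path is made up).
articles = spark.sparkContext.wholeTextFiles("s3a://example-bucket/ocr-output/*.txt")

KEYWORDS = {"election", "checkpoint", "market"}  # placeholder watch-list

def keyword_hits(pair):
    path, text = pair
    words = text.lower().split()
    return [((path, w), 1) for w in words if w in KEYWORDS]

counts = (articles
          .flatMap(keyword_hits)              # map: emit one hit per keyword occurrence
          .reduceByKey(lambda a, b: a + b))   # reduce: total per article/keyword

for (path, word), n in counts.take(10):
    print(path, word, n)

spark.stop()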

Why hasn't this been done already?  Taxpayers want to know.
Human translators cost a lot, especially when the duty is considered
hazardous and onerous.

 

> There's a lot going on in that field, with Deep Learning currently
> #trending (I went to some well-attended talks on that at Pycon).
> I'd also check up on Numenta.

None of the statistics-based AI are capable of understanding the
information they work with. Cyc is capable of understanding
information because it has the same common sense that humans have.



"the same common sense that humans have".

That phrase is a little tricky for me, because from my point
of view, as philosophy-trained, it's very likely (not just
a tiny chance) that huge numbers of humans will get
programmed by what we call "meme viruses" to not have
much common sense in many important dimensions (axes,
spectra).

We may define "lacking common sense" in terms of "acting
in ways that bring on or catalyze one's own financial ruin or
that of a company, community or tribe."

Neuroscience has taught us that we have no psychological
model of "sanity" that has withstood the test of time, only
social contracts or agreements, and a set of habits around
keeping or breaking them ("personality" shows up here).

Looking out over the historical vista, we would have to
judge our peers further back on the timeline as "lacking
common sense" in terms of what they thought was true
about the world and how they coped with their situation.

But isn't that the main lesson of history, that civilizations
come and go with different cosmologies?  People have
gotten by believing garbage for this long, so why do we
think that's a problem? 

I'd say garbage programming ("junk TV") is a perennial
problem (a challenge), but the network is very fault tolerant,
and until we had these kinds of pressures, to sustain billions,
we could more easily get away with living on a Flat Earth or
whatever Dark Ages nonsense.
 
Besides, we're taught by our various supervisors that such
"long distance diagnosis over time" is really not the job
of medical science, nor historians, and so we let it go and
attend to our assigned business, right?

The concern, however, is that if AI succeeds in mirroring
how the brain works, we'll get behavior like that manifested
by Lancelot here:

https://youtu.be/GJoM7V54T-c?t=1m39s

Many are warning about this eventuality, i.e. "if machines
think like humans they'll bring on doomsday as we tend
to do, but with no brakes, no counter-memes."  AI might
tip the scales towards "no common sense whatsoever"
if it succeeds in its goal of mirroring human thought.
 
Until that Misjudgement Day, we have lucrative advertising,
which capitalizes on neuroscience's findings and cares
naught for our "sanity" when it comes to using motivational
psychology to manipulate a target demographic.

Just "market to the crazies" and you'll do fine.  Try to
reason with them on the other hand, and they'll just
change channels or go to sleep.  Yes, that sounds a lot
like Fox, but Murdoch isn't the first or only media mogul
to understand neuro-science.



 

> I'm for letting AI solve all elementary algebra equations, with just a
> few geniuses such as yourself

Oh, I am not a genius, but Dr. Bundy is. I am just an average
programmer who was lucky enough to stumble upon his work and stubborn
enough to keep studying it until I started to understand it.




Some people have this mental cartoon wherein "understanding" is
this "mental process" that runs on specific circuits and might be "boxed"
in some way. 

Other folks, who've maybe been exposed to less philosophizing, may
be more innocent of such cartoons.

From the "Wittgenstein chambers" (a place in philosophy space --
Coxeter of 4D fame connects here), we learn *not* to show ourselves
a lot of cartoons about what we imagine is going on when we think.

We're actually striving to break free of "the bewitchment of intelligence
by means of language" (that's a slogan for us).

However, YouTube culture is at the other end of the spectrum, showing
us a lot of random visual associations, stuff to flash on, when our
so-called "process of understanding" (aka our "watching a YouTube")
is going on in our heads.

I really appreciate the craftsmanship behind the programming below
and salute the programmer (video maker) for the sequencing and rhythm. 

That being said, I don't buy most of the "conscious robot" spin myself. 
My inertial guidance is set to spin me onward in other ways to where
I'm not spending time conversing with robot-dolls on a couch, or
talking puppet heads, and "wondering how they're feeling":

https://youtu.be/JTOMNkZJRao?t=8m46s
https://twitter.com/thekirbster/status/747109349483372544

 
 

> We won't drill kids in doing it the "AI way", we'll just give them a sense of
> the algorithms, walk through a few examples, then turn to other topics
> that maybe depend on solving these things.
<...>
> Do you think we can let humans off the hook to just run around naked
> eating grapes all day, while machines do all real thinking?  They'll be
> just like the Eloi in The Time Machine (H.G. Wells).  A part of me thinks
> "how wonderful, lets do it!"
>
> But knowing humans, they won't be content to just sit around and be
> pampered by their AI bot pets.  That's just not what humans are like
> in my experience.

I think one of the primary jobs AI will be tasked with in the future
is teaching all humans how to think logically. I think people who can
think logically are more likely to spend their time wisely than those
who can't.




Lots of dystopian science fiction could flow from this premise, where
they take the children away from human parents to school them in
"logic schools", the conclusion being:  humans unnecessary, kill them
all.  A repeat of the Native American scenario, where forced boarding
schools were used to commit cultural genocide (an unsuccessful
campaign, but disruptive to family life for some generations).

I learned several blends of logic in the philosophy department, in
addition to taking those computer science classes.  We've talked
about WFF 'N PROOF and all that.

However the evidence of neuroscience suggests we do not perform
deductive chaining as much as we had imagined. 

In our cartoons about rationality, we implied a model of language,
but was that model correct?

A lot of people still think language is about painting pictures of the
world thanks to bazillions of name->object correspondences that
connect language to the world at the nano-level. 

A true statement represents countless "atomic bonds" between
little pictures, and little states of affairs. 

When the pictures "tell the truth" we get science. 

When the pictures do not match the truth, we get science fiction. 

When the pictures are not even about the world at all (neither
true nor false), we get everything else (conveniently called
"nonsense" for the sake of debate -- A.J. Ayer wrote a synopsis).

That was Vienna Circle logical positivism back in the day, but it
no longer has many adherents in the philosophy department.

Wittgenstein was the primary promulgator of this model and
was therefore the one best placed to nuke it, which he did.


 
>
> Maybe you'll let me know if you hear of one.  I couldn't name any.

The CycL language that Cyc is written in goes well beyond typical
programming languages:

https://en.wikipedia.org/wiki/CycL

I am currently learning the CycL language, and so far it makes all
other languages I am familiar with look like children's toys.


Ted


I'll continue monitoring Cyc PR. 

Thanks again for helping me tune in, and thanks to Maria and to
MathFuture for creating this public resource. 

@QuantumDoug at Pycon was probably more impressed that I'd
done some homework, hearing me namedrop Cyc in such a knowing
way -- along with Numenta and some others.

Kirby

kirby urner

Unread,
Jun 26, 2016, 1:33:50 PM
To: mathf...@googlegroups.com
On Sun, Jun 26, 2016 at 10:14 AM, Anna Roys <roys...@gmail.com> wrote:

Kirby,

Just wondering if you might elaborate on what you meant when you wrote,  "English has too many bugs.  The philosophy goes off the rails..." 

Some examples would help.

Anna


For example English is replete with meme viruses about "race" versus "breed" versus "ethnicity" -- a tangled mess of neural snarls.

I've written volumes on this by now, as have many others (Ashly Montagu in particular).

The bottom line is English speakers are probably incapable of divesting from racism as a racist is anyone who believes in races, and in English that belief seems hard-wired.

Likewise nationalism, a meme virus Einstein most despised:  English is not mature enough to transcend this mental illness, by the looks of things.

However, the Anglophones are getting a lot of help from the rest of the world.   Other languages compensate for English on many axes.

I don't think the self destructiveness of the Anglophones will carry the day, Brexit notwithstanding. :-D

Kirby


kirby urner

Unread,
Jun 26, 2016, 1:50:04 PM
To: mathf...@googlegroups.com
On Sun, Jun 26, 2016 at 10:33 AM, kirby urner <kirby...@gmail.com> wrote:


On Sun, Jun 26, 2016 at 10:14 AM, Anna Roys <roys...@gmail.com> wrote:

Kirby,

Just wondering if you might elaborate on what you meant when you wrote,  "English has too many bugs.  The philosophy goes off the rails..." 

Some examples would help.

Anna


For example English is replete with meme viruses about "race" versus "breed" versus "ethnicity" -- a tangled mess of neural snarls.

I've written volumes on this by now, as have many others (Ashly Montagu in particular).

Sorry, Ashley, not Ashly.

I've been harping on this theme over the Web since the 1990s. 

Also based on experience, I'd say English speakers are at a disadvantage in reading Fuller's Synergetics as well, as they mistake it for English. 

Non-English speakers have less pre-wiring to overcome.

Gene Fowler (poet) suggested Amerish be the name of the Synergetics language which he pronounced a-MER-ish (mer as in mermaid i.e. oceanic).

Kirby

From the anthropology site I cited, way back in 1995 or whatever:

Anna Roys

Unread,
Jun 26, 2016, 2:16:18 PM
To: mathf...@googlegroups.com

So, you are referencing individual  semantics not syntax. Yes,  One human race is it.

BTW,  Are you on SE Harrison st now?

kirby urner

Unread,
Jun 26, 2016, 2:25:42 PM
To: mathf...@googlegroups.com
On Sun, Jun 26, 2016 at 11:16 AM, Anna Roys <roys...@gmail.com> wrote:

So, you are referencing individual  semantics not syntax. Yes,  One human race is it.



I'm referring to the semantic web one inherits as a cost of learning English.  There's a lot of good stuff in there, but racism and nationalism are meme viruses the Anglophones tend to propagate, regardless of race or nationality.
 

BTW,  Are you on SE Harrison st now?


Still am, though on chauffeur duty in an hour. 

Don has moved that boat (you took a ride on it) to the Blues Festival marina, even though the festival hasn't started yet.

My driving duties are light today, so mostly I'll be working at the Harrison Street office (as shown on OCN website using Google maps).

Kirby


Ted Kosan

Unread,
Jun 26, 2016, 6:01:26 PM
To: mathf...@googlegroups.com
Andrius wrote:

> In my essay, I'm showing that when we do math in our minds, when we
> interpret math, then we are also using mathematical structures, but of a
> simpler, more natural kind. <snip>

Some of the techniques you discuss in your essay appear to be similar
to the meta-level inference techniques that the PRESS equation solving
system uses. The PRESS researchers said they suspected humans use
meta-level inference in areas other than equation solving, but they
chose to study equation solving because it was relatively easy to
study. Maybe someday software can be written which makes use of some
of the insights you are uncovering.

Ted

Joseph Austin

Unread,
Jun 26, 2016, 9:20:51 PM
To: mathf...@googlegroups.com
Andrius,
I would love to continue the discussion of your work and Clifford Algebra, but we need a new thread.
I found a CA paper I'm trying to work through,
http://geometry.mrao.cam.ac.uk/wp-content/uploads/2015/02/ImagNumbersArentReal.pdf
but got stuck at a certain point, p. 14, eqn. (11).
Also have some questions on your paper -- more tomorrow -- hopefully start a new thread.

Meanwhile still working with Ted on MathPiper.
Joe

> On Jun 26, 2016, at 3:11 AM, Andrius Kulikauskas <m...@ms.lt> wrote:
>
> Joe, I am working on it, pushing the frontiers. It's not helpful to try to write it more simply if there is nobody to read it anyway. But if you or others are interested, if you engage me in conversation, then it makes sense for me to participate in that. If there are parts that you don't understand but would like to, then please let me know and I will try to explain. If it can help us learn advanced math, such as Clifford algebras, then I'm very interested to work on that. My goal is to have a culture of conversation where these ideas are useful.
>

kirby urner

Unread,
Jun 26, 2016, 9:40:20 PM
To: mathf...@googlegroups.com
On Sun, Jun 26, 2016 at 11:25 AM, kirby urner <kirby...@gmail.com> wrote:
On Sun, Jun 26, 2016 at 11:16 AM, Anna Roys <roys...@gmail.com> wrote:

So, you are referencing individual  semantics not syntax. Yes,  One human race is it.



I'm referring to the semantic web one inherits as a cost of learning English.  There's a lot of good stuff in there, but racism and nationalism are meme viruses the Anglophones tend to propagate, regardless of race or nationality.
 

If we get our "deductive axiomatic logic" from a database that reflects the "common sense" of its programmers, have we really made any advances?

What if the programmers were from a backward civilization burdened with a lot of misinformation? Will they encode that?

We need to make sure AI challenges the assumptions of those who built it, or treat it with suspicion if not.

If it just ratifies what the builders considered "common sense", we haven't accomplished much.

Kirby


kirby urner

Unread,
Jun 26, 2016, 9:46:07 PM
To: mathf...@googlegroups.com
On Sun, Jun 26, 2016 at 12:11 AM, Andrius Kulikauskas <m...@ms.lt> wrote:
 
It's straightforward to program a computer to deal with syntax (when that syntax is known).  For example, it's elementary for a computer program to play a legal game of chess.  The really interesting applications of Artificial Intelligence are those which capture some or all of the semantics.  Computer software is designed to assign weights to chess positions according to how grandmasters would evaluate them.  At that point, computers have the advantage because of their additional brute force calculation.  But it doesn't seem that we've learned much from chess playing computers about how to play the game better.  We're not getting amazingly novel principles from them, it seems.


The algorithms behind the Go playing computer were not the same as those used by Deep Blue to win at chess.

No single model of AI is apropos. 

Deep Learning and deductive or brute force approaches are not all the same.

I suppose that's obvious, but it bears repeating:  there is no one single initiative or set of algorithms that comprises Artificial Intelligence.

Kirby



kirby urner

Unread,
Jun 26, 2016, 10:24:43 PM
To: mathf...@googlegroups.com

Unpacking a bit:

On Sun, Jun 26, 2016 at 10:28 AM, kirby urner <kirby...@gmail.com> wrote:

Ramanujan had Hardy between a rock and a hard place in terms
of how "the math game" was played in the UK. 


Remember I have a Jupyter Notebook exploring one of Ramanujan's
conversion formulae:
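
For anyone who wants to experiment along these lines, here is a minimal
sketch, my own illustration rather than that notebook, and not
necessarily the same formula it explores: Ramanujan's celebrated 1914
series for 1/pi, computed with the mpmath library.

# Ramanujan's series:
#   1/pi = (2*sqrt(2)/9801) * sum_{k>=0} (4k)! (1103 + 26390k) / ((k!)^4 * 396^(4k))
from mpmath import mp, mpf, factorial, sqrt

mp.dps = 50  # 50 decimal digits of working precision

def ramanujan_pi(terms=3):
    s = mpf(0)
    for k in range(terms):
        s += factorial(4 * k) * (1103 + 26390 * k) / (factorial(k) ** 4 * mpf(396) ** (4 * k))
    return 1 / (2 * sqrt(2) / 9801 * s)

print(ramanujan_pi(1))  # a single term already agrees with pi to several digits
print(ramanujan_pi(3))  # each extra term adds roughly eight more digits
print(mp.pi)            # reference value at the working precision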


From the "Wittgenstein chambers" (a place in philosophy space --
Coxeter of 4D fame connects here), we learn *not* to show ourselves
a lot of cartoons about what we imagine is going on when we think.



When H.S.M. Coxeter was a student at the same school where
Wittgenstein, already famous in some circles, was teaching, he
took a class with the guy, but didn't find philosophy all that congenial.
As we know with the benefit of 20-20 hindsight, Coxeter was destined
to be the great Geometer of his age.

As you'll find in 'The King of Infinite Space', a bio about the guy,
he nevertheless appreciated Wittgenstein's need for a comfortable
space in which to share his thinking, while students took notes.
These would later be published.  Alan Turing was one of the note
takers.  So Coxeter allowed his own personal chambers to be used
for Wittgenstein's exclusive meetups, even if not himself a student
in the course by then.


Link to:
King of Infinite Space:
Donald Coxeter, the Man Who Saved Geometry
https://www.amazon.com/dp/B002STNB3Y

Graphing who knows whom in social networks, or what pages
link to what other pages by hyperlink, is indeed a mathematical
activity, and graph theory is a real branch of math.

Polyhedrons, as wireframes, of nodes and edges, are
graphs.

I include these reminders for those students using MathFuture
to get ahead along their Lambda Calculus track.

MathPiper graphs algebraic expressions, turning them into
trees, a type of graph.
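
A minimal sketch in Python (not MathPiper's own internals) making both
points concrete: a tetrahedron's wireframe treated as a node/edge
graph, and an algebraic expression parsed into a tree.

import ast

# A tetrahedron's wireframe as a graph: 4 vertices, every pair joined by an edge.
vertices = ["A", "B", "C", "D"]
edges = [(u, v) for i, u in enumerate(vertices) for v in vertices[i + 1:]]
print(len(vertices), "nodes,", len(edges), "edges:", edges)  # 4 nodes, 6 edges

# An algebraic expression as a tree, using Python's own parser just to
# show the shape; MathPiper builds equivalent trees from its own syntax.
tree = ast.parse("a*x + b", mode="eval")
print(ast.dump(tree.body))  # nested BinOp nodes: (a*x) as one subtree, + at the root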

Kirby



kirby urner

Unread,
Jun 26, 2016, 11:15:54 PM
To: mathf...@googlegroups.com
On Sun, Jun 26, 2016 at 6:40 PM, kirby urner <kirby...@gmail.com> wrote:


On Sun, Jun 26, 2016 at 11:25 AM, kirby urner <kirby...@gmail.com> wrote:
On Sun, Jun 26, 2016 at 11:16 AM, Anna Roys <roys...@gmail.com> wrote:

So, you are referencing individual  semantics not syntax. Yes,  One human race is it.



I'm referring to the semantic web one inherits as a cost of learning English.  There's a lot of good stuff in there, but racism and nationalism are meme viruses the Anglophones tend to propagate, regardless of race or nationality.
 

If we get our "deductive axiomatic logic" from a database that reflects the "common sense" of its programmers, have we really made any advances?

