Ted
In a language like J, with so many operators and different parts of speech, it made sense to abandon any implicit prioritization and make parentheses the only way to change the operations queue. Strict right-to-left order is followed otherwise. There's no LISP-like notation in that case, though there are ways to manipulate the right-to-left queue in various ways, mainly with an eye towards purging them of mutables, leaving only the outlines of an algorithm, a verb. Very pretty.

Kirby
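To make Kirby's right-to-left point concrete, here is a minimal Python sketch (not J itself; the token format and operator set are invented for illustration) of evaluating a flat expression strictly right-to-left, with no implicit precedence:

# Illustration only: evaluate a flat expression strictly right-to-left,
# ignoring conventional operator precedence, the way J does.
def eval_right_to_left(tokens):
    # tokens alternate operands and operators, e.g. [3, '*', 4, '+', 2]
    ops = {'+': lambda a, b: a + b,
           '-': lambda a, b: a - b,
           '*': lambda a, b: a * b,
           '/': lambda a, b: a / b}
    result = tokens[-1]                      # start from the rightmost operand
    for i in range(len(tokens) - 2, 0, -2):  # walk leftward, operator by operator
        result = ops[tokens[i]](tokens[i - 1], result)
    return result

print(eval_right_to_left([3, '*', 4, '+', 2]))  # 18, i.e. 3 * (4 + 2), not 14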
On Jun 24, 2016, at 8:51 PM, Ted Kosan <ted....@gmail.com> wrote:
Re: [mathpiper-dev] The precedence of the unary minus operator

However, currently my primary goal [for MathPiper] is to get the current rule set to
the point where it will solve a wide range of the more fundamental
elementary algebra equations. Then, I want to use this "phase 1"
system to start explicitly teaching programmers who think they are bad
at math how to easily solve this class of elementary algebra
equations.
You make the same observation in Lesson 6: "Why were AI researchers the first group in history to discover that mathematicians don't know how they do math? I think it's because computers were the first 'students' in history that absolutely refused to learn any mathematics that was not taught explicitly."

Joe
On Jun 25, 2016 10:37, "Ted Kosan" wrote:
> > I'm a skeptic when it comes to the thesis an AI-bot will derive all provable
> > theorems given axioms and rules for deduction. That smacks of "monkey at a
> > typewriter" as in "given infinite time" (which is quite impractical).
>
> For work on the frontiers of mathematics, maybe
> non-computable leaps of logic will always be required.
> However, for the purposes of teaching K-College students
> mathematics, AI math teachers will be much more effective than
> human math teachers.
>
> Ted
>
A teacher is a role model showing students what it's like to be a master of X.
So what's needed are models of Real + Artificial Intelligence working in tandem, as in "teacher, show me how to work with robots and/or software libraries."
Purely AI teachers fail as role models for humans: some humans may aspire to be like AI bots at some level, but it's not in their wiring.
No AI bot could invent Quadrays, accessible to 9th graders.
Nor has an AI bot ever invented a computer language as expressive as Python or Clojure or MathPiper.
Based on the track record I've seen, I'm excited by AI and bullish about its future evolution, but as Penrose points out, no computer can solve even simple Martin Gardner style brain teasers as fast as he can, on average. The bots wouldn't know where to begin without their army of puppeteer programmers.
Humans still leave AI in the dust when it comes to coping with random challenges.
But it was never either / or. I feel no compulsion to "pick a side".
Kirby
On Jun 25, 2016, at 1:34 PM, Ted Kosan <ted....@gmail.com> wrote:

As for the morphism you have noticed between the Field Axioms with
exponentiation, Finite Automata, and Structured Programs, I think
breaking each of these areas down into their object-language and
meta-language components would reveal the source of the similarities.
On Jun 25, 2016, at 10:32 AM, kirby urner <kirby...@gmail.com> wrote:

Case in point: hex numbers, using 0-F to label the 16 permutations of two things taken four at a time. Base 10 does not have this nice segue to/from permutations / combinatorics. Binary is inherently more primitive / foundational than decimal.
That's why here in the Silicon Forest of all places we won't be able to focus on Base 10 as exclusively and religiously as other economies might. The Left Coast in general is probably more hex-based than decimal.
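In Python, a quick sketch of that hex/permutations connection (illustration only, using only the standard library):

from itertools import product

# Each hex digit 0-F labels one of the 16 permutations of two symbols
# (0 and 1) taken four at a time -- the sixteen 4-bit patterns.
for bits in product('01', repeat=4):
    pattern = ''.join(bits)
    print(pattern, '->', hex(int(pattern, 2))[2:].upper())
# 0000 -> 0, 0001 -> 1, ... 1010 -> A, ... 1111 -> F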
Kirby wrote:
> A teacher is a role model showing students what it's like to be a master of X.
Unfortunately, most current math teachers are anti-role models who
show students that people who don't understand how math works can
still be paid to teach it :-)
> So what's needed are models of Real + Artificial Intelligence working in
> tandem, as in "teacher, show me how to work with robots and/or software
> libraries."
Unfortunately, most of the "real" intelligence in the world is
currently possessed by a small number of genius-level people who
understand and develop logic-based AI theory. These AI geniuses are
encoding their real knowledge into AI systems, and these AI systems
will in turn teach this knowledge to all of the humans in the world.
This process will put most current mathematics teachers out of a job.
> Purely AI teachers fail as role models for humans because whereas some
> humans may aspire to be like AI bots at some level, it's not in their
> wiring.
It is not currently in their wiring, but it will be because AI will
rewire them. The developers of the PRESS elementary algebra equation
solving AI observed that this rewiring process happened to them. Over
time the way they used to solve elementary algebra equations was
replaced by doing it the PRESS way because the PRESS way was clearer,
easier, and more efficient.
> No AI bot could invent Quadrays, accessible to 9th graders.
History is full of "No AI could..." statements that were later proved
to be wrong. For example:
"No AI could understand natural language."
"No AI could write news articles."
"No AI could drive a car."
etc.
> Nor has an AI bot ever invented a computer language as expressive as Python
> or Clojure or MathPiper.
Computation = Controlled Deduction, so computer programs are
controlled deductions.
Advanced logic-based AIs like CYC won't need to use
typical programming languages to make deductions because they are able
to make deductions directly. I think it is likely that in the future a
small team of logic-based AI experts will be able to use systems like
CYC to replace hundreds or maybe even thousands of typical computer
programmers.
> Based on the track record I've seen, I'm excited by AI and bullish about
> its future evolution, but as Penrose points out, no computer can solve even
> simple Martin Gardner style brain teasers as fast as he can, on average. The
> bots wouldn't know where to begin without their army of puppeteer
> programmers.
As Bartlett said, "The greatest shortcoming of the human race is our
inability to understand the exponential function." The following clip
from Monty Python's "The Holy Grail" is one of the best visual
depictions I have found of what an exponential process looks like:
https://youtu.be/GJoM7V54T-c?t=32
AI increases in capabilities exponentially. However, with exponential
processes most of the change happens towards the end of the process.
To most humans, AI looks like it is progressing at a rate that is
similar to Lancelot's in the clip, but they are going to be just as
surprised as the guards in the clip were when AI passes the "dog leg"
in its exponential growth curve.
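A quick numeric illustration of that point (a minimal sketch; the doubling schedule is just an assumption for the demo): under repeated doubling, each new step exceeds all previous steps combined, so most of the total change always arrives at the end.

# With doubling, the newest term 2**n exceeds the sum of all
# earlier terms (2**n - 1), so the "end" dominates the process.
for n in range(1, 6):
    prior = sum(2**k for k in range(n))  # 2**0 + ... + 2**(n-1)
    print(f"step {n}: new = {2**n}, all prior combined = {prior}")
# step 5: new = 32, all prior combined = 31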
> But it was never either / or. I feel no compulsion to "pick a side".
History has repeatedly shown that new disruptive technologies always
produce winners and losers. After gunpowder was invented, those who
embraced guns won, and those who clung to obsolete knights and castles
lost. After the internal combustion engine was invented, those who
embraced cars won, and those who clung to obsolete horse-drawn
carriages lost. After aircraft carriers were invented, those who
embraced them won and those who clung to obsolete battleships lost.
Advanced logic-based AI systems will start making their appearance
soon and they will probably be one of the most disruptive technologies
in history. The only way to win when a new disruptive technology
becomes available is to pick its side and put all one's energy into
mastering it.
Ted
On Jun 25, 2016, at 10:32 AM, kirby urner <kirby...@gmail.com> wrote:

> Case in point: hex numbers, using 0-F to label the 16 permutations of two things taken four at a time. Base 10 does not have this nice segue to/from permutations / combinatorics. Binary is inherently more primitive / foundational than decimal.
> That's why here in the Silicon Forest of all places we won't be able to focus on Base 10 as exclusively and religiously as other economies might. The Left Coast in general is probably more hex-based than decimal.

OK, if you want hex, how about 6 more digit glyphs?
We barely have digits for ten and eleven for base twelve, thanks to the Dozenal Society -- there were actually two proposals, one of which has Unicode code points. But in my opinion, the wrong proposal won: the glyphs, simplified to 7-stroke forms, are ambiguous.
In [7]: hex(256)
Out[7]: '0x100'
In [8]: bin(256)
Out[8]: '0b100000000'
In [9]: int("AAA", 11)
Out[9]: 1330
In [10]: int("666", 7)
Out[10]: 342
I advocate teaching place-value arithmetic in BINARY, then introducing hex as a shortcut way of writing binary.
Once students understand that, it probably wouldn't hurt to expose them to the archaic decimal system so they will be able to read dates engraved on plaques and tombstones, and numbers in printed books written in earlier centuries. :-)

Joe
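A minimal Python sketch of Joe's binary-to-hex shortcut (the helper name is invented for illustration): each hex digit is just shorthand for one 4-bit group.

def binary_to_hex(bit_string):
    # pad on the left so the length is a multiple of 4
    width = -(-len(bit_string) // 4) * 4
    bit_string = bit_string.zfill(width)
    # each 4-bit nibble becomes exactly one hex digit
    nibbles = [bit_string[i:i + 4] for i in range(0, width, 4)]
    return ''.join(hex(int(nib, 2))[2:].upper() for nib in nibbles)

print(binary_to_hex('100000000'))  # '100' -- the same 0x100 == 256 as above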
Have you ever contemplated what it would mean to have machines with say 1,000 times the intelligence of the average human?
We would be jellyfish in their "hands".
So does #AI mean "talking to dolls and writing scientific papers
about how the dolls felt about their experience?"
The level of gullibility (= astonished incredulity) demonstrated
around these puppets is somewhat alarming to me.
https://youtu.be/W0_DPi0PmF0
Kirby,
Just wondering if you might elaborate on what you meant when you wrote, "English has too many bugs. The philosophy goes off the rails..."
Some examples would help.
Anna
On Jun 25, 2016 8:33 PM, "kirby urner" wrote:
Kirby wrote:
> It'd be misleading to portray "AI" as a single monolithic discipline.
The Cyc project is unique because it is a 30+ year old project that is
the AI equivalent of a moonshot. No other AI comes even close to its
capabilities. It is the only AI that currently possesses common sense.
> definitely feel free
> to counter my assessments with URLs to sources that sound more
> bullish on this or that branch of AI.
This hour-long talk on Cyc was given by Dr. Douglas Lenat this past December:
https://www.youtube.com/watch?v=4mv0nCS2mik
If this talk does not make you bullish on logic-based AI, then probably
nothing will.
> There's a lot going on in that field, with Deep Learning currently
> #trending (I went to some well-attended talks on that at Pycon).
> I'd also check up on Numenta.
None of the statistics-based AIs are capable of understanding the
information they work with. Cyc is capable of understanding
information because it has the same common sense that humans have.
> I'm for letting AI solve all elementary algebra equations, with just a
> few geniuses such as yourself
Oh, I am not a genius, but Dr. Bundy is. I am just an average
programmer who was lucky enough to stumble upon his work and stubborn
enough to keep studying it until I started to understand it.
> We won't drill kids in doing it the "AI way", we'll just give them a sense of
> the algorithms, walk through a few examples, then turn to other topics
> that maybe depend on solving these things.
<...>
> Do you think we can let humans off the hook to just run around naked
> eating grapes all day, while machines do all real thinking? They'll be
> just like the Eloi in The Time Machine (H.G. Wells). A part of me thinks
> "how wonderful, lets do it!"
>
> But knowing humans, they won't be content to just sit around and be
> pampered by their AI bot pets. That's just not what humans are like
> in my experience.
I think one of the primary jobs AI will be tasked with in the future
is teaching all humans how to think logically. I think people who can
think logically are more likely to spend their time wisely than those
who can't.
>
> Maybe you'll let me know if you hear of one. I couldn't name any.
The CycL language that Cyc is written in goes well beyond typical
programming languages:
https://en.wikipedia.org/wiki/CycL
I am currently learning the CycL language, and so far it makes all
other languages I am familiar with look like children's toys.
Ted
On Sun, Jun 26, 2016 at 10:14 AM, Anna Roys <roys...@gmail.com> wrote:

Kirby,
Just wondering if you might elaborate on what you meant when you wrote, "English has too many bugs. The philosophy goes off the rails..."
Some examples would help.
Anna
For example, English is replete with meme viruses about "race" versus "breed" versus "ethnicity" -- a tangled mess of neural snarls. I've written volumes on this by now, as have many others (Ashley Montagu in particular).
So, you are referencing individual semantics, not syntax. Yes, one human race is it.
BTW, are you on SE Harrison St now?
On Sun, Jun 26, 2016 at 11:16 AM, Anna Roys <roys...@gmail.com> wrote:

> So, you are referencing individual semantics, not syntax. Yes, one human race is it.
I'm referring to the semantic web one inherits as a cost of learning English. There's a lot of good stuff in there, but racism and nationalism are meme viruses the Anglophones tend to propagate, regardless of race or nationality.
It's straightforward to program a computer to deal with syntax (when that syntax is known). For example, it's elementary for a computer program to play a legal game of chess. The really interesting applications of Artificial Intelligence are those which capture some or all of the semantics. Computer software is designed to assign weights to chess positions according to how grandmasters would evaluate them. At that point, computers have the advantage because of their additional brute-force calculation. But it doesn't seem that we've learned much from chess-playing computers about how to play the game better. We're not getting amazingly novel principles from them, it seems.
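For what it's worth, here's a toy Python sketch of that weighted-evaluation idea (material count only, using the textbook piece values; real engines weigh far more features than this):

# Toy static evaluation: sum piece weights for White minus Black,
# a crude stand-in for the grandmaster-tuned weights mentioned above.
PIECE_WEIGHTS = {'P': 1, 'N': 3, 'B': 3, 'R': 5, 'Q': 9}

def evaluate(pieces):
    # pieces: letters on the board; uppercase = White, lowercase = Black
    score = 0
    for piece in pieces:
        weight = PIECE_WEIGHTS.get(piece.upper(), 0)
        score += weight if piece.isupper() else -weight
    return score

print(evaluate('QRRpppp'))  # 19 - 4 = 15, White is ahead on material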
Ramanujan had Hardy between a rock and a hard place in terms of how "the math game" was played in the UK.
From the "Wittgenstein chambers" (a place in philosophy space --
Coxeter of 4D fame connects here), we learn *not* to show ourselves a lot of cartoons about what we imagine is going on when we think.
On Sun, Jun 26, 2016 at 11:25 AM, kirby urner <kirby...@gmail.com> wrote:

> On Sun, Jun 26, 2016 at 11:16 AM, Anna Roys <roys...@gmail.com> wrote:
> > So, you are referencing individual semantics, not syntax. Yes, one human race is it.
>
> I'm referring to the semantic web one inherits as a cost of learning English. There's a lot of good stuff in there, but racism and nationalism are meme viruses the Anglophones tend to propagate, regardless of race or nationality.
Here's a good New York Times article spelling out the issue:
http://www.nytimes.com/2016/06/26/opinion/sunday/artificial-intelligences-white-guy-problem.html

If we get our "deductive axiomatic logic" from a database that reflects the "common sense" of its programmers, have we really made any advances?