--
You received this message because you are subscribed to the Google Groups "FriCAS - computer algebra system" group.
To unsubscribe from this group and stop receiving emails from it, send an email to fricas-devel...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/fricas-devel/11e29cc8-7c91-4ad2-a56b-d380db9a33e3n%40googlegroups.com.
I'm not the only one raising the question of AI and GPT systems and
their effects on mathematics. The best and brightest of mathematicians,
of which I'm not one, are raising the issue.
There is a lecture by Jeremy Avigad, Professor of Mathematics and
Philosophy at CMU, titled "Is Mathematics Obsolete?"

Timothy Gowers, a Fields Medalist, raises similar issues in "Can
Computers Do Mathematical Research?" Gowers talks about computer proofs
of mathematical theorems and automatic mathematical reasoning. Gowers
says that this may lead to a brief golden age, when mathematicians
still dominate in original thinking and computer programs focus on
technical details, "but I think it will last a very short time," given
that there is no reason that computers cannot eventually become
proficient at the more creative aspects of mathematical research as
well.
Certainly, as I've mentioned, proof assistants provide all of the
elements of games, which makes them perfect for reinforcement
learning, the final training stage of GPT systems. All this requires
is a bright PhD candidate.
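To make the "proof assistants are games" point concrete, here is a toy
sketch (entirely hypothetical, not a real prover) of the framing
reinforcement learning needs: a state, legal moves, and a win test.
The state is the set of open goals, a move rewrites one goal, and the
reward signal is "no goals left":

```python
# Toy goal system: a goal ("and", a, b) splits into a and b;
# the atom "true" is discharged outright. (Hypothetical rules,
# for illustration only.)

def apply_move(goals, i):
    """Apply the one applicable rule to goal i; return the new state."""
    g = goals[i]
    rest = goals[:i] + goals[i + 1:]
    if g == "true":
        return rest                      # goal discharged
    if isinstance(g, tuple) and g[0] == "and":
        return rest + [g[1], g[2]]       # split the conjunction
    return goals                         # no rule applies

def won(goals):
    return len(goals) == 0               # "reward": proof complete

# Prove true /\ (true /\ true) by always working on goal 0.
state = [("and", "true", ("and", "true", "true"))]
while not won(state):
    state = apply_move(state, 0)
print("QED" if won(state) else "stuck")  # prints "QED"
```

A learned policy would replace the fixed "always goal 0" choice with a
choice of goal and tactic, which is exactly the game-playing setup
used in reinforcement learning.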
What few people so far have done is realize the impact this will have
on computer algebra. Certainly a GPT system can answer questions about
the proper handling of sqrt(4), and/or work through both the positive
and negative root assumptions in a calculation by conducting the
calculation with the simplification to 2 and to -2 and presenting both
results. All the GPT prompt would have to say is "assuming sqrt(4) is
-2 compute ...".
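For reference, here is a minimal sketch of the sqrt(4) issue using
SymPy as the illustrative CAS (the choice of SymPy is mine, not from
the thread): the system returns the principal root by default, and the
"both branches" reading amounts to carrying each solution of y^2 = 4
through the calculation separately:

```python
from sympy import sqrt, solve, Symbol

# A CAS conventionally returns the principal (non-negative) root:
print(sqrt(4))               # 2

# To honor "assuming sqrt(4) is -2", work instead with both
# solutions of y**2 = 4 and carry each branch separately:
y = Symbol("y")
roots = solve(y**2 - 4, y)
print(roots)                 # [-2, 2]

# Example: evaluating 1 + sqrt(4) under each branch assumption:
print([1 + r for r in roots])   # [-1, 3]
```

This is what the post asks the GPT system to do in prose: run the same
calculation once per branch and present both results.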
On Thu, Jan 18, 2024 at 01:16:55AM +1100, Hill Strong wrote:
> On Wed, Jan 17, 2024 at 11:09 PM Tim Daly <axio...@gmail.com> wrote:
>
> They can raise the issue all they like. What they are not seeing is that
> artificial stupidity (AI) systems are limited. As I said above. The only
> intelligence you will find in these systems is the stuff generated by human
> intelligence. No artificial stupidity (AI) system can ever exceed the
> limitations of the programming entailed in them.
Well, humans are at least as limited: your claim is as true as the
claim that "humans can not ever exceed the limitations of the
programming entailed in them". In the case of humans, "programming"
means both the things hardcoded in the genome and the chemical
machinery of the body, and the "learned stuff". Already at age 1,
toys, a customized environment, and interactions with other humans
make a significant difference to learning. At later stages there are
stories which were perfected over thousands of years, school
curricula, and books. There were myths that people from non-western
cultures are less "intelligent" than people from western culture.
Deeper research showed that part of our "intelligence" is really
"programmed" (learned), and the "programming" in different cultures
was different.
In a slightly different spirit: in the fifties there were efforts to
define intelligence, and researchers from that time postulated
several abilities that every intelligent being should have. Based on
that, there were "proofs" that artificial intelligence is impossible.
One such "proof" goes as follows: people can prove math theorems.
But Goedel and Church proved that no machine can prove all math
theorems. So no machine will match humans. The fallacy of this
argument is a classic abuse of quantifiers: humans can prove some
(easy) math theorems. No machine or human can prove _each_ math
theorem. Actually, we still do not know how hard proving is, but the
common belief is that the complexity of proving is exponential in the
length of the proof. What is proven is that there is no computable
bound on the length of the shortest proof. Clearly this difficulty,
that is, the large length of proofs, affects humans as much as
computers.
To put it differently, if you put strong requirements on
intelligence, like the ability to prove each math theorem, then
humans are not intelligent. If you lower your requirements so that
humans are deemed intelligent, then an appropriately programmed
computer is likely to qualify.
One more thing: early in the history of AI there was Eliza. It was a
simple pattern matcher clearly having no intelligence, yet it was
able to fool some humans into believing that they were communicating
with another human (ok, at least for some time). Some people take
this to consider all solved AI problems a kind of fake, showing that
the problem was not about intelligence. But IMO there is a different
possibility: that all our intelligence is "fake" in a similar vein.
In other words, we do not solve the general problem but use tricks
which happen to work in real life. Or to put it differently, we may
be much more limited than we imagine. Eliza clearly shows that we can
be easily fooled into assuming that something has many more abilities
than it really has (and "something" may really be "we").
--
Waldek Hebisch
While the question of human intelligence is interesting, please note
that my post topic has been restricted to the application of GPT-like
/ AI-like systems to the automation of proof and computer algebra.
Given the changes I'm seeing and the research I'm doing, it is likely
that there will be a very rapid change from the legacy version of
computer algebra we now use, one that takes advantage of the growing
power of these systems.
I've been involved and published in AI-like systems since 1973,
including such "AI-like" areas as machine vision, robotics, expert
systems, speech, neural nets, knowledge representation, planning,
etc. All of these were considered "intelligent behavior" ... until
they weren't. Indeed, in the 1960s computer algebra-like systems were
considered "intelligent".
Intelligence is a "suitcase word" [0], interesting but irrelevant.
The topic at hand is "computational mathematics", meaning the
application of recent technology (proof assistants, theorem
databases, algorithm proofs, and reasoning assistants) to moving
beyond 50-year-old legacy computer algebra. Proving computer algebra
algorithms correct and providing useful interfaces is the long-term
goal.
To view this discussion on the web visit https://groups.google.com/d/msgid/fricas-devel/538038f1-74ca-4a0a-ac07-6ccaf9115356n%40googlegroups.com.