Inquiry Into Inquiry • On Initiative


Jon Awbrey

Jul 10, 2022, 10:15:23 AM
to Cybernetic Communications, Laws of Form, Ontolog Forum, Structural Modeling, SysSciWG
Cf: Inquiry Into Inquiry • On Initiative 1
http://inquiryintoinquiry.com/2022/07/10/inquiry-into-inquiry-on-initiative-1/

Re: R.J. Lipton and K.W. Regan
https://rjlipton.wpcomstaging.com/about-me/
::: Sorting and Proving
https://rjlipton.wpcomstaging.com/2022/06/13/sorting-and-proving/

<QUOTE Lipton and Regan:>
GPT-3 works by playing a game of “guess the next word” in a phrase.
This is akin to “guess the next move” in chess and other games,
and we will have more to say about it.
</QUOTE>
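Lipton and Regan's "guess the next word" framing can be illustrated with a toy bigram model — a drastically simplified stand-in for what GPT-3 does, with an invented corpus (everything below is illustrative, not any real system's code):

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count word-pair frequencies in a toy corpus."""
    words = text.split()
    model = defaultdict(Counter)
    for w1, w2 in zip(words, words[1:]):
        model[w1][w2] += 1
    return model

def guess_next(model, word):
    """Return the most frequent successor of `word`, or None if unseen."""
    followers = model.get(word)
    return followers.most_common(1)[0][0] if followers else None

# Invented corpus for illustration.
corpus = "the cat sat on the mat the cat ate the fish"
model = train_bigrams(corpus)
```

A real language model replaces the frequency table with a learned distribution over a huge vocabulary, but the game — predict the next token from context — is the same.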

As a person who struggles on a daily basis to rise to the level of sentience,
I've learned it has more to do with beginning than ending this sentence.

Resources —

• Survey of Inquiry Driven Systems
( https://inquiryintoinquiry.com/2020/12/27/survey-of-inquiry-driven-systems-3/ )

• Survey of Theme One Program
( https://inquiryintoinquiry.com/2022/06/12/survey-of-theme-one-program-4/ )

Regards,

Jon

cc: Conceptual Graphs • Cybernetics • Laws of Form • Ontolog Forum
cc: FB | Inquiry Driven Systems • Structural Modeling • Systems Science

Jon Awbrey

Jul 20, 2022, 9:06:09 AM
to Cybernetic Communications, Laws of Form, Ontolog Forum, Structural Modeling, SysSciWG
Cf: Inquiry Into Inquiry • On Initiative 2
https://inquiryintoinquiry.com/2022/07/20/inquiry-into-inquiry-on-initiative-2/

Re: Scott Aaronson ( https://scottaaronson.blog/ )
(1) ( https://scottaaronson.blog/?p=6524 )
(2) ( https://scottaaronson.blog/?p=6534 )
(3) ( https://scottaaronson.blog/?p=6541 )

<QUOTE SA:>
Personally, I'd give neither of them [Bohr or Einstein] perfect marks,
in part because they not only both missed Bell's Theorem, but failed
even to ask the requisite question (namely: what empirically verifiable
tasks can Alice and Bob use entanglement to do, that they couldn't have
done without entanglement?). But I'd give both of them very high marks
for, y'know, still being Albert Einstein and Niels Bohr.
</QUOTE>

To Ask The Requisite Question
=============================

This brings me to the question I was going to ask on the AI post,
but was afraid to ask.

Does GPT-3 ever ask an original question on its own?

Simply asking for clarification of an interlocutor's prompt is not
insignificant, but I'm really interested in something more spontaneous
and “self‑starting” than that. Does it ever wake up one morning, as
it were, and find itself in a “state of question”, a state of doubt or
uncertainty so compelling as to bring it to ask on its own initiative
what we might recognize as a novel question?

Resources
=========

Survey of Inquiry Driven Systems
https://inquiryintoinquiry.com/2020/12/27/survey-of-inquiry-driven-systems-3/

Survey of Theme One Program
https://inquiryintoinquiry.com/2022/06/12/survey-of-theme-one-program-4/

Regards,

Jon

Jon Awbrey

Mar 1, 2023, 12:57:03 PM
to Cybernetic Communications, Laws of Form, Ontolog Forum, Structural Modeling, SysSciWG
Cf: Inquiry Into Inquiry • On Initiative 3
http://inquiryintoinquiry.com/2023/03/01/inquiry-into-inquiry-on-initiative-3/

Re: Scott Aaronson on Large Language Models ( https://scottaaronson.blog/?p=7042 )
::: My Comment ( https://scottaaronson.blog/?p=7042#comment-1946961 )

All,

The more fundamental problem I see here is the failure to grasp the nature of
the task at hand, and this I attribute not to a program but to its developers.

Journalism, Research, and Scholarship are not matters of generating probable
responses to prompts (stimuli). What matters is producing evidentiary and
logical supports for statements. That is the task requirement the developers
of LLM-Bots are failing to grasp.

There is nothing new about that failure. There is a long history of attempts
to account for intelligence and indeed the workings of scientific inquiry based
on the principles of associationism, behaviorism, connectionism, and theories of
that order. But the relationship of empirical evidence, logical inference, and
scientific information is more complex and intricate than is dreamt of in those
reductive philosophies.

Regards,

Jon ( https://mathstodon.xyz/@Inquiry )

Jon Awbrey

May 22, 2023, 11:55:23 AM
to Cybernetic Communications, Laws of Form, Ontolog Forum, Structural Modeling, SysSciWG
Cf: Inquiry Into Inquiry • On Initiative 5
https://inquiryintoinquiry.com/2023/05/22/inquiry-into-inquiry-on-initiative-5/

Re: Inquiry Into Inquiry • On Initiative 2
https://inquiryintoinquiry.com/2022/07/20/inquiry-into-inquiry-on-initiative-2/
Re: Mathstodon • Joeri Sebrechts
https://mathstodon.xyz/@joe...@mstdn.social/110401673746671834

<QUOTE JS:>
That's not how it works. The model lacks agency. It is a machine
whose gears are cranked by the user's prompt. It can ask questions,
but only when prompted to. It is not doing anything at all when it
isn't being prompted.
</QUOTE>

Sure, I understand that. The hedge “as it were” is used advisedly for the
sake of the argument. (I wrote my own language learner back in the 80s.)

Speaking less metaphorically, the program and its database are always in their
respective states and the program has the capacity to act on the database even
when not engaged with external prompts.

Is there any reason why the program's “housekeeping” functions should not include
one to measure its current state of “uncertainty” (entropy of a distribution) with
regard to potential questions — or any reason why it should “hurt to ask”?
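That housekeeping function can be sketched in a few lines — assuming a hypothetical distribution over candidate questions, with weights and a threshold that are purely illustrative, not drawn from any real system:

```python
import math

def entropy(probs):
    """Shannon entropy (in bits) of a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical candidate questions with invented weightings.
candidates = {
    "What happens if the premise is false?": 0.40,
    "Is there a counterexample?": 0.35,
    "Does the rule generalize?": 0.25,
}

h = entropy(candidates.values())
THRESHOLD = 1.5  # bits; an arbitrary cutoff for "compelling" doubt

if h > THRESHOLD:
    # Uncertainty is high enough: volunteer the top-weighted question.
    question = max(candidates, key=candidates.get)
else:
    question = None
```

The point of the sketch is only that nothing in the architecture forbids such a measure: a spread-out distribution (high entropy) could trigger a question on the program's own initiative, without waiting for a prompt.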

As it were …

Regards,

Jon