Hi John and others,
Alex,
That is true:
Alex: There is a hype and anti-hype. You know.
Apple's anti-hype is absolutely correct about the majority of GPT promoters.
Fortunately, there are a fair number who recognize the dangers of "educated guesses" by LLM-based systems. There are various hybrids that use symbolic methods (both traditional and AI-based) to evaluate the LLM guesses and weed out the errors and hallucinations.
The best ones had developed successful symbolic AI systems long before LLMs were invented. They combine the best of both worlds. Unfortunately, many of the new LLM-based developers don't have sufficient background in symbolic methods to develop the hybrids.
John
Yep!
BTW -- I've published some additional articles about LLMs and their symbiotic relationship with Semantic Web Project related systems (e.g., Knowledge Graphs) that some may not have seen on this list.
[1] https://www.linkedin.com/pulse/large-language-models-llms-knowledge-graph-symbiosis-revisited-ty95e/ -- Large Language Models (LLMs) & Knowledge Graph Symbiosis — Revisited
[2] https://www.linkedin.com/pulse/from-complexity-clarity-how-natural-language-roles-around-idehen-jtm8e/ -- From Complexity to Clarity: How Natural Language is Transforming Software—and the Roles Around It
[3] https://www.linkedin.com/pulse/why-philosophy-eat-ai-kingsley-uyi-idehen-rdd9e/ -- Why Philosophy Will Eat AI
[4] https://www.linkedin.com/pulse/semantic-web-project-didnt-fail-waiting-ai-yin-its-yang-idehen-j01se/ -- The Semantic Web Project Didn’t Fail — It Was Waiting for AI (The Yin of its Yang)
Kingsley
From: "alex.shkotin" <alex.s...@gmail.com>
John and All,
Since the Medium article is paywalled, I asked ChatGPT to clarify the situation: https://chatgpt.com/s/t_685a5600a8888191ab2ccdf750abf51e
It is full of links and, of course, should be read with caution.
There is a hype and anti-hype. You know.
Alex
Tuesday, June 24, 2025, at 00:00:16 UTC+3, John F Sowa:
This is what I've been saying for the past two years. LLMs are very good for generating hypotheses (good guesses), but without symbolic methods for evaluating what they find, the results are just guesses -- and hallucinations.
When I need information, I do my own searching and use Wikipedia and other reliable web sites. I never trust anything generated by any system that uses GPT or other LLM systems -- UNLESS they use symbolic systems to evaluate anything generated by LLMs AND they provide links to the SOURCES of the information.
Our Permion system does that.
Short excerpt below.
John
__________________________
Apple Just Pulled the Plug on the AI Hype. Here’s What Their Shocking Study Found
We’re living in an era of incredible AI hype. Every week, a new model is announced that promises to “reason,” “think,” and “plan” better than the last. We hear about OpenAI’s o1, o3, and o4, Anthropic’s “thinking” Claude models, and Google’s Gemini frontier systems, all pushing us closer to the holy grail of Artificial General Intelligence (AGI). The narrative is clear: AI is learning to think.
But what if it’s all just an illusion? What if these multi-billion-dollar models, promoted as the next step in cognitive evolution, are actually just running a more advanced version of autocomplete?
That’s the bombshell conclusion from a quiet, systematic study published by a team of researchers at Apple. They didn’t rely on hype or flashy demos. Instead, they put these so-called “Large Reasoning Models” (LRMs) to the test in a controlled environment, and what they found shatters the entire narrative.
--
Regards,
Kingsley Idehen
Founder & CEO, OpenLink Software
Home Page: http://www.openlinksw.com
Community Support: https://community.openlinksw.com
LinkedIn: http://www.linkedin.com/in/kidehen
Twitter: https://twitter.com/kidehen
Dear and respected colleague,
Allow me to recommend to you the book by Karen Hao: Empire of AI.
Forget the narrative around Sam Altman et al. She is competent. In disclosing the various aspects of data cleaning, she comes very close to recognizing the role of ontology.
Best wishes.
Mihai Nadin
--
All contributions to this forum are covered by an open-source license.
For information about the wiki, the license, and how to subscribe or
unsubscribe to the forum, see http://ontologforum.org/info
---
You received this message because you are subscribed to the Google Groups "ontolog-forum" group.
To unsubscribe from this group and stop receiving emails from it, send an email to
ontolog-foru...@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/ontolog-forum/a0893aa5-ff51-4ed3-8c8d-e02d388eee4e%40openlinksw.com.
John,
This is one of the paradoxes of intelligence: it rests on knowledge of a huge mass of absolutely primitive, simple facts, recording each of which requires neither high qualifications nor education, only conscientiousness. Yet collected together they make it possible to "see an elephant" or even "act like an elephant".
I was interested in the ideas from the book. Mihai Nadin could confirm or deny that ChatGPT presented them correctly and did not miss anything. You, by using the quote for further reasoning, confirmed that you consider the idea expressed by ChatGPT to be correct.
This is the situation: the work of a large number of conscientious, low-paid people, plus a universal algorithm, created a system of good intelligence.
Here you can compare the Kenyan data workers with the builders who, since time immemorial, have built mountain roads, bridges, pyramids, and other structures.
Alex
John,
We don't know the full stack of genAI technologies, but we know, for example, that they rely on manual labor.
Here we can see something in common between their technologies and Permion's technologies: you show us some pieces, but the essence is hidden, classified.
The genAI companies are seeing how far they can advance without databases and algorithms, and nothing prevents them from adding those components at some point, alongside the manual labor.
They lack world models, as Gary Marcus writes: https://garymarcus.substack.com/p/generative-ais-crippling-and-widespread.
And the question arises: where is the world model in a formal ontology? Is it the A-box?
By the way, Gary mentions abstraction, and it would be worth writing out the full stack of knowledge-processing operations. Abduction, deduction, induction, and testing are the tip of the iceberg.
Alex
Hi John,
Ravi,
The Big AI Engine isn't watching people. It's just gathering as much as it can of everything on the WWW and making it available to anybody and everybody who wants to look for anything they can about anybody, including you.
As those studies show, even Elon Musk, the developer of the biggest system, can't control what it says about himself.
In one sense, that's humorous. But when you think about it, that could be you who is being discussed in ways that are very bad or dangerous.
John
Absolutely! Unfortunately, this is going to start happening to a variety of innocent people sooner rather than later.
LLMs need to be properly confined to their rightful role—as new and useful multimodal natural language processing components within the UI/UX stack. That’s where their true value lies.
Kingsley
From: "Ravi Sharma" <drravi...@gmail.com>
John's comment about Apple and AI led me to the link, and there is the interesting connection with situational awareness: the tool, you the user, and the deciphering of what you are thinking in order to cast you as a person entity. (Ken, please note, on your favorite topic of SA.) So does the AI engine watching you create any extraordinary problem, more than when we sign a waiver of privacy, for example? Regards
Thanks. Ravi
John,
Reduction, one of the ways of thinking, is a feature of mathematical logic, and in many cases it is useless.
It is quite possible that any reasoning can be reduced to "abduction, deduction, induction, testing," but I doubt it. In any case, we need to study how.
We should begin with the rules of reasoning we use in everyday life, in physics, and in the other sciences and technologies.
To get a feeling of real rules of knowledge processing in math look at "Rules for transforming propositions and phrases" in Specific tasks of Ugraphia on a particular structure (formulations, solutions, placement in the framework). [1]
And for natural language rules look at "knowledge processing - derivation rules" in https://www.researchgate.net/publication/366216531_English_is_a_HOL_language_message_1X. [2]
And this is just the beginning. By the way, generalization is a powerful knowledge-processing technique.
We can take any everyday task, such as setting a goal to drink a cup of tea, create a plan, an algorithm for doing so, and then look at the rules of knowledge processing we use, step by step, to create this plan.
Abstraction is one of the most important classes of mental processing.
For me your way of thinking is REDUCTIONISM🦉
Alex
Summary of rules used
knowledge processing - derivation rules
Derivation rules of a particular domain of knowledge should be studied separately.
We have examples of FOL derivations and can study these forms for operational sentences.
For example,
Every human is mortal. Socrates is human. Hence "Socrates is mortal."
((Every human) is mortal) -- "Every" - puo, "is" - bo.
(Socrates is human)
Hence
(Socrates is mortal).
The derivation rule behind "hence":
we need to substitute "Socrates" for "(Every human)",
and the generalization is:
Whenever the schemas `((every X) is Y)` and `(Z is X)` are applied, i.e. X, Y, Z have specific values, applying the first schema to the second gives `(Z is Y)`:
((every X) is Y), (Z is X) |= (Z is Y) --algorithm: {substitute Z for (every X)}
What is the operational form for FOL derivation rules?
If we stay for a while with FOL statements, they have a simple CFG, and the structure behind them is a derivation tree with non-terminals at the internal nodes. But first of all, FOL has additional variables:
((Every human) is mortal) == (Every x:human mortal(x))
(Socrates is human) == human(Socrates)
(Socrates is mortal) == mortal(Socrates)
And we substitute "Socrates" for the second occurrence of "x", eliminating the Every-phrase!
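As an illustration, here is a minimal executable sketch of that substitution rule in Python, assuming a simple tuple encoding of the statements (the encoding and the function name are mine, purely illustrative, not part of the framework):

# Statements as tuples: ("every", "human", "mortal") for ((Every human) is mortal),
# ("is", "Socrates", "human") for (Socrates is human).
def instantiate(universal, fact):
    """Apply ((every X) is Y), (Z is X) |= (Z is Y): substitute Z for (every X)."""
    tag_u, x, y = universal      # ("every", X, Y)
    tag_f, z, x2 = fact          # ("is", Z, X)
    if tag_u == "every" and tag_f == "is" and x == x2:
        return ("is", z, y)      # (Z is Y)
    return None                  # the rule does not apply

print(instantiate(("every", "human", "mortal"), ("is", "Socrates", "human")))
# -> ('is', 'Socrates', 'mortal'), i.e. (Socrates is mortal)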
Hi John and others,
Kingsley,
Thanks for emphasizing that point:
KI: LLMs need to be properly confined to their rightful role—as new and useful multimodal natural language processing components within the UI/UX stack. That’s where their true value lies.
Yes. Those systems sound intelligent because they produce answers in clear, syntactically correct sentences. But clarity does not imply truth.
Yes!
There are 60 years of traditional symbolic AI methods that can do the deduction and testing. But there are many, many different ways of combining those methods with LLMs to form hybrid systems.
Some methods are better than others, and a great deal of research and development is being implemented and tested.
John
LLMs are now in very dangerous territory for untrained (or uninitiated) users. The marketing of LLMs as some kind of human-like intelligence is not just irresponsible—it’s downright dangerous.
As you know, man-made tools are simply that: tools. They exist to overcome operational inefficiencies understood by their creators. I’ve had numerous experiences with LLMs that genuinely terrify me, particularly due to their sleight-of-hand tendencies—subtle changes introduced either by hallucinations or biases in their training data that everyday users will never catch.
The problem? Everyday users are everywhere across the command hierarchies of organizations that impact large groups of people—employees, national citizens, families—you name it.
If LLMs were marketed for what they really are—UI/UX stack additions—the risk would be significantly lower.
My most recent and ironic example of LLM-related dangers happened while exploring the seminal AI conference at Dartmouth College (1956). I’ve captured the full story in a LinkedIn post here: [1].
[1] Comment I posted to a Discussion about AI Hype
I've also attached a recent comic strip I knocked up using ChatGPT where I refer to LLMs as Langulators 🙂
Kingsley
From: "Kingsley Idehen' via ontolog-forum" <ontolo...@googlegroups.com>
Hi John,On 6/30/25 2:20 PM, John F Sowa wrote:Ravi,
The Big AI Engine isn't watching people. It's just gathering as much as it can of everything on the WWW and making it available to anybody and everybody who wants to look for anything they can about anybody, including you.
As those studies show, even Elon Musk, the developer of the biggest system, can't control what it says about himself.
In one sense, that's humorous. But when you think about it, that could be you who is being discussed in ways that are very bad or dangerous.
JohnAbsolutely! Unfortunately, this is going to start happening to a variety of innocent people sooner rather than later.
LLMs need to be properly confined to their rightful role—as new and useful multimodal natural language processing components within the UI/UX stack. That’s where their true value lies.
Kingsley
From: "Ravi Sharma" <drravi...@gmail.com>
John's comment about Apple and AI led me to the link: and there is the interesting link with situational awareness i.e. tool, you the user and deciphering what you are thinking to cast you as a person entity? (Ken please note on your favorite topic of SA)So does the AI engine watching you create any extraordinary problem more than when we sign a waiver of privacy for example?Regards
Thanks.Ravi
John,
From "All those rules are versions of deduction." I got that you are simply classifying any real particular rule of knowledge processing under one of four titles. With a very broad sense for each. Like "Learn=Abduction, Plan=Deduction, Act=Test, Reflect=Induction." or "Observe=Induction, Orient=Abduction, Decide=Deduction, Act=Test". This is a separate topic on how to classify real rules of knowledge processing.
Let me just point out that in https://plato.stanford.edu/ we have an article only for abduction https://plato.stanford.edu/entries/abduction/. But even in this case the meaning differs:"In the philosophical literature, the term “abduction” is used in two related but different senses. In both senses, the term refers to some form of explanatory reasoning. However, in the historically first sense, it refers to the place of explanatory reasoning in generating hypotheses, while in the sense in which it is used most frequently in the modern literature it refers to the place of explanatory reasoning in justifying hypotheses. In the latter sense, abduction is also often called “Inference to the Best Explanation.”"
The absence of an article for deduction and induction (sorry, I did not search for "test") for me means that they get their meaning only inside of one or another theory.
The topic of what kinds of knowledge-processing rules we have in the sciences, in technology, and in everyday life is much more important to me.
I would be happy if you could classify the rules I have found on my way to knowledge concentration and formalization.
Let's restrict our knowledge processing to text. In this case, a rule is a way to take one text and create a new one, using only this text and perhaps some parameters.
Usually a rule has just one sentence as input and one or more sentences as output.
If we look at this sequence of sentences itself, we have something logical (see the third column below). And the rule applied at each step is the way this sequence is obtained.
For example, let's take a step back to https://www.researchgate.net/publication/374265191_Theory_framework_-_knowledge_hub_message_1 because a proof is the simplest form of knowledge processing.
I took the sequence of sentences from Theory framework - knowledge hub. message #1 (-:PUBLIC:-), which is the latest version of the article.
"In this case, the section of each language, in addition to the language identifier, contains the number of the sentence in this section.
Moreover, now for each sentence the identifier of the frame element on which this sentence is based is indicated (usually this is a definition) or numbers of sentences of a given proof from which the current sentence follows. This column is called "premises".
The last column indicates how the current sentence is obtained from the sentences specified in the list of premises. This column is called "method of inference". The study of real methods of inference found in proofs is a separate important work, because these are not usually the rules of inference of formal logic.
The statement eng.1 is based on the definition of the term “simple edge”, which has the identifier simpleE in the framework. And in the column “method of inference” it is indicated “a-priory”. In eng.2 parameter [d] refers to the definition of term d from which we can derive our sentence.
Getting eng.4 from eng.3 is conventionally called “summation”, as a type of reformulation, because it is obvious that both statements are equivalent.
And getting eng.3 from eng.1 and eng.2 is called "union" to emphasize that this is not just LOGICAL AND of two statements but includes some reformulation.
"
This proof in the framework is here: proof Pr1_1__1 Th1_1.
I would be happy to get your analysis and classification of these three rules: "a-priory", "union", "summation".
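For concreteness, here is a minimal Python sketch of one row of the proof table described in the quoted passage, assuming field names that mirror the quoted columns (the class, its fields, and the sample values are illustrative, not the framework's actual format):

from dataclasses import dataclass, field

@dataclass
class ProofRow:
    lang: str                          # language identifier, e.g. "eng"
    number: int                        # number of the sentence in its section
    sentence: str
    premises: list = field(default_factory=list)  # frame-element id or sentence numbers
    method: str = ""                   # "method of inference": "a-priory", "union", ...

rows = [
    ProofRow("eng", 1, "...", ["simpleE"], "a-priory"),   # based on a definition
    ProofRow("eng", 3, "...", [1, 2], "union"),           # from eng.1 and eng.2
    ProofRow("eng", 4, "...", [3], "summation"),          # reformulation of eng.3
]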
Let me point out that the solution of a task has a tree structure, not a sequence. You may find such solutions in Specific tasks of Ugraphia on a particular structure (formulations, solutions, placement in the framework). Let me cite the rule descriptions you put under the deduction umbrella [1].
I look forward to aligning our terminology, i.e., to understanding each other properly.
Alex
[1] from Specific tasks of Ugraphia on a specific structure (-:PUBLIC:-)
There are many ways to get from one text to another or to several others. It is assumed that the processing of the original text is replaced by the processing of the texts obtained from it, and that we know how to process the resulting texts, i.e., having obtained their values, we can get the value of the original. We will call such processing methods rules.
The texts resulting from applying the rules are called subtasks.
The following are descriptions of different rules.
A rule named “subtask” only indicates that this subtask is separated out as its own task, i.e., it must have its own solution block; if that block exists, it is indicated in the “parameters” column (see below), and if the cell holds "???", then the corresponding task has not yet been added to the framework! Strictly speaking, the task block has not been completed.
This rule makes the transition from the terms of a theory to the terms of structures.
For example, in "There exists x in U such that _e1 is incident to x." the term "incident" is introduced in the theories of Biria and used in Ugraphia for the global variable inc. Knowing its “binding” with inc, we can interpret the statement as “There exists x in U such that (_e1 x) in inc.”, where the term “incident” is not used.
It is important to emphasize that no terms of theories are used in the resulting statement!
This rule refers to the substitution of the definition of a term at the place of its use. Typically, a definition consists of a precondition, formulated in a sentence beginning with the word “Let”, and the definition proper, consisting of a phrase using the term, a syntactic connective (for example, “if and only if”), and a determinant, a phrase that specifies the meaning of the term. Definitions of terms are given in one theory or another; in our case, it is Ugraphia. From a programming point of view, a definition is a macro command, a “substitution” is a macro substitution, and sometimes the substituted text itself (preconditions and determinant) is modified.
The substitution comes down to this: the statements of the precondition form a linear block, and the determinant a node of the decision tree.
For example, consider the statement
_e1 parallel to _e4.
It uses a term from Ugraphia: “parallel”.
Applying its definition:
We get two precondition statements:
_e1 is an edge.
_e4 is an edge.
which need to be checked for truth,
and a subtask:
_e1 and _e4 are different and _e1 has the same endpoints as _e4.
It is easy to see that the actual parameters at the place where the term is used, _e1 and _e4, are substituted for the formal parameters e1, e2 of the definition's “macro command”.
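A minimal Python sketch of this macro-substitution view, assuming the definition is stored as string templates with the formal parameters e1, e2 (the names and the storage format are illustrative):

# Definition of "parallel" as a macro: preconditions plus a determinant.
PARALLEL_DEF = {
    "preconditions": ["{e1} is an edge.", "{e2} is an edge."],
    "determinant": "{e1} and {e2} are different and {e1} has the same endpoints as {e2}.",
}

def substitute(definition, **actuals):
    """Substitute the actual parameters for the formal ones, macro-style."""
    pre = [p.format(**actuals) for p in definition["preconditions"]]
    subtask = definition["determinant"].format(**actuals)
    return pre, subtask

pre, subtask = substitute(PARALLEL_DEF, e1="_e1", e2="_e4")
print(pre)      # ['_e1 is an edge.', '_e4 is an edge.']  -- the linear block to check
print(subtask)  # '_e1 and _e4 are different and _e1 has the same endpoints as _e4.'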
A quantifier always ranges over some set or sequence and is expanded, by running through it, into an operation on the statements or phrases written out for each element of the set: an excellent move toward finitism.
For example, consider the expansion of the second quantifier "every" in
"every member of every pair of inc belongs to U."
We need to run through inc (in our case, see the __inc block of the framework); for the first element of inc we get:
"every member of (_e1 _v1) belongs to U."
and so on, for each pair in inc.
Notes. The finitistic approach to quantifier expansion in mathematical logic can be found, for example, in Esenin-Volpin's works. It can be stated with an example like this:
Let S be {e1 e2} and p() a unary predicate on S. Then
"∀x:S p(x)" expands into "p(e1)∧p(e2)";
the generalization to any finite number of elements in a non-empty S is obvious.
Thus, the quantified statement is expanded into several more specific statements, and the meaning of the original statement is obtained by applying the operation ∧ or +, etc., to the expanded values.
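A minimal Python sketch of this finitistic expansion, assuming S is a finite, non-empty in-memory collection and p is an ordinary Python predicate (both illustrative):

def expand_forall(S, p):
    """Expand ∀x:S p(x) into one statement per element, then apply ∧ to the values."""
    expanded = [p(x) for x in S]   # p(e1), p(e2), ... as truth values
    return all(expanded)           # the ∧ of the expanded statements

S = ["e1", "e2"]
print(expand_forall(S, lambda x: x.startswith("e")))   # True, i.e. p(e1) ∧ p(e2)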
This is the situation when a connective such as “and”, or words like “equals”, “plus”, and the like, is applied to two specific statements or phrases within the original statement.
For example, in
"_e1 and _e4 are different and _e1 has the same endpoints as _e4."
we split at the second “and”, obtaining two statements (subtasks):
"_e1 and _e4 are different."
"_e1 has the same end vertices as _e4."
According to this rule, various free texts in NL are converted into equivalent but more regular ones.
For example,
"_e1 is incident to some element from U."
becomes
"there exists x in U such that _e1 is incident to x."
Execution of this rule consists of SEARCHING a set and determining the presence or absence of an element that satisfies a given condition.
For example, applying the "choice" rule to the statement
"there is x in U such that (_e1 x) in inc."
consists of running through U and checking, for the current element, that it, paired with _e1, is present in inc. If such an element is found, then the statement is considered valid; otherwise, it is false.
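A minimal Python sketch of the "choice" rule, assuming U and inc are small in-memory collections as in the worked example (the data below is illustrative):

U = {"_v1", "_v2", "_e1", "_e2", "_e4"}
inc = {("_e1", "_v1"), ("_e2", "_v1"), ("_e4", "_v2")}

def choice(domain, condition):
    """Run through the set; the statement holds iff some element meets the condition."""
    return any(condition(x) for x in domain)

# "there is x in U such that (_e1 x) in inc."
print(choice(U, lambda x: ("_e1", x) in inc))   # True: x = _v1 satisfies it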
A subset of elements is selected from a given set according to some criterion.
For example, in the phrase
"number of elements of U such that it's an edge and simple and incident _v1."
before counting the quantity it is necessary to obtain the subset of U with the described elements, and then the phrase is converted into
"number of elements in list (_e1, _e2, _e4)".
With a text, the mental action of obtaining its value is performed.
For example, it is obvious that the following statements are true:
"_e1 and _e4 are different."
"_e1 is an element of U."
"(_e1 _v1) in inc."
However, in the last two cases you need to look at U and inc, respectively.
The calculation formulated in the phrase is performed in the mind.
For example,
counting "number of elements in list (_e1, _e2, _e4)" gives the result 3.
John,
We are all busy with our own projects and ideas, so I was really pleasantly surprised by your "If you think that some method of reasoning doesn't fit in that four-step cycle, describe it or send a reference, and I'll show exactly how to map it to that cycle or to some part of the cycle."
On the other hand, a classification is just a classification, but a cigar can be smoked.
Finally, let me give you an example of what solving a problem looks like in a case more complex than proving a theorem. [1]
I am on my way to doing the same for Statics.
Alex
[1] Specific tasks of Ugraphia on a specific structure (-:PUBLIC:-)
In this block, we again encounter lines that do not have a “tree” identifier in the second column of the row. See details below in the next section, “(t) description of the structure of the solution block”.
In addition, the values in the “value” column (#4) of lines 1.2.1 and 1.2.2 are not simple but composite: ordered pairs.
We begin this block with the YAFOLL statement, and in the rule column we keep the id of the Interpreter to call (Yp).
(substitution nuances)
"e1 and e2 denote different edges" in the definition turns(!) into "_e1, _e4 are different." i.e. “edges” goes away and “denote” turns into “are”; which of course, strictly speaking, is a “reformulation” within the substitution.
(about the solution in general)
Strictly speaking, the solution has not been completed, because three subtasks have no link to a solution block; this is precisely how a filled block in a framework should normally not look: in a framework block, each “subtask” should have a link to its block as a parameter. Thus, the presented framework should be considered educational. The fact that, despite the absence of a solution block in the “parameter” cell (#5), there is a value in the “value” cell (#4) indicates that instead of the “subtask” rule, the “obviously” rule was applied, i.e., the meaning was obtained in the mind.
A chain of lines that does not indicate a place in the decision tree represents a linear section of requirement checks from the precondition in the definition of the substituted term. This linear section can most easily be attached to a tree node, placed, for example, to the right of the node at the same level.
The first subtask has a link to its solution block, where TRUE was obtained.
John,