A very sad case, but no one deliberately set out to design an AI system that helped people to commit suicide. Thinking about the incident in terms of legal cases and punishment (huge fines, compensation, etc.) may well not be the way to prevent such suicides happening again. What is needed is a genuinely independent, airplane-crash-style inquiry, where the primary aim is to avoid similar incidents being repeated.
I write because I am in a position to understand, to some extent, what the parents are going through. In 1985 one of my daughters killed herself, indirectly because of the events following an incorrect medical diagnosis made at a police station. As a result of the distress her sister became mentally ill, and I was so distressed that I took early retirement, abandoning my research (using CODIL) into AI. In 2001 her sister killed herself after being wrongfully arrested in the SAME police station. Serious mistakes by the police and the local mental health hospital led to her (perhaps unintentionally) killing herself in an attempt to get medical help. She thought the police were planning to throw her in prison, when, in reality, all that had happened was that a policeman had made a clerical error!
What is clear from our experience in the UK is that if the primary aim is to PUNISH people or organisations who unintentionally made unfortunate decisions, attempts will be made to withhold information and lessons will not be learnt. The result is that parents are put through hell having to deal with the lies and half-truths that emerge, while also knowing that the same kind of deaths will probably continue.
If this chatbot case is closed with an "out of court" agreement the real cause will be kept secret, maximising the chances of other people making the same unintended errors and of further deaths occurring.
Chris.
Chris,
I absolutely agree that this is the situation with all general-purpose tools. Moreover, the same LLMs have now begun to be used by criminals, for example for phishing and other kinds of cyberattack.
Of course, ChatGPT should answer the question of how far it can prevent giving advice on illegal actions.
And we will return to the topic of the algorithm for checking the result of the chatbot's work.
And yes, this can be another specially trained LLM.
But of course it would be better to have more reliable algorithms.
It's like with a formal proof: the verification algorithm is trivial, but finding a proof is not easy.
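As a toy illustration of that asymmetry (my own sketch, not anything from the thread, and nothing to do with any particular chatbot checker): verifying a candidate answer can be a one-line check even when finding that answer requires a long search.

# Toy sketch: checking a proposed answer is a single multiplication,
# while finding that answer needs roughly a million trial divisions,
# the same asymmetry as checking versus finding a formal proof.
def check_factorisation(n: int, p: int, q: int) -> bool:
    """Verification: one multiplication and two comparisons."""
    return p > 1 and q > 1 and p * q == n

def find_factorisation(n: int):
    """Search: trial division, potentially up to sqrt(n) candidates."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return None  # n is prime

n = 999_983 * 1_000_003                             # product of two primes near a million
print(check_factorisation(n, 999_983, 1_000_003))   # instant: True
print(find_factorisation(n))                        # much more work: (999983, 1000003)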
Alex
It is useful to think of human intelligence and large language models in evolutionary terms. If an animal invents a tool to help it survive, this has little long-term evolutionary advantage unless it can pass details of the tool on to the next generation. Human intelligence is based on the invention of a tool, called language, which allows tool-making skills to be passed from one generation to the next.
This involves mapping information about the tool stored in a neural network (a brain) into a sequential string (natural language) and then converting it back into a neural network (in the receiving human brain) with high efficiency and with a minimum of information loss. This process allows each generation to improve the tool, and this includes improving the language tool itself. Thus the human brain "CPU" needs a "compiler" to generate a "high level language" (natural language) and also a "decompiler" to reverse the process.
This raises the question: how far is human intelligence due to the algorithms in the brain's "CPU", and how far is it due to natural language acting as a high-level programming language?
Isaac Newton answered this question when he said, "If I have seen further than others, it is by standing upon the shoulders of giants." Advances in intelligence depend on the exchange of information between generations, and intelligent advances are built on intelligent information embedded in shared cultural information.
Large language models read cultural information and work well enough to recognise repeated patterns, but they clearly do not understand what the information means. In effect, large language models act as if they were decompiling statements in a high-level cultural language called Culture. Hallucination and other LLM limitations arise because there are serious bugs in the "decompiler" algorithms. These bugs arise because the underlying model is inappropriate: I am sure everyone using an LLM is aware that the brain does not store information as numbers in a numerically addressed array manipulated with a profoundly deep understanding of statistics.
The danger lies in using a powerful underlying mathematical model with an indefinitely large number of variables to predict future patterns in a system which generates patterns. This danger should be obvious to anyone who understands the history of science. Over two thousand years ago the Babylonians and the Greeks recorded the patterns created by the planets in the sky, and archaeologists have shown that the Greeks actually built a clockwork "computer" which, to an acceptable degree of accuracy, predicted the future positions of the planets, and also eclipses. This was based on a mathematical model using epicycles, where the accuracy of the prediction could always be improved by adding more epicycles. Galileo realised, using later technology to measure the positions of the planets, that these earlier predictions were not accurate. He knew that he could have corrected the epicycle model by adding further layers of epicycles, but realised that a very much simpler mathematical model was obtained by putting the sun at the centre of the solar system. Later research by Kepler and Newton showed that ellipses worked better than circles.
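The epicycle point can be made numerically. The following is a purely illustrative sketch (mine, not from the original post): treating epicycles as terms of a complex Fourier series, every extra circle reduces the fitting error for a Kepler-style orbit, yet the fitted circles never reveal the far simpler ellipse-round-the-sun description.

# Toy illustration: "epicycles" as terms of a complex Fourier series fitted to
# a Kepler-style ellipse (traced here at uniform angular speed for simplicity).
# Adding circles always reduces the error, but the model never discovers that a
# single ellipse around the sun describes the data far more simply.
import numpy as np

theta = np.linspace(0.0, 2.0 * np.pi, 512, endpoint=False)
e, a = 0.6, 1.0                                   # eccentricity and semi-major axis
r = a * (1 - e**2) / (1 + e * np.cos(theta))      # ellipse with the sun at a focus
orbit = r * np.exp(1j * theta)                    # the orbit as a closed curve in the plane

coeffs = np.fft.fft(orbit) / orbit.size           # amplitude of every possible epicycle

def reconstruct(num_epicycles: int) -> np.ndarray:
    """Rebuild the orbit keeping only the largest num_epicycles circles."""
    keep = np.argsort(np.abs(coeffs))[::-1][:num_epicycles]
    mask = np.zeros_like(coeffs)
    mask[keep] = coeffs[keep]
    return np.fft.ifft(mask) * orbit.size

for n in (1, 2, 4, 8, 16):
    err = np.max(np.abs(orbit - reconstruct(n)))
    print(f"{n:2d} epicycles -> worst-case error {err:.3e}")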
It is appropriate to ask whether large language models (which can be "improved" by adding billions more array cells holding probabilities) are making the same mistake that the Greeks made in modelling the planets. One may always get fractionally better results at very much higher computing costs, but the results tell you very little about how the human brain actually evolved to become intelligent, or about the actual algorithms it uses to map intelligent cultural information onto the brain's neural network. The advantage of hyping paradigms which require funding to build ever bigger and more powerful computing systems is that the more money you spend on the hyped research, the more prestige you get and the more influence you have on the direction of future research, including using the peer review system to anonymously reject draft papers which criticise the currently favoured AI paradigm.
CODIL was based on observation of the average human of the 1960s (most current researchers will be biased because they were taught at school how the stored program computer works). It assumes information is handled in sets (with an ontology of names chosen by the human users) held in a neural network, and uses a single small, highly recursive algorithm (which could be interpreted in evolutionary terms, where efficiency is significant) to find valid pathways through the associatively addressed network.
To summarise: the Greeks thought models involving the earth at the centre, with movement in perfect circles, were the answer (however many circles had to be invoked to correct the "errors"), when what was needed was a far simpler model with a minimal number of ellipses round the sun. Could large language models be making a similar error: starting from the wrong basic assumption, and only getting a better-looking approximate result by using modern computer technology to handle a situation where the number of variables needed is trending towards infinity, and where the energy required is in danger of exacerbating climate warming?
The important thing about CODIL is that it was based on first-hand, neurodiverse-inspired observations of real people (rather than slavishly following expert views and mathematical models). This involved looking at how people educated before the coming of computers actually processed information in several very different real-life complex task areas. In effect the CODIL research, almost accidentally, modelled the brain's neural network's symbolic assembly language, in a way that is counter-intuitive to people who have been trained to program stored program computers.
I am sure that one of the reasons CODIL failed to get properly funded is that the language does not include an EXPLICIT "IF". Everyone knows that a procedural programming language MUST contain IF statements, so CODIL got no funding and no support for such a silly idea. But the whole point of CODIL is that there is an "intelligent" network search routine which decides in real time whether each item of information is to be used as "data", as a "command", or as an open/closed gateway (an implicit IF) through the network. Because there is an intelligent decision-making procedure, there is no need for a human programmer (or a clever conventional AI program) to design an explicit procedural program. This is because, like the human brain, CODIL is designed to handle dynamic real-life complex tasks which, in parts at least, cannot be "programmed" in advance because the relevant a priori information is not available.
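For readers who want something concrete, here is a deliberately over-simplified toy in Python. It is my own sketch, not the actual CODIL decision making unit, and the item names CUSTOMER and DISCOUNT are invented for the example; it only illustrates the key point that the same name=value item can act as a test (an implicit IF) or as data, depending on what is already known when it is reached.

# Over-simplified toy, NOT the real CODIL interpreter: whether an item behaves
# as an implicit IF (gateway) or as data is decided at run time from context.
def apply_statement(items, context):
    """Drive one statement (a list of (name, value) items) through a context.

    If an item's name is already bound, it acts as a gateway: the pathway is
    abandoned unless the values agree. Otherwise it acts as data and adds a
    new binding. Returns the extended context, or None if a gateway closed.
    """
    ctx = dict(context)
    for name, value in items:
        if name in ctx:              # item plays the role of an implicit IF
            if ctx[name] != value:
                return None          # gateway closed: this pathway is invalid
        else:                        # item plays the role of data
            ctx[name] = value
    return ctx

statement = [("CUSTOMER", "SMITH"), ("DISCOUNT", "10%")]
print(apply_statement(statement, {"CUSTOMER": "SMITH"}))   # gateway open: adds DISCOUNT
print(apply_statement(statement, {"CUSTOMER": "JONES"}))   # gateway closed: None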
Chris Reynolds
Chris,
That's why we ask from time to time at our meetings: Has CODIL been restored? When can we try it out?
How is this project progressing?
Give me CODIL and I will change the world - as Archimedes said.
Alex
John,
For us, the most interesting case is when both the theory and its model are in the computer, as data rather than as hardware.
And we have algorithms to process the theory as theoretical knowledge, for example to semi-automatically prove its theorems, and also to process the model itself, which is primarily some finite mathematical structure satisfying the axioms of the theory. But in addition the model is tied to reality, and the calculations that we perform on it tell us something about reality.
The CODIL structure is a graph, i.e. a mathematical representation of knowledge about the world, i.e. a model.
But Chris has not yet described the algorithm for processing this graph in response to a request.
And this is always a question for the author of the language: where is the processor that can work with these language texts?
Determining what theoretical and factual knowledge large ontologies contain is already a labour-intensive research task, since they are really large.
One of the remarkable testing grounds for formal theories and their models is Geometry. There are about ten theories of mathematical objects in Euclidean space, and each drawing with various geometric figures is a model of these theories.
If we tie the drawing to reality, it will be a very useful model of reality, precisely because of its abstractness, i.e. simplicity.
It is proposed to concentrate theoretical knowledge in frameworks, one for each theory. An example for the theory of undirected graphs is here: (PDF) Theory framework - knowledge hub message #1.
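As a small illustration of having both the theory and a model in the computer as data (my own sketch, not the framework in the linked PDF), one can state the axioms of the theory of undirected graphs as predicates and check a concrete finite structure against them:

# Minimal sketch: the "theory" of undirected graphs as a list of axioms, and a
# checker that tests whether a concrete finite structure is a model of it.
from itertools import product

def symmetric(nodes, edge):
    return all(edge(y, x) for x, y in product(nodes, repeat=2) if edge(x, y))

def irreflexive(nodes, edge):
    return all(not edge(x, x) for x in nodes)

UNDIRECTED_GRAPH_AXIOMS = [symmetric, irreflexive]

def is_model(nodes, edge, axioms=UNDIRECTED_GRAPH_AXIOMS):
    """True if the finite structure (nodes, edge) satisfies every axiom."""
    return all(axiom(nodes, edge) for axiom in axioms)

nodes = {"a", "b", "c"}
edges = {("a", "b"), ("b", "a"), ("b", "c"), ("c", "b")}
print(is_model(nodes, lambda x, y: (x, y) in edges))   # True: a genuine model
print(is_model(nodes, lambda x, y: x == "a"))          # False: neither symmetric nor irreflexive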
The framework of Hilbert's theory for Euclidean geometry is just around the corner.
And then statics, as a branch of mechanics, will catch up.
Alex
John,
Any mathematical model is good if it describes reality with the required accuracy. Some use four-dimensional space, and some use Hilbert space. 100% accuracy is rarely needed and even more rarely achievable. But, for example, the question of how many people are in a given room right now can usually be answered accurately.
I would be glad if Linda Uyechi has developed or is developing a theory of universal phonology. Then we, formal ontologists, would only have to formalize this theory.
In the meantime, we have to formalize the axiomatic theory of Euclidean geometry as presented by Hilbert, and, on that basis, statics as presented in Landau and Lifshitz, Course of Theoretical Physics, Vol. I: Mechanics.
Takeaway: a formal ontologist does not create new knowledge; he only systematizes and mathematically records knowledge created by others, since in that form the knowledge can be processed by algorithms with 100% accuracy.
Alex
The phrase "all models are wrong" was attributed[1] to George Box who used the phrase in a 1976 paper to refer to the limitations of models, arguing that while no model is ever completely accurate, simpler models can still provide valuable insights if applied judiciously.[2]: 792 In their 1983 book on generalized linear models, Peter McCullagh and John Nelder stated that while modeling in science is a creative process, some models are better than others, even though none can claim eternal truth.[3][4] In 1996, an Applied Statistician's Creed was proposed by M.R. Nester, which incorporated the aphorism as a central tenet.[1]
___________
John,
Formalization, that is, the construction of formal theories and of mathematical objects and structures as their models, is a very specific activity. Formalizers are not satisfied even with an axiomatic theory in the form in which, for example, Hilbert wrote it for geometry. They will not rest until they have written out all its axioms, definitions, theorems and proofs in some formal language: Isabelle, Coq, HOL4, etc.
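To make "writing out all the axioms and proofs" concrete, here is a tiny sketch in Lean (a close cousin of the systems named above). It is my own toy, not taken from any existing framework: two heavily simplified Hilbert-style incidence axioms, invented purely for illustration, plus one machine-checked consequence.

-- Tiny toy formalization: simplified incidence axioms and one checked theorem.
axiom Point : Type
axiom Line  : Type
axiom liesOn : Point → Line → Prop

-- Simplified version of Hilbert's first incidence axiom: through any two
-- distinct points there is a line containing both.
axiom lineThrough : ∀ p q : Point, p ≠ q → ∃ l : Line, liesOn p l ∧ liesOn q l

-- A first machine-checked consequence: a point distinct from some other point
-- lies on at least one line.
theorem point_on_some_line (p q : Point) (h : p ≠ q) : ∃ l : Line, liesOn p l :=
  match lineThrough p q h with
  | ⟨l, ⟨hp, _⟩⟩ => ⟨l, hp⟩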
Surprisingly, this topic is close to our community, since the formal part is a significant innovation in our ontologies, without which they would be just explanatory dictionaries or reference books.
Usually our ontologies contain mainly theoretical propositions, but often, especially in the form of knowledge graphs, they also contain a bunch of facts, i.e. a model of the theory.
But the formalization approach is that some scientific or engineering text is taken and formalized.
For example, I can formalize your reasoning about building a bridge, or about banking.
However, usually a textbook or an article or an engineering report is taken.
One of the fundamental questions is: what mathematical objects, or systems of objects, do we use as models of our theories, and how do we connect them with reality? Of course, this is done with some practical purpose.
There are many interesting topics here.
Alex