Dear Mike,
I have been following with great interest the UK effort to create a national digital twin for the built environment.
[MW] Since 2017 I have been advising the UK Govt on the development of a National Digital Twin, providing technical leadership for the initiative. Here are some of the documents we produced.
BORO was chosen to model a common ontology to enable various database-driven systems to be connected using common meanings of terms etc. I think it was a good practical choice.
[MW] The choice was for a 4-Dimensionalist approach, that is, one that sees objects as extended in time as well as space. BORO is a very small Top Level Ontology on which all of the ontologies we had identified as possible starting points were based. The rationale for the choice can be found here:
Is this use of an ontology performing the same function as a data catalog?
[MW] No. BORO is a TLO that is perhaps better thought of as an approach to the analysis of requirements. The formal methodology starts with the data you are having problems using or sharing and discovers the patterns in the data by doing a 4D analysis.
Is BORO copyrighted or is it OK to also use it?
[MW] It is copyrighted. Chris Partridge, the owner of BORO, is on this forum. Being copyrighted does not mean you cannot use it, but it would be polite to let him know what you are doing.
I looked for an example data model of BORO (maybe in MS Access) to experiment with. I couldn't find one so have built one to use on a database-driven cloud project I am working on. And slowly filling it up.
My idea was to use something like BORO to create relationships between abstract classes of things like PERSON, HOMO_SAPIENS, MAMMAL, ORGANISM, MOLECULE, etc. And for this to then be reference data
[MW] Hmm. Reference data usually refers to more (but more detailed) classes than the ones you mentioned, rather than data about particular persons. Is that what you mean?
You talk about relationships between classes. A good way to tell how well you are adapting to 4D thinking is to see what percentage of your relationships are actually whole-part, classification, or subtype-supertype; that should be about 90%. If you want to see some data model patterns that try to apply 4D principles and illustrate this, you could try my book “Developing High Quality Data Models”. Equally, you can just see the data model here:
And then in a separate place also use the other aspects of BORO to create 4D relationships between real things that are also instances of abstract classes. An example is Ringo Starr born in Liverpool, a member of the Beatles, etc.
[MW] As an example, both of those are whole-part, but you probably don’t see it immediately: the birth of Ringo Starr was part of all the activities going on in Liverpool during the period of his birth, and the Beatles was a band that had a drummer as a part, and a temporal part of Ringo Starr was a temporal part of the Beatles’ drummer (I believe there was another drummer before Ringo).
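A minimal sketch in Python (an illustration only; all of the names are hypothetical) of how the Ringo Starr example above reduces to just the three relationship types mentioned: classification, whole-part and subtype-supertype.

# Hypothetical 4D-style fact base: each relationship is one of just three kinds.
facts = [
    # classification: an individual is an instance of a class
    ("classification", "RingoStarr", "Person"),
    ("classification", "TheBeatles", "Band"),
    # whole-part in space-time: the birth event is part of Liverpool's
    # spatio-temporal extent for that period
    ("whole-part", "BirthOfRingoStarr", "Liverpool_1940"),
    # a temporal part of Ringo is a temporal part of the Beatles' drummer,
    # which in turn is a part of the band
    ("whole-part", "RingoStarr_1962_to_1970", "RingoStarr"),
    ("whole-part", "RingoStarr_1962_to_1970", "BeatlesDrummer"),
    ("whole-part", "BeatlesDrummer", "TheBeatles"),
    # subtype-supertype between classes
    ("subtype-supertype", "Band", "Organisation"),
]

core = {"classification", "whole-part", "subtype-supertype"}
share = sum(kind in core for kind, _, _ in facts) / len(facts)
print(f"{share:.0%} of the relationships use the three core kinds")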
Regards
Matthew
Dr Matthew West OBE
Director – Information Junction
+44 750 338 5279
matthe...@informationjunction.co.uk
http://www.matthew-west.org.uk/
This email originates from Information Junction Ltd. Registered in England and Wales No. 6632177.
Registered office: 28, Connemara Crescent, Whiteley, Fareham, Hampshire, PO15 7BE.
Any suggestions would be welcome. I am on the other side of the world so replies will be delayed.
Thanks
Mike Peters
-----------------------------------
Redworks Studio
PO Box 902
Invercargill 9840
New Zealand
M +64 22 600 5006
Skype redworksnz
Email mi...@redworks.co.nz
Facebook www.facebook.com/NZMikePeters
Home www.mtchocolate.com
Art Studio www.redworks.co.nz
Software Architecture www.blog.ajabbi.com
------------------------------------------
Dear Alex,
Hi Mike,
Let me write from scratch:
[MW] That is incorrect. BORO is a TLO. The article below is about using it as a Foundation for an Enterprise Ontology, not that it is an Enterprise Ontology.
2) it's better to look at ontology as a very advanced schema of data.
3) If you have some particular information processing task it would be great to find ontology to be a schema of data for this task.
Task is first, ontology is second:-)
[MW] The task that BORO and the National Digital Twin programme are addressing is one of large-scale data sharing and integration, not any particular application the data might be used for. The key problems are data consistency and consistent extensibility to new subject areas. Beyond that, 4D analysis will just give you a better and more rigorous idea of what you are dealing with.
Regards
Matthew West
Dear Alex,
The point of a TLO is that there is not something it is not suitable for. You just have to extend it to meet the need, which is what Mike seemed to be interested in doing. My experience of application ontologies that have been developed for a particular requirement is that they generally meet that requirement, but are not suitable for even something quite close to the original requirement without a significant level of rework (to the point that a blank piece of paper can be a better starting point). Indeed, a large part of the work we do in developing the ontology for the NDT is in redeveloping application-level ontologies into something more general and reusable for data sharing and integration – though likely not as good for the original purpose as the original ontology.
Regards
Matthew West
Dear Peter,
See below in red.
Dear Matthew
BORO was chosen to model a common ontology to enable various database-driven systems to be connected using common meanings of terms etc. I think it was a good practical choice.
[MW] The choice was for a 4-Dimensionalist approach, that is, one that sees objects as extended in time as well as space. BORO is a very small Top Level Ontology on which all of the ontologies we had identified as possible starting points were based. The rationale for the choice can be found here:
[MP] Thanks for clearing that up.
Is this use of an ontology performing the same function as a data catalog?
[MW] No. BORO is a TLO that is perhaps better thought of as an approach to the analysis of requirements. The formal methodology starts with the data you are having problems using or sharing and discovers the patterns in the data by doing a 4D analysis.
MP - I need something that can store ontologies in a relational database and also be a data catalog. Do you have any suggestions for a better approach?
[MW] I don’t see why a relational database is a requirement. I would consider that a technical solution. I suggest looking at Protégé. It is free, is based on RDF/RDFS/OWL and should support the things you describe including import and export. I suspect by data catalog you mean what I would call a data dictionary or data model. These days a data catalog is usually used to describe a collection of data sets.
Regards
Matthew
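On the import/export side of that suggestion, a minimal sketch (assuming Python with the rdflib library; the file names are hypothetical) of loading an ontology and re-serialising it in a format Protégé can open:

from rdflib import Graph

g = Graph()
# Load an existing ontology; the format can usually be guessed from the
# file extension, but it can also be given explicitly.
g.parse("my_ontology.ttl", format="turtle")
print(f"{len(g)} triples loaded")

# Re-export as RDF/XML, which Protégé and most OWL tools read directly.
g.serialize(destination="my_ontology.owl", format="xml")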
Is BORO copyrighted or is it OK to also use it?
[MW] It is copyrighted. Chris Partridge, the owner of BORO, is on this forum. Being copyrighted does not mean you cannot use it, but it would be polite to let him know what you are doing.
MP - Good to know, I will contact Chris
I looked for an example data model of BORO (maybe in MS Access) to experiment with. I couldn't find one so have built one to use on a database-driven cloud project I am working on. And slowly filling it up.
My idea was to use something like BORO to create relationships between abstract classes of things like PERSON, HOMO_SAPIENS, MAMMAL, ORGANISM, MOLECULE, etc. And for this to then be reference data
[MW] Hmm. Reference data usually refers to more (but more detailed) classes than the ones you mentioned, rather than data about particular persons. Is that what you mean?
You talk about relationships between classes. A good way to tell how well you are adapting to 4D thinking is to see what percentage of your relationships are actually whole-part, classification, or subtype-supertype; that should be about 90%. If you want to see some data model patterns that try to apply 4D principles and illustrate this, you could try my book “Developing High Quality Data Models”. Equally, you can just see the data model here:
[MP] - We are probably using different terms which is causing confusion. I am not up with the correct terminology for ontology
And then in a separate place also use the other aspects of BORO to create 4D relationships between real things that are also instances of abstract classes. An example is Ringo Starr born in Liverpool, a member of the Beatles, etc.
[MW] As an example, both of those are whole-part, but you probably don’t see it immediately: the birth of Ringo Starr was part of all the activities going on in Liverpool during the period of his birth, and the Beatles was a band that had a drummer as a part, and a temporal part of Ringo Starr was a temporal part of the Beatles’ drummer (I believe there was another drummer before Ringo).
Yes exactly
Thanks
Mike
On Wednesday, 19 April 2023 at 06:44:34 UTC+12 Mike Peters wrote:
Hi Alex
Yes indeed, thanks I need some luck
Do you have any suggestions on some good ontology design tools? I have never used one. Preferably ones that can import and export ontologies to save typing
Mike
Dear Mike,
Hi Alex and Matthew
Just got back from a film industry training workshop. Lots of fun, I tell you. I got to play with a Panther camera crane and hear some stories of million-dollar film equipment being wrecked on a movie set. All of this was preventable, and there is a place for ontologies and schemas so the left and right hands know what they are doing in an extreme working environment where people are tired.
The film industry is all about problem-solving. Not enough money or crew and trying to create an original masterpiece with the unexpected just over the horizon.
Back in 1999, to solve a big problem in NZ related to biodiversity, I started to design and build what became known as Pipi, teaching myself along the way. Pipi1 was CGI, 2 was VB, 3 was ColdFusion + SQL Server, and by the time it got to 4 there was NZ$1,000,000 in government funding and in-kind support, plus a $600K software grant from ESRI. It drove the 17th most popular website in NZ at the time and was designed to integrate with DoC and a government research institute. It had a metadata repository based on David Marco's book, geodatabases, 850 relational tables and several thousand methods, plus workflows etc. It anticipated cloud computing by several years, especially in using a messaging layer back in 2004. However, it was monolithic and eventually fell victim to the 2007 financial crisis and a change of government. The Christchurch earthquake finished off our server farm.
Back then, I had come across John Sowa, Topic Maps, The Semantic Web, John Zachman, John C Hay, Business Rules and a myriad of other goodies and decided that if I ever got the chance again, Pipi would be done very differently.
Starting in 2016, I rebuilt the system from scratch from memory (I'm autistic with party tricks) and then spent two years doing a self-taught crash course in cloud computing, science, maths and DevOps. Then I started refining Pipi through versions 6 (modules), 7 (microservices), 8 (namespaces) and currently 9 (emergent systems).
In between playing with grips and camera equipment, I doodled ways that UOM, algorithms, primitives (with dialectical, duality and conjugate-variable relationships), database entities and their properties, domains, ontologies, world views, data catalogues and schemas could all be connected up. Nothing like a good puzzle to solve.
Chris Partridge was very helpful and put me on the scent of the "constructional story" part of 4D (BORO etc.). There was a great seminar in 2021 at Cambridge University where both Chris and Matthew presented. I seem to have lost the link, but it is well worth a look. Video and PDF.
[MW] It’s here: https://gateway.newton.ac.uk/event/tgmw80/programme
About your point on ontology and schema: my first thought is that they are different and both important. A schema to me is an .xsd document, with many schemas to an ontology. Anyway, that's what I'm going with.
[MW] The key difference is that an ontology is about what exists, and a schema (or data model) is about the data you hold about what exists. It should be reasonably obvious that it is helpful if your schema aligns closely with your ontology. Schemas/data models come in various guises with various purposes and formats, so anything from the DDL for an SQL database to the XSD of an XML Schema to an OWL ontology intended to be populated with instances can be a way a schema might be represented.
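As a rough sketch of that distinction (Python with rdflib and sqlite3; the names are made up): the ontology makes claims about what exists, while the schema describes the data held about it, and ideally the two line up.

import sqlite3
from rdflib import Graph, Namespace, RDF, RDFS

EX = Namespace("http://example.org/")  # hypothetical namespace

# Ontology: claims about what exists (pumps are a kind of equipment).
onto = Graph()
onto.add((EX.Pump, RDF.type, RDFS.Class))
onto.add((EX.Pump, RDFS.subClassOf, EX.Equipment))

# Schema: the shape of the data we hold about those pumps, aligned with
# the ontology (one table per class here, purely for simplicity).
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE pump (
        pump_id      TEXT PRIMARY KEY,  -- identifies the real-world pump
        install_date TEXT               -- a fact we record about it
    )
""")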
I have been following many of the discussions on the Ontology Forum for a number of years now. I seem to recall John Sowa writing something along the lines of "Ontologies are just different subjective world views of the same reality". Sorry John if I got that wrong, but I'm building along those lines. World View is baked into Pipi9 - everything from Peirce, Sartre, Marx and Engels, Dewey, etc. It's like a separate layer or window explicitly linked to ontologies, which are just a way to order the same entities and primitives, which are objective.
[MW] That’s OK, but you still need an underlying and coherent view of reality that everything is brought into, on top of which these are layered, or else you just have lots of disjoint and inconsistent data.
Regards
Matthew West
Mike,
As soon as we encounter formal ontologies, i.e. we write down mathematically the logic of the terms of a particular subject area, philosophy begins to play in us - everyone has their own, but some prefer it hotter: C.S. Peirce or G.W.F. Hegel.
An XSD schema defines the data structure, while a formal ontology defines the relationships between the terms of the subject area.
For me, formal ontology is the forerunner of the formal theory of a particular science or technology.
You have a project and you are going to use formal ontologies in it. Welcome on board:-)
"Some work has been undertaken to provide interoperability between the W3C's RDF/OWL/SPARQL family of semantic web standards and the ISO's family of Topic Maps standards though the two have slightly different goals.[citation needed]
The semantic expressive power of Topic Maps is, in many ways, equivalent to that of RDF,[citation needed] but the major differences are that Topic Maps (i) provide a higher level of semantic abstraction (providing a template of topics, associations and occurrences, while RDF only provides a template of two arguments linked by one relationship) and (hence) (ii) allow n-ary relationships (hypergraphs) between any number of nodes, while RDF is limited to triplets"
Hi Mike,
We included an OMG metamodel for Topic Maps in the Ontology Definition Metamodel (ODM) years ago, but when we started new work to revise it, Lars Marius Garshol, who was one of the original authors and who supported the original ODM effort, told us that it was no longer widely used and that no one from their team had the interest or bandwidth to work on it with us. The best source for an update would be the convenor of SC34 WG8. I’ll be in a meeting for SC32 WG2 tomorrow afternoon and can ask if anyone has heard about recent work (the ISO rep on the call may know if he is there), but given that the original working group was disbanded, and based on Lars’ comments to us on the ODM a few years ago, it may not prove a fruitful direction for you.
Best regards,
Elisa
Dear Mike,
Elisa confirmed in detail what I was going to hint at.
I was involved with some of the people working on Topic Maps in its early days. The original idea seems to have been to try to annotate and link web resources. It had semantic capabilities as part of that. Of course what we actually use for finding stuff on the web is Google (or your favourite search engine) and Topic Maps never really caught on for its semantic capabilities alone.
On the other hand, RDF/RDFS/OWL is well established, has tool support, and has a good logical grounding (which is not to suggest they are perfect by any means).
The only alternative I would consider is SQL, and my choice would probably depend on the nature of the problem. The RDF/RDFS/OWL stack is good for graphy data. SQL is good for transactional data. Both can do either, it is just a matter of efficiency.
Regards
Matthew
Hi Mike,
According to the ISO web site for ISO/IEC JTC 1/SC 34 (see https://www.iso.org/committee/45374.html), none of the Topic Maps standards, including ISO/IEC 13250:2011 Information technology — Topic Maps, parts 1-6, and ISO/IEC 19756:2011 Information technology — Topic Maps — Constraint Language (TMCL), have been touched since 2015 (only one section was modified then). The content shifted working groups as I mentioned, but the dates are not obvious from the ISO web site.
I’ve reached out to the working group chair and will let you know if I learn anything.
Best,
Elisa
Dear Mike,
Just one thing to pick up on here:
“The weakness of RDF is it is not n-ary. And nature is n-ary and multi-inheritance.”
Formally, of course, any n-ary relation can be “reified” – turned into an object with n binary relations – though this may not be very efficient for some applications.
It turns out that if you adopt 4D, n-ary issues fall away, as things tend to be reified naturally and binary relations work fine (give me some examples of where you think you need n-ary relations if you like), and there is nothing to stop you deploying multiple inheritance.
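A minimal sketch of that reification move (Python with rdflib; the namespace and property names are made up): the n-ary fact "Ringo was the Beatles' drummer from 1962 to 1970" becomes an object carrying only binary relations, much as a 4D analysis would treat it as a state or temporal part.

from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/")  # hypothetical namespace
g = Graph()

# Reify the n-ary fact as an object (a "state", in 4D terms)...
state = EX.RingoAsBeatlesDrummer
g.add((state, RDF.type, EX.BandMembership))

# ...with n binary relations hanging off it.
g.add((state, EX.temporalPartOf, EX.RingoStarr))
g.add((state, EX.partOf, EX.TheBeatles))
g.add((state, EX.role, EX.Drummer))
g.add((state, EX.beginning, Literal("1962")))
g.add((state, EX.ending, Literal("1970")))

print(g.serialize(format="turtle"))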
RDF does have weaknesses of course, though for me the problem is not being able to distinguish between the record and what it is about.
Regards
Matthew West
Dear Mike,
A couple of comments below.
Hi Matthew
Good points.
I copied some well-written words I found on Wikipedia etc. that sort of describe what I have actually built and the problem I'm trying to solve. Pipi9 works a bit like Markus Covert's mycoplasma simulator (it's really fun to download and run), but the architecture is completely different.
------
Markus W. Covert (born April 24, 1973) is a researcher and professor of bioengineering at Stanford University who led the simulation of the first organism in software.[1][2][3] Covert leads an interdisciplinary lab of approximately 10 graduate students and post-doctoral scholars.
-------
Modelling biological systems
"Modelling biological systems is a significant task of systems biology and mathematical biology.[a] Computational systems biology[b][1] aims to develop and use efficient algorithms, data structures, visualization and communication tools with the goal of computer modelling of biological systems. It involves the use of computer simulations of biological systems, including cellular subsystems (such as the networks of metabolites and enzymes which comprise metabolism, signal transduction pathways and gene regulatory networks), to both analyze and visualize the complex connections of these cellular processes.[2]
An unexpected emergent property of a complex system may be a result of the interplay of the cause-and-effect among simpler, integrated parts (see biological organisation). Biological systems manifest many important examples of emergent properties in the complex interplay of components. Traditional study of biological systems requires reductive methods in which quantities of data are gathered by category, such as concentration over time in response to a certain stimulus. Computers are critical to analysis and modelling of these data. The goal is to create accurate real-time models of a system's response to environmental and internal stimuli, such as a model of a cancer cell in order to find weaknesses in its signalling pathways, or modelling of ion channel mutations to see effects on cardiomyocytes and in turn, the function of a beating heart."
------
Stochastic simulation
A stochastic simulation is a simulation of a system that has variables that can change stochastically (randomly) with individual probabilities.[1]
Realizations of these random variables are generated and inserted into a model of the system. Outputs of the model are recorded, and then the process is repeated with a new set of random values. These steps are repeated until a sufficient amount of data is gathered. In the end, the distribution of the outputs shows the most probable estimates as well as a frame of expectations regarding what ranges of values the variables are more or less likely to fall in.[1]
Often random variables inserted into the model are created on a computer with a random number generator (RNG). The U(0,1) uniform distribution outputs of the random number generator are then transformed into random variables with probability distributions that are used in the system model.[2]
[MW] In my youth my PhD was about process control, and stochastic control was then (50 years ago) a novel approach to the control of non-linear systems.
Etymology
Stochastic originally meant "pertaining to conjecture"; from Greek stokhastikos "able to guess, conjecturing": from stokhazesthai "guess"; from stokhos "a guess, aim, target, mark". The sense of "randomly determined" was first recorded in 1934, from German Stochastik.[3]
Discrete-event simulation
In order to determine the next event in a stochastic simulation, the rates of all possible changes to the state of the model are computed, and then ordered in an array. Next, the cumulative sum of the array is taken, and the final cell contains the number R, where R is the total event rate. This cumulative array is now a discrete cumulative distribution, and can be used to choose the next event by picking a random number z~U(0,R) and choosing the first event, such that z is less than the rate associated with that event.
Probability distributions
A probability distribution is used to describe the potential outcome of a random variable.
A discrete distribution limits the outcomes to those where the variable can only take on discrete values.[4]
--------------
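A minimal sketch (Python with NumPy; the rates are made up) of the next-event selection quoted above: accumulate the rates, draw z ~ U(0, R), and take the first event whose cumulative rate exceeds z.

import numpy as np

rng = np.random.default_rng()

def next_event(rates):
    """Return the index of the next event, chosen in proportion to its rate."""
    cum = np.cumsum(rates)         # cumulative rates; the last cell is R
    z = rng.uniform(0.0, cum[-1])  # z ~ U(0, R)
    return int(np.searchsorted(cum, z, side="right"))  # first event with cum > z

# Hypothetical example: three possible state changes with rates 0.5, 1.5 and 2.0;
# the third event should come up roughly half the time (2.0 / 4.0).
counts = np.bincount([next_event([0.5, 1.5, 2.0]) for _ in range(10_000)])
print(counts / counts.sum())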
So Pipi9 has a lot of input variables, hence n-ary.
[MW] n variables does not necessarily equate to n-ary relations. On the other hand, if you have a time series of multiple variables you might well find a relational table convenient to hold the results, but this is really a view on the underlying data.
I also want to import, as read-only, ontologies expressed in RDF format, then import the abstract classes into my version of BORO, and then run BORO in reverse to feed a good old-fashioned relational database entity generator, with the properties expressed in BORO becoming the database table columns and/or code classes, parameters, workflow objects, etc.
[MW] Interestingly, both Chris and I have for a long time (well before RDF) favoured what might be considered a “single table implementation” (in practice you might have some auxiliary tables). It just has a lot more flexibility and allows the whole system to be data driven.
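A minimal sketch of such a single-table implementation (Python with sqlite3; the table, column and relationship names are hypothetical): every fact, whether classification, whole-part or subtype, is just another row, so extending to a new subject area needs no schema change.

import sqlite3

con = sqlite3.connect(":memory:")
# One generic table; auxiliary tables (e.g. for names or provenance) could be added.
con.execute("""
    CREATE TABLE assertion (
        subject  TEXT NOT NULL,
        relation TEXT NOT NULL,
        object   TEXT NOT NULL
    )
""")
con.executemany(
    "INSERT INTO assertion VALUES (?, ?, ?)",
    [
        ("RingoStarr",              "classified_as",    "Person"),
        ("RingoStarr_1962_to_1970", "temporal_part_of", "RingoStarr"),
        ("RingoStarr_1962_to_1970", "part_of",          "TheBeatles"),
        ("Drummer",                 "subtype_of",       "Person"),
    ],
)

# The system can then be data-driven: queries go by relation, not by table.
for row in con.execute("SELECT subject, object FROM assertion WHERE relation = 'part_of'"):
    print(row)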
Regards
Matthew
Hi Ontolog-Forum
Just jumping in to say “hi” to the familiar “faces” and pay tribute to a dear friend and mentor Dan Corwin (FB) (August 21, 1945 - May 7, 2015). Dan was an expert in topic maps and NLP, so reading this thread brought back his memory. Dan and I spent many hours discussing topic maps vs. RDF and OWL, each of us wanting to understand better why we differed in our preferences. For a few years, Dan contributed to BioPAX, especially in the local working group that met in MIT’s Stata Center. Dan was my first “full-time boss” at Wang Lab’s R&D group (Dept 14) and the author of the Wang Word Processor editing software. I was 21 then, I’m 66 now. I’m grateful that I had Dan as my first techy boss and that we developed a close friendship and close working relationship.
There may be some relevant information on Dan’s company page – thanks to the Wayback Machine Internet Archive: https://web.archive.org/web/20150801224333/http://lexikos.com/
Thanks for listening.
Kind Regards, Joanne
For non-UVI business, contact me at te...@data-llc.net or schedule a meeting at: https://calendly.com/joanne-luciano
Joanne S. Luciano, Ph.D.
SuperUROP Program Administrator
Electrical Engineering and Computer Science
77 Massachusetts Avenue | Bldg. 38-439
Cambridge, MA 02139
E: jluc...@mit.edu P: 617-258-6059 | EECS | SuperUROP