Catalog of Biases


Paola Di Maio

unread,
Apr 18, 2020, 4:18:45 AM
to ontolog-forum, W3C AIKR CG
This is a very good find for me
 and hopefully also for fellows on the lists

I am researching bias as a pathology resulting from poor knowledge modelling; the remedy is
knowledge representation.

It happens to be structured as a taxonomy. What fun!

PDM
 

Frank Guerino

unread,
Apr 20, 2020, 9:08:04 PM
to Ontolog Forum, W3C AIKR CG

Hi Paola,

 

This is very interesting.  Thank you for sharing it.

 

In addition to researching bias as a pathology resulting from poor knowledge modeling, you may also want to consider the reverse: poor modelling/models that result from biases.  One such bias arises from the notion that model structures must be pre-designed and imprinted in database schemas in order to capture model data, forcing data to be restructured/transformed to fit the model’s design rather than having the model emerge from the ever-changing data itself.  We see this with enterprise modeling tools (e.g. architecture modeling tools, cause-and-effect models, CMDBs, etc.).  I’ve personally spent years working with data-driven, schema-less models that help eliminate such biases and open up a world of model representations that allow knowledge to form freely and adjust dynamically to data changes.
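As a minimal sketch of the schema-less, data-driven idea (my own illustration, not Frank's actual tooling, and all entity and attribute names are hypothetical): facts can be held as entity-attribute-value triples, so an entity's structure is discovered from the data rather than fixed in a schema up front.

```python
# Sketch of a schema-less, data-driven model: facts are stored as
# (entity, attribute, value) triples, so the "schema" emerges from the
# data instead of being imprinted in a database design up front.

class TripleStore:
    def __init__(self):
        self.triples = set()

    def add(self, entity, attribute, value):
        self.triples.add((entity, attribute, value))

    def attributes_of(self, entity):
        # An entity's "shape" is discovered from the data itself.
        return {a for (e, a, v) in self.triples if e == entity}

    def query(self, entity=None, attribute=None, value=None):
        # Match triples against any combination of fixed positions.
        return [t for t in self.triples
                if (entity is None or t[0] == entity)
                and (attribute is None or t[1] == attribute)
                and (value is None or t[2] == value)]

store = TripleStore()
store.add("server-01", "type", "cmdb-item")
store.add("server-01", "os", "linux")
# New data introduces a new attribute; no schema migration is needed.
store.add("server-01", "owner", "infra-team")
```

Because nothing about "server-01" is declared in advance, adding the `owner` attribute later simply extends the model rather than breaking it, which is the contrast with schema-first tools that Frank describes.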

 

Another example is “standards” (which are like belly buttons, because everyone has one).  Often, standards establish pre-conceived notions and cause severe narrow-mindedness, yielding the opposite of their original intent.

 

There are many such biases that cause bad modelling/models, and you may want to explore them as well.

 

My Best,


Frank

--

Frank Guerino, Principal Managing Partner

The International Foundation for Information Technology (IF4IT)
http://www.if4it.com
1.908.294.5191 (M)

Guerino1_Skype (S)

--
All contributions to this forum are covered by an open-source license.
For information about the wiki, the license, and how to subscribe or
unsubscribe to the forum, see http://ontologforum.org/info/
---
You received this message because you are subscribed to the Google Groups "ontolog-forum" group.
To unsubscribe from this group and stop receiving emails from it, send an email to ontolog-foru...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/ontolog-forum/CAMXe%3DSo%2B%3D1X3A4VGN6Ecv78MD604vWRU7600oimG3jDr0fsLtw%40mail.gmail.com.

Paola Di Maio

unread,
Apr 20, 2020, 10:27:09 PM
to ontolog-forum, W3C AIKR CG
Hello Frank
Thanks for your reply and for your interest.
(At the back of my mind I wonder if you are related to Nicola)

I am working on FAT AI (yes, there is strong AI, weak AI, and FAT AI, ha ha).
In particular, I am developing a knowledge object for FAT KR: fair, accountable, transparent.

Please note this is an infographic, not a UML diagram or a flowchart.

I am preparing a lecture and writing up notes. I do not have a narrative yet, but in sum: we need a way of instilling the notion of adequacy
into KR. At the moment this is done only notionally, and FAT is one possible set of evaluation criteria for adequacy.

(Also others of course)
I am interested in feedback on the diagram: can you make sense of it?
Can it be clarified/improved?

> I’ve personally spent years working with data-driven schema-less models that help eliminate such biases and open up a world of model representations that allow knowledge to form freely and adjust dynamically to data changes.

 

Please do share your material; I'd like to include/reference it in this work.
Cheers,

PDM

Paola Di Maio

unread,
Apr 21, 2020, 4:02:15 AM
to ProjectParadigm-ICT-Program, ontolog-forum, W3C AIKR CG
Milton
I wonder if you'd be up for translating/mapping to KR the debiasing algorithms currently in use.

This would be a valuable deliverable from us.
P
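As one concrete instance of a debiasing algorithm in current use that could be a candidate for such a mapping: reweighing (Kamiran & Calders) assigns each record a weight so that a sensitive attribute and an outcome label become statistically independent under the weighted distribution. A minimal sketch, assuming a toy tabular dataset; the attribute names `group` and `hired` are hypothetical.

```python
# Sketch of reweighing: weight each record by P(s) * P(y) / P(s, y),
# where s is the sensitive attribute and y is the label. Groups that are
# under-represented for an outcome get weights above 1, over-represented
# combinations get weights below 1.
from collections import Counter

def reweigh(records, sensitive, label):
    n = len(records)
    n_s = Counter(r[sensitive] for r in records)            # counts per group
    n_y = Counter(r[label] for r in records)                # counts per label
    n_sy = Counter((r[sensitive], r[label]) for r in records)  # joint counts
    return [
        (n_s[r[sensitive]] * n_y[r[label]]) / (n * n_sy[(r[sensitive], r[label])])
        for r in records
    ]

data = [
    {"group": "A", "hired": 1}, {"group": "A", "hired": 1},
    {"group": "A", "hired": 0}, {"group": "B", "hired": 0},
    {"group": "B", "hired": 0}, {"group": "B", "hired": 1},
]
weights = reweigh(data, "group", "hired")
```

A KR mapping of this could represent each weighted record as assertions about a group, a label, and a correction factor, which would make the debiasing step itself explicit and inspectable rather than buried in preprocessing code.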

On Tue, Apr 21, 2020 at 12:11 AM ProjectParadigm-ICT-Program <metadat...@yahoo.com> wrote:
Bias can result from poor knowledge modeling, but IMHO when we conduct scientific research bias also arises from (1) the domain-specific implementation of the scientific method, (2) instrumentation bias, in (i) the technical setup, (ii) data recording, and (iii) the significant figures of the data, and (3) observer-caused bias, where the mere act of observation perturbs the observed system.

The resulting knowledge modeling bias can only be corrected if the qualitative and quantitative aspects of (remote) sensory input are fully understood.

Here is where neuroscientists, cognitive scientists, psychologists, philosophers and physicists come in.

There is no SINGLE knowledge representation scheme, only categories of knowledge representation.

We can use AI and category theory to find which categories of KR are most suited for each domain of scientific discourse.

For each well-established category, knowledge modeling bias can then be corrected by appropriate KR schemes.

Milton Ponson
GSM: +297 747 8280
PO Box 1154, Oranjestad
Aruba, Dutch Caribbean
Project Paradigm: Bringing the ICT tools for sustainable development to all stakeholders worldwide through collaborative research on applied mathematics, advanced modeling, software and standards development

