Re: "KR for Human in the Loop": Two challenges related to KR..


Owen Ambur

Nov 30, 2022, 11:11:47 PM11/30/22
to Paola Di Maio, carl mattocks, Naval Sarda, Milton Ponson
Paola, if anyone (humans in the loop) who is working on "machine learning algorithms" would like to make their plan(s) explicit in a format readily comprehensible to human beings, I'll be happy to: 

a) render their plan(s) in StratML format, and 
b) collaborate with them to see if the algorithm(s) themselves can reasonably be documented in StratML format, to make their purposes and results comprehensible to human beings.

Carl, if you can remind me where on the Web to find the document you referenced, I'll take another look and see if it constitutes the makings of a plan that we might pursue further together, with or without other participants in the AIKR CG.  If memory serves me correctly, our previous effort foundered because of a disconnect with Paul Alagna, who apparently either had something else in mind or was unable to do what he thought he could in terms of leveraging StratML for query/discovery purposes.

BTW, the StratML-enabled query service on which Naval is now working for me reveals 96 instances of goals and objectives referencing "artificial intelligence" among the >5K documents currently in the StratML collection.  By contrast, my Google site-specific query feature discovers 137 referencing AI somewhere in the full text of the documents.

For "algorithms" the comparison is 19 for the StratML query service versus 85 for Google.

For "machine learning" the comparison is 359 versus 95 but the larger number appears to result from logic that considers those two terms separately and includes results for either of them. 

Naval, if that is the case, the logic should be changed to require an exact match of the terms entered in the query fields.
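The difference between the two behaviors can be sketched as follows. This is a minimal illustration of OR-matching individual terms versus requiring the exact phrase; the function names and logic are my own, not the StratML query service's actual implementation.

```python
import re

def matches_any_term(text: str, query: str) -> bool:
    """OR logic: match if ANY word in the query appears (inflates counts)."""
    return any(re.search(rf"\b{re.escape(term)}\b", text, re.IGNORECASE)
               for term in query.split())

def matches_exact_phrase(text: str, query: str) -> bool:
    """Exact-phrase logic: match only the full query string as typed."""
    return re.search(rf"\b{re.escape(query)}\b", text, re.IGNORECASE) is not None

goal = "Promote learning opportunities for machine operators."
print(matches_any_term(goal, "machine learning"))      # True (false positive)
print(matches_exact_phrase(goal, "machine learning"))  # False
```

The first function explains why a "machine learning" query could return more hits than the full-text count: any goal mentioning either "machine" or "learning" alone satisfies it.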

Note also that Google queries/results can be shared via their URLs.  It would be nice if the StratML query service could support that capability as well.  Values to be added by the StratML-enabled query service include more explicit discovery and direct referencing of relevant goals and objectives.  As you know, we need to figure out how best to achieve the latter, leveraging the identifiers associated with each goal and objective.
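Supporting shareable queries is largely a matter of serializing the query into the URL, the way Google does. A minimal sketch, assuming a hypothetical base URL and parameter names (the real StratML query service would define its own):

```python
from urllib.parse import urlencode, urlsplit, parse_qs

# Hypothetical endpoint and parameter names, for illustration only.
BASE = "https://example.org/stratml/search"

def share_url(phrase: str, scope: str = "goals-objectives") -> str:
    """Serialize a query into a URL so results can be shared via a link."""
    return f"{BASE}?{urlencode({'q': phrase, 'scope': scope})}"

def parse_share_url(url: str) -> dict:
    """Recover the query from a shared URL so the same search can be re-run."""
    qs = parse_qs(urlsplit(url).query)
    return {k: v[0] for k, v in qs.items()}

url = share_url("machine learning")
print(url)
print(parse_share_url(url))
```

Direct referencing of individual goals and objectives could then build on the same idea, appending the goal or objective identifier to the URL.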

On Tuesday, November 29, 2022 at 12:05:09 AM EST, Paola Di Maio <> wrote:

Hello Owen
happy thanksgiving to you
the suggestion is simply that your contribution is welcome in relation to the AI KR
topics being discussed - rather than solely turning everything into StratML, which is not the primary focus of the list; we do not need to know every time you mint a StratML URL

I have also suggested in my reply to Carl that StratML is not on our agenda and that, as you say,
the reference he mentions is not clear - it is not clear what it is, nor how it relates to the work being done/discussed

But if you can make StratML a schema for rendering machine learning
algorithms in a human-readable format, maybe

On Tue, Nov 29, 2022 at 12:38 PM Owen Ambur <> wrote:
Carl & Paola, my wife and I were away last week for Thanksgiving with family.  We just got home this evening and I'm not sure I fully grasp this message thread.

For example, I'm not sure exactly what this statement means:  "Startpoint is the document produced during our meetings on  'Leveraging the StratML specification for AIKR'".  I don't seem to have that document included in my collection at

If any of the members of the group are willing and able to come to consensus on a potential output that we might produce together, I will be more than happy to render the plan in StratML format and do my best to contribute to realizing any goals and objectives it may set forth.

BTW, as per Paola's suggestion, I am addressing this reply only to the two of you.  I'm getting the sense that it may be time for me to sign off the list but I'm not one to burn any bridges that may someday prove useful.

On Wednesday, November 23, 2022 at 01:17:10 AM EST, Paola Di Maio <> wrote:

Carl and all

Thanks for offering to organise a call - I am somewhat glad to see that the overall mission for this AI KR CG is starting to sink in :-)

 it may be good to hear how CG members tackle the challenge from their perspective  (that may include stratml?)

Please excuse me for probably not attending the call at this stage, but I look forward to learning about the useful conversations you may hold if someone signs up for it.  This area is very complex

In addition to the call, or as an alternative to it, or both, may I suggest that you consider inviting interested members to contribute a paragraph (on a wiki) by
a) defining the challenges/problems not yet met, taking into account the state of the art, which is vast and mysterious (this implies having a grasp of the SOTA, which is a tough one, but every little bit helps)
b) making a short statement of interest as to how the member in question addresses/solves such challenges, including pointers to relevant work such as papers, talks, publications, or displays of interest in the topic

This could help us gather the field without unnecessary expectations and maybe stimulate members to pull their act together.
Keeping in mind the overall mission of a possible WG, can we ever draft anything from a W3C point of view based on what emerges?
In fact, I would be inclined to invite to a call only members who have formulated
their expression of interest in an articulate form (i.e., qualifying members)

or something like that


On Wed, Nov 23, 2022 at 2:52 AM carl mattocks <> wrote:
Paola, Pete et al

Thanks for your comments and the phrasing used in the modified call below.

I invite everyone to indicate your level of interest in participating in a series of meetings :
  • Objective is to determine how to Use KR to support Humans in the AI loop
  • One task is to explain the challenges that Human in the Loop Knowledge Representation would address.
  • The starting point is the document produced during our meetings on 'Leveraging the StratML specification for AIKR'
Happy Thanksgiving

Carl Mattocks

It was a pleasure to clarify

On Mon, Nov 21, 2022 at 9:20 PM Paola Di Maio <> wrote:
Carl, Human in the AI loop is a good idea,

Human in the loop is very broad; what about being a touch more precise:
Using KR to support humans in the AI loop
(maybe you can phrase it even better)

The goals and content would have to be aligned to the title
StratML, however useful, is specific to neither AI nor KR

I have absolutely no problem with the fact that Owen translates every statement
into StratML; however, this list is not about StratML at all.
Apparently some members are confused by the frequency of StratML posts
and wonder if this list is about StratML.

May we suggest that Owen, when he kindly and cheerfully makes a StratML page for everything we discuss here, refrain from making a public announcement on the list each time, and just ping the statement owner?

Owen, of course you are very welcome to continue to contribute to all discussions, but maybe we do not need to be informed every time you make a StratML entry?
what do you think? :-)
I think there may be scope for using StratML to make explicit statements about AI: each AI could have a StratML-like schema to declare what it does and how it does it.
However, if I remember correctly, you said you have no plans to modify StratML at the moment.
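One way to picture such a declaration: a small XML descriptor in the spirit of StratML's named-element pattern. This is a hypothetical sketch; the element names (AIDescription, Purpose, Method, Result) are my own invention, and an actual profile would reuse the real StratML vocabulary.

```python
import xml.etree.ElementTree as ET

# Build a hypothetical StratML-like descriptor for an AI system.
doc = ET.Element("AIDescription")
ET.SubElement(doc, "Name").text = "ExampleClassifier"
ET.SubElement(doc, "Purpose").text = "Label incoming documents by topic."
ET.SubElement(doc, "Method").text = "Fine-tuned transformer language model."
ET.SubElement(doc, "Result").text = "Topic label plus confidence score."

xml_text = ET.tostring(doc, encoding="unicode")
print(xml_text)

# A human (or another tool) can read back what the system claims to do.
parsed = ET.fromstring(xml_text)
print(parsed.findtext("Purpose"))
```

The point is only that a declaration of purpose, method, and result is both machine-parseable and human-readable, which is what the "humans in the loop" framing seems to call for.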

On Tue, Nov 22, 2022 at 4:28 AM Peter Rivett <> wrote:
Hi Carl,
I don't know if it's a copy-and-paste error, but I don't see how the title "KR for Human in the Loop" matches the objective, which is about the somewhat legacy XML language StratML. AFAIK, StratML is for strategic performance planning as opposed to AI, human involvement in AI, or knowledge representation, except for the very narrow domain of knowledge of strategic plans.

Apologies for missing background from previous pre-COVID discussions, but I'm sure I won't be the only one: are there any archives or outputs?
Maybe an explanation of the specific problem space related to Human in the Loop Knowledge Representation would help: for example the competency questions it's hoped to address.


Federated Knowledge, LLC (LEI 98450013F6D4AFE18E67)
Schedule a meeting at

From: carl mattocks <>
Sent: Monday, November 21, 2022 10:29 AM
To: Dave Raggett <>; W3C AIKR CG <>
Cc: Stanislav Srednyak, Ph.D. <>
Subject: "KR for Human in the Loop": Two challenges related to KR..
KR Folk

To give a measure of thanks at this time of Thanksgiving, I invite members to show their level of interest in participating in a regular conference call to discuss "KR for Human in the Loop"

The objective is to continue defining how "StratML" helps explain AI KR.  Specifically, before Covid, we had mapped out how "Human in the Loop" was a significant factor in shaping the use of AI KR, but we had no "language" for that interaction.


Carl Mattocks
It was a pleasure to clarify

On Mon, Nov 21, 2022 at 6:05 AM Dave Raggett <> wrote:
If you want a natural-language notation for math, you might be interested in EzMath, from work in the late nineties:

EzMath provides an easy to learn notation for embedding mathematical expressions in Web pages. The notation is inspired by how expressions are spoken aloud together with a few abbreviations for conciseness (e.g. x^y denotes x raised to the power y).
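To illustrate the spirit of such spoken-style notation, here is a toy expansion of a tiny expression into the phrase it would be read aloud as. This is only a sketch in the flavor of the description above; the real EzMath grammar is far richer than this.

```python
import re

def speak(expr: str) -> str:
    """Expand a tiny expression into a spoken-style phrase (toy example)."""
    expr = re.sub(r"(\w+)\^(\w+)", r"\1 raised to the power \2", expr)
    expr = expr.replace("+", "plus").replace("-", "minus")
    return re.sub(r"\s+", " ", expr).strip()

print(speak("x^y + 1"))  # x raised to the power y plus 1
```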

Sadly, the browser plugin is now defunct as it relies on an interface long abandoned by modern browsers.  It wouldn’t be that hard (one week's work) to reimplement it as a JavaScript library using the HTML CANVAS element as its target.

However, that is a million miles from work on AI agents like Minerva.

Minerva is a sophisticated deep-learning-based system. It starts from a general-purpose large language model (PaLM) and refines it with training against a mathematical dataset, producing impressive results.

However, the approach described in the paper (linked above) is limited to agents with a single purpose. For agents designed for general purposes, we need a more flexible approach. That is why I am proposing work on direct manipulation of latent semantics, along with mimicking the way that the brain separates different kinds of knowledge across different parts of the cortex. The idea is to combine intuitive (System 1) thinking with deliberative, analytic thinking (System 2).  Minerva only supports the former.

On 21 Nov 2022, at 10:00, Paola Di Maio <> wrote:

You and I are on different planets, and speak different languages :-)

So it seems. :-)

Dave Raggett <>
