Greetings Ontologers!!
Hope everyone on this list is keeping well.
There is great confusion and anxiety over the risks of irresponsible use of AI, especially agentic AI.
Can knowledge representation and ontology help to shed light?
For the better part of a decade, governments around the world invested enormous resources in telling the public that AI must be developed safely and ethically. The European Union produced the AI Act. The United States issued executive orders on responsible AI. The OECD published principles. UNESCO adopted recommendations. The G7 agreed on a code of conduct. Hundreds of millions of dollars flowed into AI safety research. Thousands of pages of policy were drafted, debated, revised, and celebrated. The message was unambiguous: AI is powerful, AI is dangerous, AI must be governed by ethical constraints.
We now have agentic AI, non-agentic AI, and large language models (LLMs) that enable natural language interfaces for both.
These technologies are transforming how systems are designed, deployed, and interacted with.
However, the current AI technology landscape is increasingly complex and often conflated. LLMs, agentic AI, autonomous agents, and non-agentic AI are frequently bundled together within products, platforms, and ecosystems without clear distinctions. While the technical capabilities are unprecedented, the challenges are equally unprecedented — and the overall landscape remains confusing.
Technologists, developers, users, investors, and policymakers often lack access to coherent knowledge maps that could help them gain orientation within this rapidly evolving domain. Without shared conceptual frameworks, terminology, and structured understanding, it becomes difficult to distinguish between architectures, capabilities, limitations, and risks.
In such a confused environment, misunderstandings can easily arise. These misunderstandings can lead to misaligned investments, premature regulation, policy overreach, or unnecessary market panic. The absence of clear knowledge structures increases the likelihood of strategic and legislative mistakes.
What is urgently needed is greater conceptual clarity, shared taxonomies, and transparent mapping of the AI ecosystem — so that innovation can proceed responsibly, markets can remain stable, and policymaking can be informed rather than reactive.
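To make the point concrete, a shared taxonomy of this kind can be made machine-readable with very little machinery. The sketch below is purely illustrative, assuming hypothetical class names and capability properties of my own choosing (none of this is drawn from any published ontology or from the AI Act); it only shows how explicit distinctions between LLMs, agentic AI, and non-agentic AI could anchor a discussion of capabilities and risks.

```python
from dataclasses import dataclass

# Illustrative sketch only: hypothetical kinds and properties,
# not a published taxonomy or a regulatory definition.
@dataclass(frozen=True)
class AISystemKind:
    name: str
    generates_language: bool   # produces natural-language output
    plans_autonomously: bool   # decomposes goals into its own sub-tasks
    invokes_tools: bool        # calls external APIs/tools without a human in the loop

LLM = AISystemKind("large language model", True, False, False)
AGENT = AISystemKind("agentic AI", True, True, True)
CLASSIFIER = AISystemKind("non-agentic AI (e.g. a classifier)", False, False, False)

def risk_flags(kind: AISystemKind) -> list[str]:
    """Map capability distinctions to the kinds of oversight they call for."""
    flags = []
    if kind.plans_autonomously:
        flags.append("needs oversight of goal decomposition")
    if kind.invokes_tools:
        flags.append("needs permissioning of external actions")
    return flags

print(risk_flags(AGENT))  # only the agentic kind raises both flags
```

Even a toy model like this makes the conflation visible: an LLM and an agent built on top of it end up in different risk categories the moment their capability properties are stated explicitly.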
Are other fellow ontologers as fervently interested in, and working on, this topic as I am?
Anyone using large language models (LLMs) for learning, productivity, teaching, analysis, drafting, or brainstorming likely recognizes their transformative value — they quite literally put a smile on people’s faces. These systems are expanding cognitive capacity and accelerating knowledge work across domains.
Of course, LLMs can make mistakes — but so can humans. The advancement of verification capabilities, critical reasoning skills, and structured validation methods is therefore part of essential new skill-building in the AI era. Responsible use, not avoidance, is the path forward.
At the same time, while comprehensive knowledge maps of the AI ecosystem are still being developed, research communities and industry actors are simultaneously negotiating standards, influence, and market positioning. This can create knowledge vacuums that are disorienting for technologists, investors, policymakers, and the broader public.
One current example is the discussion around MCP, a new agentic standard introduced by Anthropic, which in some popular media narratives is being mixed up with "webMCP". However, webMCP is not yet an established standard: it has barely been formally drafted or discussed in standards forums, and is only being test-deployed in Google Chrome Canary. Misrepresentation or premature framing of emerging technical artifacts at this critical juncture can have significant consequences, including policy confusion, misplaced investments, distorted public understanding, and new vulnerabilities.
It is in this spirit that I am working on Agentic Ontology v1, alongside an interoperability framework aimed at clarifying conceptual distinctions and enabling shared orientation across the ecosystem.
I would very much welcome collaboration with others working on related standards, ontologies, knowledge maps, or interoperability initiatives, and would be glad to share my work. Let it not be said that knowledge representation was nailed into a coffin, when in fact it is the only way out of uncertainty.
Please share links to relevant calls, workshops, etc., or let's organise something around this topic?
Best regards,
Paola Di Maio
Paper: [Di Maio, Paola (2026). AIAO: AI Agent Ontology v1. figshare. Journal contribution. https://doi.org/10.6084/m9.figshare.31231783.v3]
Technical Note 4 (Starborn Repository, GitHub): [https://github.com/Starborn/webmcp/blob/main/TN4.md]