This report is directed at a technical audience interested in understanding how to prepare their data for use in the UK National Digital Twin.
In 2017, the National Infrastructure Commission published Data for the Public Good (NIC, 2017), which set out a number of recommendations, including the development of a UK National Digital Twin supported by an Information Management Framework of standards for sharing infrastructure data, under the guidance of a Digital Framework Task Group set up for the National Digital Twin programme.
Much work has been done following this. In particular, the need was identified for:
This report provides a technical description of the process at the heart of the Thin Slices Methodology, with the aim of providing a common technical resource for training and guidance in this area. As such, it forms part of the wider effort to provide common resources for the development of the Information Management Framework.
This report focuses on the process at the core of the Thin Slices Methodology. It identifies a requirement for a minimal foundation for these kinds of processes. In the companion report, Top-Level Categories (Partridge, forthcoming), the foundation adopted by the Information Management Framework is described. Together, the two reports cover the details of the developing thin slices process.
Regards,
Chris Partridge
Chris Partridge |
Chief Ontologist | BORO Solutions Limited | www.BOROSolutions.co.uk
M: +44 790 5167263 | e: partr...@borogroup.co.uk
BORO Solutions Limited | Registered Office: 2 West Street, Henley on Thames, Oxfordshire RG9 2DU
Registered in England & Wales | Company No: 06025010 | VAT No. GB 905 6100 58
Chris and all,
The “thin slices” document is interesting, though by itself it does not yet fully answer one of my main interests. I probably have a more limited viewpoint and interest than this document is trying to satisfy; in particular, I was drilling in to see if I could learn something directly usable for schema harmonisation work in my domain (or perhaps even use our domain’s experience to help clarify the NDT approach).
I fully support all the material on context and benefits, and the approach at a high level is clear.
I am interested in the internals of the bCLEARer “evolve” process. The figure expanding “evolve” shows two system schemas becoming one neutral schema through a process called “entification”, but I didn’t see a further definition of that process. I checked the linked references that seemed most relevant to my line of questioning. I appreciated the linked paper by Chris, “The Role of Ontology in Integrating Semantically Heterogeneous Databases” (2002), which gives relevant examples of heterogeneity with semantic similarity and identifies ontological considerations, most of which are ways to see past simplifications in representations where the real-world possibilities are more complex.
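To make the kind of heterogeneity I have in mind concrete, here is a minimal, purely hypothetical sketch (not the bCLEARer implementation, and all names are invented): two system records describe the same real-world asset in structurally different ways, and an entification-style step maps both onto one neutral entity, with the cross-system identity judgement supplied explicitly by an analyst.

```python
# Hypothetical sketch of an entification-style merge. System A embeds
# ownership as a free-text attribute; System B reifies it as a link record.
system_a = {"asset_id": "P-101", "owner": "Thames Water"}
system_b = {
    "equipment_ref": "PUMP/101",
    "ownership": {"party": "Thames Water Utilities Ltd", "role": "owner"},
}

def entify(record_a, record_b, same_entity_map):
    """Produce one neutral entity from two system records that an
    analyst has judged to denote the same real-world object."""
    neutral_id = same_entity_map[(record_a["asset_id"], record_b["equipment_ref"])]
    return {
        "neutral_id": neutral_id,
        "source_records": [record_a["asset_id"], record_b["equipment_ref"]],
        # The ownership relationship becomes explicit in the neutral schema,
        # rather than being buried in a column of one system.
        "relationships": [
            {"type": "ownership", "party": record_b["ownership"]["party"]}
        ],
    }

# The mapping itself encodes the expert identity judgement the process
# depends on -- exactly the step I am asking how to systematise.
mapping = {("P-101", "PUMP/101"): "ndt:pump-101"}
print(entify(system_a, system_b, mapping)["neutral_id"])  # ndt:pump-101
```

The sketch makes my question sharper: the `mapping` dictionary is where the expert judgement lives, and it is that step for which I am wondering whether systematic rules exist.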
I am still wondering how well-defined the process of entification is. Experts can perform it using their understanding of the domain, the system schemas, the ontological considerations discussed in the paper, and the agreed top-level categories, but I wonder whether, beyond those, there are any further systematic steps, principles, or rules defined.
Regards,
Ian Cornwell