The DLMS UA Certificate of Compliance certifies compliance with the core DLMS UA specifications (the Blue Book and the Green Book). This certification serves as a mark of excellence for data exchange interoperability.
The DLMS UA Certificate of Compatibility is issued for one or more Generic Companion Profiles released by DLMS UA. Compatibility certification ensures that certified devices are plug-and-play within a specific application, regardless of the device manufacturer.
Once the member has determined the type of test tool required, they can choose to buy a Software-as-a-Service license (annual subscription) or a perpetual license (one-off fee), or to sub-license the test tool either from an existing DLMS UA Member in Good Standing with a valid test tool license or from a DLMS UA Accredited Test Laboratory.
Members are required to test products over the communication technology supported by the implemented DLMS UA communication profile, so that the product under certification is shown to be compliant in its declared native communication technology.
Once the DLMS UA Test Tool, Golden Communication Environment, and/or Golden Communication Device have been licensed or purchased, the DLMS UA Test Tool distributor will provide the license keys needed to install the DLMS UA Test Tool. Members then submit their self-test results via the online platform.
The Yellow Book is the reference document describing the entire DLMS UA Qualification Process and the testing procedures for certification of compliance. It provides all the information needed to proceed with a DLMS UA Qualification Program application.
The first step in obtaining the DLMS UA Certificate of Compatibility or IDIS Package 3 certification is to ensure that your device has been certified by DLMS UA for compliance. The device must embed the DLMS UA Generic Companion Profile(s), as specified by DLMS UA, against which compatibility is to be proven.
Following the merger of the IDIS Association with DLMS UA, it is now possible to apply for IDIS Package 3 certification using the online DLMS UA Qualification Portal, as described below. IDIS Package 2 certification remains available for now; however, it will be phased out in December 2024.
Members should decide which DLMS UA Generic Companion Profile(s) they wish to certify. Multiple Generic Companion Profiles can be implemented in a single device; however, compatibility testing is carried out separately for each DLMS UA Generic Companion Profile.
Compatibility testing is performed by a DLMS UA Accredited Test Laboratory, which members choose during the DLMS UA Qualification Process. The full list of Accredited Test Laboratories is available on the DLMS UA website.
Once the application to the DLMS UA Qualification Program has been submitted, DLMS UA will issue an invoice covering eligibility verification and issuance of the DLMS UA Product Qualification Certificate. The Accredited Test Laboratory will already have queued the certification application on submission; on receipt of payment, DLMS UA will trigger the laboratory to begin testing. The cost of the laboratory's work is charged directly to the applicant by the laboratory.
The DLMS UA Accredited Test Laboratory can conduct both pre-certification and certification runs of the DLMS UA Test Tool. The prices for these tests are published in the DLMS UA Internal Regulations (Bylaws).
On completion of testing, DLMS UA verifies the test report and, if the verification is successful, issues the Certificate of Compatibility. The member can then use the DLMS UA Compatibility Certification trademark on the certified product.
The importance and value of sharing data are well known and increasingly accepted by the scientific community (Piwowar & Vision 2013); the benefits are too great to ignore. Research can be better validated and understood by fellow researchers. Existing research can be reproduced and expanded. Researchers who want to build on published research can reuse existing data to arrive at new conclusions (Bierer et al. 2017). In addition, linking scholarly literature and data increases the visibility, discovery, and retrieval of both, facilitating reuse, reproducibility, and transparency. In a digital world where data can be more easily shared and documented, scholarly literature and its underpinning data are increasingly seen as inseparable.
At the same time, while the importance of data sharing is accepted, there are essential questions that still require an answer (Borgman 2012). For example, why should authors go through the effort of documenting and publishing datasets, if their career depends on the publication of articles (Mongeon et al. 2017)? How can funding bodies and other assessment boards include data in the evaluation of projects and people when there is no recognized metric or method to measure the quality and impact of the published data? How can the community create the necessary infrastructure for publication and evaluation of data, if there is no standard for metadata and basic attribution information around data? Several RDA projects are underway to provide answers to these questions by creating a framework to measure data reuse in a standardized fashion.
Finding the right way to measure the impact of shared data is crucial if research data is to be included as one of the scholarly outputs used for research evaluation. The current meritocratic system in academia relies heavily on the publication of scientific results in recognized academic journals, supported by an international editorial board and peer review system. The most commonly used metric of a publication's impact is the number of citations it receives from other publications that are themselves peer reviewed and published in recognized journals. This currently provides a basis for quality assessments of research projects, career advancement, and funding opportunities (Cantu-Ortiz 2017).
The temptation to use the same metrics for data, and to measure citations of datasets in articles, is certainly strong. However, the interaction with and impact of research data are more complex than that. The very definition of a data citation is fuzzier than its equivalent for articles. At the time of writing, community practices around data citation have not fully evolved: it will take time and discipline to reach a unified, standard citation method for data (Silvello 2018). In addition, there are different ways to interact with a dataset. As described in Kratz and Strasser (2015), the value a researcher gets from data is not conveyed by simply opening its description page. For articles, reading online or downloading offers practically the same value to the reader, but a dataset must be downloaded to be fully consumed. While citations remain the most popular metric for measuring the impact of any scholarly output within the academic community (Kratz and Strasser 2015), measuring the impact of data encompasses more dimensions. Citations will therefore need to be accompanied by other metrics, which could include data usage statistics and social media mentions.
In this paper, we describe how the outputs of two RDA working groups (WGs), the Scholix WG and the Data Usage Metrics WG, can be used to assess data reuse and make data usage statistics and citations available. We first outline how data repositories and publishers can expose article-data links using Scholix approaches, and data usage metrics following the new Code of Practice for Research Data. We then explain how they can consume this information to make data-level metrics (DLMs) available and help researchers get credit for their work.
The goal of the Scholix WG was to establish a high-level framework for exchanging article-data links. It aimed to enable an open information ecosystem to understand systematically what data underpins literature and what literature references data (Burton et al. 2017a).
While there are clear benefits to linking literature and data, in practice these links are difficult to find or share. The main reason is that there is no universal way of exchanging link information between the organizations and systems that hold it; instead, exchange relies on a patchwork of bilateral agreements and technical frameworks between individual partners and systems.
The Scholix WG addressed this problem. Its goal was to improve the links between scholarly literature and research data, as well as between datasets, thereby making it easier to discover, interpret, and reuse scholarly information. The Scholix initiative offers a conceptual model of article-data links, an information model for describing them, and a framework in which organizations contribute link information through community hubs.
The conceptual model (Figure 1) is about the link between two objects, such as a journal article and the underpinning data. Rather than describing in detail the properties of each of the two objects, the conceptual model focuses on the relationship between the objects. It also records who asserted the link and who made it available.
Figure 1: Scholix information model. Providers contribute links by sharing information about the source object (article or dataset), the target object (article or dataset), and the nature and direction of the relationship.
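To make the model concrete, the minimal sketch below expresses a single link record as a Python dict. The field names (Source, Target, RelationshipType, LinkProvider) follow the published Scholix metadata schema, but the identifiers, dates, and provider name are illustrative assumptions rather than a record from a real hub.

    # A minimal sketch of a Scholix-style link record as a Python dict.
    # Field names follow the Scholix metadata schema; all identifiers
    # and values below are illustrative placeholders.
    scholix_link = {
        "LinkPublicationDate": "2019-03-01",
        "LinkProvider": [{"Name": "DataCite"}],
        "RelationshipType": {"Name": "References"},
        "Source": {
            "Identifier": {"ID": "10.1234/article.123", "IDScheme": "doi"},
            "Type": {"Name": "publication"},
        },
        "Target": {
            "Identifier": {"ID": "10.5678/dataset.456", "IDScheme": "doi"},
            "Type": {"Name": "dataset"},
        },
    }

    # The record captures only the relationship: which object points at
    # which, in what direction, and who asserted the link.
    print(scholix_link["Source"]["Identifier"]["ID"],
          scholix_link["RelationshipType"]["Name"],
          scholix_link["Target"]["Identifier"]["ID"])

Note that the record deliberately carries only thin descriptions of the two objects; consumers are expected to resolve the identifiers for full metadata, which keeps the exchanged link information lightweight.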
As mentioned in the previous section, organizations within the Scholix framework contribute information through community hubs. The majority of scholarly publishers work with the non-profit organization Crossref to share metadata about publications. These metadata records include comprehensive information about the items being registered, and increasingly include links to related scholarly artifacts such as data, software, protocols, and reviews. When data citations are included in Crossref metadata records, they are made available to the wider community.
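As a hedged illustration of how such link information can be consumed, the sketch below queries the public Crossref REST API for a work's registered metadata and prints the DOIs found in its deposited reference list. The DOI used is a placeholder, and a reference list is only present when the publisher has deposited one; matching the printed DOIs against a dataset registry such as DataCite is one possible way to identify article-data links, not a step Crossref performs itself.

    import requests

    # Placeholder DOI for illustration; substitute any Crossref-registered DOI.
    doi = "10.5555/12345678"

    # The Crossref REST API returns the registered metadata for a work,
    # including its reference list when the publisher has deposited one.
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
    resp.raise_for_status()
    work = resp.json()["message"]

    # References that carry a DOI can then be checked against other
    # registries (e.g. DataCite) to see which of them point at datasets.
    for ref in work.get("reference", []):
        if "DOI" in ref:
            print(ref["DOI"])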