Akamu 0.3: A suite of Python tools for XML Filesystem and RDF dataset management for Web Applications

Chime Ogbuji

Jul 12, 2012, 3:55:54 PM
to ak...@googlegroups.com, akar...@googlegroups.com, fuxi-di...@googlegroups.com, semant...@w3.org
Apologies for the Cross Posting.

I've recently been working on an (updated) architecture for managing XML documents in a filesystem alongside an RDF Graph Store that acts as a faithful rendition [1] of their content, such that changes to the XML documents are reflected in the Graph Store. This is a reincarnation of the 4Suite repository architecture [2], which had this capability and was used as the content management component of SemanticDB [3]. It worked well as a framework for health information management, so I have ported this architecture (or part of it) onto the Akara platform [4] and use it as infrastructure for web-based health management applications.
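To give a rough sense of what a "faithful rendition" means in practice, here is a minimal sketch of the idea. It is not Akamu's actual API; it assumes lxml and rdflib and uses illustrative file paths. Whenever an XML document is written, a GRDDL-style XSLT transform is applied to it and the result replaces that document's named graph in the store:

    # Minimal sketch of the "faithful rendition" idea (not Akamu's API):
    # every write of an XML document re-runs a GRDDL-style XSLT transform
    # and replaces that document's named graph in the RDF store.
    from lxml import etree
    from rdflib import ConjunctiveGraph, URIRef

    store = ConjunctiveGraph()  # stands in for a persistent graph store

    def write_document(path, xml_bytes, transform_path):
        """Persist the XML document and synchronize its RDF rendition."""
        with open(path, 'wb') as f:
            f.write(xml_bytes)

        # Apply the document's GRDDL transform (XML -> RDF/XML).
        transform = etree.XSLT(etree.parse(transform_path))
        rdf_xml = etree.tostring(transform(etree.fromstring(xml_bytes)))

        # Replace the named graph that mirrors this document.
        graph = store.get_context(URIRef('file://' + path))
        graph.remove((None, None, None))
        graph.parse(data=rdf_xml, format='xml')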

It has been committed to a new Google Code project (with an Apache 2.0 license) that also includes RDF management and query capabilities along with some XSLT extensions. It is called Akamu [5] and is an early work in progress. Any feedback is very welcome. I have not yet decided whether setting up a new Google group would be appropriate, since it mostly serves as infrastructure for existing projects.

[1] http://www.w3.org/TR/grddl/#sec_rend
[2] http://www.ibm.com/developerworks/xml/tutorials/x-4suite5/section5.html
[3] http://clevelandclinic.academia.edu/ChristopherPierce/Papers/1728666/SemanticDB_A_Semantic_Web_Infrastructure_for_Clinical_Research_and_Quality_Reporting
[4] http://akara.info/
[5] http://code.google.com/p/akamu


--
Chime Ogbuji
Sent with Sparrow (http://www.sparrowmailapp.com)


Chime Ogbuji

Sep 1, 2012, 11:40:04 AM
to ak...@googlegroups.com, akar...@googlegroups.com, fuxi-di...@googlegroups.com, semant...@w3.org, akamu-di...@googlegroups.com
Apologies for the Cross Posting.

Akamu has been updated to version 0.4. In particular, it now includes an implementation of an HTTP protocol [1] for managing its XML/RDF filesystem and a library [2] for distributed content caching and management (based on memcache), along with the ability to control cache-specific HTTP headers (Cache-Control, Pragma, Expires, Last-Modified, ETag, and Vary) in the responses of services that make use of it. The wikis have been updated with documentation for these capabilities as well as the others.
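For readers less familiar with those headers, here is a generic WSGI sketch, independent of Akamu's wheezy/memcache machinery (which manages this for you), showing what controlling them amounts to on the wire:

    # Generic WSGI sketch of the cache-related response headers mentioned
    # above; not Akamu code, just an illustration of the headers themselves.
    from email.utils import formatdate
    import hashlib

    def cached_app(environ, start_response):
        body = b"<doc/>"
        etag = '"%s"' % hashlib.md5(body).hexdigest()

        # Conditional GET: reply 304 if the client already holds this entity.
        if environ.get("HTTP_IF_NONE_MATCH") == etag:
            start_response("304 Not Modified", [("ETag", etag)])
            return []

        headers = [
            ("Content-Type", "application/xml"),
            ("Cache-Control", "public, max-age=300"),
            ("Expires", formatdate(usegmt=True)),        # illustrative value
            ("Last-Modified", formatdate(usegmt=True)),  # illustrative value
            ("ETag", etag),
            ("Vary", "Accept"),
        ]
        start_response("200 OK", headers)
        return [body]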

It now has a Google group mailing list [3] (akamu-di...@googlegroups.com) where subsequent updates such as these will be posted and where issues or suggestions can be reported and discussed.

The list of changes is below:

- Fixed @xslt_rest to ensure that if the response is not 200 it is simply returned as-is
- akamu.diglot.Resource.getContent now takes a mediaType argument, a MIME type: if it is None or application/xml the result is the XML content; otherwise the RDF graph is serialized using the corresponding syntax (application/rdf+xml, text/n3, text/turtle, text/plain (N-Triples), etc.); see the first sketch after this list
- Added integrated support for wheezy-based HTTP caching of Akara services using memcache (akamu.wheezy)
- Added a DiglotFS HTTP protocol (DFSP) implementation
- foaf.xslt now includes a reverse mapping (RDF -> XML) as an example of how it is used with DFSP
- Added a fully-ported GRDDL client that makes use of Amara (test/GRDDLAmara.py)
- Updated the DiglotFS test suite to include tests for DFSP
- Added a delete() method to DiglotFS resources
- Changed the ggs namespace (for reverse transform definitions)
- Added implementations for PUT and DELETE in DFSP
- Added better support for 404 responses in DFSP
- Added GET/POST/PATCH/PUT/DELETE tests for DFSP
- Added a means for DFSP to make use of the caching capabilities
- Added support for bypassing the caching capabilities if 'wheezy.http.noCache' is in environ
- DFSP now supports caching using the DiglotFS path as the cache name (which can be invalidated by user-specific services)
- The DFSP service decorator now takes additional caching and cacheability keyword parameters: the former is a boolean indicating whether or not to cache, and the latter (used if caching is True) is the cacheability option ('public', etc.); see the second sketch after this list
- Added support for making use of environ values in composing cache key
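
Regarding the getContent change above, a hedged usage sketch: only the method name and the mediaType semantics come from the changelog; how the Resource is constructed below is an assumption for illustration.

    # Hypothetical usage of akamu.diglot.Resource.getContent(mediaType=...).
    # The constructor call below is an assumption for illustration; only the
    # method name and the mediaType behaviour come from the changelog.
    from akamu.diglot import Resource

    res = Resource('/reports/2012/summary.xml')             # assumed signature

    xml_doc = res.getContent()                               # None -> XML content
    xml_doc = res.getContent(mediaType='application/xml')    # same as above

    # An RDF media type returns the graph serialized in that syntax instead.
    turtle   = res.getContent(mediaType='text/turtle')
    ntriples = res.getContent(mediaType='text/plain')        # N-Triples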

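And for the DFSP service decorator's new keyword parameters, a sketch of how a service declaration might look: the decorator name and import path are assumptions; only the caching and cacheability keywords and their meaning come from the changelog.

    # Hypothetical DFSP service declaration; the decorator name and import
    # path below are assumptions. Only the caching/cacheability keywords
    # (whether to cache; cacheability option such as 'public') come from
    # the changelog.
    from akamu.wheezy import dfsp_service   # assumed import path and name

    @dfsp_service(caching=True, cacheability='public')
    def report(environ, start_response):
        """A DFSP-exposed service whose responses are cached under the
        DiglotFS path, per the caching behaviour described above."""
        ...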


Future features include integration of the Triclops server [4], with its SPARQL 1.1 RIF and OWL2-RL entailment, query management, Proxy SPARQL Endpoint, Service Description, and dataset browsing capabilities.

[1] http://code.google.com/p/akamu/wiki/DiglotFileSystemProtocol
[2] http://code.google.com/p/akamu/wiki/HTTPCaching
[3] http://groups.google.com/group/akamu-discussion
[4] http://code.google.com/p/python-dlp/wiki/Triclops