> Are there e.g. Jupyter notebooks with executable snippets demonstrating use of at least every store?
Not that I'm aware of, but some Jupyter demos would be useful!
We might consider a stand-alone documentation repo in the RDFlib GitHub organization with Jupyter examples?
- [ ] choose a project url and name
- rdflib/notebooks
- pypi:notebooks is no good
- rdflib/rdflib_notebooks
- rdflib/rdflibnotebooks
- rdflib/TBD # ~ RDF bnodes
- [ ] generate an rdflib/TBD org project from the/a GitHub project template?
- [ ] create an rdflib/ project template
(To be clear, I'm often only good for *suggesting* tasks.)
- [ ] DOC,BLD,ENH: Create a Jupyter Book folder and configuration for RDFlib demo/testing/store_benchmark notebooks
`jupyter-book create TBD/`
- [ ] DOC,ENH: can and should the sphinx-apidoc output for rdflib.* -- maybe even gh:rdflib/* -- be included in a Jupyter Book (Sphinx) JAMstack static HTML site, just as it is in the rdflib docs (Sphinx)?
- [ ] BLD: Configure the repo for CI
- [ ] GitHub Actions / Drone
- Comprehensive RDFlib Store benchmarks (in notebooks) could make the build prohibitively slow, which may be justified given the expected contribution frequency for this project (see the benchmark-cell sketch below)
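
A minimal sketch of what one benchmark cell in such a notebook might look like, assuming rdflib 6.x (where the in-memory store plugin is registered as `"Memory"`); the triple count is a placeholder, and a persistent plugin store would additionally need `graph.open(...)`/`graph.close()`:

```python
# Hypothetical benchmark cell: time bulk triple insertion into a store.
# Swap store="Memory" for another registered plugin name (e.g. "BerkeleyDB")
# and add graph.open(path, create=True) / graph.close() for persistent stores.
import time

from rdflib import Graph, Literal, URIRef
from rdflib.namespace import RDFS

N_TRIPLES = 10_000  # placeholder workload size

graph = Graph(store="Memory")

start = time.perf_counter()
for i in range(N_TRIPLES):
    graph.add(
        (
            URIRef(f"http://example.org/item/{i}"),
            RDFS.label,
            Literal(f"item {i}"),
        )
    )
elapsed = time.perf_counter() - start

print(f"added {len(graph)} triples in {elapsed:.3f}s")
```

Keeping the workload small, and skipping slow or unavailable stores by default, would help keep `jupyter-book build` times tolerable in CI.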
> Which RDFlib tests/ directory should be the most comprehensive reference
> of what does and doesn't work with a {given RDFlib store, SPARQL endpoint, }?
- [ ] DOC: Distill store test guidelines for QA and performance purposes
- `ls store_*/test*/* | xargs -n1 basename | sort -u`
- [ ] TST,ENH,PERF: rdflib/rdflib: ITestRDFlibStore, ITestSPARQLendpoint (a parametrized test sketch follows this list)
- [ ] DOC,ANN,REQ: @RDFlib/store_maintainers/*: add e.g. a tests/test_store_performance.py for e.g. the rdflib/rdflib_notebooks Jupyter Book benchmarks (or better)
- re: benchmarks and Python [web] app performance:
- In the context of this existing fast, open-source Python web app for hosting trained ML models, is RDFlib something like an ML framework for *predictive* inferencing?
- [ ] DOC,ENH: One or more [nbgrader-able] notebooks for +training @online +selfpaced
- [ ] BLD,ENH: generate a JupyterLite (WASM) build that includes rdflib, store plugins, and a recent build of JupyterLab
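
As a starting point for the ITestRDFlibStore idea, here is a hedged sketch of a pytest module parametrized over store plugin names, so every store gets the same add/query round-trip check; the file name `tests/test_store_conformance.py`, the store list, and the BerkeleyDB branch are illustrative assumptions, not an existing RDFlib layout:

```python
# Hypothetical tests/test_store_conformance.py: run identical round-trip
# checks against each registered store plugin name (the list is illustrative).
import pytest

from rdflib import Graph, Literal, URIRef
from rdflib.namespace import FOAF

STORE_NAMES = ["Memory"]  # extend with e.g. "BerkeleyDB" where installed


@pytest.mark.parametrize("store_name", STORE_NAMES)
def test_add_and_query_roundtrip(store_name, tmp_path):
    graph = Graph(store=store_name)
    if store_name == "BerkeleyDB":
        # Persistent stores need an explicit open() with a storage path.
        graph.open(str(tmp_path / "db"), create=True)

    alice = URIRef("http://example.org/alice")
    graph.add((alice, FOAF.name, Literal("Alice")))

    # Triple-pattern API and SPARQL should agree for any conformant store.
    assert [str(o) for o in graph.objects(alice, FOAF.name)] == ["Alice"]
    rows = list(
        graph.query(
            "SELECT ?name WHERE { ?s <http://xmlns.com/foaf/0.1/name> ?name }"
        )
    )
    assert len(rows) == 1

    graph.close()
```

Timing the same body (e.g. with pytest-benchmark, or `time.perf_counter()` in the notebooks) would be one way to cover the test_store_performance.py item as well.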
I think we should:
- try to improve the documentation of Stores
- perhaps prefix all store example filenames with store_, so we would have:
- store_berkeleydb.py (not berkeleydb_example.py)
- store_sparqlstore.py (not sparqlstore_example.py) (see the sketch after this list)
- grow the number of store examples
- HDT
- SQLAlchemy
- LevelDB
- SPARQL ReadWrite
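
For the store_sparqlstore.py case, a minimal sketch of what such an example file might contain, assuming rdflib 6.x and a reachable public read-only endpoint (DBpedia is only an illustration):

```python
# Hypothetical examples/store_sparqlstore.py: query a remote SPARQL endpoint
# through rdflib's read-only SPARQLStore plugin.
from rdflib import Graph
from rdflib.plugins.stores.sparqlstore import SPARQLStore

# Any public read-only SPARQL endpoint works; DBpedia is just an illustration.
store = SPARQLStore(query_endpoint="https://dbpedia.org/sparql")
graph = Graph(store=store)

query = """
    SELECT ?label WHERE {
        <http://dbpedia.org/resource/Resource_Description_Framework>
            <http://www.w3.org/2000/01/rdf-schema#label> ?label .
        FILTER (lang(?label) = "en")
    }
"""
for row in graph.query(query):
    print(row.label)
```

A read/write variant (the SPARQL ReadWrite item) would presumably use `SPARQLUpdateStore` with both a query and an update endpoint instead.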
Good plan.
What are some good DRY ways to include actual test cases from tests/ as rdflib API usage demos in Jupyter notebooks?
- IPython / Jupyter magics (see the cell sketch after this list):
- %pdoc
- %pfile
- %psource
- fastai/nbdev is another way to be DRY (Don't Repeat Yourself) about tests, notebook demo examples with output, and docs.
> Automatically generate docs from Jupyter notebooks. These docs are searchable and automatically hyperlinked to appropriate documentation pages by introspecting keywords you surround in backticks
- Copy/paste and adapt from ~ demo examples in tests/
- Copying and pasting is forking: who will keep merging RDFlib API changes back into the docs notebooks when CI runs of the demo notebooks (`jupyter-book build`) fail with exceptions caused by justified breaking refactorings?
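
To illustrate the magics approach mentioned above, a notebook cell can render the live docstring and source of the object under discussion instead of pasting a copy, so the notebook stays in sync with whatever rdflib version is installed; pointing the same magics at functions under tests/ should also work once the checkout is importable:

```python
# Notebook cell (IPython/Jupyter only): render live docs and source inline
# rather than pasting a copy that can drift out of date.
import rdflib

%pdoc rdflib.Graph.add     # show the docstring
%psource rdflib.Graph.add  # show the implementation, always in sync
```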