You received this message because you are subscribed to the Google Groups "Learning Registry Developers List" group.
Steve, Jim, Pat, and Joe,
Thank you very much for your extended and much appreciated feedback! Some comments here (in a logical sequence).
Pat and Jim:
I was well aware of your brilliant Chrome extensions. My objective is to extend the idea by providing discovery “functionality” (in addition to meta- or para-“data”) from the rich snippets, that is, Query by Example starting from the specific learning resource. This requires going back and forth to the LR, hence my need for efficient filtering data services, like those provided by EXTRACT.
Jim:
> I built this set of data services some time ago […] They were never part of the official API, but more of a simplified way for node owners to create and host dedicated custom slice-like APIs.
You developed them as an example of a general design pattern for a layer of services on top of the core LR services. But your “example” actually offers a very useful set of services for developers close to the end users (without the need to be a node owner) – and I am really happy to learn that there is every intention to maintain (and possibly extend) them.
Joe:
> And then of course I went looking for the code and found that you were actually on the right track. I forgot there was a flip-side that allowed you to feed in a resource and get standards alignments back. Here’s how you get the standards for a resource:
> http://node01.public.learningregistry.net/extract/standards-alignment-related/discriminator-by-resource?resource=http://illuminations.nctm.org/WebResourceReview.aspx?ID=2079
Indeed: in my example I was using “discriminator-by-resource” in exactly this way. But I was alarmed because the *same* call that produced correct results a few days ago (I am 100% sure) stopped producing results after LR node01 was down for a little while. The problem might not be due to the pre-LRMI development, but perhaps to reindexing, as suggested by Jim (plus caching, I suspect). It’s fine for the moment – I’ll let you know if I spot additional erratic behaviour.
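For the record, here is how I reproduce the call in a few lines of Python. The endpoint path is the one from Joe’s message; the URL-building helper is just my own sketch, and its main point is that the resource locator should be percent-encoded, since its own query string (the “?ID=2079” part) can otherwise be mis-parsed by the server:

```python
from urllib.parse import urlencode
from urllib.request import urlopen  # only needed for the live call
import json

BASE = "http://node01.public.learningregistry.net/extract/standards-alignment-related"

def discriminator_by_resource_url(resource_url):
    """Build the EXTRACT 'discriminator-by-resource' URL.

    The resource locator is percent-encoded so that its own query
    string (e.g. '?ID=2079') travels as a single parameter value.
    """
    return BASE + "/discriminator-by-resource?" + urlencode({"resource": resource_url})

url = discriminator_by_resource_url(
    "http://illuminations.nctm.org/WebResourceReview.aspx?ID=2079")

# Live call (assumes the node is up and answers with JSON):
# alignments = json.load(urlopen(url))
```

(The live call is left commented out, given the intermittent node01 behaviour described above.)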
> I haven’t worked with these data services in a while, but it seems like it would be worthwhile to update them. It would be even better if they handled CCSS URLs and dot-notation IDs as discriminators as well.
Yes, definitely - that would be very useful! Some comments about this below in Steve’s section.
> Or you can cross your fingers that I’ll find an extra couple of hours to get to it.
Fingers crossed then!
Steve:
> If you can share your requirements for the service (what data you want out of the system, etc.), that would be helpful so we can make sure the new search API will accommodate it.
My perspective is that of an edge-node consumer agent (while I do understand that your core focus is on the distribution infrastructure).
My personal wish list would certainly start with Joe’s suggestion: extend the existing EXTRACT service to cover additional standards beyond ASN, while supporting the common payload schemas:
resources / ID_only / COUNT_only – associated with a given standard [and vice versa, to make it more efficient than fetching the whole resource data and parsing it]
While not essential, COUNT_only would be useful for quickly providing “volume indicators” to end users.
This could be further extended to other “filtering” capabilities on basic metadata, as provided by other learning resource portals. While tag and author can currently be handled via the SLICE API, additional ones could include topic / audience / age / resource_type… (but I am well aware of the complexity arising from your schema-agnostic strategy).
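To make the wish list concrete, here is what such an extended service might look like from the client side. Everything below is hypothetical: the “resource-by-discriminator” path simply mirrors the flip-side call Joe mentioned, while the `fields` parameter (full resources vs. IDs only vs. count only) and the extra metadata filters are only my guesses at a possible interface, not anything the node offers today:

```python
from urllib.parse import urlencode

BASE = "http://node01.public.learningregistry.net/extract"  # existing EXTRACT root

def resources_by_standard_url(standard_id, fields="resources", **filters):
    """Hypothetical extended query: resources aligned to a given standard.

    fields  -- 'resources' (full payloads), 'ids' (locators only),
               or 'count' (a quick volume indicator); hypothetical.
    filters -- extra metadata discriminators (topic=..., audience=...,
               resource_type=...); also hypothetical.
    """
    params = {"discriminator": standard_id, "fields": fields, **filters}
    return (BASE + "/standards-alignment-related/resource-by-discriminator?"
            + urlencode(params))

# A COUNT_only query for one standard, filtered by audience
# (the ASN URI below is a placeholder, not a real statement):
url = resources_by_standard_url(
    "http://asn.jesandco.org/resources/S000000",
    fields="count", audience="teacher")
```

The idea is simply that the same discriminator machinery could serve the three payload shapes above, so a portal can show a count before ever pulling full resource documents.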
Thank you all again for your valuable feedback,
Renato