List of operations to support via API #6
To me, this is asking for an "all you know in quads" endpoint. The rest is an implementation detail that should not be a concern for `shacl-vue`. Therefore "`shacl-vue` could receive several optional inputs, including..." is a needless complication IMHO. A (URL) pointer to a TTL file should be (made) indistinguishable from an API endpoint that supplies the exact same thing. In both cases, this pointer would be a parameter of a deployment/runtime session. And it would/should be optional.
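In other words, the client would need exactly one abstraction: a URL that yields quads. A minimal sketch of that idea, using a toy line-per-triple serialization (all names here are hypothetical and invented for illustration; the real `shacl-vue` uses the `fetch-lite` package for fetching):

```python
# Hypothetical sketch: the client sees a single "quad source" abstraction,
# regardless of whether the URL points at a static TTL file or a service
# endpoint, as long as both supply the exact same serialization.

def quads_from_url(url, fetcher):
    """`fetcher` is any callable returning the serialized RDF payload for
    `url`; parsing is stubbed out here as a toy line-per-triple format."""
    payload = fetcher(url)
    return {tuple(line.split()) for line in payload.splitlines() if line.strip()}

# Two "sources" that happen to produce identical payloads; the client
# cannot (and should not) distinguish them:
static_file = lambda url: "ex:jane rdf:type ex:Person"
api_endpoint = lambda url: "ex:jane rdf:type ex:Person"
```

Since both callables yield the same payload, `quads_from_url` returns the same quad set for either, which is the "indistinguishable" property argued for above.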
To me, this is configuration. shacl-vue (typically?) will be pointed to one-and-exactly-one schema-version-record-dump. So it should get the associated label as a deployment/runtime parameter. In the current concept, the schema name/version/variant is all multiplexed onto a single label.
A "local" query refers to information that is not (yet) submitted to the service, right? If we support this kind of staging, we also need to (introduce?) some tracking of what has been modified in the entire session, to be able to "send" efficiently at a later point in time.
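For illustration only, the session tracking hinted at here could be as simple as a delta buffer that records what was added or removed locally, so a later "send" only transmits the difference (a hypothetical sketch; `StagingSession` and its methods do not exist in shacl-vue):

```python
# Hypothetical sketch of session-level change tracking: quads added or
# removed locally are recorded so a later "send" transmits only the delta.

class StagingSession:
    def __init__(self):
        self.added = set()    # quads created in this session
        self.removed = set()  # quads deleted in this session

    def add(self, quad):
        if quad in self.removed:
            self.removed.discard(quad)  # re-adding cancels a pending delete
        else:
            self.added.add(quad)

    def remove(self, quad):
        if quad in self.added:
            self.added.discard(quad)    # deleting an unsent quad cancels it
        else:
            self.removed.add(quad)

    def pending_changes(self):
        """Everything that still needs to be sent to the service."""
        return {"add": set(self.added), "delete": set(self.removed)}

    def mark_sent(self):
        # Called after a successful "send"; the session starts a new delta.
        self.added.clear()
        self.removed.clear()
```

The two cancellation branches are what makes the eventual "send" efficient: a quad created and then deleted within the same session never reaches the service at all.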
No, I meant a local query as in querying RDF data that has already been retrieved from the service endpoint and currently resides in a client-side RDF store. For implementation, I'm thinking of some sort of chaining of the query and then consolidation: first look at the local store, then the service endpoint, then consolidate the records. If the RDF store automatically handles redundant triples effectively (I need to check this), then the process can be switched around: first query the service endpoint, add the response data to the local RDF store, let the store consolidate the data, then query the store. Although, your point is also an important one. At the moment,
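For illustration, the chaining-then-consolidation described here could look roughly like this (a minimal Python sketch, with quads modeled as plain hashable tuples; `chained_query`, `matches`, and the store arguments are invented for illustration and are not shacl-vue or rdf-ext APIs):

```python
# Minimal sketch of the proposed query chaining, with quads modeled as
# (subject, predicate, object, graph) tuples. All names are hypothetical.

def matches(quad, pattern):
    # A None in the pattern acts as a wildcard, similar in spirit to an
    # RDF/JS dataset.match() call.
    return all(p is None or p == part for p, part in zip(pattern, quad))

def consolidate(local_quads, service_quads):
    """Merge two query results; identical quads collapse automatically
    because quads are hashable tuples held in a set."""
    return set(local_quads) | set(service_quads)

def chained_query(local_store, service_records, pattern):
    # Variant 1 from the comment: look at the local store first, then the
    # service response, then consolidate the two result sets.
    local = {q for q in local_store if matches(q, pattern)}
    remote = {q for q in service_records if matches(q, pattern)}
    return consolidate(local, remote)
```

Because quads are hashable, the set union in `consolidate` already collapses redundant quads; that is exactly the property to verify for the real `rdf.dataset()` before switching the order of operations as described.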
Although you say no, your explanation tells me that you mean yes -- and I might not have explained what I meant properly.
No. Check https://concepts.trr379.de/s/base/unreleased/ for example. You'll find that it contains all the classes you mentioned -- defined in different source schemas -- under a single, common umbrella. I think this is the normal situation. If we had to support different schema versions in one and the same editing environment, we'd also need to check the various schemas for compatibility, enable a user to pick the right schema for a particular class (many could provide it), and explain to the user how to pick, so they don't end up describing two things that need to work together in two different schemas. The only reason I can come up with for there being more than one schema (in the dump backend!) is the preservation of information in a historic configuration (backward compatibility). I cannot imagine a system (other than a migration script) that would need access to the same information in multiple schema variants.
Thanks for pointing this out, my previous understanding did not include this common umbrella schema.
Some background here: https://hub.datalad.org/datalink/tools/issues/13#issue-62
We need to start with a specification of what exactly to support as API endpoints. A useful use case to consider is `shacl-vue`, which allows annotating existing RDF data and creating new RDF data. `shacl-vue` would depend on fetching RDF data from the `dump-things-service` to support its operation. Consider `shacl-vue` being used in a research consortium, where researchers have to use it to add their publications, submit new data annotations, etc. They would want to have maximal and intuitive access to existing data to minimize their effort, think: add a `Person` record once by ORCID, link it as an `Author` in multiple places by just selecting it from a list, and have all previously entered `Person` records across the consortium available in that same list. Or a user might browse a `shacl-vue`-supported catalog of the consortium, browse to a specific `Researcher`, and the page would need to display any/all data related to that entity.

The background issue started suggestions for a list of operations, and more are added here:
- … (`rdf:type`)

Some practical aspects of the functioning of `shacl-vue` are important to consider:

- `shacl-vue` currently fetches the complete set of RDF data and SHACL shapes that it needs to operate on upfront, from served TTL files, using the `fetch-lite` package, which returns a stream of quads
- `shacl-vue` stores RDF data in the browser using an `rdf.dataset()` from the `rdf-ext` package
- the `rdf.dataset()` is queried whenever a list of things needs to be displayed, e.g. when a `Person` has to be selected from a list of `Person`s
- `shacl-vue` does not have knowledge of the LinkML schema (and version) that the SHACL shapes were exported from (it's not part of the export), i.e. it would not be able to supply those parameters when making an API request.

Thoughts for improving `shacl-vue` and other services in light of the above:

- `shacl-vue` could receive several optional inputs, including: … `shacl-vue` use (referenced in the last paragraph of this comment too: Refactor for modularity and reusability shacl-vue#65 (comment))
- … `shacl-vue`: a flag to query both (or either of) the local `rdf.dataset()` and the relevant `dump-things-service` endpoint for records; if both, this implies some sort of consolidation process which requires more thinking
- … `shacl-vue`, we could consider having the `dump-things-service` serve a regularly updated static export in TTL format.
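The static-export idea could be as simple as the service periodically serializing its store into a dump that is then served like any other TTL file, which `shacl-vue` already knows how to consume. A hypothetical sketch with a toy Turtle-like serialization (not an existing `dump-things-service` feature; prefix handling and full Turtle syntax are deliberately omitted):

```python
# Hypothetical sketch of a periodic static export: serialize the service's
# quads (modeled as (s, p, o) tuples with already-prefixed terms) into a
# Turtle-like text dump that can be written to disk and served statically.

def export_static_ttl(quads):
    """Return a deterministic, Turtle-like dump of the given quads.
    Sorting makes repeated exports diff-friendly."""
    return "\n".join(f"{s} {p} {o} ." for s, p, o in sorted(quads))
```

A cron job (or a post-write hook in the service) could then write `export_static_ttl(store)` to a file behind a plain HTTP URL, giving clients the "TTL file indistinguishable from an endpoint" behavior discussed in the comments above.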