Set metadata directly in the LDPC #141
Yes, the problem is: how can the server decide which metadata it should set for all resources?
But that could slow down the server's responses quite dramatically - and the choice of metadata feels arbitrary. The best solution IMHO is to have as metadata just what is usually considered file-system metadata, i.e. data that answers the usual file-system questions. We currently do serve some of this information with a pretty simple ontology that other RWW servers use, but that is not very satisfactory, as you can see if you
One may also want to use some of the vocabulary from the Activity Streams 2.0 Working Draft, which is, I think, an RDF-ized version of Atom.

The other solution would be to instead make use of the QUERY/SEARCH method that I have implemented, though that would require that a SEARCH on a container can return information about its ldp:contains members. If the SEARCH query is not too complex, that should at least allow the client to get what it needs for a particular query. Note that this means the ETag of a container must change if any of its contents change.

Note also that LDP Containers are usually not what other resources of interest point at. They will be pointing at the contents of the LDPRs contained within. So one other answer may be to just create an LDPR and put all the contents in there. For example, if I create my personal profile, I don't think it is a good idea to think of my Profile document - call it …

As you can see, there are a number of ways of getting things done here.
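The ETag requirement above - that editing any contained resource must also invalidate the container's ETag - can be sketched roughly as follows. This is only an illustration, not how any particular server does it; all names and the hashing scheme are invented for the example:

```python
import hashlib

def member_etag(body: bytes) -> str:
    # A simple strong ETag: a hash of the representation (illustrative only).
    return '"%s"' % hashlib.sha256(body).hexdigest()[:16]

def container_etag(member_etags: list) -> str:
    # Derive the container's ETag from its members' ETags, so a change
    # to any contained resource changes the container's ETag too.
    h = hashlib.sha256()
    for e in sorted(member_etags):
        h.update(e.encode())
    return '"%s"' % h.hexdigest()[:16]

todos = {"todo1": b"label: buy milk", "todo2": b"label: write spec"}
before = container_etag([member_etag(b) for b in todos.values()])

todos["todo1"] = b"label: buy oat milk"   # someone edits one member
after = container_etag([member_etag(b) for b in todos.values()])

print(before != after)  # the container's ETag changed as well
```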
Right. Probably a combination of basic metadata and a QUERY for more advanced info would work. My use case is a todo list. As explained in #140, I'm using an LDPC so that we can include todos from other servers. Currently I need to fetch the LDPC, then fetch every single todo just to display its rdfs:label in the HTML.
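The request pattern described here can be sketched with a toy in-memory "web" (a sketch only; the URLs, the dict-based resources, and the `fetch` function are all made up for illustration):

```python
# Toy in-memory web: URL -> parsed RDF-ish data (illustrative, not real LDP).
WEB = {
    "/todos/": {"ldp:contains": ["/todos/1", "/todos/2", "/todos/3"]},
    "/todos/1": {"rdfs:label": "buy milk"},
    "/todos/2": {"rdfs:label": "write spec"},
    "/todos/3": {"rdfs:label": "call Alice"},
}

request_count = 0

def fetch(url):
    global request_count
    request_count += 1
    return WEB[url]

# One request for the container...
container = fetch("/todos/")
# ...then N more, one per member, just to display each rdfs:label.
labels = [fetch(member)["rdfs:label"] for member in container["ldp:contains"]]

print(request_count)  # 4 requests for a 3-item list
```

With inlined labels (or a container-level query), the same list could be rendered from a single response.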
I suppose you want to have each TODO in its own file so that you can give each one its own ACLs.
Yes, actually that's what I first did. But I figured it would make more sense to have each todo be an LDPR, for ACL reasons, but also for the modularity reasons explained in #140. In general, I wonder if there is a best practice on how to decide at which level to put an LDPC versus an LDPR.
These are all good questions and we should somehow capture this on the wiki.
That makes sense. Thinking in terms of documents and user interest in the data is probably the way to go. In this example it is probably one LDPR per todo. So getting the list with all labels in a single request makes sense. Is it possible to get all the content of an LDPC as one graph in one request?
I think that will require adding the SEARCH feature that goes beyond the LDPC. But then I don't think a SPARQL 1.1 CONSTRUCT query can do this, by the way: the CONSTRUCT query does not allow subgraphs in the result - which is pretty weird given that this has been available in N3 since the beginning. What would be needed would be something like this:
But that does not work. Sadly, the time to bring these issues up was before the end of the WG. Still, they can be brought up for the next LDP WG, for the Social Web one, and others.
One could argue that if one requested the LDPC in a format that allowed for subgraphs, that would be equivalent to allowing the LDPC to return the content of its ldp:contains members. But JSON-LD is a quad format, it is specified for LDP, and there is no requirement that all the contents should also be downloaded. In any case, requiring that behavior for all LDPCs would be pretty bad, as it would probably make it impossible for a server to respond for large containers - unless paging is included...
Ok, so a …
While at Pellucid, we had to tackle the exact same problem. Our solution was very simple: the data for the TODOs would be inlined along with the data related to the container. No named graphs. No reification. This works very well if you assume that everybody does Linked Data, which is a bit more than just passing some RDF around. The assumption is that the authoritative source for the data attached to an RDF resource is the corresponding document for that resource.
That works, @betehess, if you are in a closed world, but not in an open world, where others may add data to the database, as that could otherwise lead to contradictions in the data. You cannot disquote data automatically. Merges between graphs can only occur in RDF if both graphs are thought to be true, and one cannot assume that for all data.
In that case I think I have misunderstood the initial question.
Mhh, I should not have put what I said in terms of "closed" and "open" world, as that is a different issue. Rather, as the conclusion of my argument pointed out, the ability to inline the data without quotation (i.e. without named graphs) in an automatic way requires the server to know that all the contents are in principle coherent and true. This requires a high degree of oversight of the data, which might well be the case in a company like Pellucid. But on the web, if I want an LDPC to be writeable by different agents, or if I don't want to do a full coherence check of all the data in an LDPC (which would require specialised knowledge of each vocabulary), then I need to be more careful about when I merge data. I described these issues in detail on the LDP mailing list when they came up.

In a generic way, a client (or a server) cannot tell, from a container being an LDP Basic Container, what types of constraints need to be put on the data for it to be coherent. So as far as BasicContainers go, it is not really possible to automate the process of inlining without making assumptions. Those assumptions may hold, but they have to be passed out of band, or one would need another ontology and some way of giving that info to the server, and that would require a more complex server, as well as agreement on how such containers would work.

To keep things simple and flexible, I think it is easier to allow a query on a container to run over the named graphs it contains, and then for explicit GETs to be made for each resource by the client. But that does require adding that feature to the server.
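The idea of querying over the member graphs while keeping each member's triples quoted in its own named graph might look roughly like this toy sketch (the quad-store layout, graph names, and `search` function are invented for illustration; a real server would run SPARQL over a quad store):

```python
# Quad store: each contained LDPR keeps its triples in its own named graph,
# so member data stays quoted rather than merged into one asserted graph.
QUADS = [
    # (graph, subject, predicate, object)
    ("/todos/1", "/todos/1#it", "rdfs:label", "buy milk"),
    ("/todos/1", "/todos/1#it", "ex:done",    "false"),
    ("/todos/2", "/todos/2#it", "rdfs:label", "write spec"),
    ("/todos/2", "/todos/2#it", "ex:done",    "true"),
]

def search(container_graphs, predicate):
    """Run a simple query over the member graphs only, keeping provenance:
    each result records which named graph the matching triple came from."""
    return [
        (g, s, o)
        for (g, s, p, o) in QUADS
        if g in container_graphs and p == predicate
    ]

results = search({"/todos/1", "/todos/2"}, "rdfs:label")
for graph, subject, label in results:
    print(graph, label)
```

Because the graph name travels with each result, the client can still decide per source whether to trust (disquote) the data, rather than the server merging everything up front.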
That is what I initially thought. I still have some trouble reconnecting your comments with the problem outlined by Sylvain. The thing is: the burden of "assessing the truth" is always on the client. Using named graphs, triple/graph reification techniques, provenance ontologies, etc. only helps you encode assessments about the truth; it does not help you make decisions about the truth. You cannot change that. At the end of the day, the only good way to know the truth is to interact with the relevant resources on the Web. That is something outside the RDF model/semantics, and that's where Linked Data begins. In that world, I do not see what named graphs really bring to the party, and inlining linked data is just plain enough. That's what we were doing at my previous job, and it really was OK. Anyway, that is my take on it, and I don't think I can provide better feedback than that.
Indeed. That was my argument at Scala eXchange 2014 in London.
Named graphs, or just N3 graphs, allow you to quote - in other words, to work with contexts (which can also be thought of as a form of modal logic). It is really just the same as using a literal, except that it is easy to parse. It is well worth looking at Guha's thesis, Contexts: a Formalization and some Applications. It's the difference between saying "Lois Lane believes Clark Kent is a boring journalist" and "Lois Lane believes Superman is not a journalist". Even though Clark Kent and Superman are names that co-refer, and in logic you can substitute co-referential terms salva veritate, you cannot do so in belief contexts. And this is what one has to take care of in these areas.
It would save a lot of web requests if we could include in the LDPC some metadata about the LDPRs it contains. The basic example would be an rdfs:label, which would make it possible to list the elements to the user without fetching all of them first.
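A server implementing this proposal might inline one label triple per member into the container representation, so a single GET suffices. The following is only a sketch of the idea - the data, URLs, and helper function are illustrative, not part of any spec:

```python
# Per-member documents held by the server (illustrative data).
MEMBERS = {
    "/todos/1": {"rdfs:label": "buy milk"},
    "/todos/2": {"rdfs:label": "write spec"},
}

def container_representation(container_url, members):
    """Build the container's triples plus one inlined rdfs:label per member,
    so a client can render the whole list from a single response."""
    triples = [(container_url, "ldp:contains", m) for m in members]
    for m, doc in members.items():
        if "rdfs:label" in doc:
            triples.append((m, "rdfs:label", doc["rdfs:label"]))
    return triples

rep = container_representation("/todos/", MEMBERS)
labels = {s: o for (s, p, o) in rep if p == "rdfs:label"}
print(labels)
```

Note this is exactly the automatic inlining discussed above: it works when the server can assume the member data is coherent, which is the crux of the debate in this thread.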