Representation of capabilities #15
Comments
That's right. The checking for capabilities needs to be configurable. One way this could look:

```ts
type CapabilityCheck = (parent: Capability, child: Capability) => Promise<boolean>

export async function isValid(ucan: Ucan, attenuationChecks?: Record<string, CapabilityCheck>): Promise<boolean>

// usage:
const wnfsCapability: CapabilityCheck = async (parent, child) => {
  if (typeof parent.wnfs !== "string" || typeof child.wnfs !== "string") return false
  if (typeof parent.cap !== "string" || typeof child.cap !== "string") return false
  if (parent.cap !== child.cap) return false // simplified for now
  // the child path must be contained in the parent directory
  if (child.wnfs.startsWith(parent.wnfs) && parent.wnfs.endsWith("/")) return true
  return false
}

await isValid(child, { "wnfs": wnfsCapability })
```

The reason this isn't supported by this library today is that we're using something else in production right now. We're in the process of both upgrading what we run in production to the new version of the spec (which is what this library is based on) and making the necessary upgrades to this library to support our production use cases. |
I was going to suggest exactly that, but have not got around to doing it yet. However, I would suggest an alternative API instead, more along the lines of:

```ts
/**
 * Validates that a UCAN does not escalate capabilities. Returns a set of
 * errors for capabilities that escalate privileges.
 */
declare function validate <C>(ucan: Ucan<C>, attenuationChecks?: Record<string, CapabilityCheck<C>>): Promise<Result<EscalationError<Capability>[], void>>

type CapabilityCheck<Capability> =
  (parent: Capability, child: Capability) => Promise<Result<EscalationError<Capability>, void>>

interface EscalationError<Capability> extends Error {
  available: Capability
  claimed: Capability
}

type Result<X, T> =
  | { ok: true, value: T }
  | { ok: false, error: X }
``` |
Minor nitpick: This forces a single type of capability per UCAN (the `C` type parameter). I agree that we need to provide more information than a boolean, but I disagree about providing that much information on errors. It's not like that would be useful for error messages shown to users of apps with broken UCAN chains. It's more useful for developers, but I'd rather keep the API simple and point developers to debugging tools like https://ucancheck.fission.app/. |
Not sure if helpful, but an idea: We could also use

```ts
type CapResult<X, T> =
  | { ok: true, value: T, unknowns: string[] }
  | { ok: false, error: X }
```

Of course you won't be able to consume the unknowns semantically, but it leaves it open to extension while acknowledging the limitations of the type. |
Using validation tools like https://ucancheck.fission.app/ is undoubtedly helpful in development, though! I think everyone on this thread is a fan of what types buy us, though they're certainly not a silver bullet when you don't know the complete type ahead of time |
I also think that the fact that there is no general way to compare capabilities might be a design flaw. While it makes the system open-ended, it also makes it a bit confusing; I'm still not sure what to put in some of the fields. I think it would be better if capabilities had a more constrained structure. |
Well I was assuming C would be a type union, so yes it would be a single type but you could differentiate |
So when we get a request on the service endpoint with escalated capabilities we would like to respond with an error explaining what the problem is, rather than just responding with "invalid token". I think this is especially useful for user-defined capabilities, where tools like https://ucancheck.fission.app/ cannot really compare unknown capabilities or tell you which one escalates and how. A custom checker would have domain knowledge and could therefore provide a more meaningful explanation. Additionally, you could always wrap that into a simpler API:

```ts
const isValid = async <C> (ucan: UCAN<C>, check: (parent: C, child: C) => boolean) => {
  const result = await validate(ucan, async (parent, child) => {
    if (check(parent, child)) {
      return { ok: true, value: undefined }
    } else {
      return { ok: false, error: new EscalationError(parent, child) }
    }
  })
  return result.ok
}
``` |
This may be a separate issue, but is related enough that I think it makes sense to capture here. While I see value in capabilities being open-ended, I do think that makes them kind of confusing and possibly counterproductive. More specifically, from the definition:

```
{
  $TYPE: $IDENTIFIER,
  "cap": $CAPABILITY
}
```

it is anything but obvious what one should put in each field. I suspect that putting a little more constraint and structure on the capability definition may make them less confusing and possibly provide a way to:
That is to suggest, I think it might be a good idea if a capability represented a conceptual triple of operation, subject, and restriction:

```ts
type Capability<
  OP extends string = string,
  Subject extends string = string,
  Restriction extends Constraint = Constraint
> = {
  [key in `${OP}:${Subject}`]: Restriction
}

type Constraint =
  | undefined
  | Scope
  | Limit
  | ConstraintGroup

type ConstraintGroup = { [key: string]: string | number }
type Limit = number
type Scope = string

const checkConstraint = (parent: Constraint, child: Constraint) => {
  switch (typeof parent) {
    // If the child scope is contained in the parent scope, the constraint holds
    case 'string':
      return checkScope(parent, child)
    // If the parent limit is greater than the child's, the constraint holds
    case 'number':
      return checkLimit(parent, child)
    // If the parent has no constraint it's unlimited, so the constraint holds
    case 'undefined':
      return holds()
    default:
      return typeof child === 'undefined'
        ? exceeds(parent, child)
        : typeof child !== 'object'
        ? incomparable(parent, child)
        : checkGroup(parent, child)
  }
}

const checkScope = (parent: Scope, child: Constraint, id?: string) =>
  child === undefined
    ? exceeds(parent, '', id)
    : typeof child !== 'string'
    ? incomparable(parent, child, id)
    : child.startsWith(parent)
    ? holds()
    : exceeds(parent, child, id)

const checkLimit = (parent: Limit, child: Constraint, id?: string) =>
  child === undefined
    ? exceeds(parent, Infinity, id)
    : typeof child !== 'number'
    ? incomparable(parent, child, id)
    : child > parent
    ? exceeds(parent, child, id)
    : holds()

const checkGroup = (parent: ConstraintGroup, child: ConstraintGroup) => {
  let violations: EscalationError<Constraint>[] = []
  for (const [id, value] of Object.entries(child)) {
    const base = parent[id]
    switch (typeof base) {
      // If the parent had no such restriction, the constraint holds because
      // the child is more restricted
      case 'undefined':
        break
      // If the child limit is greater than the parent's, it is escalating
      case 'number': {
        const limit = checkLimit(base, value, id)
        violations = limit.ok ? violations : [...violations, limit.error]
        break
      }
      case 'string': {
        const scope = checkScope(base, value, id)
        violations = scope.ok ? violations : [...violations, scope.error]
        break
      }
    }
  }
  return violations.length === 0 ? holds() : escalates(parent, child, violations)
}

declare function holds(): { ok: true }
declare function incomparable <C1, C2> (parent: C1, child: C2, id?: string): { ok: false, error: IncomparableError<C1, C2> }
declare function exceeds <C extends Constraint>(parent: C, child: C, id?: string): { ok: false, error: EscalationError<C> }
declare function escalates <C>(parent: C, child: C, violations: EscalationError<Constraint>[]): { ok: false, error: EscalationError<C> }

interface IncomparableError<C1, C2> extends EscalationError<C1 | C2> {
  parent: C1
  child: C2
}

interface EscalationError<C> extends Error {
  parent: C
  child: C
}
``` |
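The scope/limit semantics sketched above can be distilled into a runnable, boolean-valued form; this is a simplification of mine, not the proposed API:

```typescript
// Minimal runnable sketch of the scope/limit semantics above.
// `undefined` means "no constraint" (unlimited) on the parent side.
type Constraint = undefined | string | number

const constraintHolds = (parent: Constraint, child: Constraint): boolean => {
  if (parent === undefined) return true           // parent is unlimited
  if (typeof parent === "string")                 // scope: child must be inside parent
    return typeof child === "string" && child.startsWith(parent)
  // limit: child must claim no more than the parent allows
  return typeof child === "number" && child <= parent
}
```

The key property is that a child may only narrow what the parent grants, never widen it.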
Thinking more about this, I think it would be better to represent UCAN capabilities not as an array of capability objects but as a single map:

```ts
type Capabilities<ID extends Capability = Capability> = {
  [key in ID]?: Constraint
}

type Capability<OP extends string = string, Subject extends string = string> = `${OP}:${Subject}`

type Constraint =
  | undefined
  | Scope
  | Limit
  | ConstraintGroup

type ConstraintGroup = { [key: string]: string | number }
type Limit = number
type Scope = string

const check = <ID extends Capability> (parent: Capabilities<ID>, child: Capabilities<ID>) => {
  const violations = []
  for (const [id, constraint] of iterateCapabilities(child)) {
    const result = checkConstraint(parent[id], constraint)
    if (!result.ok) {
      violations.push(result.error)
    }
  }
  return violations.length === 0 ? holds() : escalates(parent, child, violations)
}

const iterateCapabilities = <ID extends Capability>(capabilities: Capabilities<ID>) =>
  Object.entries(capabilities) as Array<[ID, Constraint]>
``` |
There are use cases for overlapping capabilities. For example, I may want to be able to do atomic updates to multiple file systems, in which case I need several WNFS capabilities. Inside a single file system, I may only have write access to two directories, but not a common parent. (There are of course also use cases for disparate capabilities) |
I think that aligns with the proposed encoding, e.g. here you have a capability to write into two directories:

```json
{
  "write:wnfs": "/gozala/notes/",
  "write:wnfs": "/gozala/archive/"
}
```

I was referring to a different kind of overlap, e.g. the example below, where both capabilities are the same but different restrictions are imposed. Which complicates things, as it would require extra logic for merging those.

```js
[
  {
    "w3": { maxSpace: 7000 },
    "cap": "upload"
  },
  {
    "w3": { maxSpace: 6000, maxFiles: 50 },
    "cap": "upload"
  }
]
```

In the above example the unified capability could be |
Oh, I am a fool, my first example has the same key twice.
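For what it's worth, JSON parsing makes that particular mistake self-correcting: per the ECMAScript specification, `JSON.parse` keeps only the last occurrence of a duplicated key, so the object encoding can never actually carry the same capability key twice:

```typescript
// JSON.parse silently deduplicates repeated keys (last one wins),
// so a duplicated capability key collapses to a single entry.
const parsed = JSON.parse(
  '{ "write:wnfs": "/gozala/notes/", "write:wnfs": "/gozala/archive/" }'
)
// parsed["write:wnfs"] === "/gozala/archive/"
```

That silence is a double-edged sword: the earlier grant is dropped without any error.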
I think my general sentiment is that it would be nice if capabilities were structured in a way that removed any surface for ambiguity. I am conceptualizing a capability as an (operation, subject) pair with optional constraints:

```ts
type Capabilities<ID extends Capability = Capability> = {
  [key in ID]?: Constraint
}

type Capability<OP extends string = string, Subject extends string = string> = `${OP}:${Subject}`
```

I made a mistake earlier, but with the above definition the following would be represented as illustrated below:

```json
{
  "write:/wnfs/gozala/notes/": {},
  "write:/wnfs/gozala/archive/": {}
}
```

Keys here end up as:

```json
{
  "PATCH wnfs://gozala.io/notes/": {},
  "PATCH wnfs://gozala.io/archive/": {}
}
```

I also like how intuitive it becomes to integrate this into an existing HTTP API for auth; for the web3.storage API we could directly translate our HTTP endpoints to capabilities:

```js
{
  'POST web3.storage:/car': {
    requestLimit: 32 * GiB,
    sizeLimit: 1 * TiB
  },
  'GET web3.storage:/car': {},  // can request any car
  'HEAD web3.storage:/car': {}, // can get stats
  [`GET web3.storage:/uploads/${user}`]: {} // can check user uploads
}
```

A capability derived from the above:

```js
{
  'POST web3.storage:/car': { // inherits requestLimit and restricts sizeLimit
    sizeLimit: 60 * GiB
  },
  'GET web3.storage:/car': {},  // can request any car
  'HEAD web3.storage:/car': {}, // can get stats
  // can only list the subset of uploads created by this token
  [`GET web3.storage:/uploads/${user}/${audience}`]: {}
}
``` |
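One practical consequence of packing operation and subject into the key is that verifiers must split it back apart. A sketch of such a parser for the `"<OP> <subject>"` shape used in the examples above (the parser itself is my assumption, not part of the proposal):

```typescript
// Sketch: splitting an "OP subject" capability key back into its parts.
const parseCapabilityKey = (key: string): { op: string, subject: string } | null => {
  const space = key.indexOf(" ")
  if (space < 0) return null // malformed key: no operation/subject separator
  return { op: key.slice(0, space), subject: key.slice(space + 1) }
}
```

A verifier could then compare `op` exactly and apply prefix semantics to `subject`.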
It is however worth highlighting that such a structure would only support encoding constraints where all restrictions must be met. It will not allow encoding a structure in which one of several constraints must be met. |
First of all: I love to see you so engaged @Gozala !

Attenuation Format Changes

I generally like this line of thinking; however, that'd be more of a spec discussion, and as such its benefits should be weighed against breaking existing implementations. This library (which supports the most recent version of the spec) isn't used in production, but as far as I know, Qri are using this version of the UCAN spec in production (https://github.com/qri-io/ucan), so maybe we shouldn't break it until we've thoroughly evaluated this version in production with multiple entities?

More Precise Capability Typing

Quoting some code:

```ts
type Capabilities<ID extends Capability = Capability> = {
  [key in ID]?: Constraint
}

type Capability<OP extends string = string, Subject extends string = string> = `${OP}:${Subject}`

type Constraint =
  | undefined
  | Scope
  | Limit
  | ConstraintGroup

type ConstraintGroup = { [key: string]: string | number }
type Limit = number
type Scope = string
```

Those are some cool types 😃

Precise Capability Error Messages

Who will read this error message? A user or a developer?
That's a good point :) I don't think it would hurt to have a function like |
Love the thinking 👍 I'll never walk away from a discussion on type safety 🤓🤓🤓 I totally get why you're raising an eyebrow at the array. I had the same gut reaction at first. An array of tuples and a map have subtly different properties — but that may not actually be a problem here. What we're emulating is a (max) set, and all of these encodings achieve that.
Agreed — or rather, we want to express the appropriate level of overlap. I don't know if an array improves the accuracy of the information that we're presenting. I'm certainly not against updating the spec to have a different encoding, but am unsure if we gain much here by switching to packing more information into keys encoded as flat strings. For example, in the reused-key scenario that you mentioned earlier, what should one do on receiving those keys as JSON? The behaviour is equally defined as it is with an array. The top level of the UCAN is a constructive union of all of its capabilities: if they overlap, then great! Even if a user gives the same path multiple times, we take the highest capability. For example:

```js
[
  {
    "wnfs": { // <- Namespace, AKA which semantics to use
      "path": "/wnfs/gozala/notes/",
      "cap": "write",
      "maxFiles": 3, // Extended as an example, we don't actually do this
      "ifCosignedBy": "did:someoneElse" // Or this
    }
  },
  { // Looser constraint, more power, supersedes the above
    "wnfs": {
      "path": "/wnfs/gozala/notes",
      "cap": "write"
    }
  },
  {
    "w3s": {
      "cap": "GET",
      "path": "/car"
    }
  }
]
```
Oh yeah, the mapping of the UCAN resource to the REST resource is pretty nice ✨ I don't love that we now need to separately parse the keys. Why not encode each as a straightforward map (as in the earlier example)? In the longer string version, it feels like we're trying to force a lot of information together, which is a bit "stringly typed". In terms of increasing type safety, I think there are some fundamental information-theoretic challenges if we want to keep this user-extensible. Static types need AOT information, which we won't always have. This puts us in the land of term-level validation at least some of the time. A smart constructor (i.e. a forced runtime invariant check) may be a better solution, where the types force us to validate the capability. To make the further case for smart constructors, they can also be made pluggable in the UCAN library's parser. This includes whatever predicate logic you need for escalation semantics, and nice error messages, with a check that's enforced by the library (assuming that they're using the type checker). |
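The smart-constructor idea can be sketched in TypeScript with a branded type; everything here (the `wnfs` shape, the brand, the function name) is an assumption for illustration, not this library's API:

```typescript
// Sketch of a smart constructor: the only way to obtain a
// `ValidatedCapability` is through `makeCapability`, which runs the
// runtime invariant check.
type WnfsCapability = { wnfs: string, cap: "read" | "write" }

// The brand prevents constructing the validated type by hand.
type ValidatedCapability = WnfsCapability & { readonly __validated: true }

const makeCapability = (raw: unknown): ValidatedCapability | null => {
  if (typeof raw !== "object" || raw === null) return null
  const cap = raw as Record<string, unknown>
  if (typeof cap.wnfs !== "string" || !cap.wnfs.startsWith("/")) return null
  if (cap.cap !== "read" && cap.cap !== "write") return null
  return cap as unknown as ValidatedCapability
}
```

Downstream code that only accepts `ValidatedCapability` is then statically guaranteed to receive checked values, which is the "types force us to validate" property described above.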
I love the idea of packaging up capabilities into
|
Fair enough! What would be a good place to continue this discussion? |
I've typed those in order to be precise in communicating the structure I'm thinking of. Whether those actual definitions should be used by an individual library is a separate discussion IMO. I don't think the token parser should take on the role of type-checking input; it can continue parsing it as JSON. It's just that the library checking capabilities can recognize a certain structure and discard all the remaining fields / values. |
Users often create bug reports, and the better the error messages, the better the reports, allowing devs to address them effectively. Generating a new token will not help if the bug was caused by the code that generated the token; chances are the new token will have the same issue. |
Oh, I didn't mean to send you away or something like that, sorry 😅 I was mostly trying to argue "that would be a bigger change ('spec discussion') and as such needs to have high payoff". |
We can summon Brendan to this thread if needed 😛 I agree that a common place for everyone to collab on this stuff would be great. We're actually about to break a bunch of this stuff out into its own UCAN org and whatnot, but here is good now! |
I'm not even against smaller changes! Let's keep the open conversation going and keep exploring 💯 @Gozala I'm really appreciating the thought & conversation so far! |
@expede I do share the dissatisfaction with packing two fields into a single key, and it may indeed be a bad idea. Maybe it is better to let them be separate fields but prescribe how multiple capabilities to perform the same OP on the same resource can be unified. What I was hoping to achieve by packing them together is to make that unification the token issuer's problem, so that the verifier does not need custom code to do this. In other words, an arbitrary verifier could check that claims are met without domain knowledge. But again, on the flip side, you can't then express (at least in the currently proposed encoding) "either meet this criterion or that one". |
I also would like to call out that I find this structure to be really confusing
I think simply making it the following would be an improvement:

```ts
interface Capability<T extends string, I extends string, C extends string> {
  type: T
  id: I
  cap: C
}
```

For what it's worth, I ended up ditching that structure:

```ts
const token = await UCAN.build({
  audience: did,
  issuer: service.keypair,
  lifetimeInSeconds: 24 * 60 * 60, // a day
  capabilities: [
    {
      cap: "PUT",
      // @ts-ignore - type is restricted to strings
      storageLimit: 32 * 1.1e12, // 32 TiB (1.1e12 ≈ 1 TiB)
    },
    {
      // can only list uploads for the given user
      cap: "LIST",
      scope: `/upload/${did}/`,
    },
  ],
})
```

I might be missing something, but I find it confusing. |
On second thought, this may not be necessary. The verifier could compare each claimed capability to each capability in the parent, and if any is met the claim is valid (basically giving "either satisfy constraint A or B").

However, keys also made scopes explicit, which could be what I'm dissatisfied with in the current spec. Specifically, given these capabilities:

```js
[
  {
    "wnfs": "gozala/",
    "cap": "OVERWRITE",
    "role": "wizard"
  },
  {
    "wnfs": "gozala/photos",
    "cap": "OVERWRITE",
    "role": "painter"
  }
]
```

it is not clear whether the "wizard" role also applies to "gozala/photos". |
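A related ambiguity worth making concrete: raw string-prefix checks on scopes like these also conflate sibling paths that merely share a prefix. A sketch (both helper names are mine):

```typescript
// A naive prefix check wrongly accepts siblings that share a prefix.
const naiveContains = (parent: string, child: string): boolean =>
  child.startsWith(parent)

// Normalizing to slash-terminated segments (an assumed convention)
// removes the ambiguity: only true path descendants are contained.
const segmentContains = (parent: string, child: string): boolean => {
  const base = parent.endsWith("/") ? parent : parent + "/"
  return child === parent || (child + "/").startsWith(base)
}
```

Under `naiveContains`, "gozala/photos" contains "gozala/photos-backup"; under `segmentContains`, it does not.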
Yeah, I agree with @Gozala. It seems clearer to have constant keys, just like everywhere else in the UCAN. I'd propose this:

```ts
interface Capability {
  type: string
  cap: string
}

// Examples
const att = {
  type: "wnfs",
  cap: "SOFT_DELETE",
  path: "/public/Apps" // we're allowed to add any other fields besides `type` and `cap`
}
```

But then the question becomes "why require the `cap` field at all?" |
The reason that this type feels awkward is that we're trying to capture heterogeneous types as a monolithic type. They're not the same shape, since it's open to extension. We can use subtyping at the concrete callsite, but not outside of that. I think that @Gozala is on the right track for the TS implementation with an interface like...
...where we treat that as a failable parser. For the direct issue at hand: I'm certainly not against moving to common keys, though maybe not these exact ones. |
Well this thread has gotten way off the initial issue 😅 It's all good, we need to have these discussions anyways! @matheus23, @bgins, and I chatted about this a bit. We have a few options to ponder. At its heart, the data that we're trying to represent is:

1. A namespace (which semantics to use)
2. A target resource
3. A potency (what you can do with the resource)

With that in mind, the question becomes how we want to typecheck and serialize that. How to parse these is an open discussion in another issue, but I think making that as easy as possible in JS is the right move (I'll comment over there). Which fields are required is a bigger question as we see more use cases:

Structure

Unspecified Structure

Let capabilities be any object, and leave the format checking to the validator. The downside here is that there's zero namespacing, but you can pass in an arbitrary sum type to represent the capabilities you have.

Namespaced Only

Only require a single namespace key, and everything else is free form. This has the advantage of avoiding overlapping keys being mistaken for the wrong semantics. It continues to have some challenges.

Require All

Have strictly required keys for 1-3 in the top list.

Decision Points

Target resource should be an identifier of some kind. We can break these up.

Require/Structure the Potency Field?

The potency is important for a lot of use cases. Having this forced to be set makes extensibility clearer, but is not strictly speaking required. Is a simple string enum enough? I can imagine cases where it's more complex than a string (e.g. Unix-style RWX, though that's often expressed as a string in practice). Maybe requiring the key but having the value be freeform? If it's freeform, how is it different from the other open fields? |
Just to keep stuff rolling, how do we feel about something like this:

```js
[
  {
    ns: "fission/wnfs", // Namespace
    rs: "boris.fission.codes/vacation/photos/", // Resource
    pt: "APPEND" // Potency
  },
  {
    ns: "ucan/storage",
    rs: "web3.storage/expede",
    pt: {
      max_gb: 100,
      write: true,
      delete: true
    }
  },
  {
    ns: "ucan/storage",
    rs: "web3.storage/boris",
    pt: { delete: true }
  },
  {
    ns: "ucan/rest",
    rs: "example.com/api/v2/",
    pt: ["POST", "PATCH"]
  }
]
```

Here we have well-known top-level keys, and one unstructured potency field. |
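A validator could dispatch on the proposed `ns` field to pick the right semantics. A sketch under that assumption (the registry shape and checker logic are mine):

```typescript
// Sketch: dispatching capability checks by namespace, assuming the
// { ns, rs, pt } shape proposed above.
type Cap = { ns: string, rs: string, pt: unknown }
type Checker = (parent: Cap, child: Cap) => boolean

// Registered per-namespace semantics; unknown namespaces are rejected.
const checkers: Record<string, Checker> = {
  "fission/wnfs": (parent, child) =>
    child.rs.startsWith(parent.rs) && parent.pt === child.pt
}

const delegates = (parent: Cap, child: Cap): boolean => {
  if (parent.ns !== child.ns) return false
  const checker = checkers[parent.ns]
  return checker ? checker(parent, child) : false
}
```

Well-known top-level keys make this dispatch trivial; everything namespace-specific stays inside the checker.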
One case that I can think of where you might want to leave the resource open-ended is giving access to all resources in a namespace. But in this case a wildcard could cover it. We could ask a similar question for potency: is there ever a case where a potency does not need to be specified? Would an unspecified potency mean "you can do all the things with this resource"? Maybe there are cases where potency doesn't matter? I'm not sure about potency, but keeping it unstructured makes sense because the structure will depend on the resource. |
Some feedback / notes and a proposed structure.

Whether all 3 fields should be required or not?

From my limited experience I would suggest making "ns" optional, because:

Should the potency field be a string variant or an arbitrary struct?

I would personally prefer requiring a string & suggest all other details be pushed into different fields, because:

Name shedding

Ultimately this does not matter, yet clear naming can aid understanding.

Variant: UCAN do to

I like to think of capabilities as things you can do to specific resources. Hence it makes sense to me to rename the fields accordingly:

```js
[
  {
    to: "wnfs://boris.fission.codes/vacation/photos/", // Resource
    do: "APPEND" // Potency
  },
  {
    to: "ucan+storage://web3.storage/expede",
    do: "PATCH", // write + delete although I'd prefer encoding them separately
    max_gb: 100
  },
  {
    to: "ucan+storage://web3.storage/boris",
    do: "DELETE"
  },
  {
    to: "ucan+rest://example.com/api/v2/",
    do: ["PATCH", "POST"] // ok fine we can support arrays too, but not a fan
  }
]
```

P.S.: I like the more generic naming.

Variant 2

A less funky option would be:
|
💯 I would go as far as suggesting that it's better to not even have a predefined "catch all", so that users have to define one specific to a domain. Leaving it out is just too ambiguous. |
If you frame potency as the things you can do, I would argue that it always matters. If it feels like it does not, it's probably because either access is to |
I think this provides a better balance between structure and open extensibility. |
@Gozala thanks for the quick reply! 🙌 Below is more granular detail, but I think that this has mostly come full circle with some refinements.

Required Potency ✅

Awesome, this is also my gut feel. Back to a required field it is!

No Potency Arrays ✅

Actually same, I was mostly grasping for examples of different types as illustration 😛 These should totally be separate capabilities (like it is today).

Wildcards ✅

We have a concrete use case for this at Fission. Part of what a UCAN gives you is the ability to delegate permissions without moving keys. Without a wildcard, every new resource you make would send you back to the initial device to re-delegate. You'd also be in a very bad position if you lost that key. With a wildcard, you can essentially say "everything, including future resources (optionally scoped in this area)". We use this to great effect in Webnative.

URIs

I think we chatted about this briefly on our call a week or two back. I like that it builds on existing concepts! If we go this direction, and especially with namespacing, we could consider URNs as well.

The two things that are a bit odd: the first is that in the URI style you have no choice but to parse the scheme, which can come in several formats (with or without `//`). We could also add some scheme helpers to the validator DSL, but that's starting to drift away from what's directly in the JSON, which could make it harder for folks to learn. I'm inclined to the split-out version, but as you say, the URI parser is baked in. I don't have a super strong opinion; it's equally easy to adjust for the existing code. @matheus23 any feelings?

Bike Shedding

"There are two hard problems in computer science: naming, caching, and off-by-one errors."

Precisely. It's annoying from a purely technical perspective, but we do want to make these easy to think about. I used to be in the "semantics over syntax" camp, but have come to acknowledge that it really does matter to human factors.

Haha on brand — I love it! 😍 Done.

Really splitting hairs here, but how about with/can:

```js
[
  {
    with: "ucan+account://boris.fission.name",
    can: "ADMINISTER"
  },
  {
    with: "ucan+store://[email protected]",
    can: "PUT",
    max_gb: 100
  },
  {
    with: "ucan+store://[email protected]",
    can: "DELETE"
  }
]
```

(I'm personally less concerned about the stray character or two than with clarity.) |
On the URI/URN concept: it looks like the built-in URL parser handles these variants differently:

```ts
const uri = new URL("ucan:storage:boris.fission.codes")
// Not ideal
uri.pathname == "storage:boris.fission.codes"
uri.protocol == "ucan:"
uri.hostname == ""

const url = new URL("ucan:storage://boris.fission.codes")
// Also not ideal
url.pathname == "storage://boris.fission.codes"
url.protocol == "ucan:"
url.hostname == ""

const betterURL = new URL("ucan+storage://boris.fission.codes")
// Mostly better?
betterURL.pathname == "//boris.fission.codes"
betterURL.protocol == "ucan+storage:"
betterURL.hostname == ""

// Versus with HTTP
const http = new URL("http://boris.fission.name/foo/bar")
http.pathname == "/foo/bar"
http.protocol == "http:"
http.hostname == "boris.fission.name"
``` |
To be clear, I'm not opposing unconstrained delegation; I was just arguing that it could be expressed without a special wildcard value. I do realize however that this has the following limitations:

On the other hand, those could be viewed not as limitations but rather as a more explicit encoding. I could see mistakes in case 2, where access to all resources was not intended. That said, I do not feel really strongly about it; I just wanted to highlight why I'm biased against a special wildcard. |
I was kind of handwavy about URIs and URLs because, in practice, browsers only support the former, yet they do parse URIs as the latter:
Those are valid concerns, especially if you go past URLs and into URI, URN realm. That said P.S.: 👇 this is why I suggested
Please note that I was not making an argument against a namespace/type field; I was just suggesting that it should be optional, as in some cases it may make more sense to encode that in the resource field instead, so that a capability could omit it. |
It does support a few more known protocols, but more importantly it is well specified, so everyone (supposedly) follows it:
I like this! Only thing to consider though is that |
Hey @Gozala 👋 See ts-ucan/tests/capabilitiy/wnfs.test.ts (lines 13 to 39 at 0324e88).
I might actually move the wnfs capabilities (public & private) into the codebase. Please check it out :) EDIT: Maybe it makes sense to take a look at a simpler example first to figure out the new API: ts-ucan/tests/attenuation.test.ts (lines 11 to 46 at 0324e88)
|
Hey @matheus23 @expede, I just wanted to check back to see where we are with regard to refining the representation of capabilities. We have discussed several options, but I am not sure how we get to a consensus. We have decided to implement support for UCANs in our services & would really love to settle on the representation before we start deploying it. I really think this library should either choose to:
|
Hey @Gozala Thanks for the ping!
Amazing 🎉
Yup that makes sense to me. We're doing essentially the same thing in
Yeah, there are two approaches here: validation versus parsing. I'd recommend something that looks like the below, but also agree that we can do this one next.

```ts
function check <Capability> (
  claims: Capability[],
  ucan: Ucan<Capability>,
  verify: (claim: Capability, capabilities: Capability[]) => Promise<boolean>
): Promise<{ from: PK, to: PK, until: UTCTime, goodCaps: Capability[], unknownCaps: string[] } | { errors: Error[] }>
```

@matheus23 and I were talking about adding a bunch of helpers of this kind to this library. We'll put together some API types and loop you in.

Which Version?

If you're implementing the existing version, then the above makes sense verbatim. I've taken a bunch of your feedback on 0.8.

Spec Status

I almost have the updated spec with some of your feedback wrapped up here, though the bit you're interested in I think isn't finished here yet? https://github.com/ucan-wg/spec/pull/10/files (Also FYI https://github.com/ucan-wg/spec/pull/10/files#diff-b335630551682c19a781afebcf4d07bf978fb1f8ac04c6bf87428ed5106870f5R354, let me know if you're comfortable with that in there) |
Re: a Line 58 in e10bdec
This can be used to make sure that a given UCAN actually provides a claimed capability, given some semantics for that capability. Also, this might be related to your wish for a generic capability type. There are some reasons why I've done it this way:
If you're looking for more feedback than only a boolean of whether it actually delegates or not - you'll get more information from parsing the JWT into the |
@expede I might be missing something, but I'm not sure the proposed `verify` is enough. I would have expected something like:

```ts
verify: (claim: Capability, capabilities: Capability[]) => Array<Capability[]> | Promise<Array<Capability[]>>
```

so that the caller can tell which capabilities satisfied the claim. I am also not sure I understand the return type in your signature; specifically, I'm not sure what `PK` is in:

```ts
{ from: PK, to: PK, until: UTCTime, goodCaps: Capability[], unknownCaps: string[] }
```

nor what `unknownCaps` would be used for. |
Exciting! I'll take a look at PR and provide feedback there. |
Oh yeah, you're right 💯 We're doing this lazily so need to know which ones to follow
Public Key, though I suppose it should be
It's sometimes useful to know that there's stuff that you are unable to use. It's not a failure on a For example, if you have a collaborative process that knows how to discharge email-related actions, you
Thank you 🙏 |
@Gozala whoops, I also missed @matheus23's comment above. It sounds like the library may have had the relevant functionality added since we had previously spoken. If it doesn't, we may not be understanding the specific needs that are being missed. Give Philipp's comment a look, and we can also sync live for a lower-latency conversation. |
Actually @matheus23 that function doesn't select which witnesses were used, correct? That may be what he needs. Do we have a function that can enable that style of lazy check? |
@matheus23 I had to read through the
I'm not sure I fully follow this, do you mean parsing it from the JWT instead? In that case I understand it, although to be honest this indirection kind of complicates things. I think things would have been easier if the interface looked like:

```ts
export interface CapabilitySemantics<C extends Capability> {
  /**
   * Returns the capability back, or `null` if the capability is not supported.
   */
  tryParsing(cap: C): C | null

  /**
   * This figures out whether a given `childCap` can be delegated from `parentCap`.
   * There are three possible results with three return types respectively:
   * - `Ok`: The delegation is possible and results in the rights returned.
   * - `UnrelatedError`: The capabilities from `parentCap` and `childCap` are unrelated and can't be compared nor delegated.
   * - `EscalationError`: It's clear that `childCap` is meant to be delegated from `parentCap`, but there's a rights escalation.
   */
  tryDelegating(parentCap: C, childCap: C): Ok | EscalationError | UnrelatedError
}
```
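To illustrate that shape, here is a toy implementation; the capability type and the result encodings are assumptions, not the library's actual types:

```typescript
// Toy implementation of the tryParsing/tryDelegating shape sketched above.
type PathCap = { path: string, cap: "read" | "write" }
type DelegationResult =
  | { kind: "ok" }
  | { kind: "escalation" }
  | { kind: "unrelated" }

const semantics = {
  // Reject anything that is not an absolute path.
  tryParsing(cap: PathCap): PathCap | null {
    return cap.path.startsWith("/") ? cap : null
  },
  tryDelegating(parent: PathCap, child: PathCap): DelegationResult {
    if (!child.path.startsWith(parent.path)) return { kind: "unrelated" }
    if (parent.cap === "read" && child.cap === "write")
      return { kind: "escalation" } // child claims more than parent holds
    return { kind: "ok" }
  }
}
```

The three-way result is what distinguishes "these capabilities have nothing to do with each other" from "this is a genuine rights escalation".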
💯
I do not understand this; could you please try reframing it? |
So is |
To follow up on my last comment, I guess it would make sense to return a representation of the proof chain, so it's possible to introspect / capture how the claimed capability was met. So maybe, instead of a boolean, something like:

```ts
type Claim = <C> (capability: C, ucan: UCAN<C>, verifier: Verifier<C>) => Result<Escalation, Proof>
``` |
Strong agree. We need to improve the language for a bunch of this stuff 💯
Oooh I see, your earlier comments are starting to become a lot clearer! This isn't "officially" a thing yet in 0.7 (though conceptually you could do it), but it has a section in the 0.8 spec ("rights amplification"). We may not need to support it today, but will in the near-to-medium term. Per your earlier type...

```ts
verify: (claim: Capability, capabilities: Capability[]) => Array<Capability[]> | Promise<Array<Capability[]>>
```

...this is why you need to pass it all of the proofs and return the matching proof UCAN(s) and some way of pointing at the relevant capabilities inside them, so that you can recur. I think that you'll want a bit more information on both sides of that function, something like...

```ts
(issuer: DID, cap: Capability, proofs: Ucan[]) => Result<Error, { proofUcan: Ucan, proofCap: Capability }[]>
// Could alternately use an index inside the returned proofUcan.
```

Passing the entire proof UCAN is just a convenient way to pass all of the relevant data to the next recursive check invocation. The idea here being that you'll need both the relevant proof's capability, and access to its (recursive) list of proofs or the issuer DID (if the resource is owned by them). Now, that's very general and takes a lot of information. We can make narrower versions of this function and pass it to a wrapper that will translate it into this more general context. Something like |
The following example illustrates the issue: https://observablehq.com/d/7fb5d13d63f667b9; for convenience I'm including the snippet here as well.