Smart Tokens #385
On the topic of smart tokens: JWTs are today the most prevalent form of smart/structured tokens, and I want to think of them as serialised capabilities. The most prominent argument against JWTs is that they are hard to revoke, and that once you add a revocation mechanism they are no different from the session tokens we have been using all along. But I argue that JWTs are useful regardless of whether you can revoke them. I will point to revocable capabilities in the "Capability Myths Demolished" paper, specifically "The Irrevocability Myth". The basic idea is to hand out a capability to a proxy, and then control that proxy when you want to revoke the underlying capability. The extra level of indirection does cost performance, but no more than the session tokens we have been using, and the paper even notes that we make session tokens faster the same way: by caching indirect references!
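To make the proxy idea concrete, here is a minimal caretaker-style sketch in TypeScript; the `FileCap` interface and `makeCaretaker` helper are purely illustrative and not an existing PK or library API.

```ts
// A capability is just an object whose methods grant access to a resource.
interface FileCap {
  read(): string;
}

// The caretaker wraps a real capability and can be revoked independently.
// After revocation the holder's reference still exists, but it grants nothing.
function makeCaretaker(target: FileCap): { proxy: FileCap; revoke: () => void } {
  let current: FileCap | null = target;
  const proxy: FileCap = {
    read() {
      if (current === null) throw new Error('capability revoked');
      return current.read();
    },
  };
  return { proxy, revoke: () => { current = null; } };
}

// Usage: hand out `proxy`, keep `revoke` for yourself.
const secretFile: FileCap = { read: () => 'top secret contents' };
const { proxy, revoke } = makeCaretaker(secretFile);
console.log(proxy.read()); // works
revoke();
// proxy.read(); // now throws: the extra indirection is the price of revocability
```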
Therefore transitioning to JWTs is not about performance; the performance is either the same as or worse than traditional session tokens. It's about power and flexibility: it expands the capabilities of secure systems to be more flexible and dynamic than monolithic ACLs (whether MAC or DAC). Capabilities are also meant to encode the address of the resource they grant access to. To make this possible, we can think of JWTs like CIDs (content identifiers). The more prevalent deployment of this idea is one-time URLs or secret URLs, like GitHub secret gists. Usability-wise, embedding the address within the JWT is not enough on its own, because nothing knows how to interpret it; but if we take a page from CIDs, all we need to do is add a protocol and make it a URL. Of course, whether a capability URL is useful depends on the agent opening it. If it is a browser, the expectation is that it will load a web page. But since these capabilities/JWTs are intended to be shared among machines, the agents in question go beyond web browsers and include our microservice applications, which have to use JWTs to access third-party services. We are just at the beginning of JWT adoption. Right now people still use JWTs the way they used API keys, but eventually we will realise that JWTs are capabilities, and once they are capabilities we move from treating them like pets to treating them like cattle. We just need the right mental model, libraries/tools, and software solutions that can manipulate JWTs properly, and we will soon have a "Web of Capabilities". In that new world, we could imagine PK as a powerbox.
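As a rough sketch of "JWT as an addressed capability" (the claim names, the unsigned token, and the URL shape are assumptions for illustration only), the token carries the address of the resource it grants access to, and wrapping it in a URL gives agents a familiar way to dereference it:

```ts
// Claims for a capability token: the address of the resource it grants
// access to, plus the permitted actions. Claim names here are illustrative.
const claims = {
  sub: 'service-A',
  res: 'https://api.example.com/reports/42', // the address, like a CID points at content
  perms: ['read'],
  exp: Math.floor(Date.now() / 1000) + 3600,
};

// For illustration only: an unsigned token (alg "none"); a real deployment
// would sign this with a JWT library.
const b64url = (obj: object) =>
  Buffer.from(JSON.stringify(obj)).toString('base64url');
const token = `${b64url({ alg: 'none', typ: 'JWT' })}.${b64url(claims)}.`;

// Embedding the address alone isn't enough; wrapping the token in a URL
// gives agents (browsers, services) something they already know how to open.
const capUrl = `https://api.example.com/reports/42?cap=${token}`;
console.log(capUrl);
```

In practice the token would of course be signed and verified; the point is only that address and permission travel together.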
There's a connection between OOP, object-capability languages (https://en.wikipedia.org/wiki/Object-capability_model), and the adoption of JWTs as capabilities in this new web3 world. When thinking about OOP, forget the Java/C++ class-inheritance style and go back to Smalltalk OOP, where it's all about objects, late binding, and messages. The idea shared between the object-capability community and the functional programming/type theory community is "correct by construction": OCL people want programs that enforce POLA not from the outside-in, by putting untrusted programs in faraday cages (containerisation, virtualisation, isolated-vms, WASM, etc.), but from the inside-out, where POLA is embedded from the lowest-level constructs up to the highest-level ones. With this in mind, treating JWTs as capabilities means that JWT scopes or permissions or grants should be structured to include both the address and the permission, just as an OOP expression bundles the designation of an object with the authority to invoke it. I think fundamentally PK can provide a new programming model, not a new programming language: by having a local PK node available (side-car style), we can provide the ability to manipulate JWT capabilities like first-class concepts in programming languages (we might need to provide SDKs like Pulumi does, though). They are still decentralised smart tokens; like smart contracts, they are wielded not against the program's own resources but against resources across the web.
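A sketch of what manipulating JWT capabilities as first-class values could look like; the `CapClaims` shape and `deriveCap` helper are hypothetical, not a PK SDK, and a real system would re-sign or chain the derived token rather than just copy claims:

```ts
// A structured scope that bundles designation (address) and authority (actions),
// the way an object reference in ocap languages bundles "which object" with
// "what you may invoke on it". The shape is hypothetical.
interface CapClaims {
  res: string;        // address of the resource
  actions: string[];  // permitted operations on it
  exp: number;        // expiry (seconds since epoch)
}

// Deriving an attenuated capability: same resource, strictly fewer actions,
// never a later expiry. In a real system the derivative would be re-signed
// or chained (macaroon / ZCAP style); here we only model the claims.
function deriveCap(parent: CapClaims, actions: string[], ttlSec: number): CapClaims {
  const allowed = actions.filter((a) => parent.actions.includes(a));
  return {
    res: parent.res,
    actions: allowed,
    exp: Math.min(parent.exp, Math.floor(Date.now() / 1000) + ttlSec),
  };
}

const root: CapClaims = {
  res: 'vault://finance/reports',
  actions: ['read', 'write', 'share'],
  exp: Math.floor(Date.now() / 1000) + 86400,
};
// Hand a read-only, short-lived derivative to a third-party service.
const readOnly = deriveCap(root, ['read'], 600);
console.log(readOnly);
```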
Another issue involving POLA: software supply chain security. POLA is violated when we download software, run remote code (trusted but unverified), and then give it ambient authority such as environment variables injected into CI/CD pipelines, where those pipelines in turn download software from the internet to execute. Namespacing isn't sufficient here, because the parent process environment is inherited by child processes. If tokens are capabilities, and these tokens are inherited by child processes transitively, then that's not POLA, unless parent processes tightly control what child processes inherit. This is a last-mile delivery-mechanism problem, but it can be helped in two ways: making capabilities only useful for the designated target, and changing the delivery mechanism from env variables (which are inherited automatically and thus insecure) to explicit parameters (in particular parameter files), or inverting the control so that target processes ask for capabilities themselves (though that runs into the secret-zero problem). See examples:
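As a hedged sketch of the parameter-file approach (the paths, file names and token are invented), compare inheriting the whole environment with explicitly passing a token file and a stripped-down child environment:

```ts
import { spawn } from 'node:child_process';
import { writeFileSync } from 'node:fs';

// Ambient authority: by default a child inherits the whole parent environment,
// including tokens it never asked for.
// spawn('some-build-step', { env: process.env });

// Explicit delivery: write the capability to a parameter file and pass its
// path as an argument, while giving the child only the variables it needs.
const tokenPath = '/tmp/deploy-token.jwt'; // illustrative path
writeFileSync(tokenPath, 'eyJhbGciOi...', { mode: 0o600 }); // placeholder token

spawn('node', ['deploy.js', '--token-file', tokenPath], {
  env: { PATH: process.env.PATH ?? '' }, // nothing else is inherited
  stdio: 'inherit',
});
```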
A natural progression to smarter tokens is happening even on the people-to-machine side: https://tidbits.com/2022/06/27/why-passkeys-will-be-simpler-and-more-secure-than-passwords/.
Based on this discussion containers/skopeo#434 (comment), it appears:
This explains why it's possible to set file variables in the GitLab interface:
Something to be aware of is that environment variables are also scoped in GitLab, so we can define variables for jobs within a given scope. While this means you can create new env variables for different scopes, there's still the problem that capability tokens are not composable. If two programs are both using …
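For instance, if two programs in the same job both expect their capability in one ambient variable name, only one of them can be satisfied at a time, whereas explicitly passed capabilities compose; a small sketch (the variable name is invented):

```ts
// Both libraries reach for the same ambient variable name, so a single CI job
// can only satisfy one of them at a time.
function callServiceA(): Record<string, string> {
  return { Authorization: `Bearer ${process.env.ACCESS_TOKEN ?? ''}` };
}
function callServiceB(): Record<string, string> {
  return { Authorization: `Bearer ${process.env.ACCESS_TOKEN ?? ''}` };
}

// Explicitly passed capabilities compose: each caller supplies the token
// intended for that service, and nothing is ambient.
function callService(token: string): Record<string, string> {
  return { Authorization: `Bearer ${token}` };
}
callService('token-for-A');
callService('token-for-B');
```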
In other news, regarding the security of Chocolatey packages: right now Chocolatey relies on packages provided by the Chocolatey community. In particular our bill of materials includes nodejs and python, although extra packages may be needed in the future. In that sense it's no more or less secure than npm packages or nixpkgs; all rely on the community. Officially they recommend hosting your own packages and internalising them to avoid network access, especially since packages are not "pinned" in Chocolatey, unlike nixpkgs (which is part of the reason we like nixpkgs). But from a security perspective, no matter what, you're always going to be running trusted (ideally) but unverified code. This is not ideal: all signatures can do is reify the trust chain, which ultimately results in a chain of liability, and that is an ex post facto security technique. Regardless of the trust chain (https://www.chainguard.dev/), a vulnerability means the damage is already done. Preventing damage ahead of time requires more than just trust, and this leads to the principle of least privilege, which is enforced in one of two ways:
1. Outside-in: isolating untrusted code after the fact (virtual machines, containers, sandboxes, network isolation, environment filtering).
2. Inside-out: building programs that are correct by construction, so each component only ever holds the authority it needs.
Most security attempts use the first technique: some form of isolation, whether by virtual machines, containerisation, isolated VMs, network isolation, environment variable filtering with the env command or namespaces, or even the environment-scoping technique above. The second technique is not practical today because of the legacy of our industry's software architecture. The fundamental problem with technique one is that everything starts open and we then try to selectively close things down; this is privacy as an afterthought, and it is doomed to failure because it fundamentally does not scale. The fundamental problem with technique two is that it makes interoperability something that requires forethought; this is privacy by default.
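To make the contrast concrete, here is a sketch (module and path names invented): technique one fences in a module that helps itself to ambient authority, while technique two constructs the module with exactly the capability it needs, so there is nothing left to fence:

```ts
import { readFileSync } from 'node:fs';

// Technique one (outside-in): the module helps itself to ambient authority
// (the whole filesystem), and we rely on an external fence (container,
// namespace, env filtering) to limit the damage.
function backupEverything(): string {
  return readFileSync('/etc/app/config.json', 'utf8'); // could read any path it likes
}

// Technique two (inside-out): the module is constructed with exactly the
// capability it needs, a reader for one file, and can do nothing else.
type FileReader = () => string;

function backupConfig(readConfig: FileReader): string {
  return readConfig();
}

// The caller decides how much authority to delegate.
const configReader: FileReader = () => readFileSync('/etc/app/config.json', 'utf8');
backupConfig(configReader);
```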
Regarding OCLs and derivative/proxy capabilities, see Cloudflare Zero Trust. I've already signed up for it. It makes sense as the proxy resource that you can set up access control for; PK would then be a decentralised version of that. It would need sufficient programmability to be able to derive secret capabilities, and it does seem to have it. This is quite a big change from VPN network boundaries: it's designed to integrate with SSO and allow direct access to internal systems. It's a bit complicated and does seem flexible, but I think Tailscale's onboarding process is superior for creating zero-trust networks between Tailscale users. However, because it works over HTTP rather than a VPN, it is technologically more portable.
#472 goes into separating …
What is your research hypothesis/question?
Split off from #166 so that this issue can focus on smart token research.
Review existing ideas, literature and prior work
Research conclusion
Sub-Issues & Sub-PRs created