Merge branch 'ceramicnetwork:main' into main
osrm authored Dec 20, 2024
2 parents 38c6067 + 0a92347 commit eacf949
Showing 6 changed files with 210 additions and 46 deletions.
29 changes: 29 additions & 0 deletions docs/composedb/create-ceramic-app.mdx
@@ -11,6 +11,35 @@ Get up and running quickly with a basic ComposeDB application with one command.
- **Node.js v20** - If you are using a different version, please use `nvm` to install Node.js v20 for best results.
- **npm v10** - Installed automatically with Node.js v20

You will also need to run a `ceramic-one` node in the background, which provides access to the Ceramic
data network. To set it up, follow the steps below:

:::note
The instructions below cover the steps for macOS-based systems. If you are running a Linux-based system, you can find the
instructions [here](https://github.com/ceramicnetwork/rust-ceramic?tab=readme-ov-file#linux---debian-based-distributions).
:::

1. Install `ceramic-one` using [Homebrew](https://brew.sh/):

```bash
brew install ceramicnetwork/tap/ceramic-one
```

2. Start the `ceramic-one` daemon using the following command:
```bash
ceramic-one daemon --network in-memory
```

:::note
By default, the command above will spin up a node connected to the `in-memory` network. You can change this behaviour by passing a different network of your choice to the `--network` flag. For example:

```bash
ceramic-one daemon --network testnet-clay
```
:::

---

## Start the ComposeDB example app

You can easily create a simple ComposeDB starter project by using our CLI and running the following command:

<Tabs
117 changes: 117 additions & 0 deletions docs/composedb/examples/taco-access-control.mdx
@@ -0,0 +1,117 @@
# TACo with ComposeDB

*Store sensitive data on ComposeDB, using decentralized access control to enforce fine-grained decryption rights.*

This guide explains how to integrate [TACo](https://docs.threshold.network/applications/threshold-access-control) into ComposeDB, which enables the storing and sharing of non-public data on Ceramic. A more detailed version of this tutorial is available [here](https://docs.threshold.network/app-development/threshold-access-control-tac/integration-guides/ceramic-+-taco).

## TACo Overview

TACo is a programmable encrypt/decrypt API for applications that handle sensitive user data, without compromising on privacy, security or decentralization. TACo offers a distinct alternative to centralized, permissioned, and TEE-dependent access control services.

TACo is the first and only end-to-end encrypted data sharing layer in which access to data payloads is always collectively enforced by a distributed group. Today, over 120 service providers permissionlessly run TACo clients. They independently validate whether a given data request satisfies pre-specified conditions, only then provisioning decryption material fragments for client-side assembly, decryption, and plaintext access.

TACo offers a flexible access control framework and language, in which access conditions can be configured individually and combined logically. Developers can compose dynamic access workflows for their users – for example, using
the sequential conditions feature to predicate the input to a given access condition on the output of a previous condition or call. Conditions may also be programmatically combined with both on-chain and off-chain authentication methods.
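
For illustration, the sketch below combines a balance check with a time lock using a compound condition. It is a minimal sketch based on the condition types exposed by `taco-web`; the chain ID and the threshold values are placeholder assumptions.

```TypeScript
import { conditions } from "@nucypher/taco";

// Requirement 1: the requester's wallet holds a positive balance
// (placeholder chain ID 80002, i.e. Polygon Amoy).
const balanceCondition = new conditions.base.rpc.RpcCondition({
  chain: 80002,
  method: 'eth_getBalance',
  parameters: [':userAddress'],
  returnValueTest: { comparator: '>', value: 0 },
});

// Requirement 2: decryption only allowed after a given Unix timestamp (placeholder value).
const timeCondition = new conditions.base.time.TimeCondition({
  chain: 80002,
  method: 'blocktime',
  returnValueTest: { comparator: '>=', value: 1735689600 },
});

// Both requirements must hold before nodes provision decryption material.
const compoundCondition = new conditions.compound.CompoundCondition({
  operator: 'and',
  operands: [balanceCondition, timeCondition],
});
```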

TACo’s encrypt/decrypt API – [taco-web](https://github.com/nucypher/taco-web) – is straightforward to integrate into any web app and usable in parallel with core Web3 infrastructure like Ceramic.

### Use Cases

- **Social networks & knowledge bases:** Leverage Ceramic's verifiable credentials and TACo's credential-based decryption to ensure that private user-generated content is only viewable by those who are supposed to see it, and nobody else.

- **IoT event streams:** Let data flow from sensors to legitimate recipients, without trusting an intermediary server to handle the routing and potentially harvest sensitive (meta)data. For example, a medical professional can be issued a temporary access token if the output data from a patient's wearable device rises above a certain threshold.

- **LLM chatbots:** Messages to and from a chatbot should be 100% private, not mined by a UX-providing intermediary. Harness Ceramic's web-scale transaction processing and TACo's per-message encryption/condition granularity to provide a smooth and safe experience for users of LLM interfaces.

## Example Application & Repo

The "TACo with ComposeDB Message Board [Application](https://github.com/nucypher/taco-composedb/tree/main)" is provided as an example and reference for developers – illustrating how TACo and ComposeDB can be combined in a browser-based messaging app. Once installed, a simple UI shows how messages can be encrypted by data producers with access conditions embedded, and how data consumers can view messages *only* if they satisfy those conditions. Launching the demo also involves running a local Ceramic node, to which TACo-encrypted messages are saved and immediately queryable by data requestors.

The following sections explain the core components of TACo’s access control system – access conditions, encryption, and decryption.

### Specifying access conditions & authentication methods

A recipient, or data consumer, must prove their right to access the private data in two ways: (1) authentication and (2) condition fulfillment. The data producer must specify the authentication method(s) and condition(s) before encrypting the private data, as this configuration is embedded alongside the encrypted payload.

In the example snippet below, we are using RPC conditions. The function will check the *data consumer’s* Ethereum wallet balance, which they prove ownership of via the chosen authentication method – in this case an EIP-4361 (Sign-In with Ethereum) message. Note that this message has already been solicited and utilized by the application, analogous to single-sign-on functionality. This setup is the same as in the demo code above and can be viewed directly in the [repo](https://github.com/nucypher/taco-composedb/blob/main/src/fragments/chatinputbox.tsx#L26-L34).

```TypeScript
import { conditions } from "@nucypher/taco";

const rpcCondition = new conditions.base.rpc.RpcCondition({
  chain: 80002,
  method: 'eth_getBalance',
  parameters: [':userAddressExternalEIP4361'],
  returnValueTest: {
    comparator: '>',
    value: 0,
  },
});
```

### Encrypting & saving the data

To complete the encryption step, the following are added as arguments:
a. `domain` – testnet or mainnet
b. `ritualId` – the ID of the cohort of TACo nodes that will collectively manage access to the data
c. a standard web3 provider

The output of this function is a payload containing both the encrypted data and embedded metadata necessary for a qualifying data consumer to access the plaintext message.

```TypeScript
import { initialize, encrypt, conditions, domains, toHexString } from '@nucypher/taco';
import { ethers } from "ethers";

await initialize();

const web3Provider = new ethers.providers.Web3Provider(window.ethereum);
const ritualId = 0;
const message = "I cannot trust a centralized access control layer with this message.";
// `rpcCondition` is the access condition defined in the previous snippet.
const messageKit = await encrypt(
  web3Provider,
  domains.TESTNET,
  message,
  rpcCondition,
  ritualId,
  web3Provider.getSigner()
);
const encryptedMessageHex = toHexString(messageKit.toBytes());
```
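
To complete the "saving" half of this step, the hex-encoded payload can be written to ComposeDB like any other document. The sketch below assumes a hypothetical `Messages` model with a single `ciphertext` string field and a generated runtime definition; adapt the model and field names to your own composite.

```TypeScript
import { ComposeClient } from '@composedb/client';
// Runtime definition generated when compiling your composite (path is an assumption).
import { definition } from './__generated__/definition.js';

const compose = new ComposeClient({ ceramic: 'http://localhost:7007', definition });
// Note: mutations require an authenticated session, e.g. compose.setDID(did).

// Persist the TACo-encrypted payload; only consumers who satisfy the embedded
// conditions will be able to decrypt what they query back.
const result = await compose.executeQuery(
  `mutation CreateMessage($i: CreateMessagesInput!) {
    createMessages(input: $i) {
      document { id }
    }
  }`,
  { i: { content: { ciphertext: encryptedMessageHex } } }
);
```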

### Querying & decrypting the data

Data consumers interact with the TACo API via the `decrypt` function, which takes the following arguments:

a. `provider`
b. `domain`
c. `encryptedMessage`
d. `conditionContext`

`conditionContext` is a way for developers to programmatically map methods for authenticating a data consumer to specific access conditions – all executable at decryption time. For example, if the condition involves proving ownership of a social account, authenticate via OAuth.

```TypeScript
import { conditions, decrypt, Domain, ThresholdMessageKit } from '@nucypher/taco';
import { ethers } from "ethers";

export async function decryptWithTACo(
  encryptedMessage: ThresholdMessageKit,
  domain: Domain,
  conditionContext?: conditions.context.ConditionContext
): Promise<Uint8Array> {
  const provider = new ethers.providers.Web3Provider(window.ethereum);
  return await decrypt(
    provider,
    domain,
    encryptedMessage,
    conditionContext,
  );
}
```

Note that the EIP4361 authentication data required to validate the user address (within the condition) is supplied via the `conditionContext` object. To understand this component better, check out the demo [repo](https://github.com/nucypher/taco-composedb/blob/main/src/fragments/chatcontent.tsx#L47).
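
As a sketch of how that object can be assembled, the snippet below reuses the SIWE message and signature the application has already collected, following the single-sign-on pattern described earlier. The class and method names follow the demo repo and the `@nucypher/taco-auth` package and should be treated as assumptions.

```TypeScript
import { conditions, ThresholdMessageKit } from '@nucypher/taco';
import { SingleSignOnEIP4361AuthProvider } from '@nucypher/taco-auth';

// `siweMessage` and `siweSignature` are assumed to come from the app's
// existing Sign-In with Ethereum flow.
async function buildConditionContext(
  encryptedMessage: ThresholdMessageKit,
  siweMessage: string,
  siweSignature: string
): Promise<conditions.context.ConditionContext> {
  const context = conditions.context.ConditionContext.fromMessageKit(encryptedMessage);
  const authProvider = await SingleSignOnEIP4361AuthProvider.fromExistingSiweInfo(
    siweMessage,
    siweSignature
  );
  // Satisfies the `:userAddressExternalEIP4361` parameter used in the condition.
  context.addAuthProvider(':userAddressExternalEIP4361', authProvider);
  return context;
}
```

The resulting context is then passed as the final argument to `decryptWithTACo` above.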

### Using ComposeDB & TACo in production

For Ceramic, connect to Mainnet (`domains.MAINNET`).

For TACo, a funded Mainnet ritualID is required – this connects the encrypt/decrypt API to a cohort of independently operated nodes and corresponds to a DKG public key generated by independent parties. A dedicated ritualID for Ceramic + TACo projects will be sponsored soon. Watch for updates here.
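
For illustration, switching the earlier testnet call to production could look like the sketch below; the ritual ID is a placeholder assumption, to be replaced with a funded mainnet ID once available.

```TypeScript
import { encrypt, domains } from '@nucypher/taco';

// Placeholder: substitute your funded mainnet ritual ID.
const MAINNET_RITUAL_ID = 0; // assumption, not a real ID

const messageKit = await encrypt(
  web3Provider,
  domains.MAINNET, // TACo production domain
  message,
  rpcCondition,
  MAINNET_RITUAL_ID,
  web3Provider.getSigner()
);
```
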
12 changes: 0 additions & 12 deletions docs/composedb/guides/composedb-server/server-configurations.mdx
@@ -183,18 +183,6 @@ Only Postgres is currently supported for production usage.

:::

-## History Sync
-By default, Ceramic nodes will only index documents they observe using pubsub messages. In order to index documents created before the node was deployed or configured to index some models, **History Sync** needs to be enabled on the Ceramic node, in the `daemon.config.json` file:
-
-```json
-{
-  ...
-  "indexing": {
-    ...
-    "enable-historical-sync": true
-  }
-}
-```

## IPFS Process
### Available Configurations
87 changes: 56 additions & 31 deletions docs/composedb/interact-with-data.mdx
@@ -44,10 +44,10 @@ In the [Create your composite](./create-your-composite.mdx) guide, we fetched tw

```graphql
query{
-  postIndex(first: 2) {
+  postsIndex(first: 2) {
    edges {
      node {
-        text
+        body
      }
    }
  }
@@ -67,16 +67,16 @@ You should see a response similar to the one below. Here, nodes correspond to st
```json
{
  "data": {
-    "postIndex": {
+    "postsIndex": {
      "edges": [
        {
          "node": {
-            "text": "This is my first post."
+            "text": "A Post created using composites and GraphQL"
          }
        },
        {
          "node": {
-            "text": "My second post about ComposeDB!"
+            "text": "This is my second post!"
          }
        }
      ]
@@ -97,10 +97,10 @@ You have options to retrieve specific records or last `n` indexed records as wel

```graphql
query{
-  postIndex(last: 3) {
+  postsIndex(last: 3) {
    edges {
      node {
-        text
+        body
      }
    }
  }
@@ -121,11 +121,15 @@ Let’s say, you would like to create a post and add it to the graph. To do that


```graphql
-mutation CreateNewPost($i: CreatePostInput!){
-  createPost(input: $i){
-    document{
-      id
-      text
+mutation CreateNewPost($i: CreatePostsInput!){
+  createPosts(input: $i){
+    document{
+      id
+      title
+      body
+      tag
+      ranking
+      created_at
    }
  }
}
@@ -141,7 +145,11 @@ mutation CreateNewPost($i: CreatePostInput!){
{
  "i": {
    "content": {
-      "text": "A Post created using composites and GraphQL"
+      "title": "New post",
+      "body": "My new post on Ceramic",
+      "tag": "User post",
+      "ranking": 5,
+      "created_at": "2024-12-03T10:15:30Z"
    }
  }
}
@@ -160,10 +168,14 @@ The result of the query above will be a new document with a unique ID and the co
```json
{
  "data": {
-    "createPost": {
+    "createPosts": {
      "document": {
-        "id": "kjzl6kcym7w8y9xlffqruh3v7ou1vn11t8203i6te2i3pliizt65ad3vdh5nl4l",
-        "text": "A Post created using composites and GraphQL"
+        "id": "kjzl6kcym7w8y5ygh1fyvstbjztd69suybc4ez8bet2hun7jezrc2m0uwg5bm3q",
+        "title": "New post",
+        "body": "My new post on Ceramic",
+        "tag": "User post",
+        "ranking": 5,
+        "created_at": "2024-12-03T10:15:30Z"
      }
    }
  }
@@ -173,7 +185,8 @@ The result of the query above will be a new document with a unique ID and the co

:::note

-Stream IDs are unique. The “id” you will see in the response when performing the mutation above will be different.
+Stream IDs are unique. The “id” you will see in the response when performing the mutation above will be different. Keep that in mind
+as you follow this guide and update the id to the one that you see in your response.

:::

@@ -191,11 +204,15 @@ You can find your post’s ID in the response after you ran the `CreateNewPost`
**Query:**

```graphql
-mutation UpdatePost($i: UpdatePostInput!) {
-  updatePost(input: $i) {
+mutation UpdatePost($i: UpdatePostsInput!) {
+  updatePosts(input: $i) {
    document {
      id
-      text
+      title
+      body
+      tag
+      ranking
+      created_at
    }
  }
}
@@ -208,24 +225,32 @@ mutation UpdatePost($i: UpdatePostInput!) {
```json
{
  "i": {
-    "id": "kjzl6kcym7w8y9xlffqruh3v7ou1vn11t8203i6te2i3pliizt65ad3vdh5nl4l",
+    "id": "kjzl6kcym7w8y5ygh1fyvstbjztd69suybc4ez8bet2hun7jezrc2m0uwg5bm3q",
    "content": {
-      "text": "My best post!"
+      "title": "New post",
+      "body": "My new post on Ceramic using ComposeDB",
+      "tag": "User post",
+      "ranking": 5,
+      "created_at": "2024-12-03T10:15:30Z"
    }
  }
}
```

-This mutation will update the record with ID `kjzl6kcym7w8y9xlffqruh3v7ou1vn11t8203i6te2i3pliizt65ad3vdh5nl4l`.
+This mutation will update the record with ID `kjzl6kcym7w8y5ygh1fyvstbjztd69suybc4ez8bet2hun7jezrc2m0uwg5bm3q`.

**Response:**
```json
{
  "data": {
-    "updatePost": {
+    "updatePosts": {
      "document": {
-        "id": "kjzl6kcym7w8y9xlffqruh3v7ou1vn11t8203i6te2i3pliizt65ad3vdh5nl4l",
-        "text": "My best post!"
+        "id": "kjzl6kcym7w8y5ygh1fyvstbjztd69suybc4ez8bet2hun7jezrc2m0uwg5bm3q",
+        "title": "New post",
+        "body": "My new post on Ceramic using ComposeDB",
+        "tag": "User post",
+        "ranking": 5,
+        "created_at": "2024-12-03T10:15:30Z"
      }
    }
  }
@@ -241,8 +266,8 @@ mutation with the `shouldIndex` option set to `true`, and the post ID as variabl
**Query:**

```graphql
-mutation EnableIndexingPost($input: EnableIndexingPostInput!) {
-  enableIndexingPost(input: $input) {
+mutation EnableIndexingPost($i: EnableIndexingPostsInput!) {
+  enableIndexingPosts(input: $i) {
    document {
      id
    }
@@ -257,19 +282,19 @@ mutation EnableIndexingPost($input: EnableIndexingPostInput!) {
```json
{
  "i": {
-    "id": "kjzl6kcym7w8y9xlffqruh3v7ou1vn11t8203i6te2i3pliizt65ad3vdh5nl4l",
+    "id": "kjzl6kcym7w8y5ygh1fyvstbjztd69suybc4ez8bet2hun7jezrc2m0uwg5bm3q",
    "shouldIndex": false
  }
}
```

-This mutation will un-index the record with ID `kjzl6kcym7w8y9xlffqruh3v7ou1vn11t8203i6te2i3pliizt65ad3vdh5nl4l`.
+This mutation will un-index the record with ID `kjzl6kcym7w8y5ygh1fyvstbjztd69suybc4ez8bet2hun7jezrc2m0uwg5bm3q`.

**Response:**
```json
{
  "data": {
-    "enableIndexingPost": {
+    "enableIndexingPosts": {
      "document": null
    }
  }
6 changes: 3 additions & 3 deletions docs/wheel/wheel-reference.mdx
@@ -112,9 +112,9 @@ This section dives deeper into the Ceramic parameters you can configure when you

An option to define whether IPFS runs in the same compute process as Ceramic. You have two options to choose from:

-- Bundled - IPFS running in same compute process as Ceramic; recommended for early prototyping.
-- Remote - IPFS running in separate compute process; recommended for production and everything besides early prototyping.
-  This assumes that you have the IPFS process setup and can provide an IPFS Hostname.
+- Remote - IPFS running in separate compute process; recommended for all Ceramic versions that use `ceramic-one`. This configuration requires an IPFS Hostname. Default value is `http://localhost:5101`.
+- Bundled - IPFS running in same compute process as Ceramic; used only with older Ceramic versions that use Kubo.


### State Store

5 changes: 5 additions & 0 deletions sidebars.ts
@@ -241,6 +241,11 @@ const sidebars: SidebarsConfig = {
type: "doc",
id: "composedb/examples/verifiable-credentials",
label: "Verifiable Credentials"
},
{
type: "doc",
id: "composedb/examples/taco-access-control",
label: "TACo with ComposeDB"
}
]
},
