diff --git a/docs/composedb/create-ceramic-app.mdx b/docs/composedb/create-ceramic-app.mdx
index 934454c6..d3993ede 100644
--- a/docs/composedb/create-ceramic-app.mdx
+++ b/docs/composedb/create-ceramic-app.mdx
@@ -11,6 +11,35 @@ Get up and running quickly with a basic ComposeDB application with one command.
 - **Node.js v20** - If you are using a different version, please use `nvm` to install Node.js v20 for best results.
 - **npm v10** - Installed automatically with NodeJS v20

+You will also need to run a `ceramic-one` node in the background, which provides access to the
+Ceramic data network. To set it up, follow the steps below:
+
+:::note
+The instructions below cover the steps for macOS-based systems. If you are running on a Linux-based system, you can find the
+instructions [here](https://github.com/ceramicnetwork/rust-ceramic?tab=readme-ov-file#linux---debian-based-distributions).
+:::
+
+1. Install `ceramic-one` using [Homebrew](https://brew.sh/):
+
+```bash
+brew install ceramicnetwork/tap/ceramic-one
+```
+
+2. Start the `ceramic-one` daemon using the following command:
+```bash
+ceramic-one daemon --network in-memory
+```
+
+:::note
+By default, the command above will spin up a node that connects to the `in-memory` network. You can change this behaviour by passing the `--network` flag with a network of your choice. For example:
+
+```bash
+ceramic-one daemon --network testnet-clay
+```
+:::
+
+---
+
+## Start the ComposeDB example app
+
 You can easily create a simple ComposeDB starter project by using our CLI and running the following command:

+### Encrypting & saving the data
+
+To complete the encryption step, the following arguments are added:
+a. `domain` – testnet or mainnet
+b. `ritualId` – the ID of the cohort of TACo nodes who will collectively manage access to the data
+c. a standard web3 provider
+
+The output of this function is a payload containing both the encrypted data and the embedded metadata necessary for a qualifying data consumer to access the plaintext message.
+
+```TypeScript
+import { initialize, encrypt, conditions, domains, toHexString } from '@nucypher/taco';
+import { ethers } from "ethers";
+
+await initialize();
+
+const web3Provider = new ethers.providers.Web3Provider(window.ethereum);
+const ritualId = 0;
+const message = "I cannot trust a centralized access control layer with this message.";
+// `rpcCondition` is the access condition created in the previous step.
+const messageKit = await encrypt(
+  web3Provider,
+  domains.TESTNET,
+  message,
+  rpcCondition,
+  ritualId,
+  web3Provider.getSigner()
+);
+const encryptedMessageHex = toHexString(messageKit.toBytes());
+```
+
+### Querying & decrypting the data
+
+Data consumers interact with the TACo API via the `decrypt` function, which takes the following arguments:
+
+a. `provider`
+b. `domain`
+c. `encryptedMessage`
+d. `conditionContext`
+
+`conditionContext` is a way for developers to programmatically map methods for authenticating a data consumer to specific access conditions – all executable at decryption time. For example, if the condition involves proving ownership of a social account, the data consumer authenticates via OAuth.
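+
+For illustration, here is a rough sketch of how such a condition context can be prepared before decryption. It is only a sketch: it assumes the auth helpers shipped in the `@nucypher/taco-auth` package (`EIP4361AuthProvider`) and a condition that references the `:userAddress` context parameter, as in the demo linked below.
+
+```TypeScript
+import { conditions, ThresholdMessageKit } from '@nucypher/taco';
+import { EIP4361AuthProvider } from '@nucypher/taco-auth';
+import { ethers } from "ethers";
+
+// Sketch only: builds the context required to decrypt a message whose
+// condition uses the ":userAddress" context parameter.
+async function buildConditionContext(messageKit: ThresholdMessageKit) {
+  const web3Provider = new ethers.providers.Web3Provider(window.ethereum);
+
+  // Derive the set of required context parameters from the encrypted message itself.
+  const conditionContext =
+    conditions.context.ConditionContext.fromMessageKit(messageKit);
+
+  // Prove ownership of the wallet address with an EIP-4361 (Sign-In with Ethereum) signature.
+  const authProvider = new EIP4361AuthProvider(web3Provider, web3Provider.getSigner());
+  conditionContext.addAuthProvider(':userAddress', authProvider);
+
+  return conditionContext;
+}
+```
+
+With the context prepared, the `decrypt` call itself can be wrapped as follows: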
+
+```TypeScript
+import {conditions, decrypt, Domain, encrypt, ThresholdMessageKit} from '@nucypher/taco';
+import {ethers} from "ethers";
+
+export async function decryptWithTACo(
+  encryptedMessage: ThresholdMessageKit,
+  domain: Domain,
+  conditionContext?: conditions.context.ConditionContext
+): Promise<Uint8Array> {
+  const provider = new ethers.providers.Web3Provider(window.ethereum);
+  return await decrypt(
+    provider,
+    domain,
+    encryptedMessage,
+    conditionContext,
+  );
+}
+```
+
+Note that the EIP4361 authentication data required to validate the user address (within the condition) is supplied via the `conditionContext` object. To understand this component better, check out the demo [repo](https://github.com/nucypher/taco-composedb/blob/main/src/fragments/chatcontent.tsx#L47).
+
+### Using ComposeDB & TACo in production
+
+For Ceramic, connect to Mainnet.
+
+For TACo, use `domains.MAINNET` and a funded Mainnet ritualID – this connects the encrypt/decrypt API to a cohort of independently operated nodes and corresponds to a DKG public key generated by independent parties. A dedicated ritualID for Ceramic + TACo projects will be sponsored soon. Watch for updates here.
diff --git a/docs/composedb/guides/composedb-server/server-configurations.mdx b/docs/composedb/guides/composedb-server/server-configurations.mdx
index e9035251..c67d0bd5 100644
--- a/docs/composedb/guides/composedb-server/server-configurations.mdx
+++ b/docs/composedb/guides/composedb-server/server-configurations.mdx
@@ -183,18 +183,6 @@ Only Postgres is currently supported for production usage.

 :::

-## History Sync
-By default, Ceramic nodes will only index documents they observe using pubsub messages. In order to index documents created before the node was deployed or configured to index some models, **History Sync** needs to be enabled on the Ceramic node, in the `daemon.config.json` file:
-
-```json
-{
-  ...
-  "indexing": {
-    ...
-    "enable-historical-sync": true
-  }
-}
-```

 ## IPFS Process

 ### Available Configurations
diff --git a/docs/composedb/interact-with-data.mdx b/docs/composedb/interact-with-data.mdx
index 7c06cace..dd39b2ae 100644
--- a/docs/composedb/interact-with-data.mdx
+++ b/docs/composedb/interact-with-data.mdx
@@ -44,10 +44,10 @@ In the [Create your composite](./create-your-composite.mdx) guide, we fetched tw

```graphql
query{
-  postIndex(first: 2) {
+  postsIndex(first: 2) {
    edges {
      node {
-        text
+        body
      }
    }
  }
}
```
@@ -67,16 +67,16 @@ You should see a response similar to the one below. Here, nodes correspond to st

```json
{
  "data": {
-    "postIndex": {
+    "postsIndex": {
      "edges": [
        {
          "node": {
-            "text": "This is my first post."
+            "body": "A Post created using composites and GraphQL"
          }
        },
        {
          "node": {
-            "text": "My second post about ComposeDB!"
+            "body": "This is my second post!"
          }
        }
      ]
    }
  }
}
```
@@ -97,10 +97,10 @@ You have options to retrieve specific records or last `n` indexed records as wel

```graphql
query{
-  postIndex(last: 3) {
+  postsIndex(last: 3) {
    edges {
      node {
-        text
+        body
      }
    }
  }
}
```
@@ -121,11 +121,15 @@ Let’s say, you would like to create a post and add it to the graph. 
To do that ```graphql -mutation CreateNewPost($i: CreatePostInput!){ - createPost(input: $i){ - document{ - id - text +mutation CreateNewPost($i: CreatePostsInput!){ + createPosts(input: $i){ + document{ + id + title + body + tag + ranking + created_at } } } @@ -141,7 +145,11 @@ mutation CreateNewPost($i: CreatePostInput!){ { "i": { "content": { - "text": "A Post created using composites and GraphQL" + "title": "New post", + "body": "My new post on Ceramic", + "tag": "User post", + "ranking": 5, + "created_at": "2024-12-03T10:15:30Z" } } } @@ -160,10 +168,14 @@ The result of the query above will be a new document with a unique ID and the co ```json { "data": { - "createPost": { + "createPosts": { "document": { - "id": "kjzl6kcym7w8y9xlffqruh3v7ou1vn11t8203i6te2i3pliizt65ad3vdh5nl4l", - "text": "A Post created using composites and GraphQL" + "id": "kjzl6kcym7w8y5ygh1fyvstbjztd69suybc4ez8bet2hun7jezrc2m0uwg5bm3q", + "title": "New post", + "body": "My new post on Ceramic", + "tag": "User post", + "ranking": 5, + "created_at": "2024-12-03T10:15:30Z" } } } @@ -173,7 +185,8 @@ The result of the query above will be a new document with a unique ID and the co :::note -Stream IDs are unique. The “id” you will see in the response when performing the mutation above will be different. +Stream IDs are unique. The “id” you will see in the response when performing the mutation above will be different. Keep that in mind +as you follow this guide and update the id to the one that you see in your response. ::: @@ -191,11 +204,15 @@ You can find your post’s ID in the response after you ran the `CreateNewPost` **Query:** ```graphql -mutation UpdatePost($i: UpdatePostInput!) { - updatePost(input: $i) { +mutation UpdatePost($i: UpdatePostsInput!) { + updatePosts(input: $i) { document { id - text + title + body + tag + ranking + created_at } } } @@ -208,24 +225,32 @@ mutation UpdatePost($i: UpdatePostInput!) { ```json { "i": { - "id": "kjzl6kcym7w8y9xlffqruh3v7ou1vn11t8203i6te2i3pliizt65ad3vdh5nl4l", + "id": "kjzl6kcym7w8y5ygh1fyvstbjztd69suybc4ez8bet2hun7jezrc2m0uwg5bm3q", "content": { - "text": "My best post!" + "title": "New post", + "body": "My new post on Ceramic using ComposeDB", + "tag": "User post", + "ranking": 5, + "created_at": "2024-12-03T10:15:30Z" } } } ``` -This mutation will update the record with ID `kjzl6kcym7w8y9xlffqruh3v7ou1vn11t8203i6te2i3pliizt65ad3vdh5nl4l`. +This mutation will update the record with ID `kjzl6kcym7w8y5ygh1fyvstbjztd69suybc4ez8bet2hun7jezrc2m0uwg5bm3q`. **Response:** ```json { "data": { - "updatePost": { + "updatePosts": { "document": { - "id": "kjzl6kcym7w8y9xlffqruh3v7ou1vn11t8203i6te2i3pliizt65ad3vdh5nl4l", - "text": "My best post!" + "id": "kjzl6kcym7w8y5ygh1fyvstbjztd69suybc4ez8bet2hun7jezrc2m0uwg5bm3q", + "title": "New post", + "body": "My new post on Ceramic using ComposeDB", + "tag": "User post", + "ranking": 5, + "created_at": "2024-12-03T10:15:30Z" } } } @@ -241,8 +266,8 @@ mutation with the `shouldIndex` option set to `true`, and the post ID as variabl **Query:** ```graphql -mutation EnableIndexingPost($input: EnableIndexingPostInput!) { - enableIndexingPost(input: $input) { +mutation EnableIndexingPost($i: EnableIndexingPostsInput!) { + enableIndexingPosts(input: $i) { document { id } @@ -257,19 +282,19 @@ mutation EnableIndexingPost($input: EnableIndexingPostInput!) 
{

```json
{
  "i": {
-    "id": "kjzl6kcym7w8y9xlffqruh3v7ou1vn11t8203i6te2i3pliizt65ad3vdh5nl4l",
+    "id": "kjzl6kcym7w8y5ygh1fyvstbjztd69suybc4ez8bet2hun7jezrc2m0uwg5bm3q",
    "shouldIndex": false
  }
}
```

-This mutation will un-index the record with ID `kjzl6kcym7w8y9xlffqruh3v7ou1vn11t8203i6te2i3pliizt65ad3vdh5nl4l`.
+This mutation will un-index the record with ID `kjzl6kcym7w8y5ygh1fyvstbjztd69suybc4ez8bet2hun7jezrc2m0uwg5bm3q`.

**Response:**

```json
{
  "data": {
-    "enableIndexingPost": {
+    "enableIndexingPosts": {
      "document": null
    }
  }
}
```
diff --git a/docs/wheel/wheel-reference.mdx b/docs/wheel/wheel-reference.mdx
index c9767b63..ceb6f2ec 100644
--- a/docs/wheel/wheel-reference.mdx
+++ b/docs/wheel/wheel-reference.mdx
@@ -112,9 +112,9 @@ This section dives deeper into the Ceramic parameters you can configure when you

 An option to define if IPFS runs in the same compute process as Ceramic. You have two options to choose from:

-- Bundled - IPFS running in same compute process as Ceramic; recommended for early prototyping.
-- Remote - IPFS running in separate compute process; recommended for production and everything besides early prototyping.
-  This assumes that you have the IPFS process setup and can provide an IPFS Hostname.
+- Remote - IPFS running in a separate compute process; recommended for all Ceramic versions that use `ceramic-one`. This configuration requires an IPFS Hostname. The default value is `http://localhost:5101`.
+- Bundled - IPFS running in the same compute process as Ceramic; used only with older Ceramic versions that use Kubo.
+

### State Store

diff --git a/sidebars.ts b/sidebars.ts
index fa299d2a..3f6abdc5 100644
--- a/sidebars.ts
+++ b/sidebars.ts
@@ -241,6 +241,11 @@ const sidebars: SidebarsConfig = {
            type: "doc",
            id: "composedb/examples/verifiable-credentials",
            label: "Verifiable Credentials"
+          },
+          {
+            type: "doc",
+            id: "composedb/examples/taco-access-control",
+            label: "TACo with ComposeDB"
           }
         ]
       },