This KILT Indexer project builds a customized database with information from the KILT Blockchain, leveraging the robust capabilities of SubQuery as its backbone. This Indexer tailors the generic framework provided by SubQuery for collecting, processing, and storing data from chain node interactions. The processed data is made available for customers to query via a website or HTTP requests.
The majority of the information collected focuses on Identity solutions rather than transactions. This includes data related to decentralized identifiers (DIDs) and verifiable credentials (VCs).
You can visit our deployed websites to query information. It is also possible to run the indexer locally, allowing you to customize it to your needs.
You can interact with our public GraphQL Servers via the Playgrounds deployed under the following links:
- To query data from the KILT production Blockchain, aliased as Spiritnet, visit: https://indexer.kilt.io/
- To query data from the KILT development Blockchain, aliased as Peregrine, visit: https://dev-indexer.kilt.io/
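For programmatic access, the same GraphQL APIs can also be queried via plain HTTP requests. The snippet below is only a hypothetical example: it assumes the GraphQL endpoint is served at the playground URL itself and borrows a simple query from the examples further down.

```bash
# Hypothetical HTTP request against the public Spiritnet indexer;
# assumes the GraphQL endpoint is the playground URL itself.
curl -X POST https://indexer.kilt.io/ \
  -H "Content-Type: application/json" \
  -d '{"query": "{ attestations(first: 1) { totalCount } }"}'
```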
- Clone repository and install all necessary modules.
- Define your environment variables.
- Run the Indexer.
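Assuming a placeholder clone URL, and with each command detailed in the sections below, the whole sequence looks roughly like this:

```bash
git clone <repository-url>   # placeholder; use this repository's clone URL
cd <repository-directory>
yarn install                 # install all necessary modules
# define RPC_ENDPOINTS as described below, then:
yarn dev                     # run the Indexer
```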
Make sure you have installed the required software before running this project:
After cloning the repository, install all required modules by running `yarn install`.
You only need to define one environment variable to run this project, namely `RPC_ENDPOINTS`.
By default, it is assumed that the production KILT Blockchain, Spiritnet, will be indexed.
Please assign a Spiritnet endpoint node to `RPC_ENDPOINTS`; you can find some of them in our documentation.
There are default values for all other environment variables.
You can use other values by assigning them inside an `.env` file.
In the root directory of this repository is an `.env.example` file that lists how to name the environment variables and what their use is.
Add the `.env` file at the same directory level as the `.env.example` file.
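For example, a minimal `.env` file could consist of a single variable. The endpoint below is only illustrative; pick a current one from the documentation.

```
# Illustrative Spiritnet endpoint; see the KILT documentation for current nodes
RPC_ENDPOINTS=wss://spiritnet.kilt.io
```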
First, make sure the Docker daemon is running.
The easiest way to run your project is by running `yarn dev` or `npm run-script dev`.
This command sequentially executes three steps.
Each of these steps can also be executed independently using the following commands:
`yarn codegen`
- Generates types from the GraphQL schema definition and saves them in the `/src/types` directory.
- Must be run after each change to the `schema.graphql` file.
`yarn build`
- Builds and packages the SubQuery project into the `/dist` directory.
`yarn start:docker`
- An alias for `docker-compose pull && docker-compose up`.
- Fetches and runs three Docker containers: an indexer, a PostgreSQL database, and a query service.
- The `docker-compose.yml` file manages this Docker Compose application, orchestrating the various containers needed for the project. It specifies which images to pull and configures how to (re)start the services.
- This requires Docker to be running locally.
You can watch the three services start (it may take a few minutes on first start). Once all are running, head to http://localhost:3000 in your browser and you should see a GraphQL Playground with the schemas ready to query.
If you change `schema.graphql`, `project.ts`, or `mappingHandlers.ts`, all files autogenerated during `yarn dev` will be overwritten, with the big exception of the database inside `.data/`.
If you make incompatible changes to any of those files, make sure you delete `.data/` before running `yarn dev` again.
After making changes to `schema.graphql`, you need to renew your database.
Additionally, you can run `yarn slash` to delete all autogenerated files at once, including the database.
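For example, after an incompatible change to `schema.graphql`, a full reset could look like this:

```bash
yarn slash   # delete all autogenerated files, including the database in .data/
yarn dev     # regenerate types, rebuild, and restart the containers
```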
Fork this repository before making any changes. We would love to see any improvement suggestions from the community in the form of Pull Requests.
On top of this working SubQuery project, you can add customizations by changing the following files:
- The project manifest `project.yaml`: This defines the key project configuration and mapping handler filters. `project.yaml` is autogenerated from `project.ts`, which consumes the environment variables.
- The GraphQL schema `schema.graphql`: This defines the shape of the resulting data being indexed, i.e., the data models for the database (see the first sketch after this list).
- The mapping directory `src/mappings/`: This contains TypeScript functions that handle the transformation logic from chain information to database entries, and defines the mapping handlers listed in the project manifest (see the second sketch after this list).
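To illustrate, a minimal entity definition in `schema.graphql` could look like the sketch below. It only assumes the `Block` fields used by the example fragments further down; the actual schema in this repository is more extensive.

```graphql
# Minimal sketch of an entity; mirrors the Block fields used in the example fragments.
type Block @entity {
  id: ID!          # required unique identifier
  hash: String!
  timeStamp: Date!
}
```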
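Likewise, a mapping handler is an exported async function that turns chain data into entities. The following is only a hypothetical sketch assuming the `Block` entity above; the real handlers in `src/mappings/` are more involved.

```typescript
// Hypothetical sketch of a mapping handler; the real ones live in src/mappings/.
import { SubstrateBlock } from "@subql/types";
import { Block } from "../types";

export async function handleBlock(block: SubstrateBlock): Promise<void> {
  // Transform the raw chain data into a database entity ...
  const entity = Block.create({
    id: block.block.header.number.toString(),
    hash: block.block.header.hash.toHex(),
    timeStamp: block.timestamp,
  });
  // ... and persist it via the autogenerated entity class.
  await entity.save();
}
```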
To get more logs while debugging, open `docker-compose.yml` and uncomment `- --log-level=trace`.
For more details, refer to the documentation of:
You can get support from SubQuery by joining the SubQuery Discord and messaging in the `#technical-support` channel.
For support from KILT, you can join the KILT Telegram chat or the KILT Discord and message us.
For this project, you can visit the playground under http://localhost:3000/ and try one of the following GraphQL queries to get a taste of how it works.
To help you explore the different possible queries and entities, you can draw out the documentation tab on the right side of the GraphQL Playground.
Most of the example queries below take advantage of the example fragments. You need to add the fragments to the playground as well if you want to run queries using those fragments.
Tip: Commas are optional in GraphQL.
GraphQL provides reusable units called fragments. Fragments let you construct sets of fields, and then include them in queries where needed.
fragment wholeBlock on Block {
id,
hash,
timeStamp,
}
fragment wholeAttestation on Attestation {
id,
claimHash,
cTypeId,
issuerId,
payer,
delegationID,
valid,
creationBlock {
...wholeBlock,
},
revocationBlock {
...wholeBlock,
},
removalBlock {
...wholeBlock,
},
}
fragment DidNames on Did {
id
web3NameId
}
There is a small collection of query examples to get you started. You can find it in the exampleQueries folder.
Most of the examples take advantage of the fragments, but they are optional. Here are two variants of the same query to show how they work.
- Find an Attestation by its claim hash:
  - Without using fragments:
query { attestations( filter: { claimHash: { equalTo: "0x7554dc0b69be9bd6a266c865a951cae6a168c98b8047120dd8904ad54df5bb08" } } ) { totalCount nodes { id claimHash cTypeId issuerId payer delegationID valid creationBlock { id hash timeStamp } } } }
  - Taking advantage of fragments:
query { attestations( filter: { claimHash: { equalTo: "0x7554dc0b69be9bd6a266c865a951cae6a168c98b8047120dd8904ad54df5bb08" } } ) { totalCount nodes { ...wholeAttestation } } }
This project leverages the SubQuery Testing Framework to ensure that the data processing logic works as expected and to help catch errors early in the development process.
One or more test cases are written for every handler, and all tests are re-run on every pull request.
This checks that the data coming from the blockchain is being processed and saved as expected.
The tests are written against information from the KILT production blockchain, Spiritnet, so that they remain valid in perpetuity.
The easier, but slower, way to run the tests is via `yarn contained:test`.
The recommended and (after setup) faster option is running `yarn test`, but it has a couple of requirements that can be fulfilled by following these steps:
- Install all packages by running `yarn install`.
- Start the PostgreSQL database container. Sadly, the tests can only interact with the Postgres container if it is available on port `5432`. The easiest way to set this up is to first run the project via `yarn dev` and, after a while, stop the unnecessary subquery-node and graphql-engine containers.
- Run the subquery-node container in test mode by running `yarn test`.
Please write new test cases inside the `src/test` directory.
Please try to group the tests into files similarly to how `src/mappings` does it, that is, based on entities.
For documentation about writing test cases, please refer to the official SubQuery documentation.
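As an illustration, a test case for the SubQuery testing framework has roughly the following shape. This sketch is hypothetical: it reuses the `Block` entity and `handleBlock` handler assumed in the customization sketches above, with placeholder values instead of real Spiritnet data.

```typescript
// Hypothetical sketch of a SubQuery test case; real tests live in src/test/.
import { subqlTest } from "@subql/testing";
import { Block } from "../types";

subqlTest(
  "handleBlock saves the expected Block entity", // test name
  1234567, // Spiritnet block height to replay (placeholder)
  [], // entities the handler depends on (none here)
  [
    // entities expected in the database after the handler has run
    Block.create({
      id: "1234567",
      hash: "0x0000", // placeholder; a real test uses the actual block hash
      timeStamp: new Date("2023-01-01T00:00:00Z"), // placeholder
    }),
  ],
  "handleBlock" // name of the handler under test
);
```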