Centralized cache for blockchain data
- scrape any data available on blockchain (events, tx data, traces)
- easily provide semantic layers on top
- ensure data consistency
| Package | Version | Description |
|---|---|---|
| etl | | Core package - responsible for ETL |
| graphql-api | | Exposes data as a GraphQL API |
| test-utils | | Utils for integration tests |
| utils | | Common reusable extractors etc. |
| validation | | Scripts to validate spock data with Google BigQuery |
```
npm install @oasisdex/spock-etl
spock-etl yourconfig.js|ts
```
- `dist/bin/migrate config` - launches database migrations (core and those defined in the config)
- `dist/bin/etl config` - launches the ETL process (a long-running process)
- `dist/index.js config` - runs a general GraphQL API exposing the database schema
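A config file like the one passed to these commands might look roughly as follows. This is an illustrative sketch only: all field names other than `api` (whose `responseCaching` and `whitelisting` options are described below) are assumptions about the config shape and may differ between spock versions, so consult the package source for the actual interface.

```typescript
// Illustrative spock config sketch. Fields marked "assumed" are NOT confirmed
// by the docs and may not match your spock-etl version.
export const config = {
  startingBlock: 8219360, // assumed field: first block to sync from
  extractors: [],         // assumed field: raw-data extractors (events, tx data, traces)
  transformers: [],       // assumed field: semantic layers built on top of raw data
  migrations: {},         // assumed field: migrations run by dist/bin/migrate
  api: {
    responseCaching: {
      enabled: true,
      duration: "15 seconds",
    },
  },
};
```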
API

We can automatically cache slow GraphQL queries. To enable it, set the `VL_GRAPHQL_CACHING_ENABLED=true` env variable or add to your config:

```
api: {
  responseCaching: {
    enabled: true,
    duration: "15 seconds" // default
  }
}
```
You probably don't want users to issue arbitrary queries against the GraphQL API. That's why we support query whitelisting. Enable it by:

```
{
  // ...
  api: {
    whitelisting: {
      enabled: true,
      whitelistedQueriesDir: "./queries",
      bypassSecret: "SECRET VALUE 123",
    },
  }
}
```
We rely on the special `operationName` parameter (part of the request's body) to match the requested query with a query defined in `whitelistedQueriesDir`. You can bypass the whole mechanism (for example, to test new queries) by providing the `bypassSecret` as `devMode` in the request's body.
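For illustration, a request against the whitelisted API might be built like this. The query text and endpoint path are hypothetical; `operationName`, `devMode`, and the bypass secret come from the description above.

```typescript
// Hypothetical request body for a whitelisted GraphQL query.
// `operationName` is matched against the queries stored in whitelistedQueriesDir.
const body = {
  operationName: "allBlocks", // must name a whitelisted query
  query: "query allBlocks { blocks { id } }", // illustrative query text
  variables: {},
  devMode: "SECRET VALUE 123", // optional: the bypassSecret, skips whitelisting
};

const payload = JSON.stringify(body);
// e.g. fetch("/graphql", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: payload,
// })
```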
spock pulls all of its data from an Ethereum node. Nodes can differ greatly from one another, and some are simply not reliable or consistent. Based on our tests:
- Alchemy works
- Infura DOESN'T WORK - it can randomly return empty sets for `getLogs` calls
- Self-hosted nodes should work (not tested yet), but keep in mind that spock can generate quite a lot of network calls (around 500k daily)
- `yarn build` - build everything
- `yarn build:watch` - build and watch
- `yarn test:fix` - run tests and auto-fix all errors

Tip: use `yarn link` to link packages locally.
```
docker-compose up -d
docker-compose stop
docker-compose down
```
We use consola for logging. By default it logs everything. To adjust the logging level, set the `VL_LOGGING_LEVEL` env variable, e.g. `VL_LOGGING_LEVEL=4` to omit the detailed db logs (the most verbose).
Configure Sentry by providing environment variables:

```
SENTRY_DSN=...
SENTRY_ENV=production
```

We will only report critical errors (i.e. stopped jobs).
There are two main requirements for transformers in spock, both related to processing reorged blocks:
- Transformers should be written as "pure" functions operating only on the arguments provided. There can be no internal caches kept in closures etc.
- All data written to the database has to be linked via foreign keys to the processed block.

When a reorg happens, spock will cascade-delete the reorged blocks along with all related data, then resync the new blocks.
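The two requirements above can be sketched as follows. The type and function names here are illustrative only, not spock's real interfaces: the point is that the transformer has no closure state, and every output row carries the block foreign key so a reorg's cascade delete removes it.

```typescript
// Hypothetical shapes for illustration; spock's actual types differ.
interface Log {
  block_id: number; // FK to the row of the block being processed
  address: string;
  data: string;
}

interface PersistedRow {
  block_id: number; // kept on every row so cascade deletes clean up reorgs
  address: string;
  data: string;
}

// "Pure": the output depends only on the input logs.
// No module-level caches, no state captured in closures.
function transformLogs(logs: Log[]): PersistedRow[] {
  return logs.map((log) => ({
    block_id: log.block_id, // preserve the FK to the processed block
    address: log.address.toLowerCase(),
    data: log.data,
  }));
}
```

If a transformer cached results internally, rows produced from a later-reorged block could survive the cascade delete or leak into the resync, breaking consistency.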