- Table of Contents
- Overview
- Major Components
- Setup
- Increase IPFS memory limits (for complex use cases)
- Development setup and instructions
- Testing Environment Setup
- Find us
Snapshotter Core Edge is the next iteration of the Snapshotter Core (Pooler) repository.
Key Architecture Improvements:
- Message Queue: The system now uses Redis with Dramatiq instead of RabbitMQ for improved performance and simplified deployment
- Modular Services: New Periphery services (Block Fetcher, Transaction Processor, Epoch Syncer) are separate components that work together to provide efficient blockchain data processing
- Dynamic Worker Generation: Worker services are automatically generated based on your project and aggregator configurations
- Streamlined Directory Structure: Compute modules linked to snapshotter-computes and configurations in snapshotter-configs live in `/computes` and `/config` respectively
The Epoch Syncer service (Periphery service) replaces the previous System Event Detector and provides enhanced functionality:
- Monitors both source chain blocks and protocol state events
- Verifies data availability in Redis cache before triggering snapshot generation
- Ensures all required blocks, transactions, and receipts are cached
- Sends events to the Processor Distributor via Redis/Dramatiq queues
- Prevents wasted computation by checking cache completeness
Related information, and the other services that depend on these components, can be found in the previous sections: State Transitions and Configuration.
The Processor Distributor, defined in processor_distributor.py, is the central coordinator of the snapshot generation process.
- It loads the preloader, base snapshotting, and aggregator config information from the settings files
- It receives events from the Epoch Syncer via the Redis-backed Dramatiq queue: `f'powerloom-event-detector_{settings.namespace}_{settings.instance_id}'`
- It creates and distributes processing messages based on:
  - Preloader configuration in `config/preloader.json`
  - Project configuration in `config/projects.json`
  - Aggregator configuration in `config/aggregator.json`
- For `EpochReleased` events:
  - Executes preloaders if configured to prepare data
  - Distributes work to snapshot workers for each project type
  - Each project type has its own dedicated worker pool
- For `ProcessingComplete` events:
  - Triggers aggregation workers to process completed base snapshots
  - Routes messages based on aggregation dependencies
The system uses specialized workers for different tasks, all communicating through Redis/Dramatiq message queues. Worker services are automatically generated based on your configuration files.
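To make the queueing model concrete, here is a minimal, hypothetical sketch of a Dramatiq actor bound to a Redis broker. The broker URL, queue name, and message fields are placeholders chosen for illustration; the real worker modules listed below define the actual actors and payloads.

```python
# Minimal sketch of the Redis/Dramatiq messaging pattern used between services.
# The queue name and message fields are illustrative assumptions, not the exact
# payloads used by Snapshotter Core Edge.
import dramatiq
from dramatiq.brokers.redis import RedisBroker

# Point Dramatiq at the shared Redis instance used by all services.
broker = RedisBroker(url="redis://localhost:6379/0")
dramatiq.set_broker(broker)


@dramatiq.actor(queue_name="powerloom-snapshotter_mainnet_node1-trade_volume")
def process_epoch(epoch_id: int, begin_block: int, end_block: int) -> None:
    """Hypothetical worker entry point invoked for each released epoch."""
    print(f"Building snapshot for epoch {epoch_id}: blocks {begin_block}-{end_block}")


if __name__ == "__main__":
    # Producers enqueue work with .send(); a `dramatiq` worker process consumes it.
    process_epoch.send(123, 19_000_000, 19_000_010)
```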
Snapshot Workers:
- Purpose: Build base snapshots according to `config/projects.json`
- Auto-generation: Each project type gets its own worker service (e.g., `snapshotter-worker-trade-volume`, `snapshotter-worker-pair-total-reserves`)
- Queue: Listen on project-specific queues: `f'powerloom-snapshotter_{settings.namespace}_{settings.instance_id}-{project_type}'`
- Compute Logic: Execute compute modules from the `/computes/` directory
- Implementation: `snapshotter/utils/snapshot_worker.py`
Aggregation Workers:
- Purpose: Build aggregate snapshots according to `config/aggregator.json`
- Processing: Transform completed base snapshots into higher-order aggregates
- Queue: Listen on: `f'powerloom-aggregator_{settings.namespace}_{settings.instance_id}'`
- Compute Logic: Execute modules from the `/computes/aggregates/` directory
- Implementation: `snapshotter/utils/aggregation_worker.py`
Cacher Workers:
- Purpose: Manage snapshot data caching and state updates
- Events Handled: `SnapshotSubmitted`, `SnapshotFinalized`, `SnapshotBatchSubmitted`
- Queue: Listen on: `f'powerloom-cacher_{settings.namespace}_{settings.instance_id}'`
- Functionality: Maintains project data in Redis with TTL management
- Implementation: `snapshotter/cacher.py`
Upon receiving a message from the processor distributor, the workers validate inputs and call the compute() function on the configured compute class to generate snapshots.
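The real compute classes live under `/computes/` and are wired in via `config/projects.json`; the snippet below is only an illustrative sketch of the general shape, with the class name, message fields, and method arguments assumed for the example.

```python
# Illustrative shape of a compute module as invoked by a snapshot worker.
# The class name, method signature, and message fields are assumptions for
# illustration; consult the modules under /computes/ for the real interface.
from dataclasses import dataclass


@dataclass
class EpochMessage:
    """Hypothetical stand-in for the message received from the distributor."""
    epoch_id: int
    begin_block: int
    end_block: int
    project_id: str


class ExampleSnapshotProcessor:
    async def compute(self, msg: EpochMessage, redis_conn, rpc_helper) -> dict:
        """Build and return the snapshot payload for one epoch/project pair."""
        # A real module would read cached blocks/receipts from Redis here and
        # derive the project-specific metrics (e.g. trade volume) from them.
        return {
            "epochId": msg.epoch_id,
            "projectId": msg.project_id,
            "blockRange": [msg.begin_block, msg.end_block],
        }
```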
The Core API is one of the most important components: it allows you to access the finalized protocol state on the smart contract running on the anchor chain. Find it in core_api.py.
The Core API service exposes several endpoints for accessing snapshot data and system information. All API endpoints are accessible via HTTP on port 8002 by default.
| Endpoint | Method | Description |
|---|---|---|
| `/health` | GET | Check the health status of the Snapshotter service |
| `/current_epoch` | GET | Get the current epoch data from the protocol state contract |
| `/latest_epoch_info` | GET | Get the latest epoch information for active pools |
| Endpoint | Method | Description |
|---|---|---|
| `/epoch/{epoch_id}` | GET | Get epoch information for a specific epoch ID |
| `/last_finalized_epoch/{project_id}` | GET | Get the last finalized epoch information for a given project |
| `/get_previous_epoch_info/{epoch_id}` | GET | Get previous epoch information for a given epoch ID |
| Endpoint | Method | Description |
|---|---|---|
| `/data/{epoch_id}/{project_id}/` | GET | Get snapshot data for a given epoch and project ID |
| `/cid/{epoch_id}/{project_id}/` | GET | Get the finalized CID for a given epoch and project ID |
| `/previous_snapshots_data/{pool_address}/{epoch_id}` | GET | Get previous snapshots data for a specific pool address |
| Endpoint | Method | Description |
|---|---|---|
| `/time_series/{epoch_id}/{start_time}/{step_seconds}/{project_id}` | GET | Get time series data points at specified intervals |
Parameters:
- `epoch_id`: The epoch ID to end the series at
- `start_time`: Unix timestamp in seconds of when to begin data
- `step_seconds`: Length of time in seconds between data points
- `project_id`: The ID of the project to get data for
Limitations:
- Maximum 200 observations per request
- Start time must be before the epoch timestamp
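For example, a time series request can be issued with any HTTP client; the sketch below uses Python's `requests` and assumes the Core API is reachable on its default port 8002 with a placeholder project ID.

```python
# Sketch of querying the time series endpoint, assuming the Core API is
# running locally on its default port 8002. The project ID is a placeholder.
import requests

BASE_URL = "http://localhost:8002"

epoch_id = 12345          # epoch the series should end at
start_time = 1700000000   # Unix timestamp (seconds) where the series begins
step_seconds = 3600       # one data point per hour (max 200 points per request)
project_id = "your_project_id"

resp = requests.get(
    f"{BASE_URL}/time_series/{epoch_id}/{start_time}/{step_seconds}/{project_id}",
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```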
The authentication service (snapshotter/auth/server_entry.py) provides user management and API key functionality:
| Endpoint | Method | Description |
|---|---|---|
| `/user` | POST | Create a new user |
| `/user/{email}/api_key` | POST | Generate an API key for a user |
| `/user/{email}/api_key` | DELETE | Delete a user's API key |
| `/user/{email}` | GET | Get user information |
| `/users` | GET | Get all users (admin only) |
Note: Authentication endpoints require proper authorization headers and are typically used for managing access to the Core API endpoints.
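As a rough illustration, the snippet below calls the user and API key endpoints with Python's `requests`. The service URL, authorization header, and request body are assumptions for the example; check `snapshotter/auth/server_entry.py` for the actual schema and deployment address.

```python
# Sketch of creating a user and issuing an API key via the auth service.
# The base URL, headers, and request body are assumptions for illustration;
# see snapshotter/auth/server_entry.py for the real schema.
import requests

AUTH_BASE_URL = "http://your-auth-service"                  # placeholder address
ADMIN_HEADERS = {"Authorization": "Bearer <admin-token>"}   # placeholder credentials

email = "user@example.com"

# Create a user (hypothetical payload).
requests.post(
    f"{AUTH_BASE_URL}/user",
    json={"email": email, "name": "Example User"},
    headers=ADMIN_HEADERS,
    timeout=10,
).raise_for_status()

# Generate an API key for that user.
resp = requests.post(f"{AUTH_BASE_URL}/user/{email}/api_key", headers=ADMIN_HEADERS, timeout=10)
resp.raise_for_status()
print(resp.json())
```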
The Uniswap V3 compute modules expose a rich set of endpoints for accessing detailed data. These are defined in computes/api/router.py.
| Endpoint | Method | Description |
|---|---|---|
| `/pool/{pool_address}/metadata` | GET | Retrieve metadata for a specific Uniswap V3 pool. |
| `/token/{token_address}/pools` | GET | Retrieve all pools associated with a specific token. |
| `/ethPrice` or `/ethPrice/{block_number}` | GET | Retrieve the ETH price snapshot for the latest or a specific block. |
| `/token/price/{token_address}/{pool_address}` | GET | Retrieve the price of a token in a specific pool, optionally at a specific block. |
| `/snapshot/base_all_pools/{token_address}` | GET | Retrieve base snapshots for all pools of a given token. |
| `/snapshot/base/{pool_address}` | GET | Retrieve the base snapshot for a specific pool, optionally at a specific block. |
| `/snapshot/trades/{pool_address}` | GET | Retrieve the trades snapshot for a specific pool, optionally at a specific block. |
| `/snapshot/allTrades` | GET | Retrieve the trades snapshot for all pools, optionally at a specific block. |
| `/tokenPrices/all/{token_address}` | GET | Retrieve all price snapshots for a token, optionally at a specific block. |
| `/tradeVolumeAllPools/{token_address}/{time_interval}` | GET | Retrieve aggregated trade volume for all pools of a token over a time interval. |
| `/tradeVolume/{pool_address}/{time_interval}` | GET | Retrieve aggregated trade volume for a specific pool over a time interval. |
| `/poolTrades/{pool_address}/{start_timestamp}/{end_timestamp}` | GET | Retrieve all trades for a pool between two timestamps. |
| `/timeSeries/{token_address}/{pool_address}/{time_interval}/{step_seconds}` | GET | Retrieve a time series of token prices. |
| `/dailyActiveTokens` | GET | Get a paginated list of daily active tokens with frequencies. |
| `/dailyActivePools` | GET | Get a paginated list of daily active pools with frequencies. |
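As an example, the sketch below queries two of these endpoints with Python's `requests`, assuming they are served alongside the Core API on port 8002; the pool address and the `24h` time-interval format are placeholders for illustration.

```python
# Sketch of querying two of the Uniswap V3 compute endpoints. The base URL,
# pool address, and time-interval format are illustrative assumptions.
import requests

BASE_URL = "http://localhost:8002"
POOL = "0x88e6a0c2ddd26feeb64f039a2c41296fcb3f5640"  # example USDC/WETH pool

# Pool metadata.
meta = requests.get(f"{BASE_URL}/pool/{POOL}/metadata", timeout=30)
meta.raise_for_status()
print(meta.json())

# Aggregated trade volume for the pool over a 24h interval (interval format assumed).
volume = requests.get(f"{BASE_URL}/tradeVolume/{POOL}/24h", timeout=30)
volume.raise_for_status()
print(volume.json())
```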
The Snapshotter includes several specialized Periphery services that handle specific aspects of blockchain data processing. These services work together to provide efficient, scalable, and reliable data collection and processing.
The [Block Fetcher Service](https://github.com/powerloom/snapshotter-periphery-blockfetcher/) (`snapshotter-periphery-blockfetcher`) is responsible for continuously fetching blockchain blocks and caching them in Redis for downstream processing.
Key Features:
- Continuous Block Monitoring: Tracks the latest blocks on the source blockchain
- Efficient Caching: Stores complete block data in Redis for quick access
- Configurable Polling: Adjustable polling intervals for different network conditions
- Test Mode: Special mode for development and testing with single block processing
- Graceful Shutdown: Handles shutdown signals properly to ensure data consistency
Architecture:
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ Blockchain RPC │◄────┤ Block Fetcher │────▶│ Redis Cache │
│ │ │ Service │ │ │
└─────────────────┘ └─────────────────┘ └─────────────────┘
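A minimal sketch of this fetch-and-cache loop, using `web3.py` and `redis-py`, is shown below; the Redis key format, TTL, and RPC URL are illustrative assumptions rather than the service's actual cache layout.

```python
# Minimal sketch of a fetch-and-cache loop using web3.py and redis-py.
# The Redis key format and TTL are illustrative assumptions, not the exact
# layout used by the Block Fetcher service.
import json
import time

import redis
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://your-rpc-endpoint"))
r = redis.Redis.from_url("redis://localhost:6379/0")

last_seen = 0
while True:
    latest = w3.eth.get_block("latest")
    number = latest["number"]
    if number > last_seen:
        # Cache the full block payload keyed by block number (hypothetical key).
        r.set(f"block:{number}", json.dumps(dict(latest), default=str), ex=3600)
        last_seen = number
    time.sleep(1.0)  # polling interval, cf. POLLING_INTERVAL
```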
Local Testing:
cd snapshotter-periphery-blockfetcher
docker-compose --profile local up --build

The Transaction Processor Service (`snapshotter-periphery-txprocessor`) processes transactions from cached blocks and extracts relevant information for snapshot generation.
Key Features:
- Transaction Receipt Processing: Fetches and caches detailed transaction receipts
- Event Log Extraction: Extracts and indexes event logs from transactions
- Redis-based Queue System: Consumes processing tasks from Redis queues
- Parallel Processing: Handles multiple transactions concurrently for efficiency
- Comprehensive Logging: Detailed logging for monitoring and debugging
Architecture:
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ Redis Queue │────▶│ TX Processor │────▶│ Redis Cache │
│ (Block Data) │ │ Service │ │ (TX Receipts) │
└─────────────────┘ └─────────────────┘ └─────────────────┘
│
▼
┌─────────────────┐
│ Blockchain RPC │
└─────────────────┘
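A simplified sketch of the receipt-fetching step, using `web3.py`'s async client and `redis-py`, is shown below; the cache key names and concurrency model are assumptions made for illustration.

```python
# Sketch of receipt fetching and caching for one block, using web3.py (v6+ async
# API) and redis-py. Key names and concurrency are illustrative assumptions.
import asyncio
import json

import redis.asyncio as aioredis
from web3 import AsyncWeb3

w3 = AsyncWeb3(AsyncWeb3.AsyncHTTPProvider("https://your-rpc-endpoint"))
r = aioredis.from_url("redis://localhost:6379/0")


async def cache_receipt(tx_hash) -> None:
    receipt = await w3.eth.get_transaction_receipt(tx_hash)
    # Store the receipt (including its event logs) under a hypothetical key.
    await r.set(f"receipt:{tx_hash.hex()}", json.dumps(dict(receipt), default=str), ex=3600)


async def process_block(block_number: int) -> None:
    block = await w3.eth.get_block(block_number)
    # Fetch and cache all receipts for the block concurrently.
    await asyncio.gather(*(cache_receipt(tx) for tx in block["transactions"]))


if __name__ == "__main__":
    asyncio.run(process_block(19_000_000))
```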
The Epoch Syncer Service (`snapshotter-periphery-epochsyncer`) monitors blockchain events and ensures data availability before triggering snapshot generation.
Key Features:
- Dual Chain Monitoring: Monitors both source chain blocks and protocol state events
- Cache Verification: Ensures both block and transaction data are cached before processing
- Event Detection: Detects `DayStartedEvent` and `SnapshotBatchSubmitted` events
- Dramatiq Integration: Uses Dramatiq for reliable message queue processing
- Adaptive Polling: Automatically adjusts polling intervals based on throughput
- Complete Cache Validation: Verifies both block and transaction cache completeness
Architecture:
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ │ │ │ │ │
│ Blockchain RPC │◄────┤ EpochSyncer │◄────┤ Rate Limiter │
│ │ │ │ │ │
└─────────────────┘ └────────┬────────┘ └─────────────────┘
│
▼
┌─────────────────┐ ┌─────────────────┐
│ │ │ │
│ Redis Cache │◄────┤ Cache Checker │
│ │ │ (Background) │
└─────────────────┘ └────────┬────────┘
│
▼
┌─────────────────┐
│ │
│ Dramatiq │
│ Workers │
│ │
└─────────────────┘
Key Components:
- Source Chain Block Detection: Monitors source blockchain for new blocks
- Protocol Event Detection: Monitors protocol state contract for epoch events
- Cache Completeness Verification: Ensures all required data is cached
- Message Queue Integration: Sends messages to downstream workers via Dramatiq
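The cache completeness idea can be illustrated with a small check like the one below; the `block:{number}` key format is a hypothetical stand-in for whatever layout the Block Fetcher actually uses.

```python
# Sketch of the cache-completeness idea: only dispatch an epoch once every
# block in its range is present in Redis. Key names are illustrative
# assumptions, not the service's actual cache layout.
import redis

r = redis.Redis.from_url("redis://localhost:6379/0")


def epoch_is_cached(begin_block: int, end_block: int) -> bool:
    """Return True if every block in [begin_block, end_block] has a cached entry."""
    keys = [f"block:{n}" for n in range(begin_block, end_block + 1)]
    present = r.exists(*keys)  # EXISTS returns the count of keys that exist
    return present == len(keys)


if __name__ == "__main__":
    if epoch_is_cached(19_000_000, 19_000_009):
        print("All blocks cached; safe to dispatch the epoch to workers")
    else:
        print("Cache incomplete; keep polling")
```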
The [Rate Limiter Service](https://github.com/powerloom/rate-limiter/) (`rate-limiter`) provides centralized rate limiting for all RPC calls across the Snapshotter ecosystem.
Key Features:
- Multiple Rate Limits: Configure different rate limits for different keys
- Statistics Tracking: Track hourly and daily usage for each key
- In-Memory Storage: Fast, efficient rate limit checking
- RESTful API: Simple HTTP API for rate limit management
- Health Monitoring: Built-in health check endpoint
API Endpoints:
| Endpoint | Method | Description |
|---|---|---|
| `/check/{key}` | GET | Check if a key is within its rate limit |
| `/configure` | POST | Configure a custom rate limit for a key |
| `/stats/{key}` | GET | Get usage statistics for a key |
| `/health` | GET | Health check endpoint |
Rate Limit Format:
{number}/{unit}
Where:
- `number`: A positive integer
- `unit`: One of "second", "minute", "hour", "day"
Examples: 10/second, 100/minute, 1000/hour, 5000/day
Usage Example:
# Check rate limit
curl http://localhost:8000/check/my-api-key
# Configure custom rate limit
curl -X POST http://localhost:8000/configure \
-H "Content-Type: application/json" \
-d '{"key": "my-api-key", "limit": "100/minute"}'
# Get statistics
curl http://localhost:8000/stats/my-api-key

The snapshotter is a distributed system with multiple moving parts. The easiest way to get started is by using the Docker-based setup.
- Clone the repository:
  git clone https://github.com/PowerLoom/snapshotter-core-edge.git
  cd snapshotter-core-edge
- Configure your environment:
  cp env.example .env
  # Edit .env with your settings
- Run the bootstrap script to set up Periphery services:
  ./bootstrap.sh
- Build and run:
  ./build.sh
The bootstrap.sh script automatically:
- Sets up all Periphery services (Block Fetcher, TX Processor, Epoch Syncer)
- Configures Redis and IPFS
- Generates the docker-compose.yaml with all required services
- Creates worker services based on your project configuration
Note: RPC usage is highly use-case specific. If your use case is complex and makes a lot of RPC calls, it is recommended to run your own RPC node, as third-party RPC services can become expensive.
If you want to increase the memory limits for IPFS, run the following commands (note that these settings reset on system restart):
sudo sysctl -w net.core.rmem_max=8388608
sudo sysctl -w net.core.wmem_max=8388608
sudo sysctl -w net.ipv4.udp_mem='8388608 8388608 8388608'
sudo sysctl -w net.core.netdev_max_backlog=5000
sudo sysctl -w net.ipv4.tcp_rmem='4096 87380 8388608'
sudo sysctl -w net.ipv4.tcp_wmem='4096 87380 8388608'

To make these changes permanent, add or modify the following lines in /etc/sysctl.conf:
net.core.rmem_max=8388608
net.core.wmem_max=8388608
net.ipv4.udp_mem=8388608 8388608 8388608
net.core.netdev_max_backlog=5000
net.ipv4.tcp_rmem=4096 87380 8388608
net.ipv4.tcp_wmem=4096 87380 8388608
Apply the changes with:
sudo sysctl -p

Restart the Docker service:
sudo systemctl restart docker

Finally, bring up your Docker Compose stack again:
./clean_stop.sh
./build.sh

The snapshotter needs the following config files to be present:
- `config/auth_settings.json`: Authentication configuration. Copy from `config/auth_settings.example.json`. This enables an authentication layer over the core API.
- `config/settings.json`: The primary configuration file. Copy from `config/settings.example.json` and configure:
  - `instance_id`: Your unique node identifier
  - `namespace`: Project namespace for consensus
  - RPC endpoints and rate limits
  - Redis connection settings
  - Protocol state contract details
- `config/projects.json`: Defines base snapshot computation tasks. Each entry maps a project type to a compute module.
- `config/aggregator.json`: Defines aggregation tasks over base snapshots. Copy `config/aggregator.example.json` to `config/aggregator.json`.
- `config/preloader.json`: Optional preloading configuration for data that needs to be cached before snapshot generation. Copy `config/preloader.example.json` to `config/preloader.json`.
The .env file contains all the runtime configuration. Key variables include:
# Source Chain RPC
SOURCE_RPC_URL=https://your-ethereum-rpc
SOURCE_RPC_RATE_LIMIT="100/second"
# PowerLoom Protocol
NAMESPACE=your_namespace
INSTANCE_ID=your_unique_instance_id
SIGNER_ACCOUNT_ADDRESS=0x...
SIGNER_ACCOUNT_PRIVATE_KEY=0x...
# Redis Configuration
REDIS_HOST=redis
REDIS_PORT=6379
REDIS_DB=0
# IPFS Configuration
IPFS_URL=http://ipfs:5001
IPFS_READER_URL=http://ipfs:8080
# Protocol State Contract
PROTOCOL_STATE_CONTRACT=0x...
POWERLOOM_RPC_URL=https://rpc-prost.powerloom.io

When you run ./build.sh, it:
- Reads your `projects.json` and creates a worker service for each project type
- Reads your `aggregator.json` and configures aggregation workers
- Generates the final `docker-compose.yaml` with all services properly configured
- Sets up networking and dependencies between services
For example, if you have project types trade_volume and pair_reserves, the build script will create:
- `snapshotter-worker-trade-volume` service
- `snapshotter-worker-pair-reserves` service
- Plus all the Periphery services (block fetcher, tx processor, epoch syncer)
The new distributed architecture requires configuration for each Periphery service. These services work together to provide efficient blockchain data processing.
The Block Fetcher service requires the following environment variables:
# RPC Configuration
SOURCE_RPC_URL=https://your-rpc-endpoint
SOURCE_RPC_RATE_LIMIT="100/second"
SOURCE_RPC_REQUEST_TIMEOUT=30
SOURCE_RPC_RETRY_COUNT=3
# Redis Configuration
REDIS_URL=redis://localhost:6379/0
# Service Configuration
NAMESPACE=your_namespace
POLLING_INTERVAL=1.0
TEST_MODE=false # Set to true for single block testing

The Transaction Processor service configuration:
# RPC Configuration
SOURCE_RPC_URL=https://your-rpc-endpoint
SOURCE_RPC_RATE_LIMIT="100/second"
# Redis Configuration
REDIS_URL=redis://localhost:6379/0
# Service Configuration
NAMESPACE=your_namespace
CONCURRENT_WORKERS=10
BATCH_SIZE=50

The Epoch Syncer service requires both source chain and PowerLoom chain configuration:
# Source Chain RPC
SOURCE_RPC_URL=https://your-source-rpc-endpoint
SOURCE_RPC_RATE_LIMIT="100/second"
# PowerLoom Chain RPC
POWERLOOM_RPC_URL=https://your-powerloom-rpc-endpoint
POWERLOOM_RPC_RATE_LIMIT="50/second"
# Redis Configuration
REDIS_URL=redis://localhost:6379/0
# Protocol Configuration
PROTOCOL_STATE_CONTRACT_ADDRESS=0x...
DATA_MARKET_CONTRACT_ADDRESS=0x...
NAMESPACE=your_namespace
INSTANCE_ID=your_instance_id
# Performance Settings
BATCH_SIZE=50
MAX_CONCURRENT_CACHE_CHECKS=20
ADAPTIVE_POLLING=true

The Rate Limiter service configuration is minimal:
# Default rate limit for all keys
DEFAULT_RATE_LIMIT="10/second"
# Service port
PORT=8000

For local development, each Periphery service includes a Docker Compose configuration. To run services locally:
# Block Fetcher
cd snapshotter-periphery-blockfetcher
docker-compose --profile local up --build
# Transaction Processor
cd snapshotter-periphery-txprocessor
docker-compose up --build
# Epoch Syncer
cd snapshotter-periphery-epochsyncer
docker-compose up --build
# Rate Limiter
cd rate-limiter
docker build -t rate-limiter .
docker run -p 8000:8000 rate-limiter

The Periphery services integrate with the core snapshotter through:
- Shared Redis Cache: All services use the same Redis instance for data sharing
- Message Queues: Services communicate through Redis-based queues and Dramatiq
- Rate Limiting: All RPC calls go through the centralized rate limiter
- Namespace Consistency: All services must use the same namespace configuration
Ensure all services are configured with:
- Same Redis connection parameters
- Same namespace value
- Compatible RPC endpoints
- Proper network connectivity between services
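A quick way to sanity-check the shared-Redis and namespace assumptions from any service host is a short script like the following; it only uses the `REDIS_URL` and `NAMESPACE` settings shown above and is not part of the codebase.

```python
# Quick sanity check that a service host can reach the shared Redis instance
# and which namespace it is configured with. Convenience sketch only.
import os

import redis

redis_url = os.environ.get("REDIS_URL", "redis://localhost:6379/0")
namespace = os.environ.get("NAMESPACE", "")

r = redis.Redis.from_url(redis_url)
r.ping()  # raises if Redis is unreachable
print(f"Redis reachable at {redis_url}; using namespace {namespace!r}")
```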
To ensure a consistent and correct testing environment, follow these steps to configure your virtual environment and verify the test configuration loading mechanism.
This project uses Poetry 2.0+ for dependency management and requires Python 3.12.
Required Tools:
- Python 3.12.x: Using `pyenv` for Python version management is strongly recommended
- Poetry 2.0+: Modern Python dependency management tool
1. Install Python 3.12 with pyenv (Recommended)
If you don't have pyenv installed, follow the pyenv installation guide.
# Install Python 3.12 (use the latest available patch version)
pyenv install 3.12.11
# Verify installation
pyenv versions

2. Install Poetry
If you don't have Poetry installed, follow the official Poetry installation guide.
# Verify Poetry version (should be 2.0+)
poetry --version

3. Set Up Project Environment
Navigate to the project root directory and set up the Python version:
# Navigate to project root
cd /path/to/snapshotter-core-edge
# Set Python version for this project
pyenv local 3.12.11
# Verify correct Python version is active
python --version # Should show Python 3.12.11

4. Install Dependencies
# Install all dependencies including development tools
poetry install
# Verify installation
poetry env info # Shows virtual environment details

5. Activate Environment
With Poetry 2.0, you have several options to work with the virtual environment:
# Option A: Use poetry run for individual commands
poetry run python --version
poetry run pytest tests/
# Option B: Get activation command (recommended for development)
poetry env activate
# Then source the provided activation command
# Option C: Spawn a new shell with environment activated (Requires the shell plugin)
poetry shell

6. Create Test Environment Configuration
Before running tests, create your test environment configuration:
# Copy the test environment template
cp env.test.example .env.test
# Edit .env.test with your test configuration values

The .env.test file contains all the configuration values needed for running tests. Here's a breakdown of each section and what values to use:
These settings configure the main blockchain RPC endpoints for testing:
# Main RPC endpoint - use a reliable Ethereum RPC provider
TEST_RPC_URL_FULL_NODE_1=https://eth-mainnet.alchemyapi.io/v2/YOUR_API_KEY
# Archive node (optional) - for historical data queries
TEST_RPC_URL_ARCHIVE_NODE_1=https://eth-mainnet.alchemyapi.io/v2/YOUR_ARCHIVE_KEY
# Connection settings (defaults are usually fine)
TEST_RPC_REQUEST_TIMEOUT=30 # Request timeout in seconds
TEST_RPC_RETRY_COUNT=3 # Number of retry attempts
TEST_RPC_MAX_CONNECTIONS=100 # Max concurrent connections
TEST_RPC_MAX_KEEPALIVE_CONNECTIONS=50 # Max persistent connections
TEST_RPC_KEEPALIVE_EXPIRY=300 # Connection keep-alive time

These configure the Powerloom anchor chain (if different from main RPC):
# Powerloom-specific anchor chain endpoint
TEST_ANCHOR_RPC_URL_FULL_NODE_1=
TEST_ANCHOR_RPC_URL_ARCHIVE_NODE_1=
# Lower connection limits for anchor chain
TEST_ANCHOR_RPC_MAX_CONNECTIONS=5
TEST_ANCHOR_RPC_MAX_KEEPALIVE_CONNECTIONS=2

Configure IPFS for data storage and retrieval:
# Local IPFS node (recommended for testing)
TEST_IPFS_URL=/ip4/127.0.0.1/tcp/5001
# Or remote IPFS service
# TEST_IPFS_URL=/dns/your-ipfs-provider.com/tcp/443/https
# IPFS connection settings
TEST_IPFS_TIMEOUT=60 # Request timeout
TEST_IPFS_MAX_RETRIES=3 # Retry attempts

Configure Redis for caching and state management:
TEST_REDIS_HOST=localhost # Redis server host
TEST_REDIS_PORT=6379 # Redis server port
TEST_REDIS_DB=0 # Database number (0-15)
TEST_REDIS_PASSWORD= # Password (empty for no auth)
TEST_REDIS_TIMEOUT=5 # Connection timeout

Configure the snapshotter core API:
TEST_CORE_API_PORT=8002 # Port for core API server
TEST_BLOCK_SHIFT_FOR_BITMAP_INDEX=22400000 # Block indexing offset

Set the contract addresses and namespace:
# Your unique namespace identifier
TEST_NAMESPACE=my_test_namespace
# Smart contract addresses
TEST_PROTOCOL_STATE_CONTRACT_ADDRESS=0x3B5A0FB70ef68B5dd677C7d614dFB89961f97401
TEST_DATA_MARKET_CONTRACT_ADDRESS=0xae32c4FA72E2e5F53ed4D214E4aD049286Ded16f
# Chain Wrapped ETH
TEST_WETH_ADDRESS=0xC02aaA39b223FE8D0A0e5C4F27eAD9083C756Cc2

Configure external data providers:
# Etherscan API
TEST_ETHERSCAN_API_KEY=your_etherscan_api_key_here
TEST_ETHERSCAN_URL=https://api.etherscan.io/v2/
# CoinMarketCap API (for price data)
COINMARKETCAP_API_KEY=your_cmc_api_key_here
COINMARKETCAP_API_URL=https://pro-api.coinmarketcap.com
COINMARKETCAP_API_PRICE_TOLERANCE=5 # Acceptable price variance %

7. Run Tests
# Run all tests
poetry run pytest
# Run specific test files
poetry run pytest tests/shared_fixtures/test_config_loading.py
# Run with verbose output
poetry run pytest -v tests/shared_fixtures/test_config_loading.py::test_ipfs_settings_are_correct

To verify your environment is set up correctly:
# Check Python version
poetry run python --version
# Check that pytest is available
poetry run pytest --version
# Verify test configuration loads correctly
poetry run pytest tests/shared_fixtures/test_config_loading.py::test_app_settings_loaded_successfully -v

Tests require specific environment variables to be set, which control aspects like RPC endpoints, contract addresses, and other test parameters. These are loaded from a `.env.test` file located in the project root.
- Create `.env.test`:
  - Copy the example file `env.test.example` to `.env.test` in the project root directory: `cp env.test.example .env.test`
  - Crucially, edit `.env.test` and replace all placeholder values (like `your_actual_test_rpc_url`, `0xYourTestContractAddress`...) with valid data for your testing environment. The tests will not pass with placeholder values.
A dedicated test suite verifies that the test configuration mechanism (driven by tests/shared_fixtures/conftest.py and your .env.test file) works correctly. This test ensures that the application's main configuration files (e.g., config/settings.json) are correctly populated with test-specific values at runtime.
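For orientation, the essence of the env-loading step can be reduced to a few lines with `python-dotenv`, as sketched below; the real `conftest.py` additionally copies the `*.example.json` files and restores the originals at session end.

```python
# Minimal illustration of the .env.test loading step, using python-dotenv.
# The real tests/shared_fixtures/conftest.py also copies *.example.json files
# and restores originals at session end; this only shows the env-loading idea.
import os
from pathlib import Path

from dotenv import load_dotenv

PROJECT_ROOT = Path(__file__).resolve().parent  # adjust to your project root
load_dotenv(PROJECT_ROOT / ".env.test", override=True)

print("TEST_NAMESPACE =", os.environ.get("TEST_NAMESPACE"))
```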
- Run the Test:
- Ensure your virtual environment is activated.
- From the project root directory, run the following Pytest command:
poetry run pytest tests/shared_fixtures/test_config_loading.py -s

The `-s` flag is optional but helpful as it shows `print` statements from your `conftest.py` and tests, which can aid in debugging if issues arise.
- Expected Outcome:
  - All tests within `test_config_loading.py` should pass.
  - You should see output from `conftest.py` indicating:
    - The project root being added to `sys.path`.
    - Loading of environment variables from `.env.test`.
    - Copying of `*.example.json` files to their active names (e.g., `settings.json`).
    - Population of `settings.json` and `auth_settings.json` with test data.
    - At the end of the session, restoration of original config files (or removal of test-generated ones if no originals existed).
  - If all tests pass, your environment is correctly set up for running the broader test suite, as the core mechanism for providing test-specific configurations to the application is working.
- `FileNotFoundError: .env.test`: Ensure `.env.test` exists in the project root.
- Tests Failing with Placeholder Values: Double-check that you've replaced all placeholder values in your `.env.test` with actual, valid data.
- `ModuleNotFoundError`:
  - Ensure your virtual environment is active (`source .venv/bin/activate` or `poetry shell`).
  - Confirm that `poetry install --with dev` completed successfully.
  - The `conftest.py` automatically adds the project root to `sys.path`. If module import issues persist, verify the `PROJECT_ROOT` definition in `tests/shared_fixtures/conftest.py` correctly points to your project's top-level directory.
- Errors during config file population/restoration: The print statements from `pytest_sessionstart` and `pytest_sessionfinish` in `conftest.py` should provide details on which file operations are failing. Check file permissions and paths.
The Core API service provides interactive API documentation through FastAPI's built-in SwaggerUI:
http://localhost:8002/docs
This interface allows you to:
- Explore all available endpoints
- Test API calls directly from the browser
- View request/response schemas
- Understand parameter requirements
