fix(contracts): improve contracts and add tests (#534)
* Refactor: improve schema setup and benchmark case handling
  Updated the benchmark setup to use a tree structure for schemas and enhanced benchmark case handling. Adjusted setup functions, created a `SetupSchemasInput` type, modified results handling, and added more descriptive comments.
* Increase state machine timeout and add comment in benchmark
  Extended the state machine timeout from 30 to 120 minutes to accommodate longer-running benchmarks. Added a comment in the runSingleTest function clarifying that the index-0 stream is queried as the root stream.
* Add depth check to prevent benchmarks from exceeding limits
  Introduced a check in the benchmark setup to ensure that the tree's maximum depth does not exceed PostgreSQL's limits, preventing potential errors. Added a new constant, maxDepth, set to 179 based on empirical findings.
* Increase timeouts and adjust polling intervals
  Extended the state machine timeout from 2 hours to 6 hours to accommodate longer processing times. Adjusted task-specific timeouts and added a new polling interval to optimize the frequency of status checks during prolonged operations.
* Update execution timeout parameter in Step Functions
  Renamed the "timeoutSeconds" parameter to "executionTimeout" for clarity, and renamed the "TimeoutSeconds" constant to align with updated AWS guidelines.
* Optimize get_index_change to remove nested loop
* Set default LOG_RESULTS to true and conditionally print results
  Defaulted LOG_RESULTS to true in TestBench so that results are logged unless specified otherwise. Modified benchmark.go to print results only when the LOG_RESULTS environment variable allows it. Updated step_functions.go to explicitly set LOG_RESULTS to false when executing benchmarks from the deployed environment.
* Add index change tests for contract validation
  Implemented test cases to validate index change and YoY index calculations.
  Includes initialization, data insertion, and result conversion to ensure accuracy and coverage of edge cases.
* Refactor error handling in benchmark workflow
  Introduced a `formatErrorState` pass state for better error formatting and handling. Replaced the `Fail` state with a chain of `Pass` and `Fail` states so that structured error information is passed upstream. Adjusted error catching and chaining to integrate with the new error handling structure.
* Remove unsupported micro instances from benchmark types
  Micro instances were causing errors and hangs during tests, so they have been commented out of the list of tested EC2 instance types. Medium and large instance types have been added to ensure thorough benchmarking.
* Optimize record insertion process
  Modified the insertRecordsForPrimitive function to use bulk inserts for faster database operations. Records are now batched into a single SQL insert statement, significantly improving performance by reducing the number of individual insert operations.
* Add unit tests for NewTree function
  Implemented comprehensive unit tests for the NewTree function covering scenarios such as different quantities of streams and branching factors. The tests also check tree structure, node properties, and special cases for larger trees.
* Fix null values without type annotations in the streams template
* Refactor composed_stream_template procedures
  Added `child_data_providers` and `child_stream_ids` parameters to `get_raw_record` and `get_raw_index`. Updated the logic to buffer and emit ordered results, ensuring proper handling of data array lengths and emitting results by date and taxonomy index sequentially.
* Assign default value to avoid null error in buffer handling
  Assigned a default value of 0 to the buffer length to prevent null errors during buffer length evaluation. This ensures that buffer dates are processed correctly and avoids unexpected termination due to a null length assignment.
* Add os package import to benchmark.go
* Fix critical bugs and optimize get_raw_record and get_raw_index procedures
  - Add checks for empty child taxonomies to prevent null upper bound errors
  - Improve buffer handling and array initialization to avoid potential issues
  - Refactor loop structure for better efficiency and correctness
  - Update comments and improve code readability
* Simplify data composition logic
  Refactored the handling of child data providers and stream IDs by removing unnecessary buffering and looping. The result is cleaner code that directly returns data values and indices while preserving ordering by date and taxonomy index.
* Refactor and optimize taxonomy processing
  Simplified the loop logic for processing taxonomies and emitting values by removing unnecessary steps and optimizing array handling. Introduced a new approach to array element removal and date-based value emission, reducing code complexity and improving maintainability.
* Add CSV export alongside markdown in Export Results Lambda
  The function now merges CSV files and saves both a markdown and a CSV file back to the results bucket. New code handles reading and uploading the merged CSV file to S3, ensuring both formats are available.
* Wrap errors with more context for better debugging
  Added `github.com/pkg/errors` to wrap errors throughout the codebase, providing more context and improving debugging. This includes error wrapping in file operations, schema creation, metadata insertion, and benchmark runs.
* Update kwil-db dependencies to latest versions
  Upgraded the kwil-db, kwil-db/core, and kwil-db/parse modules to their latest revisions in go.mod and go.sum, picking up the most current features and fixes from these libraries.
* Add complex composed tests for contract validation
  Introduced comprehensive test cases in `complex_composed_test.go` covering record retrieval, index checks, latest value checks, out-of-range data handling, and error scenarios. Deployed the necessary contracts and initialized datasets for testing.
* Add ToDisplay method to Tree and a test for visualization
  The new ToDisplay method provides a string representation of the tree, showing parent-child relationships. A corresponding test function, TestDisplayTree, verifies the output for various branching factors.
* Refactor load_test.go for improved stream and depth testing
  Simplified the shapePairs for clarity and added new test cases to evaluate the cost of adding streams and depth. Reduced the number of samples from 10 to 3 and adjusted the days array to exclude 3 days. Commented out tests that caused errors or hit call stack size limits.
* Add CloudWatch log group for SSM command execution
  Created a CloudWatch log group to capture logs for EC2 benchmark tasks. Updated the IAM role to include the CloudWatch managed policy for logging.
* Retry benchmark execution up to 3 times upon failure
  Added a loop that attempts the benchmark up to three times before giving up, with a 10-second interval between retries, making handling of transient failures more robust. Also removed redundant command concatenations for readability.
* Fix tree initialization for single stream scenario
  Ensured that the root node is correctly marked as a leaf when there is only one stream, returning the initialized tree immediately in that case.
* Chunk long-running tests to prevent Postgres timeouts
  Split function tests into groups of 10 to avoid exhausting Postgres during execution. Introduced a helper function, `chunk`, to divide the tests, improving test reliability and stability.
* Remove retry logic from benchmark script
  Simplified the benchmark step by removing the retry logic from the script; the benchmark now runs once without reattempting on failure.
* Refactor benchmark functions and add results handling
  Introduced a results channel for collecting benchmark results and improved test robustness with retry logic. Added logging to track benchmark execution and integrated a cleanup function to handle interruptions gracefully.
* Add README for internal contracts directory
  Introduced a README for the `internal/contracts` directory detailing the purpose and contents of the Kuneiform contracts used in the Truflation Stream Network (TSN), including descriptions of each contract file, synchronization practices, and links to additional resources.
* Refactor timeout handling in benchmark state machine
  Centralized timeout handling by defining a `TotalTimeout` constant in a new constants file and referencing it across the code, improving maintainability, ensuring consistency, and easing future modifications.
* Parallelize schema parsing and batch metadata insertion
  Parallelized schema parsing using goroutines and added bulk insertion for metadata, improving the performance and overall speed of the setup operation.
* Disable 800 stream test cases due to memory issues
  Commented out the test cases involving 800 streams because they cause memory starvation on t3.small instances; these tests store the entire tree in memory, which significantly increases memory usage.