Is your feature request related to a problem? Please describe.
There are no publicly accessible benchmarking reports for the performance of Clamp-Core. These reports, along with scripts to replicate the setup and the runs, should be available as part of clamp-core. This is a feature request to create scripts to provision the environment, execute the benchmarks, and publish the benchmarking reports.
Describe the solution you'd like
The following items are requested as part of this feature request:
A set of Terraform scripts to stand up the benchmarking infrastructure in the AWS cloud
A set of database seed scripts to optionally pre-create data in the database (a minimal sketch follows this list)
A set of scripts with a benchmarking tool to execute and measure requests
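A rough sketch of what a seed script could look like, assuming a Postgres backing store and an illustrative table/column layout (the names below are placeholders, not Clamp-Core's actual schema):

```go
// Hypothetical seed script: inserts placeholder workflow rows into the
// benchmarking Postgres instance so each run starts from a known dataset.
// The connection string, table, and columns are illustrative only.
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq"
)

func main() {
	db, err := sql.Open("postgres", "postgres://clamp:clamp@localhost:5432/clampcore?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Pre-create N workflow definitions to simulate an existing dataset.
	for i := 0; i < 10000; i++ {
		name := fmt.Sprintf("benchmark-workflow-%d", i)
		if _, err := db.Exec(
			"INSERT INTO workflows (name, description) VALUES ($1, $2)",
			name, "seeded for benchmarking",
		); err != nil {
			log.Fatal(err)
		}
	}
	log.Println("seed complete")
}
```

The seed size could be a parameter so reports can be published for different dataset sizes.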
With these in place, the benchmarking scripts should be able to measure how many concurrent requests can be triggered per second. The results can be published as part of the documentation, along with details of the existing dataset in the database and the setup used for the DB, RabbitMQ, and Kafka.
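As a sketch of how the throughput measurement could work, the snippet below fires concurrent POST requests at a placeholder Clamp-Core endpoint and reports achieved requests per second. The URL, payload, and counts are assumptions; a real script would target whichever Clamp-Core API the benchmark exercises (or wrap an existing load tool instead):

```go
// Minimal load-generation sketch: a fixed pool of workers drains a job queue,
// posting to a placeholder endpoint, and the achieved throughput is reported.
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"sync"
	"sync/atomic"
	"time"
)

func main() {
	const (
		target      = "http://clamp-core.internal:8080/workflow" // placeholder URL
		concurrency = 50
		total       = 10000
	)
	payload := []byte(`{"name": "benchmark-workflow"}`) // placeholder body

	var ok, failed int64
	jobs := make(chan struct{}, total)
	for i := 0; i < total; i++ {
		jobs <- struct{}{}
	}
	close(jobs)

	start := time.Now()
	var wg sync.WaitGroup
	for w := 0; w < concurrency; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for range jobs {
				resp, err := http.Post(target, "application/json", bytes.NewReader(payload))
				if err != nil {
					atomic.AddInt64(&failed, 1)
					continue
				}
				resp.Body.Close()
				if resp.StatusCode >= 400 {
					atomic.AddInt64(&failed, 1)
					continue
				}
				atomic.AddInt64(&ok, 1)
			}
		}()
	}
	wg.Wait()

	elapsed := time.Since(start).Seconds()
	fmt.Printf("ok=%d failed=%d throughput=%.1f req/s\n", ok, failed, float64(ok)/elapsed)
}
```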
Describe alternatives you've considered
No specific alternatives at the moment.
Additional context
The environment should be repeatable so the benchmarking numbers can be iterated on and improved. The results could also be updated and published as artifacts periodically through automated runs.