Benchmarking Vector

With the BFF pattern in place, there are two vectors along which Shiksha can be benchmarked, as opposed to one when there are no BFFs. The pattern adds flexibility by decoupling the backend/microservices from the APIs the frontend requires:
1. BFF/Adapter/Middleware Layer
2. Actual Backend
Since 1 depends on 2, benchmarking the BFF layer exercises the backend as well, though it also makes the results dependent on the backend's ability to scale. Benchmarking 1 additionally lets us build a generic benchmarking suite that can be maintained alongside the adapters as they are created, and helps surface bottlenecks in the backend itself.
Endpoint vs. Scenario Testing
When performance testing, you can call a single URL or multiple URLs in succession. The most straightforward approach is just testing single API endpoints. For a more realistic view, it can be helpful to write a few scenarios of multiple API calls that typically happen in succession.
The tests should cover both.
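To make the distinction concrete, the two styles might look like the following in Locust. This is a hypothetical sketch: the endpoint paths, credentials, and user journey are placeholders, not Shiksha's actual API.

```python
from locust import HttpUser, task, between


class EndpointUser(HttpUser):
    """Endpoint testing: hit a single API in isolation."""
    wait_time = between(1, 3)

    @task
    def get_courses(self):
        # Placeholder path -- substitute a real Shiksha/BFF endpoint.
        self.client.get("/api/courses")


class ScenarioUser(HttpUser):
    """Scenario testing: a realistic sequence of calls one user makes."""
    wait_time = between(1, 3)

    @task
    def login_and_browse(self):
        # Calls that typically happen in succession for one user journey.
        self.client.post("/api/login", json={"user": "u", "password": "p"})
        self.client.get("/api/dashboard")
        self.client.get("/api/courses/1")
```

Either user class can be run with `locust -f locustfile.py --host <target>`, so the same tool and reporting cover both test styles.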
Tools
It is proposed that we use Locust for both scenario and endpoint testing, since it supports both use cases. A workflow can be built by playing it through in Chrome and downloading the resulting HAR file; the HAR file can then be converted to a Locust file using this converter and adjusted easily.
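As a sketch of what the HAR-to-Locust step works with: a HAR capture is just JSON, and the recorded request list can be extracted with the standard library. The structure below follows the standard HAR 1.2 layout; the sample entries are invented for illustration.

```python
import json


def har_requests(har_text):
    """Extract (method, url) pairs from a HAR capture, in recorded order.

    These pairs are what a HAR-to-Locust converter turns into
    self.client.<method>(...) calls inside a Locust task.
    """
    har = json.loads(har_text)
    return [
        (entry["request"]["method"], entry["request"]["url"])
        for entry in har["log"]["entries"]
    ]


# Invented minimal capture for illustration.
sample = json.dumps({
    "log": {
        "entries": [
            {"request": {"method": "POST", "url": "https://example.org/login"}},
            {"request": {"method": "GET", "url": "https://example.org/dashboard"}},
        ]
    }
})

print(har_requests(sample))
# [('POST', 'https://example.org/login'), ('GET', 'https://example.org/dashboard')]
```

Recording in Chrome and replaying the extracted sequence is what makes the scenario realistic: the call order and payload shapes come from an actual user session rather than being hand-written.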
The individual scenarios can then be run against a list of users using a simple plugin like this.
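Independent of any particular plugin, the "list of users" idea usually comes down to a shared pool of test accounts that each simulated user draws from, so no two concurrent users share a session. A minimal sketch of that mechanic (the account values here are invented; in practice they would be loaded from a CSV):

```python
from itertools import cycle

# Invented test accounts; in practice these would come from a CSV file.
TEST_USERS = [
    {"username": "student1", "password": "pw1"},
    {"username": "student2", "password": "pw2"},
    {"username": "teacher1", "password": "pw3"},
]

# Shared across simulated users, so each one picks the next account
# when it starts up (e.g. from a Locust on_start hook).
_user_pool = cycle(TEST_USERS)


def next_test_user():
    """Return the next account; wraps around when the list is exhausted."""
    return next(_user_pool)


print([next_test_user()["username"] for _ in range(4)])
# ['student1', 'student2', 'teacher1', 'student1']
```

Cycling keeps the benchmark running even when more simulated users are spawned than there are test accounts, at the cost of some account reuse.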