Load simulator #3125
Conversation
Add rems.simulate and rems.simulate-util namespaces. The load simulator works by spawning threads that operate indefinitely on separate headless Chrome instances. Each thread then runs actions against the supplied REMS URL. A daemon thread wakes up periodically to check thread status and restarts threads if necessary. Threads choose random users and actions.
rems.simulate imports both the rems.test-browser and rems.browser-test-util namespaces, which import rems.main, which imports rems.simulate, causing a circular reference.
Tests may call (mount/start), which could inadvertently start the simulator if the namespace was loaded, because the arguments were not validated.
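One way to address both problems at once is a guard that validates the arguments before anything starts, and a runtime `requiring-resolve` so the main namespace never needs a compile-time require of the simulator. This is only a sketch of the pattern, not the PR's actual code: `start-simulator-if-requested!`, the `"load-simulator"` argument convention, and `rems.simulate/start!` are hypothetical names.

```clojure
(ns rems.main-sketch)

;; Hypothetical guard: only start the simulator when the CLI args
;; explicitly request it, so a bare (mount/start) from a test run
;; cannot trigger it by accident.
(defn start-simulator-if-requested! [args]
  (when (= "load-simulator" (first args))
    ;; requiring-resolve loads the simulator namespace lazily at
    ;; runtime, avoiding a compile-time circular reference between
    ;; rems.main and rems.simulate.
    ((requiring-resolve 'rems.simulate/start!) (rest args))))
```

Because the simulator branch is only taken for an explicit `"load-simulator"` argument, loading this namespace in a test context is harmless.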
Only parse CLI opts when args are passed from the CLI, and support passing args from the REPL as a regular map.
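The dispatch could look roughly like this (a sketch; the PR presumably uses a real CLI parser such as clojure.tools.cli, and `normalize-args` plus the hand-rolled `parse-cli-args` are hypothetical names invented here for illustration). The defaults are the ones listed in the PR description:

```clojure
(ns rems.simulate-args-sketch)

(def defaults {:url "http://localhost:3000/" :concurrency 8})

(defn- parse-cli-args
  "Parses [\"--url\" \"u\" \"--concurrency\" \"4\"] into an options map."
  [args]
  (into {}
        (for [[flag value] (partition 2 args)]
          [(keyword (subs flag 2))
           (if (= flag "--concurrency") (Long/parseLong value) value)])))

;; From the CLI we receive a seq of strings that must be parsed;
;; from the REPL a plain options map is accepted as-is.
(defn normalize-args [args]
  (merge defaults (if (map? args) args (parse-cli-args args))))
```

Either way the caller ends up with one options map, e.g. `(normalize-args {:concurrency 4})` from the REPL or `(normalize-args ["--concurrency" "4"])` from the CLI.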
Threads could read stale data when fetching a new user and end up using the same user. Technically not a problem, but the underlying mechanism was designed to assign a unique user to each action. (locking), together with swapping the atom value, should achieve this. More statistics are now reported, and more frequently. Task logging gets very noisy with many threads, so it now has debug as the default log level.
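The race being closed here is the classic check-then-update: reading the atom and then swapping it as two separate steps lets two threads pick the same user from a stale snapshot. A minimal sketch of the pattern (names like `assign-unique-user!` are hypothetical):

```clojure
(ns rems.user-assignment-sketch)

;; Hypothetical shared state: task-id -> {:user-id ...}
(def current-tasks (atom {}))

(defn assign-unique-user! [task-id all-users]
  ;; Holding the lock across both the deref and the swap! makes the
  ;; check-then-update atomic: no other thread can claim a user
  ;; between our read of current-tasks and our update of it.
  (locking current-tasks
    (let [taken (set (map :user-id (vals @current-tasks)))
          user  (first (remove taken all-users))]
      (when user
        (swap! current-tasks assoc task-id {:user-id user}))
      user)))
```

The `swap!` alone would keep the atom's value consistent, but only the surrounding `locking` guarantees that the user chosen from the snapshot is still free when it is recorded.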
An experimental namespace is probably better suited for the load simulator, since it's not exactly testing but more of an experimental tool for figuring out REMS limits.
The namespace rems.experimental.load-simulator uses test-browser and browser-test-util, which depend on test sources and dependencies. Since the browser test utilities can be desirable outside of a test context, it seems less appealing to try to split out specific functionality so that test dependencies or sources would not leak into the build. Including these files and dependencies should not cause any issues in the build.
```diff
@@ -0,0 +1,172 @@
(ns rems.experimental.load-simulator
```
I think this doesn't need to be marked as experimental. Maybe a simulator package? Or even load-simulator, if there won't be others. It's perhaps also not traditional load testing (because it's rather heavy, uses a real browser, etc.), but it definitely is a "user simulator".
```diff
@@ -498,7 +498,7 @@
       sort
       rest))

-(defn- random-long-string [& [n]]
+(defn random-long-string [& [n]]
```
We could probably move this to some util?
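For reference, a plausible shape for such a util (the function body isn't shown in the diff above, so the character set and the default length here are assumptions, not the PR's actual implementation):

```clojure
(defn random-long-string
  "Returns a random alphanumeric string of length n.
   Default length and character set are assumed for illustration."
  [& [n]]
  (let [chars "abcdefghijklmnopqrstuvwxyz0123456789"]
    (apply str (repeatedly (or n 1000) #(rand-nth chars)))))
```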
```clojure
(defn get-task-user! [task-id]
  (locking current-tasks
    (let [current-users (map :user-id (vals @current-tasks))
          available-users (-> (simu/get-all-users)
```
On my machine it tries to use perf tester users, which are not present on the login screen. I think overall we must still figure out a solution for specifying the users.
Some users might not exist in the test data (e.g. perf users), so using a direct login link should be more robust for simulation tasks.
Superseded by #3343
Initial version of the load simulator. Runs concurrent headless Chrome instances against a target REMS. The CLI accepts the following optional arguments:

- `--url` - which URL to run against (default: http://localhost:3000/)
- `--concurrency` - how many threads to start (default: 8)

Quick start:

```
lein run
lein run load-simulator [--url url] [--concurrency n]
```

(no args uses default values)

Checklist for author
Remove items that aren't applicable, check items that are done.
- Documentation
- Testing
- Follow-up