docs: Add documentation about Locust processes (#11439)
This is a more convenient way to scale Locust on a single machine.
aborg-dev authored May 31, 2024
1 parent f508262 commit dbce312
17 additions & 5 deletions in pytest/tests/loadtest/locust/README.md
@@ -54,11 +54,23 @@ hundred of users.
In the Locust UI, check the "Workers" tab to see CPU and memory usage. If this
approaches anything close to 100%, you should use more workers.

Luckily, Locust has the ability to swarm the load generation across many processes.

The simplest way to do this on a single machine is to use the `--processes` argument:
```sh
locust -H 127.0.0.1:3030 \
    -f locustfiles/ft.py \
    --funding-key=$KEY \
    --processes 8
```

This will spawn 8 Locust Python processes, each capable of fully utilizing one CPU core.
In our current measurements, Locust on a single CPU core can send about 500 transactions per
second, and this number scales linearly with the number of processes.
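As a quick sanity check, the expected peak throughput is just the per-process rate times the process count. A minimal sketch, assuming the ~500 TPS per-process figure quoted above (real numbers will vary with hardware and workload):

```sh
# Estimate peak throughput for a given process count.
# Assumption: ~500 TPS per Locust process, per the measurement above.
PROCESSES=8
TPS_PER_PROCESS=500
echo "~$((PROCESSES * TPS_PER_PROCESS)) TPS with $PROCESSES processes"
# prints: ~4000 TPS with 8 processes
```

If the target load exceeds what one machine's cores can deliver at this rate, that is the point to move to the multi-machine setup described next.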

To scale further to multiple machines, start one process with the `--master` argument and as many as
you like with `--worker`. (If they run on different machines, you also need to provide
`--master-host` and `--master-port`; if running on the same machine it will work automagically.)

Start the master:

