Bind torchserve container ports to localhost ports (#2646)
* Update misc references in torchserve from 0.0.0.0 to 127.0.0.1

* Bind torchserve container ports to localhost ports

---------

Co-authored-by: Naman Nandan <[email protected]>
Co-authored-by: lxning <[email protected]>
3 people authored Oct 3, 2023
1 parent 1f1ab2b commit 4051ae3
Showing 13 changed files with 54 additions and 54 deletions.
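For context, publishing a container port without an address (`-p 8080:8080`) binds it on 0.0.0.0, i.e. every interface of the Docker host, while the `127.0.0.1:` prefix restricts the published port to loopback. A minimal sketch of the difference (image tag and container name are illustrative):

```bash
# Published on all interfaces (0.0.0.0): reachable from other hosts on the network.
docker run --rm -d --name ts -p 8080:8080 pytorch/torchserve:latest

# Published on loopback only: reachable solely from the Docker host itself.
# (Run as an alternative to the above; both cannot claim host port 8080 at once.)
docker run --rm -d --name ts -p 127.0.0.1:8080:8080 pytorch/torchserve:latest

# Inspect how the running container's port is actually bound:
docker port ts 8080   # prints 0.0.0.0:8080 before the change, 127.0.0.1:8080 after
```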
2 changes: 1 addition & 1 deletion benchmarks/benchmark-ab.py
@@ -364,7 +364,7 @@ def docker_torchserve_start():
     management_port = urlparse(execution_params["management_url"]).port
     docker_run_cmd = (
         f"docker run {execution_params['docker_runtime']} {backend_profiling} --name ts --user root -p "
-        f"{inference_port}:{inference_port} -p {management_port}:{management_port} "
+        f"127.0.0.1:{inference_port}:{inference_port} -p 127.0.0.1:{management_port}:{management_port} "
         f"-v {execution_params['tmp_dir']}:/tmp {enable_gpu} -itd {docker_image} "
         f'"torchserve --start --model-store /home/model-server/model-store '
         f"--workflow-store /home/model-server/wf-store "
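For illustration, with the default ports 8080/8081 the patched f-string renders to roughly the following command (runtime, profiling, and GPU placeholders left empty; the tmp-dir path is hypothetical and the torchserve arguments are truncated here):

```bash
docker run --name ts --user root \
    -p 127.0.0.1:8080:8080 -p 127.0.0.1:8081:8081 \
    -v /tmp/ts-benchmark:/tmp -itd pytorch/torchserve:latest \
    "torchserve --start --model-store /home/model-server/model-store --workflow-store /home/model-server/wf-store ..."
```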
2 changes: 1 addition & 1 deletion benchmarks/jmeter.md
@@ -195,7 +195,7 @@ The benchmarks can also be used to analyze the backend performance using cProfil
 If using external docker container for TorchServe:
 * start docker with /tmp directory mapped to local /tmp and set `TS_BENCHMARK` to True.
 ```
-docker run --rm -it -e TS_BENCHMARK=True -v /tmp:/tmp -p 8080:8080 -p 8081:8081 pytorch/torchserve:latest
+docker run --rm -it -e TS_BENCHMARK=True -v /tmp:/tmp -p 127.0.0.1:8080:8080 -p 127.0.0.1:8081:8081 pytorch/torchserve:latest
 ```

 3. Register a model & perform inference to collect profiling data. This can be done with the benchmark script described in the previous section.
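With the container started this way, step 3's registration and inference now go through the loopback-bound ports; a hedged sketch using TorchServe's standard management and inference APIs (the `mnist.mar` archive and test-image file are illustrative):

```bash
# Register a model via the management API on 8081:
curl -X POST "http://127.0.0.1:8081/models?url=mnist.mar&initial_workers=1"

# Run an inference via the inference API on 8080 to generate profiling data:
curl http://127.0.0.1:8080/predictions/mnist -T 0.png
```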
26 changes: 13 additions & 13 deletions docker/README.md
@@ -194,39 +194,39 @@ The following examples will start the container with 8080/81/82 and 7070/71 port
 For the latest version, you can use the `latest` tag:

 ```bash
-docker run --rm -it -p 8080:8080 -p 8081:8081 -p 8082:8082 -p 7070:7070 -p 7071:7071 pytorch/torchserve:latest
+docker run --rm -it -p 127.0.0.1:8080:8080 -p 127.0.0.1:8081:8081 -p 127.0.0.1:8082:8082 -p 127.0.0.1:7070:7070 -p 127.0.0.1:7071:7071 pytorch/torchserve:latest
 ```

 For specific versions you can pass in the specific tag to use (ex: pytorch/torchserve:0.1.1-cpu):

 ```bash
-docker run --rm -it -p 8080:8080 -p 8081:8081 -p 8082:8082 -p 7070:7070 -p 7071:7071 pytorch/torchserve:0.1.1-cpu
+docker run --rm -it -p 127.0.0.1:8080:8080 -p 127.0.0.1:8081:8081 -p 127.0.0.1:8082:8082 -p 127.0.0.1:7070:7070 -p 127.0.0.1:7071:7071 pytorch/torchserve:0.1.1-cpu
 ```

 #### Start CPU container with Intel® Extension for PyTorch*

 ```bash
-docker run --rm -it -p 8080:8080 -p 8081:8081 -p 8082:8082 -p 7070:7070 -p 7071:7071 torchserve-ipex:1.0
+docker run --rm -it -p 127.0.0.1:8080:8080 -p 127.0.0.1:8081:8081 -p 127.0.0.1:8082:8082 -p 127.0.0.1:7070:7070 -p 127.0.0.1:7071:7071 torchserve-ipex:1.0
 ```

 #### Start GPU container

 For GPU latest image with gpu devices 1 and 2:

 ```bash
-docker run --rm -it --gpus '"device=1,2"' -p 8080:8080 -p 8081:8081 -p 8082:8082 -p 7070:7070 -p 7071:7071 pytorch/torchserve:latest-gpu
+docker run --rm -it --gpus '"device=1,2"' -p 127.0.0.1:8080:8080 -p 127.0.0.1:8081:8081 -p 127.0.0.1:8082:8082 -p 127.0.0.1:7070:7070 -p 127.0.0.1:7071:7071 pytorch/torchserve:latest-gpu
 ```

 For specific versions you can pass in the specific tag to use (ex: `0.1.1-cuda10.1-cudnn7-runtime`):

 ```bash
-docker run --rm -it --gpus all -p 8080:8080 -p 8081:8081 -p 8082:8082 -p 7070:7070 -p 7071:7071 pytorch/torchserve:0.1.1-cuda10.1-cudnn7-runtime
+docker run --rm -it --gpus all -p 127.0.0.1:8080:8080 -p 127.0.0.1:8081:8081 -p 127.0.0.1:8082:8082 -p 127.0.0.1:7070:7070 -p 127.0.0.1:7071:7071 pytorch/torchserve:0.1.1-cuda10.1-cudnn7-runtime
 ```

 For the latest version, you can use the `latest-gpu` tag:

 ```bash
-docker run --rm -it --gpus all -p 8080:8080 -p 8081:8081 -p 8082:8082 -p 7070:7070 -p 7071:7071 pytorch/torchserve:latest-gpu
+docker run --rm -it --gpus all -p 127.0.0.1:8080:8080 -p 127.0.0.1:8081:8081 -p 127.0.0.1:8082:8082 -p 127.0.0.1:7070:7070 -p 127.0.0.1:7071:7071 pytorch/torchserve:latest-gpu
 ```

 #### Accessing TorchServe APIs inside container
@@ -244,7 +244,7 @@ To create mar [model archive] file for TorchServe deployment, you can use follow
 1. Start container by sharing your local model-store/any directory containing custom/example mar contents as well as model-store directory (if not there, create it)
 ```bash
-docker run --rm -it -p 8080:8080 -p 8081:8081 --name mar -v $(pwd)/model-store:/home/model-server/model-store -v $(pwd)/examples:/home/model-server/examples pytorch/torchserve:latest
+docker run --rm -it -p 127.0.0.1:8080:8080 -p 127.0.0.1:8081:8081 --name mar -v $(pwd)/model-store:/home/model-server/model-store -v $(pwd)/examples:/home/model-server/examples pytorch/torchserve:latest
 ```
 1.a. If starting container with Intel® Extension for PyTorch*, add the following lines in `config.properties` to enable IPEX and launcher with its default configuration.
@@ -254,7 +254,7 @@ cpu_launcher_enable=true
 ```
 ```bash
-docker run --rm -it -p 8080:8080 -p 8081:8081 --name mar -v $(pwd)/config.properties:/home/model-server/config.properties -v $(pwd)/model-store:/home/model-server/model-store -v $(pwd)/examples:/home/model-server/examples torchserve-ipex:1.0
+docker run --rm -it -p 127.0.0.1:8080:8080 -p 127.0.0.1:8081:8081 --name mar -v $(pwd)/config.properties:/home/model-server/config.properties -v $(pwd)/model-store:/home/model-server/model-store -v $(pwd)/examples:/home/model-server/examples torchserve-ipex:1.0
 ```
 2. List your container or skip this if you know container name
@@ -310,11 +310,11 @@ For example,
 docker run --rm --shm-size=1g \
         --ulimit memlock=-1 \
         --ulimit stack=67108864 \
-        -p8080:8080 \
-        -p8081:8081 \
-        -p8082:8082 \
-        -p7070:7070 \
-        -p7071:7071 \
+        -p 127.0.0.1:8080:8080 \
+        -p 127.0.0.1:8081:8081 \
+        -p 127.0.0.1:8082:8082 \
+        -p 127.0.0.1:7070:7070 \
+        -p 127.0.0.1:7071:7071 \
         --mount type=bind,source=/path/to/model/store,target=/tmp/models <container> torchserve --model-store=/tmp/models
 ```
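After any of the loopback-bound variants above, the APIs answer from the Docker host but not from other machines. A quick check using TorchServe's standard ping and models endpoints:

```bash
# From the Docker host these succeed:
curl http://127.0.0.1:8080/ping     # expected: {"status": "Healthy"}
curl http://127.0.0.1:8081/models   # expected: the list of registered models

# From any other machine the connection is refused, since nothing
# listens on the host's external interfaces anymore.
```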
4 changes: 2 additions & 2 deletions docker/start.sh
@@ -20,7 +20,7 @@ do
     if test $
     then
         IMAGE_NAME="pytorch/torchserve:dev-gpu"
-        GPU_DEVICES='--gpus '"\"device=$2\""''
+        GPU_DEVICES='--gpus '"\"device=$2\""''
         shift
     fi
     shift
@@ -29,7 +29,7 @@ do
 done
 echo "Starting $IMAGE_NAME docker image"

-docker run $DOCKER_RUNTIME $GPU_DEVICES -d --rm -it -p 8080:8080 -p 8081:8081 $IMAGE_NAME > /dev/null 2>&1
+docker run $DOCKER_RUNTIME $GPU_DEVICES -d --rm -it -p 127.0.0.1:8080:8080 -p 127.0.0.1:8081:8081 $IMAGE_NAME > /dev/null 2>&1

 container_id=$(docker ps --filter="ancestor=$IMAGE_NAME" -q | xargs)
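As an aside, the nested quoting in `GPU_DEVICES` is deliberate: when naming multiple GPUs, Docker expects the device list to carry literal double quotes, so the script builds them into the value. A sketch of the expansion, assuming the script's second argument is `1,2`:

```bash
# '--gpus ' + "\"device=$2\"" + '' expands to a value containing literal quotes:
GPU_DEVICES='--gpus '"\"device=1,2\""''
echo "$GPU_DEVICES"   # prints: --gpus "device=1,2"
```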
2 changes: 1 addition & 1 deletion docker/test_container_health.sh
@@ -7,7 +7,7 @@ CONTAINER="test-container-py${IMAGE_TAG}"


 healthcheck() {
-    docker run -d --rm -it -p 8080:8080 --name="${CONTAINER}" "${IMAGE_TAG}"
+    docker run -d --rm -it -p 127.0.0.1:8080:8080 --name="${CONTAINER}" "${IMAGE_TAG}"

     echo "Waiting 5s for container to come up..."
     sleep 5
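After the 5-second wait, a healthcheck like this one typically polls TorchServe's ping endpoint, which now answers on loopback only. A hedged sketch of such a probe (the script's actual assertion may differ):

```bash
# Poll the inference API's ping endpoint and fail loudly if it is not healthy:
if curl --fail --silent http://127.0.0.1:8080/ping | grep -q "Healthy"; then
    echo "Container is healthy"
else
    echo "Healthcheck failed" >&2
    exit 1
fi
```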
2 changes: 1 addition & 1 deletion docker/test_container_model_prediction.sh
@@ -23,7 +23,7 @@ torchserve --start --ts-config=/home/model-server/config.properties --models mni
 EOF

 echo "Starting container ${CONTAINER}"
-docker run --rm -d -it --name "${CONTAINER}" -p 8080:8080 -p 8081:8081 -p 8082:8082 \
+docker run --rm -d -it --name "${CONTAINER}" -p 127.0.0.1:8080:8080 -p 127.0.0.1:8081:8081 -p 127.0.0.1:8082:8082 \
     -v "${FILES_PATH}/mnist.py":"${SERVER_PATH}/mnist.py" \
     -v "${FILES_PATH}/mnist_cnn.pt":"${SERVER_PATH}/mnist_cnn.pt" \
     -v "${FILES_PATH}/mnist_handler.py":"${SERVER_PATH}/mnist_handler.py" \
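Once the container is up with the mounted MNIST artifacts, the prediction itself runs against the loopback-bound inference port, roughly as follows (the test-image name is hypothetical and the script's exact invocation may differ):

```bash
# Send an MNIST test image to the model registered as "mnist":
curl --fail http://127.0.0.1:8080/predictions/mnist -T "${FILES_PATH}/0.png"
```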
2 changes: 1 addition & 1 deletion docs/batch_inference_with_ts.md
@@ -294,7 +294,7 @@ models={\
 * Start serving the model with the container and pass the config.properties to the container

 ```bash
-docker run --rm -it --gpus all -p 8080:8080 -p 8081:8081 --name mar -v /home/ubuntu/serve/model_store:/home/model-server/model-store -v $ path to config.properties:/home/model-server/config.properties pytorch/torchserve:latest-gpu
+docker run --rm -it --gpus all -p 127.0.0.1:8080:8080 -p 127.0.0.1:8081:8081 --name mar -v /home/ubuntu/serve/model_store:/home/model-server/model-store -v $ path to config.properties:/home/model-server/config.properties pytorch/torchserve:latest-gpu
 ```
 * Verify that the workers were started properly.
 ```bash
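The verification step truncated above typically queries the management API for worker status; a sketch, with the model name assumed from the doc's batch example:

```bash
# Describe the model; each worker in the response should report "status": "READY".
curl http://127.0.0.1:8081/models/resnet-152-batch_v2
```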