- Node.js (required)
- NPM (required)
- openssl (optional, needed only if you have to generate a new CA)
NITM can be configured through a set of environment variables.
The table below lists all available configuration options.
| variable name | default value | description |
| --- | --- | --- |
| HTTP_HOST | localhost | public proxy host |
| HTTP_PORT | 8080 | public proxy port |
| HTTPS_HOST | localhost | internal proxy host |
| HTTPS_PORT | 8443 | internal proxy port |
| CA_KEY | .cert/rootCA.key | path to root CA key |
| CA_CERT | .cert/rootCA.pem | path to root CA cert |
| CA_PASS | pass | root CA password |
More configuration options will be available soon.
Feeling lucky with the default settings? Just proceed to the certificates section.
If you would like to update the configuration, the easiest way is to make a new copy of `env.example`:
cp env.example .env
In order to use NITM, your system or application must be able to trust a custom root CA.
For testing purposes, you can simply rely on the pre-generated keys and forget about all of this.
Otherwise, the name of the `generate-root-ca.sh` script should be pretty self-explanatory.
It was used to generate the freely-distributed certificates under the `.cert` directory.
Install the missing dependencies and start the service.
npm install
npm run start
Alternatively, you can start the `dev` script, which will spawn a `nodemon` process.
npm install
npm run dev
There is a GitHub workflow that automates the process of building and publishing the image.
The most recent image build is located at alesandar/nitm:latest.
Looking for another build? Inspect the public tags.
The command below creates a new container, based on the latest build.
docker run -e HTTP_HOST=0.0.0.0 -p 127.0.0.1:8080:8080 -t alesandar/nitm:latest
If you do not want to declare variables one by one, as we did above, just pass an environment file.
In case you did not create an environment file earlier, please go back to the setup section.
docker run --env-file .env -p 127.0.0.1:8080:8080 -t alesandar/nitm
Docker Compose makes it even simpler, since most of the configuration is predeclared inside docker-compose.yml.
docker-compose up
For local development, you might want to pass the `--build` argument (it will build a local image).
docker-compose up --build
There's a simple cURL wrapper, located in `.bin/curl.sh`, which takes care of all prerequisites.
.bin/curl.sh https://github.com
Alternatively, you can execute the actual cURL binary with the following arguments:
curl --proxy localhost:8080 --cacert .cert/rootCA.pem https://github.com
The proxy server can be specified as a command-line argument, but that's not the case with the certificate authority.
Users are required to import the certificate authority manually. Follow these steps:

- open `Settings` and go to `Privacy and security` → `Security` → `Manage certificates`
- focus the `Authorities` tab and click on the `Import` button
- navigate to the project's root directory and select the root CA
- select the `Trust this certificate for identifying websites` checkbox and hit `OK`
As a final step, run the shell script:
.bin/chromium.sh
Alternatively, start Chromium like so:
chromium --proxy-server=localhost:8080
Now open a website of your choice and take a look at the service logs.
The system-wide setup is untested and discouraged. If you want to experiment, do it at your own risk.
Arch Linux users might wish to read the Transport Layer Security/Certificate authorities article on the Arch Wiki.
Anybody else should refer to the documentation of their operating system.
- performance and metrics:
  - implement a monitoring stack in `docker-compose.yml` (Prometheus, Grafana, ElasticSearch, etc.)
  - create a custom Prometheus exporter for exposing request metrics
  - implement a compression algorithm, such as gzip/deflate
  - implement a caching mechanism
- refactor:
  - rewrite the logic from `generate-root-ca.sh` inside `cert.js` and generate root CAs internally
  - create a method that verifies the validity of the root CA
  - follow permanent (301 and 308) and temporary (302, 303, 307) redirects
  - rewrite some of the main methods (e.g. `initHTTP` and `initHTTPS`) as classes
  - use a method for inline documentation, such as JSDoc
  - consider rewriting everything in TypeScript
  - decide what type of tagging/versioning system to use for releases
- features and improvements:
  - compile static binaries that will ease the distribution of the service
  - create a CLI module (e.g. `./src/lib/cli.js`) for interacting with the service through the command line
- unit-testing:
  - choose a library (tape?)
  - write performance-oriented tests
  - create a stress-testing environment for high load
- CI/CD:
  - push images to GitHub's registry as well
  - add a workflow for publishing to the NPM registry as well
Finally, here is a flowchart that illustrates what the service does behind the scenes. It requires a few improvements, but should be more than enough for now.
graph TB
classDef client stroke:#83a598
classDef nitm stroke:#fb4934
classDef dest stroke:#b8bb26
A(web client):::client
B{MITM status}:::nitm
C(root CA):::nitm
D(HTTP proxy):::nitm
E(HTTPS proxy):::nitm
Z(destination):::dest
%% normal flow (MITM is disabled)
A ---|1. request|B ---->|2a. disabled| Z -->|3a. response| A
%% abnormal flow (MITM is enabled)
B ---->|2b. enabled| C
C ---->|3b. trusted| D
D ---->|4b. intercept| E
E ---->|5b. intercept| Z
Z ---->|6b. response| E
E ---->|7b. intercept| D
linkStyle default stroke:#fb4934
linkStyle 0,1 stroke:#83a598
linkStyle 2 stroke:#b8bb26