This is a trivial Dockerfile to build a proxy container. It will use the famous Squid proxy, configured to work in transparent mode.
If you build a lot of containers, and have a not-so-fast internet link, you might be spending a lot of time waiting for packages to download. It would be nice if all those downloads could be automatically cached, without tweaking your Dockerfiles, right?
Or, maybe your corporate network forbids direct outside access and requires you to use a proxy. Then you can edit this recipe so that it cascades to the corporate proxy. Your containers will use the transparent proxy, which itself will pass requests along to the corporate proxy.
You can use the Squid proxy directly via Docker and iptables rules; there is also a fig.yml for convenience, so you can use fig to launch the system. For more information on tuning parameters, see below.
You can manually run these commands:

    docker run --net host -d jpetazzo/squid-in-a-can
    iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to 3129 -w
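Once the container is running and the rule is in place, you can check that requests are actually being served by Squid. One quick way (assuming the default Debian Squid configuration, which adds an X-Cache header to responses) is to fetch the same URL twice from a container or from another machine routed through the host (traffic from the host itself is not intercepted, see the notes below) and look for a cache hit:

    # the second request should report a HIT if the object is cacheable
    curl -sI http://example.com/ | grep -i x-cache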
After you stop the container, you will need to clean up the iptables rule:

    iptables -t nat -D PREROUTING -p tcp --dport 80 -j REDIRECT --to 3129 -w
There is a fig.yml file to enable launching via fig, along with a separate container which will set up the iptables rules for you. To use this, you will need a local checkout of this repo and have fig and docker installed.
- Run the following commands:

        fig up -d squid && fig run tproxy
That's it. Now all HTTP requests going through your Docker host will be transparently routed through the proxy running in the container.
If your tproxy instance goes down hard without cleaning up, use the following command:

    iptables -t nat -D PREROUTING -p tcp --dport 80 -j REDIRECT --to 3129 -w
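If you are not sure whether the rule is still installed, you can list the NAT PREROUTING chain on the host and look for the redirect to port 3129:

    sudo iptables -t nat -L PREROUTING -n | grep 3129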
Note: it will only affect HTTP traffic on port 80.
Note: traffic originating from the host will not be affected, because
the PREROUTING
chain is not traversed by packets originating from the
host.
Note: if your Docker host is also a router for other things (e.g. if it runs various virtual machines, or is a VPN server, etc), those things will also see their HTTP traffic routed through the proxy. They have to use internal IP addresses, though.
Note: if you plan to run this on EC2 (or any kind of infrastructure where the machine has an internal IP address), you should probably tweak the ACLs, or make sure that outside machines cannot access ports 3128 and 3129 on your host.
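For example, if you cannot rely on security groups, a pair of iptables rules along these lines would drop outside connections to the Squid ports (eth0 is assumed to be the external interface; adjust as needed):

    iptables -I INPUT -i eth0 -p tcp --dport 3128 -j DROP
    iptables -I INPUT -i eth0 -p tcp --dport 3129 -j DROP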
Note: Squid is also reachable as a regular proxy on port 3128 on your local machine, if you would like to set up proxy configuration yourself.
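For example, from the Docker host itself (whose traffic is not intercepted, as noted above), you can point a client at the proxy explicitly:

    http_proxy=http://localhost:3128 curl -sI http://example.com/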
The jpetazzo/squid-in-a-can
container runs a really basic Squid3 proxy.
Rather than writing my own configuration file, I patch the default Debian
configuration. The main thing is to enable intercept
on another port
(here, 3129). To update the iptables rules for the intercept from within a container, that container needs the --privileged flag.
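In Squid configuration terms, the patch boils down to something like this (a sketch of the intended effect, not the exact contents of the patch):

    http_port 3128
    http_port 3129 intercept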
Then, this container should be started using the network namespace of the
host (that's what the --net host
option is for).
Another strategy would be to start the container with its own namespace.
Then, the HTTP traffic can be directed to it with a DNAT
rule.
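For illustration, such a DNAT rule might look like this, where 172.17.0.2 stands in for the container's IP address:

    iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination 172.17.0.2:3129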
The problem with this approach is that Squid will "see" the traffic as
being directed to its own IP address, instead of the destination HTTP
server IP address; and since Squid 3.3, it refuses to honor such requests.
(The reasoning is that it would then have to trust the HTTP Host:
header to know where to send the request. You can check CVE-2009-0801
for details.)
The docker image can be tuned using environment variables.
Squid has a maximum cacheable object size. When caching Debian packages rather than standard web content, it is often valuable to increase this size. Use

    -e MAX_CACHE_OBJECT=1024

to set the maximum object size (in MB).
The Squid disk cache size can also be tuned. Use

    -e DISK_CACHE_SIZE=5000

to set the disk cache size (in MB).
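For example, to run the proxy with a 1 GB maximum object size and a 5 GB disk cache:

    docker run --net host -d \
      -e MAX_CACHE_OBJECT=1024 \
      -e DISK_CACHE_SIZE=5000 \
      jpetazzo/squid-in-a-can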
The SQUID_DIRECTIVES environment variable lets you pass extra Squid configuration: its contents are appended to squid.conf, and it is expected that you will use a multi-line block quote for the contents. The image can also be run so that squid.conf contains only what is defined in SQUID_DIRECTIVES, giving the user full control of Squid.
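For example (the directives themselves are arbitrary; use whatever Squid configuration you need):

    docker run --net host -d \
      -e SQUID_DIRECTIVES="
      cache_mem 256 MB
      maximum_object_size_in_memory 512 KB
      " \
      jpetazzo/squid-in-a-can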
Since the proxy runs in a container, the cached content goes away when the container does. To avoid this, you can use a mounted volume. The cache location is /var/cache/squid3, so if you mount that path as a volume you get persistent caching. Use

    -v /home/user/persistent_squid_cache:/var/cache/squid3

in your command line to enable persistent caching.
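For example:

    docker run --net host -d \
      -v /home/user/persistent_squid_cache:/var/cache/squid3 \
      jpetazzo/squid-in-a-can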
Ideas for improvement:
- easy chaining to an upstream proxy (see the sketch below for one possible manual approach)
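Until such a feature exists, chaining can presumably be done by hand through SQUID_DIRECTIVES with something like the following sketch (proxy.example.com and port 3128 are placeholders for your upstream or corporate proxy):

    docker run --net host -d \
      -e SQUID_DIRECTIVES="
      cache_peer proxy.example.com parent 3128 0 no-query default
      never_direct allow all
      " \
      jpetazzo/squid-in-a-can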