Add transparent mode for Linux #428

Draft · wants to merge 1 commit into main
25 changes: 12 additions & 13 deletions doc/transparent_proxy.md
@@ -1,24 +1,23 @@
# Transparent mode

Transparent mode is an operation mode, currently only available on Linux, in which applications can use Proxydetox without any knowledge of a proxy.
Applications do not need to be aware of the proxy, nor do they need to be configured in any way.
This approach works only for TCP-based connections, i.e., it cannot be used for UDP-based protocols.

In transparent mode, `iptables` or `nftables` rules are used to redirect outgoing traffic through Proxydetox. Proxydetox contains special handling for these connections and forwards them to the correct upstream proxy.

For this to work, the PAC file must handle IP-based rules correctly, since the destination hostname is no longer available: the client already resolved it before the connection reaches Proxydetox.
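
A minimal PAC sketch illustrating such an IP-based rule; the `10.0.0.0/8` range and `proxy.example.com:8080` are placeholders, not values from this project:

```js
function FindProxyForURL(url, host) {
    // In transparent mode `host` is typically a bare IP address, because
    // the client resolved the name before Proxydetox saw the connection,
    // so isInNet() must drive the decision rather than hostname patterns.
    if (isInNet(host, "10.0.0.0", "255.0.0.0"))
        return "DIRECT";
    return "PROXY proxy.example.com:8080";
}
```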

Create a dedicated system group for Proxydetox, add your user to it, and enable IP forwarding:

```sh
addgroup --system proxydetox
usermod -aG proxydetox $(id -un)
sysctl -w net.ipv4.ip_forward=1
```

Redirect outgoing HTTP traffic of all processes that are not in the `proxydetox` group to the local Proxydetox instance:

```sh
iptables -A OUTPUT -t nat -p tcp --dport 80 -m owner ! --gid-owner proxydetox -j DNAT --to 127.0.0.1:3125
```
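
An equivalent setup with `nftables` might look like the following untested sketch (table and chain names are arbitrary):

```sh
nft add table ip nat
nft add chain ip nat output '{ type nat hook output priority -100; }'
nft add rule ip nat output meta skgid != proxydetox tcp dport 80 dnat to 127.0.0.1:3125
```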

The installed NAT rules can be inspected with:

```sh
iptables -t nat -L -v --line-numbers
```

Run `proxydetox` in the `proxydetox` group such that Proxydetox's own
traffic does not match the `iptables` rule from above (otherwise we
would end up in an endless loop).

```sh
sg proxydetox -c proxydetox
```
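
To verify the redirection, a plain HTTP request from any process outside the `proxydetox` group should now be routed through Proxydetox transparently (example.com stands in for any reachable HTTP host):

```sh
curl -v http://example.com/
```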
126 changes: 103 additions & 23 deletions proxydetoxlib/src/server.rs
@@ -32,24 +32,24 @@ pub struct Control {
shutdown_complete_rx: tokio::sync::mpsc::Receiver<()>,
}

struct HttpHandler {
addr: SocketAddr,
conn: hyper::server::conn::http1::Connection<TokioIo<TcpStream>, Session>,
shutdown_request: CancellationToken,
shutdown_complete_tx: tokio::sync::mpsc::Sender<()>,
}

impl HttpHandler {
#[instrument(skip(self), fields(peer = debug(self.addr)))]
async fn run(self) {
let HttpHandler {
addr: _,
conn,
shutdown_request,
shutdown_complete_tx,
} = self;
let conn = conn.with_upgrades();
tracing::debug!("peer connected");
tracing::debug!("http peer connected");
let mut conn = std::pin::pin!(conn);
loop {
select! {
@@ -70,6 +70,70 @@ impl Handler {
}
}

struct TcpHandler {
    addr: SocketAddr,
    dst: SocketAddr,
    // The accepted client connection which is tunneled to the upstream proxy.
    stream: TcpStream,
    context: Arc<Context>,
    shutdown_request: CancellationToken,
    shutdown_complete_tx: tokio::sync::mpsc::Sender<()>,
}

impl TcpHandler {
    #[instrument(skip(self), fields(peer = debug(self.addr)))]
    async fn run(self) {
        let TcpHandler {
            addr: _,
            dst,
            stream,
            context,
            shutdown_request,
            shutdown_complete_tx,
        } = self;
        tracing::debug!("tcp peer connected");
        // Rebuild an authority-form URI from the original destination, so the
        // PAC evaluation sees the IP:port the client actually targeted.
        let uri = Uri::builder()
            .scheme(http::uri::Scheme::HTTP)
            .authority(dst.to_string().parse().expect("IP is valid authority"))
            .build()
            .expect("URI");
        let proxies = context.find_proxy(uri.clone()).await;
        // Try the proxies in PAC order until one accepts a CONNECT-style
        // tunnel request. NOTE: attempts are sequential in this sketch;
        // `race_connect` is not honored here.
        let mut upstream = None;
        for p in proxies {
            match context.connect(p, hyper::Method::CONNECT, uri.clone()).await {
                Ok(io) => {
                    upstream = Some(io);
                    break;
                }
                Err(cause) => tracing::warn!(%cause, "unable to connect"),
            }
        }
        let Some(mut upstream) = upstream else {
            tracing::error!("no upstream proxy reachable");
            drop(shutdown_complete_tx);
            return;
        };
        // Relay bytes between the client and the upstream proxy until either
        // side closes or a shutdown is requested. This assumes the value
        // returned by `Context::connect` is an upgraded byte stream
        // (`AsyncRead + AsyncWrite + Unpin`).
        let mut client = stream;
        select! {
            r = tokio::io::copy_bidirectional(&mut client, &mut upstream) => {
                if let Err(cause) = r {
                    tracing::error!(%cause, "tunnel error");
                }
                tracing::debug!("peer disconnected");
            },
            _ = shutdown_request.cancelled() => {
                tracing::debug!("shutdown requested");
            }
        }
        drop(shutdown_complete_tx);
    }
}

impl Proxy {
#[allow(clippy::new_ret_no_self)]
pub fn new<A>(acceptor: A, context: Arc<Context>) -> (Server<A>, Control)
@@ -104,6 +168,40 @@ impl<A> Server<A>
where
A: futures_util::Stream<Item = std::io::Result<tokio::net::TcpStream>> + Send + Unpin + 'static,
{
fn accept(&mut self, stream: std::io::Result<tokio::net::TcpStream>) -> std::io::Result<()> {
let stream = match stream {
Ok(stream) => stream,
Err(cause) => {
tracing::error!(%cause, "listener error");
return Err(cause);
}
};
let addr = stream.peer_addr().expect("peer_addr");
let orig_dst_addr = crate::socket::original_destination_address(&stream);
if let Some(dst) = orig_dst_addr {
let handler = TcpHandler {
    addr,
    dst,
    stream,
    context: self.context.clone(),
    shutdown_request: self.shutdown_request.clone(),
    shutdown_complete_tx: self.shutdown_complete_tx.clone(),
};
tokio::spawn(handler.run());
} else {
let conn = self.http_server.serve_connection(
TokioIo::new(stream),
Session::new(self.context.clone(), addr, orig_dst_addr),
);
let handler = HttpHandler {
addr,
conn,
shutdown_request: self.shutdown_request.clone(),
shutdown_complete_tx: self.shutdown_complete_tx.clone(),
};
tokio::spawn(handler.run());
}
Ok(())
}
#[instrument(skip(self))]
pub async fn run(&mut self) -> std::io::Result<()> {
while !self.shutdown_request.is_cancelled() {
@@ -112,25 +210,7 @@ where
break;
},
stream = self.acceptor.next() => {
self.accept(stream.expect("infinite stream of TcpStream"))?
},
}
}
13 changes: 10 additions & 3 deletions proxydetoxlib/src/session.rs
@@ -109,11 +109,16 @@ pub struct Session(Arc<Inner>);
struct Inner {
context: Arc<Context>,
addr: SocketAddr,
orig_dst_addr: Option<SocketAddr>,
}

impl Session {
pub fn new(context: Arc<Context>, addr: SocketAddr, orig_dst_addr: Option<SocketAddr>) -> Self {
Self(Arc::new(Inner {
context,
addr,
orig_dst_addr,
}))
}
}

@@ -125,7 +130,9 @@ impl Inner {
) -> std::result::Result<http::Response<BoxBody<Bytes, hyper::Error>>, Infallible> {
// TODO: the management console must also be chosen when the authority points
// to us (or the connection must be aborted), since otherwise we create an
// endless loop.
let res = if self.orig_dst_addr.is_some() {
self.forward().await
} else if req.uri().authority().is_some() {
self.proxy_request(req).await
} else if req.method() != hyper::Method::CONNECT {
self.management_console(req).await
67 changes: 67 additions & 0 deletions proxydetoxlib/src/socket.rs
@@ -8,6 +8,13 @@ extern "C" {
) -> libc::c_int;
}

use std::net::SocketAddr;

#[cfg(unix)]
use std::os::unix::io::AsFd;
#[cfg(windows)]
use std::os::windows::io::AsSocket as AsFd;

#[cfg(target_family = "unix")]
type RawSocket = std::os::unix::io::RawFd;
#[cfg(target_family = "windows")]
@@ -127,6 +134,66 @@ fn listenfds(
Ok(result)
}

#[cfg(target_os = "linux")]
/// Query the original destination of a redirected connection via
/// `SO_ORIGINAL_DST`, as set by the iptables `DNAT`/`REDIRECT` targets.
/// Returns `None` if the socket was not redirected.
pub fn original_destination_address(socket: &tokio::net::TcpStream) -> Option<SocketAddr> {
use std::os::fd::AsRawFd;

let fd = socket.as_fd().as_raw_fd();

match socket.local_addr() {
Ok(SocketAddr::V4(_)) => {
let mut addr4: libc::sockaddr_in = unsafe { std::mem::zeroed() };
let mut optlen = std::mem::size_of_val(&addr4) as libc::socklen_t;
let rc = unsafe {
libc::getsockopt(
fd,
libc::SOL_IP,
libc::SO_ORIGINAL_DST,
&mut addr4 as *mut _ as *mut _,
&mut optlen as *mut libc::socklen_t,
)
};
if rc == -1 {
None
} else {
let ip = std::net::Ipv4Addr::from_bits(u32::from_be(addr4.sin_addr.s_addr));
let port = u16::from_be(addr4.sin_port);

Some(SocketAddr::from((ip, port)))
}
}
Ok(SocketAddr::V6(_)) => {
let mut addr6: libc::sockaddr_in6 = unsafe { std::mem::zeroed() };
let mut optlen = std::mem::size_of_val(&addr6) as libc::socklen_t;
let rc = unsafe {
libc::getsockopt(
fd,
libc::SOL_IPV6,
libc::SO_ORIGINAL_DST,
&mut addr6 as *mut _ as *mut _,
&mut optlen as *mut libc::socklen_t,
)
};
if rc == -1 {
None
} else {
// `s6_addr` is a big-endian `[u8; 16]`, which converts directly.
let ip = std::net::Ipv6Addr::from(addr6.sin6_addr.s6_addr);
let port = u16::from_be(addr6.sin6_port);

Some(SocketAddr::from((ip, port)))
}
}
_ => None,
}
}

#[cfg(not(target_os = "linux"))]
pub fn original_destination_address(_socket: &tokio::net::TcpStream) -> Option<SocketAddr> {
// Not implemented for this OS
None
}

#[cfg(test)]
mod tests {
#[cfg(all(target_family = "unix", not(target_os = "macos")))]