Self-hosting with tunnels
16th March 2025
I have been using Nextcloud for a long time, running on a publicly-accessible VPS. The machine it is running on is not very powerful, but the server runs fine and I’ve never had much trouble with it.
However, I want to scale up the storage on it. I have a 4TB hard drive, which I bought with the intention of using it to store backups. I am much too paranoid to store backups of my personal laptop in the cloud. And even if I did, it would be very expensive: just 320GB of attached storage costs over $30 USD per month at Hetzner, which is nearly triple what I’m paying at the moment.
The obvious suggestion is to host these services at home. But I don’t have a static IP at home, so I never really understood how to route traffic to it.
What’s tunnelling?
A tunnel is a long-running connection between two computers that are unable to establish a connection directly. Often this works by opening a connection in the reverse direction first - for instance, your local machine sends a request to a public server, and that connection is kept open. When the public server receives requests from other people, some of them get forwarded to the local machine over the open channel; the local machine sends its responses back the same way, and the public server relays them to the original requester.
There is a list of tunnelling solutions at awesome-tunnelling.
They recommend Cloudflare Tunnel. In this case, you run a program called cloudflared locally, which maintains an open connection to Cloudflare’s infrastructure. You register your domain name with them, and they connect any remote requests for your domain to the local service. This is configurable, so you can have e.g. nextcloud.danielittlewood.xyz pointed to localhost:8993 and mastodon.danielittlewood.xyz pointed to localhost:9967. ngrok (now proprietary) works on a similar principle.
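For a sense of what that routing looks like, cloudflared reads a config.yml whose ingress rules map hostnames to local services. A rough sketch (the tunnel ID and credentials path here are placeholders):
tunnel: <your-tunnel-id>
credentials-file: /path/to/<your-tunnel-id>.json
ingress:
  - hostname: nextcloud.danielittlewood.xyz
    service: http://localhost:8993
  - hostname: mastodon.danielittlewood.xyz
    service: http://localhost:9967
  # catch-all rule for anything that doesn't match
  - service: http_status:404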
That’s all well and good; cloudflared is even open source. But I don’t really understand how it works, and it seems to be tied to Cloudflare’s services (which are generally non-free). Mac Chaffee (McAfee?) seems to have struggled with the same issue in his (apparently very similar) article Flouting the Internet Protocols with Tunnels.
As documented there, you can achieve a simple compromise using ssh tunnelling. In this case, as long as two computers can connect via ssh to a trusted third party, you can maintain a tunnel between them. Here is an example snippet:
# on client 1
ssh -N -T -R 22222:localhost:22 your-remote-server
# on client A
ssh -p 22222 your-remote-server
# client A is now logged into client 1
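One caveat: by default sshd binds a remote forward like this to the server’s loopback interface, so unless GatewayPorts is enabled on your-remote-server, client A can’t reach port 22222 from outside. A variant that works either way is to jump through the server and connect to its localhost:
# on client A: jump via the proxy, then connect to the forwarded port on its loopback
ssh -J your-remote-server -p 22222 localhost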
This is very simple; so simple that I even tried to implement it myself in the past. But it means maintaining a very small server running just to proxy connections, which is a crazy waste of resources. It turns out someone beat me to writing the software: sish, which I found via Awesome-Selfhosted.
In fact, the maintainers of sish even run a cheap managed service called tuns.sh. It is part of pico.sh, which costs about $2 USD per month. There are some other cute things they will give you as well, like static site hosting. Giving them money feels like playing my small part in breaking up the homogeneity of the web.
Example tuns.sh configuration
To get an account on https://pico.sh, I ran ssh pico.sh and used their terminal client to register (very unusual, but also very cute!). Then I gave them $30 for a year of the premium service. Setting up the reverse tunnel is very easy:
ssh -R ssh:22:localhost:22 tuns.sh
The effect is that another client can run ssh -J tuns.sh danielittlewood-ssh to connect. danielittlewood is my username, and ssh is just a name for this particular tunnel. Port assignments are local to the (username, tunnel-name) pair, so you can have multiple services exposed on port 443, for instance.
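To save retyping the jump flag, an entry in the connecting client’s ~/.ssh/config should work as well; a minimal sketch, where the Host nickname is made up for illustration:
# hypothetical ~/.ssh/config entry on a connecting client
Host homeserver
    HostName danielittlewood-ssh
    ProxyJump tuns.sh
After that, ssh homeserver behaves like the -J invocation above.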
This is actually enough for me to get simple network-attached storage. At home, I have two laptops: the “client” (which I want to back up) and the “server” (which I want to run services from). I copied my public ssh key by hand from the client into the server’s .ssh/authorized_keys file. I also created a new ssh key pair on the server with the following snippet:
ssh-keygen -t ed25519 -C "dan-selfhosting"
On the client, I ran ssh pico.sh and added the new public key by hand. This means both machines will be able to connect to https://tuns.sh, which is necessary for the tunnel to work. I ran that ssh -R line above on the server, and then on the client:
ssh -J nue.tuns.sh danielittlewood-ssh
Connection to nue.tuns.sh closed by remote host.
Connection closed by UNKNOWN port 65535
It didn’t work! The reason is that when you set up a “private alias”, you have to specify on the server command line all the SSH fingerprints of the people who are allowed to connect. In this case, I want everyone in my ~/.ssh/authorized_keys file to be able to connect:
# allow everyone whose key is in authorized_keys: list each key's fingerprint and join them with commas
ssh -R ssh:22:localhost:22 nue.tuns.sh \
  tcp-aliases-allowed-users=$(ssh-keygen -lf ~/.ssh/authorized_keys \
  | awk '{ print $2; }' | paste -sd ",")
Note that I also specified nue.tuns.sh rather than tuns.sh - that was deliberate too. I spoke to hello@pico.sh about it:
When you use ssh tuns.sh, it selects the datacenter closest to you (Nuremberg DE, nue.tuns.sh, or Ashburn VA-US, ash.tuns.sh). You can select the server which is closest to you manually by using ssh {ash or nue}.tuns.sh. That will ensure that your tunnel server is “durable” as we don’t use a global routing tunnel for tuns (at least not yet).
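With the tunnel working, the backups themselves can go over the same jump host. Here is a sketch with rsync (the source directory is just a placeholder, and I’m assuming the 4TB drive is mounted at /media on the server, as in the sshfs snippet below):
# copy a directory from the client to the server, jumping via nue.tuns.sh
rsync -av -e "ssh -J nue.tuns.sh" ~/Documents danielittlewood-ssh:/media/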
P.S. If you want to use sshfs, here is the snippet for that:
sshfs -o ssh_command="ssh -J tuns.sh" danielittlewood-ssh:/media mount
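To unmount it again later (assuming a Linux FUSE setup; plain umount also works with sufficient privileges):
fusermount -u mount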