I have a Nextcloud instance hosted on my home network. The URL associated with it points directly at my home's IP. I don't want to host the instance on a VPS because disk space is expensive. So, instead, I want to point the URL at the VPS and then somehow route the connection to my home's Nextcloud instance without leaking my home's IP.

How might I go about doing this? Can this be achieved with nginx?

EDIT: Actually, not leaking my home's IP is not essential. It is acceptable if the IP can be determined with some effort. What I really want is to be able to host multiple websites from my single home IP without those websites being obviously connected, and to avoid automated bots constantly probing my home network for vulnerabilities.

  • You can set up nginx as a reverse proxy to your home IP, and then restrict incoming traffic on your home IP to the VPS's IP.

    You can also set up a WireGuard VPN between the VPS and your home machine, so the traffic between them is encrypted.

    For DNS, you just point at the VPS and manage connections there; on the home network, allow only the VPS IP to connect. Then manage your security on the VPS.
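
    As a minimal sketch of both pieces (assuming nginx on the VPS and ufw on the home machine; website.com, HOME_IP, and VPS_IP are placeholders to fill in):

    # On the VPS: nginx reverse proxy pointing at the home IP
    server {
        server_name website.com;
        listen 80;
        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_pass http://HOME_IP;
        }
    }

    # On the home machine: allow only the VPS to reach the web ports
    sudo ufw allow from VPS_IP to any port 80,443 proto tcp
    sudo ufw deny 80/tcp
    sudo ufw deny 443/tcp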

    •  Max ( @Max@mander.xyz ) OP · 1 year ago

      Thanks a lot! This is more or less the configuration I have converged on, with nginx and WireGuard. The last thing I need to set up correctly is for the SSL handshake to occur between the client and my home server, not between the client and the internet-facing VPS, so that the traffic stays encrypted and unreadable to the VPS. The two strategies I have seen that can do this are SNI routing with nginx and stunnel. I still have not been able to set up either!
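
      From what I have read, the nginx side of SNI routing would look something like this (an untested sketch, assuming nginx is built with the stream and ssl_preread modules; this goes at the top level of nginx.conf on the VPS, and 10.222.0.2 is my home machine's WireGuard IP):

      stream {
          # read the SNI from the ClientHello without terminating TLS
          map $ssl_preread_server_name $backend {
              website.com  10.222.0.2:443;
              default      10.222.0.2:443;
          }
          server {
              listen 443;
              ssl_preread on;
              proxy_pass $backend;
          }
      }

      This way the TLS handshake would pass through to the home server untouched, which is exactly what I want.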

      • In that case, you're better off just using the VPS to forward port 443 to your home machine's WireGuard IP address and handling the SSL/TLS termination on the home machine.

        This way all HTTPS traffic will pass through the VPS and be decrypted only on your home machine, and the encrypted responses will be sent from your home machine back to the client. Anyone who gets into the VPS or sniffs its traffic will see only encrypted data, and it's carried inside the encrypted VPN tunnel on top of that. To really see what's happening, they would need to get into the machine and use the WireGuard private keys to decrypt the tunnel, but even then they would only see the encrypted HTTPS traffic. So you're good, technically.
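
        For example, if nginx is already on the VPS, its stream module can do a plain TCP forward (no TLS termination, no inspection); a sketch, where 10.222.0.2 is your home machine's WireGuard IP:

        stream {
            server {
                listen 443;
                proxy_pass 10.222.0.2:443;
            }
            server {
                listen 80;
                proxy_pass 10.222.0.2:80;
            }
        }

        Plain iptables DNAT rules work just as well if you'd rather not run nginx on the VPS at all.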

        •  Max ( @Max@mander.xyz ) OP · 1 year ago

          In that case, you're better off just using the VPS to forward port 443 to your home machine's WireGuard IP address and handling the SSL/TLS termination on the home machine.

          This is what I would like to do! I was trying to handle the SSL termination 'automatically' by simply forwarding connections to port 443 of my machine's WireGuard IP using nginx, but I did not manage to get it to work. That's when I read that I might need something like 'stunnel' to handle the SSL termination. But I think you may be suggesting an even simpler method: plain port forwarding instead of a reverse proxy. I am not sure how to achieve that yet, so I will look into it using these terms.

            •  Max ( @Max@mander.xyz ) OP · 1 year ago (edited)

              After lots of testing I found a configuration that works for me! In the end it is very simple, but I am quite a newbie at this, so it took some effort to figure out what works. ChatGPT helped a bit too, though it also confused me a lot.

              What I do now is:

              I set up a WireGuard tunnel. The VPS in this example has the WireGuard IP 10.222.0.1, and my home machine is 10.222.0.2. These are my configs (/etc/wireguard/wg0.conf on each machine):

              VPS WireGuard config:

              [Interface]
              Address = 10.222.0.1/24
              ListenPort = 51820
              PrivateKey = <VPS Private key>
              
              [Peer]
              PublicKey = <Home network public key>
              AllowedIPs = 10.222.0.2/32
              PersistentKeepalive = 25
              

              Home network (Raspberry Pi) config:

              [Interface]
              Address = 10.222.0.2/32
              PrivateKey = <Home network private key>
              
              [Peer]
              PublicKey = <VPS Public Key>
              Endpoint = <VPS_IP>:51820
              AllowedIPs = 10.222.0.0/16
              PersistentKeepalive = 25
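
              For anyone following along, the commands to generate the keys and bring the tunnel up on both machines (from wireguard-tools) are roughly:

              # generate a key pair (run once on each machine)
              wg genkey | tee privatekey | wg pubkey > publickey

              # bring the tunnel up and enable it at boot
              sudo wg-quick up wg0
              sudo systemctl enable wg-quick@wg0

              # check that the two ends complete a handshake
              sudo wg show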
              
              

              Then, I use the following iptables commands on the VPS to map requests to ports 80 and 443 onto ports 80 and 443 at the other end of the tunnel. What really confused me for a while was that I did not know I needed the "POSTROUTING" step so that the packets get sent back the correct way, and that I had to set net.ipv4.ip_forward=1 in /etc/sysctl.conf:

              iptables rules on the VPS:

              
              # send incoming HTTPS to the home machine through the tunnel
              iptables -t nat -A PREROUTING -p tcp --dport 443 -j DNAT --to-destination 10.222.0.2:443
              # rewrite the source address so replies return via the VPS
              iptables -t nat -A POSTROUTING -p tcp -d 10.222.0.2 --dport 443 -j SNAT --to-source 10.222.0.1
              # same mapping for HTTP, which certbot needs for its challenges
              iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination 10.222.0.2:80
              iptables -t nat -A POSTROUTING -p tcp -d 10.222.0.2 --dport 80 -j SNAT --to-source 10.222.0.1
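
              These rules do not survive a reboot on their own; on a Debian-based VPS (an assumption on my part, other distros have other tools) one way to persist everything is:

              # apply the sysctl change without rebooting
              sudo sysctl -w net.ipv4.ip_forward=1

              # save the NAT rules so they are restored at boot
              sudo apt install iptables-persistent
              sudo netfilter-persistent save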
              
              

              Then, on my home machine I use a standard nginx config:

              server {
                  # redirect plain HTTP to HTTPS
                  server_name website.com;
                  listen 80;
                  location / {
                      return 301 https://$host$request_uri;
                  }
              }

              server {
                  server_name website.com;
                  listen 443 ssl;  # 'ssl' is needed so nginx terminates TLS here
                  location / {
                      proxy_set_header Host $host;
                      proxy_pass http://127.0.0.1:<Website Port>;
                  }
                  # certificate management here
                  ssl_certificate /etc/letsencrypt/live/website.com/fullchain.pem; # managed by Certbot
                  ssl_certificate_key /etc/letsencrypt/live/website.com/privkey.pem; # managed by Certbot
                  include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
                  ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
              }
              
              

              This configuration seems to work, and since both ports 80 and 443 are mapped, you can use certbot to generate and renew the SSL certificates automatically.
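
              For completeness, the usual certbot invocation on the home machine (assuming the nginx plugin) is along these lines:

              # obtain the certificate and let certbot edit the nginx config
              sudo certbot --nginx -d website.com

              # confirm automatic renewal works through the forwarded ports
              sudo certbot renew --dry-run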

              I am still learning, and this is the first thing that worked, so there might be a better way! But a lot of things I tried would not complete the SSL handshake correctly.