Proxy Host - Central Reverse Proxy
Central reverse proxy at 141.56.51.1, running as a full VM (not an LXC container).
Overview
- Hostname: proxy
- IP Address: 141.56.51.1
- Type: Full VM (not LXC)
- Services: HAProxy, OpenSSH (ports 1005, 2142)
- Role: Central traffic router for all StuRa HTW Dresden services
Architecture
The proxy is the central entry point for all HTTP/HTTPS traffic:
- All public DNS records point to this IP (141.56.51.1)
- HAProxy performs SNI-based routing for HTTPS traffic
- HTTP Host header-based routing for unencrypted traffic
- Automatic redirect from HTTP to HTTPS (except ACME challenges)
- Backend services handle their own TLS certificates
Services
HAProxy
HAProxy routes traffic using two methods:
1. HTTP Mode (Port 80)
- Inspects the HTTP Host header
- Routes requests to appropriate backend
- Forwards ACME challenges (/.well-known/acme-challenge/) to backends
- Redirects all other HTTP traffic to HTTPS (301 redirect)
- Default backend serves an index page listing all services
2. TCP Mode (Port 443)
- SNI inspection during TLS handshake
- Routes encrypted traffic without decryption
- Backends terminate TLS and serve their own certificates
- 1-second inspection delay to buffer packets
Key features:
- Logging: All connections logged to systemd journal
- Stats page: Available at http://127.0.0.1:8404/stats (localhost only)
- Max connections: 50,000
- Buffer size: 32,762 bytes
- Timeouts: 5s connect, 30s client/server
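Expressed as HAProxy directives, these settings correspond roughly to the following global/defaults tuning. This is a sketch; the Nix-generated haproxy.cfg may phrase them differently:

```
global
    maxconn 50000
    tune.bufsize 32762
    log /dev/log local0          # connections end up in the systemd journal

defaults
    log global
    timeout connect 5s
    timeout client  30s
    timeout server  30s

listen stats
    bind 127.0.0.1:8404
    stats enable
    stats uri /stats
```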
SSH Services
Port 1005: Admin SSH Access
- Primary SSH port for administrative access
- Configured in services.openssh.listenAddresses
- Used for system management and deployments
Port 2142: SSH Jump to srs2
- Forwards SSH connections to srs2 (141.56.51.2:80)
- 30-minute session timeout
- TCP keep-alive enabled
- Used for accessing legacy systems
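On the client side, reaching srs2 through this jump port is plain SSH against the proxy on port 2142; HAProxy forwards the raw TCP stream. A sketch (the host alias is hypothetical):

```
# ~/.ssh/config
Host srs2-legacy
    HostName 141.56.51.1
    Port 2142
```

Then ssh srs2-legacy lands directly on srs2's SSH daemon.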
Auto-Generated Forwarding
The proxy configuration is partially auto-generated:
- Static forwards: Manually defined in the forwards attribute set in default.nix
- Dynamic forwards: Automatically generated from all NixOS configurations in this flake that have services.nginx.enable = true
When you deploy a new host with Nginx enabled, the proxy automatically:
- Detects the nginx virtualHosts
- Extracts the IP address from the host configuration
- Generates HAProxy backend rules for ports 80 and 443
No manual proxy configuration needed for nginx-enabled hosts!
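The discovery step can be pictured as a filter over the flake's nixosConfigurations. This is a hypothetical sketch, not the actual fold in default.nix:

```nix
# Keep only hosts that run nginx; each remaining host then contributes
# its virtualHosts (domains) and its static IP to the HAProxy rules.
dynamicForwards = lib.filterAttrs
  (_name: host: host.config.services.nginx.enable)
  self.nixosConfigurations;
```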
Deployment Type
Unlike other hosts in this infrastructure, the proxy is a full VM (not an LXC container):
- Has dedicated hardware-configuration.nix
- Uses disko for declarative disk management (hetzner-disk.nix)
- Network interface: ens18 (not eth0 like LXC containers)
- Requires more resources and dedicated storage
Disko Configuration
The proxy uses Btrfs with the following layout (hetzner-disk.nix):
- Filesystem: Btrfs
- Compression: zstd
- Subvolumes:
  - / - Root filesystem
  - /nix - Nix store
  - /var - Variable data
  - /home - User home directories (if needed)
This provides better performance and snapshot capabilities compared to LXC containers.
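In disko's Btrfs schema, the layout above looks roughly like this (attribute names follow disko's documented options; the real hetzner-disk.nix may differ in subvolume names and mount options):

```nix
subvolumes = {
  "/root" = { mountpoint = "/";     mountOptions = [ "compress=zstd" ]; };
  "/nix"  = { mountpoint = "/nix";  mountOptions = [ "compress=zstd" "noatime" ]; };
  "/var"  = { mountpoint = "/var";  mountOptions = [ "compress=zstd" ]; };
  "/home" = { mountpoint = "/home"; mountOptions = [ "compress=zstd" ]; };
};
```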
Deployment
See the main README for deployment methods.
Initial Installation
Using nixos-anywhere (recommended):
nix run github:nix-community/nixos-anywhere -- --flake .#proxy --target-host root@141.56.51.1
This handles disk partitioning via disko automatically.
Updates
Note: SSH runs on port 1005, so you need an SSH config entry:
# ~/.ssh/config
Host 141.56.51.1
Port 1005
Then deploy:
# From local machine
nixos-rebuild switch --flake .#proxy --target-host root@141.56.51.1
# Or use auto-generated script
nix run .#proxy-update
Network Configuration
- Interface: ens18 (VM interface)
- IP: 141.56.51.1/24
- Gateway: 141.56.51.254
- DNS: 9.9.9.9, 1.1.1.1 (public DNS, not HTW internal)
- Firewall: nftables enabled
- Open ports: 22, 80, 443, 1005, 2142
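Expressed as NixOS options, this corresponds to something like the following (standard option names; the exact list lives in default.nix):

```nix
networking.nftables.enable = true;
networking.firewall.allowedTCPPorts = [ 22 80 443 1005 2142 ];
```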
Adding New Services
Method 1: Deploy a host with Nginx (Automatic)
The easiest method - just deploy a new host with nginx enabled:
# hosts/newservice/default.nix
services.nginx = {
enable = true;
virtualHosts."newservice.htw.stura-dresden.de" = {
forceSSL = true;
enableACME = true;
locations."/" = {
# your config
};
};
};
The flake automatically:
- Discovers the nginx virtualHost
- Extracts the IP address from networking configuration
- Generates HAProxy forwarding rules
No manual proxy changes needed!
You still need to:
- Add DNS record pointing to 141.56.51.1
- Deploy the proxy to pick up changes:
nix run .#proxy-update
Method 2: Manual forwarding (for non-Nginx services)
For services not using Nginx, manually add to the forwards attribute set in hosts/proxy/default.nix:
forwards = {
# ... existing forwards ...
newservice = {
dest = "141.56.51.XXX";
domain = "newservice.htw.stura-dresden.de";
httpPort = 80;
httpsPort = 443;
};
};
Then deploy the proxy.
SNI Routing Explained
SNI (Server Name Indication) is a TLS extension that includes the hostname in the ClientHello handshake.
How it works:
- Client initiates TLS connection to 141.56.51.1
- Client sends ClientHello with SNI field (e.g., "wiki.htw.stura-dresden.de")
- HAProxy inspects the SNI field (does not decrypt)
- HAProxy routes the entire encrypted connection to the backend (e.g., 141.56.51.13)
- Backend terminates TLS and serves the content
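The routing decision itself is just a lookup from SNI hostname to backend address. A toy shell illustration, using two of the mappings listed under Current Forwards (HAProxy does this natively via req_ssl_sni):

```shell
# Toy model of HAProxy's SNI -> backend lookup (mappings taken from
# the Current Forwards list on this page)
route_sni() {
  case "$1" in
    wiki.htw.stura-dresden.de)  echo "141.56.51.13:443" ;;
    cloud.htw.stura-dresden.de) echo "141.56.51.16:443" ;;
    *)                          echo "no-match" ;;
  esac
}

route_sni wiki.htw.stura-dresden.de    # prints 141.56.51.13:443
route_sni unknown.example.org          # prints no-match
```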
Benefits:
- No TLS decryption at proxy (end-to-end encryption)
- Each backend manages its own certificates
- Simple certificate renewal via ACME/Let's Encrypt
- No certificate syncing required
ACME Challenge Forwarding
Let's Encrypt verification works seamlessly:
- Backend requests certificate from Let's Encrypt
- Let's Encrypt queries DNS, finds 141.56.51.1
- Let's Encrypt requests http://domain/.well-known/acme-challenge/<token>
- HAProxy detects the ACME challenge path
- HAProxy forwards to backend without HTTPS redirect
- Backend responds with challenge token
- Let's Encrypt verifies and issues certificate
This is why HTTP→HTTPS redirect has an exception for /.well-known/acme-challenge/ paths.
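In HAProxy terms, the exception boils down to an ACL on the path prefix. A sketch (the generated rules may differ in detail):

```
frontend http-in
    bind *:80
    acl is_acme path_beg /.well-known/acme-challenge/
    # forward ACME requests to the matching backend instead of redirecting
    use_backend wiki_80 if is_acme { hdr(host) -i wiki.htw.stura-dresden.de }
    redirect scheme https code 301 if !is_acme
```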
HAProxy Stats
Access HAProxy statistics page (localhost only):
# SSH into proxy
ssh -p 1005 root@141.56.51.1
# Access stats via curl
curl http://127.0.0.1:8404/stats
# Or forward port to your local machine
ssh -p 1005 -L 8404:127.0.0.1:8404 root@141.56.51.1
# Then browse to http://localhost:8404/stats
The stats page shows:
- Current connections per backend
- Health check status
- Traffic statistics
- Error counts
Configuration Structure
The HAProxy configuration is generated from Nix using lib.foldlAttrs:
forwards = {
service_name = {
dest = "141.56.51.XXX";
domain = "service.htw.stura-dresden.de";
httpPort = 80;
httpsPort = 443;
};
};
This generates:
- ACL rules for HTTP Host header matching
- Backend definitions for ports 80 and 443
- SNI routing rules for HTTPS
- HTTP→HTTPS redirects (except ACME)
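As a rough shell analogue of that fold, here is what a single forwards entry expands to (service name and domain below are placeholders, not real entries):

```shell
# Emit the HTTP ACL, HTTP backend selector, and SNI rule that one
# forwards entry expands to (values below are placeholders)
service="newservice"
domain="newservice.htw.stura-dresden.de"

emit_rules() {
  printf 'acl is_%s hdr(host) -i %s\n' "$service" "$domain"
  printf 'use_backend %s_80 if is_%s\n' "$service" "$service"
  printf 'use_backend %s_443 if { req_ssl_sni -i %s }\n' "$service" "$domain"
}

emit_rules
```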
Current Forwards
The proxy currently forwards traffic for:
In this repository:
- git.adm.htw.stura-dresden.de → 141.56.51.7 (Forgejo)
- wiki.htw.stura-dresden.de → 141.56.51.13 (MediaWiki)
- pro.htw.stura-dresden.de → 141.56.51.15 (Redmine)
- cloud.htw.stura-dresden.de → 141.56.51.16 (Nextcloud)
External services (managed outside this repo):
- stura.htw-dresden.de → 141.56.51.3 (Plone)
- tix.htw.stura-dresden.de → 141.56.51.220 (Pretix)
- vot.htw.stura-dresden.de → 141.56.51.57 (OpenSlides)
- mail.htw.stura-dresden.de → 141.56.51.14 (Mail server)
- lists.htw.stura-dresden.de → 141.56.51.14 (Mailing lists)
Troubleshooting
HAProxy not starting
# Check HAProxy status
systemctl status haproxy
# Check configuration syntax
haproxy -c -f /etc/haproxy/haproxy.cfg
# View HAProxy logs
journalctl -u haproxy -f
Backend not reachable
# Check backend connectivity
curl -v http://141.56.51.XXX:80
curl -vk https://141.56.51.XXX:443
# Check HAProxy stats for backend status
curl http://127.0.0.1:8404/stats | grep backend_name
# Test DNS resolution
dig +short domain.htw.stura-dresden.de
# Check firewall rules
nft list ruleset | grep 141.56.51.XXX
ACME challenges failing
# Verify HTTP forwarding works
curl -v http://domain.htw.stura-dresden.de/.well-known/acme-challenge/test
# Check HAProxy ACL for ACME
grep -i acme /etc/haproxy/haproxy.cfg
# Verify no HTTPS redirect for ACME paths
curl -I http://domain.htw.stura-dresden.de/.well-known/acme-challenge/test
# Should return 200/404, not 301 redirect
SSH Jump not working
# Check SSH jump frontend
systemctl status haproxy
journalctl -u haproxy | grep ssh_jump
# Test connection to srs2
telnet 141.56.51.2 80
# Check HAProxy backend configuration
grep -A 5 "ssh_srs2" /etc/haproxy/haproxy.cfg
Files and Directories
- HAProxy config: /etc/haproxy/haproxy.cfg (generated by Nix)
- HAProxy socket: /run/haproxy/admin.sock (if enabled)
- Disk config: ./hetzner-disk.nix (disko configuration)
- Hardware config: ./hardware-configuration.nix (VM hardware)
- NixOS config: ./default.nix (proxy configuration)
Security Considerations
- No TLS decryption: End-to-end encryption maintained
- No access logs: HAProxy logs to journal (can be filtered)
- Stats page: Localhost only (not exposed)
- Firewall: Only necessary ports open
- SSH: Custom port (1005) reduces automated attacks
Central Reverse Proxy for Formerly Public IP Addresses
(Translated from the original German documentation)
The instances can largely keep running unchanged; only the DNS records have to be changed to point to 141.56.51.1.
HAProxy
HAProxy is used to forward the connections. HAProxy supports several modes; for our purposes, http and tcp are the relevant ones. Unencrypted connections are parsed and forwarded in http mode. For SSL connections, the hostname is observed during session setup and, based on it, the complete encrypted connection is forwarded to the respective system. This way all systems can continue to request their TLS certificates themselves with certbot, since the ACME challenge is forwarded as well.
Config
Relevant HAProxy config:
frontend http-in
bind *:80
acl is_cloud hdr(host) -i cloud.htw.stura-dresden.de
acl is_dat hdr(host) -i dat.htw.stura-dresden.de
acl is_plone hdr(host) -i stura.htw-dresden.de
acl is_plone_alt hdr(host) -i www.stura.htw-dresden.de
acl is_pro hdr(host) -i pro.htw.stura-dresden.de
acl is_tix hdr(host) -i tix.htw.stura-dresden.de
acl is_vot hdr(host) -i vot.htw.stura-dresden.de
acl is_wiki hdr(host) -i wiki.htw.stura-dresden.de
use_backend cloud_80 if is_cloud
use_backend dat_80 if is_dat
use_backend plone_80 if is_plone
use_backend plone_alt_80 if is_plone_alt
use_backend pro_80 if is_pro
use_backend tix_80 if is_tix
use_backend vot_80 if is_vot
use_backend wiki_80 if is_wiki
default_backend plone_80
frontend sni_router
bind *:443
mode tcp
tcp-request inspect-delay 1s
tcp-request content accept if { req_ssl_hello_type 1 }
use_backend cloud_443 if { req_ssl_sni -i cloud.htw.stura-dresden.de }
use_backend dat_443 if { req_ssl_sni -i dat.htw.stura-dresden.de }
use_backend plone_443 if { req_ssl_sni -i stura.htw-dresden.de }
use_backend plone_alt_443 if { req_ssl_sni -i www.stura.htw-dresden.de }
use_backend pro_443 if { req_ssl_sni -i pro.htw.stura-dresden.de }
use_backend tix_443 if { req_ssl_sni -i tix.htw.stura-dresden.de }
use_backend vot_443 if { req_ssl_sni -i vot.htw.stura-dresden.de }
use_backend wiki_443 if { req_ssl_sni -i wiki.htw.stura-dresden.de }
# this block is repeated for each backend
backend cloud_80
mode http
server cloud 141.56.51.16:80 # no check here - also proxy if haproxy thinks this is down
backend cloud_443
mode tcp
server cloud 141.56.51.16:443 check
...
See Also
- Main README - Deployment methods and architecture
- Git README - Forgejo configuration
- Wiki README - MediaWiki configuration
- Redmine README - Redmine configuration
- Nextcloud README - Nextcloud configuration
- HAProxy Documentation