# Proxy Host - Central Reverse Proxy

Central reverse proxy at 141.56.51.1 running as a full VM (not LXC container).

## Overview

- **Hostname**: proxy
- **IP Address**: 141.56.51.1
- **Type**: Full VM (not LXC)
- **Services**: HAProxy, BIND DNS, Chrony NTP, OpenSSH (port 1005), SSH jump to srs2 (port 2142, via HAProxy)
- **Role**: Central traffic router, DNS resolver, and NTP server for all StuRa HTW Dresden services

## Architecture

The proxy is the central entry point for all HTTP/HTTPS traffic:

- All public DNS records point to this IP (141.56.51.1)
- HAProxy performs SNI-based routing for HTTPS traffic
- HTTP Host header-based routing for unencrypted traffic
- Automatic redirect from HTTP to HTTPS (except ACME challenges)
- Backend services handle their own TLS certificates

## Services

### HAProxy

HAProxy routes traffic using two methods:

**1. HTTP Mode (Port 80)**

- Inspects the HTTP Host header
- Routes requests to the appropriate backend
- Forwards ACME challenges (`/.well-known/acme-challenge/`) to backends
- Redirects all other HTTP traffic to HTTPS (301 redirect)
- Default backend serves an index page listing all services

**2. TCP Mode (Port 443)**

- SNI inspection during the TLS handshake
- Routes encrypted traffic without decryption
- Backends terminate TLS and serve their own certificates
- 1-second inspection delay to buffer packets

**Key features:**

- Logging: All connections logged to the systemd journal
- Stats page: Available at http://127.0.0.1:8404/stats (localhost only)
- Max connections: 50,000
- Buffer size: 32,762 bytes
- Timeouts: 5s connect, 30s client/server

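The HTTP-mode behavior described above (Host-header routing, the ACME exception, and the HTTPS redirect) could be sketched in a NixOS module roughly like this. This is a hypothetical illustration only — the real configuration is generated from the `forwards` set in default.nix, and the backend names here (`wiki_80`, `index_80`) are assumptions:

```nix
services.haproxy = {
  enable = true;
  config = ''
    frontend http-in
      bind *:80
      # Let ACME challenges through unencrypted ...
      acl is_acme path_beg /.well-known/acme-challenge/
      acl is_wiki hdr(host) -i wiki.htw.stura-dresden.de
      # ... and redirect everything else to HTTPS
      http-request redirect scheme https code 301 unless is_acme
      use_backend wiki_80 if is_wiki
      default_backend index_80
  '';
};
```
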
### BIND DNS Resolver

The proxy provides recursive DNS resolution for the internal network (141.56.51.0/24).

**Configuration:**

- **Service**: BIND9 recursive resolver
- **Listen address**: 141.56.51.1
- **Port**: 53 (UDP/TCP)
- **Allowed networks**: 127.0.0.0/8, 141.56.51.0/24
- **Forwarders**: 9.9.9.9 (Quad9), 1.1.1.1 (Cloudflare)
- **IPv6**: Disabled

**Usage:**

All hosts in the internal network can configure their DNS resolver to use `141.56.51.1` for name resolution.

Example configuration for other hosts:

```nix
networking.nameservers = [ "141.56.51.1" ];
```

**Why BIND?**

- Provides caching for frequently accessed domains
- Reduces external DNS queries and improves performance
- Allows central control of DNS resolution policies
- More reliable than relying solely on external DNS servers

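The resolver settings above map naturally onto the NixOS `services.bind` options. A minimal sketch, assuming the stock NixOS module is used (the actual configuration on the proxy may differ):

```nix
services.bind = {
  enable = true;
  listenOn = [ "141.56.51.1" ];                        # listen address
  listenOnIpv6 = [ ];                                  # IPv6 disabled
  cacheNetworks = [ "127.0.0.0/8" "141.56.51.0/24" ];  # allowed networks
  forwarders = [ "9.9.9.9" "1.1.1.1" ];                # Quad9 and Cloudflare
};
```
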
### Chrony NTP Server

The proxy serves network time to all systems in the internal network.

**Configuration:**

- **Service**: chrony NTP server
- **Port**: 123 (UDP)
- **Allowed network**: 141.56.51.0/24
- **Upstream servers**: pool.ntp.org
- **Sync mode**: Fast initial sync (iburst)
- **Fallback**: Serves time even if not synced (stratum 10)

**Usage:**

Other hosts can synchronize their system time with the proxy:

```nix
services.chrony = {
  enable = true;
  servers = [ "141.56.51.1" ];
};
```

Or for systems using systemd-timesyncd:

```nix
services.timesyncd = {
  enable = true;
  servers = [ "141.56.51.1" ];
};
```

**Benefits:**

- Centralized time synchronization for all internal hosts
- Reduced external NTP queries from the HTW network
- Consistent time across all StuRa infrastructure
- Local fallback if upstream NTP servers are unreachable

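The server side described in the configuration bullets above could be sketched with the NixOS `services.chrony` module like this. A hypothetical illustration, not the proxy's actual configuration:

```nix
services.chrony = {
  enable = true;
  servers = [ "pool.ntp.org" ];  # upstream pool
  serverOption = "iburst";       # fast initial sync
  extraConfig = ''
    # serve time to the internal network
    allow 141.56.51.0/24
    # keep serving time even when unsynced (fallback)
    local stratum 10
  '';
};
```
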
### SSH Services

**Port 1005: Admin SSH Access**

- Primary SSH port for administrative access
- Configured in `services.openssh.listenAddresses`
- Used for system management and deployments

**Port 2142: SSH Jump to srs2**

- Forwards SSH connections to srs2 (141.56.51.2:80)
- 30-minute session timeout
- TCP keep-alive enabled
- Used for accessing legacy systems

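The port-2142 jump described above could look roughly like this as an HAProxy TCP frontend. A hypothetical sketch; the frontend/backend names follow the `ssh_jump`/`ssh_srs2` names used in the troubleshooting section, but the real definition is in default.nix:

```nix
services.haproxy.config = ''
  frontend ssh_jump
    bind *:2142
    mode tcp
    option tcpka              # TCP keep-alive
    timeout client 30m        # 30-minute session timeout
    default_backend ssh_srs2

  backend ssh_srs2
    mode tcp
    timeout server 30m
    server srs2 141.56.51.2:80
'';
```
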
### Documentation Site (Hugo)
|
|
|
|
The proxy hosts a self-documenting Hugo site at **docs.adm.htw.stura-dresden.de** that automatically builds from README files across the repository.
|
|
|
|
**Architecture:**
|
|
|
|
1. **Hugo Package Build** (in flake.nix):
|
|
- `docs-site` package built using `stdenv.mkDerivation`
|
|
- Uses Hugo static site generator with the hugo-book theme
|
|
- Script `docs/build-docs.sh` collects README files from:
|
|
- Main `README.md` → homepage
|
|
- `CLAUDE.md` → Claude Code guide page
|
|
- `hosts/*/README.md` → individual host documentation
|
|
- `keys/README.md` → key management guide
|
|
- Converts relative markdown links to work with Hugo's URL structure
|
|
- Builds static HTML site and stores it in the Nix store
|
|
|
|
2. **Local Nginx Server**:
|
|
- Serves the Hugo site on localhost only (not directly accessible)
|
|
- HTTP on 127.0.0.1:8080
|
|
- HTTPS on 127.0.0.1:8443
|
|
- Uses ACME certificate for `docs.adm.htw.stura-dresden.de`
|
|
- Clean URL handling: `try_files $uri $uri/ $uri.html =404`
|
|
|
|
3. **HAProxy Forwarding**:
|
|
- External requests to docs.adm.htw.stura-dresden.de forwarded to local nginx
|
|
- Entry in the `forwards` map routes port 80 → 8080, port 443 → 8443
|
|
- ACME challenges forwarded for automatic certificate renewal
|
|
|
|
**Updating Documentation:**
|
|
|
|
Just edit any README file in the repository and redeploy the proxy:
|
|
```bash
|
|
nix run .#proxy-update
|
|
```
|
|
|
|
The Hugo site will rebuild with the latest content from all README files.
|
|
|
|
**Hugo Configuration:**
|
|
- Config: `docs/hugo.yaml`
|
|
- Theme: hugo-book v13 (alex-shpak/hugo-book)
|
|
- Base URL: https://docs.adm.htw.stura-dresden.de/
|
|
- Repository link: https://codeberg.org/stura-htw-dresden/stura-infra
|
|
|
|
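The package build in step 1 might look roughly like this. A hypothetical sketch only — the real derivation lives in flake.nix, and the build script handles the README collection and link rewriting described above:

```nix
# Sketch -- names, phases, and flags are assumptions, not the actual flake code.
docs-site = pkgs.stdenv.mkDerivation {
  name = "docs-site";
  src = ./.;
  nativeBuildInputs = [ pkgs.hugo pkgs.bash ];
  buildPhase = ''
    # collect READMEs into the Hugo content tree
    bash docs/build-docs.sh
    # build the static site straight into the Nix store output
    hugo --config docs/hugo.yaml --destination $out
  '';
  dontInstall = true;
};
```
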
### Auto-Generated Forwarding

The proxy configuration is **partially auto-generated**:

1. **Static forwards**: Manually defined in the `forwards` attribute set in default.nix
2. **Dynamic forwards**: Automatically generated from all NixOS configurations in this flake that have `services.nginx.enable = true`

When you deploy a new host with Nginx enabled, the proxy automatically:

- Detects the nginx virtualHosts
- Extracts the IP address from the host configuration
- Generates HAProxy backend rules for ports 80 and 443

**No manual proxy configuration needed for nginx-enabled hosts!**

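The dynamic-forward generation could be sketched like this. A hypothetical illustration — the attribute paths (`eth0`, the first virtualHost name) are assumptions, not the actual implementation in this flake:

```nix
let
  # keep only hosts in this flake that enable nginx
  nginxHosts = lib.filterAttrs
    (_: host: host.config.services.nginx.enable)
    self.nixosConfigurations;
  # derive a forward entry from each host's own configuration
  toForward = _: host: {
    dest = (builtins.head
      host.config.networking.interfaces.eth0.ipv4.addresses).address;
    domain = builtins.head
      (builtins.attrNames host.config.services.nginx.virtualHosts);
    httpPort = 80;
    httpsPort = 443;
  };
in
  lib.mapAttrs toForward nginxHosts
```
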
## Deployment Type

Unlike other hosts in this infrastructure, the proxy is a **full VM** (not an LXC container):

- Has a dedicated hardware-configuration.nix
- Uses disko for declarative disk management (hetzner-disk.nix)
- Network interface: `ens18` (not `eth0` like LXC containers)
- Requires more resources and dedicated storage

## Disko Configuration

The proxy uses Btrfs with the following layout (hetzner-disk.nix):

- **Filesystem**: Btrfs
- **Compression**: zstd
- **Subvolumes**:
  - `/` - Root filesystem
  - `/nix` - Nix store
  - `/var` - Variable data
  - `/home` - User home directories (if needed)

This provides better performance and snapshot capabilities compared to LXC containers.

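The layout above could be expressed in disko roughly as follows. An abbreviated hypothetical sketch — boot/ESP partitions, sizes, and the device path are omitted or assumed; see hetzner-disk.nix for the real definition:

```nix
disko.devices.disk.main = {
  type = "disk";
  device = "/dev/sda";  # assumed device name
  content = {
    type = "gpt";
    partitions.root = {
      size = "100%";
      content = {
        type = "btrfs";
        subvolumes = {
          # zstd compression on every subvolume
          "/rootfs" = { mountpoint = "/";     mountOptions = [ "compress=zstd" ]; };
          "/nix"    = { mountpoint = "/nix";  mountOptions = [ "compress=zstd" ]; };
          "/var"    = { mountpoint = "/var";  mountOptions = [ "compress=zstd" ]; };
          "/home"   = { mountpoint = "/home"; mountOptions = [ "compress=zstd" ]; };
        };
      };
    };
  };
};
```
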
## Deployment

See the [main README](../../README.md) for deployment methods.

### Initial Installation

**Using nixos-anywhere (recommended):**

```bash
nix run github:nix-community/nixos-anywhere -- --flake .#proxy --target-host root@141.56.51.1
```

This handles disk partitioning via disko automatically.

### Updates

**Note**: SSH runs on port 1005, so you need an SSH config entry:

```bash
# ~/.ssh/config
Host 141.56.51.1
  Port 1005
```

Then deploy:

```bash
# From local machine
nixos-rebuild switch --flake .#proxy --target-host root@141.56.51.1

# Or use the auto-generated script
nix run .#proxy-update
```

## Network Configuration

- **Interface**: ens18 (VM interface)
- **IP**: 141.56.51.1/24
- **Gateway**: 141.56.51.254
- **DNS**: 9.9.9.9, 1.1.1.1 (public DNS, not HTW internal)
- **Firewall**: nftables enabled
- **Open TCP ports**: 22, 53 (DNS), 80, 443, 1005, 2142
- **Open UDP ports**: 53 (DNS), 123 (NTP)

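The firewall settings above correspond to a NixOS configuration along these lines. A sketch assuming the standard `networking.firewall` options back the nftables ruleset:

```nix
networking.nftables.enable = true;
networking.firewall = {
  enable = true;
  allowedTCPPorts = [ 22 53 80 443 1005 2142 ];  # SSH, DNS, HTTP(S), admin SSH, SSH jump
  allowedUDPPorts = [ 53 123 ];                  # DNS, NTP
};
```
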
## Adding New Services

### Method 1: Deploy a host with Nginx (Automatic)

The easiest method - just deploy a new host with nginx enabled:

```nix
# hosts/newservice/default.nix
services.nginx = {
  enable = true;
  virtualHosts."newservice.htw.stura-dresden.de" = {
    forceSSL = true;
    enableACME = true;
    locations."/" = {
      # your config
    };
  };
};
```

The flake automatically:

1. Discovers the nginx virtualHost
2. Extracts the IP address from the networking configuration
3. Generates HAProxy forwarding rules
4. No manual proxy changes needed!

**You still need to:**

- Add a DNS record pointing to 141.56.51.1
- Deploy the proxy to pick up the changes: `nix run .#proxy-update`

### Method 2: Manual forwarding (for non-Nginx services)

For services not using Nginx, manually add an entry to the `forwards` attribute set in hosts/proxy/default.nix:

```nix
forwards = {
  # ... existing forwards ...
  newservice = {
    dest = "141.56.51.XXX";
    domain = "newservice.htw.stura-dresden.de";
    httpPort = 80;
    httpsPort = 443;
  };
};
```

Then deploy the proxy.

## SNI Routing Explained

**SNI (Server Name Indication)** is a TLS extension that includes the hostname in the ClientHello handshake.

How it works:

1. Client initiates a TLS connection to 141.56.51.1
2. Client sends a ClientHello with an SNI field (e.g., "wiki.htw.stura-dresden.de")
3. HAProxy inspects the SNI field (does not decrypt)
4. HAProxy routes the entire encrypted connection to the backend (e.g., 141.56.51.13)
5. Backend terminates TLS and serves the content

**Benefits:**

- No TLS decryption at the proxy (end-to-end encryption)
- Each backend manages its own certificates
- Simple certificate renewal via ACME/Let's Encrypt
- No certificate syncing required

## ACME Challenge Forwarding

Let's Encrypt verification works seamlessly:

1. Backend requests a certificate from Let's Encrypt
2. Let's Encrypt queries DNS and finds 141.56.51.1
3. Let's Encrypt requests `http://domain/.well-known/acme-challenge/<token>`
4. HAProxy detects the ACME challenge path
5. HAProxy forwards to the backend without the HTTPS redirect
6. Backend responds with the challenge token
7. Let's Encrypt verifies and issues the certificate

This is why the HTTP→HTTPS redirect has an exception for `/.well-known/acme-challenge/` paths.

## HAProxy Stats

Access the HAProxy statistics page (localhost only):

```bash
# SSH into the proxy
ssh -p 1005 root@141.56.51.1

# Access stats via curl
curl http://127.0.0.1:8404/stats

# Or forward the port to your local machine
ssh -p 1005 -L 8404:127.0.0.1:8404 root@141.56.51.1
# Then browse to http://localhost:8404/stats
```

The stats page shows:

- Current connections per backend
- Health check status
- Traffic statistics
- Error counts

## Configuration Structure

The HAProxy configuration is generated from Nix using `lib.foldlAttrs`:

```nix
forwards = {
  service_name = {
    dest = "141.56.51.XXX";
    domain = "service.htw.stura-dresden.de";
    httpPort = 80;
    httpsPort = 443;
  };
};
```

This generates:

- ACL rules for HTTP Host header matching
- Backend definitions for ports 80 and 443
- SNI routing rules for HTTPS
- HTTP→HTTPS redirects (except ACME)

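As a rough illustration of the `lib.foldlAttrs` approach, the HTTP ACL and routing lines could be accumulated like this. A hypothetical sketch — the real generator in default.nix also emits backends, SNI rules, and the redirects:

```nix
# fold the forwards set into a string of HAProxy config lines
httpAcls = lib.foldlAttrs
  (acc: name: fwd: acc + ''
    acl is_${name} hdr(host) -i ${fwd.domain}
    use_backend ${name}_80 if is_${name}
  '')
  ""
  forwards;
```
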
## Current Forwards

The proxy currently forwards traffic for:

**In this repository:**

- docs.adm.htw.stura-dresden.de → 127.0.0.1 (Hugo documentation site, ports 8080/8443)
- git.adm.htw.stura-dresden.de → 141.56.51.7 (Forgejo)
- wiki.htw.stura-dresden.de → 141.56.51.13 (MediaWiki)
- pro.htw.stura-dresden.de → 141.56.51.15 (Redmine)
- cloud.htw.stura-dresden.de → 141.56.51.16 (Nextcloud)

**External services (managed outside this repo):**

- stura.htw-dresden.de → 141.56.51.3 (Plone)
- tix.htw.stura-dresden.de → 141.56.51.220 (Pretix)
- vot.htw.stura-dresden.de → 141.56.51.57 (OpenSlides)
- mail.htw.stura-dresden.de → 141.56.51.14 (Mail server)
- lists.htw.stura-dresden.de → 141.56.51.14 (Mailing lists)

## Troubleshooting

### HAProxy not starting

```bash
# Check HAProxy status
systemctl status haproxy

# Check configuration syntax
haproxy -c -f /etc/haproxy/haproxy.cfg

# View HAProxy logs
journalctl -u haproxy -f
```

### Backend not reachable

```bash
# Check backend connectivity
curl -v http://141.56.51.XXX:80
curl -vk https://141.56.51.XXX:443

# Check HAProxy stats for backend status
curl http://127.0.0.1:8404/stats | grep backend_name

# Test DNS resolution
dig +short domain.htw.stura-dresden.de

# Check firewall rules
nft list ruleset | grep 141.56.51.XXX
```

### ACME challenges failing

```bash
# Verify HTTP forwarding works
curl -v http://domain.htw.stura-dresden.de/.well-known/acme-challenge/test

# Check the HAProxy ACL for ACME
grep -i acme /etc/haproxy/haproxy.cfg

# Verify there is no HTTPS redirect for ACME paths
curl -I http://domain.htw.stura-dresden.de/.well-known/acme-challenge/test
# Should return 200/404, not a 301 redirect
```

### SSH Jump not working

```bash
# Check the SSH jump frontend
systemctl status haproxy
journalctl -u haproxy | grep ssh_jump

# Test the connection to srs2
telnet 141.56.51.2 80

# Check the HAProxy backend configuration
grep -A 5 "ssh_srs2" /etc/haproxy/haproxy.cfg
```

### DNS resolution not working

```bash
# Check BIND status
systemctl status named

# View BIND logs
journalctl -u named -f

# Test DNS resolution from the proxy
dig @127.0.0.1 google.com

# Test DNS resolution from another host
dig @141.56.51.1 google.com

# Check the BIND configuration
named-checkconf /etc/bind/named.conf

# Check the allowed networks
grep -i "allow-query" /etc/bind/named.conf
```

### NTP synchronization not working

```bash
# Check chrony status
systemctl status chronyd

# View chrony tracking information
chronyc tracking

# Check chrony sources
chronyc sources -v

# View chrony logs
journalctl -u chronyd -f

# Test NTP from another host
chronyc -h 141.56.51.1 tracking

# Check whether the NTP port is reachable
nc -uv 141.56.51.1 123
```

## Files and Directories

- **HAProxy config**: `/etc/haproxy/haproxy.cfg` (generated by Nix)
- **HAProxy socket**: `/run/haproxy/admin.sock` (if enabled)
- **Disk config**: `./hetzner-disk.nix` (disko configuration)
- **Hardware config**: `./hardware-configuration.nix` (VM hardware)
- **NixOS config**: `./default.nix` (proxy configuration)

## Security Considerations

- **No TLS decryption**: End-to-end encryption maintained
- **No access logs**: HAProxy logs to the journal (can be filtered)
- **Stats page**: Localhost only (not exposed)
- **Firewall**: Only necessary ports open
- **SSH**: Custom port (1005) reduces automated attacks

---

## Central Reverse Proxy for Formerly Public IP Addresses

(Translated from the original German documentation.)

The instances can keep running largely unchanged; only the DNS records need to be pointed at 141.56.51.1.

### HAProxy

HAProxy is used to forward the connections.
HAProxy supports several modes; for us, http and tcp are relevant.
Unencrypted connections are parsed and forwarded in http mode.
For TLS connections, the hostname is observed during session setup, and based on it the entire encrypted connection is forwarded to the respective system.
This way all systems can continue to request their TLS certificates themselves with certbot, since the ACME challenge is forwarded as well.

### Config

Relevant HAProxy config:

```
frontend http-in
    bind *:80

    acl is_cloud hdr(host) -i cloud.htw.stura-dresden.de
    acl is_dat hdr(host) -i dat.htw.stura-dresden.de
    acl is_plone hdr(host) -i stura.htw-dresden.de
    acl is_plone_alt hdr(host) -i www.stura.htw-dresden.de
    acl is_pro hdr(host) -i pro.htw.stura-dresden.de
    acl is_tix hdr(host) -i tix.htw.stura-dresden.de
    acl is_vot hdr(host) -i vot.htw.stura-dresden.de
    acl is_wiki hdr(host) -i wiki.htw.stura-dresden.de

    use_backend cloud_80 if is_cloud
    use_backend dat_80 if is_dat
    use_backend plone_80 if is_plone
    use_backend plone_alt_80 if is_plone_alt
    use_backend pro_80 if is_pro
    use_backend tix_80 if is_tix
    use_backend vot_80 if is_vot
    use_backend wiki_80 if is_wiki

    default_backend plone_80

frontend sni_router
    bind *:443
    mode tcp
    tcp-request inspect-delay 1s
    tcp-request content accept if { req_ssl_hello_type 1 }

    use_backend cloud_443 if { req_ssl_sni -i cloud.htw.stura-dresden.de }
    use_backend dat_443 if { req_ssl_sni -i dat.htw.stura-dresden.de }
    use_backend plone_443 if { req_ssl_sni -i stura.htw-dresden.de }
    use_backend plone_alt_443 if { req_ssl_sni -i www.stura.htw-dresden.de }
    use_backend pro_443 if { req_ssl_sni -i pro.htw.stura-dresden.de }
    use_backend tix_443 if { req_ssl_sni -i tix.htw.stura-dresden.de }
    use_backend vot_443 if { req_ssl_sni -i vot.htw.stura-dresden.de }
    use_backend wiki_443 if { req_ssl_sni -i wiki.htw.stura-dresden.de }

# this block is repeated for each backend
backend cloud_80
    mode http
    server cloud 141.56.51.16:80 # no check here - also proxy if haproxy thinks this is down

backend cloud_443
    mode tcp
    server cloud 141.56.51.16:443 check

...
```

## See Also

- [Main README](../../README.md) - Deployment methods and architecture
- [Git README](../git/README.md) - Forgejo configuration
- [Wiki README](../wiki/README.md) - MediaWiki configuration
- [Redmine README](../redmine/README.md) - Redmine configuration
- [Nextcloud README](../nextcloud/README.md) - Nextcloud configuration
- [HAProxy Documentation](http://www.haproxy.org/#docs)