
StuRa HTW Dresden Infrastructure - NixOS Configuration

Declarative infrastructure management for StuRa HTW Dresden using NixOS and a flake-based configuration. This repository replaces the previous hand-configured FreeBSD relay system with a modern, reproducible setup.

Architecture

Overview

This infrastructure uses a flake-based approach with automatic host discovery:

  • Centralized reverse proxy: HAProxy at 141.56.51.1 routes all traffic via SNI inspection and HTTP host headers
  • IPv6 gateway: Hetzner VPS at 2a01:4f8:1c19:96f8::1 forwards IPv6 traffic to the IPv4 proxy
  • Automatic host discovery: Each subdirectory in hosts/ becomes a NixOS configuration via builtins.readDir
  • Global configuration: Settings in default.nix are automatically applied to all hosts
  • ACME certificates: All services use Let's Encrypt certificates managed locally on each host
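The auto-discovery mechanism can be sketched roughly as follows. This is a simplified illustration of the idea, not the exact code in flake.nix:

```nix
# Sketch of automatic host discovery (simplified; the actual flake.nix may differ).
{
  outputs = { self, nixpkgs, ... }@inputs:
    let
      lib = nixpkgs.lib;
      # Every directory under hosts/ becomes a NixOS configuration.
      hostNames = lib.attrNames
        (lib.filterAttrs (_: type: type == "directory") (builtins.readDir ./hosts));
    in {
      nixosConfigurations = lib.genAttrs hostNames (name:
        lib.nixosSystem {
          system = "x86_64-linux";
          modules = [
            ./default.nix     # global settings applied to all hosts
            ./hosts/${name}   # host-specific configuration
          ];
        });
    };
}
```

With this pattern, adding a host is purely additive: a new directory under hosts/ appears in nixosConfigurations on the next evaluation, with no edits to flake.nix.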

Network

  • Network: 141.56.51.0/24
  • Gateway: 141.56.51.254
  • DNS: 141.56.1.1, 141.56.1.2 (HTW internal)
  • Domain: htw.stura-dresden.de

Repository Structure

stura-infra/
├── flake.nix              # Main flake configuration with auto-discovery
├── default.nix            # Global settings applied to all hosts
├── hosts/                 # Host-specific configurations
│   ├── proxy/            # Central reverse proxy (HAProxy)
│   │   ├── default.nix
│   │   ├── hardware-configuration.nix
│   │   ├── hetzner-disk.nix
│   │   └── README.md
│   ├── v6proxy/          # IPv6 gateway (Hetzner VPS)
│   │   ├── default.nix
│   │   ├── hardware-configuration.nix
│   │   ├── hetzner-disk.nix
│   │   └── README.md
│   ├── git/              # Forgejo git server
│   │   └── default.nix
│   ├── wiki/             # MediaWiki instance
│   │   └── default.nix
│   ├── nextcloud/        # Nextcloud instance
│   │   └── default.nix
│   └── redmine/          # Redmine project management
│       └── default.nix
└── README.md             # This file

Host Overview

| Host | IP | Type | Services | Documentation |
|------|----|------|----------|---------------|
| proxy | 141.56.51.1 | VM | HAProxy, SSH jump host | hosts/proxy/README.md |
| v6proxy | 178.104.18.93 (IPv4), 2a01:4f8:1c19:96f8::1 (IPv6) | Hetzner VPS | HAProxy (IPv6 gateway) | hosts/v6proxy/README.md |
| git | 141.56.51.7 | LXC | Forgejo, Nginx | hosts/git/README.md |
| wiki | 141.56.51.13 | LXC | MediaWiki, MariaDB, Apache | hosts/wiki/README.md |
| redmine | 141.56.51.15 | LXC | Redmine, Nginx | hosts/redmine/README.md |
| nextcloud | 141.56.51.16 | LXC | Nextcloud, PostgreSQL, Redis, Nginx | hosts/nextcloud/README.md |

Deployment Methods

Method 1: Initial Installation with nixos-anywhere

Use nixos-anywhere for the initial system installation. It handles disk partitioning (via disko) and bootstrapping automatically.

For VM hosts (proxy):

nix run github:nix-community/nixos-anywhere -- --flake .#proxy --target-host root@141.56.51.1

For LXC containers (git, wiki, redmine, nextcloud):

nix run github:nix-community/nixos-anywhere -- --flake .#git --target-host root@141.56.51.7

This method is ideal for:

  • First-time installation on bare metal or fresh VMs
  • Complete system rebuilds
  • Migration to new hardware

Method 2: Container Tarball Deployment to Proxmox

Build and deploy LXC container tarballs for git, wiki, redmine, and nextcloud hosts.

Step 1: Build container tarball locally

nix build .#containers-git
# Result will be in result/tarball/nixos-system-x86_64-linux.tar.xz

Step 2: Copy to Proxmox host

scp result/tarball/nixos-system-x86_64-linux.tar.xz root@proxmox-host:/var/lib/vz/template/cache/

Step 3: Create container on Proxmox

# Example for git host (container ID 107, adjust as needed)
pct create 107 /var/lib/vz/template/cache/nixos-system-x86_64-linux.tar.xz \
  --hostname git \
  --net0 name=eth0,bridge=vmbr0,ip=141.56.51.7/24,gw=141.56.51.254 \
  --memory 2048 \
  --cores 2 \
  --rootfs local-lvm:8 \
  --unprivileged 1 \
  --features nesting=1

# Configure storage and settings via Proxmox web interface if needed

Step 4: Start container

pct start 107

Step 5: Post-deployment configuration

  • Access container: pct enter 107
  • Follow host-specific post-deployment steps in each host's README.md

Available container tarballs:

  • nix build .#containers-git
  • nix build .#containers-wiki
  • nix build .#containers-redmine
  • nix build .#containers-nextcloud

Note: The proxy host is a full VM and does not have a container tarball. Use Method 1 or 3 for proxy deployment.

Method 3: ISO Installer

Build a bootable ISO installer for manual installation on VMs or bare metal.

Build ISO:

nix build .#installer-iso
# Result will be in result/iso/nixos-*.iso

Build VM for testing:

nix build .#installer-vm

Deployment:

  1. Upload ISO to Proxmox storage
  2. Create VM and attach ISO as boot device
  3. Boot VM and follow installation prompts
  4. Run installation commands manually
  5. Reboot and remove ISO

Method 4: Regular Updates

For already-deployed systems, apply configuration updates:

Option A: Using nixos-rebuild from your local machine

nixos-rebuild switch --flake .#<hostname> --target-host root@<ip>

Example:

nixos-rebuild switch --flake .#proxy --target-host root@141.56.51.1

Note: This requires an SSH config entry for the proxy (uses port 1005):

# ~/.ssh/config
Host 141.56.51.1
    Port 1005

Option B: Using auto-generated update scripts

The flake generates convenience scripts for each host:

nix run .#git-update
nix run .#wiki-update
nix run .#redmine-update
nix run .#nextcloud-update
nix run .#proxy-update

These scripts automatically extract the target IP from the configuration.
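The generation of such a script might look roughly like this inside the flake. This is a hedged sketch, not the actual implementation; the `mkUpdateScript` name and the attribute path used to read the IP (first IPv4 address on eth0) are assumptions:

```nix
# Sketch: generating a "<host>-update" script (illustrative only).
mkUpdateScript = name:
  let
    # Assumption: the first configured IPv4 address on eth0 is the deploy target.
    ip = (builtins.head
      self.nixosConfigurations.${name}.config.networking.interfaces.eth0.ipv4.addresses).address;
  in pkgs.writeShellScriptBin "${name}-update" ''
    exec nixos-rebuild switch --flake .#${name} --target-host root@${ip}
  '';
```

Because the IP comes from the host's own configuration, the scripts never drift out of sync with the deployed addresses.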

Option C: Remote execution (no local Nix installation)

If Nix isn't installed locally, run the command on the target system:

ssh root@141.56.51.1 "nixos-rebuild switch --flake git+https://codeberg.org/stura-htw-dresden/stura-infra#proxy"

Replace proxy with the appropriate hostname and adjust the IP address.

Required DNS Records

The following DNS records must be configured for the current infrastructure:

| Name | Type | Value | Service |
|------|------|-------|---------|
| *.htw.stura-dresden.de | CNAME | proxy.htw.stura-dresden.de | Reverse proxy |
| proxy.htw.stura-dresden.de | A | 141.56.51.1 | Proxy (IPv4) |
| proxy.htw.stura-dresden.de | AAAA | 2a01:4f8:1c19:96f8::1 | IPv6 gateway (v6proxy) |

Note: All public services point to the proxy IPs. The IPv4 proxy (141.56.51.1) handles SNI-based routing to backend hosts. The IPv6 gateway (v6proxy at 2a01:4f8:1c19:96f8::1) forwards all IPv6 traffic to the IPv4 proxy. Backend IPs are internal and not exposed in DNS.

Additional services managed by the proxy (not in this repository):

  • stura.htw-dresden.de → Plone
  • tix.htw.stura-dresden.de → Pretix
  • vot.htw.stura-dresden.de → OpenSlides
  • mail.htw.stura-dresden.de → Mail server

Development

Code Formatting

Format all Nix files using the RFC-style formatter:

nix fmt

Testing Changes

Before deploying to production:

  1. Test flake evaluation: nix flake check
  2. Build configurations locally: nix build .#nixosConfigurations.<hostname>.config.system.build.toplevel
  3. Review generated configurations
  4. Deploy to test systems first if available

Adding a New Host

  1. Create host directory:

    mkdir hosts/newhostname
    
  2. Create hosts/newhostname/default.nix:

    { config, lib, pkgs, modulesPath, ... }:
    {
      imports = [
        "${modulesPath}/virtualisation/proxmox-lxc.nix"  # For LXC containers
        # Or for VMs:
        # ./hardware-configuration.nix
      ];
    
      networking = {
        hostName = "newhostname";
        interfaces.eth0.ipv4.addresses = [{  # or ens18 for VMs
          address = "141.56.51.XXX";
          prefixLength = 24;
        }];
        defaultGateway.address = "141.56.51.254";
        firewall.allowedTCPPorts = [ 80 443 ];
      };
    
      # Add your services here
      services.nginx.enable = true;
      # ...
    
      system.stateVersion = "25.11";
    }
    
  3. The flake automatically discovers the new host via builtins.readDir ./hosts

  4. If the host runs nginx, the proxy automatically adds forwarding rules (you still need to add DNS records)

  5. Deploy:

    nix run github:nix-community/nixos-anywhere -- --flake .#newhostname --target-host root@141.56.51.XXX
    

Repository Information

Flake Inputs

  • nixpkgs: NixOS 25.11
  • authentik: Identity provider (nix-community/authentik-nix)
  • mailserver: Simple NixOS mailserver (nixos-25.11 branch)
  • sops: Secret management (Mic92/sops-nix)
  • disko: Declarative disk partitioning

Common Patterns

Network Configuration

All hosts follow this pattern:

networking = {
  hostName = "<name>";
  interfaces.<interface>.ipv4.addresses = [{
    address = "<ip>";
    prefixLength = 24;
  }];
  defaultGateway.address = "141.56.51.254";
};
  • LXC containers use eth0
  • VMs/bare metal typically use ens18

Nginx + ACME Pattern

For web services:

services.nginx = {
  enable = true;
  virtualHosts."<fqdn>" = {
    forceSSL = true;
    enableACME = true;
    locations."/" = {
      # service config
    };
  };
};

This automatically:

  • Integrates with the proxy's ACME challenge forwarding
  • Generates HAProxy backend configuration
  • Requests Let's Encrypt certificates
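On the proxy side, the SNI-based routing described above can be expressed with an HAProxy TCP frontend along these lines. This is a minimal sketch, not the configuration the flake actually generates; the backend name and hostname are illustrative:

```nix
# Sketch of SNI-based TCP routing on the proxy (illustrative, not the generated config).
services.haproxy = {
  enable = true;
  config = ''
    frontend https_sni
      bind 141.56.51.1:443
      mode tcp
      # Wait for the TLS ClientHello so the SNI field is available for inspection.
      tcp-request inspect-delay 5s
      tcp-request content accept if { req.ssl_hello_type 1 }
      use_backend git_backend if { req.ssl_sni -i git.htw.stura-dresden.de }

    backend git_backend
      mode tcp
      server git 141.56.51.7:443
  '';
};
```

Routing in TCP mode means TLS is terminated on the backend hosts, which is why each host requests its own Let's Encrypt certificate rather than the proxy holding them centrally.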

Firewall Rules

Hosts only need to allow traffic from the proxy:

networking.firewall.allowedTCPPorts = [ 80 443 ];

SSH ports vary:

  • Proxy: port 1005 (admin access)
  • Other hosts: port 22 (default)