readme docs

goeranh 2026-03-13 16:59:54 +01:00
parent 6e0d407b1c
commit 9466ab3656
6 changed files with 1872 additions and 53 deletions

README.md
# StuRa HTW Dresden Infrastructure - NixOS Configuration
Declarative infrastructure management for StuRa HTW Dresden using NixOS and a flake-based configuration. This repository replaces the hand-configured FreeBSD relay system with a modern, reproducible infrastructure.
## Architecture
### Overview
This infrastructure uses a flake-based approach with automatic host discovery:
- **Centralized reverse proxy**: HAProxy at 141.56.51.1 routes all traffic via SNI inspection and HTTP host headers
- **Automatic host discovery**: Each subdirectory in `hosts/` becomes a NixOS configuration via `builtins.readDir`
- **Global configuration**: Settings in `default.nix` are automatically applied to all hosts
- **ACME certificates**: All services use Let's Encrypt certificates managed locally on each host
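The discovery step can be sketched as follows. This is a simplified illustration of the `builtins.readDir` approach, not the literal `flake.nix`; helper names such as `hostNames` are ours:

```nix
# Sketch of the auto-discovery idea (simplified, not the exact flake.nix):
# every directory under ./hosts becomes a nixosConfiguration of the same name.
{
  outputs = { self, nixpkgs, ... }: let
    hostNames = builtins.attrNames
      (nixpkgs.lib.filterAttrs (_: type: type == "directory")
        (builtins.readDir ./hosts));
  in {
    nixosConfigurations = nixpkgs.lib.genAttrs hostNames (name:
      nixpkgs.lib.nixosSystem {
        system = "x86_64-linux";
        modules = [
          ./default.nix     # global settings, applied to every host
          ./hosts/${name}   # evaluates hosts/<name>/default.nix
        ];
      });
  };
}
```

Adding a directory under `hosts/` is therefore all that is needed for a new system to appear in the flake outputs.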
### Network
- **Network**: 141.56.51.0/24
- **Gateway**: 141.56.51.254
- **DNS**: 141.56.1.1, 141.56.1.2 (HTW internal)
- **Domain**: htw.stura-dresden.de
## Repository Structure
```
stura-infra/
├── flake.nix              # Main flake configuration with auto-discovery
├── default.nix            # Global settings applied to all hosts
├── hosts/                 # Host-specific configurations
│   ├── proxy/             # Central reverse proxy (HAProxy)
│   │   ├── default.nix
│   │   ├── hardware-configuration.nix
│   │   ├── hetzner-disk.nix
│   │   └── README.md
│   ├── git/               # Forgejo git server
│   │   └── default.nix
│   ├── wiki/              # MediaWiki instance
│   │   └── default.nix
│   ├── nextcloud/         # Nextcloud instance
│   │   └── default.nix
│   └── redmine/           # Redmine project management
│       └── default.nix
└── README.md              # This file
```
## Host Overview
| Host | IP | Type | Services | Documentation |
|------|-----|------|----------|---------------|
| proxy | 141.56.51.1 | VM | HAProxy, SSH Jump | [hosts/proxy/README.md](hosts/proxy/README.md) |
| git | 141.56.51.7 | LXC | Forgejo, Nginx | [hosts/git/README.md](hosts/git/README.md) |
| wiki | 141.56.51.13 | LXC | MediaWiki, MariaDB, Apache | [hosts/wiki/README.md](hosts/wiki/README.md) |
| redmine | 141.56.51.15 | LXC | Redmine, Nginx | [hosts/redmine/README.md](hosts/redmine/README.md) |
| nextcloud | 141.56.51.16 | LXC | Nextcloud, PostgreSQL, Redis, Nginx | [hosts/nextcloud/README.md](hosts/nextcloud/README.md) |
## Deployment Methods
### Method 1: Initial Installation with nixos-anywhere (Recommended)
Use `nixos-anywhere` for initial system installation. This handles disk partitioning (via disko) and bootstrapping automatically.
**For VM hosts (proxy):**
```bash
nix run github:nix-community/nixos-anywhere -- --flake .#proxy --target-host root@141.56.51.1
```
**For LXC containers (git, wiki, redmine, nextcloud):**
```bash
nix run github:nix-community/nixos-anywhere -- --flake .#git --target-host root@141.56.51.7
```
This method is ideal for:
- First-time installation on bare metal or fresh VMs
- Complete system rebuilds
- Migration to new hardware
### Method 2: Container Tarball Deployment to Proxmox
Build and deploy LXC container tarballs for git, wiki, redmine, and nextcloud hosts.
**Step 1: Build container tarball locally**
```bash
nix build .#containers-git
# Result will be in result/tarball/nixos-system-x86_64-linux.tar.xz
```
**Step 2: Copy to Proxmox host**
```bash
scp result/tarball/nixos-system-x86_64-linux.tar.xz root@proxmox-host:/var/lib/vz/template/cache/
```
**Step 3: Create container on Proxmox**
```bash
# Example for git host (container ID 107, adjust as needed)
pct create 107 /var/lib/vz/template/cache/nixos-system-x86_64-linux.tar.xz \
  --hostname git \
  --net0 name=eth0,bridge=vmbr0,ip=141.56.51.7/24,gw=141.56.51.254 \
  --memory 2048 \
  --cores 2 \
  --rootfs local-lvm:8 \
  --unprivileged 1 \
  --features nesting=1
# Configure storage and settings via Proxmox web interface if needed
```
**Step 4: Start container**
```bash
pct start 107
```
**Step 5: Post-deployment configuration**
- Access container: `pct enter 107`
- Follow host-specific post-deployment steps in each host's README.md
**Available container tarballs:**
- `nix build .#containers-git`
- `nix build .#containers-wiki`
- `nix build .#containers-redmine`
- `nix build .#containers-nextcloud`
**Note**: The proxy host is a full VM and does not have a container tarball. Use Method 1 or 3 for proxy deployment.
### Method 3: ISO Installer
Build a bootable ISO installer for manual installation on VMs or bare metal.
**Build ISO:**
```bash
nix build .#installer-iso
# Result will be in result/iso/nixos-*.iso
```
**Build VM for testing:**
```bash
nix build .#installer-vm
```
**Deployment:**
1. Upload ISO to Proxmox storage
2. Create VM and attach ISO as boot device
3. Boot VM and follow installation prompts
4. Run installation commands manually
5. Reboot and remove ISO
### Method 4: Regular Updates
For already-deployed systems, apply configuration updates:
**Option A: Using nixos-rebuild from your local machine**
```bash
nixos-rebuild switch --flake .#<hostname> --target-host root@<ip>
```
Example:
```bash
nixos-rebuild switch --flake .#proxy --target-host root@141.56.51.1
```
**Note**: This requires an SSH config entry for the proxy (uses port 1005):
```
# ~/.ssh/config
Host 141.56.51.1
    Port 1005
```
**Option B: Using auto-generated update scripts**
The flake generates convenience scripts for each host:
```bash
nix run .#git-update
nix run .#wiki-update
nix run .#redmine-update
nix run .#nextcloud-update
nix run .#proxy-update
```
These scripts automatically extract the target IP from the configuration.
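One plausible way such scripts are generated is sketched below. This is illustrative only (the actual flake may differ): it assumes an `eth0` interface on each host and uses `pkgs.writeShellScript`, neither of which is confirmed by this README.

```nix
# Illustrative sketch: one "<host>-update" app per discovered host,
# with the target IP read from that host's evaluated configuration.
apps.x86_64-linux = nixpkgs.lib.mapAttrs' (name: host:
  nixpkgs.lib.nameValuePair "${name}-update" {
    type = "app";
    program = toString (pkgs.writeShellScript "${name}-update" ''
      exec nixos-rebuild switch --flake .#${name} \
        --target-host root@${
          (builtins.head host.config.networking.interfaces.eth0.ipv4.addresses).address
        }
    '');
  }) self.nixosConfigurations;
```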
**Option C: Remote execution (no local Nix installation)**
If Nix isn't installed locally, run the command on the target system:
```bash
ssh root@141.56.51.1 "nixos-rebuild switch --flake git+https://codeberg.org/stura-htw-dresden/stura-infra#proxy"
```
Replace `proxy` with the appropriate hostname and adjust the IP address.
## Required DNS Records
The following DNS records must be configured for the current infrastructure:
| Name | Type | IP | Service |
|------|------|-----|---------|
| *.htw.stura-dresden.de | CNAME | proxy.htw.stura-dresden.de | Reverse proxy |
| proxy.htw.stura-dresden.de | A | 141.56.51.1 | Proxy IPv4 |
| proxy.htw.stura-dresden.de | AAAA | 2a01:4f8:1c19:96f8::1 | Proxy IPv6 |
**Note**: All public services point to the proxy IP (141.56.51.1). The proxy handles SNI-based routing to backend hosts. Backend IPs are internal and not exposed in DNS.
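SNI-based TCP routing of this kind usually takes the following shape in HAProxy. The fragment is illustrative (backend names and full hostnames are assumed); the real rules live in `hosts/proxy`:

```
frontend https_in
    bind :443
    mode tcp
    # Wait for the TLS ClientHello so the SNI field can be inspected
    tcp-request inspect-delay 5s
    tcp-request content accept if { req.ssl_hello_type 1 }
    use_backend git_https  if { req.ssl_sni -i git.htw.stura-dresden.de }
    use_backend wiki_https if { req.ssl_sni -i wiki.htw.stura-dresden.de }

backend git_https
    mode tcp
    # TLS is not terminated here; the backend host holds its own ACME cert
    server git 141.56.51.7:443
```

Because the proxy only inspects the ClientHello and forwards raw TCP, each backend keeps its own certificate and private key.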
Additional services managed by the proxy (not in this repository):
- stura.htw-dresden.de → Plone
- tix.htw.stura-dresden.de → Pretix
- vot.htw.stura-dresden.de → OpenSlides
- mail.htw.stura-dresden.de → Mail server
## Development
### Code Formatting
Format all Nix files using the RFC-style formatter:
```bash
nix fmt
```
### Testing Changes
Before deploying to production:
1. Test flake evaluation: `nix flake check`
2. Build configurations locally: `nix build .#nixosConfigurations.<hostname>.config.system.build.toplevel`
3. Review generated configurations
4. Deploy to test systems first if available
### Adding a New Host
1. **Create host directory:**
```bash
mkdir hosts/newhostname
```
2. **Create `hosts/newhostname/default.nix`:**
```nix
{ config, lib, pkgs, modulesPath, ... }:
{
  imports = [
    "${modulesPath}/virtualisation/proxmox-lxc.nix" # For LXC containers
    # Or for VMs:
    # ./hardware-configuration.nix
  ];

  networking = {
    hostName = "newhostname";
    interfaces.eth0.ipv4.addresses = [{ # or ens18 for VMs
      address = "141.56.51.XXX";
      prefixLength = 24;
    }];
    defaultGateway.address = "141.56.51.254";
    firewall.allowedTCPPorts = [ 80 443 ];
  };

  # Add your services here
  services.nginx.enable = true;
  # ...

  system.stateVersion = "25.11";
}
```
3. **The flake automatically discovers the new host** via `builtins.readDir ./hosts`
4. **If the host runs nginx**, the proxy automatically adds forwarding rules (you still need to add DNS records)
5. **Deploy:**
```bash
nix run github:nix-community/nixos-anywhere -- --flake .#newhostname --target-host root@141.56.51.XXX
```
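The automatic forwarding in step 4 is plausibly implemented along these lines. This is a sketch under the assumption that the proxy module inspects its sibling configurations during evaluation; all names here are illustrative:

```nix
# Illustrative sketch only: derive (fqdn, ip) pairs from every host that
# enables nginx, to feed into the proxy's HAProxy config template.
let
  hosts = self.nixosConfigurations;
  backends = nixpkgs.lib.concatMap (name:
    let cfg = hosts.${name}.config; in
    nixpkgs.lib.optionals cfg.services.nginx.enable
      (map (fqdn: {
        inherit fqdn;
        ip = (builtins.head cfg.networking.interfaces.eth0.ipv4.addresses).address;
      }) (builtins.attrNames cfg.services.nginx.virtualHosts)))
    (builtins.attrNames hosts);
in backends
```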
## Repository Information
- **Repository**: https://codeberg.org/stura-htw-dresden/stura-infra
- **ACME Email**: cert@stura.htw-dresden.de
- **NixOS Version**: 25.11
- **Architecture**: x86_64-linux
## Flake Inputs
- `nixpkgs`: NixOS 25.11
- `authentik`: Identity provider (nix-community/authentik-nix)
- `mailserver`: Simple NixOS mailserver (nixos-25.11 branch)
- `sops`: Secret management (Mic92/sops-nix)
- `disko`: Declarative disk partitioning
## Common Patterns
### Network Configuration
All hosts follow this pattern:
```nix
networking = {
  hostName = "<name>";
  interfaces.<interface>.ipv4.addresses = [{
    address = "<ip>";
    prefixLength = 24;
  }];
  defaultGateway.address = "141.56.51.254";
};
```
- LXC containers use `eth0`
- VMs/bare metal typically use `ens18`
### Nginx + ACME Pattern
For web services:
```nix
services.nginx = {
  enable = true;
  virtualHosts."<fqdn>" = {
    forceSSL = true;
    enableACME = true;
    locations."/" = {
      # service config
    };
  };
};
```
This automatically:
- Integrates with the proxy's ACME challenge forwarding
- Generates HAProxy backend configuration
- Requests Let's Encrypt certificates
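On the proxy side, HTTP-01 challenge forwarding is typically a plain-HTTP rule of this shape (illustrative HAProxy fragment; hostnames and backend names are assumed):

```
frontend http_in
    bind :80
    mode http
    # Route Let's Encrypt HTTP-01 challenges to the host requesting the cert
    acl acme path_beg /.well-known/acme-challenge/
    use_backend git_http if acme { hdr(host) -i git.htw.stura-dresden.de }
```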
### Firewall Rules
Hosts only need to allow traffic from the proxy:
```nix
networking.firewall.allowedTCPPorts = [ 80 443 ];
```
SSH ports vary:
- Proxy: port 1005 (admin access)
- Other hosts: port 22 (default)