Compare commits


15 commits

Author SHA1 Message Date
goeranh
8703e7df98
flake.lock: Update
Flake lock file updates:

• Updated input 'authentik':
    'github:nix-community/authentik-nix/3df5c213032b8d28073d4baead699acea62ab50d?narHash=sha256-PPAgCKlRpxcZlEJ8NH2CGVaEogOc4nOs/eNF0hlAC2E%3D' (2026-02-21)
  → 'github:nix-community/authentik-nix/7e4730351fb6df479c46a1bf7e23d46a0b0c5d46?narHash=sha256-hcstQ1Z9aQSJM3AVCLb0/OPTicbME9nhP01GiPrOjZM%3D' (2026-03-07)
• Updated input 'authentik/authentik-go':
    'github:goauthentik/client-go/280022b0a8de5c8f4b2965d1147a1c4fa846ba64?narHash=sha256-Yyna75Nd6485tZP9IpdEa5QNomswe9hRfM%2Bw3MuET9E%3D' (2026-02-05)
  → 'github:goauthentik/client-go/4c1444ee54d945fbcc5ae107b4f191ca0352023d?narHash=sha256-zTEmvxe%2BBpfWYvAl675PnhXCH4jV4GUTFb1MrQ1Eyno%3D' (2026-02-23)
• Updated input 'authentik/authentik-src':
    'github:goauthentik/authentik/19ad8d3ae3f266ec1096bc4461fdf6bcda1aa079?narHash=sha256-alTyrMBbjZbw4jhEna8saabf93sqSrZCu%2BZ5xH3pZ7M%3D' (2026-02-12)
  → 'github:goauthentik/authentik/0dccbd4193c45c581e9fb7cd89df0c1487510f1f?narHash=sha256-0Vpf1hj9C8r%2BrhrCgwoNazpQ%2BmwgjdjDhuoKCxYQFWw%3D' (2026-03-03)
• Updated input 'authentik/flake-compat':
    'github:edolstra/flake-compat/65f23138d8d09a92e30f1e5c87611b23ef451bf3?narHash=sha256-4VBOP18BFeiPkyhy9o4ssBNQEvfvv1kXkasAYd0%2BrrA%3D' (2025-12-07)
  → 'github:edolstra/flake-compat/5edf11c44bc78a0d334f6334cdaf7d60d732daab?narHash=sha256-vNpUSpF5Nuw8xvDLj2KCwwksIbjua2LZCqhV1LNRDns%3D' (2025-12-29)
• Updated input 'authentik/flake-parts':
    'github:hercules-ci/flake-parts/a34fae9c08a15ad73f295041fec82323541400a9?narHash=sha256-XswHlK/Qtjasvhd1nOa1e8MgZ8GS//jBoTqWtrS1Giw%3D' (2025-12-15)
  → 'github:hercules-ci/flake-parts/57928607ea566b5db3ad13af0e57e921e6b12381?narHash=sha256-AnYjnFWgS49RlqX7LrC4uA%2BsCCDBj0Ry/WOJ5XWAsa0%3D' (2026-02-02)
• Updated input 'authentik/flake-parts/nixpkgs-lib':
    'github:nix-community/nixpkgs.lib/2075416fcb47225d9b68ac469a5c4801a9c4dd85?narHash=sha256-k00uTP4JNfmejrCLJOwdObYC9jHRrr/5M/a/8L2EIdo%3D' (2025-12-14)
  → 'github:nix-community/nixpkgs.lib/72716169fe93074c333e8d0173151350670b824c?narHash=sha256-cBEymOf4/o3FD5AZnzC3J9hLbiZ%2BQDT/KDuyHXVJOpM%3D' (2026-02-01)
• Updated input 'authentik/nixpkgs':
    'github:NixOS/nixpkgs/1412caf7bf9e660f2f962917c14b1ea1c3bc695e?narHash=sha256-AIdl6WAn9aymeaH/NvBj0H9qM%2BXuAuYbGMZaP0zcXAQ%3D' (2026-01-13)
  → 'github:NixOS/nixpkgs/2fc6539b481e1d2569f25f8799236694180c0993?narHash=sha256-0MAd%2B0mun3K/Ns8JATeHT1sX28faLII5hVLq0L3BdZU%3D' (2026-02-23)
• Updated input 'authentik/pyproject-build-systems':
    'github:pyproject-nix/build-system-pkgs/042904167604c681a090c07eb6967b4dd4dae88c?narHash=sha256-4bocaOyLa3AfiS8KrWjZQYu%2BIAta05u3gYZzZ6zXbT0%3D' (2025-11-20)
  → 'github:pyproject-nix/build-system-pkgs/04e9c186e01f0830dad3739088070e4c551191a4?narHash=sha256-7uXPiWB0YQ4HNaAqRvVndYL34FEp1ZTwVQHgZmyMtC8%3D' (2026-02-18)
• Updated input 'authentik/pyproject-nix':
    'github:pyproject-nix/pyproject.nix/2c8df1383b32e5443c921f61224b198a2282a657?narHash=sha256-xaKvtPx6YAnA3HQVp5LwyYG1MaN4LLehpQI8xEdBvBY%3D' (2025-11-26)
  → 'github:pyproject-nix/pyproject.nix/eb204c6b3335698dec6c7fc1da0ebc3c6df05937?narHash=sha256-nFJSfD89vWTu92KyuJWDoTQJuoDuddkJV3TlOl1cOic%3D' (2026-02-19)
• Updated input 'authentik/uv2nix':
    'github:pyproject-nix/uv2nix/4cca323a547a1aaa9b94929c4901bed5343eafe8?narHash=sha256-90d//IZ4GXipNsngO4sb2SAPbIC/a2P%2BIAdAWOwpcOM%3D' (2025-12-13)
  → 'github:pyproject-nix/uv2nix/abe65de114300de41614002fe9dce2152ac2ac23?narHash=sha256-gCojeIlQ/rfWMe3adif3akyHsT95wiMkLURpxTeqmPc%3D' (2026-02-27)
• Updated input 'disko':
    'github:nix-community/disko/a4cb7bf73f264d40560ba527f9280469f1f081c6?narHash=sha256-A5uE/hMium5of/QGC6JwF5TGoDAfpNtW00T0s9u/PN8%3D' (2026-02-23)
  → 'github:nix-community/disko/7b9f7f88ab3b339f8142dc246445abb3c370d3d3?narHash=sha256-khlHllTsovXgT2GZ0WxT4%2BRvuMjNeR5OW0UYeEHPYQo%3D' (2026-03-09)
• Updated input 'mailserver':
    'git+https://gitlab.com/simple-nixos-mailserver/nixos-mailserver?ref=nixos-25.11&rev=23f0a53ca6e58e61e1ea2b86791c69b79c91656d' (2025-12-24)
  → 'git+https://gitlab.com/simple-nixos-mailserver/nixos-mailserver?ref=nixos-25.11&rev=9cdd6869e513df8153db4b920c8f15d394e150f7' (2026-03-12)
• Updated input 'nixpkgs':
    'github:nixos/nixpkgs/e764fc9a405871f1f6ca3d1394fb422e0a0c3951?narHash=sha256-sdaqdnsQCv3iifzxwB22tUwN/fSHoN7j2myFW5EIkGk%3D' (2026-02-24)
  → 'github:nixos/nixpkgs/0590cd39f728e129122770c029970378a79d076a?narHash=sha256-BHoB/XpbqoZkVYZCfXJXfkR%2BGXFqwb/4zbWnOr2cRcU%3D' (2026-03-11)
• Updated input 'sops':
    'github:Mic92/sops-nix/b027513c32e5b39b59f64626b87fbe168ae02094?narHash=sha256-YV17Q5lEU0S9ppw08Y%2Bcs4eEQJBuc79AzblFoHORLMU%3D' (2026-02-23)
  → 'github:Mic92/sops-nix/d1ff3b1034d5bab5d7d8086a7803c5a5968cd784?narHash=sha256-M3zEnq9OElB7zqc%2BmjgPlByPm1O5t2fbUrH3t/Hm5Ag%3D' (2026-03-09)
2026-03-13 18:07:20 +01:00
goeranh
d106386cc0
build a hugo docs page from the readme files 2026-03-13 18:06:20 +01:00
goeranh
bfe941217d
change vps location 2026-03-13 17:37:22 +01:00
goeranh
7e64664037 Merge pull request 'v6proxy' (#6) from v6proxy into master
Reviewed-on: https://codeberg.org/stura-htw-dresden/stura-infra/pulls/6
2026-03-13 17:33:18 +01:00
goeranh
6ea1a37bef
add v6proxy docs 2026-03-13 17:32:05 +01:00
goeranh
18f4d0c65f
ipv6 haproxy pass everything to 141.56.51.1 2026-03-13 17:24:06 +01:00
goeranh
dee37a55e2
prepare sops and auto fmt devshell hooks 2026-03-13 17:19:31 +01:00
goeranh
9466ab3656
readme docs 2026-03-13 16:59:54 +01:00
goeranh
6e0d407b1c
build container images 2026-03-13 16:59:36 +01:00
goeranh
4ec4081ddb
enable registration for now 2026-03-13 16:27:55 +01:00
goeranh
302ae0a8dc
git works now 2026-03-13 16:27:55 +01:00
goeranh
d537d7b20f
redirect post.htw 2026-03-13 16:27:16 +01:00
goeranh
d8d76776c1
forward mail and lists 2026-03-13 14:08:29 +01:00
goeranh
421259270c
only fqdn virtualhost is required 2026-03-13 14:08:16 +01:00
goeranh
8b2ffb35d8
domain is set in default 2026-03-13 14:08:03 +01:00
24 changed files with 2954 additions and 119 deletions

1
.pre-commit-config.yaml Symbolic link

@@ -0,0 +1 @@
/nix/store/1w2s62i701n28sj08gn1445qr4v3vijp-pre-commit-config.json

38
.sops.yaml Normal file

@@ -0,0 +1,38 @@
# SOPS configuration for StuRa HTW Dresden infrastructure
#
# This file defines which keys can decrypt which secrets.
# Add GPG public keys (.asc files) or age keys to keys/hosts/ and keys/users/
# to grant decryption access to hosts and users respectively.
keys:
# Admin/user keys - add GPG public keys here
# Example:
# - &user_admin_key age1... or pgp fingerprint
# Host keys - add host-specific keys here
# Example:
# - &host_proxy_key age1... or pgp fingerprint
# - &host_git_key age1... or pgp fingerprint
# Define which keys can access which files
creation_rules:
# Default rule: all secrets can be decrypted by admin keys
- path_regex: secrets/.*\.yaml$
# key_groups:
# - pgp:
# - *user_admin_key
# - age:
# - *user_admin_key
# Host-specific secrets (example)
# - path_regex: secrets/proxy/.*\.yaml$
# key_groups:
# - pgp:
# - *user_admin_key
# - *host_proxy_key
# - path_regex: secrets/git/.*\.yaml$
# key_groups:
# - pgp:
# - *user_admin_key
# - *host_git_key

369
README.md

@@ -1,71 +1,338 @@
# StuRa HTW Dresden Mailserver
New mailserver config, replacing the hand-configured FreeBSD relay system, which had no mail accounts.
# StuRa HTW Dresden Infrastructure - NixOS Configuration
The goal is to connect the identity provider goauthentik to simple-nixos-mailserver via LDAP.
Declarative infrastructure management for StuRa HTW Dresden using NixOS and a flake-based configuration. This repository replaces the hand-configured FreeBSD relay system with a modern, reproducible infrastructure.
![Flake structure](./flake-show.png "nix flake show")
As the command `nix flake show` shows, this flake defines several NixOS configurations and packages.
## Architecture
# The hosts directory
Each subdirectory is a system: `builtins.readDir` is used to find all subdirectories and generate a NixOS system for each of them.
- authentik
- mail
- git
- redmine
### Overview
The file `hosts/<name>/default.nix` is evaluated and must import all further files, e.g. authentik.nix.
The exception is the content of default.nix in the repository root; this file contains all global settings, which are enabled on every system.
This infrastructure uses a flake-based approach with automatic host discovery:
- **Centralized reverse proxy**: HAProxy at 141.56.51.1 routes all traffic via SNI inspection and HTTP host headers
- **IPv6 gateway**: Hetzner VPS at 2a01:4f8:1c19:96f8::1 forwards IPv6 traffic to the IPv4 proxy
- **Automatic host discovery**: Each subdirectory in `hosts/` becomes a NixOS configuration via `builtins.readDir`
- **Global configuration**: Settings in `default.nix` are automatically applied to all hosts
- **ACME certificates**: All services use Let's Encrypt certificates managed locally on each host
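The auto-discovery mechanism described above can be sketched roughly like this (a simplified illustration, not the exact contents of `flake.nix`):

```nix
# Sketch: generate one NixOS configuration per subdirectory of hosts/
nixosConfigurations = builtins.mapAttrs
  (name: _type:
    nixpkgs.lib.nixosSystem {
      system = "x86_64-linux";
      modules = [
        ./default.nix    # global settings applied to every host
        ./hosts/${name}  # per-host configuration (hosts/<name>/default.nix)
      ];
    })
  # keep only directory entries under hosts/
  (nixpkgs.lib.filterAttrs (_: type: type == "directory") (builtins.readDir ./hosts));
```

Because discovery is driven by the directory listing, adding a host requires no change to the flake itself.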
# Todo
- mailing list membership from LDAP groups?
- aliases from LDAP attributes?
- connect Forgejo to authentik via OAuth
### Network
- copy redmine into a container as a demo
- set up forgejo in a container as a demo
- **Network**: 141.56.51.0/24
- **Gateway**: 141.56.51.254
- **DNS**: 141.56.1.1, 141.56.1.2 (HTW internal)
- **Domain**: htw.stura-dresden.de
# Setup
The following DNS records are required:
| Name | Type | IP |
|------|------|----|
|mail.test.htw.stura-dresden.de|A|141.56.51.95|
|lists.test.htw.stura-dresden.de|A|141.56.51.95|
|test.htw.stura-dresden.de|A|141.56.51.95|
|auth.test.htw.stura-dresden.de|A|141.56.51.96|
## Repository Structure
You could also point only mail.test.htw.stura-dresden.de at the IP and have the other two records reference that name as CNAMEs.
```
stura-infra/
├── flake.nix # Main flake configuration with auto-discovery
├── default.nix # Global settings applied to all hosts
├── hosts/ # Host-specific configurations
│ ├── proxy/ # Central reverse proxy (HAProxy)
│ │ ├── default.nix
│ │ ├── hardware-configuration.nix
│ │ ├── hetzner-disk.nix
│ │ └── README.md
│ ├── v6proxy/ # IPv6 gateway (Hetzner VPS)
│ │ ├── default.nix
│ │ ├── hardware-configuration.nix
│ │ ├── hetzner-disk.nix
│ │ └── README.md
│ ├── git/ # Forgejo git server
│ │ └── default.nix
│ ├── wiki/ # MediaWiki instance
│ │ └── default.nix
│ ├── nextcloud/ # Nextcloud instance
│ │ └── default.nix
│ └── redmine/ # Redmine project management
│ └── default.nix
└── README.md # This file
```
## Setup Authentik
## Host Overview
| Host | IP | Type | Services | Documentation |
|------|-----|------|----------|---------------|
| proxy | 141.56.51.1 | VM | HAProxy, SSH Jump | [hosts/proxy/README.md](hosts/proxy/README.md) |
| v6proxy | 178.104.18.93 (IPv4)<br>2a01:4f8:1c19:96f8::1 (IPv6) | Hetzner VPS | HAProxy (IPv6 Gateway) | [hosts/v6proxy/README.md](hosts/v6proxy/README.md) |
| git | 141.56.51.7 | LXC | Forgejo, Nginx | [hosts/git/README.md](hosts/git/README.md) |
| wiki | 141.56.51.13 | LXC | MediaWiki, MariaDB, Apache | [hosts/wiki/README.md](hosts/wiki/README.md) |
| redmine | 141.56.51.15 | LXC | Redmine, Nginx | [hosts/redmine/README.md](hosts/redmine/README.md) |
| nextcloud | 141.56.51.16 | LXC | Nextcloud, PostgreSQL, Redis, Nginx | [hosts/nextcloud/README.md](hosts/nextcloud/README.md) |
## Deployment Methods
### Method 1: Initial Installation with nixos-anywhere (Recommended)
Use `nixos-anywhere` for initial system installation. This handles disk partitioning (via disko) and bootstrapping automatically.
**For VM hosts (proxy):**
```bash
nix run github:nix-community/nixos-anywhere -- --flake .#authentik --target-host root@141.56.51.96
nix run github:nix-community/nixos-anywhere -- --flake .#proxy --target-host root@141.56.51.1
```
### In the installed system
Authentik cannot start without an env file
**For LXC containers (git, wiki, redmine, nextcloud):**
```bash
echo "AUTHENTIK_SECRET_KEY=$(openssl rand -hex 32)" > /var/lib/authentik_secret
```
Afterwards, complete the initial setup flow in the browser
and then set up the LDAP provider:
https://docs.goauthentik.io/add-secure-apps/providers/ldap/generic_setup
/var/lib/authentik-ldap-env
```
AUTHENTIK_HOST=https://auth.test.htw.stura-dresden.de
AUTHENTIK_TOKEN=<token>
nix run github:nix-community/nixos-anywhere -- --flake .#git --target-host root@141.56.51.7
```
## Setup Mail
This method is ideal for:
- First-time installation on bare metal or fresh VMs
- Complete system rebuilds
- Migration to new hardware
### Method 2: Container Tarball Deployment to Proxmox
Build and deploy LXC container tarballs for git, wiki, redmine, and nextcloud hosts.
**Step 1: Build container tarball locally**
```bash
nix run github:nix-community/nixos-anywhere -- --flake .#mail --target-host root@141.56.51.95
nix build .#containers-git
# Result will be in result/tarball/nixos-system-x86_64-linux.tar.xz
```
# Updating the proxy (and other systems)
**Step 2: Copy to Proxmox host**
```bash
scp result/tarball/nixos-system-x86_64-linux.tar.xz root@proxmox-host:/var/lib/vz/template/cache/
```
The config no longer lives on the server as before, but here in this git repo. Deployment happens from your own laptop via SSH into the instance.
`nixos-rebuild switch --flake .#proxy --target-host root@141.56.51.1`
**Step 3: Create container on Proxmox**
```bash
# Example for git host (container ID 107, adjust as needed)
pct create 107 /var/lib/vz/template/cache/nixos-system-x86_64-linux.tar.xz \
--hostname git \
--net0 name=eth0,bridge=vmbr0,ip=141.56.51.7/24,gw=141.56.51.254 \
--memory 2048 \
--cores 2 \
--rootfs local-lvm:8 \
--unprivileged 1 \
--features nesting=1
This only works if you have an SSH config entry for the IP, since the port has to be set.
## No Nix on the local machine
If Nix is not installed locally, nixos-rebuild obviously cannot be used. Instead, run the command on the target system:
`ssh root@141.56.51.1 "nixos-rebuild switch --flake git+https://codeberg.org/stura-htw-dresden/stura-infra#proxy"`
# Configure storage and settings via Proxmox web interface if needed
```
**Step 4: Start container**
```bash
pct start 107
```
**Step 5: Post-deployment configuration**
- Access container: `pct enter 107`
- Follow host-specific post-deployment steps in each host's README.md
**Available container tarballs:**
- `nix build .#containers-git`
- `nix build .#containers-wiki`
- `nix build .#containers-redmine`
- `nix build .#containers-nextcloud`
**Note**: The proxy host is a full VM and does not have a container tarball. Use Method 1 or 3 for proxy deployment.
### Method 3: ISO Installer
Build a bootable ISO installer for manual installation on VMs or bare metal.
**Build ISO:**
```bash
nix build .#installer-iso
# Result will be in result/iso/nixos-*.iso
```
**Build VM for testing:**
```bash
nix build .#installer-vm
```
**Deployment:**
1. Upload ISO to Proxmox storage
2. Create VM and attach ISO as boot device
3. Boot VM and follow installation prompts
4. Run installation commands manually
5. Reboot and remove ISO
### Method 4: Regular Updates
For already-deployed systems, apply configuration updates:
**Option A: Using nixos-rebuild from your local machine**
```bash
nixos-rebuild switch --flake .#<hostname> --target-host root@<ip>
```
Example:
```bash
nixos-rebuild switch --flake .#proxy --target-host root@141.56.51.1
```
**Note**: This requires an SSH config entry for the proxy (uses port 1005):
```
# ~/.ssh/config
Host 141.56.51.1
Port 1005
```
**Option B: Using auto-generated update scripts**
The flake generates convenience scripts for each host:
```bash
nix run .#git-update
nix run .#wiki-update
nix run .#redmine-update
nix run .#nextcloud-update
nix run .#proxy-update
```
These scripts automatically extract the target IP from the configuration.
**Option C: Remote execution (no local Nix installation)**
If Nix isn't installed locally, run the command on the target system:
```bash
ssh root@141.56.51.1 "nixos-rebuild switch --flake git+https://codeberg.org/stura-htw-dresden/stura-infra#proxy"
```
Replace `proxy` with the appropriate hostname and adjust the IP address.
## Required DNS Records
The following DNS records must be configured for the current infrastructure:
| Name | Type | IP | Service |
|------|------|-----|---------|
| *.htw.stura-dresden.de | CNAME | proxy.htw.stura-dresden.de | Reverse proxy |
| proxy.htw.stura-dresden.de | A | 141.56.51.1 | Proxy IPv4 |
| proxy.htw.stura-dresden.de | AAAA | 2a01:4f8:1c19:96f8::1 | IPv6 Gateway (v6proxy) |
**Note**: All public services point to the proxy IPs. The IPv4 proxy (141.56.51.1) handles SNI-based routing to backend hosts. The IPv6 gateway (v6proxy at 2a01:4f8:1c19:96f8::1) forwards all IPv6 traffic to the IPv4 proxy. Backend IPs are internal and not exposed in DNS.
Additional services managed by the proxy (not in this repository):
- stura.htw-dresden.de → Plone
- tix.htw.stura-dresden.de → Pretix
- vot.htw.stura-dresden.de → OpenSlides
- mail.htw.stura-dresden.de → Mail server
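The SNI-based routing described above can be sketched as a minimal HAProxy fragment (hostnames, backend names, and ports here are illustrative, not the deployed configuration):

```
frontend https_in
    bind :443
    mode tcp
    # wait for the TLS ClientHello so the SNI can be inspected
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend git_https if { req_ssl_sni -i git.htw.stura-dresden.de }

backend git_https
    mode tcp
    server git 141.56.51.7:443
```

Because the proxy operates in TCP mode and only inspects the SNI, the TLS connection is terminated on the backend host, which is why each backend manages its own ACME certificates.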
## Development
### Code Formatting
Format all Nix files using the RFC-style formatter:
```bash
nix fmt
```
### Testing Changes
Before deploying to production:
1. Test flake evaluation: `nix flake check`
2. Build configurations locally: `nix build .#nixosConfigurations.<hostname>.config.system.build.toplevel`
3. Review generated configurations
4. Deploy to test systems first if available
### Adding a New Host
1. **Create host directory:**
```bash
mkdir hosts/newhostname
```
2. **Create `hosts/newhostname/default.nix`:**
```nix
{ config, lib, pkgs, modulesPath, ... }:
{
imports = [
"${modulesPath}/virtualisation/proxmox-lxc.nix" # For LXC containers
# Or for VMs:
# ./hardware-configuration.nix
];
networking = {
hostName = "newhostname";
interfaces.eth0.ipv4.addresses = [{ # or ens18 for VMs
address = "141.56.51.XXX";
prefixLength = 24;
}];
defaultGateway.address = "141.56.51.254";
firewall.allowedTCPPorts = [ 80 443 ];
};
# Add your services here
services.nginx.enable = true;
# ...
system.stateVersion = "25.11";
}
```
3. **The flake automatically discovers the new host** via `builtins.readDir ./hosts`
4. **If the host runs nginx**, the proxy automatically adds forwarding rules (you still need to add DNS records)
5. **Deploy:**
```bash
nix run github:nix-community/nixos-anywhere -- --flake .#newhostname --target-host root@141.56.51.XXX
```
## Repository Information
- **Repository**: https://codeberg.org/stura-htw-dresden/stura-infra
- **ACME Email**: cert@stura.htw-dresden.de
- **NixOS Version**: 25.11
- **Architecture**: x86_64-linux
## Flake Inputs
- `nixpkgs`: NixOS 25.11
- `authentik`: Identity provider (nix-community/authentik-nix)
- `mailserver`: Simple NixOS mailserver (nixos-25.11 branch)
- `sops`: Secret management (Mic92/sops-nix)
- `disko`: Declarative disk partitioning
## Common Patterns
### Network Configuration
All hosts follow this pattern:
```nix
networking = {
hostName = "<name>";
interfaces.<interface>.ipv4.addresses = [{
address = "<ip>";
prefixLength = 24;
}];
defaultGateway.address = "141.56.51.254";
};
```
- LXC containers use `eth0`
- VMs/bare metal typically use `ens18`
### Nginx + ACME Pattern
For web services:
```nix
services.nginx = {
enable = true;
virtualHosts."<fqdn>" = {
forceSSL = true;
enableACME = true;
locations."/" = {
# service config
};
};
};
```
This automatically:
- Integrates with the proxy's ACME challenge forwarding
- Generates HAProxy backend configuration
- Requests Let's Encrypt certificates
### Firewall Rules
Hosts only need to allow traffic from the proxy:
```nix
networking.firewall.allowedTCPPorts = [ 80 443 ];
```
SSH ports vary:
- Proxy: port 1005 (admin access)
- Other hosts: port 22 (default)

118
docs/build-docs.sh Executable file

@@ -0,0 +1,118 @@
#!/usr/bin/env bash
set -euo pipefail
# Script to collect README files and prepare them for Hugo
# This script is called during the Nix build process
REPO_ROOT="${1:-.}"
CONTENT_DIR="${2:-content}"
echo "Building documentation from $REPO_ROOT to $CONTENT_DIR"
mkdir -p "$CONTENT_DIR"
mkdir -p "$CONTENT_DIR/hosts"
# Function to convert relative links to work in Hugo
# Converts markdown links like [text](../other/README.md) to [text](/other/)
fix_links() {
local file="$1"
local base_path="$2"
sed -E \
-e 's|\[([^\]]+)\]\((\.\./)+hosts/([^/]+)/README\.md\)|\[\1\](/hosts/\3/)|g' \
-e 's|\[([^\]]+)\]\(\.\./([^/]+)/README\.md\)|\[\1\](/hosts/\2/)|g' \
-e 's|\[([^\]]+)\]\(\.\./(README\.md)\)|\[\1\](/)|g' \
-e 's|\[([^\]]+)\]\(\.\./\.\./README\.md\)|\[\1\](/)|g' \
-e 's|\[([^\]]+)\]\(\./([^/]+)/README\.md\)|\[\1\](/\2/)|g' \
-e 's|\[hosts/([^/]+)/README\.md\]\(hosts/\1/README\.md\)|\[hosts/\1/\]\(/hosts/\1/\)|g' \
-e 's|\[([^\]]+)\]\(hosts/([^/]+)/README\.md\)|\[\1\](/hosts/\2/)|g' \
-e 's|\[([^\]]+)\]\(README\.md\)|\[\1\](/)|g' \
"$file"
}
# Process main README.md
if [ -f "$REPO_ROOT/README.md" ]; then
echo "Processing main README.md..."
{
cat <<'EOF'
---
title: "StuRa HTW Infrastructure"
date: 2024-01-01
weight: 1
---
EOF
fix_links "$REPO_ROOT/README.md" "/"
} > "$CONTENT_DIR/_index.md"
fi
# Process CLAUDE.md as a separate page
if [ -f "$REPO_ROOT/CLAUDE.md" ]; then
echo "Processing CLAUDE.md..."
{
cat <<'EOF'
---
title: "Claude Code Guide"
date: 2024-01-01
weight: 10
---
EOF
fix_links "$REPO_ROOT/CLAUDE.md" "/"
} > "$CONTENT_DIR/claude-guide.md"
fi
# Create hosts index page
cat > "$CONTENT_DIR/hosts/_index.md" <<'EOF'
---
title: "Hosts"
date: 2024-01-01
weight: 2
---
# Host Configurations
This section contains documentation for each host in the infrastructure.
EOF
# Process host README files
if [ -d "$REPO_ROOT/hosts" ]; then
for host_dir in "$REPO_ROOT/hosts"/*; do
if [ -d "$host_dir" ]; then
host_name=$(basename "$host_dir")
readme="$host_dir/README.md"
if [ -f "$readme" ]; then
echo "Processing host: $host_name"
{
cat <<EOF
---
title: "$host_name"
date: 2024-01-01
---
EOF
fix_links "$readme" "/hosts/$host_name"
} > "$CONTENT_DIR/hosts/$host_name.md"
fi
fi
done
fi
# Process keys README if it exists
if [ -f "$REPO_ROOT/keys/README.md" ]; then
echo "Processing keys/README.md..."
{
cat <<'EOF'
---
title: "Key Management"
date: 2024-01-01
weight: 5
---
EOF
fix_links "$REPO_ROOT/keys/README.md" "/keys"
} > "$CONTENT_DIR/keys.md"
fi
echo "Documentation build complete!"
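The `fix_links` rewrite rules above can be sanity-checked in isolation. For example, piping a relative host link through the first rule (shown here with a simplified character class) yields the Hugo-style path:

```shell
# Demonstrates the first fix_links rule: ../hosts/<name>/README.md -> /hosts/<name>/
echo '[proxy docs](../hosts/proxy/README.md)' \
  | sed -E 's|\[([^]]+)\]\((\.\./)+hosts/([^/]+)/README\.md\)|[\1](/hosts/\3/)|g'
# prints: [proxy docs](/hosts/proxy/)
```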

20
docs/hugo.yaml Normal file

@@ -0,0 +1,20 @@
baseURL: 'https://docs.adm.htw.stura-dresden.de/'
languageCode: en-us
title: StuRa HTW Infrastructure Documentation
theme: hugo-book
params:
BookTheme: auto
BookToC: true
BookRepo: https://codeberg.org/stura-htw-dresden/stura-infra
BookEditPath: edit/master
BookSearch: true
BookComments: false
BookPortableLinks: true
BookMenuBundle: true
menu:
after:
- name: Repository
url: https://codeberg.org/stura-htw-dresden/stura-infra
weight: 10

Binary file not shown (previous version: 125 KiB).

154
flake.lock generated

@@ -15,11 +15,11 @@
"uv2nix": "uv2nix"
},
"locked": {
"lastModified": 1771685920,
"narHash": "sha256-PPAgCKlRpxcZlEJ8NH2CGVaEogOc4nOs/eNF0hlAC2E=",
"lastModified": 1772909021,
"narHash": "sha256-hcstQ1Z9aQSJM3AVCLb0/OPTicbME9nhP01GiPrOjZM=",
"owner": "nix-community",
"repo": "authentik-nix",
"rev": "3df5c213032b8d28073d4baead699acea62ab50d",
"rev": "7e4730351fb6df479c46a1bf7e23d46a0b0c5d46",
"type": "github"
},
"original": {
@@ -31,11 +31,11 @@
"authentik-go": {
"flake": false,
"locked": {
"lastModified": 1770333754,
"narHash": "sha256-Yyna75Nd6485tZP9IpdEa5QNomswe9hRfM+w3MuET9E=",
"lastModified": 1771856219,
"narHash": "sha256-zTEmvxe+BpfWYvAl675PnhXCH4jV4GUTFb1MrQ1Eyno=",
"owner": "goauthentik",
"repo": "client-go",
"rev": "280022b0a8de5c8f4b2965d1147a1c4fa846ba64",
"rev": "4c1444ee54d945fbcc5ae107b4f191ca0352023d",
"type": "github"
},
"original": {
@@ -47,16 +47,16 @@
"authentik-src": {
"flake": false,
"locked": {
"lastModified": 1770911230,
"narHash": "sha256-alTyrMBbjZbw4jhEna8saabf93sqSrZCu+Z5xH3pZ7M=",
"lastModified": 1772567399,
"narHash": "sha256-0Vpf1hj9C8r+rhrCgwoNazpQ+mwgjdjDhuoKCxYQFWw=",
"owner": "goauthentik",
"repo": "authentik",
"rev": "19ad8d3ae3f266ec1096bc4461fdf6bcda1aa079",
"rev": "0dccbd4193c45c581e9fb7cd89df0c1487510f1f",
"type": "github"
},
"original": {
"owner": "goauthentik",
"ref": "version/2025.12.4",
"ref": "version/2026.2.1",
"repo": "authentik",
"type": "github"
}
@@ -84,11 +84,11 @@
]
},
"locked": {
"lastModified": 1771881364,
"narHash": "sha256-A5uE/hMium5of/QGC6JwF5TGoDAfpNtW00T0s9u/PN8=",
"lastModified": 1773025010,
"narHash": "sha256-khlHllTsovXgT2GZ0WxT4+RvuMjNeR5OW0UYeEHPYQo=",
"owner": "nix-community",
"repo": "disko",
"rev": "a4cb7bf73f264d40560ba527f9280469f1f081c6",
"rev": "7b9f7f88ab3b339f8142dc246445abb3c370d3d3",
"type": "github"
},
"original": {
@@ -100,11 +100,11 @@
"flake-compat": {
"flake": false,
"locked": {
"lastModified": 1765121682,
"narHash": "sha256-4VBOP18BFeiPkyhy9o4ssBNQEvfvv1kXkasAYd0+rrA=",
"lastModified": 1767039857,
"narHash": "sha256-vNpUSpF5Nuw8xvDLj2KCwwksIbjua2LZCqhV1LNRDns=",
"owner": "edolstra",
"repo": "flake-compat",
"rev": "65f23138d8d09a92e30f1e5c87611b23ef451bf3",
"rev": "5edf11c44bc78a0d334f6334cdaf7d60d732daab",
"type": "github"
},
"original": {
@@ -114,6 +114,22 @@
}
},
"flake-compat_2": {
"flake": false,
"locked": {
"lastModified": 1767039857,
"narHash": "sha256-vNpUSpF5Nuw8xvDLj2KCwwksIbjua2LZCqhV1LNRDns=",
"owner": "NixOS",
"repo": "flake-compat",
"rev": "5edf11c44bc78a0d334f6334cdaf7d60d732daab",
"type": "github"
},
"original": {
"owner": "NixOS",
"repo": "flake-compat",
"type": "github"
}
},
"flake-compat_3": {
"flake": false,
"locked": {
"lastModified": 1761588595,
@@ -134,11 +150,11 @@
"nixpkgs-lib": "nixpkgs-lib"
},
"locked": {
"lastModified": 1765835352,
"narHash": "sha256-XswHlK/Qtjasvhd1nOa1e8MgZ8GS//jBoTqWtrS1Giw=",
"lastModified": 1769996383,
"narHash": "sha256-AnYjnFWgS49RlqX7LrC4uA+sCCDBj0Ry/WOJ5XWAsa0=",
"owner": "hercules-ci",
"repo": "flake-parts",
"rev": "a34fae9c08a15ad73f295041fec82323541400a9",
"rev": "57928607ea566b5db3ad13af0e57e921e6b12381",
"type": "github"
},
"original": {
@@ -169,12 +185,34 @@
}
},
"git-hooks": {
"inputs": {
"flake-compat": "flake-compat_2",
"gitignore": "gitignore",
"nixpkgs": [
"nixpkgs"
]
},
"locked": {
"lastModified": 1772893680,
"narHash": "sha256-JDqZMgxUTCq85ObSaFw0HhE+lvdOre1lx9iI6vYyOEs=",
"owner": "cachix",
"repo": "git-hooks.nix",
"rev": "8baab586afc9c9b57645a734c820e4ac0a604af9",
"type": "github"
},
"original": {
"owner": "cachix",
"repo": "git-hooks.nix",
"type": "github"
}
},
"git-hooks_2": {
"inputs": {
"flake-compat": [
"mailserver",
"flake-compat"
],
"gitignore": "gitignore",
"gitignore": "gitignore_2",
"nixpkgs": [
"mailserver",
"nixpkgs"
@@ -195,6 +233,27 @@
}
},
"gitignore": {
"inputs": {
"nixpkgs": [
"git-hooks",
"nixpkgs"
]
},
"locked": {
"lastModified": 1709087332,
"narHash": "sha256-HG2cCnktfHsKV0s4XW83gU3F57gaTljL9KNSuG6bnQs=",
"owner": "hercules-ci",
"repo": "gitignore.nix",
"rev": "637db329424fd7e46cf4185293b9cc8c88c95394",
"type": "github"
},
"original": {
"owner": "hercules-ci",
"repo": "gitignore.nix",
"type": "github"
}
},
"gitignore_2": {
"inputs": {
"nixpkgs": [
"mailserver",
@@ -219,16 +278,16 @@
"mailserver": {
"inputs": {
"blobs": "blobs",
"flake-compat": "flake-compat_2",
"git-hooks": "git-hooks",
"flake-compat": "flake-compat_3",
"git-hooks": "git-hooks_2",
"nixpkgs": "nixpkgs_2"
},
"locked": {
"lastModified": 1766537863,
"narHash": "sha256-HEt+wbazRgJYeY+lgj65bxhPyVc4x7NEB2bs5NU6DF8=",
"lastModified": 1773313890,
"narHash": "sha256-NXm/kOAk7HLziH1uWaUbNb9MhDS8yxFfQ8fMK5eN8/A=",
"ref": "nixos-25.11",
"rev": "23f0a53ca6e58e61e1ea2b86791c69b79c91656d",
"revCount": 841,
"rev": "9cdd6869e513df8153db4b920c8f15d394e150f7",
"revCount": 842,
"type": "git",
"url": "https://gitlab.com/simple-nixos-mailserver/nixos-mailserver"
},
@@ -266,11 +325,11 @@
},
"nixpkgs": {
"locked": {
"lastModified": 1768305791,
"narHash": "sha256-AIdl6WAn9aymeaH/NvBj0H9qM+XuAuYbGMZaP0zcXAQ=",
"lastModified": 1771848320,
"narHash": "sha256-0MAd+0mun3K/Ns8JATeHT1sX28faLII5hVLq0L3BdZU=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "1412caf7bf9e660f2f962917c14b1ea1c3bc695e",
"rev": "2fc6539b481e1d2569f25f8799236694180c0993",
"type": "github"
},
"original": {
@@ -282,11 +341,11 @@
},
"nixpkgs-lib": {
"locked": {
"lastModified": 1765674936,
"narHash": "sha256-k00uTP4JNfmejrCLJOwdObYC9jHRrr/5M/a/8L2EIdo=",
"lastModified": 1769909678,
"narHash": "sha256-cBEymOf4/o3FD5AZnzC3J9hLbiZ+QDT/KDuyHXVJOpM=",
"owner": "nix-community",
"repo": "nixpkgs.lib",
"rev": "2075416fcb47225d9b68ac469a5c4801a9c4dd85",
"rev": "72716169fe93074c333e8d0173151350670b824c",
"type": "github"
},
"original": {
@@ -313,11 +372,11 @@
},
"nixpkgs_3": {
"locked": {
"lastModified": 1771903837,
"narHash": "sha256-sdaqdnsQCv3iifzxwB22tUwN/fSHoN7j2myFW5EIkGk=",
"lastModified": 1773222311,
"narHash": "sha256-BHoB/XpbqoZkVYZCfXJXfkR+GXFqwb/4zbWnOr2cRcU=",
"owner": "nixos",
"repo": "nixpkgs",
"rev": "e764fc9a405871f1f6ca3d1394fb422e0a0c3951",
"rev": "0590cd39f728e129122770c029970378a79d076a",
"type": "github"
},
"original": {
@@ -343,11 +402,11 @@
]
},
"locked": {
"lastModified": 1763662255,
"narHash": "sha256-4bocaOyLa3AfiS8KrWjZQYu+IAta05u3gYZzZ6zXbT0=",
"lastModified": 1771423342,
"narHash": "sha256-7uXPiWB0YQ4HNaAqRvVndYL34FEp1ZTwVQHgZmyMtC8=",
"owner": "pyproject-nix",
"repo": "build-system-pkgs",
"rev": "042904167604c681a090c07eb6967b4dd4dae88c",
"rev": "04e9c186e01f0830dad3739088070e4c551191a4",
"type": "github"
},
"original": {
@@ -364,11 +423,11 @@
]
},
"locked": {
"lastModified": 1764134915,
"narHash": "sha256-xaKvtPx6YAnA3HQVp5LwyYG1MaN4LLehpQI8xEdBvBY=",
"lastModified": 1771518446,
"narHash": "sha256-nFJSfD89vWTu92KyuJWDoTQJuoDuddkJV3TlOl1cOic=",
"owner": "pyproject-nix",
"repo": "pyproject.nix",
"rev": "2c8df1383b32e5443c921f61224b198a2282a657",
"rev": "eb204c6b3335698dec6c7fc1da0ebc3c6df05937",
"type": "github"
},
"original": {
@@ -381,6 +440,7 @@
"inputs": {
"authentik": "authentik",
"disko": "disko",
"git-hooks": "git-hooks",
"mailserver": "mailserver",
"nixpkgs": "nixpkgs_3",
"sops": "sops"
@@ -393,11 +453,11 @@
]
},
"locked": {
"lastModified": 1771889317,
"narHash": "sha256-YV17Q5lEU0S9ppw08Y+cs4eEQJBuc79AzblFoHORLMU=",
"lastModified": 1773096132,
"narHash": "sha256-M3zEnq9OElB7zqc+mjgPlByPm1O5t2fbUrH3t/Hm5Ag=",
"owner": "Mic92",
"repo": "sops-nix",
"rev": "b027513c32e5b39b59f64626b87fbe168ae02094",
"rev": "d1ff3b1034d5bab5d7d8086a7803c5a5968cd784",
"type": "github"
},
"original": {
@@ -433,11 +493,11 @@
]
},
"locked": {
"lastModified": 1765631794,
"narHash": "sha256-90d//IZ4GXipNsngO4sb2SAPbIC/a2P+IAdAWOwpcOM=",
"lastModified": 1772187362,
"narHash": "sha256-gCojeIlQ/rfWMe3adif3akyHsT95wiMkLURpxTeqmPc=",
"owner": "pyproject-nix",
"repo": "uv2nix",
"rev": "4cca323a547a1aaa9b94929c4901bed5343eafe8",
"rev": "abe65de114300de41614002fe9dce2152ac2ac23",
"type": "github"
},
"original": {

111
flake.nix

@@ -18,6 +18,10 @@
url = "github:nix-community/disko";
inputs.nixpkgs.follows = "nixpkgs";
};
git-hooks = {
url = "github:cachix/git-hooks.nix";
inputs.nixpkgs.follows = "nixpkgs";
};
};
outputs =
@@ -28,6 +32,7 @@
mailserver,
disko,
sops,
git-hooks,
}:
let
sshkeys = [
@@ -38,7 +43,80 @@
in
rec {
formatter.x86_64-linux = nixpkgs.legacyPackages.x86_64-linux.nixfmt-rfc-style;
devShells.x86_64-linux.default =
let
pkgs = nixpkgs.legacyPackages.x86_64-linux;
pre-commit-check = git-hooks.lib.x86_64-linux.run {
src = ./.;
hooks = {
nixfmt-rfc-style.enable = true;
};
};
in
pkgs.mkShell {
# Import GPG keys from keys directory
sopsPGPKeyDirs = [
"${toString ./.}/keys/hosts"
"${toString ./.}/keys/users"
];
# Isolate sops GPG keys to .git/gnupg (optional)
# sopsCreateGPGHome = true;
shellHook = ''
${pre-commit-check.shellHook}
'';
nativeBuildInputs = [
sops.packages.x86_64-linux.sops-import-keys-hook
];
buildInputs = pre-commit-check.enabledPackages ++ [
pkgs.sops
];
};
packages.x86_64-linux =
let
pkgs = nixpkgs.legacyPackages.x86_64-linux;
# Hugo documentation site package
docs-site = pkgs.stdenv.mkDerivation {
name = "stura-infra-docs";
src = ./.;
nativeBuildInputs = [ pkgs.hugo ];
buildPhase = ''
# Create Hugo structure
mkdir -p hugo-site
cp ${./docs/hugo.yaml} hugo-site/hugo.yaml
# Install hugo-book theme
mkdir -p hugo-site/themes
cp -r ${
pkgs.fetchFromGitHub {
owner = "alex-shpak";
repo = "hugo-book";
rev = "v13";
sha256 = "sha256-r2KfmWK7BC7LjnZVvwb2Mbqnd8a6Q32fBqiQfZTpGy4=";
}
} hugo-site/themes/hugo-book
# Build content from README files
bash ${./docs/build-docs.sh} . hugo-site/content
# Build Hugo site
cd hugo-site
hugo --minify
'';
installPhase = ''
mkdir -p $out
cp -r public/* $out/
'';
};
in
builtins.foldl'
(
result: name:
@ -46,12 +124,18 @@
// {
# run nixos-rebuild switch on the target system
# the config will be built locally and copied over
"${name}-update" = nixpkgs.legacyPackages.x86_64-linux.writeShellScriptBin "update" ''
nixos-rebuild switch --flake .#${name} --target-host root@${(builtins.head (nixosConfigurations.${name}.config.networking.interfaces.${builtins.head (builtins.attrNames nixosConfigurations.${name}.config.networking.interfaces)}.ipv4.addresses)).address}
"${name}-update" = pkgs.writeShellScriptBin "update" ''
nixos-rebuild switch --flake .#${name} --target-host root@${
(builtins.head (
nixosConfigurations.${name}.config.networking.interfaces.${
builtins.head (builtins.attrNames nixosConfigurations.${name}.config.networking.interfaces)
}.ipv4.addresses
)).address
}
'';
}
)
{ }
{ inherit docs-site; }
(
# filter all nixos configs containing installer
builtins.filter (item: !nixpkgs.lib.hasInfix "-" item) (builtins.attrNames nixosConfigurations)
@ -88,6 +172,27 @@
installer-iso = iso-config.config.system.build.isoImage;
installer-vm = iso-config.config.system.build.vm;
}
)
// (
# Container tarballs for LXC deployment to Proxmox
# Only generates tarballs for hosts that import proxmox-lxc.nix
let
lxcHosts = builtins.filter (
name:
let
hostPath = ./hosts/${name}/default.nix;
content = builtins.readFile hostPath;
in
builtins.match ".*proxmox-lxc.nix.*" content != null
) (builtins.attrNames (builtins.readDir ./hosts));
in
builtins.foldl' (
result: name:
result
// {
"containers-${name}" = nixosConfigurations.${name}.config.system.build.tarball;
}
) { } lxcHosts
);
nixosConfigurations = builtins.foldl' (

210
hosts/git/README.md Normal file

@ -0,0 +1,210 @@
# Git Host - Forgejo
Forgejo git server at 141.56.51.7 running in an LXC container.
## Overview
- **Hostname**: git
- **FQDN**: git.adm.htw.stura-dresden.de
- **IP Address**: 141.56.51.7
- **Type**: Proxmox LXC Container
- **Services**: Forgejo, Nginx (reverse proxy), OpenSSH
## Services
### Forgejo
Forgejo is a self-hosted Git service (fork of Gitea) providing:
- Git repository hosting
- Web interface for repository management
- Issue tracking
- Pull requests
- OAuth2 integration support
**Configuration**:
- **Socket**: `/run/forgejo/forgejo.sock` (Unix socket)
- **Root URL**: https://git.adm.htw.stura-dresden.de
- **Protocol**: HTTP over Unix socket (Nginx handles TLS)
### Nginx
Nginx acts as a reverse proxy between the network and Forgejo:
- Receives HTTPS requests (TLS termination)
- Forwards to Forgejo via Unix socket
- Manages ACME/Let's Encrypt certificates
- WebSocket support enabled for live updates
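In NixOS terms this proxy setup can be sketched roughly as follows (a minimal sketch only; the host's actual virtualHost presumably lives in `hosts/git/default.nix` and may differ):
```nix
# Sketch: socket path and domain are taken from this README;
# proxyPass/proxyWebsockets are standard NixOS nginx options.
services.nginx.virtualHosts."git.adm.htw.stura-dresden.de" = {
  forceSSL = true;    # TLS termination at Nginx
  enableACME = true;  # Let's Encrypt certificates
  locations."/" = {
    proxyPass = "http://unix:/run/forgejo/forgejo.sock";
    proxyWebsockets = true;  # needed for live updates
  };
};
```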
### OAuth2 Auto-Registration
OAuth2 client auto-registration is enabled:
- `ENABLE_AUTO_REGISTRATION = true`
- `REGISTER_EMAIL_CONFIRM = false`
- Username field: email
This allows users to register automatically via OAuth2 providers without manual approval.
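The corresponding Forgejo settings can be sketched in Nix (illustrative only; key names follow Forgejo's `[oauth2_client]` app.ini section via `services.forgejo.settings`, and the host's real config may differ):
```nix
services.forgejo.settings.oauth2_client = {
  ENABLE_AUTO_REGISTRATION = true;  # register new users on first OAuth2 login
  REGISTER_EMAIL_CONFIRM = false;   # skip the confirmation mail
  username = "email";               # derive the username from the email claim
};
```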
## Deployment
See the [main README](../../README.md) for deployment methods.
### Initial Installation
**Using nixos-anywhere:**
```bash
nix run github:nix-community/nixos-anywhere -- --flake .#git --target-host root@141.56.51.7
```
**Using container tarball:**
```bash
nix build .#containers-git
scp result/tarball/nixos-system-x86_64-linux.tar.xz root@proxmox-host:/var/lib/vz/template/cache/
pct create 107 /var/lib/vz/template/cache/nixos-system-x86_64-linux.tar.xz \
--hostname git \
--net0 name=eth0,bridge=vmbr0,ip=141.56.51.7/24,gw=141.56.51.254 \
--memory 2048 \
--cores 2 \
--rootfs local-lvm:8 \
--unprivileged 1 \
--features nesting=1
pct start 107
```
### Updates
```bash
# From local machine
nixos-rebuild switch --flake .#git --target-host root@141.56.51.7
# Or use auto-generated script
nix run .#git-update
```
## Post-Deployment Steps
After deploying for the first time:
1. **Access the web interface:**
```
https://git.adm.htw.stura-dresden.de
```
2. **Complete initial setup:**
- Create the first admin account via web UI
- Configure any additional settings
- Set up SSH keys for git access
3. **Configure OAuth2 (optional):**
- If using an external identity provider (e.g., authentik)
- Add OAuth2 application in the provider
- Configure OAuth2 settings in Forgejo admin panel
- Auto-registration is already enabled in configuration
4. **Set up repositories:**
- Create organizations
- Create repositories
- Configure access permissions
## Integration with Proxy
The central proxy at 141.56.51.1 handles:
- **SNI routing**: Inspects TLS handshake and routes HTTPS traffic for git.adm.htw.stura-dresden.de
- **HTTP routing**: Routes HTTP traffic based on Host header
- **ACME challenges**: Forwards `/.well-known/acme-challenge/` requests to this host for Let's Encrypt verification
- **Auto-redirect**: Redirects HTTP to HTTPS (except ACME challenges)
This host handles its own TLS certificates via ACME. The proxy passes through encrypted traffic without decryption.
## Troubleshooting
### Forgejo socket permissions
If Forgejo fails to start or Nginx cannot connect:
```bash
# Check socket exists
ls -l /run/forgejo/forgejo.sock
# Check Forgejo service status
systemctl status forgejo
# Check Nginx service status
systemctl status nginx
# View Forgejo logs
journalctl -u forgejo -f
```
**Solution**: Ensure the Forgejo user has proper permissions and the socket path is correct in both Forgejo and Nginx configurations.
### Nginx proxy configuration
If the web interface is unreachable:
```bash
# Check Nginx configuration
nginx -t
# View Nginx error logs
journalctl -u nginx -f
# Test socket connection
curl --unix-socket /run/forgejo/forgejo.sock http://localhost/
```
**Solution**: Verify the `proxyPass` directive in Nginx configuration points to the correct Unix socket.
### SSH access issues
If git operations over SSH fail:
```bash
# Check SSH service
systemctl status sshd
# Test SSH connection
ssh -T git@git.adm.htw.stura-dresden.de
# Check Forgejo SSH settings
grep -A 5 "\[server\]" /var/lib/forgejo/custom/conf/app.ini
```
**Solution**: Ensure SSH keys are properly added to user accounts and SSH daemon is running.
### ACME certificate issues
If HTTPS is not working:
```bash
# Check ACME certificate status
systemctl status acme-git.adm.htw.stura-dresden.de
# View ACME logs
journalctl -u acme-git.adm.htw.stura-dresden.de -f
# Manually trigger certificate renewal
systemctl start acme-git.adm.htw.stura-dresden.de
```
**Solution**: Verify DNS points to proxy (141.56.51.1) and proxy is forwarding ACME challenges correctly.
## Files and Directories
- **Configuration**: `/nix/store/.../forgejo/` (managed by Nix)
- **Data directory**: `/var/lib/forgejo/`
- **Custom config**: `/var/lib/forgejo/custom/conf/app.ini`
- **Repositories**: `/var/lib/forgejo/data/gitea-repositories/`
- **Socket**: `/run/forgejo/forgejo.sock`
## Network
- **Interface**: eth0 (LXC container)
- **IP**: 141.56.51.7/24
- **Gateway**: 141.56.51.254
- **Firewall**: Ports 22, 80, 443 allowed
## See Also
- [Main README](../../README.md) - Deployment methods and architecture
- [Proxy README](../proxy/README.md) - How the central proxy routes traffic
- [Forgejo Documentation](https://forgejo.org/docs/latest/)
- [NixOS Forgejo Options](https://search.nixos.org/options?query=services.forgejo)


@ -2,23 +2,28 @@
config,
lib,
pkgs,
modulesPath,
...
}:
{
imports = [
./hardware-configuration.nix
"${modulesPath}/virtualisation/proxmox-lxc.nix"
];
networking = {
hostName = "git";
interfaces.ens18.ipv4.addresses = [
fqdn = "git.adm.htw.stura-dresden.de";
interfaces.eth0.ipv4.addresses = [
{
address = "141.56.51.97";
address = "141.56.51.7";
prefixLength = 24;
}
];
defaultGateway.address = "141.56.51.254";
defaultGateway = {
address = "141.56.51.254";
interface = "eth0";
};
firewall.allowedTCPPorts = [
80
443
@ -42,7 +47,7 @@
username = "email";
};
service = {
SHOW_REGISTRATION_BUTTON = "false";
# SHOW_REGISTRATION_BUTTON = "false";
};
};
};
@ -62,6 +67,6 @@
};
system.stateVersion = "24.11";
system.stateVersion = "25.11";
}

353
hosts/nextcloud/README.md Normal file

@ -0,0 +1,353 @@
# Nextcloud Host
Nextcloud 31 instance at 141.56.51.16 running in an LXC container.
## Overview
- **Hostname**: cloud
- **FQDN**: cloud.htw.stura-dresden.de
- **IP Address**: 141.56.51.16
- **Type**: Proxmox LXC Container
- **Services**: Nextcloud, PostgreSQL, Redis (caching + locking), Nginx, Nullmailer
## Services
### Nextcloud
Nextcloud 31 provides file hosting and collaboration:
- **Admin user**: administration
- **Max upload size**: 1GB
- **Database**: PostgreSQL (via Unix socket)
- **Caching**: Redis (via Unix socket)
- **Default phone region**: DE (Germany)
- **HTTPS**: Enabled via Nginx reverse proxy
- **Log level**: 4 (warnings and errors)
- **Maintenance window**: 4 AM (prevents maintenance during business hours)
**Pre-installed apps:**
- Calendar
- Deck (Kanban board)
- Tasks
- Notes
- Contacts
### PostgreSQL
Database backend for Nextcloud:
- **Database name**: nextcloud
- **User**: nextcloud
- **Connection**: Unix socket (`/run/postgresql`)
- **Privileges**: Full access to nextcloud database
### Redis
Two Redis instances for performance:
- **Cache**: General caching via `/run/redis-nextcloud/redis.sock`
- **Locking**: Distributed locking mechanism
- **Port**: 0 (Unix socket only)
- **User**: nextcloud
### Nginx
Reverse proxy with recommended settings:
- **Gzip compression**: Enabled
- **Optimization**: Enabled
- **Proxy settings**: Enabled
- **TLS**: Enabled with ACME certificates
- **Access logs**: Disabled (privacy)
- **Error logs**: Only emergency level (`/dev/null emerg`)
### Nullmailer
Simple mail relay for sending email notifications:
- **Relay host**: mail.stura.htw-dresden.de:25
- **From address**: files@stura.htw-dresden.de
- **HELO host**: cloud.htw.stura-dresden.de
- **Protocol**: SMTP (port 25, no auth)
Nextcloud uses Nullmailer's sendmail interface to send email notifications.
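A sketch of this relay in NixOS option form (option names per the `services.nullmailer` module; treat as illustrative, the host's real config may differ):
```nix
services.nullmailer = {
  enable = true;
  setSendmail = true;  # provide the sendmail interface Nextcloud uses
  config = {
    helohost = "cloud.htw.stura-dresden.de";
    allmailfrom = "files@stura.htw-dresden.de";
    # relay everything to the central mail host, plain SMTP on port 25
    remotes = "mail.stura.htw-dresden.de smtp --port=25";
  };
};
```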
## Deployment
See the [main README](../../README.md) for deployment methods.
### Initial Installation
**Using nixos-anywhere:**
```bash
nix run github:nix-community/nixos-anywhere -- --flake .#nextcloud --target-host root@141.56.51.16
```
**Using container tarball:**
```bash
nix build .#containers-nextcloud
scp result/tarball/nixos-system-x86_64-linux.tar.xz root@proxmox-host:/var/lib/vz/template/cache/
pct create 116 /var/lib/vz/template/cache/nixos-system-x86_64-linux.tar.xz \
--hostname cloud \
--net0 name=eth0,bridge=vmbr0,ip=141.56.51.16/24,gw=141.56.51.254 \
--memory 4096 \
--cores 4 \
--rootfs local-lvm:20 \
--unprivileged 1 \
--features nesting=1
pct start 116
```
**Note**: Nextcloud benefits from more resources (4GB RAM, 20GB disk recommended).
### Updates
```bash
# From local machine
nixos-rebuild switch --flake .#nextcloud --target-host root@141.56.51.16
# Or use auto-generated script
nix run .#nextcloud-update
```
## Post-Deployment Steps
After deploying for the first time:
1. **Set admin password:**
```bash
echo "your-secure-password" > /var/lib/nextcloud/adminpassFile
chmod 600 /var/lib/nextcloud/adminpassFile
chown nextcloud:nextcloud /var/lib/nextcloud/adminpassFile
```
2. **Access the web interface:**
```
https://cloud.htw.stura-dresden.de
```
3. **Complete initial setup:**
- Log in with admin credentials (user: administration)
- Review security & setup warnings
- Configure background jobs (cron is already configured via NixOS)
4. **Configure additional apps:**
- Navigate to Apps section
- Enable/disable apps as needed
- Pre-installed apps: Calendar, Deck, Tasks, Notes, Contacts
5. **Configure trusted domains** (if needed):
- Current trusted domains: cloud.htw.stura-dresden.de, www.cloud.htw.stura-dresden.de
- Edit via NixOS config if you need to add more domains
6. **Test email notifications** (optional):
- Navigate to Settings → Administration → Basic settings
- Send test email
- Verify email delivery through Nullmailer relay
7. **Configure user authentication:**
- Add users manually, or
- Configure LDAP/OAuth if using external identity provider
## Integration with Proxy
The central proxy at 141.56.51.1 handles:
- **SNI routing**: Routes HTTPS traffic for cloud.htw.stura-dresden.de
- **HTTP routing**: Routes HTTP traffic and redirects to HTTPS
- **ACME challenges**: Forwards certificate verification requests
This host manages its own ACME certificates. Nginx handles TLS termination.
## Troubleshooting
### Redis connection issues
If Nextcloud shows "Redis not available" errors:
```bash
# Check Redis status
systemctl status redis-nextcloud
# Check socket exists and permissions
ls -l /run/redis-nextcloud/redis.sock
# Test Redis connection
redis-cli -s /run/redis-nextcloud/redis.sock ping
# View Redis logs
journalctl -u redis-nextcloud -f
```
**Solution**: Ensure Redis is running and the nextcloud user has access to the socket.
### PostgreSQL permissions
If Nextcloud cannot connect to the database:
```bash
# Check PostgreSQL status
systemctl status postgresql
# Check database exists
sudo -u postgres psql -c "\l" | grep nextcloud
# Check user and permissions
sudo -u postgres psql -c "\du" | grep nextcloud
# Test connection as nextcloud user
sudo -u nextcloud psql -d nextcloud -c "SELECT version();"
# View PostgreSQL logs
journalctl -u postgresql -f
```
**Solution**: Ensure the nextcloud database and user exist with proper permissions.
### Upload size limits
If large file uploads fail:
```bash
# Check Nextcloud upload size setting
grep -i "upload" /var/lib/nextcloud/config/config.php
# Check PHP-FPM settings
systemctl status phpfpm-nextcloud
# View PHP error logs
tail -f /var/log/phpfpm-nextcloud.log
```
**Solution**: The maximum upload size is set to 1GB via `maxUploadSize`. If you need larger files, modify the NixOS configuration.
### Opcache configuration
If PHP performance is poor:
```bash
# Check PHP opcache settings
php -i | grep opcache
# Check opcache status via Nextcloud admin panel
# Settings → Administration → Overview → PHP
# Restart PHP-FPM to clear cache
systemctl restart phpfpm-nextcloud
```
**Solution**: The opcache interned strings buffer is set to 32MB. If you see opcache errors, this may need adjustment.
### Mail relay issues
If email notifications are not being sent:
```bash
# Check Nullmailer status
systemctl status nullmailer
# Check mail queue
mailq
# View Nullmailer logs
journalctl -u nullmailer -f
# Test mail relay
echo "Test message" | mail -s "Test" user@example.com
# Check Nextcloud mail settings
sudo -u nextcloud php /var/lib/nextcloud/occ config:list | grep mail
```
**Solution**: Verify the mail relay host (mail.stura.htw-dresden.de) is reachable and accepting SMTP connections on port 25.
### ACME certificate issues
If HTTPS is not working:
```bash
# Check ACME certificate status
systemctl status acme-cloud.htw.stura-dresden.de
# View ACME logs
journalctl -u acme-cloud.htw.stura-dresden.de -f
# Check Nginx HTTPS configuration
nginx -t
# View Nginx error logs
journalctl -u nginx -f
```
**Solution**: Ensure DNS points to proxy (141.56.51.1) and the proxy forwards ACME challenges to this host.
### Maintenance mode stuck
If Nextcloud is stuck in maintenance mode:
```bash
# Disable maintenance mode
sudo -u nextcloud php /var/lib/nextcloud/occ maintenance:mode --off
# Check status
sudo -u nextcloud php /var/lib/nextcloud/occ status
# Run system check
sudo -u nextcloud php /var/lib/nextcloud/occ check
```
**Solution**: Maintenance mode is automatically disabled after updates, but can sometimes get stuck.
## Files and Directories
- **Nextcloud data**: `/var/lib/nextcloud/`
- **Admin password**: `/var/lib/nextcloud/adminpassFile`
- **Configuration**: `/var/lib/nextcloud/config/config.php`
- **Apps**: `/var/lib/nextcloud/apps/`
- **User files**: `/var/lib/nextcloud/data/`
- **PostgreSQL data**: `/var/lib/postgresql/`
- **Redis socket**: `/run/redis-nextcloud/redis.sock`
## Network
- **Interface**: eth0 (LXC container)
- **IP**: 141.56.51.16/24
- **Gateway**: 141.56.51.254
- **Firewall**: Ports 80, 443 allowed
## Configuration Details
- **Version**: Nextcloud 31
- **Database type**: PostgreSQL
- **Caching**: Redis (APCU disabled)
- **HTTPS**: Yes (enforced via forceSSL)
- **Trusted domains**:
- cloud.htw.stura-dresden.de
- www.cloud.htw.stura-dresden.de
- **PHP opcache**: Interned strings buffer 32MB
- **Maintenance window**: 4 AM (hour 4)
- **Log level**: 4 (warnings and errors)
## Useful Commands
```bash
# Run occ commands (Nextcloud CLI)
sudo -u nextcloud php /var/lib/nextcloud/occ <command>
# List all users
sudo -u nextcloud php /var/lib/nextcloud/occ user:list
# Scan files for changes
sudo -u nextcloud php /var/lib/nextcloud/occ files:scan --all
# Run background jobs
sudo -u nextcloud php /var/lib/nextcloud/occ background:cron
# Update apps
sudo -u nextcloud php /var/lib/nextcloud/occ app:update --all
# Check for Nextcloud updates
sudo -u nextcloud php /var/lib/nextcloud/occ update:check
```
## See Also
- [Main README](../../README.md) - Deployment methods and architecture
- [Proxy README](../proxy/README.md) - How the central proxy routes traffic
- [Nextcloud Documentation](https://docs.nextcloud.com/)
- [Nextcloud Admin Manual](https://docs.nextcloud.com/server/stable/admin_manual/)
- [NixOS Nextcloud Options](https://search.nixos.org/options?query=services.nextcloud)


@ -1,7 +1,365 @@
# Zentraler Reverse Proxy Für ehemals öffentliche IP-Adressen
# Proxy Host - Central Reverse Proxy
Central reverse proxy at 141.56.51.1 running as a full VM (not LXC container).
## Overview
- **Hostname**: proxy
- **IP Address**: 141.56.51.1
- **Type**: Full VM (not LXC)
- **Services**: HAProxy, OpenSSH (ports 1005, 2142)
- **Role**: Central traffic router for all StuRa HTW Dresden services
## Architecture
The proxy is the central entry point for all HTTP/HTTPS traffic:
- All public DNS records point to this IP (141.56.51.1)
- HAProxy performs SNI-based routing for HTTPS traffic
- HTTP Host header-based routing for unencrypted traffic
- Automatic redirect from HTTP to HTTPS (except ACME challenges)
- Backend services handle their own TLS certificates
## Services
### HAProxy
HAProxy routes traffic using two methods:
**1. HTTP Mode (Port 80)**
- Inspects the HTTP Host header
- Routes requests to appropriate backend
- Forwards ACME challenges (/.well-known/acme-challenge/) to backends
- Redirects all other HTTP traffic to HTTPS (301 redirect)
- Default backend serves an index page listing all services
**2. TCP Mode (Port 443)**
- SNI inspection during TLS handshake
- Routes encrypted traffic without decryption
- Backends terminate TLS and serve their own certificates
- 1-second inspection delay to buffer packets
**Key features:**
- Logging: All connections logged to systemd journal
- Stats page: Available at http://127.0.0.1:8404/stats (localhost only)
- Max connections: 50,000
- Buffer size: 32,762 bytes
- Timeouts: 5s connect, 30s client/server
### SSH Services
**Port 1005: Admin SSH Access**
- Primary SSH port for administrative access
- Configured in `services.openssh.listenAddresses`
- Used for system management and deployments
**Port 2142: SSH Jump to srs2**
- Forwards SSH connections to srs2 (141.56.51.2:80)
- 30-minute session timeout
- TCP keep-alive enabled
- Used for accessing legacy systems
### Auto-Generated Forwarding
The proxy configuration is **partially auto-generated**:
1. **Static forwards**: Manually defined in the `forwards` attribute set in default.nix
2. **Dynamic forwards**: Automatically generated from all NixOS configurations in this flake that have `services.nginx.enable = true`
When you deploy a new host with Nginx enabled, the proxy automatically:
- Detects the nginx virtualHosts
- Extracts the IP address from the host configuration
- Generates HAProxy backend rules for ports 80 and 443
**No manual proxy configuration needed for nginx-enabled hosts!**
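The discovery step can be sketched in Nix roughly like this (illustrative only; helper names such as `dynamicForwards` are not from the repo, which generates the full HAProxy rules rather than an intermediate list):
```nix
# For every nixosConfiguration with nginx enabled, pair its
# virtualHost names with the host's first static IPv4 address.
dynamicForwards = lib.concatMap (
  name:
  let
    cfg = nixosConfigurations.${name}.config;
    iface = builtins.head (builtins.attrNames cfg.networking.interfaces);
    ip = (builtins.head cfg.networking.interfaces.${iface}.ipv4.addresses).address;
  in
  lib.optionals cfg.services.nginx.enable (
    map (domain: { inherit domain; dest = ip; }) (
      builtins.attrNames cfg.services.nginx.virtualHosts
    )
  )
) (builtins.attrNames nixosConfigurations);
```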
## Deployment Type
Unlike other hosts in this infrastructure, the proxy is a **full VM** (not an LXC container):
- Has dedicated hardware-configuration.nix
- Uses disko for declarative disk management (hetzner-disk.nix)
- Network interface: `ens18` (not `eth0` like LXC containers)
- Requires more resources and dedicated storage
## Disko Configuration
The proxy uses Btrfs with the following layout (hetzner-disk.nix):
- **Filesystem**: Btrfs
- **Compression**: zstd
- **Subvolumes**:
- `/` - Root filesystem
- `/nix` - Nix store
- `/var` - Variable data
- `/home` - User home directories (if needed)
This provides better performance and snapshot capabilities compared to LXC containers.
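In disko terms the layout can be sketched like this (a sketch only; the device name and partition sizing are illustrative and an ESP is omitted — see `hetzner-disk.nix` for the real layout):
```nix
disko.devices.disk.main = {
  device = "/dev/sda";  # illustrative device name
  type = "disk";
  content = {
    type = "gpt";
    partitions.root = {
      size = "100%";
      content = {
        type = "btrfs";
        # zstd-compressed subvolumes as described above
        subvolumes = {
          "/" = { mountpoint = "/"; mountOptions = [ "compress=zstd" ]; };
          "/nix" = { mountpoint = "/nix"; mountOptions = [ "compress=zstd" ]; };
          "/var" = { mountpoint = "/var"; mountOptions = [ "compress=zstd" ]; };
          "/home" = { mountpoint = "/home"; mountOptions = [ "compress=zstd" ]; };
        };
      };
    };
  };
};
```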
## Deployment
See the [main README](../../README.md) for deployment methods.
### Initial Installation
**Using nixos-anywhere (recommended):**
```bash
nix run github:nix-community/nixos-anywhere -- --flake .#proxy --target-host root@141.56.51.1
```
This handles disk partitioning via disko automatically.
### Updates
**Note**: SSH runs on port 1005, so you need an SSH config entry:
```bash
# ~/.ssh/config
Host 141.56.51.1
Port 1005
```
Then deploy:
```bash
# From local machine
nixos-rebuild switch --flake .#proxy --target-host root@141.56.51.1
# Or use auto-generated script
nix run .#proxy-update
```
## Network Configuration
- **Interface**: ens18 (VM interface)
- **IP**: 141.56.51.1/24
- **Gateway**: 141.56.51.254
- **DNS**: 9.9.9.9, 1.1.1.1 (public DNS, not HTW internal)
- **Firewall**: nftables enabled
- **Open ports**: 22, 80, 443, 1005, 2142
## Adding New Services
### Method 1: Deploy a host with Nginx (Automatic)
The easiest method - just deploy a new host with nginx enabled:
```nix
# hosts/newservice/default.nix
services.nginx = {
enable = true;
virtualHosts."newservice.htw.stura-dresden.de" = {
forceSSL = true;
enableACME = true;
locations."/" = {
# your config
};
};
};
```
The flake automatically:
1. Discovers the nginx virtualHost
2. Extracts the IP address from networking configuration
3. Generates HAProxy forwarding rules
4. No manual proxy changes needed!
**You still need to:**
- Add DNS record pointing to 141.56.51.1
- Deploy the proxy to pick up changes: `nix run .#proxy-update`
### Method 2: Manual forwarding (for non-Nginx services)
For services not using Nginx, manually add to the `forwards` attribute set in hosts/proxy/default.nix:
```nix
forwards = {
# ... existing forwards ...
newservice = {
dest = "141.56.51.XXX";
domain = "newservice.htw.stura-dresden.de";
httpPort = 80;
httpsPort = 443;
};
};
```
Then deploy the proxy.
## SNI Routing Explained
**SNI (Server Name Indication)** is a TLS extension that includes the hostname in the ClientHello handshake.
How it works:
1. Client initiates TLS connection to 141.56.51.1
2. Client sends ClientHello with SNI field (e.g., "wiki.htw.stura-dresden.de")
3. HAProxy inspects the SNI field (does not decrypt)
4. HAProxy routes the entire encrypted connection to the backend (e.g., 141.56.51.13)
5. Backend terminates TLS and serves the content
**Benefits:**
- No TLS decryption at proxy (end-to-end encryption)
- Each backend manages its own certificates
- Simple certificate renewal via ACME/Let's Encrypt
- No certificate syncing required
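In HAProxy syntax, the TCP-mode SNI routing looks roughly like this (a sketch using standard HAProxy directives; frontend and backend names are illustrative, the real config is generated by Nix):
```
frontend https_in
    bind *:443
    mode tcp
    # buffer the ClientHello so the SNI field can be inspected
    tcp-request inspect-delay 1s
    tcp-request content accept if { req_ssl_hello_type 1 }
    # route on SNI without decrypting the connection
    use_backend wiki_443  if { req_ssl_sni -i wiki.htw.stura-dresden.de }
    use_backend cloud_443 if { req_ssl_sni -i cloud.htw.stura-dresden.de }
```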
## ACME Challenge Forwarding
Let's Encrypt verification works seamlessly:
1. Backend requests certificate from Let's Encrypt
2. Let's Encrypt queries DNS, finds 141.56.51.1
3. Let's Encrypt requests `http://domain/.well-known/acme-challenge/<token>`
4. HAProxy detects ACME challenge path
5. HAProxy forwards to backend without HTTPS redirect
6. Backend responds with challenge token
7. Let's Encrypt verifies and issues certificate
This is why HTTP→HTTPS redirect has an exception for `/.well-known/acme-challenge/` paths.
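The HTTP-mode exception can be sketched in HAProxy syntax like this (illustrative; the real rules are generated per forward by Nix, and backend names are assumptions):
```
frontend http_in
    bind *:80
    mode http
    # match Let's Encrypt HTTP-01 challenge requests
    acl is_acme path_beg /.well-known/acme-challenge/
    # challenges go to the matching backend over plain HTTP
    use_backend git_80 if is_acme { hdr(host) -i git.adm.htw.stura-dresden.de }
    # everything else is redirected to HTTPS
    http-request redirect scheme https code 301 unless is_acme
```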
## HAProxy Stats
Access HAProxy statistics page (localhost only):
```bash
# SSH into proxy
ssh -p 1005 root@141.56.51.1
# Access stats via curl
curl http://127.0.0.1:8404/stats
# Or forward port to your local machine
ssh -p 1005 -L 8404:127.0.0.1:8404 root@141.56.51.1
# Then browse to http://localhost:8404/stats
```
The stats page shows:
- Current connections per backend
- Health check status
- Traffic statistics
- Error counts
## Configuration Structure
The HAProxy configuration is generated from Nix using `lib.foldlAttrs`:
```nix
forwards = {
service_name = {
dest = "141.56.51.XXX";
domain = "service.htw.stura-dresden.de";
httpPort = 80;
httpsPort = 443;
};
};
```
This generates:
- ACL rules for HTTP Host header matching
- Backend definitions for ports 80 and 443
- SNI routing rules for HTTPS
- HTTP→HTTPS redirects (except ACME)
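For example, a fold over `forwards` can emit one backend definition per entry (a sketch; the real generator in `hosts/proxy/default.nix` covers both ports, the ACLs, and the SNI rules):
```nix
# Sketch: one HTTP backend per entry in `forwards`
lib.foldlAttrs (
  prev: name: value:
  prev
  + ''
    backend ${name}_80
      mode http
      server ${name} ${value.dest}:${toString value.httpPort}
  ''
) "" forwards
```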
## Current Forwards
The proxy currently forwards traffic for:
**In this repository:**
- git.adm.htw.stura-dresden.de → 141.56.51.7 (Forgejo)
- wiki.htw.stura-dresden.de → 141.56.51.13 (MediaWiki)
- pro.htw.stura-dresden.de → 141.56.51.15 (Redmine)
- cloud.htw.stura-dresden.de → 141.56.51.16 (Nextcloud)
**External services (managed outside this repo):**
- stura.htw-dresden.de → 141.56.51.3 (Plone)
- tix.htw.stura-dresden.de → 141.56.51.220 (Pretix)
- vot.htw.stura-dresden.de → 141.56.51.57 (OpenSlides)
- mail.htw.stura-dresden.de → 141.56.51.14 (Mail server)
- lists.htw.stura-dresden.de → 141.56.51.14 (Mailing lists)
## Troubleshooting
### HAProxy not starting
```bash
# Check HAProxy status
systemctl status haproxy
# Check configuration syntax
haproxy -c -f /etc/haproxy/haproxy.cfg
# View HAProxy logs
journalctl -u haproxy -f
```
### Backend not reachable
```bash
# Check backend connectivity
curl -v http://141.56.51.XXX:80
curl -vk https://141.56.51.XXX:443
# Check HAProxy stats for backend status
curl http://127.0.0.1:8404/stats | grep backend_name
# Test DNS resolution
dig +short domain.htw.stura-dresden.de
# Check firewall rules
nft list ruleset | grep 141.56.51.XXX
```
### ACME challenges failing
```bash
# Verify HTTP forwarding works
curl -v http://domain.htw.stura-dresden.de/.well-known/acme-challenge/test
# Check HAProxy ACL for ACME
grep -i acme /etc/haproxy/haproxy.cfg
# Verify no HTTPS redirect for ACME paths
curl -I http://domain.htw.stura-dresden.de/.well-known/acme-challenge/test
# Should return 200/404, not 301 redirect
```
### SSH Jump not working
```bash
# Check SSH jump frontend
systemctl status haproxy
journalctl -u haproxy | grep ssh_jump
# Test connection to srs2
telnet 141.56.51.2 80
# Check HAProxy backend configuration
grep -A 5 "ssh_srs2" /etc/haproxy/haproxy.cfg
```
## Files and Directories
- **HAProxy config**: `/etc/haproxy/haproxy.cfg` (generated by Nix)
- **HAProxy socket**: `/run/haproxy/admin.sock` (if enabled)
- **Disk config**: `./hetzner-disk.nix` (disko configuration)
- **Hardware config**: `./hardware-configuration.nix` (VM hardware)
- **NixOS config**: `./default.nix` (proxy configuration)
## Security Considerations
- **No TLS decryption**: End-to-end encryption maintained
- **No access logs**: HAProxy logs to journal (can be filtered)
- **Stats page**: Localhost only (not exposed)
- **Firewall**: Only necessary ports open
- **SSH**: Custom port (1005) reduces automated attacks
---
## Central Reverse Proxy for Formerly Public IP Addresses
(Original documentation, translated from German)
The instances can largely keep running unchanged; only the DNS records need to be changed to point to 141.56.51.1.
## HAProxy
### HAProxy
HAProxy is used to forward the connections.
HAProxy supports several modes.
For us, http and tcp are relevant.
@ -66,3 +424,12 @@ backend cloud_443
...
...
```
## See Also
- [Main README](../../README.md) - Deployment methods and architecture
- [Git README](../git/README.md) - Forgejo configuration
- [Wiki README](../wiki/README.md) - MediaWiki configuration
- [Redmine README](../redmine/README.md) - Redmine configuration
- [Nextcloud README](../nextcloud/README.md) - Nextcloud configuration
- [HAProxy Documentation](http://www.haproxy.org/#docs)


@ -41,8 +41,17 @@
# once the instances are migrated into the flake, this could be auto-generated
services =
let
# Documentation site from flake package
docsSite = self.packages.x86_64-linux.docs-site;
# each block describes a forward of ports 80 and 443 for one fqdn
forwards = {
docs = {
dest = "127.0.0.1";
domain = "docs.adm.htw.stura-dresden.de";
httpPort = 8080;
httpsPort = 8443;
};
plone = {
dest = "141.56.51.3";
domain = "stura.htw-dresden.de";
@ -73,12 +82,30 @@
httpPort = 80;
httpsPort = 443;
};
post = {
dest = "141.56.51.56";
domain = "post.htw.stura-dresden.de";
httpPort = 80;
httpsPort = 443;
};
vot = {
dest = "141.56.51.57";
domain = "vot.htw.stura-dresden.de";
httpPort = 80;
httpsPort = 443;
};
mail = {
dest = "141.56.51.14";
domain = "mail.htw.stura-dresden.de";
httpPort = 80;
httpsPort = 443;
};
lists = {
dest = "141.56.51.14";
domain = "lists.htw.stura-dresden.de";
httpPort = 80;
httpsPort = 443;
};
dat = {
dest = "141.56.51.17";
domain = "dat.stu.htw.stura-dresden.de";
@ -188,6 +215,44 @@
}
];
};
# Nginx to serve the documentation site
nginx = {
enable = true;
virtualHosts."docs.adm.htw.stura-dresden.de" = {
enableACME = true;
listen = [
{
addr = "127.0.0.1";
port = 8080;
}
];
locations."/" = {
root = docsSite;
tryFiles = "$uri $uri/ $uri.html =404";
};
};
# HTTPS version for internal serving
appendHttpConfig = ''
server {
listen 127.0.0.1:8443 ssl http2;
server_name docs.adm.htw.stura-dresden.de;
ssl_certificate ${config.security.acme.certs."docs.adm.htw.stura-dresden.de".directory}/cert.pem;
ssl_certificate_key ${
config.security.acme.certs."docs.adm.htw.stura-dresden.de".directory
}/key.pem;
location / {
root ${docsSite};
try_files $uri $uri/ $uri.html =404;
}
}
'';
};
# ACME certificate for docs site
haproxy = {
enable = true;
config = ''
@ -224,7 +289,8 @@
# one rule per domain is generated here from the forwards list
${lib.foldlAttrs (
prev: name: value:
prev + ''
prev
+ ''
acl is_${name} hdr(host) -i ${value.domain}
''
) "" forwards}

332
hosts/redmine/README.md Normal file

@ -0,0 +1,332 @@
# Redmine Host - Project Management
Redmine project management system at 141.56.51.15 running in an LXC container.
## Overview
- **Hostname**: pro
- **FQDN**: pro.htw.stura-dresden.de
- **IP Address**: 141.56.51.15
- **Type**: Proxmox LXC Container
- **Services**: Redmine (Rails), Nginx (reverse proxy), OpenSSH
## Services
### Redmine
Redmine is a flexible project management web application:
- **Port**: 3000 (local only, not exposed)
- **Database**: SQLite (default NixOS configuration)
- **SMTP relay**: mail.htw.stura-dresden.de:25
- **Image processing**: ImageMagick enabled
- **PDF support**: Ghostscript enabled
- **Auto-upgrade**: Enabled (Redmine updates automatically)
**Features:**
- Issue tracking
- Project wikis
- Time tracking
- Gantt charts and calendars
- Multiple project support
- Role-based access control
### Nginx
Nginx acts as a reverse proxy:
- Receives HTTPS requests (TLS termination)
- Forwards to Redmine on localhost:3000
- Manages ACME/Let's Encrypt certificates
- Default virtual host (catches all traffic to this IP)
**Privacy configuration:**
- Access logs: Disabled
- Error logs: Emergency level only (`/dev/null emerg`)
### Email Delivery
SMTP is configured for email notifications:
- **Delivery method**: SMTP
- **SMTP host**: mail.htw.stura-dresden.de
- **SMTP port**: 25
- **Authentication**: None (internal relay)
Redmine can send notifications for:
- New issues
- Issue updates
- Comments
- Project updates
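In the NixOS module this is expressed via `services.redmine.settings`, which is rendered into Redmine's `configuration.yml` (a sketch; key names follow Redmine's documented email settings, and the host's actual values may differ):
```nix
services.redmine.settings = {
  email_delivery = {
    delivery_method = "smtp";
    smtp_settings = {
      address = "mail.htw.stura-dresden.de";
      port = 25;  # internal relay, no authentication
    };
  };
};
```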
## Deployment
See the [main README](../../README.md) for deployment methods.
### Initial Installation
**Using nixos-anywhere:**
```bash
nix run github:nix-community/nixos-anywhere -- --flake .#redmine --target-host root@141.56.51.15
```
**Using container tarball:**
```bash
nix build .#containers-redmine
scp result/tarball/nixos-system-x86_64-linux.tar.xz root@proxmox-host:/var/lib/vz/template/cache/
pct create 115 /var/lib/vz/template/cache/nixos-system-x86_64-linux.tar.xz \
--hostname pro \
--net0 name=eth0,bridge=vmbr0,ip=141.56.51.15/24,gw=141.56.51.254 \
--memory 2048 \
--cores 2 \
--rootfs local-lvm:10 \
--unprivileged 1 \
--features nesting=1
pct start 115
```
### Updates
```bash
# From local machine
nixos-rebuild switch --flake .#redmine --target-host root@141.56.51.15
# Or use auto-generated script
nix run .#redmine-update
```
## Post-Deployment Steps
After deploying for the first time:
1. **Access the web interface:**
```
https://pro.htw.stura-dresden.de
```
2. **Complete initial setup:**
- Log in with default admin credentials (admin/admin)
- **Immediately change the admin password**
- Configure basic settings (Settings → Administration)
3. **Configure LDAP authentication** (optional):
- Navigate to Administration → LDAP authentication
- Add LDAP server if using external identity provider
- Configure attribute mapping
4. **Set up projects:**
- Create projects via Administration → Projects → New project
- Configure project modules (issues, wiki, time tracking, etc.)
- Set up roles and permissions
5. **Configure email notifications:**
- Administration → Settings → Email notifications
- Verify SMTP settings are working
- Set default email preferences
- Test email delivery
6. **Configure issue tracking:**
- Administration → Trackers (Bug, Feature, Support, etc.)
- Administration → Issue statuses
- Administration → Workflows
## Integration with Proxy
The central proxy at 141.56.51.1 handles:
- **SNI routing**: Routes HTTPS traffic for pro.htw.stura-dresden.de
- **HTTP routing**: Routes HTTP traffic and redirects to HTTPS
- **ACME challenges**: Forwards certificate verification requests
This host manages its own ACME certificates. Nginx handles TLS termination.
## Troubleshooting
### SMTP connection issues
If email notifications are not being sent:
```bash
# Check Redmine email configuration
grep -A 10 email_delivery /var/lib/redmine/config/configuration.yml
# Test SMTP connectivity
telnet mail.htw.stura-dresden.de 25
# View Redmine logs
tail -f /var/lib/redmine/log/production.log
# Check mail queue (if using local sendmail)
mailq
```
**Solution**: Verify the SMTP relay (mail.htw.stura-dresden.de) is reachable and accepting connections on port 25.
### ImageMagick/Ghostscript paths
If image processing or PDF thumbnails fail:
```bash
# Check ImageMagick installation
which convert
/run/current-system/sw/bin/convert --version
# Check Ghostscript installation
which gs
/run/current-system/sw/bin/gs --version
# Test image conversion
/run/current-system/sw/bin/convert test.png -resize 100x100 output.png
# View Redmine logs for image processing errors
grep -i imagemagick /var/lib/redmine/log/production.log
```
**Solution**: ImageMagick and Ghostscript are enabled via NixOS config. Paths are automatically configured.
### Database migration failures
If Redmine fails to start after an update:
```bash
# Check Redmine service status
systemctl status redmine
# View Redmine logs
journalctl -u redmine -f
# Manually run database migrations (if needed)
cd /var/lib/redmine
sudo -u redmine bundle exec rake db:migrate RAILS_ENV=production
# Check database schema version
sudo -u redmine bundle exec rake db:version RAILS_ENV=production
```
**Solution**: Auto-upgrade is enabled, but migrations can sometimes fail. Check logs for specific errors.
### Nginx proxy configuration
If the web interface is unreachable:
```bash
# Check Nginx configuration
nginx -t
# Check Nginx status
systemctl status nginx
# View Nginx error logs
journalctl -u nginx -f
# Test local Redmine connection
curl http://127.0.0.1:3000
```
**Solution**: Verify Nginx is proxying correctly to localhost:3000 and that Redmine is running.
### Redmine service not starting
If Redmine fails to start:
```bash
# Check service status
systemctl status redmine
# View detailed logs
journalctl -u redmine -n 100
# Check database file permissions
ls -l /var/lib/redmine/db/
# Check configuration
ls -l /var/lib/redmine/config/
# Try starting manually
cd /var/lib/redmine
sudo -u redmine bundle exec rails server -e production
```
**Solution**: Check logs for specific errors. Common issues include database permissions, missing gems, or configuration errors.
### ACME certificate issues
If HTTPS is not working:
```bash
# Check ACME certificate status
systemctl status acme-pro.htw.stura-dresden.de
# View ACME logs
journalctl -u acme-pro.htw.stura-dresden.de -f
# Check certificate files
ls -l /var/lib/acme/pro.htw.stura-dresden.de/
# Manually trigger renewal
systemctl start acme-pro.htw.stura-dresden.de
```
**Solution**: Ensure DNS points to proxy (141.56.51.1) and the proxy forwards ACME challenges to this host.
## Files and Directories
- **Redmine home**: `/var/lib/redmine/`
- **Configuration**: `/var/lib/redmine/config/`
- `configuration.yml` - Email and general settings
- `database.yml` - Database configuration
- **Logs**: `/var/lib/redmine/log/production.log`
- **Database**: `/var/lib/redmine/db/` (SQLite)
- **Files/attachments**: `/var/lib/redmine/files/`
- **Plugins**: `/var/lib/redmine/plugins/`
- **Themes**: `/var/lib/redmine/public/themes/`
## Network
- **Interface**: eth0 (LXC container)
- **IP**: 141.56.51.15/24
- **Gateway**: 141.56.51.254
- **Firewall**: Ports 22, 80, 443 allowed
## Configuration Details
- **Redmine version**: Latest from NixOS 25.11
- **Database**: SQLite (default)
- **Web server**: Nginx (reverse proxy)
- **Application server**: Puma (default Rails server)
- **Ruby version**: Determined by NixOS Redmine package
- **SMTP**: mail.htw.stura-dresden.de:25
- **ImageMagick**: Enabled (minimagick)
- **Ghostscript**: Enabled (PDF support)
- **Font**: Liberation Sans Regular
## Automatic Maintenance
- **Auto-upgrade**: Enabled (system automatically updates)
- **Auto-reboot**: Allowed (system may reboot for updates)
- **Store optimization**: Automatic
- **Garbage collection**: Automatic (delete older than 42 days)
## Useful Commands
```bash
# Access Redmine console
cd /var/lib/redmine
sudo -u redmine bundle exec rails console -e production
# Run rake tasks
sudo -u redmine bundle exec rake <task> RAILS_ENV=production
# Database backup
sudo -u redmine cp /var/lib/redmine/db/production.sqlite3 /backup/redmine-$(date +%Y%m%d).sqlite3
# View running processes
ps aux | grep redmine
# Restart Redmine
systemctl restart redmine
```
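A note on the backup command above: `cp` on a live SQLite file can capture a mid-write state. A safer sketch uses sqlite3's online `.backup` command (assumes the `sqlite3` CLI is available; the default paths mirror the command above):

```bash
# Online backup of a (possibly live) SQLite database via sqlite3's
# .backup command, which takes a consistent snapshot.
backup_redmine_db() {
  src="${1:-/var/lib/redmine/db/production.sqlite3}"
  dst="${2:-/backup/redmine-$(date +%Y%m%d).sqlite3}"
  sqlite3 "$src" ".backup '$dst'"
}
```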
## See Also
- [Main README](../../README.md) - Deployment methods and architecture
- [Proxy README](../proxy/README.md) - How the central proxy routes traffic
- [Redmine Documentation](https://www.redmine.org/projects/redmine/wiki/Guide)
- [Redmine Administration Guide](https://www.redmine.org/projects/redmine/wiki/RedmineAdministration)
- [NixOS Redmine Options](https://search.nixos.org/options?query=services.redmine)


@ -41,7 +41,6 @@
networking = {
enableIPv6 = false;
hostName = "pro";
domain = lib.mkForce "stura.htw-dresden.de";
firewall.allowedTCPPorts = [ 22 80 443 ];
interfaces.eth0.ipv4.addresses = [
{
@ -147,14 +146,6 @@
forceSSL = true;
enableACME = true;
};
services.nginx.virtualHosts."pro.htw.stura-dresden.de" = {
#### https://search.nixos.org/options?show=services.nginx.virtualHosts.<name>.default
locations."/" = {
proxyPass = "http://127.0.0.1:${toString config.services.redmine.port}";
};
forceSSL = true;
enableACME = true;
};
############################
#### Problems

hosts/v6proxy/README.md Normal file

@ -0,0 +1,344 @@
# v6proxy Host - IPv6 Gateway
IPv6 gateway proxy hosted on Hetzner VPS, forwarding all IPv6 traffic to the main IPv4 proxy.
## Overview
- **Hostname**: v6proxy
- **IPv4 Address**: 178.104.18.93/32
- **IPv6 Address**: 2a01:4f8:1c19:96f8::1/64
- **Type**: Hetzner VPS (Full VM)
- **Services**: HAProxy (IPv6 to IPv4 forwarding)
- **Role**: IPv6 gateway for StuRa HTW Dresden infrastructure
## Architecture
The v6proxy serves as an IPv6 gateway because the main infrastructure at 141.56.51.0/24 is IPv4-only:
- All IPv6 DNS records (AAAA) point to this host (2a01:4f8:1c19:96f8::1)
- HAProxy forwards all IPv6 HTTP/HTTPS traffic to the main proxy at 141.56.51.1
- The main proxy then handles routing to backend services
- Simple pass-through configuration - no SNI inspection or routing logic
## Why This Host Exists
The HTW Dresden network (141.56.51.0/24) does not have native IPv6 connectivity. This Hetzner VPS provides:
1. **IPv6 connectivity**: Public IPv6 address for all services
2. **Transparent forwarding**: All traffic is forwarded to the IPv4 proxy
3. **No maintenance overhead**: Simple configuration, no routing logic
This allows all StuRa services to be accessible via IPv6 without requiring IPv6 support in the HTW network.
## Services
### HAProxy
HAProxy runs in simple TCP forwarding mode:
**HTTP (Port 80)**
- Binds to IPv6 `:::80`
- Forwards all traffic to `141.56.51.1:80`
- No HTTP inspection or routing
**HTTPS (Port 443)**
- Binds to IPv6 `:::443`
- Forwards all traffic to `141.56.51.1:443`
- No SNI inspection or TLS termination
**Key features:**
- Logging: All connections logged to systemd journal
- Stats page: Available at http://127.0.0.1:8404/stats (localhost only)
- Max connections: 50,000
- Buffer size: 32,762 bytes
- Timeouts: 5s connect, 30s client/server
### Configuration Philosophy
Unlike the main proxy at 141.56.51.1, this proxy is **intentionally simple**:
- No SNI inspection
- No HTTP host header routing
- No ACME challenge handling
- Just pure TCP forwarding
All routing logic happens at the main IPv4 proxy. This keeps the v6proxy configuration minimal and reduces maintenance burden.
## Deployment Type
The v6proxy is a **Hetzner VPS** (full VM):
- Hosted outside HTW network
- Uses Hetzner-specific disk layout (hetzner-disk.nix)
- Network interface: `eth0`
- Both IPv4 and IPv6 connectivity
## Network Configuration
### IPv4
- **Address**: 178.104.18.93/32
- **Gateway**: 172.31.1.1
- **Interface**: eth0
### IPv6
- **Address**: 2a01:4f8:1c19:96f8::1/64
- **Gateway**: fe80::1 (link-local)
- **Route**: Default route via fe80::1
- **Interface**: eth0
### DNS
- **Nameservers**: 9.9.9.9, 1.1.1.1 (Quad9 and Cloudflare)
- Uses public DNS servers (not HTW internal DNS)
### Firewall
- **Firewall**: nftables enabled
- **Open ports**: 22, 80, 443
## DNS Configuration
For IPv6 support, configure AAAA records pointing to this host:
```
proxy.htw.stura-dresden.de AAAA 2a01:4f8:1c19:96f8::1
*.htw.stura-dresden.de CNAME proxy.htw.stura-dresden.de
```
This provides IPv6 access to all services while IPv4 traffic continues to use the main proxy (141.56.51.1).
## Deployment
See the [main README](../../README.md) for deployment methods.
### Initial Installation
**Using nixos-anywhere (recommended):**
```bash
nix run github:nix-community/nixos-anywhere -- --flake .#v6proxy --target-host root@178.104.18.93
```
This handles disk partitioning via disko automatically.
### Updates
```bash
# From local machine
nixos-rebuild switch --flake .#v6proxy --target-host root@178.104.18.93
# Or use auto-generated script
nix run .#v6proxy-update
```
## Disko Configuration
The v6proxy uses the Hetzner-specific disk layout (hetzner-disk.nix):
- **Filesystem**: Btrfs or Ext4 (Hetzner default)
- Declarative disk management via disko
- Automatic partitioning on installation
## Traffic Flow
IPv6 request flow:
1. Client connects to `2a01:4f8:1c19:96f8::1` (v6proxy)
2. v6proxy forwards to `141.56.51.1:443` (main IPv4 proxy)
3. Main proxy performs SNI inspection and routes to backend
4. Backend responds through the chain
To the client, the IPv6 connectivity is transparent; nothing reveals that the backend infrastructure is IPv4-only.
## HAProxy Configuration
The HAProxy configuration is minimal:
```haproxy
frontend http-in
bind :::80
use_backend http_80
frontend sni_router
bind :::443
mode tcp
use_backend http_443
backend http_80
mode http
server proxy 141.56.51.1:80
backend http_443
mode tcp
server proxy 141.56.51.1:443
```
This is intentionally simple - all routing intelligence is at 141.56.51.1.
## HAProxy Stats
Access HAProxy statistics page (localhost only):
```bash
# SSH into v6proxy
ssh root@178.104.18.93
# Access stats via curl
curl http://127.0.0.1:8404/stats
# Or forward port to your local machine
ssh -L 8404:127.0.0.1:8404 root@178.104.18.93
# Then browse to http://localhost:8404/stats
```
The stats page shows:
- Current connections to main proxy backend
- Traffic statistics
- Connection status
## Monitoring
### Check HAProxy Status
```bash
# HAProxy service status
systemctl status haproxy
# View HAProxy logs
journalctl -u haproxy -f
# Check configuration
haproxy -c -f /etc/haproxy/haproxy.cfg
```
### Test Connectivity
```bash
# Test IPv6 HTTP forwarding
curl -6 -v http://[2a01:4f8:1c19:96f8::1]/
# Test IPv6 HTTPS forwarding
curl -6 -vk https://[2a01:4f8:1c19:96f8::1]/
# Test backend connectivity (IPv4 to main proxy)
curl -v http://141.56.51.1/
curl -vk https://141.56.51.1/
# Check IPv6 routing
ip -6 route show
ping6 2a01:4f8:1c19:96f8::1
```
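One easy mistake in the commands above is dropping the brackets around the IPv6 literal; curl requires them in URLs. A tiny helper (hypothetical, for illustration) that builds the bracketed URL:

```bash
# Wrap an IPv6 literal in brackets to form a valid URL for curl.
v6url() { printf 'https://[%s]/' "$1"; }

v6url 2a01:4f8:1c19:96f8::1   # -> https://[2a01:4f8:1c19:96f8::1]/
```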
## Troubleshooting
### HAProxy not starting
```bash
# Check HAProxy status
systemctl status haproxy
# Check configuration syntax
haproxy -c -f /etc/haproxy/haproxy.cfg
# View HAProxy logs
journalctl -u haproxy -f
```
### IPv6 connectivity issues
```bash
# Verify IPv6 address is configured
ip -6 addr show eth0
# Check IPv6 routing
ip -6 route show
# Test IPv6 connectivity
ping6 2606:4700:4700::1111 # Cloudflare DNS
# Check IPv6 firewall rules
nft list ruleset | grep ip6
```
### Backend (main proxy) unreachable
```bash
# Test IPv4 connectivity to main proxy
ping 141.56.51.1
curl -v http://141.56.51.1/
curl -vk https://141.56.51.1/
# Check HAProxy backend status
curl http://127.0.0.1:8404/stats | grep proxy
# View connection errors
journalctl -u haproxy | grep -i error
```
### DNS not resolving
```bash
# Check nameserver configuration
cat /etc/resolv.conf
# Test DNS resolution
dig git.adm.htw.stura-dresden.de A
dig git.adm.htw.stura-dresden.de AAAA
# Test with specific nameserver
dig @9.9.9.9 git.adm.htw.stura-dresden.de
```
## Security Considerations
- **No TLS termination**: Traffic is passed through encrypted to main proxy
- **No deep packet inspection**: Simple TCP forwarding only
- **Minimal attack surface**: No routing logic or service-specific configuration
- **Public IPv6 address**: Exposed to the internet, firewall must be properly configured
## Performance Considerations
- **Additional hop**: IPv6 traffic goes through an extra proxy hop
- **Latency**: Hetzner → HTW network adds some latency
- **Bandwidth**: Hetzner provides high bandwidth, unlikely to be a bottleneck
- **Connection limits**: HAProxy configured for 50,000 concurrent connections
For most use cases, the additional latency is negligible (typically <10ms within Germany).
## Cost and Hosting
- **Provider**: Hetzner Cloud
- **Type**: VPS (Virtual Private Server)
- **Location**: Germany, Nürnberg (Falkenstein was full)
- **Cost**: Minimal - basic VPS tier sufficient for forwarding traffic
## Future Improvements
Possible improvements (not currently needed):
1. **Native IPv6 at HTW**: If HTW network gains IPv6, this proxy can be decommissioned
2. **GeoDNS**: Use GeoDNS to route IPv4 and IPv6 separately
3. **Monitoring**: Add automated monitoring and alerting
4. **Failover**: Add a second IPv6 proxy for redundancy
## Files and Directories
- **HAProxy config**: `/etc/haproxy/haproxy.cfg` (generated by Nix)
- **Disk config**: `./hetzner-disk.nix` (disko configuration)
- **Hardware config**: `./hardware-configuration.nix` (Hetzner VPS hardware)
- **NixOS config**: `./default.nix` (v6proxy configuration)
## Relationship to Main Proxy
| Feature | v6proxy (IPv6 Gateway) | proxy (Main Proxy) |
|---------|------------------------|-------------------|
| IP Version | IPv4 + IPv6 | IPv4 only |
| Location | Hetzner Cloud | HTW network (141.56.51.0/24) |
| Function | Simple forwarding | SNI routing, service routing |
| Complexity | Minimal | Complex routing logic |
| HAProxy Mode | TCP forwarding | TCP + HTTP with SNI inspection |
| TLS Handling | Pass-through | Pass-through (SNI inspection) |
| ACME Handling | None | Forwards challenges to backends |
The v6proxy is intentionally minimal - all intelligence lives at the main proxy.
## See Also
- [Main README](../../README.md) - Deployment methods and architecture
- [Proxy README](../proxy/README.md) - Main IPv4 proxy configuration
- [HAProxy Documentation](http://www.haproxy.org/#docs)
- [Hetzner Cloud Docs](https://docs.hetzner.com/cloud/)

hosts/v6proxy/default.nix Normal file

@ -0,0 +1,109 @@
{
self,
config,
lib,
pkgs,
...
}:
{
imports = [
./hardware-configuration.nix
./hetzner-disk.nix
];
networking = {
hostName = "v6proxy";
interfaces.eth0 = {
ipv4.addresses = [
{
address = "178.104.18.93";
prefixLength = 32;
}
];
ipv6 = {
addresses = [
{
address = "2a01:4f8:1c19:96f8::1";
prefixLength = 64;
}
];
routes = [
{ address = "::"; prefixLength = 0; via = "fe80::1"; }
];
};
};
defaultGateway.address = "172.31.1.1";
defaultGateway.interface = "eth0";
nameservers = [
"9.9.9.9"
"1.1.1.1"
];
firewall = {
allowedTCPPorts = [
22
80
443
];
};
nftables = {
enable = true;
};
};
# once the instances are migrated into the flake, this could be auto-generated
services = {
haproxy = {
enable = true;
config = ''
global
# write the global log to the journal, ip -> app
log /dev/log format raw local0
maxconn 50000
# metrics could be exposed via a socket file instead of a local port, for user permission control
# stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
tune.bufsize 32762
defaults
log global
mode tcp
option tcplog
timeout connect 5s
timeout client 30s
timeout server 30s
# the stats page shows backend connection status when 'check' is set
frontend stats
bind 127.0.0.1:8404
mode http
stats enable
stats uri /stats
stats refresh 10s
stats show-legends
stats show-node
stats show-modules
frontend http-in
bind :::80
use_backend http_80
frontend sni_router
bind :::443
mode tcp
use_backend http_443
backend http_80
mode http
server proxy 141.56.51.1:80
backend http_443
mode tcp
server proxy 141.56.51.1:443
'';
};
};
environment.systemPackages = with pkgs; [
];
system.stateVersion = "25.11";
}


@ -0,0 +1,38 @@
# Do not modify this file! It was generated by nixos-generate-config
# and may be overwritten by future invocations. Please make changes
# to /etc/nixos/configuration.nix instead.
{
config,
lib,
pkgs,
modulesPath,
...
}:
{
imports = [
(modulesPath + "/profiles/qemu-guest.nix")
];
boot.initrd.availableKernelModules = [
"ata_piix"
"uhci_hcd"
"virtio_pci"
"virtio_scsi"
"sd_mod"
"sr_mod"
];
boot.initrd.kernelModules = [ ];
boot.kernelModules = [ ];
boot.extraModulePackages = [ ];
# fileSystems."/" =
# {
# device = "/dev/sda1";
# fsType = "ext4";
# };
# swapDevices = [ ];
nixpkgs.hostPlatform = lib.mkDefault "x86_64-linux";
}


@ -0,0 +1,56 @@
{
disko.devices = {
disk = {
main = {
type = "disk";
device = "/dev/sda";
content = {
type = "gpt";
partitions = {
boot = {
size = "1M";
type = "EF02"; # for grub MBR
};
ESP = {
priority = 1;
name = "ESP";
start = "1M";
end = "512M";
type = "EF00";
content = {
type = "filesystem";
format = "vfat";
mountpoint = "/boot";
mountOptions = [ "umask=0077" ];
};
};
root = {
size = "100%";
content = {
type = "btrfs";
extraArgs = [ "-f" ]; # Override existing partition
subvolumes = {
"/rootfs" = {
mountpoint = "/";
};
"/home" = {
mountOptions = [ "compress=zstd" ];
mountpoint = "/home";
};
"/nix" = {
mountOptions = [
"compress=zstd"
"noatime"
];
mountpoint = "/nix";
};
};
};
};
};
};
};
};
};
}

hosts/wiki/README.md Normal file

@ -0,0 +1,297 @@
# Wiki Host - MediaWiki
MediaWiki instance at 141.56.51.13 running in an LXC container.
## Overview
- **Hostname**: wiki
- **FQDN**: wiki.htw.stura-dresden.de
- **IP Address**: 141.56.51.13
- **Type**: Proxmox LXC Container
- **Services**: MediaWiki, MariaDB, Apache httpd, PHP-FPM
## Services
### MediaWiki
The StuRa HTW Dresden wiki runs MediaWiki with extensive customization:
- **Name**: Wiki StuRa HTW Dresden
- **Language**: German (de)
- **Default skin**: Vector (classic)
- **Session timeout**: 3 hours (10800 seconds)
- **ImageMagick**: Enabled for image processing
- **Instant Commons**: Enabled (access to Wikimedia Commons images)
### Custom Namespaces
The wiki defines several custom namespaces for organizational purposes:
| Namespace | ID | Purpose |
|-----------|-----|---------|
| StuRa | 100 | Standard StuRa content |
| Intern | 102 | Internal (non-public) StuRa content |
| Admin | 104 | Administrative wiki content |
| Person | 106 | Individual person pages (non-public) |
| Faranto | 108 | Faranto e.V. content |
| ET | 212 | ET Fachschaft content |
| ET_intern | 412 | ET internal content |
| LaUCh | 216 | LaUCh Fachschaft content |
| LaUCh_intern | 416 | LaUCh internal content |
Each namespace has a corresponding discussion namespace (odd-numbered ID).
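The ID convention can be checked mechanically: MediaWiki pairs each content namespace N with talk namespace N+1, which is why every talk ID here is odd. A throwaway helper (hypothetical, for illustration):

```bash
# MediaWiki convention: talk-namespace ID = content-namespace ID + 1.
talk_ns() { echo $(( $1 + 1 )); }

talk_ns 100   # StuRa     -> 101
talk_ns 412   # ET_intern -> 413
```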
### User Groups and Permissions
**Custom user groups:**
- **intern**: Access to Intern and Person namespaces
- **ET**: Access to ET_intern namespace
- **LUC**: Access to LaUCh_intern namespace
These groups have the same base permissions as standard users (move pages, edit, upload, etc.) plus access to their respective restricted namespaces.
### Spam Prevention
**QuestyCaptcha** is configured to prevent automated spam:
- Challenges users with questions about HTW and StuRa
- Triggered on: edit, create, createtalk, addurl, createaccount, badlogin
- Questions are specific to local knowledge (e.g., "Welche Anzahl an Referaten hat unser StuRa geschaffen?", roughly: "How many Referate has our StuRa created?")
### Extensions
The following extensions are installed:
- **Lockdown**: Restricts namespace access by user group
- **ContributionScores**: Statistics of contributions by user
- **UserMerge**: Merge and delete user accounts (for spam cleanup)
- **Interwiki**: Use interwiki links (e.g., Wikipedia references)
- **Cite**: Reference system (footnotes)
- **ConfirmEdit/QuestyCaptcha**: CAPTCHA challenges
## Deployment
See the [main README](../../README.md) for deployment methods.
### Initial Installation
**Using nixos-anywhere:**
```bash
nix run github:nix-community/nixos-anywhere -- --flake .#wiki --target-host root@141.56.51.13
```
**Using container tarball:**
```bash
nix build .#containers-wiki
scp result/tarball/nixos-system-x86_64-linux.tar.xz root@proxmox-host:/var/lib/vz/template/cache/
pct create 113 /var/lib/vz/template/cache/nixos-system-x86_64-linux.tar.xz \
--hostname wiki \
--net0 name=eth0,bridge=vmbr0,ip=141.56.51.13/24,gw=141.56.51.254 \
--memory 2048 \
--cores 2 \
--rootfs local-lvm:10 \
--unprivileged 1 \
--features nesting=1
pct start 113
```
### Updates
```bash
# From local machine
nixos-rebuild switch --flake .#wiki --target-host root@141.56.51.13
# Or use auto-generated script
nix run .#wiki-update
```
## Post-Deployment Steps
After deploying for the first time:
1. **Set admin password:**
```bash
echo "your-secure-password" > /var/lib/mediawiki/mediawiki-password
chmod 600 /var/lib/mediawiki/mediawiki-password
```
2. **Set database password:**
```bash
echo "your-db-password" > /var/lib/mediawiki/mediawiki-dbpassword
chmod 600 /var/lib/mediawiki/mediawiki-dbpassword
```
3. **Access the web interface:**
```
https://wiki.htw.stura-dresden.de
```
4. **Complete initial setup:**
- Log in with admin credentials
- Configure additional settings via Special:Version
- Set up main page
5. **Configure namespace permissions:**
- Add users to `intern`, `ET`, or `LUC` groups via Special:UserRights
- Verify namespace restrictions work correctly
- Test that non-members cannot access restricted namespaces
6. **Add users to appropriate groups:**
- Navigate to Special:UserRights
- Select user
- Add to: intern, ET, LUC, sysop, bureaucrat (as needed)
7. **Upload logo and favicon** (optional):
- Place files in `/var/lib/mediawiki/images/`
- Files: `logo.png`, `logo.svg`, `favicon.png`
## Integration with Proxy
The central proxy at 141.56.51.1 handles:
- **SNI routing**: Routes HTTPS traffic for wiki.htw.stura-dresden.de
- **HTTP routing**: Routes HTTP traffic and redirects to HTTPS
- **ACME challenges**: Forwards certificate verification requests
This host manages its own ACME certificates. Apache httpd handles TLS termination.
## Troubleshooting
### Locale warnings
When accessing the container with `pct enter`, you may see:
```
sh: warning: setlocale: LC_CTYPE: cannot change locale (en_US.UTF-8): No such file or directory
sh: warning: setlocale: LC_COLLATE: cannot change locale (en_US.UTF-8): No such file or directory
```
**This is a known issue and can be safely ignored.** It only affects the interactive shell environment, not the running services. Regular SSH access provides a proper shell with correct locale settings.
### Database connection issues
If MediaWiki cannot connect to the database:
```bash
# Check MariaDB status
systemctl status mysql
# Check database exists
mysql -u root -e "SHOW DATABASES;"
# Check user permissions
mysql -u root -e "SHOW GRANTS FOR 'mediawiki'@'localhost';"
# View MediaWiki logs
journalctl -u mediawiki -f
```
**Solution**: Ensure the database password in `/var/lib/mediawiki/mediawiki-dbpassword` matches the database user password.
### Extension loading problems
If extensions are not working:
```bash
# Check extension files exist
ls -l /nix/store/*-mediawiki-extensions/
# View PHP errors
tail -f /var/log/httpd/error_log
# Test MediaWiki configuration
php /var/lib/mediawiki/maintenance/checkSetup.php
```
**Solution**: Verify extensions are properly defined in the configuration and compatible with the MediaWiki version.
### ImageMagick configuration
If image uploads or thumbnails fail:
```bash
# Check ImageMagick installation
which convert
/run/current-system/sw/bin/convert --version
# Test image conversion
/run/current-system/sw/bin/convert input.png -resize 100x100 output.png
# Check MediaWiki image directory permissions
ls -ld /var/lib/mediawiki/images/
```
**Solution**: Ensure ImageMagick path is set correctly (`$wgImageMagickConvertCommand`) and the images directory is writable.
### Namespace permission issues
If users can access restricted namespaces:
```bash
# Check Lockdown extension is loaded
grep -i lockdown /var/lib/mediawiki/LocalSettings.php
# Verify user group membership
# Log in as admin and check Special:UserRights
# Check namespace permission configuration
grep -A 5 "wgNamespacePermissionLockdown" /var/lib/mediawiki/LocalSettings.php
```
**Solution**: Verify the Lockdown extension is installed and `$wgNamespacePermissionLockdown` is configured correctly for each restricted namespace.
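For reference, a Lockdown rule restricting a namespace looks roughly like this (hypothetical values; the real settings live in the Nix-generated LocalSettings.php):

```php
// Hypothetical: only members of 'intern' (and sysops) may read the
// Intern namespace (ID 102) and its talk namespace (103).
$wgNamespacePermissionLockdown[102]['read'] = [ 'intern', 'sysop' ];
$wgNamespacePermissionLockdown[103]['read'] = [ 'intern', 'sysop' ];
```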
### ACME certificate issues
If HTTPS is not working:
```bash
# Check ACME certificate status
systemctl status acme-wiki.htw.stura-dresden.de
# View ACME logs
journalctl -u acme-wiki.htw.stura-dresden.de -f
# Check Apache HTTPS configuration
httpd -t -D DUMP_VHOSTS
```
**Solution**: Ensure DNS points to proxy (141.56.51.1) and the proxy forwards ACME challenges to this host.
## Files and Directories
- **MediaWiki data**: `/var/lib/mediawiki/`
- **Password file**: `/var/lib/mediawiki/mediawiki-password`
- **DB password file**: `/var/lib/mediawiki/mediawiki-dbpassword`
- **Images**: `/var/lib/mediawiki/images/`
- **LocalSettings**: `/var/lib/mediawiki/LocalSettings.php` (generated)
- **Extensions**: `/nix/store/.../mediawiki-extensions/`
- **Database**: MariaDB stores data in `/var/lib/mysql/`
## Network
- **Interface**: eth0 (LXC container)
- **IP**: 141.56.51.13/24
- **Gateway**: 141.56.51.254
- **Firewall**: Ports 80, 443 allowed
## Configuration Details
- **Time zone**: Europe/Berlin
- **Table prefix**: sturawiki
- **Emergency contact**: wiki@stura.htw-dresden.de
- **Password sender**: wiki@stura.htw-dresden.de
- **External images**: Allowed
- **File uploads**: Enabled
- **Email notifications**: Enabled (user talk, watchlist)
## Automatic Maintenance
- **Auto-upgrade**: Enabled (system automatically updates)
- **Auto-reboot**: Allowed (system may reboot for updates)
- **Store optimization**: Automatic
- **Garbage collection**: Automatic
## See Also
- [Main README](../../README.md) - Deployment methods and architecture
- [Proxy README](../proxy/README.md) - How the central proxy routes traffic
- [MediaWiki Documentation](https://www.mediawiki.org/wiki/Documentation)
- [NixOS MediaWiki Options](https://search.nixos.org/options?query=services.mediawiki)
- [Extension:Lockdown](https://www.mediawiki.org/wiki/Extension:Lockdown)
- [Extension:QuestyCaptcha](https://www.mediawiki.org/wiki/Extension:QuestyCaptcha)

keys/.gitignore vendored Normal file

@ -0,0 +1,18 @@
# Prevent accidental commit of private keys
*.key
*.priv
*.private
*_priv
*-priv
*.sec
*secret*
# Only allow public keys
!*.asc
!*.gpg
!*.pub
!*.age
# Allow this gitignore and README
!.gitignore
!README.md

keys/README.md Normal file

@ -0,0 +1,40 @@
# Keys Directory
This directory contains GPG/age public keys for sops encryption.
## Structure
- `hosts/` - Host-specific public keys (for servers to decrypt their own secrets)
- `users/` - User/admin public keys (for team members to decrypt secrets)
## Adding Keys
### GPG Keys
Export your GPG public key:
```bash
gpg --export --armor YOUR_KEY_ID > keys/users/yourname.asc
```
Export a host's public key:
```bash
gpg --export --armor HOST_KEY_ID > keys/hosts/hostname.asc
```
### Age Keys
For age keys, save the public key to a file:
```bash
echo "age1..." > keys/users/yourname.age
echo "age1..." > keys/hosts/hostname.age
```
## Usage
When you enter the dev shell (`nix develop`), all keys in these directories will be automatically imported into your GPG keyring via the sops-import-keys-hook.
## Important
- Only commit **public** keys (.asc, .age files with public keys)
- Never commit private keys
- Update `.sops.yaml` to reference the fingerprints/keys for access control
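A minimal `.sops.yaml` sketch wiring a user key and a host key to a secrets path (names and key strings are placeholders, not real keys):

```yaml
# Placeholder keys; replace with the real public keys from this directory.
keys:
  - &admin_alice age1qqqexampleplaceholderkey000000000000000000000000000000
  - &host_wiki age1qqqexampleplaceholderkey111111111111111111111111111111
creation_rules:
  - path_regex: secrets/wiki/.*
    key_groups:
      - age:
          - *admin_alice
          - *host_wiki
```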

keys/hosts/.gitkeep Normal file

keys/users/.gitkeep Normal file