readme docs

Commit 9466ab3656 by goeranh, 2026-03-13 16:59:54 +01:00 (parent 6e0d407b1c). 6 changed files with 1872 additions and 53 deletions.

README.md (changed)
# StuRa HTW Dresden Infrastructure - NixOS Configuration
Declarative infrastructure management for StuRa HTW Dresden using NixOS and a flake-based configuration. This repository replaces the hand-configured FreeBSD relay system with a modern, reproducible infrastructure.
## Architecture
### Overview
This infrastructure uses a flake-based approach with automatic host discovery:
- **Centralized reverse proxy**: HAProxy at 141.56.51.1 routes all traffic via SNI inspection and HTTP Host headers
- **Automatic host discovery**: Each subdirectory in `hosts/` becomes a NixOS configuration via `builtins.readDir`
- **Global configuration**: Settings in `default.nix` are automatically applied to all hosts
- **ACME certificates**: All services use Let's Encrypt certificates managed locally on each host
### Network
- **Network**: 141.56.51.0/24
- **Gateway**: 141.56.51.254
- **DNS**: 141.56.1.1, 141.56.1.2 (HTW internal)
- **Domain**: htw.stura-dresden.de
## Repository Structure
```
stura-infra/
├── flake.nix # Main flake configuration with auto-discovery
├── default.nix # Global settings applied to all hosts
├── hosts/ # Host-specific configurations
│ ├── proxy/ # Central reverse proxy (HAProxy)
│ │ ├── default.nix
│ │ ├── hardware-configuration.nix
│ │ ├── hetzner-disk.nix
│ │ └── README.md
│ ├── git/ # Forgejo git server
│ │ └── default.nix
│ ├── wiki/ # MediaWiki instance
│ │ └── default.nix
│ ├── nextcloud/ # Nextcloud instance
│ │ └── default.nix
│ └── redmine/ # Redmine project management
│ └── default.nix
└── README.md # This file
```
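The automatic host discovery described above can be sketched roughly as follows. This is a hypothetical reconstruction for illustration; the actual `flake.nix` in this repo may structure it differently:

```nix
# Sketch: one nixosConfiguration per subdirectory of ./hosts
{
  outputs = { self, nixpkgs, ... }:
    let
      hostNames = builtins.attrNames
        (nixpkgs.lib.filterAttrs (_: type: type == "directory")
          (builtins.readDir ./hosts));
    in {
      nixosConfigurations = nixpkgs.lib.genAttrs hostNames (name:
        nixpkgs.lib.nixosSystem {
          system = "x86_64-linux";
          modules = [
            ./default.nix              # global settings for every host
            ./hosts/${name}/default.nix
          ];
        });
    };
}
```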
## Host Overview
| Host | IP | Type | Services | Documentation |
|------|-----|------|----------|---------------|
| proxy | 141.56.51.1 | VM | HAProxy, SSH Jump | [hosts/proxy/README.md](hosts/proxy/README.md) |
| git | 141.56.51.7 | LXC | Forgejo, Nginx | [hosts/git/README.md](hosts/git/README.md) |
| wiki | 141.56.51.13 | LXC | MediaWiki, MariaDB, Apache | [hosts/wiki/README.md](hosts/wiki/README.md) |
| redmine | 141.56.51.15 | LXC | Redmine, Nginx | [hosts/redmine/README.md](hosts/redmine/README.md) |
| nextcloud | 141.56.51.16 | LXC | Nextcloud, PostgreSQL, Redis, Nginx | [hosts/nextcloud/README.md](hosts/nextcloud/README.md) |
## Deployment Methods
### Method 1: Initial Installation with nixos-anywhere (Recommended)
Use `nixos-anywhere` for initial system installation. This handles disk partitioning (via disko) and bootstrapping automatically.
**For VM hosts (proxy):**
```bash
nix run github:nix-community/nixos-anywhere -- --flake .#proxy --target-host root@141.56.51.1
```
**For LXC containers (git, wiki, redmine, nextcloud):**
```bash
nix run github:nix-community/nixos-anywhere -- --flake .#git --target-host root@141.56.51.7
```
This method is ideal for:
- First-time installation on bare metal or fresh VMs
- Complete system rebuilds
- Migration to new hardware
### Method 2: Container Tarball Deployment to Proxmox
Build and deploy LXC container tarballs for git, wiki, redmine, and nextcloud hosts.
**Step 1: Build container tarball locally**
```bash
nix build .#containers-git
# Result will be in result/tarball/nixos-system-x86_64-linux.tar.xz
``` ```
**Step 2: Copy to Proxmox host**
```bash
scp result/tarball/nixos-system-x86_64-linux.tar.xz root@proxmox-host:/var/lib/vz/template/cache/
```
**Step 3: Create container on Proxmox**
```bash
# Example for git host (container ID 107, adjust as needed)
pct create 107 /var/lib/vz/template/cache/nixos-system-x86_64-linux.tar.xz \
--hostname git \
--net0 name=eth0,bridge=vmbr0,ip=141.56.51.7/24,gw=141.56.51.254 \
--memory 2048 \
--cores 2 \
--rootfs local-lvm:8 \
--unprivileged 1 \
--features nesting=1
# Configure storage and settings via Proxmox web interface if needed
```
**Step 4: Start container**
```bash
pct start 107
```
**Step 5: Post-deployment configuration**
- Access container: `pct enter 107`
- Follow host-specific post-deployment steps in each host's README.md
**Available container tarballs:**
- `nix build .#containers-git`
- `nix build .#containers-wiki`
- `nix build .#containers-redmine`
- `nix build .#containers-nextcloud`
**Note**: The proxy host is a full VM and does not have a container tarball. Use Method 1 or 3 for proxy deployment.
### Method 3: ISO Installer
Build a bootable ISO installer for manual installation on VMs or bare metal.
**Build ISO:**
```bash
nix build .#installer-iso
# Result will be in result/iso/nixos-*.iso
```
**Build VM for testing:**
```bash
nix build .#installer-vm
```
**Deployment:**
1. Upload ISO to Proxmox storage
2. Create VM and attach ISO as boot device
3. Boot VM and follow installation prompts
4. Run installation commands manually
5. Reboot and remove ISO
### Method 4: Regular Updates
For already-deployed systems, apply configuration updates:
**Option A: Using nixos-rebuild from your local machine**
```bash
nixos-rebuild switch --flake .#<hostname> --target-host root@<ip>
```
Example:
```bash
nixos-rebuild switch --flake .#proxy --target-host root@141.56.51.1
```
**Note**: This requires an SSH config entry for the proxy (uses port 1005):
```
# ~/.ssh/config
Host 141.56.51.1
Port 1005
```
**Option B: Using auto-generated update scripts**
The flake generates convenience scripts for each host:
```bash
nix run .#git-update
nix run .#wiki-update
nix run .#redmine-update
nix run .#nextcloud-update
nix run .#proxy-update
```
These scripts automatically extract the target IP from the configuration.
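A hedged sketch of how such update scripts can be generated from the host configurations themselves (illustrative only; the real flake code may differ, and the proxy uses `ens18` rather than `eth0`):

```nix
# Sketch: one "<host>-update" app per nixosConfiguration, with the
# target IP read from that host's own networking config.
apps.x86_64-linux = nixpkgs.lib.mapAttrs' (name: host:
  let
    ip = (builtins.head
      host.config.networking.interfaces.eth0.ipv4.addresses).address;
  in nixpkgs.lib.nameValuePair "${name}-update" {
    type = "app";
    program = toString (pkgs.writeShellScript "${name}-update" ''
      exec nixos-rebuild switch --flake .#${name} --target-host root@${ip}
    '');
  }) self.nixosConfigurations;
```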
**Option C: Remote execution (no local Nix installation)**
If Nix isn't installed locally, run the command on the target system:
```bash
ssh root@141.56.51.1 "nixos-rebuild switch --flake git+https://codeberg.org/stura-htw-dresden/stura-infra#proxy"
```
Replace `proxy` with the appropriate hostname and adjust the IP address.
## Required DNS Records
The following DNS records must be configured for the current infrastructure:
| Name | Type | Target | Service |
|------|------|--------|---------|
| *.htw.stura-dresden.de | CNAME | proxy.htw.stura-dresden.de | Reverse proxy |
| proxy.htw.stura-dresden.de | A | 141.56.51.1 | Proxy IPv4 |
| proxy.htw.stura-dresden.de | AAAA | 2a01:4f8:1c19:96f8::1 | Proxy IPv6 |
**Note**: All public services point to the proxy IP (141.56.51.1). The proxy handles SNI-based routing to backend hosts. Backend IPs are internal and not exposed in DNS.
Additional services managed by the proxy (not in this repository):
- stura.htw-dresden.de → Plone
- tix.htw.stura-dresden.de → Pretix
- vot.htw.stura-dresden.de → OpenSlides
- mail.htw.stura-dresden.de → Mail server
## Development
### Code Formatting
Format all Nix files using the RFC-style formatter:
```bash
nix fmt
```
### Testing Changes
Before deploying to production:
1. Test flake evaluation: `nix flake check`
2. Build configurations locally: `nix build .#nixosConfigurations.<hostname>.config.system.build.toplevel`
3. Review generated configurations
4. Deploy to test systems first if available
### Adding a New Host
1. **Create host directory:**
```bash
mkdir hosts/newhostname
```
2. **Create `hosts/newhostname/default.nix`:**
```nix
{ config, lib, pkgs, modulesPath, ... }:
{
imports = [
"${modulesPath}/virtualisation/proxmox-lxc.nix" # For LXC containers
# Or for VMs:
# ./hardware-configuration.nix
];
networking = {
hostName = "newhostname";
interfaces.eth0.ipv4.addresses = [{ # or ens18 for VMs
address = "141.56.51.XXX";
prefixLength = 24;
}];
defaultGateway.address = "141.56.51.254";
firewall.allowedTCPPorts = [ 80 443 ];
};
# Add your services here
services.nginx.enable = true;
# ...
system.stateVersion = "25.11";
}
```
3. **The flake automatically discovers the new host** via `builtins.readDir ./hosts`
4. **If the host runs nginx**, the proxy automatically adds forwarding rules (you still need to add DNS records)
5. **Deploy:**
```bash
nix run github:nix-community/nixos-anywhere -- --flake .#newhostname --target-host root@141.56.51.XXX
```
## Repository Information
- **Repository**: https://codeberg.org/stura-htw-dresden/stura-infra
- **ACME Email**: cert@stura.htw-dresden.de
- **NixOS Version**: 25.11
- **Architecture**: x86_64-linux
## Flake Inputs
- `nixpkgs`: NixOS 25.11
- `authentik`: Identity provider (nix-community/authentik-nix)
- `mailserver`: Simple NixOS mailserver (nixos-25.11 branch)
- `sops`: Secret management (Mic92/sops-nix)
- `disko`: Declarative disk partitioning
## Common Patterns
### Network Configuration
All hosts follow this pattern:
```nix
networking = {
hostName = "<name>";
interfaces.<interface>.ipv4.addresses = [{
address = "<ip>";
prefixLength = 24;
}];
defaultGateway.address = "141.56.51.254";
};
```
- LXC containers use `eth0`
- VMs/bare metal typically use `ens18`
### Nginx + ACME Pattern
For web services:
```nix
services.nginx = {
enable = true;
virtualHosts."<fqdn>" = {
forceSSL = true;
enableACME = true;
locations."/" = {
# service config
};
};
};
```
This automatically:
- Integrates with the proxy's ACME challenge forwarding
- Generates HAProxy backend configuration
- Requests Let's Encrypt certificates
### Firewall Rules
Hosts only need to allow traffic from the proxy:
```nix
networking.firewall.allowedTCPPorts = [ 80 443 ];
```
SSH ports vary:
- Proxy: port 1005 (admin access)
- Other hosts: port 22 (default)
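The proxy's non-default SSH port corresponds to a configuration along these lines (assumed sketch; see `hosts/proxy/default.nix` for the real settings):

```nix
# Assumed: sshd listening on the admin port instead of 22.
services.openssh = {
  enable = true;
  ports = [ 1005 ];
};
```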

hosts/git/README.md (new file)
# Git Host - Forgejo
Forgejo git server at 141.56.51.7 running in an LXC container.
## Overview
- **Hostname**: git
- **FQDN**: git.adm.htw.stura-dresden.de
- **IP Address**: 141.56.51.7
- **Type**: Proxmox LXC Container
- **Services**: Forgejo, Nginx (reverse proxy), OpenSSH
## Services
### Forgejo
Forgejo is a self-hosted Git service (fork of Gitea) providing:
- Git repository hosting
- Web interface for repository management
- Issue tracking
- Pull requests
- OAuth2 integration support
**Configuration**:
- **Socket**: `/run/forgejo/forgejo.sock` (Unix socket)
- **Root URL**: https://git.adm.htw.stura-dresden.de
- **Protocol**: HTTP over Unix socket (Nginx handles TLS)
### Nginx
Nginx acts as a reverse proxy between the network and Forgejo:
- Receives HTTPS requests (TLS termination)
- Forwards to Forgejo via Unix socket
- Manages ACME/Let's Encrypt certificates
- WebSocket support enabled for live updates
### OAuth2 Auto-Registration
OAuth2 client auto-registration is enabled:
- `ENABLE_AUTO_REGISTRATION = true`
- `REGISTER_EMAIL_CONFIRM = false`
- Username field: email
This allows users to register automatically via OAuth2 providers without manual approval.
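In NixOS terms, the settings above map roughly to the following (hedged sketch; the actual `hosts/git/default.nix` may differ):

```nix
services.forgejo = {
  enable = true;
  settings = {
    server = {
      PROTOCOL = "http+unix";   # Nginx proxies to the Unix socket
      ROOT_URL = "https://git.adm.htw.stura-dresden.de/";
    };
    oauth2_client = {
      ENABLE_AUTO_REGISTRATION = true;
      REGISTER_EMAIL_CONFIRM = false;
      USERNAME = "email";
    };
  };
};
```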
## Deployment
See the [main README](../../README.md) for deployment methods.
### Initial Installation
**Using nixos-anywhere:**
```bash
nix run github:nix-community/nixos-anywhere -- --flake .#git --target-host root@141.56.51.7
```
**Using container tarball:**
```bash
nix build .#containers-git
scp result/tarball/nixos-system-x86_64-linux.tar.xz root@proxmox-host:/var/lib/vz/template/cache/
pct create 107 /var/lib/vz/template/cache/nixos-system-x86_64-linux.tar.xz \
--hostname git \
--net0 name=eth0,bridge=vmbr0,ip=141.56.51.7/24,gw=141.56.51.254 \
--memory 2048 \
--cores 2 \
--rootfs local-lvm:8 \
--unprivileged 1 \
--features nesting=1
pct start 107
```
### Updates
```bash
# From local machine
nixos-rebuild switch --flake .#git --target-host root@141.56.51.7
# Or use auto-generated script
nix run .#git-update
```
## Post-Deployment Steps
After deploying for the first time:
1. **Access the web interface:**
```
https://git.adm.htw.stura-dresden.de
```
2. **Complete initial setup:**
- Create the first admin account via web UI
- Configure any additional settings
- Set up SSH keys for git access
3. **Configure OAuth2 (optional):**
- If using an external identity provider (e.g., authentik)
- Add OAuth2 application in the provider
- Configure OAuth2 settings in Forgejo admin panel
- Auto-registration is already enabled in configuration
4. **Set up repositories:**
- Create organizations
- Create repositories
- Configure access permissions
## Integration with Proxy
The central proxy at 141.56.51.1 handles:
- **SNI routing**: Inspects TLS handshake and routes HTTPS traffic for git.adm.htw.stura-dresden.de
- **HTTP routing**: Routes HTTP traffic based on Host header
- **ACME challenges**: Forwards `/.well-known/acme-challenge/` requests to this host for Let's Encrypt verification
- **Auto-redirect**: Redirects HTTP to HTTPS (except ACME challenges)
This host handles its own TLS certificates via ACME. The proxy passes through encrypted traffic without decryption.
## Troubleshooting
### Forgejo socket permissions
If Forgejo fails to start or Nginx cannot connect:
```bash
# Check socket exists
ls -l /run/forgejo/forgejo.sock
# Check Forgejo service status
systemctl status forgejo
# Check Nginx service status
systemctl status nginx
# View Forgejo logs
journalctl -u forgejo -f
```
**Solution**: Ensure the Forgejo user has proper permissions and the socket path is correct in both Forgejo and Nginx configurations.
### Nginx proxy configuration
If the web interface is unreachable:
```bash
# Check Nginx configuration
nginx -t
# View Nginx error logs
journalctl -u nginx -f
# Test socket connection
curl --unix-socket /run/forgejo/forgejo.sock http://localhost/
```
**Solution**: Verify the `proxyPass` directive in Nginx configuration points to the correct Unix socket.
### SSH access issues
If git operations over SSH fail:
```bash
# Check SSH service
systemctl status sshd
# Test SSH connection
ssh -T git@git.adm.htw.stura-dresden.de
# Check Forgejo SSH settings
cat /var/lib/forgejo/custom/conf/app.ini | grep -A 5 "\[server\]"
```
**Solution**: Ensure SSH keys are properly added to user accounts and SSH daemon is running.
### ACME certificate issues
If HTTPS is not working:
```bash
# Check ACME certificate status
systemctl status acme-git.adm.htw.stura-dresden.de
# View ACME logs
journalctl -u acme-git.adm.htw.stura-dresden.de -f
# Manually trigger certificate renewal
systemctl start acme-git.adm.htw.stura-dresden.de
```
**Solution**: Verify DNS points to proxy (141.56.51.1) and proxy is forwarding ACME challenges correctly.
## Files and Directories
- **Configuration**: `/nix/store/.../forgejo/` (managed by Nix)
- **Data directory**: `/var/lib/forgejo/`
- **Custom config**: `/var/lib/forgejo/custom/conf/app.ini`
- **Repositories**: `/var/lib/forgejo/data/gitea-repositories/`
- **Socket**: `/run/forgejo/forgejo.sock`
## Network
- **Interface**: eth0 (LXC container)
- **IP**: 141.56.51.7/24
- **Gateway**: 141.56.51.254
- **Firewall**: Ports 22, 80, 443 allowed
## See Also
- [Main README](../../README.md) - Deployment methods and architecture
- [Proxy README](../proxy/README.md) - How the central proxy routes traffic
- [Forgejo Documentation](https://forgejo.org/docs/latest/)
- [NixOS Forgejo Options](https://search.nixos.org/options?query=services.forgejo)

hosts/nextcloud/README.md (new file)
# Nextcloud Host
Nextcloud 31 instance at 141.56.51.16 running in an LXC container.
## Overview
- **Hostname**: cloud
- **FQDN**: cloud.htw.stura-dresden.de
- **IP Address**: 141.56.51.16
- **Type**: Proxmox LXC Container
- **Services**: Nextcloud, PostgreSQL, Redis (caching + locking), Nginx, Nullmailer
## Services
### Nextcloud
Nextcloud 31 provides file hosting and collaboration:
- **Admin user**: administration
- **Max upload size**: 1GB
- **Database**: PostgreSQL (via Unix socket)
- **Caching**: Redis (via Unix socket)
- **Default phone region**: DE (Germany)
- **HTTPS**: Enabled via Nginx reverse proxy
- **Log level**: 4 (warnings and errors)
- **Maintenance window**: 4 AM (prevents maintenance during business hours)
**Pre-installed apps:**
- Calendar
- Deck (Kanban board)
- Tasks
- Notes
- Contacts
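The configuration summarized above corresponds roughly to this module (hedged sketch based on this README; not the literal `hosts/nextcloud/default.nix`):

```nix
services.nextcloud = {
  enable = true;
  package = pkgs.nextcloud31;
  hostName = "cloud.htw.stura-dresden.de";
  https = true;
  maxUploadSize = "1G";
  configureRedis = true;        # caching + locking via Unix sockets
  config = {
    dbtype = "pgsql";
    adminuser = "administration";
    adminpassFile = "/var/lib/nextcloud/adminpassFile";
  };
  settings = {
    default_phone_region = "DE";
    loglevel = 4;
    maintenance_window_start = 4;
  };
  extraApps = {
    inherit (config.services.nextcloud.package.packages.apps)
      calendar deck tasks notes contacts;
  };
};
```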
### PostgreSQL
Database backend for Nextcloud:
- **Database name**: nextcloud
- **User**: nextcloud
- **Connection**: Unix socket (`/run/postgresql`)
- **Privileges**: Full access to nextcloud database
### Redis
Two Redis instances for performance:
- **Cache**: General caching via `/run/redis-nextcloud/redis.sock`
- **Locking**: Distributed locking mechanism
- **Port**: 0 (Unix socket only)
- **User**: nextcloud
### Nginx
Reverse proxy with recommended settings:
- **Gzip compression**: Enabled
- **Optimization**: Enabled
- **Proxy settings**: Enabled
- **TLS**: Enabled with ACME certificates
- **Access logs**: Disabled (privacy)
- **Error logs**: Only emergency level (`/dev/null emerg`)
### Nullmailer
Simple mail relay for sending email notifications:
- **Relay host**: mail.stura.htw-dresden.de:25
- **From address**: files@stura.htw-dresden.de
- **HELO host**: cloud.htw.stura-dresden.de
- **Protocol**: SMTP (port 25, no auth)
Nextcloud uses Nullmailer's sendmail interface to send email notifications.
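A minimal sketch of this relay setup (assumed option usage; the real host config may differ):

```nix
services.nullmailer = {
  enable = true;
  config = {
    me = "cloud.htw.stura-dresden.de";                 # HELO host
    defaultdomain = "stura.htw-dresden.de";
    remotes = "mail.stura.htw-dresden.de smtp port=25";
  };
};
```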
## Deployment
See the [main README](../../README.md) for deployment methods.
### Initial Installation
**Using nixos-anywhere:**
```bash
nix run github:nix-community/nixos-anywhere -- --flake .#nextcloud --target-host root@141.56.51.16
```
**Using container tarball:**
```bash
nix build .#containers-nextcloud
scp result/tarball/nixos-system-x86_64-linux.tar.xz root@proxmox-host:/var/lib/vz/template/cache/
pct create 116 /var/lib/vz/template/cache/nixos-system-x86_64-linux.tar.xz \
--hostname cloud \
--net0 name=eth0,bridge=vmbr0,ip=141.56.51.16/24,gw=141.56.51.254 \
--memory 4096 \
--cores 4 \
--rootfs local-lvm:20 \
--unprivileged 1 \
--features nesting=1
pct start 116
```
**Note**: Nextcloud benefits from more resources (4GB RAM, 20GB disk recommended).
### Updates
```bash
# From local machine
nixos-rebuild switch --flake .#nextcloud --target-host root@141.56.51.16
# Or use auto-generated script
nix run .#nextcloud-update
```
## Post-Deployment Steps
After deploying for the first time:
1. **Set admin password:**
```bash
echo "your-secure-password" > /var/lib/nextcloud/adminpassFile
chmod 600 /var/lib/nextcloud/adminpassFile
chown nextcloud:nextcloud /var/lib/nextcloud/adminpassFile
```
2. **Access the web interface:**
```
https://cloud.htw.stura-dresden.de
```
3. **Complete initial setup:**
- Log in with admin credentials (user: administration)
- Review security & setup warnings
- Configure background jobs (cron is already configured via NixOS)
4. **Configure additional apps:**
- Navigate to Apps section
- Enable/disable apps as needed
- Pre-installed apps: Calendar, Deck, Tasks, Notes, Contacts
5. **Configure trusted domains** (if needed):
- Current trusted domains: cloud.htw.stura-dresden.de, www.cloud.htw.stura-dresden.de
- Edit via NixOS config if you need to add more domains
6. **Test email notifications** (optional):
- Navigate to Settings → Administration → Basic settings
- Send test email
- Verify email delivery through Nullmailer relay
7. **Configure user authentication:**
- Add users manually, or
- Configure LDAP/OAuth if using external identity provider
## Integration with Proxy
The central proxy at 141.56.51.1 handles:
- **SNI routing**: Routes HTTPS traffic for cloud.htw.stura-dresden.de
- **HTTP routing**: Routes HTTP traffic and redirects to HTTPS
- **ACME challenges**: Forwards certificate verification requests
This host manages its own ACME certificates. Nginx handles TLS termination.
## Troubleshooting
### Redis connection issues
If Nextcloud shows "Redis not available" errors:
```bash
# Check Redis status
systemctl status redis-nextcloud
# Check socket exists and permissions
ls -l /run/redis-nextcloud/redis.sock
# Test Redis connection
redis-cli -s /run/redis-nextcloud/redis.sock ping
# View Redis logs
journalctl -u redis-nextcloud -f
```
**Solution**: Ensure Redis is running and the nextcloud user has access to the socket.
### PostgreSQL permissions
If Nextcloud cannot connect to the database:
```bash
# Check PostgreSQL status
systemctl status postgresql
# Check database exists
sudo -u postgres psql -c "\l" | grep nextcloud
# Check user and permissions
sudo -u postgres psql -c "\du" | grep nextcloud
# Test connection as nextcloud user
sudo -u nextcloud psql -d nextcloud -c "SELECT version();"
# View PostgreSQL logs
journalctl -u postgresql -f
```
**Solution**: Ensure the nextcloud database and user exist with proper permissions.
### Upload size limits
If large file uploads fail:
```bash
# Check Nextcloud upload size setting
grep -i "upload" /var/lib/nextcloud/config/config.php
# Check PHP-FPM settings
systemctl status phpfpm-nextcloud
# View PHP error logs
tail -f /var/log/phpfpm-nextcloud.log
```
**Solution**: The max upload is set to 1GB via `maxUploadSize`. If you need larger files, modify the NixOS configuration.
### Opcache configuration
If PHP performance is poor:
```bash
# Check PHP opcache settings
php -i | grep opcache
# Check opcache status via Nextcloud admin panel
# Settings → Administration → Overview → PHP
# Restart PHP-FPM to clear cache
systemctl restart phpfpm-nextcloud
```
**Solution**: The opcache interned strings buffer is set to 32MB. If you see opcache errors, this may need adjustment.
### Mail relay issues
If email notifications are not being sent:
```bash
# Check Nullmailer status
systemctl status nullmailer
# Check mail queue
mailq
# View Nullmailer logs
journalctl -u nullmailer -f
# Test mail relay
echo "Test message" | mail -s "Test" user@example.com
# Check Nextcloud mail settings
sudo -u nextcloud php /var/lib/nextcloud/occ config:list | grep mail
```
**Solution**: Verify the mail relay host (mail.stura.htw-dresden.de) is reachable and accepting SMTP connections on port 25.
### ACME certificate issues
If HTTPS is not working:
```bash
# Check ACME certificate status
systemctl status acme-cloud.htw.stura-dresden.de
# View ACME logs
journalctl -u acme-cloud.htw.stura-dresden.de -f
# Check Nginx HTTPS configuration
nginx -t
# View Nginx error logs
journalctl -u nginx -f
```
**Solution**: Ensure DNS points to proxy (141.56.51.1) and the proxy forwards ACME challenges to this host.
### Maintenance mode stuck
If Nextcloud is stuck in maintenance mode:
```bash
# Disable maintenance mode
sudo -u nextcloud php /var/lib/nextcloud/occ maintenance:mode --off
# Check status
sudo -u nextcloud php /var/lib/nextcloud/occ status
# Run system check
sudo -u nextcloud php /var/lib/nextcloud/occ check
```
**Solution**: Maintenance mode is automatically disabled after updates, but can sometimes get stuck.
## Files and Directories
- **Nextcloud data**: `/var/lib/nextcloud/`
- **Admin password**: `/var/lib/nextcloud/adminpassFile`
- **Configuration**: `/var/lib/nextcloud/config/config.php`
- **Apps**: `/var/lib/nextcloud/apps/`
- **User files**: `/var/lib/nextcloud/data/`
- **PostgreSQL data**: `/var/lib/postgresql/`
- **Redis socket**: `/run/redis-nextcloud/redis.sock`
## Network
- **Interface**: eth0 (LXC container)
- **IP**: 141.56.51.16/24
- **Gateway**: 141.56.51.254
- **Firewall**: Ports 80, 443 allowed
## Configuration Details
- **Version**: Nextcloud 31
- **Database type**: PostgreSQL
- **Caching**: Redis (APCU disabled)
- **HTTPS**: Yes (enforced via forceSSL)
- **Trusted domains**:
- cloud.htw.stura-dresden.de
- www.cloud.htw.stura-dresden.de
- **PHP opcache**: Interned strings buffer 32MB
- **Maintenance window**: 4 AM (hour 4)
- **Log level**: 4 (warnings and errors)
## Useful Commands
```bash
# Run occ commands (Nextcloud CLI)
sudo -u nextcloud php /var/lib/nextcloud/occ <command>
# List all users
sudo -u nextcloud php /var/lib/nextcloud/occ user:list
# Scan files for changes
sudo -u nextcloud php /var/lib/nextcloud/occ files:scan --all
# Run background jobs
sudo -u nextcloud php /var/lib/nextcloud/occ background:cron
# Update apps
sudo -u nextcloud php /var/lib/nextcloud/occ app:update --all
# Check for Nextcloud updates
sudo -u nextcloud php /var/lib/nextcloud/occ update:check
```
## See Also
- [Main README](../../README.md) - Deployment methods and architecture
- [Proxy README](../proxy/README.md) - How the central proxy routes traffic
- [Nextcloud Documentation](https://docs.nextcloud.com/)
- [Nextcloud Admin Manual](https://docs.nextcloud.com/server/stable/admin_manual/)
- [NixOS Nextcloud Options](https://search.nixos.org/options?query=services.nextcloud)

hosts/proxy/README.md (changed)

# Proxy Host - Central Reverse Proxy
Central reverse proxy at 141.56.51.1 running as a full VM (not LXC container).
## Overview
- **Hostname**: proxy
- **IP Address**: 141.56.51.1
- **Type**: Full VM (not LXC)
- **Services**: HAProxy, OpenSSH (ports 1005, 2142)
- **Role**: Central traffic router for all StuRa HTW Dresden services
## Architecture
The proxy is the central entry point for all HTTP/HTTPS traffic:
- All public DNS records point to this IP (141.56.51.1)
- HAProxy performs SNI-based routing for HTTPS traffic
- HTTP Host header-based routing for unencrypted traffic
- Automatic redirect from HTTP to HTTPS (except ACME challenges)
- Backend services handle their own TLS certificates
## Services
### HAProxy
HAProxy routes traffic using two methods:
**1. HTTP Mode (Port 80)**
- Inspects the HTTP Host header
- Routes requests to appropriate backend
- Forwards ACME challenges (/.well-known/acme-challenge/) to backends
- Redirects all other HTTP traffic to HTTPS (301 redirect)
- Default backend serves an index page listing all services
**2. TCP Mode (Port 443)**
- SNI inspection during TLS handshake
- Routes encrypted traffic without decryption
- Backends terminate TLS and serve their own certificates
- 1-second inspection delay to buffer packets
**Key features:**
- Logging: All connections logged to systemd journal
- Stats page: Available at http://127.0.0.1:8404/stats (localhost only)
- Max connections: 50,000
- Buffer size: 32,762 bytes
- Timeouts: 5s connect, 30s client/server
### SSH Services
**Port 1005: Admin SSH Access**
- Primary SSH port for administrative access
- Configured in `services.openssh.listenAddresses`
- Used for system management and deployments
**Port 2142: SSH Jump to srs2**
- Forwards SSH connections to srs2 (141.56.51.2:80)
- 30-minute session timeout
- TCP keep-alive enabled
- Used for accessing legacy systems
### Auto-Generated Forwarding
The proxy configuration is **partially auto-generated**:
1. **Static forwards**: Manually defined in the `forwards` attribute set in default.nix
2. **Dynamic forwards**: Automatically generated from all NixOS configurations in this flake that have `services.nginx.enable = true`
When you deploy a new host with Nginx enabled, the proxy automatically:
- Detects the nginx virtualHosts
- Extracts the IP address from the host configuration
- Generates HAProxy backend rules for ports 80 and 443
**No manual proxy configuration needed for nginx-enabled hosts!**
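Conceptually, the dynamic forwards can be collected from the flake's own outputs like this (illustrative sketch, not the literal implementation):

```nix
# For every nginx-enabled host, pair its IP with its virtualHost names.
dynamicForwards = nixpkgs.lib.filterAttrs (_: v: v != null)
  (builtins.mapAttrs (name: host:
    if host.config.services.nginx.enable then {
      dest = (builtins.head
        host.config.networking.interfaces.eth0.ipv4.addresses).address;
      domains = builtins.attrNames host.config.services.nginx.virtualHosts;
    } else null)
    self.nixosConfigurations);
```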
## Deployment Type
Unlike other hosts in this infrastructure, the proxy is a **full VM** (not an LXC container):
- Has dedicated hardware-configuration.nix
- Uses disko for declarative disk management (hetzner-disk.nix)
- Network interface: `ens18` (not `eth0` like LXC containers)
- Requires more resources and dedicated storage
## Disko Configuration
The proxy uses Btrfs with the following layout (hetzner-disk.nix):
- **Filesystem**: Btrfs
- **Compression**: zstd
- **Subvolumes**:
- `/` - Root filesystem
- `/nix` - Nix store
- `/var` - Variable data
- `/home` - User home directories (if needed)
This provides better performance and snapshot capabilities compared to LXC containers.
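A hedged sketch of what such a disko layout looks like (the actual `hetzner-disk.nix` may differ in device name, partition layout, and subvolume naming):

```nix
disko.devices.disk.main = {
  type = "disk";
  device = "/dev/sda";                 # assumed device name
  content = {
    type = "gpt";
    partitions.root = {
      size = "100%";
      content = {
        type = "btrfs";
        subvolumes = {
          "/root" = { mountpoint = "/";     mountOptions = [ "compress=zstd" ]; };
          "/nix"  = { mountpoint = "/nix";  mountOptions = [ "compress=zstd" ]; };
          "/var"  = { mountpoint = "/var";  mountOptions = [ "compress=zstd" ]; };
          "/home" = { mountpoint = "/home"; mountOptions = [ "compress=zstd" ]; };
        };
      };
    };
  };
};
```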
## Deployment
See the [main README](../../README.md) for deployment methods.
### Initial Installation
**Using nixos-anywhere (recommended):**
```bash
nix run github:nix-community/nixos-anywhere -- --flake .#proxy --target-host root@141.56.51.1
```
This handles disk partitioning via disko automatically.
### Updates
**Note**: SSH runs on port 1005, so you need an SSH config entry:
```bash
# ~/.ssh/config
Host 141.56.51.1
Port 1005
```
Then deploy:
```bash
# From local machine
nixos-rebuild switch --flake .#proxy --target-host root@141.56.51.1
# Or use auto-generated script
nix run .#proxy-update
```
## Network Configuration
- **Interface**: ens18 (VM interface)
- **IP**: 141.56.51.1/24
- **Gateway**: 141.56.51.254
- **DNS**: 9.9.9.9, 1.1.1.1 (public DNS, not HTW internal)
- **Firewall**: nftables enabled
- **Open ports**: 22, 80, 443, 1005, 2142
## Adding New Services
### Method 1: Deploy a host with Nginx (Automatic)
The easiest method - just deploy a new host with nginx enabled:
```nix
# hosts/newservice/default.nix
services.nginx = {
enable = true;
virtualHosts."newservice.htw.stura-dresden.de" = {
forceSSL = true;
enableACME = true;
locations."/" = {
# your config
};
};
};
```
The flake automatically:
1. Discovers the nginx virtualHost
2. Extracts the IP address from networking configuration
3. Generates HAProxy forwarding rules
4. No manual proxy changes needed!
**You still need to:**
- Add DNS record pointing to 141.56.51.1
- Deploy the proxy to pick up changes: `nix run .#proxy-update`
### Method 2: Manual forwarding (for non-Nginx services)
For services not using Nginx, manually add to the `forwards` attribute set in hosts/proxy/default.nix:
```nix
forwards = {
# ... existing forwards ...
newservice = {
dest = "141.56.51.XXX";
domain = "newservice.htw.stura-dresden.de";
httpPort = 80;
httpsPort = 443;
};
};
```
Then deploy the proxy.
## SNI Routing Explained
**SNI (Server Name Indication)** is a TLS extension that includes the hostname in the ClientHello handshake.
How it works:
1. Client initiates TLS connection to 141.56.51.1
2. Client sends ClientHello with SNI field (e.g., "wiki.htw.stura-dresden.de")
3. HAProxy inspects the SNI field (does not decrypt)
4. HAProxy routes the entire encrypted connection to the backend (e.g., 141.56.51.13)
5. Backend terminates TLS and serves the content
**Benefits:**
- No TLS decryption at proxy (end-to-end encryption)
- Each backend manages its own certificates
- Simple certificate renewal via ACME/Let's Encrypt
- No certificate syncing required
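The generated HAProxy configuration for this scheme looks roughly like the snippet below (backend and frontend names are illustrative; the real file is generated by Nix):

```
frontend https_in
    bind *:443
    mode tcp
    # Wait for the ClientHello so the SNI field is available for inspection
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    # Route on SNI without decrypting the connection
    use_backend wiki_443 if { req_ssl_sni -i wiki.htw.stura-dresden.de }

backend wiki_443
    mode tcp
    server wiki 141.56.51.13:443 check
```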
## ACME Challenge Forwarding
Let's Encrypt verification works seamlessly:
1. Backend requests certificate from Let's Encrypt
2. Let's Encrypt queries DNS, finds 141.56.51.1
3. Let's Encrypt requests `http://domain/.well-known/acme-challenge/<token>`
4. HAProxy detects ACME challenge path
5. HAProxy forwards to backend without HTTPS redirect
6. Backend responds with challenge token
7. Let's Encrypt verifies and issues certificate
This is why the HTTP→HTTPS redirect has an exception for `/.well-known/acme-challenge/` paths.
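The ACME exception in the HTTP frontend looks roughly like this (a sketch with illustrative names, not the generated file verbatim):

```
frontend http_in
    bind *:80
    mode http
    acl is_acme  path_beg /.well-known/acme-challenge/
    acl host_wiki hdr(host) -i wiki.htw.stura-dresden.de
    # Forward ACME challenges to the backend instead of redirecting
    use_backend wiki_80 if is_acme host_wiki
    # Everything else is redirected to HTTPS
    redirect scheme https code 301 if !is_acme
```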
## HAProxy Stats
Access HAProxy statistics page (localhost only):
```bash
# SSH into proxy
ssh -p 1005 root@141.56.51.1
# Access stats via curl
curl http://127.0.0.1:8404/stats
# Or forward port to your local machine
ssh -p 1005 -L 8404:127.0.0.1:8404 root@141.56.51.1
# Then browse to http://localhost:8404/stats
```
The stats page shows:
- Current connections per backend
- Health check status
- Traffic statistics
- Error counts
## Configuration Structure
The HAProxy configuration is generated from Nix using `lib.foldlAttrs`:
```nix
forwards = {
service_name = {
dest = "141.56.51.XXX";
domain = "service.htw.stura-dresden.de";
httpPort = 80;
httpsPort = 443;
};
};
```
This generates:
- ACL rules for HTTP Host header matching
- Backend definitions for ports 80 and 443
- SNI routing rules for HTTPS
- HTTP→HTTPS redirects (except ACME)
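A simplified sketch of how such a configuration string can be folded together from the `forwards` set (the fragment emitted per service is abbreviated; the repository's actual generator differs in detail):

```nix
# Hypothetical generator: folds the forwards set into HAProxy ACL/backend text.
{ lib, forwards }:
lib.foldlAttrs
  (acc: name: fwd: acc + ''
    acl host_${name} hdr(host) -i ${fwd.domain}
    use_backend ${name}_80 if host_${name}

    backend ${name}_80
        mode http
        server ${name} ${fwd.dest}:${toString fwd.httpPort}
  '')
  ""
  forwards
```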
## Current Forwards
The proxy currently forwards traffic for:
**In this repository:**
- git.adm.htw.stura-dresden.de → 141.56.51.7 (Forgejo)
- wiki.htw.stura-dresden.de → 141.56.51.13 (MediaWiki)
- pro.htw.stura-dresden.de → 141.56.51.15 (Redmine)
- cloud.htw.stura-dresden.de → 141.56.51.16 (Nextcloud)
**External services (managed outside this repo):**
- stura.htw-dresden.de → 141.56.51.3 (Plone)
- tix.htw.stura-dresden.de → 141.56.51.220 (Pretix)
- vot.htw.stura-dresden.de → 141.56.51.57 (OpenSlides)
- mail.htw.stura-dresden.de → 141.56.51.14 (Mail server)
- lists.htw.stura-dresden.de → 141.56.51.14 (Mailing lists)
## Troubleshooting
### HAProxy not starting
```bash
# Check HAProxy status
systemctl status haproxy
# Check configuration syntax
haproxy -c -f /etc/haproxy/haproxy.cfg
# View HAProxy logs
journalctl -u haproxy -f
```
### Backend not reachable
```bash
# Check backend connectivity
curl -v http://141.56.51.XXX:80
curl -vk https://141.56.51.XXX:443
# Check HAProxy stats for backend status
curl http://127.0.0.1:8404/stats | grep backend_name
# Test DNS resolution
dig +short domain.htw.stura-dresden.de
# Check firewall rules
nft list ruleset | grep 141.56.51.XXX
```
### ACME challenges failing
```bash
# Verify HTTP forwarding works
curl -v http://domain.htw.stura-dresden.de/.well-known/acme-challenge/test
# Check HAProxy ACL for ACME
grep -i acme /etc/haproxy/haproxy.cfg
# Verify no HTTPS redirect for ACME paths
curl -I http://domain.htw.stura-dresden.de/.well-known/acme-challenge/test
# Should return 200/404, not 301 redirect
```
### SSH Jump not working
```bash
# Check SSH jump frontend
systemctl status haproxy
journalctl -u haproxy | grep ssh_jump
# Test connection to srs2
telnet 141.56.51.2 80
# Check HAProxy backend configuration
grep -A 5 "ssh_srs2" /etc/haproxy/haproxy.cfg
```
## Files and Directories
- **HAProxy config**: `/etc/haproxy/haproxy.cfg` (generated by Nix)
- **HAProxy socket**: `/run/haproxy/admin.sock` (if enabled)
- **Disk config**: `./hetzner-disk.nix` (disko configuration)
- **Hardware config**: `./hardware-configuration.nix` (VM hardware)
- **NixOS config**: `./default.nix` (proxy configuration)
## Security Considerations
- **No TLS decryption**: End-to-end encryption maintained
- **No access logs**: HAProxy logs to journal (can be filtered)
- **Stats page**: Localhost only (not exposed)
- **Firewall**: Only necessary ports open
- **SSH**: Custom port (1005) reduces automated attacks
---
## Central Reverse Proxy for Formerly Public IP Addresses
(Translated from the original German documentation)
The instances can largely keep running unchanged; only the DNS entries have to be changed to point to 141.56.51.1.
### HAProxy
HAProxy is used to forward the connections.
HAProxy supports several modes; for us, http and tcp are relevant.
```
backend cloud_443
    ...
    ...
```
## See Also
- [Main README](../../README.md) - Deployment methods and architecture
- [Git README](../git/README.md) - Forgejo configuration
- [Wiki README](../wiki/README.md) - MediaWiki configuration
- [Redmine README](../redmine/README.md) - Redmine configuration
- [Nextcloud README](../nextcloud/README.md) - Nextcloud configuration
- [HAProxy Documentation](http://www.haproxy.org/#docs)

---

**File:** `hosts/redmine/README.md`
# Redmine Host - Project Management
Redmine project management system at 141.56.51.15 running in an LXC container.
## Overview
- **Hostname**: pro
- **FQDN**: pro.htw.stura-dresden.de
- **IP Address**: 141.56.51.15
- **Type**: Proxmox LXC Container
- **Services**: Redmine (Rails), Nginx (reverse proxy), OpenSSH
## Services
### Redmine
Redmine is a flexible project management web application:
- **Port**: 3000 (local only, not exposed)
- **Database**: SQLite (default NixOS configuration)
- **SMTP relay**: mail.htw.stura-dresden.de:25
- **Image processing**: ImageMagick enabled
- **PDF support**: Ghostscript enabled
- **Auto-upgrade**: Enabled (Redmine updates automatically)
**Features:**
- Issue tracking
- Project wikis
- Time tracking
- Gantt charts and calendars
- Multiple project support
- Role-based access control
### Nginx
Nginx acts as a reverse proxy:
- Receives HTTPS requests (TLS termination)
- Forwards to Redmine on localhost:3000
- Manages ACME/Let's Encrypt certificates
- Default virtual host (catches all traffic to this IP)
**Privacy configuration:**
- Access logs: Disabled
- Error logs: Emergency level only (`/dev/null emerg`)
### Email Delivery
SMTP is configured for email notifications:
- **Delivery method**: SMTP
- **SMTP host**: mail.htw.stura-dresden.de
- **SMTP port**: 25
- **Authentication**: None (internal relay)
Redmine can send notifications for:
- New issues
- Issue updates
- Comments
- Project updates
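The SMTP setup described above corresponds roughly to this NixOS fragment (a sketch; `services.redmine.settings` maps onto Redmine's `configuration.yml`, and the exact nesting used in this repository may differ):

```nix
# Sketch of the Redmine email relay configuration.
{
  services.redmine = {
    enable = true;
    settings = {
      email_delivery = {
        delivery_method = "smtp";
        smtp_settings = {
          address = "mail.htw.stura-dresden.de";
          port = 25;
          # No authentication: internal relay
        };
      };
    };
  };
}
```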
## Deployment
See the [main README](../../README.md) for deployment methods.
### Initial Installation
**Using nixos-anywhere:**
```bash
nix run github:nix-community/nixos-anywhere -- --flake .#redmine --target-host root@141.56.51.15
```
**Using container tarball:**
```bash
nix build .#containers-redmine
scp result/tarball/nixos-system-x86_64-linux.tar.xz root@proxmox-host:/var/lib/vz/template/cache/
pct create 115 /var/lib/vz/template/cache/nixos-system-x86_64-linux.tar.xz \
--hostname pro \
--net0 name=eth0,bridge=vmbr0,ip=141.56.51.15/24,gw=141.56.51.254 \
--memory 2048 \
--cores 2 \
--rootfs local-lvm:10 \
--unprivileged 1 \
--features nesting=1
pct start 115
```
### Updates
```bash
# From local machine
nixos-rebuild switch --flake .#redmine --target-host root@141.56.51.15
# Or use auto-generated script
nix run .#redmine-update
```
## Post-Deployment Steps
After deploying for the first time:
1. **Access the web interface:**
```
https://pro.htw.stura-dresden.de
```
2. **Complete initial setup:**
- Log in with default admin credentials (admin/admin)
- **Immediately change the admin password**
- Configure basic settings (Settings → Administration)
3. **Configure LDAP authentication** (optional):
- Navigate to Administration → LDAP authentication
- Add LDAP server if using external identity provider
- Configure attribute mapping
4. **Set up projects:**
- Create projects via Administration → Projects → New project
- Configure project modules (issues, wiki, time tracking, etc.)
- Set up roles and permissions
5. **Configure email notifications:**
- Administration → Settings → Email notifications
- Verify SMTP settings are working
- Set default email preferences
- Test email delivery
6. **Configure issue tracking:**
- Administration → Trackers (Bug, Feature, Support, etc.)
- Administration → Issue statuses
- Administration → Workflows
## Integration with Proxy
The central proxy at 141.56.51.1 handles:
- **SNI routing**: Routes HTTPS traffic for pro.htw.stura-dresden.de
- **HTTP routing**: Routes HTTP traffic and redirects to HTTPS
- **ACME challenges**: Forwards certificate verification requests
This host manages its own ACME certificates. Nginx handles TLS termination.
## Troubleshooting
### SMTP connection issues
If email notifications are not being sent:
```bash
# Check Redmine email configuration
cat /var/lib/redmine/config/configuration.yml | grep -A 10 email_delivery
# Test SMTP connectivity
telnet mail.htw.stura-dresden.de 25
# View Redmine logs
tail -f /var/lib/redmine/log/production.log
# Check mail queue (if using local sendmail)
mailq
```
**Solution**: Verify the SMTP relay (mail.htw.stura-dresden.de) is reachable and accepting connections on port 25.
### ImageMagick/Ghostscript paths
If image processing or PDF thumbnails fail:
```bash
# Check ImageMagick installation
which convert
/run/current-system/sw/bin/convert --version
# Check Ghostscript installation
which gs
/run/current-system/sw/bin/gs --version
# Test image conversion
/run/current-system/sw/bin/convert test.png -resize 100x100 output.png
# View Redmine logs for image processing errors
grep -i imagemagick /var/lib/redmine/log/production.log
```
**Solution**: ImageMagick and Ghostscript are enabled via NixOS config. Paths are automatically configured.
### Database migration failures
If Redmine fails to start after an update:
```bash
# Check Redmine service status
systemctl status redmine
# View Redmine logs
journalctl -u redmine -f
# Manually run database migrations (if needed)
cd /var/lib/redmine
sudo -u redmine bundle exec rake db:migrate RAILS_ENV=production
# Check database schema version
sudo -u redmine bundle exec rake db:version RAILS_ENV=production
```
**Solution**: Auto-upgrade is enabled, but migrations can sometimes fail. Check logs for specific errors.
### Nginx proxy configuration
If the web interface is unreachable:
```bash
# Check Nginx configuration
nginx -t
# Check Nginx status
systemctl status nginx
# View Nginx error logs
journalctl -u nginx -f
# Test local Redmine connection
curl http://127.0.0.1:3000
```
**Solution**: Verify Nginx is proxying correctly to localhost:3000 and that Redmine is running.
### Redmine service not starting
If Redmine fails to start:
```bash
# Check service status
systemctl status redmine
# View detailed logs
journalctl -u redmine -n 100
# Check database file permissions
ls -l /var/lib/redmine/db/
# Check configuration
ls -l /var/lib/redmine/config/
# Try starting manually
cd /var/lib/redmine
sudo -u redmine bundle exec rails server -e production
```
**Solution**: Check logs for specific errors. Common issues include database permissions, missing gems, or configuration errors.
### ACME certificate issues
If HTTPS is not working:
```bash
# Check ACME certificate status
systemctl status acme-pro.htw.stura-dresden.de
# View ACME logs
journalctl -u acme-pro.htw.stura-dresden.de -f
# Check certificate files
ls -l /var/lib/acme/pro.htw.stura-dresden.de/
# Manually trigger renewal
systemctl start acme-pro.htw.stura-dresden.de
```
**Solution**: Ensure DNS points to proxy (141.56.51.1) and the proxy forwards ACME challenges to this host.
## Files and Directories
- **Redmine home**: `/var/lib/redmine/`
- **Configuration**: `/var/lib/redmine/config/`
- `configuration.yml` - Email and general settings
- `database.yml` - Database configuration
- **Logs**: `/var/lib/redmine/log/production.log`
- **Database**: `/var/lib/redmine/db/` (SQLite)
- **Files/attachments**: `/var/lib/redmine/files/`
- **Plugins**: `/var/lib/redmine/plugins/`
- **Themes**: `/var/lib/redmine/public/themes/`
## Network
- **Interface**: eth0 (LXC container)
- **IP**: 141.56.51.15/24
- **Gateway**: 141.56.51.254
- **Firewall**: Ports 22, 80, 443 allowed
## Configuration Details
- **Redmine version**: Latest from NixOS 25.11
- **Database**: SQLite (default)
- **Web server**: Nginx (reverse proxy)
- **Application server**: Puma (default Rails server)
- **Ruby version**: Determined by NixOS Redmine package
- **SMTP**: mail.htw.stura-dresden.de:25
- **ImageMagick**: Enabled (minimagick)
- **Ghostscript**: Enabled (PDF support)
- **Font**: Liberation Sans Regular
## Automatic Maintenance
- **Auto-upgrade**: Enabled (system automatically updates)
- **Auto-reboot**: Allowed (system may reboot for updates)
- **Store optimization**: Automatic
- **Garbage collection**: Automatic (delete older than 42 days)
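Expressed as NixOS options, this maintenance policy looks roughly like the following sketch (option names are the current upstream ones and may differ slightly from the repository's global `default.nix`):

```nix
# Sketch of the automatic maintenance settings.
{
  system.autoUpgrade = {
    enable = true;
    allowReboot = true;
  };
  nix = {
    optimise.automatic = true;  # store optimization
    gc = {
      automatic = true;
      options = "--delete-older-than 42d";
    };
  };
}
```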
## Useful Commands
```bash
# Access Redmine console
cd /var/lib/redmine
sudo -u redmine bundle exec rails console -e production
# Run rake tasks
sudo -u redmine bundle exec rake <task> RAILS_ENV=production
# Database backup
sudo -u redmine cp /var/lib/redmine/db/production.sqlite3 /backup/redmine-$(date +%Y%m%d).sqlite3
# View running processes
ps aux | grep redmine
# Restart Redmine
systemctl restart redmine
```
## See Also
- [Main README](../../README.md) - Deployment methods and architecture
- [Proxy README](../proxy/README.md) - How the central proxy routes traffic
- [Redmine Documentation](https://www.redmine.org/projects/redmine/wiki/Guide)
- [Redmine Administration Guide](https://www.redmine.org/projects/redmine/wiki/RedmineAdministration)
- [NixOS Redmine Options](https://search.nixos.org/options?query=services.redmine)

---

**File:** `hosts/wiki/README.md`
# Wiki Host - MediaWiki
MediaWiki instance at 141.56.51.13 running in an LXC container.
## Overview
- **Hostname**: wiki
- **FQDN**: wiki.htw.stura-dresden.de
- **IP Address**: 141.56.51.13
- **Type**: Proxmox LXC Container
- **Services**: MediaWiki, MariaDB, Apache httpd, PHP-FPM
## Services
### MediaWiki
The StuRa HTW Dresden wiki runs MediaWiki with extensive customization:
- **Name**: Wiki StuRa HTW Dresden
- **Language**: German (de)
- **Default skin**: Vector (classic)
- **Session timeout**: 3 hours (10800 seconds)
- **ImageMagick**: Enabled for image processing
- **Instant Commons**: Enabled (access to Wikimedia Commons images)
### Custom Namespaces
The wiki defines several custom namespaces for organizational purposes:
| Namespace | ID | Purpose |
|-----------|-----|---------|
| StuRa | 100 | Standard StuRa content |
| Intern | 102 | Internal (non-public) StuRa content |
| Admin | 104 | Administrative wiki content |
| Person | 106 | Individual person pages (non-public) |
| Faranto | 108 | Faranto e.V. content |
| ET | 212 | ET Fachschaft content |
| ET_intern | 412 | ET internal content |
| LaUCh | 216 | LaUCh Fachschaft content |
| LaUCh_intern | 416 | LaUCh internal content |
Each namespace has a corresponding discussion namespace (odd-numbered ID, content namespace ID + 1).
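In the NixOS module such a namespace table is typically declared as raw PHP via `services.mediawiki.extraConfig`; a sketch for two of the namespaces above (constant and talk-namespace names are illustrative):

```nix
{
  services.mediawiki.extraConfig = ''
    // Custom namespace IDs; each talk namespace is the content ID + 1
    define("NS_STURA", 100);
    define("NS_STURA_TALK", 101);
    define("NS_INTERN", 102);
    define("NS_INTERN_TALK", 103);

    $wgExtraNamespaces[NS_STURA] = "StuRa";
    $wgExtraNamespaces[NS_STURA_TALK] = "StuRa_Diskussion";
    $wgExtraNamespaces[NS_INTERN] = "Intern";
    $wgExtraNamespaces[NS_INTERN_TALK] = "Intern_Diskussion";
  '';
}
```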
### User Groups and Permissions
**Custom user groups:**
- **intern**: Access to Intern and Person namespaces
- **ET**: Access to ET_intern namespace
- **LUC**: Access to LaUCh_intern namespace
These groups have the same base permissions as standard users (move pages, edit, upload, etc.) plus access to their respective restricted namespaces.
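With the Lockdown extension, restricting a namespace to one of these groups looks roughly like this fragment inside `extraConfig` (a sketch, not the repository's exact settings; `NS_INTERN` is assumed to be defined alongside the namespace table):

```nix
{
  services.mediawiki.extraConfig = ''
    // Only members of the "intern" group (and sysops) may read or edit
    // the Intern namespace
    $wgNamespacePermissionLockdown[NS_INTERN]['read'] = [ 'intern', 'sysop' ];
    $wgNamespacePermissionLockdown[NS_INTERN]['edit'] = [ 'intern', 'sysop' ];
  '';
}
```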
### Spam Prevention
**QuestyCaptcha** is configured to prevent automated spam:
- Challenges users with questions about HTW and StuRa
- Triggered on: edit, create, createtalk, addurl, createaccount, badlogin
- Questions are specific to local knowledge (e.g., "Welche Anzahl an Referaten hat unser StuRa geschaffen?")
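A QuestyCaptcha question list is configured roughly like this (the question shown is the example from the text above; the answer value is a placeholder, deliberately not the real one):

```nix
{
  services.mediawiki.extraConfig = ''
    $wgCaptchaClass = 'QuestyCaptcha';
    $wgCaptchaQuestions[] = [
      'question' => 'Welche Anzahl an Referaten hat unser StuRa geschaffen?',
      'answer'   => '...',  // placeholder; real answer kept out of the docs
    ];
    $wgCaptchaTriggers['edit']          = true;
    $wgCaptchaTriggers['create']        = true;
    $wgCaptchaTriggers['createtalk']    = true;
    $wgCaptchaTriggers['addurl']        = true;
    $wgCaptchaTriggers['createaccount'] = true;
    $wgCaptchaTriggers['badlogin']      = true;
  '';
}
```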
### Extensions
The following extensions are installed:
- **Lockdown**: Restricts namespace access by user group
- **ContributionScores**: Statistics of contributions by user
- **UserMerge**: Merge and delete user accounts (for spam cleanup)
- **Interwiki**: Use interwiki links (e.g., Wikipedia references)
- **Cite**: Reference system (footnotes)
- **ConfirmEdit/QuestyCaptcha**: CAPTCHA challenges
## Deployment
See the [main README](../../README.md) for deployment methods.
### Initial Installation
**Using nixos-anywhere:**
```bash
nix run github:nix-community/nixos-anywhere -- --flake .#wiki --target-host root@141.56.51.13
```
**Using container tarball:**
```bash
nix build .#containers-wiki
scp result/tarball/nixos-system-x86_64-linux.tar.xz root@proxmox-host:/var/lib/vz/template/cache/
pct create 113 /var/lib/vz/template/cache/nixos-system-x86_64-linux.tar.xz \
--hostname wiki \
--net0 name=eth0,bridge=vmbr0,ip=141.56.51.13/24,gw=141.56.51.254 \
--memory 2048 \
--cores 2 \
--rootfs local-lvm:10 \
--unprivileged 1 \
--features nesting=1
pct start 113
```
### Updates
```bash
# From local machine
nixos-rebuild switch --flake .#wiki --target-host root@141.56.51.13
# Or use auto-generated script
nix run .#wiki-update
```
## Post-Deployment Steps
After deploying for the first time:
1. **Set admin password:**
```bash
echo "your-secure-password" > /var/lib/mediawiki/mediawiki-password
chmod 600 /var/lib/mediawiki/mediawiki-password
```
2. **Set database password:**
```bash
echo "your-db-password" > /var/lib/mediawiki/mediawiki-dbpassword
chmod 600 /var/lib/mediawiki/mediawiki-dbpassword
```
3. **Access the web interface:**
```
https://wiki.htw.stura-dresden.de
```
4. **Complete initial setup:**
- Log in with admin credentials
- Configure additional settings via Special:Version
- Set up main page
5. **Configure namespace permissions:**
- Add users to `intern`, `ET`, or `LUC` groups via Special:UserRights
- Verify namespace restrictions work correctly
- Test that non-members cannot access restricted namespaces
6. **Add users to appropriate groups:**
- Navigate to Special:UserRights
- Select user
- Add to: intern, ET, LUC, sysop, bureaucrat (as needed)
7. **Upload logo and favicon** (optional):
- Place files in `/var/lib/mediawiki/images/`
- Files: `logo.png`, `logo.svg`, `favicon.png`
## Integration with Proxy
The central proxy at 141.56.51.1 handles:
- **SNI routing**: Routes HTTPS traffic for wiki.htw.stura-dresden.de
- **HTTP routing**: Routes HTTP traffic and redirects to HTTPS
- **ACME challenges**: Forwards certificate verification requests
This host manages its own ACME certificates. Apache httpd handles TLS termination.
## Troubleshooting
### Locale warnings
When accessing the container with `pct enter`, you may see:
```
sh: warning: setlocale: LC_CTYPE: cannot change locale (en_US.UTF-8): No such file or directory
sh: warning: setlocale: LC_COLLATE: cannot change locale (en_US.UTF-8): No such file or directory
```
**This is a known issue and can be safely ignored.** It only affects the interactive shell environment, not the running services. Regular SSH access provides a proper shell with correct locale settings.
### Database connection issues
If MediaWiki cannot connect to the database:
```bash
# Check MariaDB status
systemctl status mysql
# Check database exists
mysql -u root -e "SHOW DATABASES;"
# Check user permissions
mysql -u root -e "SHOW GRANTS FOR 'mediawiki'@'localhost';"
# View MediaWiki logs
journalctl -u mediawiki -f
```
**Solution**: Ensure the database password in `/var/lib/mediawiki/mediawiki-dbpassword` matches the database user password.
### Extension loading problems
If extensions are not working:
```bash
# Check extension files exist
ls -l /nix/store/*-mediawiki-extensions/
# View PHP errors
tail -f /var/log/httpd/error_log
# Test MediaWiki configuration
php /var/lib/mediawiki/maintenance/checkSetup.php
```
**Solution**: Verify extensions are properly defined in the configuration and compatible with the MediaWiki version.
### ImageMagick configuration
If image uploads or thumbnails fail:
```bash
# Check ImageMagick installation
which convert
/run/current-system/sw/bin/convert --version
# Test image conversion
/run/current-system/sw/bin/convert input.png -resize 100x100 output.png
# Check MediaWiki image directory permissions
ls -ld /var/lib/mediawiki/images/
```
**Solution**: Ensure ImageMagick path is set correctly (`$wgImageMagickConvertCommand`) and the images directory is writable.
### Namespace permission issues
If users can access restricted namespaces:
```bash
# Check Lockdown extension is loaded
grep -i lockdown /var/lib/mediawiki/LocalSettings.php
# Verify user group membership
# Log in as admin and check Special:UserRights
# Check namespace permission configuration
grep -A 5 "wgNamespacePermissionLockdown" /var/lib/mediawiki/LocalSettings.php
```
**Solution**: Verify the Lockdown extension is installed and `$wgNamespacePermissionLockdown` is configured correctly for each restricted namespace.
### ACME certificate issues
If HTTPS is not working:
```bash
# Check ACME certificate status
systemctl status acme-wiki.htw.stura-dresden.de
# View ACME logs
journalctl -u acme-wiki.htw.stura-dresden.de -f
# Check Apache HTTPS configuration
httpd -t -D DUMP_VHOSTS
```
**Solution**: Ensure DNS points to proxy (141.56.51.1) and the proxy forwards ACME challenges to this host.
## Files and Directories
- **MediaWiki data**: `/var/lib/mediawiki/`
- **Password file**: `/var/lib/mediawiki/mediawiki-password`
- **DB password file**: `/var/lib/mediawiki/mediawiki-dbpassword`
- **Images**: `/var/lib/mediawiki/images/`
- **LocalSettings**: `/var/lib/mediawiki/LocalSettings.php` (generated)
- **Extensions**: `/nix/store/.../mediawiki-extensions/`
- **Database**: MariaDB stores data in `/var/lib/mysql/`
## Network
- **Interface**: eth0 (LXC container)
- **IP**: 141.56.51.13/24
- **Gateway**: 141.56.51.254
- **Firewall**: Ports 80, 443 allowed
## Configuration Details
- **Time zone**: Europe/Berlin
- **Table prefix**: sturawiki
- **Emergency contact**: wiki@stura.htw-dresden.de
- **Password sender**: wiki@stura.htw-dresden.de
- **External images**: Allowed
- **File uploads**: Enabled
- **Email notifications**: Enabled (user talk, watchlist)
## Automatic Maintenance
- **Auto-upgrade**: Enabled (system automatically updates)
- **Auto-reboot**: Allowed (system may reboot for updates)
- **Store optimization**: Automatic
- **Garbage collection**: Automatic
## See Also
- [Main README](../../README.md) - Deployment methods and architecture
- [Proxy README](../proxy/README.md) - How the central proxy routes traffic
- [MediaWiki Documentation](https://www.mediawiki.org/wiki/Documentation)
- [NixOS MediaWiki Options](https://search.nixos.org/options?query=services.mediawiki)
- [Extension:Lockdown](https://www.mediawiki.org/wiki/Extension:Lockdown)
- [Extension:QuestyCaptcha](https://www.mediawiki.org/wiki/Extension:QuestyCaptcha)