
Hearth Minimal Deployment

Single EC2 + k3s infrastructure for ~1 user. Cost: ~$7.50/month.

Architecture

┌─────────────────────────────────────────────────────────────┐
│                         Internet                             │
└─────────────────────────────────────────────────────────────┘
                              │
                    ┌─────────┴─────────┐
                    │   Elastic IP      │
                    │ git.beyond...     │
                    └─────────┬─────────┘
                              │
              ┌───────────────┼───────────────┐
              │               │               │
           :22 (git)       :443 (https)    :2222 (admin ssh)
              │               │               │
┌─────────────┴───────────────┴───────────────┴─────────────┐
│                     EC2 t4g.small (spot)                   │
│                                                            │
│  ┌──────────────────────────────────────────────────────┐  │
│  │                         k3s                          │  │
│  │  ┌─────────────┐  ┌─────────────┐  ┌──────────────┐  │  │
│  │  │   Traefik   │  │   Forgejo   │  │   SQLite     │  │  │
│  │  │  (ingress)  │  │   (git)     │  │   (data)     │  │  │
│  │  └─────────────┘  └─────────────┘  └──────────────┘  │  │
│  └──────────────────────────────────────────────────────┘  │
│                                                            │
│  EBS gp3 20GB                                              │
└────────────────────────────────────────────────────────────┘
                              │
                    Daily Backup to S3

Cost Breakdown

Component              Monthly
EC2 t4g.small (spot)   ~$5.00
EBS gp3 20GB           ~$2.00
Elastic IP             $0 (while attached)
S3 backups             ~$0.50
Total                  ~$7.50
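The total is just the sum of the line items; as a quick sanity check:

```shell
# Sum the estimated monthly line items from the table above.
awk 'BEGIN { printf "~$%.2f/month\n", 5.00 + 2.00 + 0.00 + 0.50 }'
# prints: ~$7.50/month
```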

Prerequisites

  1. AWS CLI configured with hearth profile
  2. Terraform >= 1.5.0
  3. Domain with DNS access

Deployment

# 1. Initialize terraform
cd terraform/minimal
terraform init

# 2. Create your configuration from the example
cp terraform.tfvars.example terraform.tfvars
vim terraform.tfvars  # Set your domain and email

# 3. Plan
terraform plan

# 4. Apply
terraform apply

# 5. Note the outputs
terraform output

# 6. Configure DNS
# Add A record: git.yourdomain.com -> <elastic_ip>

# 7. Wait for DNS propagation (5-30 minutes)

# 8. Visit https://git.yourdomain.com to complete Forgejo setup

Post-Deployment

SSH Access

# Admin SSH (system access)
ssh -p 2222 ec2-user@<elastic_ip>

# Check k3s status
sudo kubectl get pods -A

# View Forgejo logs
sudo kubectl logs -n forgejo deploy/forgejo

Git Access

# Clone a repo (after creating it in web UI)
git clone git@git.yourdomain.com:org/repo.git
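If your key isn't the default, a client-side ~/.ssh/config entry keeps clone URLs terse. The `git` user is Forgejo's usual SSH account and the key path is illustrative; adjust to your setup:

```
Host git.yourdomain.com
    User git
    Port 22
    IdentityFile ~/.ssh/id_ed25519
```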

Backups

Automatic daily backups to S3 at 3 AM UTC.

# Manual backup
sudo /usr/local/bin/backup-forgejo.sh hearth-backups-<account_id>

# List backups
aws s3 ls s3://hearth-backups-<account_id>/backups/
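When scripting against the bucket, picking the newest backup by hand gets tedious. A small helper, assuming the default `aws s3 ls` column layout (date, time, size, name), could look like:

```shell
# latest_backup — read `aws s3 ls` output on stdin and print the newest
# object name, sorting by the date and time columns.
latest_backup() {
    sort -k1,2 | awk 'END { print $4 }'
}

# Usage:
#   aws s3 ls s3://hearth-backups-<account_id>/backups/ | latest_backup
```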

Restore from Backup

# Download backup
aws s3 cp s3://hearth-backups-<account_id>/backups/backup-TIMESTAMP.tar.gz /tmp/

# Extract (tar -C requires the target directory to exist)
mkdir -p /tmp/restore
tar -xzf /tmp/backup-TIMESTAMP.tar.gz -C /tmp/restore

# Stop Forgejo
sudo kubectl scale deploy/forgejo -n forgejo --replicas=0

# Restore database
sudo cp /tmp/restore/gitea.db /data/forgejo/gitea/gitea.db
sudo chown 1000:1000 /data/forgejo/gitea/gitea.db

# Start Forgejo
sudo kubectl scale deploy/forgejo -n forgejo --replicas=1
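Before overwriting the live database with the steps above, it's worth confirming the downloaded archive is actually readable; a hedged sketch:

```shell
# verify_backup ARCHIVE — list the tarball contents without extracting,
# so a corrupt or truncated download is caught before Forgejo is stopped.
verify_backup() {
    if tar -tzf "$1" > /dev/null 2>&1; then
        echo "archive OK: $1"
    else
        echo "archive corrupt: $1" >&2
        return 1
    fi
}

# Usage: verify_backup /tmp/backup-TIMESTAMP.tar.gz
```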

Upgrade Path

When you outgrow this setup:

  1. More resources: Change instance type in terraform.tfvars
  2. High availability: Migrate to EKS using the same manifests
  3. Multiple users: Add authentication via Keycloak

The Kubernetes manifests are portable to any k8s cluster.

Troubleshooting

Forgejo not starting

sudo kubectl describe pod -n forgejo
sudo kubectl logs -n forgejo deploy/forgejo

TLS not working

# Check Traefik logs
sudo kubectl logs -n traefik deploy/traefik

# Verify DNS is pointing to correct IP
dig git.yourdomain.com
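If DNS checks out, inspecting the certificate the server actually presents narrows things down. A small helper that reads a PEM certificate on stdin:

```shell
# cert_summary — print issuer and validity window of a PEM certificate
# read from stdin.
cert_summary() {
    openssl x509 -noout -issuer -dates
}

# Check the live certificate (hostname as set in terraform.tfvars):
#   openssl s_client -connect git.yourdomain.com:443 \
#       -servername git.yourdomain.com </dev/null 2>/dev/null | cert_summary
```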

Spot instance interrupted

A persistent spot request relaunches the instance automatically when capacity returns. Data is preserved on the EBS volume. Check the instance status in the AWS console.
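To detect an interruption before it happens, the EC2 instance metadata service publishes a two-minute notice. A sketch using the IMDSv2 token flow, with the base URL parameterized so the function can be tried off-instance:

```shell
# check_spot_interruption [BASE_URL] — query the instance metadata service
# for a pending spot interruption notice. The endpoint returns 404 until an
# interruption is actually scheduled.
check_spot_interruption() {
    local imds="${1:-http://169.254.169.254/latest}" token notice
    # IMDSv2 requires a short-lived session token on every request.
    token=$(curl -sf --max-time 2 -X PUT "$imds/api/token" \
        -H "X-aws-ec2-metadata-token-ttl-seconds: 60" || true)
    if notice=$(curl -sf --max-time 2 \
            -H "X-aws-ec2-metadata-token: ${token:-none}" \
            "$imds/meta-data/spot/instance-action"); then
        echo "interruption pending: $notice"
    else
        echo "no interruption notice"
    fi
}

# Usage (on the instance): check_spot_interruption
```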

Security Notes

  1. Restrict admin access: Update admin_cidr_blocks in terraform.tfvars
  2. SSH keys: Add your public key to ~/.ssh/authorized_keys on the instance
  3. Forgejo admin: Create admin account during initial setup
  4. Updates: Automatic security updates enabled via dnf-automatic
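For item 1, the entry in terraform.tfvars might look like this (the CIDR is an example; check variables.tf for the exact variable definition):

```
admin_cidr_blocks = ["203.0.113.7/32"]  # your home/office IP only
```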