# Hearth Minimal Deployment
Single EC2 + k3s infrastructure for ~1 user. Cost: ~$7.50/month.
## Architecture

```
┌─────────────────────────────────────────────────────────────┐
│                        Internet                             │
└─────────────────────────────────────────────────────────────┘
                            │
                  ┌─────────┴─────────┐
                  │    Elastic IP     │
                  │   git.beyond...   │
                  └─────────┬─────────┘
                            │
            ┌───────────────┼───────────────┐
            │               │               │
       :22 (git)      :443 (https)    :2222 (admin ssh)
            │               │               │
┌───────────┴───────────────┴───────────────┴───────────────┐
│                    EC2 t4g.small (spot)                    │
│                                                            │
│  ┌──────────────────────────────────────────────────────┐  │
│  │                        k3s                           │  │
│  │  ┌─────────────┐  ┌─────────────┐  ┌──────────────┐  │  │
│  │  │   Traefik   │  │   Forgejo   │  │    SQLite    │  │  │
│  │  │  (ingress)  │  │    (git)    │  │    (data)    │  │  │
│  │  └─────────────┘  └─────────────┘  └──────────────┘  │  │
│  └──────────────────────────────────────────────────────┘  │
│                                                            │
│                       EBS gp3 20GB                         │
└────────────────────────────────────────────────────────────┘
                            │
                   Daily Backup to S3
```
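Git SSH on :22 is routed by Traefik at the TCP level (an `IngressRouteTCP` resource), not as HTTP. A minimal sketch of such a route — the entryPoint and Service names here are assumptions, not taken from the actual manifests:

```yaml
apiVersion: traefik.io/v1alpha1   # traefik.containo.us/v1alpha1 on older Traefik
kind: IngressRouteTCP
metadata:
  name: forgejo-ssh
  namespace: forgejo
spec:
  entryPoints:
    - ssh                    # assumed entryPoint bound to :22 in Traefik's static config
  routes:
    - match: HostSNI(`*`)    # plain SSH carries no SNI, so match all TCP on this entryPoint
      services:
        - name: forgejo-ssh  # assumed Service exposing Forgejo's SSH port
          port: 22
```

Note that `IngressRouteTCP` is a Traefik custom resource: if the Traefik CRDs are not installed, the resource is silently ignored and port 22 traffic falls through to HTTP handling (400 Bad Request).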
## Cost Breakdown
| Component | Monthly |
|---|---|
| EC2 t4g.small spot | ~$5 |
| EBS gp3 20GB | ~$2 |
| Elastic IP | $0 (attached) |
| S3 backups | ~$0.50 |
| Total | ~$7.50 |
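The total is just the sum of the line-item estimates:

```shell
# Sum of the component estimates in the table above
awk 'BEGIN { printf "%.2f\n", 5 + 2 + 0 + 0.50 }'
# → 7.50
```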
## Prerequisites

- AWS CLI configured with the `hearth` profile
- Terraform >= 1.5.0
- Domain with DNS access
## Deployment

```bash
# 1. Initialize terraform
cd terraform/minimal
terraform init

# 2. Review configuration
vim terraform.tfvars  # Set your domain and email

# 3. Plan
terraform plan

# 4. Apply
terraform apply

# 5. Note the outputs
terraform output

# 6. Configure DNS
#    Add A record: git.yourdomain.com -> <elastic_ip>

# 7. Wait for DNS propagation (5-30 minutes)

# 8. Visit https://git.yourdomain.com to complete Forgejo setup
```
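Step 2 assumes a `terraform.tfvars` next to the example file. A sketch of what it might contain — the variable names other than `admin_cidr_blocks` are assumptions; check `variables.tf` for the real ones:

```hcl
# Illustrative values only; consult variables.tf for the actual variable names.
domain            = "git.yourdomain.com"   # DNS name served by Traefik
email             = "you@example.com"      # ACME / Let's Encrypt contact
admin_cidr_blocks = ["203.0.113.0/24"]     # networks allowed on admin SSH :2222
```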
## Post-Deployment

### SSH Access

```bash
# Admin SSH (system access)
ssh -p 2222 ec2-user@<elastic_ip>

# Check k3s status
sudo kubectl get pods -A

# View Forgejo logs
sudo kubectl logs -n forgejo deploy/forgejo
```
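Because admin SSH sits on a non-standard port, a host alias in `~/.ssh/config` saves retyping it (the alias name is illustrative):

```
Host hearth-admin
    HostName <elastic_ip>
    Port 2222
    User ec2-user
```

After that, `ssh hearth-admin` connects directly.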
### Git Access

```bash
# Clone a repo (after creating it in the web UI)
git clone git@git.yourdomain.com:org/repo.git
```
### Backups

Automatic daily backups run to S3 at 3 AM UTC.

```bash
# Manual backup
sudo /usr/local/bin/backup-forgejo.sh hearth-backups-<account_id>

# List backups
aws s3 ls s3://hearth-backups-<account_id>/backups/
```
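The `backup-TIMESTAMP` names in the listings use a timestamped scheme; a sketch of how such a name can be generated — the exact format `backup-forgejo.sh` produces is an assumption:

```shell
# Hypothetical naming scheme; backup-forgejo.sh may format timestamps differently.
name="backup-$(date -u +%Y%m%d-%H%M%S).tar.gz"
echo "$name"
```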
### Restore from Backup

```bash
# Download backup
aws s3 cp s3://hearth-backups-<account_id>/backups/backup-TIMESTAMP.tar.gz /tmp/

# Extract (create the target directory first; tar -C fails if it is missing)
mkdir -p /tmp/restore
tar -xzf /tmp/backup-TIMESTAMP.tar.gz -C /tmp/restore

# Stop Forgejo
sudo kubectl scale deploy/forgejo -n forgejo --replicas=0

# Restore database
sudo cp /tmp/restore/gitea.db /data/forgejo/gitea/gitea.db
sudo chown 1000:1000 /data/forgejo/gitea/gitea.db

# Start Forgejo
sudo kubectl scale deploy/forgejo -n forgejo --replicas=1
```
## Upgrade Path

When you outgrow this setup:

- **More resources**: Change the instance type in terraform.tfvars
- **High availability**: Migrate to EKS using the same manifests
- **Multiple users**: Add authentication via Keycloak

The Kubernetes manifests are portable to any k8s cluster.
## Troubleshooting

### Forgejo not starting

```bash
sudo kubectl describe pod -n forgejo
sudo kubectl logs -n forgejo deploy/forgejo
```

### TLS not working

```bash
# Check Traefik logs
sudo kubectl logs -n traefik deploy/traefik

# Verify DNS is pointing to the correct IP
dig git.yourdomain.com
```
### Spot instance interrupted

The instance restarts automatically, and data is preserved on the EBS volume. Check instance status in the AWS console.
## Security Notes

- **Restrict admin access**: Update `admin_cidr_blocks` in terraform.tfvars
- **SSH keys**: Add your public key to `~/.ssh/authorized_keys` on the instance
- **Forgejo admin**: Create the admin account during initial setup
- **Updates**: Automatic security updates are enabled via dnf-automatic