feat(minimal): Add k3s-on-EC2 infrastructure for single user

Decision from 12-expert alignment dialogue on single-user scale.
Implements Option E with modifications:

- t4g.small spot instance (~$5/mo)
- k3s with Traefik for ingress + Let's Encrypt TLS
- SQLite database for Forgejo
- S3 backups with 30-day lifecycle
- EBS gp3 20GB encrypted
- Admin SSH on port 2222, Git SSH on port 22

Total cost: ~$7.50/month

Includes:
- terraform/minimal/ - full terraform configuration
- terraform/bootstrap/ - state backend (already applied)
- docs/spikes/0001-single-user-scale.md - decision documentation

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Eric Garcia, 2026-01-24 06:21:55 -05:00
parent e78000831e
commit b1065ca887
7 changed files with 1353 additions and 0 deletions

docs/spikes/0001-single-user-scale.md
# Spike: Single-User Infrastructure Scale
**Date:** 2026-01-24
**Status:** Decided
## Decision
**Chosen: Option E (k3s on EC2)** with modifications from 12-expert alignment dialogue.
Key decisions:
- t4g.small spot instance (~$5/mo)
- k3s with Traefik for ingress + Let's Encrypt TLS
- SQLite database (simpler than PostgreSQL for single user)
- S3 for backups with lifecycle policies
- EBS gp3 20GB with encryption
- Admin SSH on port 2222, Git SSH on port 22
Implementation: `terraform/minimal/`
Cost: ~$7.50/month
## Problem
Current design targets production scale (~$100-150/mo). For ~1 user, we need something much smaller.
## Current Design Cost Breakdown
| Component | Monthly | Purpose |
|-----------|---------|---------|
| EKS Control Plane | $73 | Kubernetes API |
| NAT Gateways (3x) | $96 | Private subnet internet |
| NLB | $16 | Load balancing |
| EFS | $5+ | Shared storage |
| S3 | $5 | Backups, blobs |
| Spot nodes | $10-50 | Compute |
| **Total** | **$205-245** | |
That's absurd for 1 user running Forgejo.
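As a sanity check, the range in the table can be reproduced by summing its own line items (figures are the table's estimates, not live AWS pricing):

```python
# Reproduce the production-design total from the table's own estimates.
components = {
    "EKS control plane": (73, 73),
    "NAT gateways (3x)": (96, 96),
    "NLB": (16, 16),
    "EFS": (5, 5),
    "S3": (5, 5),
    "Spot nodes": (10, 50),
}

low = sum(lo for lo, _ in components.values())
high = sum(hi for _, hi in components.values())
print(f"${low}-{high}/mo")  # → $205-245/mo
```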
## Options
### Option A: Single EC2 + Docker (~$5-15/mo)
**Architecture:**
```
Internet → EC2 (t4g.small) → Docker → Forgejo
                                    → SQLite (local)
                                    → Local disk
```
**Cost:**
- t4g.small spot: ~$3-5/mo
- EBS gp3 20GB: ~$2/mo
- Elastic IP: $0 (if attached)
- **Total: ~$5-7/mo**
**Pros:**
- Dead simple
- Can SSH directly
- Easy to understand and debug
- Can grow later
**Cons:**
- Single point of failure
- Manual updates
- No k8s experience
**Implementation:**
```bash
# User data script
docker run -d \
  --name forgejo \
  -p 80:3000 -p 22:22 \
  -v /data/forgejo:/data \
  --restart always \
  codeberg.org/forgejo/forgejo:9
```
### Option B: Lightsail Container (~$7/mo)
**Architecture:**
```
Internet → Lightsail Container Service → Forgejo
                                       → Lightsail Storage
```
**Cost:**
- Nano container: $7/mo (includes 512MB RAM, 0.25 vCPU)
- Storage: included
- HTTPS: included
- **Total: ~$7/mo**
**Pros:**
- Managed TLS
- Simple deployment
- AWS-native
- Easy scaling path
**Cons:**
- Limited resources on nano
- Lightsail-specific
- Less control
### Option C: Fargate Spot (~$10-20/mo)
**Architecture:**
```
Internet → ALB → Fargate Spot → Forgejo
                              → EFS (minimal)
```
**Cost:**
- Fargate Spot (0.25 vCPU, 0.5GB): ~$3-5/mo
- ALB: ~$16/mo (overkill, but required)
- EFS: ~$1/mo (minimal usage)
- **Total: ~$20/mo**
**Pros:**
- Serverless containers
- Auto-restart on failure
- Path to EKS later
**Cons:**
- ALB cost dominates
- More complex than EC2
### Option D: EKS Minimal (~$85/mo)
**Architecture:**
```
Internet → NLB → EKS (Fargate only) → Forgejo
                                    → EFS
```
**Cost:**
- EKS Control Plane: $73
- Fargate pod: ~$5
- NLB: ~$0 (use NodePort + instance IP)
- EFS: ~$5
- **Total: ~$83/mo**
**Pros:**
- Real Kubernetes
- Can scale up cleanly
- Production-like
**Cons:**
- Still expensive for 1 user
- $73 floor just for control plane
### Option E: k3s on EC2 (~$8-15/mo)
**Architecture:**
```
Internet → EC2 (t4g.small) → k3s → Forgejo
                                 → SQLite
                                 → Local storage
```
**Cost:**
- t4g.small spot: ~$5/mo
- EBS: ~$2/mo
- **Total: ~$7/mo**
**Pros:**
- Real Kubernetes (k3s)
- Can use same manifests as EKS
- Cheap
- Learning path to EKS
**Cons:**
- Self-managed k8s
- Single node
- Updates are manual
## Comparison Matrix
| Option | Cost/mo | Complexity | Scaling Path | K8s Compatible |
|--------|---------|------------|--------------|----------------|
| A: EC2+Docker | $5-7 | Low | Manual | No |
| B: Lightsail | $7 | Low | Limited | No |
| C: Fargate | $20 | Medium | Good | Partial |
| D: EKS Minimal | $83 | High | Excellent | Yes |
| E: k3s on EC2 | $7-10 | Medium | Good | Yes |
## Recommendation
**For 1 user who wants Forgejo NOW:** Option A (EC2 + Docker)
- Get running in 10 minutes
- $5-7/month
- Upgrade to k3s or EKS later
**For 1 user who wants k8s experience:** Option E (k3s on EC2)
- Same manifests work on EKS later
- $7-10/month
- Real Kubernetes learning
**For future growth path:**
```
EC2+Docker → k3s → EKS (when needed)
    $5       $7      $100+
```
## Implementation: Option A (Fastest)
### Terraform for Single EC2
```hcl
# Minimal VPC
resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true
}

resource "aws_subnet" "public" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.1.0/24"
  map_public_ip_on_launch = true
}

resource "aws_internet_gateway" "main" {
  vpc_id = aws_vpc.main.id
}

# Single EC2 instance
resource "aws_instance" "forgejo" {
  ami           = data.aws_ami.amazon_linux_2023.id
  instance_type = "t4g.small"
  subnet_id     = aws_subnet.public.id

  user_data = <<-EOF
    #!/bin/bash
    dnf install -y docker
    systemctl enable --now docker
    docker run -d --name forgejo \
      -p 80:3000 -p 22:22 \
      -v /data/forgejo:/data \
      --restart always \
      codeberg.org/forgejo/forgejo:9
  EOF

  root_block_device {
    volume_size = 20
    volume_type = "gp3"
  }
}

# Elastic IP for stable DNS
resource "aws_eip" "forgejo" {
  instance = aws_instance.forgejo.id
}
```
### DNS Setup
Point `git.beyondtheuniverse.superviber.com` to the Elastic IP.
### TLS Options
1. **Caddy reverse proxy** (auto Let's Encrypt)
2. **Traefik** (auto Let's Encrypt)
3. **certbot** on the instance
## Next Steps
1. Choose option
2. Implement minimal terraform
3. Deploy Forgejo
4. Create hearth repo in Forgejo
5. Push hearth to Forgejo
6. Iterate
## Questions
1. Do you need Kubernetes experience/compatibility?
2. Is $5-7/mo acceptable?
3. Do you want TLS managed or manual?

terraform/bootstrap/
# Terraform Backend Bootstrap
# Creates S3 bucket and DynamoDB table for terraform state
terraform {
  required_version = ">= 1.5.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.30"
    }
  }
}

provider "aws" {
  region  = "us-east-1"
  profile = "hearth"

  default_tags {
    tags = {
      Project     = "hearth"
      ManagedBy   = "terraform"
      Environment = "production"
    }
  }
}

# S3 Bucket for Terraform State
resource "aws_s3_bucket" "terraform_state" {
  bucket = "hearth-terraform-state-${data.aws_caller_identity.current.account_id}"

  lifecycle {
    prevent_destroy = true
  }
}

resource "aws_s3_bucket_versioning" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id

  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "aws:kms"
    }
    bucket_key_enabled = true
  }
}

resource "aws_s3_bucket_public_access_block" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

# DynamoDB Table for State Locking
resource "aws_dynamodb_table" "terraform_locks" {
  name         = "hearth-terraform-locks"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}

data "aws_caller_identity" "current" {}

output "state_bucket" {
  value = aws_s3_bucket.terraform_state.id
}

output "state_bucket_arn" {
  value = aws_s3_bucket.terraform_state.arn
}

output "lock_table" {
  value = aws_dynamodb_table.terraform_locks.id
}

output "account_id" {
  value = data.aws_caller_identity.current.account_id
}

terraform/minimal/README.md
# Hearth Minimal Deployment
Single EC2 + k3s infrastructure for ~1 user. Cost: ~$7.50/month.
## Architecture
```
┌────────────────────────────────────────────────────────────┐
│                         Internet                           │
└────────────────────────────┬───────────────────────────────┘
                   ┌─────────┴─────────┐
                   │    Elastic IP     │
                   │   git.beyond...   │
                   └─────────┬─────────┘
           ┌─────────────────┼──────────────────┐
           │                 │                  │
       :22 (git)        :443 (https)     :2222 (admin ssh)
           │                 │                  │
┌──────────┴─────────────────┴──────────────────┴────────────┐
│                    EC2 t4g.small (spot)                    │
│                                                            │
│  ┌──────────────────────────────────────────────────────┐  │
│  │                         k3s                          │  │
│  │  ┌─────────────┐  ┌─────────────┐  ┌──────────────┐  │  │
│  │  │   Traefik   │  │   Forgejo   │  │    SQLite    │  │  │
│  │  │  (ingress)  │  │    (git)    │  │    (data)    │  │  │
│  │  └─────────────┘  └─────────────┘  └──────────────┘  │  │
│  └──────────────────────────────────────────────────────┘  │
│                                                            │
│                        EBS gp3 20GB                        │
└─────────────────────────┬──────────────────────────────────┘
                          │
                   Daily Backup to S3
```
## Cost Breakdown
| Component | Monthly |
|-----------|---------|
| EC2 t4g.small spot | ~$5 |
| EBS gp3 20GB | ~$2 |
| Elastic IP | $0 (attached) |
| S3 backups | ~$0.50 |
| **Total** | **~$7.50** |
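The EC2 and EBS lines follow from hourly and per-GB rates. A rough sketch of the derivation, with illustrative numbers (the spot rate is an assumption — spot prices fluctuate, so check current pricing; gp3 is about $0.08/GB-month in us-east-1):

```python
# Rough derivation of the ~$7.50/mo total; rates are illustrative.
HOURS_PER_MONTH = 730

spot_hourly = 0.007    # assumed t4g.small spot rate (verify current pricing)
ebs_gb_month = 0.08    # gp3 $/GB-month, us-east-1
volume_gb = 20
s3_backups = 0.50      # small backup bucket, from the table above

ec2 = spot_hourly * HOURS_PER_MONTH
ebs = ebs_gb_month * volume_gb
total = ec2 + ebs + s3_backups
print(f"EC2 ~${ec2:.2f}  EBS ~${ebs:.2f}  total ~${total:.2f}/mo")
```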
## Prerequisites
1. AWS CLI configured with `hearth` profile
2. Terraform >= 1.5.0
3. Domain with DNS access
## Deployment
```bash
# 1. Initialize terraform
cd terraform/minimal
terraform init
# 2. Review configuration
vim terraform.tfvars # Set your domain and email
# 3. Plan
terraform plan
# 4. Apply
terraform apply
# 5. Note the outputs
terraform output
# 6. Configure DNS
# Add A record: git.yourdomain.com -> <elastic_ip>
# 7. Wait for DNS propagation (5-30 minutes)
# 8. Visit https://git.yourdomain.com to complete Forgejo setup
```
## Post-Deployment
### SSH Access
```bash
# Admin SSH (system access)
ssh -p 2222 ec2-user@<elastic_ip>
# Check k3s status
sudo kubectl get pods -A
# View Forgejo logs
sudo kubectl logs -n forgejo deploy/forgejo
```
### Git Access
```bash
# Clone a repo (after creating it in web UI)
git clone git@git.yourdomain.com:org/repo.git
```
### Backups
Automatic daily backups to S3 at 3 AM UTC.
```bash
# Manual backup
sudo /usr/local/bin/backup-forgejo.sh hearth-backups-<account_id>
# List backups
aws s3 ls s3://hearth-backups-<account_id>/backups/
```
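The backup objects follow a predictable naming and retention scheme: keys like `backups/backup-YYYYMMDD-HHMMSS.tar.gz`, moved to STANDARD_IA at 7 days and expired at 30 by the S3 lifecycle rule. A small sketch of both, assuming the rules configured in `main.tf`:

```python
from datetime import datetime

def backup_key(ts: datetime) -> str:
    """Key format used by the daily backup cron job."""
    return f"backups/backup-{ts:%Y%m%d-%H%M%S}.tar.gz"

def lifecycle_state(age_days: int) -> str:
    """Storage stage of a backup object under the configured lifecycle rule."""
    if age_days >= 30:
        return "expired"
    if age_days >= 7:
        return "STANDARD_IA"
    return "STANDARD"

run = datetime(2026, 1, 24, 3, 0, 0)  # 3 AM UTC daily run
print(backup_key(run))                # → backups/backup-20260124-030000.tar.gz
print(lifecycle_state(3), lifecycle_state(10), lifecycle_state(40))
```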
### Restore from Backup
```bash
# Download backup
aws s3 cp s3://hearth-backups-<account_id>/backups/backup-TIMESTAMP.tar.gz /tmp/
# Extract
mkdir -p /tmp/restore
tar -xzf /tmp/backup-TIMESTAMP.tar.gz -C /tmp/restore
# Stop Forgejo
sudo kubectl scale deploy/forgejo -n forgejo --replicas=0
# Restore database
sudo cp /tmp/restore/gitea.db /data/forgejo/gitea/gitea.db
sudo chown 1000:1000 /data/forgejo/gitea/gitea.db
# Start Forgejo
sudo kubectl scale deploy/forgejo -n forgejo --replicas=1
```
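Before copying the database back into place, it can be worth confirming the extracted `gitea.db` actually opens and passes SQLite's own integrity check. A small stdlib sketch (paths are illustrative — the demo uses a throwaway database as a stand-in for the restored file):

```python
import os
import sqlite3
import tempfile

def verify_sqlite(path: str) -> bool:
    """Return True if the database opens and PRAGMA integrity_check passes."""
    con = sqlite3.connect(path)
    try:
        row = con.execute("PRAGMA integrity_check").fetchone()
        return row is not None and row[0] == "ok"
    finally:
        con.close()

# Demo against a throwaway database (stand-in for the restored gitea.db):
demo = os.path.join(tempfile.gettempdir(), "restore-check.db")
con = sqlite3.connect(demo)
con.execute("CREATE TABLE IF NOT EXISTS t (id INTEGER)")
con.commit()
con.close()
print(verify_sqlite(demo))  # → True
```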
## Upgrade Path
When you outgrow this setup:
1. **More resources**: Change instance type in terraform.tfvars
2. **High availability**: Migrate to EKS using the same manifests
3. **Multiple users**: Add authentication via Keycloak
The Kubernetes manifests are portable to any k8s cluster.
## Troubleshooting
### Forgejo not starting
```bash
sudo kubectl describe pod -n forgejo
sudo kubectl logs -n forgejo deploy/forgejo
```
### TLS not working
```bash
# Check Traefik logs
sudo kubectl logs -n traefik deploy/traefik
# Verify DNS is pointing to correct IP
dig git.yourdomain.com
```
### Spot instance interrupted
The instance will automatically restart. Data is preserved on EBS.
Check instance status in AWS console.
## Security Notes
1. **Restrict admin access**: Update `admin_cidr_blocks` in terraform.tfvars
2. **SSH keys**: Add your public key to `~/.ssh/authorized_keys` on the instance
3. **Forgejo admin**: Create admin account during initial setup
4. **Updates**: Automatic security updates enabled via dnf-automatic

terraform/minimal/main.tf
# Hearth Minimal Infrastructure
# Single EC2 + k3s for ~1 user
# Cost: ~$7.50/month
terraform {
  required_version = ">= 1.5.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.30"
    }
  }

  backend "s3" {
    bucket         = "hearth-terraform-state-181640953119"
    key            = "hearth-minimal/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "hearth-terraform-locks"
    encrypt        = true
    profile        = "hearth"
  }
}

provider "aws" {
  region  = var.aws_region
  profile = "hearth"

  default_tags {
    tags = {
      Project     = "hearth"
      Environment = "minimal"
      ManagedBy   = "terraform"
    }
  }
}
# -----------------------------------------------------------------------------
# Data Sources
# -----------------------------------------------------------------------------
data "aws_availability_zones" "available" {
  state = "available"
}

data "aws_caller_identity" "current" {}

# Amazon Linux 2023 ARM64 (for t4g instances)
data "aws_ami" "al2023" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["al2023-ami-*-arm64"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }
}
# -----------------------------------------------------------------------------
# VPC - Minimal single public subnet
# -----------------------------------------------------------------------------
resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true
  enable_dns_support   = true

  tags = {
    Name = "hearth-minimal"
  }
}

resource "aws_internet_gateway" "main" {
  vpc_id = aws_vpc.main.id

  tags = {
    Name = "hearth-minimal"
  }
}

resource "aws_subnet" "public" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.1.0/24"
  availability_zone       = data.aws_availability_zones.available.names[0]
  map_public_ip_on_launch = true

  tags = {
    Name = "hearth-minimal-public"
  }
}

resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.main.id
  }

  tags = {
    Name = "hearth-minimal-public"
  }
}

resource "aws_route_table_association" "public" {
  subnet_id      = aws_subnet.public.id
  route_table_id = aws_route_table.public.id
}
# -----------------------------------------------------------------------------
# Security Group
# -----------------------------------------------------------------------------
resource "aws_security_group" "forgejo" {
  name        = "hearth-forgejo"
  description = "Security group for Forgejo server"
  vpc_id      = aws_vpc.main.id

  # SSH for Git (Forgejo)
  ingress {
    description = "Git SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # HTTP (redirect to HTTPS)
  ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # HTTPS
  ingress {
    description = "HTTPS"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Admin SSH (restricted - update with your IP)
  ingress {
    description = "Admin SSH"
    from_port   = 2222
    to_port     = 2222
    protocol    = "tcp"
    cidr_blocks = var.admin_cidr_blocks
  }

  # Kubernetes API (for local kubectl, restricted)
  ingress {
    description = "Kubernetes API"
    from_port   = 6443
    to_port     = 6443
    protocol    = "tcp"
    cidr_blocks = var.admin_cidr_blocks
  }

  # All outbound
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "hearth-forgejo"
  }
}
# -----------------------------------------------------------------------------
# IAM Role for EC2 (S3 backup access)
# -----------------------------------------------------------------------------
resource "aws_iam_role" "forgejo" {
  name = "hearth-forgejo"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          Service = "ec2.amazonaws.com"
        }
      }
    ]
  })
}

resource "aws_iam_role_policy" "forgejo_backup" {
  name = "forgejo-backup"
  role = aws_iam_role.forgejo.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = [
          "s3:PutObject",
          "s3:GetObject",
          "s3:ListBucket",
          "s3:DeleteObject"
        ]
        Resource = [
          aws_s3_bucket.backups.arn,
          "${aws_s3_bucket.backups.arn}/*"
        ]
      },
      {
        Effect = "Allow"
        Action = [
          "ec2:CreateSnapshot",
          "ec2:DescribeSnapshots",
          "ec2:DeleteSnapshot"
        ]
        Resource = "*"
      }
    ]
  })
}

resource "aws_iam_instance_profile" "forgejo" {
  name = "hearth-forgejo"
  role = aws_iam_role.forgejo.name
}
# -----------------------------------------------------------------------------
# S3 Bucket for Backups
# -----------------------------------------------------------------------------
resource "aws_s3_bucket" "backups" {
  bucket = "hearth-backups-${data.aws_caller_identity.current.account_id}"

  tags = {
    Name = "hearth-backups"
  }
}

resource "aws_s3_bucket_versioning" "backups" {
  bucket = aws_s3_bucket.backups.id

  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_lifecycle_configuration" "backups" {
  bucket = aws_s3_bucket.backups.id

  rule {
    id     = "expire-old-backups"
    status = "Enabled"

    filter {
      prefix = ""
    }

    # Keep 30 days of backups
    expiration {
      days = 30
    }

    # Move to cheaper storage after 7 days
    transition {
      days          = 7
      storage_class = "STANDARD_IA"
    }
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "backups" {
  bucket = aws_s3_bucket.backups.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}
# -----------------------------------------------------------------------------
# EC2 Instance
# -----------------------------------------------------------------------------
resource "aws_instance" "forgejo" {
  ami                    = data.aws_ami.al2023.id
  instance_type          = var.instance_type
  subnet_id              = aws_subnet.public.id
  iam_instance_profile   = aws_iam_instance_profile.forgejo.name
  vpc_security_group_ids = [aws_security_group.forgejo.id]

  # Use spot instance for cost savings
  instance_market_options {
    market_type = "spot"
    spot_options {
      max_price                      = var.spot_max_price
      spot_instance_type             = "persistent"
      instance_interruption_behavior = "stop"
    }
  }

  root_block_device {
    volume_size           = var.volume_size
    volume_type           = "gp3"
    iops                  = 3000
    throughput            = 125
    delete_on_termination = false # Preserve data on instance termination
    encrypted             = true

    tags = {
      Name = "hearth-forgejo-root"
    }
  }

  user_data = base64encode(templatefile("${path.module}/user-data.sh", {
    domain            = var.domain
    letsencrypt_email = var.letsencrypt_email
    ssh_port          = var.admin_ssh_port
    s3_bucket         = aws_s3_bucket.backups.id
  }))

  tags = {
    Name = "hearth-forgejo"
  }

  lifecycle {
    ignore_changes = [ami] # Don't replace on AMI updates
  }
}

# -----------------------------------------------------------------------------
# Elastic IP (stable DNS)
# -----------------------------------------------------------------------------
resource "aws_eip" "forgejo" {
  instance = aws_instance.forgejo.id
  domain   = "vpc"

  tags = {
    Name = "hearth-forgejo"
  }
}
# -----------------------------------------------------------------------------
# Outputs
# -----------------------------------------------------------------------------
output "instance_id" {
  description = "EC2 instance ID"
  value       = aws_instance.forgejo.id
}

output "public_ip" {
  description = "Elastic IP address"
  value       = aws_eip.forgejo.public_ip
}

output "ssh_command" {
  description = "SSH command for admin access"
  value       = "ssh -p ${var.admin_ssh_port} ec2-user@${aws_eip.forgejo.public_ip}"
}

output "forgejo_url" {
  description = "Forgejo web URL"
  value       = "https://${var.domain}"
}

output "git_clone_url" {
  description = "Git clone URL format"
  value       = "git@${var.domain}:ORG/REPO.git"
}

output "backup_bucket" {
  description = "S3 bucket for backups"
  value       = aws_s3_bucket.backups.id
}
output "dns_record" {
  description = "DNS A record to create"
  value       = "${var.domain} -> ${aws_eip.forgejo.public_ip}"
}

# Hearth Minimal Configuration
# Copy to terraform.tfvars and update values
domain = "git.example.com"
letsencrypt_email = "admin@example.com"
# EC2 Configuration
instance_type = "t4g.small"
volume_size = 20
# Admin access - restrict to your IP for security
# Find your IP: curl -s ifconfig.me
admin_cidr_blocks = ["YOUR_IP/32"] # e.g., ["1.2.3.4/32"]
admin_ssh_port = 2222

terraform/minimal/user-data.sh
#!/bin/bash
set -euo pipefail
# Hearth Minimal - EC2 User Data Script
# Installs k3s and deploys Forgejo
exec > >(tee /var/log/user-data.log) 2>&1
echo "Starting user-data script at $(date)"
# -----------------------------------------------------------------------------
# Variables from Terraform
# -----------------------------------------------------------------------------
DOMAIN="${domain}"
LETSENCRYPT_EMAIL="${letsencrypt_email}"
SSH_PORT="${ssh_port}"
S3_BUCKET="${s3_bucket}"
# -----------------------------------------------------------------------------
# System Setup
# -----------------------------------------------------------------------------
# Update system
dnf update -y
# Install required packages (sqlite provides the sqlite3 CLI used by the backup script)
dnf install -y docker git jq awscli sqlite
# Enable and start Docker (for building if needed)
systemctl enable --now docker
# Move SSH to alternate port for admin access
sed -i "s/#Port 22/Port $SSH_PORT/" /etc/ssh/sshd_config
systemctl restart sshd
# Enable automatic security updates
dnf install -y dnf-automatic
sed -i 's/apply_updates = no/apply_updates = yes/' /etc/dnf/automatic.conf
systemctl enable --now dnf-automatic-install.timer
# -----------------------------------------------------------------------------
# Install k3s
# -----------------------------------------------------------------------------
echo "Installing k3s..."
curl -sfL https://get.k3s.io | sh -s - \
  --disable traefik \
  --write-kubeconfig-mode 644

# Wait for k3s to be ready
echo "Waiting for k3s to be ready..."
until kubectl get nodes 2>/dev/null | grep -q "Ready"; do
  sleep 5
done
echo "k3s is ready"
# -----------------------------------------------------------------------------
# Install Traefik with Let's Encrypt
# -----------------------------------------------------------------------------
echo "Installing Traefik..."
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: traefik
---
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: traefik
  namespace: kube-system
spec:
  repo: https://traefik.github.io/charts
  chart: traefik
  targetNamespace: traefik
  valuesContent: |-
    ports:
      ssh:
        port: 2222
        exposedPort: 22
        expose:
          default: true
        protocol: TCP
      web:
        redirectTo:
          port: websecure
      websecure:
        tls:
          enabled: true
    ingressRoute:
      dashboard:
        enabled: false
    certificatesResolvers:
      letsencrypt:
        acme:
          email: $${LETSENCRYPT_EMAIL}
          storage: /data/acme.json
          httpChallenge:
            entryPoint: web
    persistence:
      enabled: true
      size: 128Mi
    additionalArguments:
      - "--certificatesresolvers.letsencrypt.acme.tlschallenge=true"
      - "--entrypoints.ssh.address=:2222/tcp"
    service:
      type: LoadBalancer
EOF
# Wait for Traefik
echo "Waiting for Traefik..."
sleep 30
# -----------------------------------------------------------------------------
# Create Forgejo Namespace and Resources
# -----------------------------------------------------------------------------
echo "Creating Forgejo namespace..."
kubectl create namespace forgejo --dry-run=client -o yaml | kubectl apply -f -
# Create Forgejo data directory
mkdir -p /data/forgejo
chown 1000:1000 /data/forgejo
# Create PV and PVC for Forgejo data
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: forgejo-data
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/forgejo
  storageClassName: local
  persistentVolumeReclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: forgejo-data
  namespace: forgejo
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: local
EOF
# -----------------------------------------------------------------------------
# Deploy Forgejo
# -----------------------------------------------------------------------------
echo "Deploying Forgejo..."
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: forgejo
  namespace: forgejo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: forgejo
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: forgejo
    spec:
      securityContext:
        fsGroup: 1000
      containers:
        - name: forgejo
          image: codeberg.org/forgejo/forgejo:9
          ports:
            - name: http
              containerPort: 3000
            - name: ssh
              containerPort: 22
          env:
            - name: FORGEJO__server__DOMAIN
              value: "$${DOMAIN}"
            - name: FORGEJO__server__ROOT_URL
              value: "https://$${DOMAIN}"
            - name: FORGEJO__server__SSH_DOMAIN
              value: "$${DOMAIN}"
            - name: FORGEJO__server__SSH_PORT
              value: "22"
            - name: FORGEJO__server__LFS_START_SERVER
              value: "true"
            - name: FORGEJO__database__DB_TYPE
              value: "sqlite3"
            - name: FORGEJO__database__PATH
              value: "/data/gitea/gitea.db"
            - name: FORGEJO__security__INSTALL_LOCK
              value: "false"
            - name: FORGEJO__service__DISABLE_REGISTRATION
              value: "false"
            - name: FORGEJO__log__MODE
              value: "console"
            - name: FORGEJO__log__LEVEL
              value: "Info"
          volumeMounts:
            - name: data
              mountPath: /data
          resources:
            requests:
              cpu: 100m
              memory: 256Mi
            limits:
              cpu: 1000m
              memory: 1Gi
          livenessProbe:
            httpGet:
              path: /api/healthz
              port: 3000
            initialDelaySeconds: 30
            periodSeconds: 30
          readinessProbe:
            httpGet:
              path: /api/healthz
              port: 3000
            initialDelaySeconds: 5
            periodSeconds: 10
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: forgejo-data
---
apiVersion: v1
kind: Service
metadata:
  name: forgejo
  namespace: forgejo
spec:
  selector:
    app: forgejo
  ports:
    - name: http
      port: 3000
      targetPort: 3000
    - name: ssh
      port: 22
      targetPort: 22
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: forgejo
  namespace: forgejo
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
    traefik.ingress.kubernetes.io/router.tls: "true"
    traefik.ingress.kubernetes.io/router.tls.certresolver: letsencrypt
spec:
  rules:
    - host: $${DOMAIN}
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: forgejo
                port:
                  number: 3000
---
apiVersion: traefik.io/v1alpha1
kind: IngressRouteTCP
metadata:
  name: forgejo-ssh
  namespace: forgejo
spec:
  entryPoints:
    - ssh
  routes:
    - match: HostSNI(\`*\`)
      services:
        - name: forgejo
          port: 22
EOF
# -----------------------------------------------------------------------------
# Setup Backup Cron Job
# -----------------------------------------------------------------------------
echo "Setting up backup cron..."
cat <<'BACKUP_SCRIPT' > /usr/local/bin/backup-forgejo.sh
#!/bin/bash
set -euo pipefail
TIMESTAMP=$(date +%Y%m%d-%H%M%S)
S3_BUCKET="$1"
BACKUP_DIR="/tmp/backup-$TIMESTAMP"
mkdir -p "$BACKUP_DIR"
# Backup Forgejo SQLite database
if [ -f /data/forgejo/gitea/gitea.db ]; then
  sqlite3 /data/forgejo/gitea/gitea.db ".backup '$BACKUP_DIR/gitea.db'"
fi

# Backup k3s state
cp -r /var/lib/rancher/k3s/server/db "$BACKUP_DIR/k3s-db" 2>/dev/null || true

# Create tarball
tar -czf "/tmp/backup-$TIMESTAMP.tar.gz" -C "$BACKUP_DIR" .

# Upload to S3
aws s3 cp "/tmp/backup-$TIMESTAMP.tar.gz" "s3://$S3_BUCKET/backups/backup-$TIMESTAMP.tar.gz"

# Cleanup
rm -rf "$BACKUP_DIR" "/tmp/backup-$TIMESTAMP.tar.gz"

# Retention is handled by the S3 lifecycle policy (STANDARD_IA at 7 days, expiry at 30)
echo "Backup completed: s3://$S3_BUCKET/backups/backup-$TIMESTAMP.tar.gz"
BACKUP_SCRIPT
chmod +x /usr/local/bin/backup-forgejo.sh
# Add cron job for daily backup at 3 AM
echo "0 3 * * * root /usr/local/bin/backup-forgejo.sh $${S3_BUCKET} >> /var/log/backup.log 2>&1" > /etc/cron.d/forgejo-backup
# Initial backup
/usr/local/bin/backup-forgejo.sh "$${S3_BUCKET}" || true
# -----------------------------------------------------------------------------
# Done
# -----------------------------------------------------------------------------
echo "User-data script completed at $(date)"
echo ""
echo "=========================================="
echo "Forgejo deployment complete!"
echo "=========================================="
echo ""
echo "Web URL:   https://$${DOMAIN}"
echo "Git SSH:   git@$${DOMAIN}:ORG/REPO.git"
echo "Admin SSH: ssh -p $${SSH_PORT} ec2-user@<ELASTIC_IP>"
echo ""
echo "Next steps:"
echo "1. Point DNS: $${DOMAIN} -> <ELASTIC_IP>"
echo "2. Wait for DNS propagation"
echo "3. Visit https://$${DOMAIN} to complete setup"
echo "=========================================="

# Hearth Minimal - Variables
variable "aws_region" {
  description = "AWS region"
  type        = string
  default     = "us-east-1"
}

variable "domain" {
  description = "Domain for Forgejo (e.g., git.example.com)"
  type        = string
}

variable "letsencrypt_email" {
  description = "Email for Let's Encrypt certificate notifications"
  type        = string
}

variable "instance_type" {
  description = "EC2 instance type"
  type        = string
  default     = "t4g.small" # 2 vCPU, 2GB RAM, ARM64
}

variable "volume_size" {
  description = "Root volume size in GB"
  type        = number
  default     = 20
}

variable "spot_max_price" {
  description = "Maximum spot price (empty = on-demand price)"
  type        = string
  default     = "" # Use on-demand price as max
}

variable "admin_ssh_port" {
  description = "SSH port for admin access"
  type        = number
  default     = 2222
}

variable "admin_cidr_blocks" {
  description = "CIDR blocks allowed for admin SSH and k8s API"
  type        = list(string)
  default     = ["0.0.0.0/0"] # Restrict this in production!
}