
How Startups and SMBs Can Host Multiple Websites on a Low-Tier AWS EC2 Instance Without Disk Issues Using S3


We helped a client consolidate multiple websites onto a single low-tier EC2 instance (a t2.micro with a 16 GB root volume) by moving builds off the instance to GitHub Actions, deploying only build artifacts to EC2 via SSH, and uploading backups and media to S3 before deleting the local copies. The result: predictable disk usage, reduced disk I/O spikes, and stable sites under modest concurrency, all at low cost.

Problem Observed

Symptoms

  • Several sites (WordPress sites, the company website, blogs, and product microsites) shared a single t2.micro with a 16 GB root volume.
  • Source trees, build artifacts, media uploads, and backups all lived on the same EBS volume.
  • The instance became unresponsive under modest traffic (~5–10 concurrent views) because builds and backup I/O saturated its CPU and disk.

Root causes

  • Build tasks and backups ran on the instance, consuming CPU and disk I/O.
  • Local backups and media accumulated until the 16 GB volume filled.
  • No automated off-instance storage or cleanup for large files.

Solution Summary

What Versich implemented

  • Source in GitHub: We moved the site source to GitHub and used a prod branch for production.
  • Builds in GitHub Actions: We run builds off-instance so EC2 never compiles code.
  • Deploy artifacts only: Upload only the built build/ or public/ output to EC2 via SSH/SCP.
  • Backups and media to S3: Upload backups and media to S3 and delete local copies after a successful upload.
  • Automate: Trigger S3 uploads from GitHub Actions after deploy or run a scheduled script on EC2.
  • Secure credentials: Store SSH keys in GitHub Secrets and prefer EC2 IAM roles or GitHub OIDC for AWS access.

Why this works

  • Offloading builds removes CPU/disk pressure from the instance.
  • Moving backups/media to S3 frees EBS space and avoids long-term local storage growth.
  • Deploying only artifacts keeps the EC2 filesystem small and predictable.

Implementation Steps

Below are the steps we took to implement the solution:

Step 1: Prepare Repositories and Branches

  • Create a repo per site or a monorepo with per-site folders.
  • Use a prod branch for production deploys.
  • Keep build scripts in the repo so Actions can run npm run build, hugo, jekyll build, or packaging steps for PHP/WordPress.
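
A few typical build commands, depending on the stack (the output directories shown are common defaults and the theme path is a placeholder; adjust to your projects):

# Node/React static site -> build/ or dist/
npm ci && npm run build

# Hugo -> public/
hugo --minify

# Jekyll -> _site/
bundle install && bundle exec jekyll build

# PHP/WordPress: package only the code you deploy (theme/plugin), not uploads
tar -czf site-artifact.tar.gz wp-content/themes/your-theme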

Step 2: GitHub Actions Build and Deploy Workflow

Purpose

  • Build in GitHub Actions.
  • Copy only built artifacts to EC2 using SCP/SSH.
  • Optionally trigger the remote S3 upload script on EC2.

Workflow example

name: Build and Deploy to EC2
on:
  push:
    branches: [ prod ]

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Build site
        run: |
          # Example for Node static site; replace with your build commands
          npm ci
          npm run build

      - name: Copy build to EC2 via SCP
        env:
          SSH_PRIVATE_KEY: ${{ secrets.EC2_SSH_KEY }}
        run: |
          mkdir -p ~/.ssh
          echo "$SSH_PRIVATE_KEY" > ~/.ssh/id_rsa
          chmod 600 ~/.ssh/id_rsa
          scp -o StrictHostKeyChecking=no -i ~/.ssh/id_rsa -r ./build/* ec2-user@${{ secrets.EC2_HOST }}:/var/www/site/

      - name: Trigger remote S3 upload script
        run: |
          # The SSH key written to ~/.ssh/id_rsa in the previous step persists for this job
          ssh -o StrictHostKeyChecking=no -i ~/.ssh/id_rsa ec2-user@${{ secrets.EC2_HOST }} 'sudo /usr/local/bin/upload-to-s3.sh'

Notes

  • Store EC2_SSH_KEY and EC2_HOST in GitHub Secrets.
  • Use ec2-user or the appropriate user for your AMI.
  • If you prefer, use a community action like appleboy/scp-action for SCP steps.

Step 3: S3 Upload Script on EC2

Purpose

Upload backups and media to S3 and delete local copies to keep the EC2 disk clean.

Install and place script

  • Save the script below to /usr/local/bin/upload-to-s3.sh and make it executable: sudo chmod +x /usr/local/bin/upload-to-s3.sh.

Script

#!/usr/bin/env bash
set -euo pipefail

log(){ echo "[$(date -u +'%Y-%m-%dT%H:%M:%SZ')] $*"; }

: "${S3_BUCKET:?Need S3_BUCKET environment variable}"
: "${BACKUP_DIR:=/var/backups}"
: "${S3_PREFIX:=backups}"
DATE_FOLDER=$(date +%F)

log "Starting backup upload to S3..."
# Install the AWS CLI if it is missing (apt-get assumes a Debian/Ubuntu AMI; use yum or dnf on Amazon Linux)
command -v aws >/dev/null 2>&1 || { log "AWS CLI missing; installing..."; sudo apt-get update && sudo apt-get install -y awscli; }

log "Checking S3 bucket access: s3://${S3_BUCKET}"
aws s3 ls "s3://${S3_BUCKET}" >/dev/null

cd "${BACKUP_DIR}"
log "Found files: $(ls -1 *.tar.gz *.sql 2>/dev/null || true)"

for f in *.tar.gz *.sql; do
  [ -f "$f" ] || continue
  log "Uploading $f..."
  aws s3 cp "$f" "s3://${S3_BUCKET}/${S3_PREFIX}/${DATE_FOLDER}/$f" || { log "Upload failed for $f"; exit 1; }
  rm -- "$f"
  log "Uploaded and removed $f"
done

log "Completed backup upload to S3."

Environment variables required on EC2

  • S3_BUCKET - bucket name (e.g., my-company-backups)
  • Optional: BACKUP_DIR, S3_PREFIX
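
The upload script reads these from its environment, so they must also be visible to cron. A minimal way to set them (the bucket name is an example) is either at the top of the crontab or system-wide in /etc/environment:

# Option A: define it at the top of the crontab (crontab -e)
S3_BUCKET=my-company-backups

# Option B: set it system-wide; picked up on the next login/cron run on most distributions
echo 'S3_BUCKET=my-company-backups' | sudo tee -a /etc/environment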

Permissions

  • Attach an IAM role to the EC2 instance with s3:PutObject and s3:ListBucket for the target bucket. If using AWS keys, store them securely and rotate regularly.
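
The upload script assumes backup archives already exist in BACKUP_DIR. A minimal sketch of a nightly job that produces them is shown below; the web root path, database name, and credentials file are assumptions to adapt to your sites:

#!/usr/bin/env bash
set -euo pipefail

BACKUP_DIR=/var/backups
STAMP=$(date +%F)

# Archive the web root (runtime files and uploads)
tar -czf "${BACKUP_DIR}/site-${STAMP}.tar.gz" -C /var/www site

# Dump the database; credentials come from a protected option file, not the command line
mysqldump --defaults-extra-file=/root/.my.cnf wordpress > "${BACKUP_DIR}/db-${STAMP}.sql"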

Step 4: Scheduling and Triggering Uploads

  • Trigger from GitHub Actions: After deploy, SSH into EC2 and run the upload script (see workflow above).
  • Run on a schedule from EC2: Add a cron job to run nightly or hourly depending on backup frequency.

Example cron entry

10 2 * * * /usr/local/bin/upload-to-s3.sh >> /var/log/upload-to-s3.log 2>&1

Step 5: EC2 Housekeeping and Runtime Best Practices

  • Remove build tools and source trees from production instances. Keep only runtime dependencies.
  • Keep web root small, store only runtime assets and caches.
  • Clear temp directories periodically (/tmp, application caches).
  • Monitor disk usage with a simple cron script that alerts when free space drops below a threshold.

Sample disk check script

#!/usr/bin/env bash
# Warn when root filesystem usage reaches THRESHOLD percent
THRESHOLD=80

# The second line of `df /` holds the usage column; strip the trailing %
USAGE=$(df / | awk 'NR==2 {print $5}' | sed 's/%//')
if [ "$USAGE" -ge "$THRESHOLD" ]; then
  echo "Disk usage at ${USAGE}% on $(hostname) at $(date)"
fi
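
To run this check hourly (saved, for example, at /usr/local/bin/check-disk.sh) and cover the periodic temp cleanup mentioned above, cron entries along these lines work as a starting point; MAILTO makes cron email any output, and the 7-day cutoff is an example:

MAILTO=ops@example.com
0 * * * * /usr/local/bin/check-disk.sh
30 3 * * * find /tmp -type f -mtime +7 -delete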

Validation and Results

What to measure

  • Disk usage (df -h) before and after migration.
  • CloudWatch metrics for DiskReadOps, DiskWriteOps, and CPUUtilization.
  • Site response times and error rates under modest concurrency tests.

Sample validation commands

# Disk usage
df -h / | awk 'NR==1 || NR==2'

# List uploaded backups in S3
aws s3 ls s3://my-company-backups/backups/2025-01-19/
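
CloudWatch metrics can be pulled from the CLI for the same before/after comparison; the instance ID and time window below are placeholders:

# CPU utilization; swap in DiskReadOps or DiskWriteOps for the disk metrics
aws cloudwatch get-metric-statistics \
  --namespace AWS/EC2 \
  --metric-name CPUUtilization \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --start-time 2025-01-18T00:00:00Z --end-time 2025-01-19T00:00:00Z \
  --period 3600 --statistics Average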

Expected results

  • Significant reduction in used EBS space on the instance.
  • Lower disk I/O and fewer spikes in CloudWatch metrics during builds/backups.
  • Stable site response under the same modest concurrency that previously caused failures.

Security, Cost, and Operational Recommendations

Security

  • Prefer IAM roles or GitHub OIDC over long-lived AWS keys.
  • Store SSH private keys in GitHub Secrets and rotate regularly.
  • Limit IAM role permissions to the specific S3 bucket and required actions.

Cost

  • S3 storage for backups is inexpensive for SMBs; use STANDARD_IA or Glacier for older backups if retention is long.
  • Monitor S3 request costs if backups are frequent.
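
A lifecycle rule can move older backups to cheaper storage classes automatically. A minimal sketch (bucket name, transition age, and expiry are example values):

aws s3api put-bucket-lifecycle-configuration \
  --bucket my-company-backups \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "archive-old-backups",
      "Status": "Enabled",
      "Filter": { "Prefix": "backups/" },
      "Transitions": [{ "Days": 30, "StorageClass": "STANDARD_IA" }],
      "Expiration": { "Days": 365 }
    }]
  }'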

Operational

  • Test restores from S3 regularly to ensure backups are valid.
  • Define a retention policy (e.g., keep last 30 daily backups).
  • Set CloudWatch alarms for disk usage and I/O to detect regressions early.
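
A restore test can be as simple as pulling the newest archive and checking that it unpacks; the bucket and prefix below match the earlier examples:

# Fetch the most recent .tar.gz backup and verify the archive is readable
LATEST=$(aws s3 ls s3://my-company-backups/backups/ --recursive | awk '{print $4}' | grep '\.tar\.gz$' | sort | tail -n 1)
aws s3 cp "s3://my-company-backups/${LATEST}" /tmp/restore-test.tar.gz
tar -tzf /tmp/restore-test.tar.gz > /dev/null && echo "Latest backup archive is readable"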

Sample minimal IAM policy for EC2 role

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": [
        "arn:aws:s3:::my-company-backups",
        "arn:aws:s3:::my-company-backups/*"
      ]
    }
  ]
}
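
If the role is created from the CLI rather than the console, the rough sequence looks like this; the role, profile, and policy file names are examples, and ec2-trust.json is the standard trust policy allowing ec2.amazonaws.com to assume the role:

aws iam create-role --role-name backup-uploader \
  --assume-role-policy-document file://ec2-trust.json
aws iam put-role-policy --role-name backup-uploader \
  --policy-name s3-backup-access --policy-document file://s3-backup-policy.json
aws iam create-instance-profile --instance-profile-name backup-uploader
aws iam add-role-to-instance-profile --instance-profile-name backup-uploader \
  --role-name backup-uploader
aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 \
  --iam-instance-profile Name=backup-uploader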

Migration Checklist for Engineers

  • Prepare GitHub repo: move source, add build scripts, create prod branch.
  • Create GitHub Actions workflow: build, SCP deploy, optional remote trigger.
  • Provision S3 bucket: create bucket, set bucket policy and encryption (see the sketch after this checklist).
  • Attach IAM role to EC2: grant s3:PutObject and s3:ListBucket.
  • Install AWS CLI on EC2: ensure aws is available for the upload script.
  • Deploy upload script: place at /usr/local/bin/upload-to-s3.sh and make executable.
  • Test upload: run script manually and verify files in S3.
  • Schedule cron: add cron entry if not triggered from Actions.
  • Remove build tools: uninstall compilers and dev dependencies from production.
  • Monitor: set CloudWatch alarms for disk usage and I/O.
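
For the bucket provisioning item, a minimal sketch (the bucket name and region are examples; in regions other than us-east-1, create-bucket also needs --create-bucket-configuration):

# Create the bucket, block public access, and enable default encryption
aws s3api create-bucket --bucket my-company-backups --region us-east-1
aws s3api put-public-access-block --bucket my-company-backups \
  --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true
aws s3api put-bucket-encryption --bucket my-company-backups \
  --server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'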

Before and After Comparison

Attribute            | Before                                  | After
Disk usage on EC2    | Source + media + backups filled 16 GB   | Only runtime artifacts; backups moved to S3
Build location       | On EC2                                  | GitHub Actions
Backups              | Stored locally                          | Uploaded to S3 and removed locally
Instance stability   | Choked under low concurrency            | Stable under the same load
Operational cost     | Multiple instances per site             | Single low-tier instance plus S3 storage cost

Business Benefits for Startups and SMBs

  • Predictable Resource Usage: This approach allows startups and SMBs to run multiple live websites on a low-tier EC2 instance while keeping disk usage predictable and avoiding unexpected performance issues.
  • Improved Site Stability: By moving builds and deployment workloads out of the EC2 instance and into GitHub Actions, startups and SMBs reduce CPU and disk strain, resulting in more stable websites under everyday traffic.
  • Lower Storage Costs: Using AWS S3 for backups and media helps startups and SMBs avoid filling limited EC2 disk space while benefiting from inexpensive, highly durable cloud storage.
  • Simplified Server Management: Deploying only production-ready artifacts keeps the EC2 environment small and clean, making it easier for small teams to manage, troubleshoot, and recover systems.
  • Cost-Efficient Hosting at Scale: This solution enables startups and SMBs to host multiple websites without upgrading to larger EC2 instances or repeatedly expanding EBS volumes as their online presence grows.
  • Reduced Risk of Downtime: Automating backup uploads and cleanup prevents disk saturation, helping startups and SMBs avoid outages caused by backups or deployments running at the wrong time.
  • Easy Multi-Site Growth: The architecture supports adding new websites with minimal additional cost or operational effort, which is ideal for startups and SMBs launching new products, blogs, or microsites.
  • Security Without Complexity: Leveraging IAM roles, GitHub Secrets, and automated deployments improves security while keeping the setup simple enough for lean teams without dedicated DevOps staff.

Conclusion

Versich’s DevOps team converted a fragile, disk constrained multi site setup into a stable, low cost deployment pattern by offloading builds to GitHub Actions, deploying only artifacts to EC2, and moving backups/media to S3 with an automated upload and delete script. This approach is practical for startups and SMBs that need to host several small sites on a single low tier instance without risking disk I/O saturation or unexpected downtime.