TL;DR

- A working MySQL backup strategy does not require enterprise tools. mysqldump + cron + offsite storage handles most startup databases reliably for years.
- Follow the 7-4-3 rotation: keep 7 daily backups, 4 weekly backups, and 3 monthly backups. This gives you 90+ days of recovery points using minimal storage.
- A backup that has never been restored is not a backup. Schedule monthly restore tests or accept that your backups might be worthless when you need them most.
- Managed database providers (AWS RDS, DigitalOcean, PlanetScale) include automated backups, but you still need to understand retention, test restores, and keep an independent copy.
- A complete mysqldump backup script, cron schedule, rotation policy, and restore-test procedure are included below. Copy them and adapt them to your setup.
Table of Contents
- MySQL Backup Strategy for Startups: Simple, Reliable, Affordable
- Why Startups Lose Data (It Is Not Always Hardware Failure)
- The mysqldump + Cron Starter Kit
- Backup Rotation: The 7-4-3 Policy
- Offsite Storage: Get Backups Off the Server
- Testing Restores: The Step Everyone Skips
- Managed Provider Backups: What You Get and What You Don't
- AutoMySQLBackup: A Step Up Without the Complexity
- Encrypting Backups
- FAQ
- Conclusion
MySQL Backup Strategy for Startups: Simple, Reliable, Affordable
Every startup founder assumes their database is safe until the day it is not. A junior developer runs DROP TABLE users in production. A migration script corrupts a column. A hosting provider has an outage and the disk does not come back intact. A disgruntled contractor deletes records before their access is revoked. In each case, the only thing standing between your company and catastrophic data loss is your MySQL backup strategy.
The problem is that most backup advice is written for enterprises with dedicated DBAs, $50,000 backup appliances, and 300-page disaster recovery plans. Startups do not have any of that. What startups need is a backup strategy that is simple enough for one developer to set up in an afternoon, reliable enough to survive real incidents, and affordable enough to run on a $20/month server.
This article is that strategy. Every section includes the actual commands and scripts. No theory without implementation.
Why Startups Lose Data (It Is Not Always Hardware Failure)
Before diving into the technical setup, it is worth understanding what actually causes data loss at startups. The threats are more mundane than you might expect.
Human error is the leading cause. According to a 2024 Databarracks Data Health Check, 37% of data loss incidents are caused by human error — accidental deletion, botched migrations, and misconfigured scripts. Hardware failure accounts for about 18%, and cyberattacks (including ransomware) account for 22%.
Startups are disproportionately affected because they typically have fewer safeguards:
- No separate staging database, so developers run queries against production
- Shared credentials with full privileges, so any mistake affects all data
- No backup testing, so the "backups" that have been running for months turn out to be corrupted or incomplete
- No point-in-time recovery capability, so even with backups you can only restore to the last full dump
A 2023 Veeam Data Protection Trends Report found that 85% of organizations experienced at least one ransomware attack in the previous year, and of those, 16% were unable to recover their data. For startups without a tested backup strategy, that number is significantly higher.
The cost of data loss for a startup is existential. An Arcserve survey reported that 40% of small businesses never recover from a major data loss event. For a SaaS startup, losing customer data can mean losing every customer.
The good news: a working backup strategy prevents all of this, and it costs almost nothing to implement.
The mysqldump + Cron Starter Kit
For databases under 50 GB — which covers the vast majority of startups — mysqldump is the right tool. It is included with every MySQL installation, it produces portable SQL files, and it has been battle-tested for over two decades.
The backup script
Save this as /usr/local/bin/backup-mysql.sh. Because it embeds database credentials, restrict it to root with chmod 700:
#!/bin/bash
set -euo pipefail
# --- Configuration ---
DB_USER="backup_user"
DB_PASS="your-strong-password-here"
DB_HOST="localhost"
BACKUP_DIR="/var/backups/mysql"
DATE=$(date +%Y-%m-%d_%H%M)
RETENTION_DAYS=7
# --- Create backup directory if it does not exist ---
mkdir -p "$BACKUP_DIR"
# --- Dump all databases ---
mysqldump \
--user="$DB_USER" \
--password="$DB_PASS" \
--host="$DB_HOST" \
--single-transaction \
--routines \
--triggers \
--events \
--quick \
--lock-tables=false \
--all-databases \
| gzip > "$BACKUP_DIR/all-databases-$DATE.sql.gz"
# --- Verify the backup is not empty ---
FILESIZE=$(stat -c%s "$BACKUP_DIR/all-databases-$DATE.sql.gz" 2>/dev/null || stat -f%z "$BACKUP_DIR/all-databases-$DATE.sql.gz")
if [ "$FILESIZE" -lt 1000 ]; then
echo "ERROR: Backup file is suspiciously small ($FILESIZE bytes). Check mysqldump output." >&2
exit 1
fi
# --- Delete backups older than retention period ---
find "$BACKUP_DIR" -name "all-databases-*.sql.gz" -mtime +$RETENTION_DAYS -delete
echo "Backup completed: all-databases-$DATE.sql.gz ($FILESIZE bytes)"
# Make it executable
chmod +x /usr/local/bin/backup-mysql.sh
Key mysqldump flags explained
| Flag | Purpose |
|---|---|
| --single-transaction | Takes a consistent snapshot without locking InnoDB tables. Critical for production databases. |
| --routines | Includes stored procedures and functions. Without this, you lose them on restore. |
| --triggers | Includes triggers. Enabled by default in MySQL 8.0+ but explicit is better. |
| --events | Includes scheduled events. Often forgotten, causing silent failures after restore. |
| --quick | Dumps rows one at a time instead of buffering in memory. Essential for large tables. |
| --lock-tables=false | Combined with --single-transaction, avoids table locks entirely for InnoDB. |
The cron schedule
# Edit crontab
crontab -e
# Daily backup at 3:00 AM server time
0 3 * * * /usr/local/bin/backup-mysql.sh >> /var/log/mysql-backup.log 2>&1
That is it. A working automated backup in two files. The script runs every night, creates a compressed dump, verifies the file is not empty, and cleans up old backups. Logs go to /var/log/mysql-backup.log so you can check on failures.
Create a dedicated backup user
Do not use root for backups. Create a user with the minimum privileges needed:
CREATE USER 'backup_user'@'localhost' IDENTIFIED BY 'strong-random-password';
GRANT SELECT, SHOW VIEW, TRIGGER, LOCK TABLES, RELOAD,
PROCESS, REPLICATION CLIENT, EVENT
ON *.* TO 'backup_user'@'localhost';
FLUSH PRIVILEGES;
Backup Rotation: The 7-4-3 Policy
Running daily backups with a flat 7-day retention means you can only go back one week. If data corruption goes unnoticed for 10 days, you have no clean backup to restore from.
The 7-4-3 rotation policy solves this:
- 7 daily backups — one per day for the last week
- 4 weekly backups — one per week for the last month (kept every Sunday)
- 3 monthly backups — one per month for the last quarter (kept on the 1st)
This gives you 90+ days of recovery points using roughly 14 backup files at any given time. For a 1 GB database compressed to 200 MB, that is about 2.8 GB of total storage.
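The storage math is simple enough to sanity-check in a few lines of shell, using the example compressed size from above (200 MB for a ~1 GB database):

```shell
# Back-of-envelope storage for the 7-4-3 policy: 14 retained files at a
# given compressed size. 200 MB is the example figure from the text.
COMPRESSED_MB=200
FILES=$((7 + 4 + 3))                      # daily + weekly + monthly
TOTAL_MB=$((COMPRESSED_MB * FILES))
echo "$FILES files -> $TOTAL_MB MB total" # 14 files -> 2800 MB total
```

Scale COMPRESSED_MB to your own dump size to see what the policy will actually cost you in storage.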
Rotation script
Replace the simple find -delete in the starter kit with this rotation logic. Note that it expects dumps in a daily/ subdirectory, so point the starter script's BACKUP_DIR at /var/backups/mysql/daily:
#!/bin/bash
set -euo pipefail
BACKUP_DIR="/var/backups/mysql"
TODAY=$(date +%u) # Day of week (1=Monday, 7=Sunday)
DAY_OF_MONTH=$(date +%d) # Day of month
# Daily backups: keep last 7 days
find "$BACKUP_DIR/daily" -name "*.sql.gz" -mtime +7 -delete
# Weekly backups: keep on Sundays, retain 4 weeks
if [ "$TODAY" -eq 7 ]; then
cp "$BACKUP_DIR/daily/all-databases-$(date +%Y-%m-%d)_"*.sql.gz \
"$BACKUP_DIR/weekly/" 2>/dev/null || true
fi
find "$BACKUP_DIR/weekly" -name "*.sql.gz" -mtime +28 -delete
# Monthly backups: keep on 1st, retain 90 days
if [ "$DAY_OF_MONTH" -eq "01" ]; then
cp "$BACKUP_DIR/daily/all-databases-$(date +%Y-%m-%d)_"*.sql.gz \
"$BACKUP_DIR/monthly/" 2>/dev/null || true
fi
find "$BACKUP_DIR/monthly" -name "*.sql.gz" -mtime +90 -delete
Create the directories once:
mkdir -p /var/backups/mysql/{daily,weekly,monthly}
According to the Veeam 2024 Data Protection Trends Report, 76% of organizations have experienced a gap between how frequently they back up data and how much data they can afford to lose (RPO gap). A rotation policy narrows that gap without requiring continuous replication.
Offsite Storage: Get Backups Off the Server
A backup stored on the same server as the database is not a backup. If the server's disk fails, you lose both. If ransomware encrypts the server, it encrypts the backups too.
After the local backup completes, push a copy to offsite storage. The three most cost-effective options for startups:
Option 1: Amazon S3 (or S3-compatible)
# Install AWS CLI
apt install awscli -y
# Upload after backup
aws s3 cp "$BACKUP_DIR/daily/all-databases-$DATE.sql.gz" \
s3://your-backup-bucket/mysql/daily/
# Use S3 lifecycle rules to handle retention automatically
# (set via AWS Console or Terraform)
S3 Standard costs $0.023/GB/month. For a 200 MB compressed backup stored 14 times, that is about $0.06/month. S3 Glacier for monthly archives drops that to $0.004/GB/month.
Option 2: Backblaze B2
# Install B2 CLI
pip install b2
# Upload
b2 upload-file your-bucket-name \
"$BACKUP_DIR/daily/all-databases-$DATE.sql.gz" \
"mysql/daily/all-databases-$DATE.sql.gz"
B2 costs $0.006/GB/month for storage — roughly 4x cheaper than S3 Standard. For startups watching every dollar, this is the best option.
Option 3: rsync to a second server
rsync -avz "$BACKUP_DIR/" backup-user@second-server:/var/backups/mysql/
If you already have a second server (staging, monitoring, etc.), this costs nothing extra.
Whichever option you choose, automate it. Add the upload command to your backup script so offsite storage happens every time a backup runs, not when you remember to do it.
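One way to wire this in, sketched for the S3 option (the bucket name is a placeholder; substitute the b2 or rsync command from above if you use those instead):

```shell
# Sketch: an offsite-upload step to append to backup-mysql.sh, so every
# successful local dump is copied offsite. The bucket name is a
# placeholder for your own.
upload_offsite() {
  local file="$1"
  if [ ! -f "$file" ]; then
    echo "ERROR: backup file $file missing, nothing to upload" >&2
    return 1
  fi
  aws s3 cp "$file" "s3://your-backup-bucket/mysql/daily/" \
    || { echo "ERROR: offsite upload failed for $file" >&2; return 1; }
}

# At the end of the backup script:
#   upload_offsite "$BACKUP_DIR/all-databases-$DATE.sql.gz"
```

Because the starter script uses set -euo pipefail, a failed upload makes the whole backup run exit non-zero, so cron logging picks it up.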
Testing Restores: The Step Everyone Skips
A 2023 survey by Zerto (now part of HPE) found that 36% of organizations have never tested restoring from their backups. Among small businesses, the number is even higher.
An untested backup is a liability disguised as a safety net. Here is what can go wrong silently:
- mysqldump failed partway through and produced a truncated file
- A schema change broke compatibility with the restore process
- The backup user lost privileges and dumps started coming back empty
- The gzip compression was corrupted during transfer
- Character set issues mangled UTF-8 data
Monthly restore test script
Save as /usr/local/bin/test-mysql-restore.sh:
#!/bin/bash
set -euo pipefail
BACKUP_DIR="/var/backups/mysql/daily"
LATEST_BACKUP=$(ls -t "$BACKUP_DIR"/all-databases-*.sql.gz | head -1)
LOG="/var/log/mysql-restore-test.log"
echo "=== Restore test: $(date) ===" >> "$LOG"
echo "Backup file: $LATEST_BACKUP" >> "$LOG"
# Restore the dump.
# CAUTION: an --all-databases dump carries its own CREATE DATABASE and
# USE statements, so it recreates every database under its original
# name. Run this test on a throwaway MySQL instance (a staging box or
# fresh container), never on the production server.
gunzip -c "$LATEST_BACKUP" | mysql -u root 2>> "$LOG"
# Verify row counts on critical tables ("app" is a placeholder --
# use your application database's name)
USERS_COUNT=$(mysql -u root -N -e "SELECT COUNT(*) FROM app.users;" 2>> "$LOG")
echo "Restored users table: $USERS_COUNT rows" >> "$LOG"
# Clean up so the next run starts from an empty instance
mysql -u root -e "DROP DATABASE app;" 2>> "$LOG"
echo "Restore test PASSED" >> "$LOG"
echo "---" >> "$LOG"
# Schedule monthly on the 1st at 5:00 AM
0 5 1 * * /usr/local/bin/test-mysql-restore.sh
If the restore fails, the set -euo pipefail at the top causes the script to exit with a non-zero code, and the cron daemon sends an email to root. Set up mail forwarding or pipe the output to a monitoring tool so failures do not go unnoticed.
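If cron email is not configured, a small wrapper can push failures somewhere visible instead. This is a sketch assuming a Slack incoming webhook; the webhook URL is a placeholder, and mail(1) or a PagerDuty event would slot in the same way:

```shell
# Sketch: run any command and post to a Slack incoming webhook (the URL
# is a placeholder) only when the command fails.
notify_failure() {
  curl -fsS -X POST -H 'Content-Type: application/json' \
    -d "{\"text\":\"MySQL restore test FAILED on $(hostname)\"}" \
    'https://hooks.slack.com/services/YOUR/WEBHOOK/URL' > /dev/null
}

run_with_alert() {
  "$@" || notify_failure
}
```

Save both functions in a wrapper script whose last line is run_with_alert /usr/local/bin/test-mysql-restore.sh, and point the monthly cron entry at the wrapper instead of the test script directly.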
Managed Provider Backups: What You Get and What You Don't
If you run MySQL on a managed provider, you already have automated backups — but you need to understand their limitations.
| Provider | Automated Backups | Retention | Point-in-Time Recovery | Independent Export |
|---|---|---|---|---|
| AWS RDS | Yes, daily snapshots | 1-35 days (default 7) | Yes, to any second within retention | Manual snapshot + export to S3 |
| DigitalOcean Managed DB | Yes, daily | 7 days | Yes, to any second within 7 days | Manual backup via UI/API |
| Google Cloud SQL | Yes, daily | 7 days (configurable to 365) | Yes | Export to Cloud Storage |
| PlanetScale | Branch-based | Varies by plan | Via branching | pscale database dump |
| Linode Managed DB | Yes, daily | 3 days | No | Manual via mysqldump |
What managed backups handle well
- Automated scheduling — no cron to configure
- Storage and retention — managed by the provider
- Point-in-time recovery — restores to any second, not just the last snapshot
- Infrastructure resilience — replicated across availability zones
What you still need to do yourself
- Test restores regularly. Managed backups can restore, but have you actually done it? Restore to a temporary instance quarterly and verify the data.
- Keep an independent copy. If your cloud account is compromised (credentials stolen, billing issue locks the account), you lose access to the managed backups too. Run a weekly mysqldump from outside the provider and store it independently.
- Understand retention limits. DigitalOcean's default 7-day retention means an issue discovered on day 8 has no clean backup. Extend retention or supplement with your own rotation policy.
- Document the restore procedure. When the incident happens at 2 AM, you need a step-by-step runbook, not a provider documentation search.
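The weekly independent export can be a short function, sketched here to run from a machine outside your cloud account (an office box, or a VPS with a different vendor). The hostname is a placeholder for your managed instance, and --set-gtid-purged=OFF is commonly needed when dumping from GTID-enabled managed MySQL:

```shell
# Sketch: independent weekly dump of a managed instance, with the same
# empty-file check as the starter script. Host and credentials are
# placeholders for your environment.
independent_dump() {
  local host="$1" out="$2"
  mysqldump --host="$host" --user=backup_user --password="$DB_PASS" \
    --single-transaction --set-gtid-purged=OFF --all-databases \
    | gzip > "$out"
  # refuse to keep an obviously empty dump
  if [ "$(stat -c%s "$out")" -lt 1000 ]; then
    rm -f "$out"
    return 1
  fi
}

# Weekly cron (Sundays at 4 AM), e.g.:
#   0 4 * * 0 independent_dump your-db-host "/var/backups/independent/all-$(date +%F).sql.gz"
```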
AutoMySQLBackup: A Step Up Without the Complexity
If you want rotation, compression, and email notifications without writing your own scripts, AutoMySQLBackup is a well-established open-source tool that wraps mysqldump with sensible defaults.
Installation
# Debian/Ubuntu
apt install automysqlbackup
# Or install from source
wget https://github.com/sixhop/AutoMySQLBackup/archive/master.tar.gz
tar xzf master.tar.gz
cd AutoMySQLBackup-master
cp automysqlbackup /usr/local/bin/
cp automysqlbackup.conf /etc/
Configuration highlights
Edit /etc/automysqlbackup.conf:
# Database credentials
CONFIG_mysql_dump_username='backup_user'
CONFIG_mysql_dump_password='your-strong-password'
CONFIG_mysql_dump_host='localhost'
# What to back up (empty = all databases)
CONFIG_db_names=()
# Rotation
CONFIG_rotation_daily=7
CONFIG_rotation_weekly=4
CONFIG_rotation_monthly=3
# Compression
CONFIG_mysql_dump_compression='gzip'
# Email notification on failure
CONFIG_mail_address='ops@yourstartup.com'
# Backup directory
CONFIG_backup_dir='/var/backups/mysql'
AutoMySQLBackup handles daily/weekly/monthly rotation out of the box — the same 7-4-3 policy described above, but without writing the rotation logic yourself. It also logs each run and can email on failure.
Add it to cron:
0 3 * * * /usr/local/bin/automysqlbackup /etc/automysqlbackup.conf
Encrypting Backups
Unencrypted backups are a security risk. If an attacker gains access to your backup storage, they get a complete copy of your database in plaintext SQL. For compliance (SOC 2, HIPAA, GDPR), encrypted backups are a requirement, not an option.
Encrypt during backup
Modify the backup script to pipe through openssl:
mysqldump --single-transaction --routines --triggers --events \
--all-databases -u backup_user -p"$DB_PASS" \
| gzip \
| openssl enc -aes-256-cbc -salt -pbkdf2 \
-pass file:/etc/mysql-backup-key \
> "$BACKUP_DIR/daily/all-databases-$DATE.sql.gz.enc"
Decrypt for restore
openssl enc -d -aes-256-cbc -pbkdf2 \
-pass file:/etc/mysql-backup-key \
-in all-databases-2026-04-11_0300.sql.gz.enc \
| gunzip \
| mysql -u root
Key management
- Store the encryption key (/etc/mysql-backup-key) separately from the backups. If both are in the same place, encryption adds no protection.
- Keep a copy of the key in a password manager (1Password, Bitwarden) or secrets manager (AWS Secrets Manager, HashiCorp Vault).
- If you use S3 for offsite storage, enable S3 server-side encryption (SSE-S3 or SSE-KMS) as a second layer.
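Generating the key file is a one-time step. A sketch (run as root; the default path matches the encryption example above):

```shell
# Sketch: generate a random encryption key with owner-only permissions.
# The default path matches the openssl examples above.
generate_backup_key() {
  local key_file="${1:-/etc/mysql-backup-key}"
  umask 077                       # file is created readable by owner only
  openssl rand -base64 32 > "$key_file"
}
```

Generate the key once, copy it to your password manager, and never store it alongside the backups themselves.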
For a deeper dive on backup encryption and what auditors expect, see our database security checklist for startups.
FAQ
How often should a startup back up its MySQL database?
Daily is the minimum. For most startups, a nightly mysqldump via cron provides a good balance between recovery capability and resource usage. If your application writes data continuously and losing even a few hours is unacceptable, consider enabling MySQL binary logging for point-in-time recovery between daily dumps. The binary log records every write operation, so you can replay changes up to the exact second before an incident. For databases under 10 GB, a nightly dump takes under 5 minutes and uses minimal CPU.
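If you go the binary-log route, enabling it is a small my.cnf change. A sketch for MySQL 8.0 (where binary logging is on by default but worth pinning explicitly); the path and retention period are assumptions to adapt:

```ini
# /etc/mysql/my.cnf -- enable binary logging for point-in-time recovery
[mysqld]
log_bin   = /var/log/mysql/mysql-bin
server_id = 1
# keep 7 days of binlogs, enough to bridge the gap between nightly dumps
binlog_expire_logs_seconds = 604800
```

To recover, restore the last nightly dump, then replay the binlog up to just before the incident with mysqlbinlog --stop-datetime piped into mysql.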
Is mysqldump good enough for production databases?
Yes, for databases under roughly 50 GB. The --single-transaction flag takes a consistent snapshot of InnoDB tables without locking, so it runs safely during production traffic. Above 50 GB, dump and restore times start to stretch into hours, and you should consider Percona XtraBackup (physical backups that are faster for large datasets) or a managed provider with snapshot-based backups. According to Percona's benchmarks, XtraBackup can back up a 100 GB database in minutes compared to over an hour for mysqldump.
What is the difference between mysqldump and physical backups?
mysqldump creates a logical backup — a SQL file with CREATE TABLE and INSERT statements that can rebuild the database from scratch. Physical backup tools like Percona XtraBackup copy the raw InnoDB data files directly. Logical backups are portable (you can restore to a different MySQL version or even MariaDB), human-readable, and simple to set up. Physical backups are significantly faster for large databases and support incremental backups. Most startups should start with mysqldump and only move to physical backups when dump times become a problem.
Should I back up individual databases or use --all-databases?
Use --all-databases for your primary backup. This ensures you capture everything, including system tables, user grants, and databases you might forget to list explicitly. If you also want per-database backup files for faster partial restores, run a second pass that dumps each database individually. The storage overhead is minimal compared to the convenience of being able to restore a single database without processing a monolithic dump file.
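The second pass can be a short loop, sketched here with the starter script's credentials and paths; the grep filter skips MySQL's system schemas, which --all-databases already covers:

```shell
# Sketch: dump each application database to its own file for faster
# partial restores. Credentials and paths follow the starter script.
dump_each_db() {
  local db dbs
  dbs=$(mysql -u backup_user -p"$DB_PASS" -N -e 'SHOW DATABASES;' \
        | grep -Ev '^(information_schema|performance_schema|mysql|sys)$')
  for db in $dbs; do
    mysqldump -u backup_user -p"$DB_PASS" --single-transaction \
      --routines --triggers "$db" \
      | gzip > "$BACKUP_DIR/daily/$db-$DATE.sql.gz"
  done
}
```

Call dump_each_db after the --all-databases dump in backup-mysql.sh; the rotation logic picks the per-database files up automatically since they share the same directory and suffix.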
How do I monitor that my backups are actually running?
Check three things: (1) the cron job's log file for errors, (2) the backup file size (a file under 1 KB almost certainly means the dump failed), and (3) the timestamp of the latest backup file. Wrap these checks in a simple monitoring script that runs after the backup and sends an alert (email, Slack webhook, PagerDuty) if anything is off. Better yet, integrate with a dead man's switch service like Cronitor or Healthchecks.io — if the backup script does not ping the service within the expected window, you get an alert. Healthchecks.io is free for up to 20 checks.
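The dead man's switch side is one curl call, sketched here with a placeholder check UUID; wrap it in a function and call it as the final statement of backup-mysql.sh:

```shell
# Sketch: ping a Healthchecks.io check (the UUID is a placeholder for
# your own). Because the backup script uses set -e, this line is never
# reached when an earlier step fails, so a missed ping means a failed
# or stuck backup.
ping_healthcheck() {
  curl -fsS -m 10 --retry 3 "https://hc-ping.com/your-check-uuid" > /dev/null
}
```

Create the check in the Healthchecks.io dashboard with a daily schedule and a grace period slightly longer than your backup window, then call ping_healthcheck at the end of the script.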
Conclusion
A MySQL backup strategy for startups comes down to four things: automate the dumps, rotate the files, store copies offsite, and test your restores. Everything in this article can be set up in an afternoon by a single developer, and it costs less than $1/month for offsite storage.
Start with the mysqldump + cron starter kit. Add the 7-4-3 rotation policy. Push copies to S3 or Backblaze B2. Schedule a monthly restore test. Encrypt everything. That is the entire strategy — no enterprise backup appliances, no dedicated DBA, no $10,000/year backup SaaS.
The worst time to discover your backups do not work is during the incident that requires them. Run through the steps in this article today, and the next time a migration goes sideways or a query deletes the wrong rows, recovery is a 10-minute restore instead of a company-ending event.
If you need to inspect or manage your MySQL database during a recovery, or just want a quick way to verify row counts after a restore, DBEverywhere gives you browser-based phpMyAdmin and Adminer access without installing anything locally. The free tier includes 5 sessions per month — enough to handle the occasional backup verification or emergency database inspection. Paid plans at $5/month give you unlimited sessions with 8-hour timeouts for longer work.
Try DBEverywhere Free
Access your database from any browser. No installation, no Docker, no SSH tunnels.
Get Started