TL;DR — How to Import a Large SQL File
- The mysql command line (mysql -u user -p database < file.sql) is the fastest and most reliable way to import a SQL file of any size. No upload limits, no timeouts.
- If you must use phpMyAdmin, increase upload_max_filesize, post_max_size, and max_execution_time in php.ini. The defaults (2MB upload, 30-second timeout) are far too low for anything serious.
- For shared hosting where you cannot change PHP settings, split the file into smaller chunks with tools like BigDump or csplit.
- Adminer handles larger imports better than phpMyAdmin out of the box thanks to streaming uploads, but still has server-side limits.
- When importing remotely, add --compress to cut transfer time by 60-80%.
- DBEverywhere gives you browser-based phpMyAdmin and Adminer with pre-configured limits — no server setup needed.
Table of Contents
- Why Large SQL Imports Fail
- Method 1: MySQL Command Line (Fastest, Any Size)
- Method 2: phpMyAdmin with Increased Limits
- Method 3: Split the File into Chunks
- Method 4: Adminer (Streaming Import)
- Common Errors and Fixes
- Method Comparison Table
- Frequently Asked Questions
- Conclusion
How to Import a Large SQL File Without Timeout Errors
You have a 500MB SQL dump and need it in your MySQL database. You drag it into phpMyAdmin, wait two minutes, and get a blank white page or a timeout error. If this sounds familiar, you are not alone — importing a large SQL file is one of the most common frustrations in database management, and the default settings of nearly every hosting environment are designed to block you.
The problem is not MySQL itself. MySQL can ingest gigabytes of SQL without breaking a sweat. The problem is the layers sitting between you and MySQL: PHP upload limits (default 2MB), execution timeouts (default 30 seconds), and web server request size caps. This guide walks you through four methods to bypass all of them, from the fastest command-line approach to GUI-based workarounds for when SSH is not an option.
Method 1: MySQL Command Line (Fastest, Any Size)
If you have SSH or terminal access to your server, the mysql command-line client is the fastest, most reliable way to import SQL files of any size. There are no upload limits, no PHP timeouts, and no file size caps. It streams the file directly into the MySQL server.
Basic local import
mysql -u your_username -p your_database < /path/to/file.sql
You will be prompted for your password. The -p flag without a value tells mysql to ask interactively, which is safer than putting the password in the command.
Remote server import
To import a large SQL file into a remote MySQL server (AWS RDS, DigitalOcean Managed Database, PlanetScale, etc.):
mysql -h your-db-host.example.com -P 3306 -u your_username -p your_database < file.sql
Import with compression (recommended for remote)
When importing over the network, the --compress flag enables gzip compression on the connection. SQL files are highly compressible text, so this typically reduces transfer time by 60-80%:
mysql -h your-db-host.example.com -u your_username -p --compress your_database < file.sql
Monitor progress with pv
Large imports can take minutes or hours with no feedback. The pv (pipe viewer) utility shows a progress bar, transfer speed, ETA, and total bytes transferred:
pv file.sql | mysql -u your_username -p your_database
Output looks like this:
2.35GiB 0:12:45 [3.15MiB/s] [=========> ] 38% ETA 0:20:31
Install pv with apt install pv (Debian/Ubuntu), brew install pv (macOS), or yum install pv (CentOS/RHEL).
Using the SOURCE command from within MySQL
If you are already connected to the MySQL shell, use SOURCE instead of shell redirection:
mysql> USE your_database;
mysql> SOURCE /path/to/file.sql;
This is functionally identical to the < redirect method. The file path must be accessible from the machine running the mysql client.
Speed tips for very large files (1GB+)
For files over 1GB, add these settings at the top of your SQL file or pass them as session variables to dramatically speed up the import:
SET autocommit = 0;
SET unique_checks = 0;
SET foreign_key_checks = 0;
-- Your SQL dump here --
SET foreign_key_checks = 1;
SET unique_checks = 1;
COMMIT;
SET autocommit = 1;
Disabling autocommit alone can make imports 3-5x faster on InnoDB tables because MySQL batches disk writes instead of flushing after every single INSERT.
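If you would rather not hand-edit a multi-gigabyte dump, the same sandwich can be built from the shell without touching the original file. A minimal sketch — the tiny generated dump.sql is only a stand-in so the example runs as written; point cat at your real dump:

```shell
# Stand-in dump so this example is runnable as-is (replace with your real file):
printf 'INSERT INTO t VALUES (1);\n' > dump.sql

# fast_import.sql = speed-up settings + original dump + restore settings
{
  printf 'SET autocommit = 0;\nSET unique_checks = 0;\nSET foreign_key_checks = 0;\n'
  cat dump.sql
  printf 'SET foreign_key_checks = 1;\nSET unique_checks = 1;\nCOMMIT;\nSET autocommit = 1;\n'
} > fast_import.sql

head -n 3 fast_import.sql
```

Then import fast_import.sql exactly as in the basic import above; the original dump is left untouched.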
When to use this method: Always, if you have terminal access. There is no file size limit, and it is the fastest option by a wide margin.
Method 2: phpMyAdmin with Increased Limits
phpMyAdmin is the most popular web-based MySQL tool, but its default import limits are painfully low. Out of the box on most servers:
| Setting | Default Value | What It Controls |
|---|---|---|
| upload_max_filesize | 2MB | Maximum file upload size |
| post_max_size | 8MB | Maximum POST request body |
| max_execution_time | 30 seconds | PHP script timeout |
| max_input_time | 60 seconds | Time to parse input data |
| memory_limit | 128MB | PHP memory ceiling |
A 50MB SQL file will fail on three of those five defaults. Here is how to fix them.
Edit php.ini
Locate your php.ini file. You can find its path by running php --ini or checking phpMyAdmin's "Show PHP information" link on the home page.
For importing files up to 512MB, set these values:
; /etc/php/8.2/apache2/php.ini (adjust path for your PHP version)
upload_max_filesize = 512M
post_max_size = 512M
max_execution_time = 600
max_input_time = 600
memory_limit = 512M
For importing files up to 2GB:
upload_max_filesize = 2G
post_max_size = 2G
max_execution_time = 3600
max_input_time = 3600
memory_limit = 2G
After editing, restart your web server:
# Apache
sudo systemctl restart apache2
# Nginx + PHP-FPM
sudo systemctl restart php8.2-fpm
Increase phpMyAdmin's own config (optional)
phpMyAdmin also has its own config value, $cfg['ExecTimeLimit'], which defaults to 300 seconds. For very large imports, increase it in config.inc.php:
$cfg['ExecTimeLimit'] = 0; // 0 = no limit
Using .htaccess (Apache, when you cannot edit php.ini)
On shared hosting where you have no access to php.ini, you may be able to override values via .htaccess in the phpMyAdmin directory:
php_value upload_max_filesize 256M
php_value post_max_size 256M
php_value max_execution_time 600
php_value max_input_time 600
php_value memory_limit 256M
Note: this only works if your hosting provider allows php_value overrides. Many shared hosts disable this.
Nginx request body size
If you use Nginx as a reverse proxy, you also need to increase its client body size limit. The default is 1MB:
# In your server block or http block
client_max_body_size 512M;
Reload Nginx: sudo nginx -s reload.
When to use this method: When you need a GUI and have access to server configuration. Works well up to ~1-2GB depending on server resources and patience. For files larger than 2GB, use the command line.
Method 3: Split the File into Chunks
When you are on shared hosting with locked-down PHP settings (no access to php.ini, .htaccess overrides disabled), your best option is to split the SQL file into smaller chunks that fit within the upload limit.
Using csplit (built into Linux/macOS)
Split a SQL dump by table definitions — each DROP TABLE or CREATE TABLE statement starts a new chunk:
csplit -f chunk_ -b '%03d.sql' dump.sql '/^-- Table structure/' '{*}'
This creates chunk_000.sql, chunk_001.sql, etc. Import them one at a time through phpMyAdmin.
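Before uploading chunks one by one, it is worth confirming that none of them ends mid-statement. A quick sketch of such a check — the two printf lines fabricate sample chunks (the second deliberately truncated) so the loop is runnable as-is; drop them when checking real csplit output:

```shell
# Fabricated sample chunks; the second one is deliberately cut mid-statement:
printf 'CREATE TABLE a (id INT);\n' > chunk_000.sql
printf 'INSERT INTO a VALUES (1\n' > chunk_001.sql

# A chunk whose final line does not end in ';' was probably split mid-statement.
for f in chunk_*.sql; do
  tail -n 1 "$f" | grep -q ';[[:space:]]*$' || echo "WARNING: $f may end mid-statement"
done
```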
Using mysqldump with --no-data and --no-create-info
If you have not created the dump yet, split it at the source:
# Export schema only (small file)
mysqldump -u user -p --no-data your_database > schema.sql
# Export data only, one file per table
for table in $(mysql -u user -p -N -e "SHOW TABLES" your_database); do
mysqldump -u user -p --no-create-info your_database "$table" > "data_${table}.sql"
done
This gives you granular control — import the schema first, then each table's data individually.
BigDump: the chunked import script
BigDump is a free PHP script specifically designed for this problem. You upload bigdump.php plus your SQL file to the server via FTP, then run it in your browser. It reads the SQL file line by line and imports in small batches, working around PHP timeouts by automatically restarting between chunks.
Setup:
- Upload
bigdump.phpand your SQL file to your web root - Edit the database credentials at the top of
bigdump.php - Navigate to
https://yourdomain.com/bigdump.phpin your browser - Select your SQL file and start the import
BigDump processes roughly 3,000 queries per batch before resetting the timer. It has been the go-to solution for shared hosting imports for over a decade.
Using split for fixed-size chunks
If you just need to get under a specific upload limit:
# Split into 25MB chunks
split -b 25m dump.sql chunk_
# This creates chunk_aa, chunk_ab, chunk_ac, etc.
Warning: blind byte-splitting can cut a SQL statement in half. Use csplit with a pattern or a SQL-aware splitter instead.
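One way to get fixed-size chunks without cutting statements in half is a short awk pass: accumulate bytes, but only roll over to a new chunk right after a line that ends in a semicolon. This is a sketch that assumes mysqldump-style output (statement terminators at end of line); the generated dump.sql and the tiny 10-byte threshold exist only to make the demo runnable — use something like 25*1024*1024 in practice:

```shell
# Sample input so the example runs as written (replace with your real dump):
printf 'INSERT INTO t VALUES (1);\nINSERT INTO t VALUES (2);\n' > dump.sql

# Write lines to chunk_NNN.sql; once a chunk exceeds `max` bytes, start the
# next file, but only immediately after a line ending in ';'.
awk -v max=10 '
  { out = sprintf("chunk_%03d.sql", n); size += length($0) + 1; print > out }
  /;[[:space:]]*$/ && size >= max { close(out); n++; size = 0 }
' dump.sql
```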
When to use this method: Shared hosting with restrictive limits you cannot change. Works for any file size, but is the most tedious approach.
Method 4: Adminer (Streaming Import)
Adminer is a lightweight alternative to phpMyAdmin that handles large imports better out of the box. While phpMyAdmin loads the entire uploaded file into PHP memory before processing it, Adminer streams the file — reading and executing SQL statements incrementally without loading the full file into RAM.
Why Adminer handles large files better
- Streaming parser: Adminer reads the SQL file in chunks, executing statements as they come in rather than buffering the entire file. This means a 500MB file does not require 500MB of PHP memory.
- Gzip support: Adminer can import .sql.gz files directly. A 500MB SQL file often compresses to 50-80MB as gzip, making uploads much faster and fitting within tighter upload limits.
- Single-file deployment: Adminer is a single PHP file (~480KB). Upload it anywhere and it works. No complex installation, no dependency conflicts.
Import via Adminer
- Navigate to Adminer in your browser and log in
- Select your database
- Click Import in the left sidebar
- Choose your .sql or .sql.gz file
- Click Execute
You are still subject to PHP's upload_max_filesize and post_max_size, but because Adminer streams the import, you can import files much larger than your memory_limit would normally allow. A server with memory_limit = 128M can import a 500MB file through Adminer — something that would crash phpMyAdmin.
Adminer on the command line
Adminer is a web application and does not ship an official command-line import mode. If you have shell access, skip the GUI entirely and pipe the dump straight into the mysql client instead (see Method 1):
mysql -u user -p your_database < dump.sql
When to use this method: When you want a GUI but need better large-file handling than phpMyAdmin provides. Especially useful when you cannot increase memory_limit.
If you want to use Adminer without installing anything, DBEverywhere hosts both Adminer and phpMyAdmin with pre-tuned import settings — connect to your database in seconds from any browser.
Common Errors and Fixes
These are the error messages you will hit most often when trying to import a large SQL file, along with the exact fix for each.
"MySQL server has gone away" (Error 2006)
Cause: The SQL file contains a statement larger than the max_allowed_packet setting, or the import took so long that the connection timed out (wait_timeout).
Fix: Increase both values. Edit your MySQL config (/etc/mysql/my.cnf or /etc/my.cnf):
[mysqld]
max_allowed_packet = 256M
wait_timeout = 28800
net_read_timeout = 600
net_write_timeout = 600
Then restart MySQL: sudo systemctl restart mysql.
For a one-off session fix without restarting:
SET GLOBAL max_allowed_packet = 268435456; -- 256MB
SET GLOBAL wait_timeout = 28800; -- 8 hours
The default max_allowed_packet is only 64MB on MySQL 8.0 and 4MB on older versions. A single INSERT with a large BLOB or TEXT column can easily exceed this.
"Got a packet bigger than 'max_allowed_packet' bytes"
Cause: Same root issue as above, but MySQL is telling you the exact packet that failed.
Fix: Same as above — increase max_allowed_packet. As a rule of thumb, set it to at least twice the size of your largest single row.
Foreign key constraint errors during import
Cause: Tables are being imported in an order where a child table references a parent table that has not been created yet.
Fix: Disable foreign key checks before the import and re-enable them after:
SET FOREIGN_KEY_CHECKS = 0;
SOURCE /path/to/dump.sql;
SET FOREIGN_KEY_CHECKS = 1;
Or from the command line:
mysql -u user -p -e "SET FOREIGN_KEY_CHECKS=0; SOURCE /path/to/dump.sql; SET FOREIGN_KEY_CHECKS=1;" your_database
Dumps created with mysqldump normally include SET FOREIGN_KEY_CHECKS=0 at the top automatically. If yours does not, add it manually.
Character encoding mismatches (mojibake, garbled text)
Cause: The SQL file was exported in one encoding but the import session is using another. Common when moving between MySQL 5.x (latin1 defaults) and MySQL 8.0 (utf8mb4 defaults).
Fix: Set the correct character set at the start of your import:
SET NAMES utf8mb4;
SET CHARACTER SET utf8mb4;
Or from the command line:
mysql -u user -p --default-character-set=utf8mb4 your_database < dump.sql
"The file exceeds the maximum upload size" (phpMyAdmin)
Cause: PHP's upload_max_filesize or post_max_size is too low.
Fix: See Method 2 above. If you cannot change PHP settings, gzip the file first — gzip dump.sql — and upload the .sql.gz file instead. phpMyAdmin supports compressed imports natively, and a 200MB SQL file typically compresses to 20-30MB.
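The compression claim is easy to verify yourself. This sketch generates a repetitive dump (real INSERT-heavy dumps are similarly repetitive) and compares sizes; the file names are placeholders:

```shell
# Generate a few hundred KB of repetitive INSERT statements as a stand-in dump:
for i in $(seq 1 5000); do
  echo "INSERT INTO t (id, name) VALUES ($i, 'user_$i');"
done > dump.sql

gzip -kf dump.sql           # writes dump.sql.gz; -k keeps dump.sql, -f overwrites
wc -c dump.sql dump.sql.gz  # compare byte counts; the .gz is typically ~10x smaller
```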
Import hangs with no error
Cause: Usually the max_execution_time was reached silently, or the MySQL connection dropped due to wait_timeout. Nginx and Apache also have their own request timeouts (default 60 seconds for Nginx's proxy_read_timeout).
Fix: Check all layers of the timeout chain:
| Layer | Setting | Recommended Value |
|---|---|---|
| PHP | max_execution_time | 600-3600 |
| MySQL | wait_timeout | 28800 |
| MySQL | net_read_timeout | 600 |
| Nginx | proxy_read_timeout | 600 |
| Apache | Timeout | 600 |
Method Comparison Table
| Method | Max File Size | Speed | Ease of Use | Access Required |
|---|---|---|---|---|
| mysql CLI | Unlimited | Fastest | Moderate (terminal) | SSH / terminal |
| phpMyAdmin (tuned) | ~2GB practical | Slow (web upload) | Easy (GUI) | php.ini access |
| Split + phpMyAdmin | Unlimited (chunked) | Very slow | Tedious | FTP / file manager |
| Adminer (streaming) | ~2GB practical | Medium (streaming) | Easy (GUI) | Web server |
| DBEverywhere | Pre-configured limits | Medium | Easiest | Browser only |
Bottom line: Use the mysql command line for anything over 100MB. Use a GUI tool when you need visual confirmation of what is happening or when terminal access is not available.
Frequently Asked Questions
How long does it take to import a 1GB SQL file?
It depends on the content. A 1GB file of INSERTs into InnoDB tables typically takes 5-15 minutes via the mysql command line on a modern server. With autocommit=0 and foreign_key_checks=0, it can drop to 2-5 minutes. Through phpMyAdmin, the same file could take 30-60 minutes due to PHP overhead and web server buffering — assuming it does not time out first.
Can I import a .sql.gz file directly?
Via mysql CLI: Yes. Use zcat or gunzip -c to decompress on the fly:
zcat dump.sql.gz | mysql -u user -p your_database
Via phpMyAdmin: Yes. phpMyAdmin natively supports .sql.gz, .sql.bz2, and .sql.zip uploads. This is one of the best ways to work around upload size limits — a 500MB SQL file compresses to roughly 50-80MB as gzip.
Via Adminer: Yes. Adminer supports .sql.gz imports directly.
Should I use --force when importing?
The --force flag (mysql --force ...) tells the client to continue importing even when individual statements fail. Use it when you are importing a dump that may contain DROP TABLE IF EXISTS statements for tables that do not exist, or when you know some statements will fail and you want the rest to proceed. Do not use it blindly on production imports — you want to know about errors.
What is the maximum size of a MySQL database?
MySQL itself has no hard limit on database size. Individual InnoDB tables can be up to 64TB. The practical limit is your disk space. The import method is almost always the bottleneck, not MySQL.
How do I import only specific tables from a large dump?
If you only need certain tables from a multi-gigabyte dump, extract them first rather than importing the entire file:
# Extract a single table's data from a dump
sed -n '/^-- Table structure for table `orders`/,/^-- Table structure for table/p' dump.sql > orders.sql
For more reliable extraction, use mysqldump to create per-table dumps at export time (see Method 3).
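sed address ranges get fragile when the delimiter pattern repeats; an awk version of the same extraction is easier to reason about. A sketch, using a fabricated three-table dump so it runs as written — point it at your real dump and table name:

```shell
# Fabricate a small mysqldump-style file with three tables:
cat > dump.sql <<'EOF'
-- Table structure for table `users`
CREATE TABLE users (id INT);
-- Table structure for table `orders`
CREATE TABLE orders (id INT);
INSERT INTO orders VALUES (1);
-- Table structure for table `items`
CREATE TABLE items (id INT);
EOF

# Print from the `orders` header up to (not including) the next table header:
awk '
  /^-- Table structure for table `orders`/ { keep = 1 }
  keep && /^-- Table structure/ && !/`orders`/ { keep = 0 }
  keep
' dump.sql > orders.sql
```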
Conclusion
Every method of importing a large SQL file comes down to the same principle: get the SQL statements to the MySQL server with as few intermediaries as possible. The mysql command line does this directly and is always the right answer when terminal access is available. When it is not, tuning PHP settings for phpMyAdmin, using Adminer's streaming import, or splitting the file into chunks will get you there — just more slowly.
The most important settings to remember: max_allowed_packet (MySQL, default 64MB), upload_max_filesize (PHP, default 2MB), and max_execution_time (PHP, default 30 seconds). These three defaults are responsible for the vast majority of import failures.
If you are tired of configuring PHP limits, managing Docker containers, or opening SSH tunnels just to run a database import, DBEverywhere gives you phpMyAdmin and Adminer in the browser with sensible defaults already configured. Connect to your MySQL, PostgreSQL, or MariaDB server from any browser in under a minute — try it free.
Related reading: phpMyAdmin vs Adminer: Which Should You Use in 2026? | How to Access Your Database from Any Browser | Security Risks of Self-Hosted phpMyAdmin
Try DBEverywhere Free
Access your database from any browser. No installation, no Docker, no SSH tunnels.
Get Started