
TL;DR

  1. Connection drops after 8 hours idle? wait_timeout defaults to 28800 seconds. Set it to match your application's needs.
  2. "MySQL server has gone away"? Your connection sat idle longer than wait_timeout and the server closed it. Your app needs to reconnect.
  3. Connection fails immediately? connect_timeout (default 10s) may be too short for high-latency links. Increase it.
  4. Query killed mid-execution? net_read_timeout (default 30s) or net_write_timeout (default 60s) expired during a large data transfer.
  5. All timeouts are tunable in my.cnf or at the session level with SET.


MySQL Connection Timeout: Why It Happens and How to Fix It

A mysql connection timeout is one of the most common — and most confusing — errors developers encounter. The confusion comes from the fact that MySQL has at least five different timeout variables, each governing a different phase of the connection lifecycle. An idle connection dropping after 8 hours is a completely different problem from a connection that fails to establish in the first place, but both get described as "timeout errors."

According to a 2024 Percona survey, connection management issues rank among the top three MySQL operational challenges, affecting roughly 40% of production deployments. This guide covers every MySQL timeout variable, explains when each one fires, and gives you the exact my.cnf configurations to fix each scenario.

The Five MySQL Timeout Variables Explained

Before diagnosing anything, you need to understand which timeout you're actually hitting. MySQL has five key timeout variables, each controlling a different stage:

Variable | Default | Phase | What It Controls
connect_timeout | 10 seconds | Connection handshake | How long the server waits for a client to complete the TCP handshake and authentication
wait_timeout | 28800 seconds (8 hours) | Idle connection | How long the server keeps an idle non-interactive connection open before closing it
interactive_timeout | 28800 seconds (8 hours) | Idle interactive connection | Same as wait_timeout, but for connections flagged as interactive (e.g., the mysql CLI)
net_read_timeout | 30 seconds | Active data transfer | How long the server waits for more data from the client during an active read
net_write_timeout | 60 seconds | Active data transfer | How long the server waits for the client to acknowledge data the server is writing

The critical detail: wait_timeout and interactive_timeout apply to idle connections. net_read_timeout and net_write_timeout apply to active connections mid-query. connect_timeout applies only during the initial handshake. Mixing these up leads to wrong fixes.

You can find the full specification in the MySQL Server System Variables documentation.

Scenario 1: Connection Drops After 8 Hours Idle

The symptom: Your application works fine during the day, but the first query after an overnight period (or any long idle stretch) fails. The connection was open, but it's now dead.

Why this happens: MySQL's wait_timeout defaults to 28800 seconds — exactly 8 hours. When a connection sits idle for longer than this, the server closes it silently. The client has no idea the connection is gone until it tries to use it. The next query attempt hits a closed socket and throws an error.

This is the single most common mysql timeout error developers encounter, accounting for an estimated 60% of connection-related issues according to Percona's MySQL troubleshooting data.

Diagnose it:

-- Check current wait_timeout and interactive_timeout:
SHOW VARIABLES LIKE 'wait_timeout';
SHOW VARIABLES LIKE 'interactive_timeout';

-- Check how long existing connections have been idle:
SELECT id, user, host, db, command, time AS idle_seconds
FROM information_schema.processlist
WHERE command = 'Sleep'
ORDER BY time DESC;

If the time column for sleeping connections is approaching 28800, those connections are about to be killed.
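This check is easy to automate. The sketch below is a minimal Python illustration: the sample rows stand in for what information_schema.processlist returns, and the 600-second threshold is an arbitrary choice for "about to be killed".

```python
def seconds_until_kill(idle_seconds: int, wait_timeout: int = 28800) -> int:
    """Seconds before MySQL closes an idle connection (0 if already overdue)."""
    return max(0, wait_timeout - idle_seconds)

# Sample rows shaped like (id, command, time) from information_schema.processlist:
rows = [(12, "Sleep", 28500), (15, "Sleep", 120), (18, "Query", 4)]

# Flag sleeping connections that will be killed within the next 10 minutes:
at_risk = [r[0] for r in rows
           if r[1] == "Sleep" and seconds_until_kill(r[2]) < 600]
# at_risk == [12]
```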

Fix it:

Option A — Increase the timeout in my.cnf:

[mysqld]
# Keep idle connections open for 24 hours instead of 8:
wait_timeout = 86400
interactive_timeout = 86400

Restart MySQL after the change:

sudo systemctl restart mysql

Option B — Set it per session (no restart required):

SET SESSION wait_timeout = 86400;
SET SESSION interactive_timeout = 86400;

Option C — Set it globally (takes effect for new connections, no restart):

SET GLOBAL wait_timeout = 86400;
SET GLOBAL interactive_timeout = 86400;

Which one should you change? wait_timeout applies to connections that your application code opens. interactive_timeout applies to connections opened with the CLIENT_INTERACTIVE flag — typically the mysql CLI tool and some GUI clients. Most application drivers use wait_timeout. Change both to be safe.

Scenario 2: "MySQL Server Has Gone Away" Error

The symptom: ERROR 2006 (HY000): MySQL server has gone away. This error appears in application logs, often intermittently. It may trigger the related error ERROR 2013 (HY000): Lost connection to MySQL server during query.

Why this happens: This is usually the downstream effect of Scenario 1. Your application opened a connection, the connection sat idle past wait_timeout, MySQL closed it, and then your application tried to run a query on the dead connection. The "gone away" message means "the server closed this connection and you didn't notice."

However, there's a second cause: the query itself exceeded max_allowed_packet. If you try to send a query or result set larger than max_allowed_packet (default 64MB in MySQL 8.0, 4MB in 5.7), the server terminates the connection.

Diagnose it:

-- Check max_allowed_packet:
SHOW VARIABLES LIKE 'max_allowed_packet';

-- Check if the error correlates with large INSERT/LOAD DATA statements
-- or with idle periods in your application logs.

Fix it:

For idle-connection "gone away" errors, see Scenario 1. For packet-size "gone away" errors:

[mysqld]
# Increase max packet size to 256MB:
max_allowed_packet = 256M

Or at runtime without a restart (max_allowed_packet cannot be raised per session; SET GLOBAL affects new connections only):

SET GLOBAL max_allowed_packet = 268435456; -- 256MB

In application code, the most robust fix is connection validation. Most connection pools support a "test on borrow" or "ping before use" option. For example, in Python with SQLAlchemy:

from sqlalchemy import create_engine

# pool_pre_ping sends a lightweight query before reusing a connection:
engine = create_engine("mysql://user:pass@host/db", pool_pre_ping=True)

In PHP (PDO):

$pdo = new PDO($dsn, $user, $pass, [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
    // Avoid persistent connections: a persistent handle can be handed back
    // to your script after MySQL has already closed it server-side.
    PDO::ATTR_PERSISTENT => false,
]);
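When no pool or driver option is available, the same idea can be hand-rolled as a retry wrapper: run the query, and on a connection-level failure reconnect once and try again. This is a minimal, driver-agnostic Python sketch; ConnectionError stands in for whatever exception your driver raises on a dead connection (an assumption, not any specific driver's API).

```python
def run_with_reconnect(execute, reconnect, retries=1):
    """Call execute(); on a connection-level failure, call reconnect()
    and try again, up to `retries` extra attempts."""
    for attempt in range(retries + 1):
        try:
            return execute()
        except ConnectionError:
            if attempt == retries:
                raise
            reconnect()

# Toy demonstration: the first call fails like a timed-out connection,
# the retry after "reconnecting" succeeds.
state = {"alive": False}

def fake_query():
    if not state["alive"]:
        raise ConnectionError("MySQL server has gone away")
    return "ok"

def fake_reconnect():
    state["alive"] = True

result = run_with_reconnect(fake_query, fake_reconnect)
# result == "ok"
```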

Connecting from a browser-based tool? DBEverywhere manages connection lifecycle automatically — sessions stay alive for up to 8 hours on the paid tier without you worrying about timeout settings. No connection pool tuning needed.

Scenario 3: Connection Fails Immediately on High-Latency Links

The symptom: The connection attempt itself fails before any query is sent. You see ERROR 2003 (HY000): Can't connect to MySQL server on 'host' (110) or your application's connection timeout exception fires. This happens consistently, not intermittently.

Why this happens: connect_timeout defaults to 10 seconds. During the initial handshake, the server and client exchange protocol version, authentication, and capability flags. Over high-latency links (cross-region connections, satellite internet, overloaded DNS), 10 seconds may not be enough for the TCP three-way handshake plus MySQL authentication to complete.

Cross-region database connections commonly add 100-300ms of round-trip latency. The MySQL handshake involves 3-5 round trips, so connections across continents can easily take 1-2 seconds under normal load and significantly more when the network or server is under pressure. A 2023 Cloudflare analysis measured median cross-Atlantic latency at 85ms, meaning a 5-round-trip handshake takes 425ms minimum — and spikes can be 10x that.
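The arithmetic above generalizes: multiply measured round-trip time by the number of handshake round trips to get a floor on connection time, then size connect_timeout with generous headroom on top. A quick back-of-the-envelope helper (the 5-round-trip figure is this article's estimate, not a protocol constant):

```python
def handshake_floor_ms(rtt_ms: float, round_trips: int = 5) -> float:
    """Minimum handshake time implied by network latency alone."""
    return rtt_ms * round_trips

handshake_floor_ms(85)   # 425.0 ms for the cross-Atlantic example
handshake_floor_ms(300)  # 1500.0 ms for a 300ms-RTT link
```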

Diagnose it:

# Measure round-trip latency to the database server:
ping -c 10 your-db-host.example.com

# Measure TCP connection time specifically:
time nc -zv your-db-host.example.com 3306

-- Check the current connect_timeout:
SHOW VARIABLES LIKE 'connect_timeout';

Fix it:

[mysqld]
# Increase handshake timeout to 30 seconds:
connect_timeout = 30

Also increase the client-side timeout. This is separate from the MySQL server setting — your application driver has its own connect timeout:

# Python (mysql-connector-python):
import mysql.connector

connection = mysql.connector.connect(
    host='your-db-host.example.com',
    user='appuser',
    password='password',
    connection_timeout=30  # seconds
)

// Node.js (mysql2):
const connection = mysql.createConnection({
  host: 'your-db-host.example.com',
  user: 'appuser',
  password: 'password',
  connectTimeout: 30000  // milliseconds
});

Scenario 4: Large Queries Killed Mid-Execution

The symptom: A query starts executing but gets killed partway through. You see ERROR 2013 (HY000): Lost connection to MySQL server during query. This happens specifically with large SELECT results, bulk INSERT statements, or LOAD DATA INFILE operations.

Why this happens: net_read_timeout (default 30s) controls how long the server waits for data from the client during an active transfer. net_write_timeout (default 60s) controls how long the server waits for the client to acknowledge data being sent. If your network is slow or the data volume is large enough to saturate the link, these timeouts can fire mid-query.

This is distinct from wait_timeout, which only applies to idle connections. net_read_timeout and net_write_timeout apply while the connection is actively transferring data.

Diagnose it:

SHOW VARIABLES LIKE 'net_read_timeout';
SHOW VARIABLES LIKE 'net_write_timeout';

If the errors only occur during large data operations and the timeout values are at defaults (30s/60s), these are likely the culprits.

Fix it:

[mysqld]
# Increase active-transfer timeouts:
net_read_timeout = 120
net_write_timeout = 120

Or per session before running a large operation:

SET SESSION net_read_timeout = 300;
SET SESSION net_write_timeout = 300;

-- Now run your large query:
SELECT * FROM huge_table;

Important: Increasing these values does not change how long a query itself is allowed to run. MySQL has a separate max_execution_time variable (MySQL 5.7.8+) that limits how long a SELECT can execute. net_read_timeout and net_write_timeout govern only the network transfer phase.

Scenario 5: Connection Pool Returning Dead Connections

The symptom: Random "gone away" or "connection reset" errors that occur sporadically, not tied to any particular query or time of day. Restarting the application temporarily fixes it, then the errors return.

Why this happens: Application connection pools maintain a set of open connections for reuse. If MySQL closes an idle connection (via wait_timeout) but the pool doesn't know, the next checkout returns a dead connection. This is particularly common with default pool configurations that don't validate connections before handing them out.

According to HikariCP benchmarks, connection validation adds less than 1ms of overhead per checkout — a negligible cost for reliability.

Fix it by language/framework:

Java (HikariCP):

HikariConfig config = new HikariConfig();
config.setJdbcUrl("jdbc:mysql://host:3306/db");
// Validate connections before checkout:
config.setConnectionTestQuery("SELECT 1");
// Keep pool connections shorter than MySQL's wait_timeout:
config.setMaxLifetime(25200000); // 7 hours in ms (< 8 hour wait_timeout)
config.setIdleTimeout(600000);   // 10 minutes

Python (SQLAlchemy):

engine = create_engine(
    "mysql://user:pass@host/db",
    pool_pre_ping=True,          # Validate before reuse
    pool_recycle=25200,           # Recycle connections every 7 hours
)

PHP (Laravel):

// config/database.php
'mysql' => [
    'driver' => 'mysql',
    'host' => env('DB_HOST'),
    // ...
    'options' => [
        PDO::ATTR_EMULATE_PREPARES => true,
        PDO::ATTR_TIMEOUT => 30,
    ],
],

Node.js (mysql2 pool):

const pool = mysql.createPool({
  host: 'your-db-host.example.com',
  user: 'appuser',
  password: 'password',
  waitForConnections: true,
  connectionLimit: 10,
  idleTimeout: 25200000, // 7 hours
});

The golden rule: set your pool's max connection lifetime shorter than MySQL's wait_timeout. If wait_timeout is 28800 (8 hours), recycle pool connections at 25200 (7 hours). This ensures the pool always retires connections before MySQL kills them.
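One way to apply the golden rule mechanically is to derive the recycle interval from the server's reported wait_timeout instead of hard-coding it. A sketch, assuming SQLAlchemy and a 7/8 safety margin (the margin itself is an arbitrary choice):

```python
def safe_recycle_seconds(wait_timeout: int, margin: float = 0.875) -> int:
    """Pool max-lifetime safely below the server's wait_timeout
    (28800s server default -> 25200s, i.e. 7 of the 8 hours)."""
    return int(wait_timeout * margin)

# Hypothetical usage: read wait_timeout once at startup, then build the pool.
# wait_timeout = <value from SHOW VARIABLES LIKE 'wait_timeout'>
# engine = create_engine(url, pool_pre_ping=True,
#                        pool_recycle=safe_recycle_seconds(wait_timeout))
```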

How to Check Your Current Timeout Settings

Run this single query to see all timeout-related variables at once:

SHOW VARIABLES WHERE Variable_name IN (
  'connect_timeout',
  'wait_timeout',
  'interactive_timeout',
  'net_read_timeout',
  'net_write_timeout',
  'max_allowed_packet',
  'max_execution_time'
);

Sample output for a default MySQL 8.0 installation:

Variable_name | Value
connect_timeout | 10
interactive_timeout | 28800
max_allowed_packet | 67108864
max_execution_time | 0
net_read_timeout | 30
net_write_timeout | 60
wait_timeout | 28800

If any value differs from these, someone (or your hosting provider) has already changed it from the MySQL 8.0 defaults.
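That comparison can be scripted. The sketch below diffs a dict of current values (as you would collect them from the SHOW VARIABLES output above) against the MySQL 8.0 defaults listed in this article:

```python
MYSQL_80_DEFAULTS = {
    "connect_timeout": 10,
    "wait_timeout": 28800,
    "interactive_timeout": 28800,
    "net_read_timeout": 30,
    "net_write_timeout": 60,
    "max_allowed_packet": 67108864,
    "max_execution_time": 0,
}

def changed_from_defaults(current: dict) -> dict:
    """Variables whose current value differs from the MySQL 8.0 default."""
    return {name: value for name, value in current.items()
            if name in MYSQL_80_DEFAULTS and int(value) != MYSQL_80_DEFAULTS[name]}

# Example: a server where wait_timeout was lowered by the hosting provider.
changed = changed_from_defaults({"wait_timeout": "600", "connect_timeout": "10"})
# changed == {"wait_timeout": "600"}
```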

Recommended Timeout Configurations

Here are practical my.cnf configurations for common scenarios:

Web application with connection pooling:

[mysqld]
wait_timeout = 600        # 10 minutes — pool handles reconnection
interactive_timeout = 3600 # 1 hour for admin CLI sessions
connect_timeout = 20       # Generous handshake time
net_read_timeout = 60      # 1 minute for active reads
net_write_timeout = 120    # 2 minutes for active writes
max_allowed_packet = 64M

Data pipeline or ETL workload:

[mysqld]
wait_timeout = 86400       # 24 hours — long-running jobs
interactive_timeout = 86400
connect_timeout = 30
net_read_timeout = 300     # 5 minutes — large data transfers
net_write_timeout = 300
max_allowed_packet = 512M  # Large bulk inserts

Shared hosting or resource-constrained server:

[mysqld]
wait_timeout = 300         # 5 minutes — aggressive cleanup
interactive_timeout = 600
connect_timeout = 10
net_read_timeout = 30
net_write_timeout = 60
max_allowed_packet = 16M
max_connections = 50       # Prevent resource exhaustion

Managed Databases: What You Can and Cannot Change

On managed database services, you typically don't have access to my.cnf. Here's what each provider allows:

Provider | How to Change Timeouts | Variables Exposed
AWS RDS | Custom parameter group | All five timeout variables, plus max_allowed_packet
DigitalOcean Managed Databases | Database configuration panel | wait_timeout, interactive_timeout, max_allowed_packet, net_read_timeout, net_write_timeout
PlanetScale | Not configurable | Fixed by PlanetScale (Vitess proxy handles connections)
Google Cloud SQL | Database flags | All five timeout variables
Azure Database for MySQL | Server parameters | All five timeout variables

For providers that don't expose timeout settings, your only option is to handle timeouts on the client side with connection validation and pool recycling.

If you're working across multiple providers and want consistent connection behavior without tuning each one individually, a browser-based tool like DBEverywhere abstracts away these differences. The gateway manages the database connection server-side, so timeout tuning happens once rather than per-client.

How to connect to managed databases from your browser ->

FAQ

What causes "MySQL server has gone away" specifically?

Two things: either the connection was idle longer than wait_timeout (server closed it) or the query/result exceeded max_allowed_packet (server terminated the connection). Check max_allowed_packet with SHOW VARIABLES LIKE 'max_allowed_packet'. If the value is smaller than your largest query or result set, increase it. If not, the issue is idle timeout — see the connection pool fix in Scenario 5.

Should I set wait_timeout to a very high value to avoid timeouts?

No. Setting wait_timeout to days or weeks means idle connections accumulate and consume server resources — each MySQL connection uses roughly 256KB to 10MB of memory depending on buffers. A better approach is setting a moderate wait_timeout (10 minutes to 1 hour) and using connection pool validation (pool_pre_ping, connectionTestQuery) to handle reconnection transparently.

How do I know which timeout is causing my specific error?

The pattern tells you: errors after long idle periods point to wait_timeout. Errors during the initial connection point to connect_timeout. Errors mid-query during large transfers point to net_read_timeout or net_write_timeout. Errors with "gone away" after sending a large statement point to max_allowed_packet. Check your application logs for timing — if the error always happens at exactly 28800 seconds of inactivity, it is wait_timeout.

Do these timeouts affect phpMyAdmin or Adminer sessions?

Yes. phpMyAdmin and Adminer open MySQL connections per-request but may keep persistent connections depending on configuration. If you're running phpMyAdmin against a server with a low wait_timeout, long pauses between clicks can cause "gone away" errors. phpMyAdmin has its own $cfg['LoginCookieValidity'] setting (default 1440 seconds) that can also expire the session independently of MySQL timeouts.

Can I set different timeouts for different users?

Not directly at the MySQL user level — timeout variables are global or session-scoped, not user-scoped. However, your application can SET SESSION wait_timeout = X immediately after connecting to customize it per connection. Proxy layers like ProxySQL also allow per-rule timeout overrides, giving you user-level control without changing MySQL configuration.
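In code, the per-connection override looks like this: connect, then immediately raise the session timeout. The helper below only builds the statement; the host, user, and driver usage are illustrative assumptions, shown as comments.

```python
def session_timeout_sql(variable: str, seconds: int) -> str:
    """SQL to override a timeout for the current session only.
    connect_timeout is excluded: it has global scope only."""
    allowed = {"wait_timeout", "interactive_timeout",
               "net_read_timeout", "net_write_timeout"}
    if variable not in allowed:
        raise ValueError(f"not a session-settable timeout: {variable}")
    return f"SET SESSION {variable} = {int(seconds)}"

# Hypothetical usage with mysql-connector-python, right after connecting:
# conn = mysql.connector.connect(host="db.example.com", user="appuser", ...)
# conn.cursor().execute(session_timeout_sql("wait_timeout", 86400))
```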

Conclusion

A mysql connection timeout error always traces back to one of five variables: connect_timeout, wait_timeout, interactive_timeout, net_read_timeout, or net_write_timeout. The fix depends on which phase of the connection lifecycle is timing out:

  1. Idle drops after 8 hours: Increase wait_timeout and interactive_timeout, or set your connection pool's max lifetime shorter than the timeout.
  2. "MySQL server has gone away": Enable connection validation in your pool (pool_pre_ping, connectionTestQuery) and check max_allowed_packet for large queries.
  3. Connection fails to establish: Increase connect_timeout and the client-side connect timeout. Check for network latency.
  4. Query killed mid-transfer: Increase net_read_timeout and net_write_timeout.
  5. Pool returns dead connections: Set pool recycling time below wait_timeout and enable pre-checkout validation.

Run SHOW VARIABLES LIKE '%timeout%' to see every timeout value at once, cross-reference with the MySQL 8.0 System Variables reference, and tune from there.

If managing timeout variables across local clients, application code, and multiple database providers sounds like more complexity than you need, DBEverywhere handles connection management server-side. Five free sessions per month, $5/mo for unlimited — phpMyAdmin and Adminer in your browser with no timeout tuning required.

Try DBEverywhere free ->

Try DBEverywhere Free

Access your database from any browser. No installation, no Docker, no SSH tunnels.

Get Started