TL;DR

- A database access audit trail answers four questions: who connected, when they connected, what they did, and from where they connected.
- MySQL provides three built-in logging mechanisms: the general log (all queries), the slow query log (performance), and the error log (authentication failures). None of them is enabled by default in a format suitable for auditing.
- Dedicated audit plugins — MariaDB Audit Plugin, Percona Audit Log, and MySQL Enterprise Audit — provide structured, filterable audit logs designed for compliance.
- SOC 2, GDPR, HIPAA, and PCI DSS all require some form of database access logging. The specifics vary, but the common requirement is proving who accessed sensitive data and when.
- Application-level logging fills the gaps that database-level logging misses — mapping database connections back to individual users, capturing business context, and correlating access with application events.
Table of Contents
- Database Access Audit Trails: What to Log and Why
- Why Audit Trails Matter Beyond Compliance
- The Four Questions Every Audit Trail Must Answer
- MySQL Built-in Logging: What You Get Out of the Box
- Audit Plugins: Purpose-Built Logging
- Application-Level Audit Logging
- What Compliance Frameworks Actually Require
- Storage, Retention, and Tamper Protection
- Practical Implementation: A Tiered Approach
- FAQ
- Conclusion
Database Access Audit Trails: What to Log and Why
When a security incident happens — a data breach, an unauthorized export, a suspicious deletion — the first question is always the same: who accessed the database, and what did they do?
If you cannot answer that question with specifics — exact timestamps, source IPs, usernames, and queries — you have a gap that no amount of retrospective investigation can fill. A database access audit trail is the system that provides those answers, and for most organizations, it is either missing entirely or configured so poorly that it is useless when it matters.
This is not just a compliance checkbox. Audit trails are how you reconstruct incidents, prove innocence, identify insider threats, debug access issues, and demonstrate to auditors that your data handling meets the standard your customers expect. The cost of implementing them is trivial. The cost of not having them is discovered only when you need them and they are not there.
This guide covers what to log, how to configure it in MySQL and MariaDB, what compliance frameworks require, and how to build a practical audit system that works without drowning you in data.
Why Audit Trails Matter Beyond Compliance
Compliance is the most commonly cited reason for database audit logging, but it is not the most important one.
Incident response. When a breach occurs, the audit trail is your forensic record. Without it, you are guessing. The IBM 2025 Cost of a Data Breach Report found that organizations with comprehensive logging and monitoring detected breaches 108 days faster on average than those without. At $4.88 million average breach cost, those 108 days translate directly into money — every day a breach goes undetected increases the cost by roughly $45,000.
Insider threat detection. According to the 2025 Verizon DBIR, 15% of data breaches involved internal actors. These are not always malicious — many are accidental, like a developer running a DELETE without a WHERE clause. Audit trails help you distinguish between malicious intent and honest mistakes, and they deter both.
Accountability. When everyone knows that database access is logged, behavior changes. Shared credentials disappear. Developers stop running ad-hoc queries on production without a reason. Access requests go through proper channels. The audit trail creates accountability even when nobody ever reads the logs.
Debugging access issues. "I cannot connect to the database" is a common support request. Audit logs show you whether the connection attempt was made, whether it was authenticated, and why it failed — saving hours of back-and-forth troubleshooting.
Customer trust. If you handle customer data, being able to tell them exactly who accessed their records and when is a differentiator. It is also increasingly a legal requirement.
The Four Questions Every Audit Trail Must Answer
A useful database access audit trail answers four questions for every interaction:
1. Who
The identity of the person or system that accessed the database. This means:

- The MySQL username used for authentication
- The application or tool used to connect (phpMyAdmin, Adminer, MySQL Workbench, application code)
- Ideally, the real human identity behind the connection (which requires application-level logging if multiple people share a MySQL user)
2. When
Timestamps for:

- Connection established
- Each query or command executed
- Connection closed or timed out
Timestamps must be in a consistent timezone (UTC is the standard) and precise enough to correlate with other log sources. Second-level precision is the minimum; millisecond precision is preferable.
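If MySQL's own logs are part of your trail, the log_timestamps system variable (MySQL 5.7+) controls the timezone used in the error log, general log, and slow query log; setting it to UTC keeps those sources consistent with your other logs:

```sql
-- Write error, general, and slow query log timestamps in UTC (MySQL 5.7+)
SET GLOBAL log_timestamps = 'UTC';
-- To survive restarts, use SET PERSIST (MySQL 8.0+) or add
-- log_timestamps = UTC under [mysqld] in my.cnf
```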
3. What
The action performed:

- Authentication events (login success, login failure, logout)
- Queries executed (SELECT, INSERT, UPDATE, DELETE)
- Schema changes (CREATE, ALTER, DROP, GRANT, REVOKE)
- Administrative commands (FLUSH, KILL, SET GLOBAL)
For compliance purposes, schema changes and administrative commands should always be logged regardless of whether full query logging is enabled.
4. From Where
The source of the connection:

- Source IP address
- Source port
- Hostname (if DNS reverse lookup is configured)
- For web-based tools, the user's browser IP and the tool's connection IP may differ — both should be logged
MySQL Built-in Logging: What You Get Out of the Box
MySQL provides three logging mechanisms out of the box. Each serves a different purpose, and none is a complete audit solution by itself.
General Query Log
The general log records every statement received by the MySQL server, including SELECT, INSERT, UPDATE, DELETE, DDL, and administrative commands.
Enable it:
SET GLOBAL general_log = 'ON';
SET GLOBAL general_log_file = '/var/log/mysql/general.log';
-- Or log to a table instead:
SET GLOBAL log_output = 'TABLE';
-- Queries go to mysql.general_log table
What it captures:

- Timestamp
- Thread ID
- Command type (Connect, Query, Quit)
- The full SQL statement
- The MySQL user and source host for connection events
What it does not capture:

- The result of the query (success or failure)
- How many rows were affected
- How long the query took
- The application that sent the query (it only sees the MySQL user)
The problem: The general log has a significant performance impact on busy servers. Every single query is written to disk or table before it can complete. On a server handling 10,000 queries per second, this generates gigabytes of log data per day. It is not designed for production use at scale.
Best use: Enable it temporarily during incident investigations or on low-traffic development/staging servers.
Slow Query Log
The slow query log records queries that exceed a configurable execution time threshold.
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1; -- Log queries slower than 1 second
SET GLOBAL log_queries_not_using_indexes = 'ON'; -- Also log queries without index use
This is a performance tool, not a security tool. It does not capture who ran the query in a meaningful way for auditing, and it only logs queries that are slow — a fast SELECT * FROM users that exfiltrates your entire user table will not appear if it completes in under a second.
Error Log
The error log captures server-level events including startup/shutdown, replication errors, and — critically for security — authentication failures.
SET GLOBAL log_error_verbosity = 3; -- Maximum verbosity
Authentication failures appear as entries like:
2026-04-10T14:23:17.234Z [Warning] Access denied for user 'root'@'198.51.100.42' (using password: YES)
This is valuable for detecting brute-force attacks but does not log successful authentication or any queries.
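As a sketch of how these entries can be consumed, the following parses the access-denied format shown above with a regular expression. Note that real MySQL 8.0 error logs include additional fields (a thread ID and an error code such as MY-010926), so the pattern would need adjusting for your server version:

```python
import re

# Matches the simplified access-denied format shown above, e.g.:
# 2026-04-10T14:23:17.234Z [Warning] Access denied for user 'root'@'198.51.100.42' (using password: YES)
ACCESS_DENIED = re.compile(
    r"^(?P<ts>\S+)\s+\[Warning\]\s+Access denied for user "
    r"'(?P<user>[^']*)'@'(?P<host>[^']*)'"
)

def parse_auth_failure(line):
    """Return (timestamp, user, source_host) for an access-denied entry, or None."""
    m = ACCESS_DENIED.search(line)
    if not m:
        return None
    return m.group("ts"), m.group("user"), m.group("host")

line = ("2026-04-10T14:23:17.234Z [Warning] Access denied for user "
        "'root'@'198.51.100.42' (using password: YES)")
print(parse_auth_failure(line))
# → ('2026-04-10T14:23:17.234Z', 'root', '198.51.100.42')
```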
The Gap
None of MySQL's built-in logging provides a complete audit trail. The general log captures too much with too little context. The slow query log captures too little. The error log only captures failures. For a proper database audit log, you need a purpose-built audit plugin.
Audit Plugins: Purpose-Built Logging
MariaDB Audit Plugin
The MariaDB Audit Plugin is open source and works with MariaDB and with MySQL 5.5 through 5.7 (the plugin API changed in MySQL 8.0, where it is not supported). It logs connections, disconnections, and queries with configurable filtering.
Installation:
INSTALL SONAME 'server_audit';
SET GLOBAL server_audit_logging = 'ON';
SET GLOBAL server_audit_events = 'CONNECT,QUERY_DDL,QUERY_DML,QUERY_DCL';
SET GLOBAL server_audit_file_path = '/var/log/mysql/audit.log';
SET GLOBAL server_audit_file_rotate_size = 104857600; -- 100MB rotation
SET GLOBAL server_audit_file_rotations = 10;
What it captures:
20260410 14:30:22,db_server,dev_alice,198.51.100.42,15,123,QUERY,app_db,'SELECT * FROM orders WHERE user_id = 42',0
Fields: timestamp, server host, user, source IP, connection ID, query ID, event type, database, query text, return code.
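A minimal Python parser for this line format is sketched below. It handles QUERY events only — CONNECT and DISCONNECT entries carry fewer fields, and embedded quotes in query text would need additional care:

```python
def parse_audit_line(line):
    # The first eight comma-separated fields never contain commas;
    # the quoted query text can, so split the head and tail separately.
    (timestamp, serverhost, user, host,
     conn_id, query_id, operation, database, tail) = line.split(",", 8)
    query, _, retcode = tail.rpartition(",")
    return {
        "timestamp": timestamp,
        "serverhost": serverhost,
        "user": user,
        "host": host,
        "connection_id": int(conn_id),
        "query_id": int(query_id),
        "operation": operation,
        "database": database,
        "query": query.strip("'"),
        "retcode": int(retcode),
    }

entry = ("20260410 14:30:22,db_server,dev_alice,198.51.100.42,15,123,"
         "QUERY,app_db,'SELECT * FROM orders WHERE user_id = 42',0")
print(parse_audit_line(entry)["user"])  # → dev_alice
```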
The server_audit_events variable lets you filter by event type:
- CONNECT — successful and failed login attempts
- QUERY_DDL — schema changes (CREATE, ALTER, DROP)
- QUERY_DML — data changes (INSERT, UPDATE, DELETE)
- QUERY_DCL — permission changes (GRANT, REVOKE)
- QUERY_DML_NO_SELECT — data changes without SELECT (reduces volume)
Performance impact: Significantly lower than the general log because filtering happens before write. Logging only CONNECT and QUERY_DDL has negligible impact on most workloads. Adding QUERY_DML increases I/O proportionally to your write throughput.
Percona Audit Log Plugin
Percona Audit Log Plugin ships with Percona Server for MySQL and logs to JSON or XML format, making it easier to ingest into log management systems like the ELK stack or Splunk.
INSTALL PLUGIN audit_log SONAME 'audit_log.so';
-- audit_log_format is read-only; set it in my.cnf before startup:
--   [mysqld]
--   audit_log_format = JSON
SET GLOBAL audit_log_policy = 'ALL'; -- LOGINS, QUERIES, or ALL
JSON output format:
{
  "audit_record": {
    "name": "Query",
    "timestamp": "2026-04-10T14:30:22Z",
    "connection_id": 15,
    "status": 0,
    "user": "dev_alice",
    "priv_user": "dev_alice",
    "host": "198.51.100.42",
    "db": "app_db",
    "command_class": "select",
    "sqltext": "SELECT * FROM orders WHERE user_id = 42"
  }
}
The structured format makes it straightforward to parse, filter, and alert on specific patterns — for example, any query that contains DROP TABLE or any connection from an unexpected IP.
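A sketch of that kind of filtering over the JSON records is below; the host allowlist and the statement patterns are hypothetical placeholders to replace with your own environment's values:

```python
import json

SUSPICIOUS_SQL = ("DROP TABLE", "DROP DATABASE", "TRUNCATE")
EXPECTED_HOSTS = {"10.0.1.5", "10.0.1.6"}  # hypothetical application servers

def flag_record(raw):
    """Return a list of reasons a JSON audit record looks suspicious."""
    record = json.loads(raw)["audit_record"]
    reasons = []
    sql = record.get("sqltext", "").upper()
    if any(pattern in sql for pattern in SUSPICIOUS_SQL):
        reasons.append("destructive statement")
    if record.get("host") not in EXPECTED_HOSTS:
        reasons.append(f"unexpected source host {record.get('host')}")
    return reasons

raw = ('{"audit_record": {"name": "Query", "user": "dev_alice", '
       '"host": "198.51.100.42", "sqltext": "DROP TABLE orders"}}')
print(flag_record(raw))  # both checks fire for this record
```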
MySQL Enterprise Audit
MySQL Enterprise Audit is available only in MySQL Enterprise Edition (paid). It provides the most comprehensive filtering, JSON and XML output, and integration with Oracle's audit vault. If you are already paying for MySQL Enterprise, use it. If not, the MariaDB or Percona plugins provide equivalent functionality for free.
Application-Level Audit Logging
Database-level audit plugins capture queries and connections, but they have a fundamental limitation: they see MySQL users, not people. If your application uses a connection pool with a single MySQL user, every query appears to come from app_user@10.0.1.5 regardless of which human initiated it.
Application-level logging bridges this gap:
Map queries to users. When your application executes a database query on behalf of a logged-in user, log the query along with the user's identity, the endpoint that triggered it, and the request ID for correlation.
Log access through database management tools. When a developer uses phpMyAdmin, Adminer, or a managed tool like DBEverywhere to access the database, the tool should log the session — who authenticated, when the session started, when it ended, and which database was accessed. DBEverywhere logs every session with the authenticated user, source IP, target database, and session duration.
Capture business context. A database audit log shows DELETE FROM orders WHERE id = 1234. An application audit log shows "User admin@company.com deleted order #1234 via the admin panel at 2026-04-10 14:30:22." The second version is what an auditor or incident responder actually needs.
Implementation pattern:
import logging
from datetime import datetime, timezone

from flask import request  # assumes a Flask request context; adapt for your framework

audit_logger = logging.getLogger('audit')

def execute_query(user_id, user_email, query, params, request_id):
    # Write the audit entry before executing, so even failed queries are recorded
    audit_logger.info(
        "db_query",
        extra={
            "user_id": user_id,
            "user_email": user_email,
            "query": query,
            "params": str(params),
            "request_id": request_id,
            "source_ip": request.remote_addr,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        },
    )
    return db.execute(query, params)  # db is your application's connection handle
The ideal setup combines both layers: database-level audit logging catches everything (including queries that bypass the application), and application-level logging adds the human context that database logs lack.
What Compliance Frameworks Actually Require
Different frameworks have different requirements, but they converge on the same core expectation: you must be able to demonstrate who accessed sensitive data, when, and whether that access was authorized.
SOC 2
SOC 2 Trust Services Criteria most relevant to database auditing:
- CC6.1: Logical access security — the organization restricts and monitors access to information assets. Requires demonstrating that database access is logged and that logs are reviewed.
- CC7.2: The organization monitors system components and their operation to detect anomalies. This includes monitoring database access patterns for unusual activity.
- CC7.3: The organization evaluates security events to determine whether they constitute incidents. Audit logs are the raw material for this evaluation.
In practice, SOC 2 auditors want to see: logs of who accessed the database, evidence that those logs are reviewed regularly, and evidence that anomalous access triggers an investigation.
GDPR
GDPR does not prescribe specific logging mechanisms, but:
- Article 5(1)(f) requires "appropriate security" of personal data, which regulators have consistently interpreted to include access logging.
- Article 30 requires records of processing activities, including who has access to personal data.
- Article 33 requires breach notification within 72 hours — which is impossible without audit logs that tell you what was accessed and when.
The Irish Data Protection Commission's enforcement actions have repeatedly cited inadequate logging as a contributing factor in GDPR penalty calculations.
HIPAA
The HIPAA Security Rule is the most explicit:
- 45 CFR 164.312(b) requires "mechanisms to record and examine activity in information systems that contain or use electronic protected health information." This is a direct requirement for database audit logging.
- 45 CFR 164.312(d) requires "procedures to verify that a person or entity seeking access to ePHI is the one claimed." Audit logs of authentication events satisfy this requirement.
PCI DSS
PCI DSS Requirement 10 is entirely about logging:
- 10.2.1: Logs of all individual user accesses to cardholder data
- 10.2.2: Logs of all actions taken by anyone with root or administrative privileges
- 10.2.4: Invalid logical access attempts
- 10.2.7: Creation and deletion of system-level objects
- 10.7: Retain logs for at least one year, with the most recent three months immediately available
PCI DSS is the most prescriptive framework and the closest to a technical specification for audit logging.
Storage, Retention, and Tamper Protection
Audit logs are only useful if they are available when you need them, have not been modified, and cover the time period in question.
Retention Periods
- PCI DSS: 1 year minimum, 3 months immediately accessible
- SOC 2: No specific retention period, but auditors typically expect 12 months
- HIPAA: 6 years for the policies and procedures that require logging (the regulation is about retaining the documentation, not the logs themselves, but 6 years of logs is the safe interpretation)
- GDPR: As long as necessary to demonstrate compliance — practical minimum is 2-3 years
- Practical recommendation: 12 months of immediately accessible logs, 3-5 years in cold storage (S3 Glacier, etc.)
Storage Sizing
A rough estimate for MySQL audit log storage:
- Average audit log entry: 200-500 bytes
- A server handling 1,000 queries per minute logging all DML and DDL: approximately 1-2 GB per day
- A server logging only connections and DDL: approximately 10-50 MB per day
- 12 months of full DML logging: 365-730 GB
- 12 months of connection + DDL only: 3.6-18 GB
The cost of storing 700 GB on S3 Standard for a year is approximately $16/month. Storage cost is not a valid reason to skip audit logging.
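These estimates can be sanity-checked with simple arithmetic; the query rate and per-entry size below are assumptions to replace with your own measurements (using the high end of the 200-500 byte range lands in the same ballpark as the figures above):

```python
def daily_log_bytes(queries_per_minute, avg_entry_bytes=500):
    """Rough audit log volume per day: entries/day * bytes/entry."""
    return queries_per_minute * 60 * 24 * avg_entry_bytes

per_day = daily_log_bytes(1_000)  # 1,000 queries/minute, all logged
per_year = per_day * 365
print(f"{per_day / 1e9:.2f} GB/day, {per_year / 1e9:.0f} GB/year")
# → 0.72 GB/day, 263 GB/year
```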
Tamper Protection
An attacker who compromises the database server may attempt to modify or delete audit logs to cover their tracks. Mitigations:
- Ship logs off-server in real time. Use rsyslog, Filebeat, Fluentd, or a similar agent to forward audit logs to a centralized log management system (ELK, Splunk, CloudWatch Logs, Datadog) as they are generated. The attacker would need to compromise both the database server and the log management system.
- Write-once storage. S3 Object Lock, Azure Immutable Blob Storage, or GCP Bucket Lock prevent modification or deletion of log files for a configured retention period.
- Log integrity verification. Some audit plugins support log signing. At minimum, compute SHA-256 hashes of log files as they rotate and store the hashes separately.
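A minimal sketch of the hash-on-rotate step; the filename is illustrative, and in practice you would invoke this from your log rotation hook and ship the digest to separate storage:

```python
import hashlib
from pathlib import Path

def hash_log_file(path):
    """Stream the file through SHA-256 so large logs need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

rotated = Path("audit.log.1")  # illustrative rotated log name
rotated.write_bytes(b"20260410 14:30:22,db_server,dev_alice,...\n")
print(rotated.name, hash_log_file(rotated))
```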
Practical Implementation: A Tiered Approach
Not every database needs the same level of audit logging. Here is a practical tiered approach:
Tier 1: Minimum (Every Database)
- Authentication logging: All successful and failed login attempts, including source IP and username. Use the MySQL error log (for failures) and the MariaDB Audit Plugin with server_audit_events = 'CONNECT' (for all connection events).
- DDL logging: All schema changes: server_audit_events = 'CONNECT,QUERY_DDL,QUERY_DCL'.
- Retention: 12 months.
- Performance impact: Negligible.
Tier 2: Standard (Databases with Sensitive Data)
- Everything in Tier 1, plus:
- DML logging for sensitive tables: Log INSERT, UPDATE, DELETE on tables containing PII, financial data, or health records. Use the audit plugin's filtering to target specific databases or tables.
- Application-level logging: Map database queries to authenticated users and log the business context.
- Centralized log shipping: Forward logs to a separate system in real time.
- Retention: 3 years.
Tier 3: Full (Regulated Environments)
- Everything in Tier 2, plus:
- Full query logging: All DML including SELECT, on all tables. Be prepared for the storage and performance implications.
- Tamper-proof storage: Write-once cloud storage with integrity verification.
- Automated alerting: Alerts on failed login spikes, access from new IPs, DDL in production, and queries against sensitive tables outside business hours.
- Regular log review: Weekly or monthly review of audit logs for anomalous patterns, documented and signed off.
- Retention: 6 years.
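The failed-login alert mentioned above can be sketched as a per-host counter over error log lines; the threshold is an arbitrary placeholder to tune against your own baseline:

```python
from collections import Counter

# Hypothetical threshold: alert when one source IP produces more than
# this many authentication failures in the batch of lines being scanned.
FAILURE_THRESHOLD = 5

def failed_login_alerts(lines, threshold=FAILURE_THRESHOLD):
    """Count access-denied entries per source host and flag spikes."""
    failures = Counter()
    for line in lines:
        if "Access denied for user" in line:
            # The source host is quoted after the '@' in the log entry
            host = line.split("@'", 1)[1].split("'", 1)[0]
            failures[host] += 1
    return [host for host, n in failures.items() if n > threshold]

lines = ["... Access denied for user 'root'@'198.51.100.42' (using password: YES)"] * 6
print(failed_login_alerts(lines))  # → ['198.51.100.42']
```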
Most startups should start at Tier 1 immediately — it is free, takes 10 minutes to configure, and satisfies the baseline requirement for every compliance framework. Move to Tier 2 when you handle customer data. Move to Tier 3 when you are subject to HIPAA, PCI DSS, or enterprise customer security reviews.
For teams using a managed database access tool, check what logging the tool provides. DBEverywhere logs all sessions — who authenticated, when, from which IP, and which database was accessed — providing application-level audit data without additional configuration.
FAQ
Does enabling audit logging slow down my database?
It depends on what you log. Logging only connection events and DDL statements has negligible performance impact — typically less than 1% overhead on query throughput. Logging all DML (INSERT, UPDATE, DELETE) adds I/O proportional to your write throughput, generally in the range of 2-5% overhead. Logging all queries including SELECT (the general log approach) can add 10-25% overhead on read-heavy workloads due to the I/O required. For most use cases, Tier 1 or Tier 2 logging has no noticeable impact.
Can I use MySQL's general log as my audit trail?
Technically yes, practically no. The general log is not designed for auditing — it lacks structured output, has no filtering, generates massive volumes, and has significant performance overhead. It also does not distinguish between authentication events, DDL, DML, and administrative commands in a way that is easy to filter or alert on. Use a purpose-built audit plugin (MariaDB Audit Plugin, Percona Audit Log, or MySQL Enterprise Audit) instead. Reserve the general log for temporary debugging during incident response.
What should I do if an auditor asks for database access logs and I do not have them?
Be transparent about the current state and present a remediation plan with a timeline. Install an audit plugin immediately — the MariaDB Audit Plugin can be installed and configured in under 30 minutes without a server restart. Enable Tier 1 logging (connections + DDL) as a baseline. Document the gap and the remediation date in your risk register. Going forward, the logs will be available. For the historical gap, you may be able to partially reconstruct access from web server logs, application logs, SSH auth logs, or cloud provider audit trails (AWS CloudTrail, etc.) to cover the period.
How do I log database access through phpMyAdmin or Adminer?
phpMyAdmin and Adminer connect to MySQL as a regular MySQL user, so queries executed through them appear in the database audit log attributed to the MySQL user they authenticated as. The limitation is that the audit log does not tell you which person was using phpMyAdmin — it only sees the MySQL user. To close this gap, use a tool that adds its own access logging layer. DBEverywhere logs every session with the authenticated user's identity, the source IP, and the session timestamps, providing the human-to-query attribution that raw database logs lack.
Do managed database providers (AWS RDS, DigitalOcean) provide audit logging?
Yes, most managed providers offer some form of audit logging. AWS RDS supports the MariaDB Audit Plugin for MariaDB instances and offers CloudWatch Logs integration. RDS for MySQL supports the MySQL audit log plugin on Enterprise-compatible editions. DigitalOcean Managed Databases provide connection logging through their metrics dashboard but not full query-level auditing. Google Cloud SQL supports the Cloud SQL Audit Log in Cloud Logging. Check your specific provider's documentation — the feature is usually available but must be explicitly enabled and may incur additional storage charges.
Conclusion
A database access audit trail is not a nice-to-have. It is the difference between "we were breached and we know exactly what was accessed" and "we were breached and we have no idea what happened."
The minimum implementation — install an audit plugin, log connections and DDL, retain for 12 months — takes 30 minutes and costs nothing. The storage for a year of connection and DDL logs for a typical application fits on a USB drive. There is no technical or financial reason to skip it.
Start with Tier 1 today: install the MariaDB Audit Plugin or Percona Audit Log, enable connection and DDL logging, and ship the logs to a centralized location. When you need to prove who accessed what, you will have the answer.
For teams that access databases through a browser-based tool, DBEverywhere adds application-level session logging on top of whatever database-level logging you configure — giving you the complete picture of who connected, when, from where, and which database they accessed. Combined with database-level query logging, this covers all four questions an auditor will ask.
The worst time to set up audit logging is after the breach. The best time was a year ago. The second best time is right now.
Try DBEverywhere Free
Access your database from any browser. No installation, no Docker, no SSH tunnels.
Get Started