
Managing long-running queries and locks in MySQL 8 on AWS RDS

written by Zac

When I was a young boy out on the prairie I dreamed of managing Relational Databases, if you’re reading this you must feel the same way.

Impacts of Long-Running Transactions:

Long-running transactions can have several negative impacts on database performance and availability:

  1. Lock Contention: Long-running transactions can lead to lock contention, where multiple transactions compete for the same resources, causing delays and potential deadlocks.

  2. Resource Utilisation: Long-running transactions consume database resources, such as memory and CPU, for an extended period, reducing the system's overall capacity to handle new inbound transactions.

  3. Data Consistency: Prolonged locks held by long-running transactions can prevent concurrent transactions from accessing or modifying data, potentially leading to errors in applications waiting on this data.
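
As a concrete example of the third point, a blocked session only waits as long as innodb_lock_wait_timeout (50 seconds by default) before InnoDB gives up, and the application sees an error rather than a result:

SHOW VARIABLES LIKE 'innodb_lock_wait_timeout';
-- a statement blocked for longer than this fails with:
-- ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction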

Identifying Long-Running Transactions:

Identifying and monitoring long-running transactions is crucial for maintaining database performance and stability. MySQL provides several tools and techniques for identifying these transactions. The simplest is the SHOW PROCESSLIST; command, which displays information about active database connections and their associated queries. Long-running transactions can be identified by examining the execution time of each query, and can then be killed with KILL <pid>; or CALL mysql.rds_kill(<pid>); (the RDS-provided equivalent).
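
For example, to terminate an offending connection (12345 is a placeholder; substitute the Id value reported by SHOW PROCESSLIST):

KILL 12345;
-- or, on RDS, the provided procedure:
CALL mysql.rds_kill(12345);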

To level up from here, we can query information_schema.processlist, which gives us the ability to filter active sessions based on the USER, COMMAND and TIME of the transaction. For example:

SHOW PROCESSLIST;
SELECT id, user, host, db, command, time, state, info
FROM information_schema.PROCESSLIST pl
WHERE pl.TIME > 300
AND pl.COMMAND NOT IN ('Daemon', 'Connect', 'Binlog Dump', 'Sleep');
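
If you want to act on the result rather than just inspect it, the same filter can emit the kill statements for you; this is a sketch, so review the generated list before running anything it produces:

SELECT CONCAT('CALL mysql.rds_kill(', id, ');') AS kill_statement
FROM information_schema.PROCESSLIST
WHERE TIME > 300
AND COMMAND NOT IN ('Daemon', 'Connect', 'Binlog Dump', 'Sleep');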

Stored procedures, however, present a subtly different challenge to visibility and transparency. By default, procedures execute as the user that defined them, which is controlled by the SQL SECURITY characteristic: it specifies the security context used when checking access privileges at routine execution time. What this means for us is that if user X creates a procedure and user Y then executes it, the information_schema.processlist table will show user X as the one running it. To get around this and trace the source of the invocation, we can join against the performance_schema.threads table. In the example below, th.PROCESSLIST_USER corresponds to the user that invoked the stored procedure.

SELECT pl.ID, th.PROCESSLIST_ID, pl.USER, th.PROCESSLIST_USER, pl.INFO, pl.TIME, pl.STATE
FROM information_schema.processlist pl
JOIN performance_schema.threads th ON pl.ID = th.PROCESSLIST_ID;
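
For reference, the SQL SECURITY characteristic mentioned above is declared as part of the routine definition; this minimal sketch uses a hypothetical procedure and table name purely to show the syntax:

CREATE PROCEDURE purge_old_reports()
SQL SECURITY DEFINER  -- the default; INVOKER checks privileges as the calling user instead
DELETE FROM reports WHERE created_at < NOW() - INTERVAL 90 DAY;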

Lastly, there are times when transactions compete for resources and cause lock waits. To find these transactions we'll make use of the performance_schema.data_locks table. Combined with the threads and processlist tables, we can create a query such as the following to inspect locks. The lock type, mode, and status values depend on the storage engine you're using; for InnoDB the possible values are:

  • LOCK_TYPE: Permitted values are RECORD for a row-level lock and TABLE for a table-level lock.

  • LOCK_MODE: How the lock is requested, for example S, X, IS or IX, optionally combined with modifiers such as GAP.

  • LOCK_STATUS: Current status of the lock request. Permitted values are GRANTED (lock is held) and WAITING (lock is being waited for).

SELECT pl.ID, th.PROCESSLIST_USER, pl.TIME, pl.STATE, dl.LOCK_TYPE, dl.LOCK_MODE, dl.LOCK_STATUS
FROM performance_schema.data_locks dl
JOIN performance_schema.threads th ON th.THREAD_ID = dl.THREAD_ID
JOIN information_schema.processlist pl ON th.PROCESSLIST_ID = pl.ID
WHERE pl.COMMAND NOT IN ('Daemon', 'Connect', 'Binlog Dump')
AND pl.TIME >= 300;
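
To see a WAITING row appear in that output, you can reproduce a lock wait in a test environment; this sketch assumes a hypothetical orders table:

-- session 1: open a transaction and hold a row lock
START TRANSACTION;
SELECT * FROM orders WHERE id = 1 FOR UPDATE;

-- session 2: blocks behind session 1 until it commits, rolls back, or the lock wait times out
UPDATE orders SET status = 'shipped' WHERE id = 1;

While session 2 is blocked, its record lock shows up in the query above with LOCK_STATUS = 'WAITING' (drop the pl.TIME filter while testing, since the lock wait timeout will cancel session 2 long before the 300-second mark).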

This merely scratches the surface of the toolkit in our Database Optimisation arsenal, although now you'll be able to feign genius to your next hot date as you regale them with the inner workings of the InnoDB Storage Engine.
