SDReyes - 1 year ago
MySQL Question

Why can MySQL InnoDB handle concurrent updates when PostgreSQL can't?

Let's imagine you have a table with this definition:

CREATE TABLE public.positions (
    id serial,
    latitude numeric(18,12),
    longitude numeric(18,12),
    updated_at timestamp without time zone
);

And you have 50,000 rows in that table. Now, for testing purposes, you run an update like this:

update positions
set updated_at = now()
where latitude between 234.12 and 235.00;

That statement updates 1,000 of the 50,000 rows (in this specific dataset).

If you run that query concurrently in 30 different threads, MySQL InnoDB will succeed, while PostgreSQL will fail with lots of deadlocks.


Answer Source

Plain old luck, I would say.

If thirty threads go ahead and want to update the same 1000 rows, they can either access these rows in the same order (in which case they will lock out each other and do it sequentially) or in different orders (in which case they will deadlock).
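The locking-order point can be made concrete with two sessions that touch the same two rows in opposite order (the row ids here are illustrative, not from the original dataset):

```sql
-- Session A:
BEGIN;
UPDATE positions SET updated_at = now() WHERE id = 1;

-- Session B:
BEGIN;
UPDATE positions SET updated_at = now() WHERE id = 2;

-- Session A (blocks: session B holds the row lock on id = 2):
UPDATE positions SET updated_at = now() WHERE id = 2;

-- Session B (a lock cycle now exists; PostgreSQL aborts one transaction):
UPDATE positions SET updated_at = now() WHERE id = 1;
-- ERROR:  deadlock detected
```

A single multi-row UPDATE behaves the same way internally: it locks rows one by one in the order the plan produces them, so two statements scanning in different orders can form exactly this cycle.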

That's the same for InnoDB and PostgreSQL.

To analyze why things work out differently in your case, you should start by comparing the execution plans. Maybe then you will get a clue why PostgreSQL does not access all rows in the same order.
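On the PostgreSQL side, you could capture the plan like this; note that EXPLAIN ANALYZE really executes the UPDATE, so wrapping it in a rolled-back transaction keeps the test data untouched:

```sql
BEGIN;
EXPLAIN (ANALYZE, BUFFERS)
UPDATE positions
SET updated_at = now()
WHERE latitude BETWEEN 234.12 AND 235.00;
ROLLBACK;  -- undo the update performed by EXPLAIN ANALYZE
```

The interesting part of the output is the scan node: a "Seq Scan on positions" is what makes the synchronized-scan behavior described below a plausible culprit.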

Lacking this information, I'd guess that you are experiencing a feature introduced in PostgreSQL 8.3 that speeds up concurrent sequential scans:

  • Concurrent large sequential scans can now share disk reads (Jeff Davis)

    This is accomplished by starting the new sequential scan in the middle of the table (where another sequential scan is already in-progress) and wrapping around to the beginning to finish. This can affect the order of returned rows in a query that does not specify ORDER BY. The synchronize_seqscans configuration parameter can be used to disable this if necessary.

Check if your PostgreSQL execution plan uses sequential scans and see if changing synchronize_seqscans avoids the deadlocks.
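Two things worth trying, sketched below. The first simply disables synchronized scans for the current session; the second is a common deadlock-avoidance pattern (not from the original answer) that forces every thread to lock the matching rows in the same deterministic order:

```sql
-- Option 1: turn off synchronized sequential scans for this session.
SET synchronize_seqscans = off;

-- Option 2: lock the target rows in id order before updating them,
-- so all concurrent statements acquire row locks in the same order.
UPDATE positions
SET updated_at = now()
WHERE id IN (
    SELECT id
    FROM positions
    WHERE latitude BETWEEN 234.12 AND 235.00
    ORDER BY id
    FOR UPDATE
);
```

Option 2 trades a little extra work per statement for a guarantee that no lock cycle can form, regardless of scan order.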
