Gusman - 1 year ago
MySQL Question

Correct isolation level for sequential counter

I'm working on a billing system (C# code, MySQL cluster backend, InnoDB storage engine) and have a question about how to generate a truly unique sequence number for invoices.

In other systems I have usually created a single dedicated service to hand out invoice numbers: whenever an invoice was generated, I asked that service for a number, and since the service guaranteed exclusive access to the table holding the counter, there was no problem at all.

But this new system is clustered for high availability, so that approach is not acceptable, as multiple instances of the service must be running.

So the logic I'm applying here is this:

  • start transaction

  • create invoice with no serial number

  • retrieve serial counter

  • write new counter to table

  • update invoice

  • commit
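In SQL terms, the steps above might look roughly like this (a sketch only: the `invoices` columns and its auto-increment `id` key are assumptions, not the poster's actual schema; only `counters.no` and `type = 'INVOICE'` come from the discussion below):

```sql
start transaction;
-- create the invoice without a serial number yet
insert into invoices (customer_id, total, serial_no) values (42, 100.00, null);
-- retrieve the current counter into a session variable
select no into @n from counters where type = 'INVOICE';
-- write the new counter back
update counters set no = @n + 1 where type = 'INVOICE';
-- stamp the retrieved number onto the invoice just created
update invoices set serial_no = @n where id = last_insert_id();
commit;
```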

If I'm not wrong, if some other transaction updates the counter before the current transaction finishes, the commit will throw an exception and I can then retry the operation; that will ensure the sequentiality of invoice numbers.

So my question is: which is the correct isolation level to achieve this? Is READ COMMITTED enough, or may it yield duplicates? Or is there a better approach altogether?

Answer Source

Actually, both isolation levels would give you trouble if you are not careful.

Apart from technical differences (e.g. how many rows they lock), READ COMMITTED and REPEATABLE READ differ in how they handle the following situation:

start transaction;
select no from counters where type = 'INVOICE';
-- some other session changes the value and commits it
select no from counters where type = 'INVOICE';

READ COMMITTED would give you two different results; REPEATABLE READ would give you the old value for both selects. But neither isolation level prevents anyone else from changing the value, so you don't want either situation.

The important thing is to lock the row you are going to change, so that no one else can change it:

start transaction;
select no from counters where type = 'INVOICE' for update;
-- compute @newvalue from the selected value in your application
update counters set no = @newvalue where type = 'INVOICE';
commit;

or do the actual update first if the calculation is simple:

start transaction;
update counters set no = no + 1 where type = 'INVOICE';
select no from counters where type = 'INVOICE';
commit;
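There is also a well-known MySQL idiom for this update-first pattern, described in the manual's documentation of `LAST_INSERT_ID(expr)`, which reads the new value back without a second visit to the row; the returned value is per-connection, so other sessions cannot interfere with it:

```sql
update counters set no = last_insert_id(no + 1) where type = 'INVOICE';
select last_insert_id();  -- the value just generated, local to this connection
```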

Assuming the counter lives in a small table of its own like the one queried above (and you don't query e.g. select max(no) from invoices to get the last number), both isolation levels will work. They will mainly differ in how many rows they lock. If you have an index on type (as in my example), they will behave exactly the same.
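For concreteness, a minimal counters table consistent with the queries above might look like this (column types are assumptions; the primary key provides the index on type mentioned above):

```sql
create table counters (
    type varchar(32) not null,
    no   bigint unsigned not null,
    primary key (type)
) engine = innodb;

insert into counters (type, no) values ('INVOICE', 0);
```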

The decision would then depend on the rest of your queries. REPEATABLE READ is usually a good and safe choice (and InnoDB's default for a reason); if you lower it, you may have to think harder about potential problems, but you might gain some performance and less blocking.

You didn't specify how you set up the cluster; you obviously have to make sure all nodes use the same counter table, or use different offsets on your masters (in the spirit of auto_increment_increment/auto_increment_offset).

To your question about what happens when another transaction tries to change the value: the second transaction will wait at the point where it needs the locked resource until the first one releases it (usually when it commits), and you will only get an exception when a timeout is reached (or a deadlock is detected).
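You can see this with two concurrent sessions (a sketch; the session markers are comments, and the timeout is governed by innodb_lock_wait_timeout):

```sql
-- session 1:
start transaction;
select no from counters where type = 'INVOICE' for update;  -- acquires the row lock

-- session 2, meanwhile:
select no from counters where type = 'INVOICE' for update;
-- ...blocks here until session 1 commits, or fails with
-- ERROR 1205 (HY000): Lock wait timeout exceeded

-- session 1:
commit;  -- session 2's select now returns the committed value
```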
