I am looking for a good strategy for dealing with database deadlocks from within a Java 6 application; several parallel threads could, potentially, write into the same table at the same time. The database (Ingres RDBMS) will randomly kill one of the sessions if it detects a deadlock.
What would be an acceptable technique to deal with the deadlock situation, given the following requirements?
- the total elapsed time should be kept as small as reasonably possible
- killing a session will incur a significant (measurable) rollback time
- threads have no way to communicate with each other, i.e. the strategy should be autonomous
So far, the strategy I came up with is something along these lines:
short attempts = 0;
boolean success = false;
long delayMs = 0;
Random random = new Random();   // java.util.Random

do {
    try {
        // insert loads of records into table x
        success = true;
    } catch (ConcurrencyFailureException e) {
        attempts++;
        success = false;
        // linear backoff with jitter: 1-2s after the first failure, 2-4s after the second, ...
        delayMs = 1000 * attempts + random.nextInt(1000 * attempts);
        try {
            Thread.sleep(delayMs);
        } catch (InterruptedException ie) {
            // restore the interrupt flag instead of swallowing it
            Thread.currentThread().interrupt();
        }
    }
} while (!success);
Can it be improved in any way, e.g. by waiting for a fixed (magic number) amount of seconds instead? Is there a different strategy that will produce better results?
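For comparison, here is a sketch of one alternative I have been considering: capped exponential backoff with full jitter, giving up and rethrowing after a fixed number of attempts. The class name, the MAX_ATTEMPTS / BASE_DELAY_MS / MAX_DELAY_MS constants and the BatchInsert callback are purely illustrative, not part of any existing code; ConcurrencyFailureException is the same Spring exception used in the loop above.

import java.util.Random;
import org.springframework.dao.ConcurrencyFailureException;

public class DeadlockRetry {

    private static final int MAX_ATTEMPTS = 6;       // illustrative retry cap
    private static final long BASE_DELAY_MS = 500;   // illustrative base delay
    private static final long MAX_DELAY_MS = 30000;  // upper bound on a single wait

    private final Random random = new Random();

    /** Runs the insert, retrying on deadlock with capped exponential backoff plus full jitter. */
    public void insertWithRetry(BatchInsert insert) throws InterruptedException {
        for (int attempt = 1; ; attempt++) {
            try {
                insert.run();   // insert loads of records into table x
                return;         // success, stop retrying
            } catch (ConcurrencyFailureException e) {
                if (attempt >= MAX_ATTEMPTS) {
                    throw e;    // give up and let the caller decide
                }
                // ceiling doubles each attempt: 500ms, 1s, 2s, 4s, ... capped at MAX_DELAY_MS
                long ceiling = Math.min(MAX_DELAY_MS, BASE_DELAY_MS << (attempt - 1));
                long delayMs = (long) (random.nextDouble() * ceiling); // full jitter in [0, ceiling)
                Thread.sleep(delayMs);
            }
        }
    }

    /** Minimal callback interface so the example compiles on Java 6 (no lambdas). */
    public interface BatchInsert {
        void run();
    }
}

The two differences from the loop above are the hard cap on attempts, so a pathological deadlock storm eventually surfaces as an error instead of retrying forever, and the full-jitter delay, which should spread competing threads out more evenly than a fixed-base delay plus a small random component.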
Note: Several database level techniques will be used to ensure deadlocks are, in practice, very rare. Also, the application will attempt to avoid scheduling threads that write into the same table at the same time. The situation above will be just a “worst case scenario”.
Note: The table in which records are inserted is organised as a heap partitioned table and has no indexes; each thread will insert records into its own partition.