Path: sparky!uunet!dtix!darwin.sura.net!gatech!asuvax!ncar!uchinews!gsbacd.uchicago.edu!cs_mj
From: cs_mj@gsbacd.uchicago.edu (Mark Jaeger)
Newsgroups: comp.databases.ingres
Subject: Re: How must I handle deadlocks in ESQL?
Message-ID: <1992Jul27.130746.1@gsbacd.uchicago.edu>
Date: 27 Jul 92 19:07:46 GMT
References: <847@felix.Sublink.Org>
Sender: news@uchinews.uchicago.edu (News System)
Lines: 101

In article <847@felix.Sublink.Org>, eb@felix.Sublink.Org
(Enrico Badella) writes:
> I admit I'm no RDBMS expert. I'm currently using Ingres 6.2, which used
> to be bundled with SCO-ODT on a 33MHz 386 clone.
> My application must handle spikes of incoming data, and apparently under
> heavy load some DEADLOCK messages appear in errlog.log.
> In my application I used this simple code to handle RDBMS errors:
>
>     EXEC SQL ......
>     if (sqlca.sqlcode < 0)
>         error
>
> The Ingres ESQL User's Guide shows some examples of deadlock handling
> in transactions and suggests the following code:
>
> start:
>     EXEC SQL ......
>     if ((sqlca.sqlcode == DEADLOCK) || (sqlca.sqlcode == FORCEABORT))
>         goto start;
>     if (sqlca.sqlcode < 0)
>         error
>
> At this point I have two questions:
>
> 1) Won't the immediate re-execution of the SQL statement run into another
>    deadlock, or create a worse load condition on the RDBMS? Before
>    restarting the SQL statement, would it be a good idea to sleep for
>    1 or 2 seconds?

If the error that you have encountered is truly deadlock, then the hope
is that the other transaction that caused the deadlock will now be able
to proceed and commit, thereby releasing its locks. The process whose
transaction got aborted will have to wait to acquire its locks until the
other process has finished committing. Since the locks have to be
acquired before the server starts executing the query (in QEF), this
shouldn't create too big a load on the server, apart from the overhead
of the deadlock search and the rollback that the deadlock caused. That
isn't a problem if it occurs rarely; if it's frequent, all that
nonproductive work is a heavy overhead on the server.

It certainly shouldn't hurt to sleep briefly, although another process
might slip in during the sleep, and the process that slept might then
have to wait longer than it should.

According to Derek Wright, one of the principal consultants for Ingres,
it's not a good idea to retry endlessly, as the code above implies. In
his words, if lightning strikes twice in the same place, it's better to
move on. In other words, if you get deadlocks under heavy concurrent
update loads, it's best to examine your table and transaction design to
find out why they are occurring. If I see deadlocks more than a few
times a day (they're reported in errlog.log), I start to worry.

> 2) I wasn't able to try the code because I didn't find the values of the
>    two constants DEADLOCK and FORCEABORT in the manuals or include files.
>    Apparently DEADLOCK is #defined as -4700, but I'm not sure; I just
>    printed the value of sqlca.sqlcode.

I believe that there are three SQLCODEs that can be returned to the
front end that represent "deadlock" (really, retry situations). Only
one of them is truly deadlock, and that's the -4700 you cite. Another
is -4706, which is a log-full error. The third is -4708, but I still
don't know what this is, and I wasn't able to find anyone from Ingres
at the NAIUA in May who could tell me what it is.

You would probably do well to get a later version of INGRES. At some
point (release 6.3?), they introduced generic errors, and I recommend
using them. All of the above errors map to a generic "serialization"
error, which means that the server couldn't execute the given query in
a way that would make the set of all queries thrown at it serializable.
That is, it could only get part way through the given transaction, and
then it had to back out in order to proceed. As I mentioned, this can
be caused by deadlock or by a full log.

The various error codes are defined in ii_config:generr.h, which is
probably the best place to get them from. It's better to test the
generic error, so that all three situations (-4700, -4706, and -4708)
get treated the same way. Also, when Ingres introduces a _new_
serialization error (as they did with release 6), your code will keep
working after a simple recompile, with no source changes.

A full log can be caused by a log file that is too small, by
transactions that are too big, or by one transaction that took too long
to commit.

Deadlock can be caused by a variety of problems. Index overflow is one
of them (look for overflow pages in the iitables system catalog, or
non-btree tables with heavy inserts); modifying the table to btree
might help to avoid this. Another cause is lock escalation (look in the
errlog.log file for indications of this); to solve it, try setting the
lockmode to "level=table". Another general rule is to know your
transactions and always access the tables being updated in the same
order (e.g., alphabetically).

All of the above is really oversimplified. To really solve the problem,
you'll have to take a close look at where the deadlocks are coming from.
--Mark Jaeger                   internet: cs_mj@gsbvax.uchicago.edu
Graduate School of Business     yellnet:  (312) 702-0328
University of Chicago           faxnet:   (312) 702-0233
Disclaimer: My opinions are my own and not those of my employer.
I am a virus. Join in and copy me into your .signature.
