- Path: sparky!uunet!gossip.pyramid.com!pyramid!infmx!davek
- From: davek@informix.com (David Kosenko)
- Newsgroups: comp.databases.informix
- Subject: Re: *** HELP - Index Locks are creating DEADLOCKS ***
- Message-ID: <1992Nov12.160935.21426@informix.com>
- Date: 12 Nov 92 16:09:35 GMT
- References: <1992Nov9.111120.25041@bernina.ethz.ch> <1992Nov10.160339.29203@informix.com> <1992Nov11.120353.14284@bernina.ethz.ch>
- Sender: news@informix.com (Usenet News)
- Organization: Informix Software, Inc.
- Lines: 120
-
- Thomas Schiele writes:
- >The test database:
- > CREATE DATABASE test WITH LOG;
- > CREATE TABLE rel (
- > id CHAR(1),
- > data CHAR(10)
- > )
- > LOCK MODE ROW;
- > INSERT INTO rel VALUES ("1", "one");
- > INSERT INTO rel VALUES ("2", "two");
- > INSERT INTO rel VALUES ("3", "three");
- > INSERT INTO rel VALUES ("4", "four");
- > INSERT INTO rel VALUES ("5", "five");
- >
- >Example 1:
- > We use the test database.
- > User A does the following and stays in transaction mode:
- > BEGIN WORK;
- > DELETE FROM rel WHERE id = "2";
- >
- > User B does the following:
- > BEGIN WORK;
- > DELETE FROM rel WHERE id = "4";
- > result: 244: Could not do a physical-order read to fetch next row.
- > 107: ISAM error: record is locked.
- > It doesn't seem to work as expected.
-
-
- Aha! This is where the explanation of what you are trying to do makes it all
- clear. In this case, a sequential scan is needed to find the row, since no
- indexes are available. But since row "2" is locked, the data cannot be scanned
- past that locked row (it cannot "jump over" a lock), so the lock error is
- returned.
-
- >Example 2:
- > We use our test database.
- > We install an index in our test database:
- > CREATE INDEX relind ON rel (
- > id
- > );
- >
- > User A does the following and stays in transaction mode (same as above):
- > BEGIN WORK;
- > DELETE FROM rel WHERE id = "2";
- >
- > User B does the following:
- > BEGIN WORK;
- > DELETE FROM rel WHERE id = "4";
- > result: 1 row(s) deleted.
- > It works as expected.
-
- That is because by using the index, it no longer has to scan the data
- sequentially, so the locked row is not a problem. If, on the other hand,
- you specified a range (e.g. WHERE id > "1"), you would have run into the
- same problem even with the index, i.e. it could not get past the lock on
- "2" in order to get the whole range.
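-
- For instance, a minimal sketch of the range case, run in the same two
- sessions as above with relind in place:
-
-     -- user A, still inside its transaction
-     BEGIN WORK;
-     DELETE FROM rel WHERE id = "2";
-
-     -- user B
-     BEGIN WORK;
-     DELETE FROM rel WHERE id > "1";
-     -- fails with ISAM error 107 (record is locked), since the range
-     -- scan cannot get past the lock held on row "2"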
-
- In both cases this is because we do not know what the value in the locked
- row is, so it is not "acceptable" to skip over it. By using a unique key
- and specifying that key in the update, you can usually avoid this problem.
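-
- A minimal sketch of that approach, creating the index as unique in the
- first place (the index and key names here just follow the test schema
- above):
-
-     CREATE UNIQUE INDEX relind ON rel (id);
-
-     -- each session now reaches its one row directly through the key,
-     -- so neither transaction has to read past the other's lock
-     BEGIN WORK;
-     DELETE FROM rel WHERE id = "4";
-     COMMIT WORK;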
-
- >Just some questions:
- >1. Does this mean that the delete statement of user A in example 1 places
- > an exclusive lock on the table?
-
- No.
-
- >2. Or does this mean that the delete statement (or select etc.) places
- > locks on all rows (shared or exclusive locks) before (!) evaluating
- > the WHERE clause? (I can't imagine that :-) )
-
- That would be true if you had used an isolation level of REPEATABLE READ
- (whose purpose is to guarantee that the same rows would be returned for
- multiple executions of the query within a transaction, which requires the
- locking of all rows examined to generate the row set satisfying the query
- conditions). Since you are using the default isolation level (COMMITTED
- READ), this would not be the case.
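-
- For reference, the isolation level is chosen per session; a quick
- sketch (COMMITTED READ being the default for a logged, non-ANSI
- database like the test database above):
-
-     SET ISOLATION TO REPEATABLE READ;  -- locks every row it examines
-     SET ISOLATION TO COMMITTED READ;   -- the default; only makes sure
-                                        -- the rows it reads are committed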
-
- > Or does this even mean that informix uses the "optimization" mentioned
- > above to prevent exactly that effect?
-
- The optimization method I mentioned applies only when REPEATABLE READ is
- combined with a sequential scan of the table, which would require every row
- in the table to be locked; in such a case, a table lock is much more
- efficient. That is NOT happening in this case (as explained above).
-
- >3. Does this mean that I have to use an index as demonstrated in example 2
- > to turn off the "optimization" with table locks in order to get row
- > level locking?
-
- Again, this would only apply to REPEATABLE READ. In your case, though,
- indexes would be prudent (I'd venture to say necessary). In general,
- you want to avoid sequential scans of the data. The general wisdom for
- SE used to be to avoid indexing "small" tables, the point being to
- reduce physical I/O. That advice doesn't really apply to OnLine, which
- uses shared memory buffering, so the extra index isn't costly. Even for
- SE it is sometimes a poor strategy, this case being a prime example.
- I would suggest using indexes on your tables where you are running into
- the problem you describe.
-
- >4. If question 3 can be answered with yes, then why is my statement
- > "Row level locking works only if you use an index on that table"
- > completely false?
-
- Because row level locking does work even without indexes. The problem
- lies not in the row level locking, but in how the result set has to be
- generated. The implication of your statement is that the product is
- somehow "broken", when in fact it is working correctly; the application's
- access to the data simply isn't well thought out relative to how the
- engine works.
-
- By the way, you can also (at least in most cases) avoid the error return by
- setting your lock mode to WAIT.
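-
- A quick sketch of that (depending on the engine and version, a wait
- limit in seconds may also be accepted):
-
-     SET LOCK MODE TO WAIT;      -- wait for the conflicting lock to clear
-     SET LOCK MODE TO NOT WAIT;  -- the default; return the error at once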
-
-
- Dave
- --
- Disclaimer: These opinions are not those of Informix Software, Inc.
- **************************************************************************
- "I look back with some satisfaction on what an idiot I was when I was 25,
- but when I do that, I'm assuming I'm no longer an idiot." - Andy Rooney
-