- Path: sparky!uunet!cis.ohio-state.edu!ucbvax!UXA.ECN.BGU.EDU!mflll
- From: mflll@UXA.ECN.BGU.EDU ("Dr. Laurence Leff")
- Newsgroups: comp.doc.techreports
- Subject: TRList File tech.wiscB
- Message-ID: <199208181956.AA04012@uxa.ecn.bgu.edu>
- Date: 18 Aug 92 19:56:47 GMT
- Sender: daemon@ucbvax.BERKELEY.EDU
- Distribution: world
- Organization: The Internet
- Lines: 221
- Approved: trlist@smu.edu
-
-
- Technical Report List File tech.wiscB
-
-
- University of Wisconsin-Madison
- Dept. of Computer Sciences Technical Reports
- June 1992 - August 1992
-
- These reports are available via anonymous ftp from ftp.cs.wisc.edu
- or can be ordered from tech-reports@cs.wisc.edu.
-
-
- %A Paul M. Bober
- %A Michael J. Carey
- %T Multiversion Query Locking
- %I Computer Sciences Department
- %C University of Wisconsin-Madison
- %R TR 1085a
- %D April 1992 (Revised June 1992).
- %X Multiversion two-phase locking (MV2PL) has been incorporated in some
- commercial transaction processing systems to support the serializable
- execution of queries. A drawback to this algorithm is the potentially
- high cost that it adds to maintain and access prior versions of data.
- In this paper, we present a new multiversion locking algorithm,
- \fImultiversion query locking\fP (MVQL), that reduces the cost of
- versioning by accepting weaker forms of consistency for queries than
- MV2PL. Nevertheless, queries are guaranteed to see
- transaction-consistent data. We present results from a detailed
- performance study that show that, under a wide range of conditions,
- MVQL provides higher throughput for queries and update transactions at
- a lower storage cost than MV2PL. In the worst case, the performance of
- MVQL approaches that of MV2PL.
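-
- As an illustration (not from the report), here is a minimal sketch of
- the multiversion read path that both MV2PL and MVQL build on; the class
- and member names are invented, and MVQL's particular rules for choosing
- a query's view and for reclaiming old versions are what the paper
- develops.
-
-     #include <iterator>
-     #include <map>
-     #include <string>
-
-     struct VersionedRecord {
-         // Versions keyed by the committing transaction's serial number.
-         std::map<long, std::string> versions;
-
-         // An updater installs a new version at its commit sequence number.
-         void install(long commit_seq, const std::string& value) {
-             versions[commit_seq] = value;
-         }
-
-         // A query reads the newest version at or before its assigned view
-         // point, so it always sees transaction-consistent data.
-         const std::string* read(long view_seq) const {
-             auto it = versions.upper_bound(view_seq);
-             if (it == versions.begin()) return nullptr;  // nothing visible
-             return &std::prev(it)->second;
-         }
-     };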
-
-
-
- %A Gary Schultz
- %T Barrier Decomposition for the Parallel Optimization of Block-Angular
- Programs
- %D June 1992
- %R TR 1092
- %X This thesis concerns methods for the computational solution of
- smooth, block-angular optimization problems that have very large
- numbers of variables. These problems include multicommodity network
- flows and are currently among the most challenging and important problems in
- operations research and industrial mathematics.
-
- After surveying relevant classical literature, we expand upon the
- barrier function theory of Fiacco and McCormick [1968]. These barrier
- functions allow us to generate a sequence of nonlinear approximating
- problems which have simpler constraints. Feasibility issues for the
- original problem are dealt with efficiently. Details regarding the
- accuracy of the approximation provided by the barrier problems are given
- for some barrier functions that involve a logarithm.
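-
- As a rough illustration (generic notation, not the thesis's), a
- logarithmic barrier moves the coupling constraints of a block-angular
- problem into the objective, leaving a sequence of approximating
- problems with only the simpler block constraints:
-
-     \min_{x_1,\dots,x_K} \sum_{k=1}^{K} f_k(x_k)
-       \quad\text{s.t.}\quad \sum_{k=1}^{K} A_k x_k \le b,
-       \quad x_k \in X_k
-
-     \min_{x_1,\dots,x_K} \sum_{k=1}^{K} f_k(x_k)
-       - \mu \sum_i \log\Bigl( b_i - \sum_{k=1}^{K} (A_k x_k)_i \Bigr)
-       \quad\text{s.t.}\quad x_k \in X_k, \qquad \mu > 0
-
- In the classical theory, as \mu decreases toward zero the minimizers of
- the barrier problems approach a solution of the original problem while
- remaining strictly feasible for the coupling constraints.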
-
- Next, we describe a new decomposition method for the approximating
- barrier problems that is amenable to parallel computation. This
- decomposition method does not require pseudo-convexity of the objective
- function, allowing the theory of this thesis to apply to local
- minimization problems with nonconvex objective functions and coupling
- constraints. Computational results obtained by combining the barrier
- approximation and the decomposition methods are given for the PDS
- problems--a class of real-world multicommodity flow problems from the
- U.S. Air Force Military Airlift Command. We solve PDS problems larger
- than are solved elsewhere in the literature, ranging to PDS-70, which has
- 382,311.
-
- Finally, our decomposition method is generalized to include the case of
- flow of information in an asynchronous computing environment.
- Convergence of the generalized method is proven, and we show that the
- method contains, as a concrete case, the restricted simplicial
- decomposition algorithm of Hearn et al. [1987].
-
-
- %A Seth J. White
- %A David J. DeWitt
- %T A Performance Study of Alternative Object Faulting and Pointer Swizzling
- Strategies
- %D June 1992.
- %R TR 1093
- %X This paper presents a portable, efficient method for accessing memory
- resident persistent objects in virtual memory in the context of the E
- programming language. Under the approach, objects are copied from the
- buffer pool of the underlying object manager into virtual memory on
- demand, as they are accessed by an E program. The cumulative effects of
- updates to a persistent object are then propagated back to the object
- manager via a single write operation at the end of each transaction. The
- method incorporates a comprehensive pointer swizzling mechanism to
- enhance performance. Swizzling is done one pointer at a time, and
- software checks are used to detect the use of swizzled pointers. The
- paper also presents the results of a performance study comparing the
- method presented here with several alternative software architectures
- including ObjectStore V1.2, a commercially available OODBMS. The
- results highlight the tradeoffs between providing software vs.
- memory-mapped support for pointer swizzling and quantify the effects
- of pointer swizzling on overall performance. In addition, the
- significant performance impact of pointer swizzling on the generation of
- recovery information is examined. The experimental results show that in
- many situations a software approach can outperform the memory-mapped
- approach.
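-
- As a rough sketch of the software-checked, pointer-at-a-time idea (the
- names below are invented for illustration and are not the E runtime's
- actual interface): a persistent reference initially holds an object
- identifier; the first dereference faults the object in from the object
- manager's buffer pool and caches a direct pointer, and every later use
- pays only a cheap software check.
-
-     #include <cstdint>
-
-     // Hypothetical object-manager hook: copies the object with this OID
-     // into virtual memory and returns its in-memory address.
-     void* fault_in_object(std::uint64_t oid);
-
-     // A persistent reference that is swizzled one pointer at a time.
-     template <typename T>
-     class PRef {
-     public:
-         explicit PRef(std::uint64_t oid) : oid_(oid), ptr_(nullptr) {}
-
-         // Software check on every use: swizzle on the first dereference,
-         // then reuse the cached direct pointer.
-         T* operator->() {
-             if (ptr_ == nullptr)
-                 ptr_ = static_cast<T*>(fault_in_object(oid_));
-             return ptr_;
-         }
-
-     private:
-         std::uint64_t oid_;  // persistent object identifier (unswizzled)
-         T* ptr_;             // direct pointer once swizzled
-     };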
-
-
-
- %A Michael J. Franklin
- %A Michael J. Carey
- %A Miron Livny
- %T Global Memory Management in Client-Server DBMS Architectures
- %D June 1992
- %R TR 1094
- %I Computer Sciences Department
- %C University of Wisconsin-Madison
- %X Earlier performance studies of client-server database systems have
- investigated algorithms for caching locks and data at client workstations to
- reduce latency and offload the server. These studies have been restricted to
- algorithms in which database pages that were not in the local client buffer
- pool or the server buffer pool were read in from disk. In this paper we
- investigate a technique that allows client page requests to be serviced by
- other clients, thus treating the entire system as a single memory hierarchy.
- We also present techniques for efficiently exploiting this global memory
- hierarchy by reducing the replication of pages between client and server buffer
- pools. Global memory management algorithms that employ various combinations
- of these techniques are then described, and the performance tradeoffs among the
- algorithms are investigated under a range of workloads and system
- configurations using a simulation model.
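-
- A rough sketch of the request path such a scheme implies (the function
- names are hypothetical; the actual forwarding and replication-control
- protocols are those described in the report): the server checks its own
- buffer pool, then forwards the request to another client that caches
- the page, and goes to disk only as a last resort.
-
-     #include <optional>
-
-     struct Page { /* page contents */ };
-     using PageId = long;
-     using ClientId = int;
-
-     // Hypothetical server-side hooks.
-     std::optional<Page> lookup_server_buffer(PageId pid);
-     std::optional<ClientId> find_caching_client(PageId pid);
-     Page forward_request_to_client(ClientId c, PageId pid);
-     Page read_from_disk(PageId pid);
-
-     // Serve a client's page miss, treating all of the system's memory as
-     // one hierarchy: server memory, another client's memory, then disk.
-     Page serve_page_request(PageId pid) {
-         if (auto p = lookup_server_buffer(pid))
-             return *p;
-         if (auto c = find_caching_client(pid))
-             return forward_request_to_client(*c, pid);
-         return read_from_disk(pid);
-     }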
-
-
- %A Kurt P. Brown
- %A Michael J. Carey
- %A David J. DeWitt
- %A Manish Mehta
- %A Jeffrey F. Naughton
- %T Resource Allocation and Scheduling for Mixed Database Workloads
- %R TR 1095
- %I Computer Sciences Department
- %C University of Wisconsin-Madison
- %D July 1992
- %X This paper investigates resource management and scheduling issues that
- arise when a relational DBMS is faced with a workload containing both
- short on-line transactions and long-running queries.
- Relevant issues include (i) how memory should be divided among the
- transactions and queries, (ii) the impact of serial versus concurrent
- execution of long-running queries, and (iii) tradeoffs between different
- scheduling strategies for complex multi-join queries.
- Through a series of simulation experiments based on a detailed DBMS
- simulation model, we show that previously proposed resource allocation
- and scheduling algorithms are inadequate for handling mixed workloads.
- One particularly interesting example of this occurs when
- running a query together with a stream of transactions: in many
- cases, beginning with the memory allocation deemed optimal by
- previously proposed buffer management schemes, taking memory away
- from the query improves the performance of both the transactions and
- the query.
-
-
-
-
- %A Mark D. Hill
- %A James R. Larus
- %A Steven K. Reinhardt
- %A David A. Wood
- %T Cooperative Shared Memory: Software and Hardware for Scalable
- Multiprocessors
- %R TR 1096
- %I Computer Sciences Department
- %C University of Wisconsin-Madison
- %D July 1992.
- %X We believe the absence of massively-parallel, shared-memory machines
- follows from the lack of a shared-memory programming performance model
- that can inform programmers of the cost of operations (so they can
- avoid expensive ones) and can tell hardware designers which cases are
- common (so they can build simple hardware to optimize them).
- Cooperative shared memory, our approach to shared-memory design,
- addresses this problem.
-
- Our initial implementation of cooperative shared memory uses a simple
- programming model, called Check-In / Check-Out (CICO), in
- conjunction with even simpler hardware, called Dir1SW. In CICO,
- programs bracket uses of shared data with a check-out
- annotation marking the expected first use and a check-in
- annotation terminating the expected use of the data. A
- cooperative prefetch annotation helps hide communication latency.
- Dir1SW is a minimal directory protocol that adds little complexity to
- message-passing hardware, but efficiently supports programs written
- within the CICO model.
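-
- A small sketch of what CICO-bracketed code might look like (the
- annotation names are placeholders invented here; the report defines the
- actual CICO primitives and the Dir1SW directory protocol):
-
-     #include <cstddef>
-
-     // Hypothetical annotation hooks describing expected use of shared data.
-     void check_out_exclusive(void* addr, std::size_t bytes);    // will read/write
-     void check_out_shared(const void* addr, std::size_t bytes); // read only
-     void check_in(const void* addr, std::size_t bytes);         // done with the data
-     void cico_prefetch(const void* addr, std::size_t bytes);    // hide latency
-
-     // A use of shared data bracketed in the CICO style.
-     void scale_block(double* block, std::size_t n, double factor) {
-         check_out_exclusive(block, n * sizeof(double)); // expected first use
-         for (std::size_t i = 0; i < n; ++i)
-             block[i] *= factor;
-         check_in(block, n * sizeof(double));            // end of expected use
-     }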
-
-
- %A Daniel F. Lieuwen
- %T Optimizing and Parallelizing Loops in Object-Oriented Database
- Programming Languages
- %R TR 1097
- %I Computer Sciences Department
- %C University of Wisconsin-Madison
- %D July 1992
- %X Database programming languages like O2, E, and O++ include the
- ability to iterate through a set. Nested iterators can be used to
- express joins. Without program analysis, such joins must be evaluated
- using a tuple-at-a-time nested-loops join algorithm, because otherwise
- program semantics may be violated. Ensuring that the program's
- semantics are preserved during transformation requires paying careful
- attention to the flow of values through the program. This thesis
- presents conditions under which such transformations can be applied.
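-
- To make the setting concrete, here is the kind of rewrite involved,
- written in plain C++ rather than actual O2, E, or O++ syntax: the
- nested iteration below is a join, and when the thesis's conditions hold
- it can be evaluated with a set-oriented algorithm (here a hash join)
- instead of tuple-at-a-time nested loops.
-
-     #include <string>
-     #include <unordered_map>
-     #include <utility>
-     #include <vector>
-
-     struct Dept { std::string name; std::string manager; };
-     struct Emp  { std::string name; std::string dept_name; };
-     using Pairs = std::vector<std::pair<std::string, std::string>>;
-
-     // The join as written: nested set iterators, which without analysis
-     // must run as a tuple-at-a-time nested-loops join.
-     Pairs managers_nested(const std::vector<Dept>& depts,
-                           const std::vector<Emp>& emps) {
-         Pairs out;
-         for (const Dept& d : depts)
-             for (const Emp& e : emps)
-                 if (e.dept_name == d.name)
-                     out.emplace_back(e.name, d.manager);
-         return out;
-     }
-
-     // The same join after a set-oriented rewrite: build a hash table on
-     // depts, probe once per employee. Legal only when the loop bodies are
-     // shown not to interfere with the sets being scanned.
-     Pairs managers_hash_join(const std::vector<Dept>& depts,
-                              const std::vector<Emp>& emps) {
-         std::unordered_map<std::string, const Dept*> by_name;
-         for (const Dept& d : depts)
-             by_name[d.name] = &d;
-         Pairs out;
-         for (const Emp& e : emps) {
-             auto it = by_name.find(e.dept_name);
-             if (it != by_name.end())
-                 out.emplace_back(e.name, it->second->manager);
-         }
-         return out;
-     }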
-
- This thesis then shows how to use a standard transformation-based
- optimizer to optimize these joins. An optimizer built using the EXODUS
- Optimizer Generator [GRAE87] was added to the AT&T Bell Laboratories'
- O++ [AGRA89] compiler. We used the resulting optimizing compiler to
- experimentally validate the ideas in this thesis in a centralized
- database setting. The experiments show that this technique can
- significantly improve the performance of database programming
- languages.
-
- The techniques and analysis are then applied to parallelizing loops.
- The transformations can be used to produce better parallel code than a
- straightforward parallelization of the nested iterators. Parallel
- algorithms for accessing nested sets are analyzed, and a new algorithm
- is presented.
-
-
-