%D Date the report was issued: month day, year (required)
%Z Date and time the bibliographic _record_ was last modified:
Mon, 28 Aug 95 18:04:22 GMT (required)
%R The report number: TRYY-## (required)
%I The report issuer: Dept of Computer Science, Univ. of AZ (required)
%U The URL for the report or description (optional)
ftp://ftp.cs.arizona.edu/reports/
%X The report abstract (required)
%K Keywords (optional)
%Y Computing Reviews categories (optional)
%A Last Name, First Name Author
%T Title
%D Date issued
%Z Mon, 03 Jan 2011 00:00:00 MST
%R TR11-YY
%I The Department of Computer Science, University of Arizona
%U ftp://ftp.cs.arizona.edu/reports/2005
%X abstract
%K keywords
%Y
Note: each bibliographic record must be terminated by a single blank line.
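The field codes above can be parsed mechanically. As a sketch (assuming records are separated by one blank line, a field line starts with "%" plus a single letter code, and any bare line continues the previous field), a parser might look like:

```python
# Minimal sketch of a parser for this refer-style database.
# Assumptions (not guaranteed by the note above): records are separated
# by exactly one blank line, a field line starts with "%" followed by a
# one-letter code, and any other line continues the previous field.

def parse_records(text):
    """Return a list of {code: [values]} dicts, one per record."""
    records = []
    for chunk in text.strip().split("\n\n"):
        fields = {}
        code = None
        for line in chunk.splitlines():
            if line.startswith("%") and len(line) >= 2:
                code = line[1]
                # Repeated codes (e.g. several %A author lines) accumulate.
                fields.setdefault(code, []).append(line[2:].strip())
            elif code is not None:
                # Continuation line: fold into the previous field value.
                fields[code][-1] += " " + line.strip()
        records.append(fields)
    return records

sample = """%A Collberg, Christian
%T Measuring Reproducibility in
Computer Systems Research
%R TR13-03"""

recs = parse_records(sample)
print(recs[0]["T"][0])
```

Wrapped titles and abstracts are rejoined into a single string, and each repeated %A line yields one entry in the author list.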
%A Christian Collberg, Todd Proebsting, Alex M Warren
%T Repeatability and Benefaction in Computer Systems Research: A Study
and a Modest Proposal
%D October 31, 2014
%Z Tue, 02 Dec 2014 00:00:00 MST
%R TR14-04
%I The Department of Computer Science, University of Arizona
%U ftp://ftp.cs.arizona.edu/reports/2014
%X We describe a study into the extent to which Computer Systems
researchers share their code and data and the extent to which such
code builds. Starting with 601 papers from ACM conferences and
journals, we examine 401 papers whose results were backed by code. For
30.4% of these papers we were able to obtain the code and build it
within 30 minutes; for 45.9% of the papers we managed to build the
code, but it may have required extra effort; for 50.1% of the papers
either we managed to build the code or the authors stated the code
would build with reasonable effort. We also propose a novel sharing
specification scheme that requires researchers to specify the level of
sharing that reviewers and readers can assume from a paper.
%K
%Y
%A Randy Hackbarth, Audris Mockus, John Palframan (Avaya Labs Research) and Ravi Sethi (University of Arizona)
%T Customer Quality Improvement of Software Systems
%D October 31, 2014
%Z Fri, 31 Oct 2014 00:00:00 MST
%R TR14-03
%I The Department of Computer Science, University of Arizona
%U ftp://ftp.cs.arizona.edu/reports/2014
%X The software quality improvement method in this paper is based on a multi-year program to improve the quality of delivered systems at Avaya, a global provider of business communication and collaboration systems. The improvement method is data driven and has three elements: (a) a downstream metric that quantifies quality, as perceived by customers; (b) an upstream implementation quality index that measures the effectiveness of error removal practices during development; and (c) prioritization tools and techniques for focusing limited development resources. The downstream customer quality metric is based on serious defects that are reported by customers after systems are deployed. The upstream implementation quality index serves as a predictor of future customer quality; it has a positive correlation with the customer quality metric. The prioritization techniques are used to focus limited resources on the top 1% riskiest files in the code. Governance for the improvement method is provided by regular reviews with an R&D quality council.
%K
%Y
%A Stephen G. Kobourov, Sergey Pupyrev, Paolo Simonetto
%T Visualizing Graphs as Maps with Contiguous Regions
%D April 24, 2014
%Z Thu, 24 Apr 2014 00:00:00 MST
%R TR14-02
%I The Department of Computer Science, University of Arizona
%U ftp://ftp.cs.arizona.edu/reports/2014
%X Relational datasets, which also include clustering information, can be
visualized with tools such as BubbleSets, LineSets, SOM, and GMap. The countries in SOM-based
and GMap-based visualizations are fragmented, that is, they are represented by several disconnected
regions. Although countries can be uniquely colored to help with
identification, experimental data indicates that such fragmentation
makes it more difficult to identify the correct regions. On the other hand,
BubbleSets and LineSets visualizations (originally developed to show overlapping
sets) have contiguous regions but the regions may overlap, even when the
input clustering is non-overlapping.
We describe two methods for creating
non-fragmented and non-overlapping maps within the GMap framework.
The first approach achieves contiguity by preserving the given embedding in the plane and
creating a clustering based on geometric proximity.
The second approach achieves contiguity by preserving the
clustering information and distorting the given embedding
in the plane if it would result in fragmentation.
We formally analyze these methods and quantitatively evaluate them using embedding metrics
and clustering metrics.
We demonstrate the usefulness of the new methods with several datasets, and make them available in an online system at
http://gmap.cs.arizona.edu.
%K
%Y
%A Michael A. Bekos, Thomas C. van Dijk, Martin Fink, Philipp Kindermann, Stephen Kobourov, Sergey Pupyrev, Joachim Spoerhase, Alexander Wolff
%T Improved Approximation Algorithms for Semantic Word Clouds
%D February 11, 2014
%Z Tue, 11 Feb 2014 00:00:00 MST
%R TR14-01
%I The Department of Computer Science, University of Arizona
%U ftp://ftp.cs.arizona.edu/reports/2014
%X We study the following geometric representation problem: Given a
graph whose vertices correspond to axis-aligned rectangles with
fixed dimensions, arrange the rectangles without overlaps in the
plane such that two rectangles touch if the graph
contains an edge between them. This problem is called
Contact Representation of Word Networks (Crown) since
it formalizes the geometric problem
behind drawing word clouds in which semantically related words are
close to each other. Crown is known to be
NP-hard, and there are approximation algorithms for certain graph
classes for the optimization version of Crown, in which realizing
each desired adjacency yields a certain profit.
We present the first $O(1)$-approximation algorithm for the general
case, when the input is a complete weighted graph, and for the
bipartite case. Since the
subgraph of realized adjacencies is necessarily planar, we also consider
several planar graph classes (namely stars, trees, outerplanar, and
planar graphs), improving upon the known results.
For some graph classes, we also describe improvements
in the unweighted case, where each adjacency yields the same
profit. Finally, we show that the problem is APX-hard on
bipartite graphs of bounded maximum degree.
%K
%Y
%A Christian Collberg, Todd Proebsting, Gina Moraila, Akash Shankaran, Zuoming Shi, Alex M Warren
%T Measuring Reproducibility in Computer Systems Research
%D December 10, 2013
%Z Tue, 10 Dec 2013 00:00:00 MST
%R TR13-03
%I The Department of Computer Science, University of Arizona
%U ftp://ftp.cs.arizona.edu/reports/2013
%X We describe a study into the willingness of Computer Systems
researchers to share their code and data. We also propose a novel
sharing specification scheme that will require researchers to
specify the level of reproducibility that reviewers and readers can
assume from a paper either submitted for publication, or published.
%K
%Y
%A Lukas Barth, Stephen G. Kobourov, Sergey Pupyrev
%T An Experimental Study of Algorithms for Semantics-Preserving Word Cloud Layout
%D October 16, 2013
%Z Wed, 16 Oct 2013 00:00:00 MST
%R TR13-02
%I The Department of Computer Science, University of Arizona
%U ftp://ftp.cs.arizona.edu/reports/2013
%X We study the problem of computing semantics-preserving word clouds in
which semantically related words are close to each other. We implement
three earlier algorithms for creating word clouds and three new ones.
We define several metrics for quantitative evaluation of the resulting
layouts such as realized adjacencies, layout distortion,
compactness, and uniform area use. We then compare all algorithms
according to these metrics, using two different data sets of word
documents from Wikipedia and ALENEX papers. We show that two of our
new algorithms, based on extracting heavy subgraphs from a weighted
graph, outperform all the others by placing many more pairs of
related words so that their bounding boxes are adjacent. Moreover,
this improvement is not achieved at the expense of significantly
worsened measurements for the other metrics (distortion, compaction,
uniform area use). The online system implementing the algorithms, all
source code, and data sets are available at
http://wordcloud.cs.arizona.edu.
%K
%Y
%A E. Packer, S. Pupyrev, A. Efrat, S. Kobourov
%T Efficient Methods for Registration of Multiple Moving Points in Noisy Environments
%D July 30, 2013
%Z Tue, 30 Jul 2013 00:00:00 MST
%R TR13-01
%I The Department of Computer Science, University of Arizona
%U ftp://ftp.cs.arizona.edu/reports/2013
%X Matching sets of trajectories obtained by two different resources is
a challenging and well motivated spatio-temporal problem. It arises
when the motion of the same set of moving objects is obtained by
two sensing devices (e.g., cameras or radars) or when data is annotated
by different users. The ultimate goal is to pair the trajectories
so that each object is associated with two trajectories. Within this
context, two main questions arise: (1) how to measure similarities
between trajectories, and (2) how to use the similarity measure between
trajectories to arrive at a reliable matching. Here we describe
computationally efficient methods for several variants of the problem.
The proposed methods have been implemented and used in
experiments with real-world trajectory data. The results indicate
that they are not only theoretically sound, but also work well in
practice.
%K
%Y
%A J. Joseph Fowler and Stephen G. Kobourov
%T Planar Preprocessing for Spring Embedders
%D August 22, 2012
%Z Wed, 22 Aug 2012 00:00:00 MST
%R TR12-03
%I The Department of Computer Science, University of Arizona
%U ftp://ftp.cs.arizona.edu/reports/2012
%X Spring embedders are conceptually simple and produce straight-line
drawings with an undeniable aesthetic appeal, which explains their prevalence
when it comes to automated graph drawing. However, when drawing planar graphs,
spring embedders often produce non-plane drawings, as edge crossings do not
factor into the objective function being minimized. On the other hand, there are
fairly straightforward algorithms for creating plane straight-line drawings for
planar graphs, but the resulting layouts generally are not aesthetically pleasing,
as vertices are often grouped in small regions and edge lengths can vary dramatically.
It is known that the initial layout influences the output of a spring embedder,
and yet a random layout is nearly always the default. We study the effect of using
various plane initial drawings as inputs to a spring embedder, measuring
the percent improvement in reducing crossings and in increasing node separation,
edge length uniformity, and angular resolution.
%K
%Y
%A Md. J. Alam, M. Kaufmann, S. G. Kobourov and T. Mchedlidze
%T Fitting Planar Graphs on Planar Maps
%D July 16, 2012
%Z Mon, 16 Jul 2012 00:00:00 MST
%R TR12-02
%I The Department of Computer Science, University of Arizona
%U ftp://ftp.cs.arizona.edu/reports/2012
%X Graph and cartographic visualization have the common objective to
provide intuitive understanding of some underlying data. We consider a problem
that combines aspects of both by studying the problem of fitting planar graphs
on planar maps. After providing an NP-hardness result for the general decision
problem, we identify sufficient conditions so that a fit is possible. We generalize
our techniques to nonconvex rectilinear polygons, where we also address the
problem of effective distribution of the vertices inside the map regions.
%K
%Y
%A Md. Jawaherul Alam, Joe Fowler, and Stephen G. Kobourov
%T Outerplanar Graphs with Proper Touching Triangle Representations
%D June 15, 2012
%Z Fri, 15 Jun 2012 00:00:00 MST
%R TR12-01
%I The Department of Computer Science, University of Arizona
%U ftp://ftp.cs.arizona.edu/reports/2012
%X A touching triangle representation of a planar graph consists of triangles
representing the vertices, where pairs of adjacent triangles with non-empty common
boundaries represent the edges. We study the problem of recognizing planar
graphs with proper touching triangle representation, where the union of all
triangles is itself a triangle without holes. It has been conjectured that
testing whether a planar graph is a proper touching triangle graph (TTG)
can be done in polynomial time. Here we provide a necessary condition for a
biconnected outerplanar graph to be a proper TTG and provide a slightly weaker
sufficient condition. Together, these two conditions give a characterization for a
more restricted class of outerplanar graphs.
%K
%Y
%A Naithani, Ajeya
%T Energy efficient buffer cache using phase change memory
%D August 11, 2011
%Z Mon, 03 Jan 2011 00:00:00 MST
%R TR11-04
%I The Department of Computer Science, University of Arizona
%U ftp://ftp.cs.arizona.edu/reports/2011
%X Main memory consumes a significant portion of the overall energy of a modern
computer system. A major part of this energy can be attributed to the necessity of
keeping DRAM in ready state, even when the memory is rarely accessed. Recently,
Phase Change Memory (PCM) has emerged as a competitor to DRAM, offering low
energy consumption in standby state with reasonable performance. However, the
issues of lower performance and high write energy of PCM need to be addressed
before we consider PCM as a building block in the memory hierarchy.
In our research, we leverage the advantages of energy efficiency of PCM and
low read/write latency of DRAM by designing a hybrid buffer cache architecture
using PCM and DRAM. We target commercial file servers, where most of the main
memory is dedicated to the buffer cache to improve the file-I/O response time. A
dynamic approach to enable or disable DRAM in the proposed hybrid architecture
for energy-performance efficiency is presented. We explore several schemes to bring
performance close to the DRAM-only system and reduce the main memory energy
requirements to 5% of the DRAM-only system. At the same time, our schemes yield
up to 78% memory access time improvement over the PCM-only system.
%K
%Y
%A Rufus, Johny
%T A comparative study of phase change memory (PCM): achieving significant reductions in energy consumption.
%D May 13, 2011
%Z Fri 13 May 2011 00:00:00 MST
%R TR11-03
%I The Department of Computer Science, University of Arizona
%U ftp://ftp.cs.arizona.edu/reports/2011
%X Phase Change Memory (PCM) is an emerging technology in the storage hierarchy.
PCM promises low latency, low energy consumption, and high scalability. Most
research on PCM focuses on using it as a DRAM alternative or as a hybrid
component alongside DRAM in primary storage.
Our work focuses on using PCM as a Hard Disk/Flash-based SSD alternative.
We focus on reducing the total energy consumption of the system by using
high-performance PCM as a disk alternative and by experimenting with
different buffer cache configurations to reduce the memory needed by the
system. In the process we develop a new translation layer for PCM, called
the PCM Translation Layer (PTL), and a PTL-based simulator to conduct our
experiments. We aim to build a system with less memory and PCM-based
secondary storage that maintains the performance of a conventional
high-performance system with larger memory and Disk- or Flash-based
secondary storage. Thus, without compromising performance, we reduce the
energy consumption of the system by using PCM as the secondary storage
medium.
%K
%Y
%A Cleveland, Matthew
%T A distributed system for track discovery
%D May 13, 2011
%Z Fri 13 May 2011 00:00:00 MST
%R TR11-02
%I The Department of Computer Science, University of Arizona
%U ftp://ftp.cs.arizona.edu/reports/2011
%X Existing data fitting algorithms for track discovery are accurate and field-proven. As
data sets increase in size, however, memory and computational constraints demand
more robust solutions than are currently available. In this paper we present a set
of algorithms for parallel data fitting. These algorithms make use of approximation
algorithms, intelligent caching, and modeling to facilitate the efficient parallelization
of the model fitting problem, with applications in track discovery.
%K
%Y
%A Tung, Qiyam
%A Efrat, Alon
%A Barnard, Kobus
%A Swaminathan, Ranjini
%T Expanding the Point -- Automatic Enlargement of Presentation Video Elements
%D January 6, 2011
%Z Thu 6 Jan 2011 12:27:00 MST
%R TR11-01
%I The Department of Computer Science, University of Arizona
%U ftp://ftp.cs.arizona.edu/reports/2011
%X In this paper we present a system that assists users in viewing
videos of lectures on small screen devices, such as PDAs.
It automatically identifies semantic units on the slides, such
as bullets, groups of bullets, and images. As the participant
views the lecture, the system magnifies the appropriate semantic
unit while it is the focus of the discussion. The system
makes this decision based on cues from laser pointer gestures
and/or speech recognition transcript augmented and aligned
with WordNet distances. It then magnifies the semantic element
using the slide image and the homography between the
slide image and the video frame. Our experiment on identifying
laser-based events is fairly accurate. Furthermore, a
user study suggests that this kind of magnification has potential
for improving learning of technical content from video
lectures when resolution of the video is limited as is the case
when the lecture is being viewed on hand held devices.
%T Fall 2009 Human-Instructable Computing Wizard of Oz Studies
%D October 22, 2010
%Z Fri 22 Oct 2010 11:36:57 MST
%R TR10-05
%I The Department of Computer Science, University of Arizona
%U ftp://ftp.cs.arizona.edu/reports/2010
%X The following report summarizes the series of Bootstrapped
Learning "Wizard of Oz" studies conducted by the LEARN Lab in the Fall of
2009. The studies investigated how humans tend to naturally instruct what
they believe is an electronic student. The studies included two domains:
Wubble World and Charlie the Robot. The domains and experiment protocols
are described, along with a sample of some of the transcripts collected.
These studies were conducted as part of the work for the DARPA Bootstrapped
Learning Program.
%K
%Y
%A Perianayagam, Somu
%T Rex: A Toolset for Reproducing Software Experiments
%D October 19, 2010
%Z Tue, 19 Oct 10 15:30:00 MST
%R TR10-04
%I The Department of Computer Science, University of Arizona
%U ftp://ftp.cs.arizona.edu/reports/2010
%X Being able to reproduce experiments is the cornerstone of the scientific method. Software experiments are hard to reproduce even if identical hardware is available because external data sets could have changed, software used in the original experiment may be unavailable, or the input parameters for the experiment may not be documented. This paper presents Rex, a toolset that allows one to record an experiment and archive its apparatus, replay an experiment, conduct new experiments, and compare differences between experiments. The execution time overhead of recording experiments is on average about 1.6% and the space overhead of archiving an experiment ranges from 5 to 7GB.
%K
%Y
%A Lewis, Russell
%T Bodyguard: Running Protected Applications in Untrusted Operating Systems
%D April 14, 2010
%Z Fri, 21 May 10 10:55:00 MST
%R TR10-03
%I The Department of Computer Science, University of Arizona
%U ftp://ftp.cs.arizona.edu/reports/2010
%X In this thesis, we present a method to run an application within a commodity operating system without
risking either the correctness or privacy of the application should the operating system be
compromised. Using a hypervisor, we invisibly intercept all attempts by the operating system to
corrupt the state of the application or access its data. We accomplish this first by tracking the current
state of the virtual space and verifying all actions by the operating system which might change this
state, and second by replacing the contents of physical pages with randomly generated restorable
signatures when the operating system attempts to access the contents. The system is sufficiently
flexible to allow a binary-unmodified operating system to perform typical tasks such as copy-on-write,
fork(), and swap, and sufficiently automatic that the protected application only needs small
modifications. Finally, we present automatic methods for adapting a legacy application that are able
to provide complete and seamless protection for many applications.
%K keywords
%Y
%A Krishnamoorthy, Nithya
%T Static Detection of Disassembly Errors
%D May 14, 2010
%Z Tue, 11 May 10 12:57:06 MST
%R TR10-02
%I The Department of Computer Science, University of Arizona
%U ftp://ftp.cs.arizona.edu/reports/2010
%X The first step in understanding the semantics of a binary executable is to extract the
assembly instructions that could get executed if it is allowed to run. This sequence
of assembly instructions, typically obtained by static disassembly, is assumed to be
correct by many analysis techniques that build on it. However, static disassembly
can be incorrect; there can be accidental errors during disassembly, or a
disassembler can be deliberately misled by binary obfuscation techniques, rendering this
assumption invalid.
This thesis proposes a machine learning approach to statically identify
disassembly errors in a static disassembly, so that such potential errors can be examined
more closely, using, for example, dynamic analysis. We show that a decision tree
classifier that is built using this approach identifies most known disassembly errors
in the malware that we used for evaluation.
%K keywords
%Y
%A Madhavan, Arun
%A Zhang, Beichuan
%T NAT Traversal by Tunneling
%D May 11, 2010
%Z Tue, 11 May 10 11:34:31 MST
%R TR10-01
%I The Department of Computer Science, University of Arizona
%U ftp://ftp.cs.arizona.edu/reports/2010
%X Network Address Translation (NAT) is a widely prevalent solution adopted to
alleviate the IPv4 address exhaustion problem. Due to the use of private IP
addresses on hosts behind the NAT, it is not possible for external hosts to
initiate communication with these hosts. This poses a hurdle to many
emerging applications, such as VoIP and P2P. Although a plethora of NAT
traversal solutions have been proposed in recent years, they suffer
from being application-specific, complex, or requiring some behavioral
compliance from the NAT.
This work presents a simple technique that is generic, works with nested
NATs, is incrementally deployable, and expects only minimal common
behavior across all NAT implementations. The design includes the use of UDP
tunnels and a sequence of NAT addresses and private IP addresses to uniquely
identify a host. Simple and incrementally deployable changes are proposed to
DNS to learn the addresses.
%K
%Y
%A Last Name, First Name Author
%T Title
%D Date issued
%Z Mon, 03 Jan 05 00:00:00 GMT
%R TR09-06
%I The Department of Computer Science, University of Arizona
%U ftp://ftp.cs.arizona.edu/reports/2005
%X abstract
%K keywords
%Y
%A Last Name, First Name Author
%T Title
%D Date issued
%Z Mon, 03 Jan 05 00:00:00 GMT
%R TR09-05
%I The Department of Computer Science, University of Arizona
%U ftp://ftp.cs.arizona.edu/reports/2005
%X abstract
%K keywords
%Y
%A Last Name, First Name Author
%T Title
%D Date issued
%Z Mon, 03 Jan 05 00:00:00 GMT
%R TR09-04
%I The Department of Computer Science, University of Arizona
%X Universal pointsets can be used for visualizing multiple relationships on the same set of objects or for visualizing dynamic graph processes. Using the same point in the plane to represent the same object helps preserve the viewer's mental map. Small universal pointsets are highly desirable but often do not exist because of the restriction that a given object must be mapped to a fixed point in the plane. In colored simultaneous embeddings this restriction is
relaxed, by allowing a given object to map to a subset of points in the plane. Specifically, consider a set of graphs on the same set of n vertices partitioned into k colors. Finding a corresponding set of k-colored points in the plane in which each vertex is mapped to a point of the same color so as to allow a straight-line plane drawing of each graph is the problem of colored simultaneous geometric embedding. For trees, we show that there exist small universal pointsets (1) for 3-colored caterpillars of size n, (2) for 3-colored radius-2 stars of size n+3, and (3) for 2-colored spiders of size n. For outerplanar graphs, we show that these same universal pointsets also suffice for (1) 3-colored K3-caterpillars, (2) 3-colored K3-stars, and (3) 2-colored fans, respectively. We also show that there exist (i) a 2-colored planar graph
and pseudo-forest, (ii) three 3-colored outerplanar graphs, (iii) four 4-colored pseudo-forests, and (iv) three 5-colored pseudo-forests without simultaneous embeddings.
%K keywords
%Y
%A Perkins, David N.
%T Predicting Secondary Structure of Proteins by Linear and Dynamic Programming
%D April 29, 2009
%Z Mon, 05 Jan 09 00:00:00 GMT
%R TR09-01
%I The Department of Computer Science, University of Arizona
%X Proteins are sequences of amino acids that fold into secondary and tertiary structure, which plays an important role in their function. As biologists have yet to discover the rules that govern how a protein folds in nature from its underlying sequence, this thesis tries a new approach to secondary structure prediction using dynamic programming on the input protein sequence. The sequence is broken into short words, where each word has a probability of folding into the three different types of secondary structure. By combining word probabilities with an abstraction called contexts, which model a run of the same secondary structure type up to a bounded length, the optimal prediction for an entire sequence can be computed via dynamic programming. The structure probabilities for words are learned from a training set of sequences with known secondary structure using linear programming. The combined approach to prediction using linear and dynamic programming achieves high accuracy on protein sequences whose words were observed in the training set, but is far less accurate on sequences with unobserved words not seen in the training set. The challenge for future work lies in interpolating probabilities for unobserved words to achieve improved generalization.
%K keywords
%Y
%A Huang, Huilong
%T Efficient Routing in Wireless Ad Hoc Networks
%D August 12, 2008
%Z Mon, 03 Jan 08 00:00:00 GMT
%R TR08-05
%I The Department of Computer Science, University of Arizona
%X We describe a new file system that provides, at the same time,
both name and content based access to files. To make this possible,
we introduce the concept of a semantic directory. Every
semantic directory has a query associated with it. When a user
creates a semantic directory, the file system automatically creates
a set of pointers to the files in the file system that satisfy
the query associated with the directory. This set of pointers is
called the query-result of the directory. To access the files
that satisfy the query, users just need to de-reference the
appropriate pointers. Users can also create files and sub-directories
within semantic directories in the usual way. Hence, users can
organize files in a hierarchy and access them by specifying path names,
and at the same time, retrieve files by asking queries that
describe their content.
Our file system also provides facilities for query-refinement and customization. When a user creates a new semantic sub-directory within a semantic directory, the file system ensures that the query-result of the sub-directory is a subset of the query-result of its parent. Hence, users can create a hierarchy of semantic directories to refine their queries. Users can also edit the set of pointers in a semantic directory, and thereby modify its query-result without modifying its query or the files in the file system. In this way, users can customize the results of queries according to their personal tastes, and use customized results to refine queries in the future. That is, users do not have to depend solely on the query language to achieve these objectives.
Our file system has many other features, including semantic mount-points that allow users to access information in other file systems by content. The file system does not depend on the query language used for content-based access. Hence, it is possible to integrate any content-based access mechanism into our file system.
%K dissertation
%Y
%A Coffman, E.G., Jr.
%A Downey, Peter
%A Winkler, Peter
%T Packing Rectangles in a Strip
%D April 8, 1997
%Z Wed, 08 Jan 97 00:00:00 GMT
%R TR97-04
%I The Department of Computer Science, University of Arizona