Hardcover ISBN:  9780821811849 
Product Code:  DIMACS/50 
List Price:  $98.00 
MAA Member Price:  $88.20 
AMS Member Price:  $78.40 
Electronic ISBN:  9781470440084 
Product Code:  DIMACS/50.E 
List Price:  $92.00 
MAA Member Price:  $82.80 
AMS Member Price:  $73.60 

Book Details
DIMACS Series in Discrete Mathematics and Theoretical Computer Science
Volume: 50; 1999; 306 pp
MSC: Primary 68
We are especially proud to announce the publication of this DIMACS book—the 50th volume in this series, published by the AMS. The series was established through a collaborative venture uniting the cutting-edge research at DIMACS with the resources of the AMS to produce useful, well-designed, and important works in the mathematical and computational sciences. This volume is a hallmark in this firmly grounded and well-received AMS series.
The AMS's 50th DIMACS volume is also particularly notable at this time: The year 1999 marks the 10th anniversary of the founding of DIMACS as a center. Participants in the DIMACS national research project are Rutgers University, Princeton University, AT&T Labs–Research, Bell Labs (Lucent Technologies), Telcordia Technologies, and NEC Research Institute.
The joint publishing venture between the AMS and DIMACS has been a great success. We continue to work closely with the Center to further its goal of playing a key national leadership role in the development, application, and dissemination of discrete mathematics and theoretical computer science. This 50th DIMACS volume is a celebration of that dynamic, ongoing partnership.
About the book:
Special techniques from computer science and mathematics are used to solve combinatorial problems whose associated data require a hierarchy of storage devices. These solutions employ “extended memory algorithms”. The input/output (I/O) communication between the levels of the hierarchy is often a significant bottleneck, especially in applications that process massive amounts of data. Gains in performance are possible by incorporating locality directly into the algorithms and managing the contents of each storage level.
The relative difference in data access speeds is most pronounced between random access memory and magnetic disks. Therefore, much research has been devoted to algorithms that focus on this I/O bottleneck. These algorithms are usually called "external memory", "out-of-core", or "I/O algorithms".
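To make the block-oriented style of such algorithms concrete, here is a toy sketch (not drawn from the book itself) of a two-pass external merge sort: data is read in fixed-size blocks, each block is sorted in memory and written out as a run, and the sorted runs are then merged with sequential I/O. The `block_size` parameter and file layout are hypothetical illustrations of the memory/disk hierarchy.

```python
import heapq
import os
import tempfile

def external_sort(input_path, output_path, block_size=1024):
    """Toy external merge sort for a file of integers, one per line,
    using at most `block_size` numbers of main memory per run."""
    run_paths = []
    # Pass 1: read fixed-size blocks, sort each in memory, write sorted runs.
    with open(input_path) as f:
        while True:
            block = [int(line) for _, line in zip(range(block_size), f)]
            if not block:
                break
            block.sort()
            fd, path = tempfile.mkstemp(text=True)
            with os.fdopen(fd, "w") as run:
                run.writelines(f"{x}\n" for x in block)
            run_paths.append(path)
    # Pass 2: k-way merge of the sorted runs; heapq.merge streams the
    # runs lazily, so only one line per run is held in memory at a time.
    runs = [open(p) for p in run_paths]
    try:
        with open(output_path, "w") as out:
            out.writelines(
                f"{x}\n"
                for x in heapq.merge(*((int(line) for line in r) for r in runs))
            )
    finally:
        for r in runs:
            r.close()
        for p in run_paths:
            os.remove(p)
```

A real external sort would use large, disk-block-aligned reads and writes and merge many runs per pass; the point of the sketch is only that each level of the hierarchy is touched sequentially, in blocks, rather than at random.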
This volume presents new research results and current techniques for the design and analysis of external memory algorithms. The articles grew out of the workshop "External Memory Algorithms and Visualization", held at DIMACS. Leading researchers were invited to give lectures and to contribute their work. Topics presented include problems in computational geometry, graph theory, data compression, disk scheduling, linear algebra, statistics, software libraries, text and string processing, visualization, wavelets, and industrial applications.
The vitality of the research and the interdisciplinary nature of the event produced fruitful ground for the compelling fusion of ideas and methods. This volume comprises the rich results that grew out of that process.
About the editors:
For several years, James Abello has faced the daily data avalanche that crosses the Infolab at AT&T Shannon Laboratories. He and his colleagues have helped formulate some of the major challenges in real Massive Data Exploration. His collaboration with Jeff Vitter has provided fertile ground where practice and theory interact successfully. Dr. Abello has a Ph.D. in Combinatorial Algorithms from the University of California, San Diego. He was the recipient of a University of California President's Postdoctoral Fellowship in Computer Science. His publications include articles in combinatorics, graph theory, discrete and computational geometry, algorithm visualization systems, distributed computing, and external memory algorithms. Dr. Abello has been with AT&T Research since 1995 and is currently a member of the Information Visualization Department at Shannon Laboratories, Florham Park, New Jersey.
Jeff Vitter is the co-creator of the widely used parallel I/O model and is a leader in the field of external memory algorithms and data structures. He is the Gilbert, Louis, and Edward Lehrman Professor of Computer Science and Chair of the Department of Computer Science at Duke University and Co-Director of the Center for Geometric Computing at Duke. He was previously on the faculty at Brown University. He has been named a Guggenheim Foundation Fellow, a Fellow of the Association for Computing Machinery, a Fellow of the Institute of Electrical and Electronics Engineers, a National Science Foundation Presidential Young Investigator, and a Fulbright Scholar. Professor Vitter works on efficient external memory algorithms in several domains, dealing with geographic information systems, sorting, FFT, matrix computations, graph traversal, range searching, data mining, and a variety of computational geometry and combinatorial problems. A related interest is how to take advantage of parallel disks and parallel hierarchical memories. He is currently doing algorithm engineering using the TPIE system. Other work includes prediction mechanisms for use by systems to improve locality for caching, prefetching, database query optimization, data mining, and resource management in mobile computers. He also works on image, video, and text compression, computational geometry, graphics, random sampling, and random variate generation.
Readership
Graduate students, research and applied mathematicians interested in computer science.

Table of Contents

Chapters

External memory algorithms and data structures

Synopsis data structures for massive data sets

Calculating robust depth measures for large data sets

Efficient cross-trees for external memory

Computing on data streams

On maximum clique problems in very large graphs

I/O-optimal computation of segment intersections

On showing lower bounds for external-memory computational geometry problems

A survey of out-of-core algorithms in numerical linear algebra

Concrete software libraries

S(b)-tree library: An efficient way of indexing data

ASP: Adaptive online parallel disk scheduling

Efficient schemes for distributing data on parallel memory systems

External memory techniques for isosurface extraction in scientific visualization

R-tree retrieval of unstructured volume data for visualization

