The hybrid/heterogeneous nature of future microprocessors and large high-performance computing systems will result in a reliance on two major types of components: multicore/manycore central processing units and special purpose hardware/massively parallel accelerators. While these technologies have numerous benefits, they also pose substantial...
Programming Graphical User Interfaces with R introduces each of the major R packages for GUI programming: RGtk2, qtbase, Tcl/Tk, and gWidgets. With examples woven through the text as well as stand-alone demonstrations of simple yet reasonably complete applications, the book features topics especially relevant to statisticians...
This book is intended to have three roles and to serve three associated audiences: an
introductory text on Bayesian inference starting from first principles, a graduate text on
effective current approaches to Bayesian modeling and computation in statistics and
related fields, and a handbook of Bayesian methods in applied...
Books on regression and the analysis of variance abound; many are introductory, many are theoretical. While most of them do serve a purpose, the fact remains that data analysis cannot be properly learned without actually doing it, and this means using a statistical software package. There are many of these to choose from as well, all with...
The future of high-performance computing (HPC) lies with large distributed parallel systems with three levels of parallelism: thousands of nodes containing MIMD (multiple instruction, multiple data) groups of SIMD (single instruction, multiple data) processors. For the past 50 years, the clock cycle of single processors has decreased steadily and the cost of each processor has decreased. Most applications would...
The first text of its kind, Stephen Chapman's best-selling book on MATLAB has now been updated to reflect MATLAB 6.0. The first edition has been highly successful in engineering schools where introductory programming is taught using MATLAB rather than a traditional programming language. Although C, C++, and Java suit the needs of computer science...
Data clustering is a highly interdisciplinary field whose goal is to divide a
set of objects into homogeneous groups such that objects in the same group
are similar and objects in different groups are quite distinct. Thousands of
papers and a number of books on data clustering have been published over
the past 50 years....
After over fifteen years of research and trial and error,
micromap designs have evolved to the point where they
are slowly finding their way into mainstream statistical
visualizations. Now seems to be a good time to pull
all of the work together into a book in order to introduce
micromaps to a wide range of people interested in...
In the last three decades, machine learning research and practice have
focused on batch learning, usually using small datasets. In batch
learning, the whole training data is available to the algorithm, which
outputs a decision model after processing the data, possibly (and often)
multiple times. The rationale behind this...
Create Fantastic Web Sites with Advice From Some of the Best in the Industry
Professional Web Design presents guidelines for professional Web development, including communicating with clients, creating a road map to a successful portfolio, rules for professional networking, and tips on designing user interfaces for business...
Exploratory data analysis (EDA) was conceived at a time when computers were not widely used, and thus computational ability was rather limited. As computational sophistication has increased, EDA has become an even more powerful process for visualizing and summarizing data before making model assumptions to generate hypotheses, encompassing...
From prehistoric times, mankind has looked up at the night sky, and puzzled at the changing positions of the stars. How far away they are is a question that has confounded scientists for centuries. Over the last few hundred years, many scientific careers – and considerable resources – have been devoted to measuring their positions...